Leveraging Deep Neural Network Activation Entropy to cope with Unseen Data in Speech Recognition

by Vikramjit Mitra et al.

Unseen data conditions can inflict serious performance degradation on systems that rely on supervised machine learning. Because test data can differ from the training conditions, and because traditional machine learning models are trained in a supervised manner, unsupervised adaptation techniques are needed to adapt a model to unseen conditions. Unsupervised adaptation is challenging, however: one must generate hypotheses with the existing model and then use those hypotheses to bootstrap the model toward the unseen conditions. Unfortunately, the reliability of such hypotheses is often poor, given the mismatch between the training and test datasets. In such cases, a confidence measure on the model's hypotheses enables data selection for model adaptation. Underlying this approach is the observation that unseen data conditions introduce variability that the model propagates to its output decisions, degrading their reliability. In a fully connected network, this variability propagates as distortion from one layer to the next. This work aims to estimate that propagated distortion in the form of network activation entropy, which is measured over a short-time running window on the activations of each neuron in a given hidden layer; these measurements are then combined into a summary entropy. This work demonstrates that such an entropy measure can help select data for unsupervised model adaptation, yielding performance gains on speech recognition tasks. Results on standard benchmark speech recognition tasks show that the proposed approach can alleviate the performance degradation experienced under unseen data conditions by iteratively adapting the model to the unseen data's acoustic conditions.
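The entropy computation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the window length, bin count, and the use of a mean across neurons as the summary statistic are all assumptions, since the abstract does not specify them.

```python
import numpy as np

def activation_entropy(activations, window=50, bins=20):
    """Estimate per-neuron entropy of hidden-layer activations over a
    short-time running window, then summarize across neurons.
    `activations` is a (frames x neurons) array from one hidden layer.
    Window size, bin count, and mean-summary are illustrative choices."""
    T, N = activations.shape
    summaries = []
    for start in range(0, T - window + 1, window):
        chunk = activations[start:start + window]          # (window, N)
        per_neuron = []
        for n in range(N):
            # Histogram each neuron's activations within the window
            hist, _ = np.histogram(chunk[:, n], bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]                                   # drop empty bins
            per_neuron.append(-(p * np.log2(p)).sum())     # Shannon entropy
        summaries.append(np.mean(per_neuron))              # summary entropy
    return np.array(summaries)

# Mock activations: 200 frames, 32 neurons in one hidden layer.
rng = np.random.default_rng(0)
acts = rng.normal(size=(200, 32))
ent = activation_entropy(acts)   # one summary entropy per window
```

In a data-selection loop, utterances whose summary entropy indicates unreliable hypotheses could be excluded from (or down-weighted in) the unsupervised adaptation pass.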


Interpreting DNN output layer activations: A strategy to cope with unseen data in speech recognition

Unseen data can degrade performance of deep neural net acoustic models. ...

Learning to adapt: a meta-learning approach for speaker adaptation

The performance of automatic speech recognition systems can be improved ...

Unsupervised Domain Adaptation by Adversarial Learning for Robust Speech Recognition

In this paper, we investigate the use of adversarial learning for unsupe...

Senone-aware Adversarial Multi-task Training for Unsupervised Child to Adult Speech Adaptation

Acoustic modeling for child speech is challenging due to the high acoust...

Lattice-Based Unsupervised Test-Time Adaptation of Neural Network Acoustic Models

Acoustic model adaptation to unseen test recordings aims to reduce the m...

Reinforcement Learning of Speech Recognition System Based on Policy Gradient and Hypothesis Selection

Speech recognition systems have achieved high recognition performance fo...

Investigation and Analysis of Hyper and Hypo neuron pruning to selectively update neurons during Unsupervised Adaptation

Unseen or out-of-domain data can seriously degrade the performance of a ...
