Noisy hidden Markov models for speech recognition.
- Source :
- 2013 International Joint Conference on Neural Networks (IJCNN); 2013, p1-6, 6p
- Publication Year :
- 2013
Abstract
- We show that noise can speed training in hidden Markov models (HMMs). The new Noisy Expectation-Maximization (NEM) algorithm shows how to inject noise when learning the maximum-likelihood estimate of the HMM parameters because the underlying Baum-Welch training algorithm is a special case of the Expectation-Maximization (EM) algorithm. The NEM theorem gives a sufficient condition for such an average noise boost. The condition is a simple quadratic constraint on the noise when the HMM uses a Gaussian mixture model at each state. Simulations show that a noisy HMM converges faster than a noiseless HMM on the TIMIT data set.
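- The abstract's "simple quadratic constraint" on the injected noise can be sketched in code. The following is a minimal, hedged illustration (not the paper's implementation): for a Gaussian component with mean `mu`, one elementwise sufficient reading of the NEM quadratic condition is `n * n <= 2 * n * (mu - x)`, i.e. adding noise `n` must move the observation `x` no farther from the component mean. The function name `nem_noise_sample` and the rejection-sampling strategy are illustrative assumptions, not from the source.

```python
import numpy as np

def nem_noise_sample(x, mu, sigma, scale, rng, max_tries=100):
    """Draw a noise vector satisfying an elementwise quadratic
    NEM-style condition n^2 <= 2*n*(mu - x) via rejection sampling.

    This is a hedged sketch: the condition is one sufficient reading
    of the Gaussian-case constraint described in the abstract, and
    the rejection-sampling approach is an illustrative assumption.
    Falls back to zero noise (i.e., plain EM) if no sample qualifies.
    """
    for _ in range(max_tries):
        n = rng.normal(0.0, scale * sigma, size=x.shape)
        # Noise passes if it does not push x away from the mean mu.
        if np.all(n * n <= 2.0 * n * (mu - x)):
            return n
    return np.zeros_like(x)  # zero noise trivially satisfies the condition

# Illustrative use: perturb an observation before the E-step,
# shrinking the noise scale over iterations (annealing is a common
# choice in noise-injection schemes, assumed here for illustration).
rng = np.random.default_rng(0)
x = np.array([0.0, 0.5])      # observation
mu = np.array([1.0, 1.0])     # Gaussian component mean
for t in range(1, 4):
    n = nem_noise_sample(x, mu, sigma=1.0, scale=0.5 / t, rng=rng)
    x_noisy = x + n           # noisy observation fed to Baum-Welch E-step
```

Because zero noise always satisfies the constraint, the scheme degrades gracefully to ordinary Baum-Welch when no beneficial noise sample is found.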
Details
- Language :
- English
- ISBNs :
- 9781467361293
- Database :
- Complementary Index
- Journal :
- 2013 International Joint Conference on Neural Networks (IJCNN)
- Publication Type :
- Conference
- Accession number :
- 94558363
- Full Text :
- https://doi.org/10.1109/IJCNN.2013.6707088