
Orthogonal Mixture of Hidden Markov Models

Publication Year :
2021

Abstract

Mixtures of Hidden Markov Models (MHMMs) are widely used for clustering sequential data, letting each cluster correspond to a Hidden Markov Model (HMM). Expectation Maximization (EM) is the standard approach for learning the parameters of an MHMM. However, due to the non-convexity of the objective function, EM can converge to poor local optima. To tackle this problem, we propose a novel method, the Orthogonal Mixture of Hidden Markov Models (oMHMM), which aims to direct the search away from candidate solutions containing very similar HMMs, since those do not fully exploit the power of the mixture model. The directed search is achieved by adding a penalty to the objective function that favors higher orthogonality between the transition matrices of the HMMs. Experimental results on both simulated and real-world datasets show that the oMHMM consistently finds local optima at least as good as those found by standard EM for an MHMM; for some datasets, the clustering performance is significantly improved by the oMHMM (up to 55 percentage points w.r.t. the v-measure). Moreover, the oMHMM may also decrease the computational cost substantially, reducing the number of iterations down to a fifth of those required by an MHMM using standard EM.

QC 20211203

Conference ISBN : 978-3-030-67658-2; 978-3-030-67657-5
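To make the penalized objective described in the abstract concrete, the following is a minimal sketch of what an orthogonality penalty between HMM transition matrices could look like, assuming pairwise similarity is measured with the Frobenius inner product; the function name orthogonality_penalty, the weight lambda_pen, and the exact form of the penalized objective are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def orthogonality_penalty(transition_matrices):
    """Sum of pairwise Frobenius inner products between the
    clusters' transition matrices; a smaller value means the
    HMMs' transition structures are closer to mutually
    orthogonal (i.e., more dissimilar)."""
    mats = list(transition_matrices)
    penalty = 0.0
    for i in range(len(mats)):
        for j in range(i + 1, len(mats)):
            # Frobenius inner product <A, B>_F = trace(A^T B)
            penalty += np.trace(mats[i].T @ mats[j])
    return penalty

# Two row-stochastic transition matrices for a 2-cluster mixture
A1 = np.array([[0.9, 0.1],
               [0.2, 0.8]])
A2 = np.array([[0.1, 0.9],
               [0.8, 0.2]])
print(orthogonality_penalty([A1, A2]))  # low value: dissimilar HMMs

# A penalized M-step objective might then take the (hypothetical) form
#   Q_penalized = Q_EM - lambda_pen * orthogonality_penalty(mats)
# for some tuning weight lambda_pen > 0, steering EM away from
# solutions whose component HMMs have near-identical dynamics.
```

Maximizing such a penalized objective would favor parameter updates in which the clusters' transition matrices differ, which is the intuition the abstract gives for avoiding degenerate mixtures of near-duplicate HMMs.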

Details

Database :
OAIster
Authors :
Safinianaini, Negar, de Souza, Camila P. E., Boström, Henrik, Lagergren, Jens
Publication Type :
Electronic Resource
Accession number :
edsoai.on1312826711
Document Type :
Electronic Resource
Full Text :
https://doi.org/10.1007/978-3-030-67658-2_29