Abstract for rosti_thesis

PhD thesis, University of Cambridge, 2004

LINEAR GAUSSIAN MODELS FOR SPEECH RECOGNITION

A-V.I. Rosti

May, 2004

Currently the most popular acoustic model for speech recognition is the hidden Markov model (HMM). However, HMMs are based on a series of assumptions, some of which are known to be poor. In particular, successive speech frames are assumed to be conditionally independent given the discrete state that generated them, which is a poor model of speech. State space models may be used to address some shortcomings of this assumption. State space models are based on a continuous state vector evolving through time according to a state evolution process. The observations are then generated by an observation process, which maps the current continuous state vector onto the observation space. In this work, the state evolution and observation processes are assumed to be linear, and the noise sources are distributed according to Gaussians or Gaussian mixture models. Two forms of state evolution process are considered.
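
As a concrete illustration (not taken from the thesis), the sketch below samples from a generic linear state space model with Gaussian noise sources: the continuous state evolves linearly and the observation process projects it into the observation space. All dimensions, matrices and variable names are assumptions chosen for the example.

    # Minimal sketch of a linear state space model with Gaussian noise.
    import numpy as np

    rng = np.random.default_rng(0)

    k, p, T = 3, 13, 100             # state dim, observation dim, frames (illustrative)
    A = 0.9 * np.eye(k)              # state evolution matrix (assumed)
    C = rng.standard_normal((p, k))  # observation (loading) matrix (assumed)
    Q = 0.1 * np.eye(k)              # state evolution noise covariance
    R = 0.5 * np.eye(p)              # observation noise covariance

    x = rng.multivariate_normal(np.zeros(k), np.eye(k))  # initial continuous state
    observations = []
    for t in range(T):
        x = A @ x + rng.multivariate_normal(np.zeros(k), Q)  # state evolution process
        o = C @ x + rng.multivariate_normal(np.zeros(p), R)  # observation process
        observations.append(o)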

First, the state evolution process is assumed to be piece-wise constant. All variation of the state vector about these constant values is modelled as noise. Using this approximation, a new acoustic model called the factor analysed HMM (FAHMM) is presented. In the FAHMM, a discrete Markov random variable chooses both the continuous state and the observation process parameters. The FAHMM generalises a number of standard covariance models such as the independent factor analysis, shared factor analysis and semi-tied covariance matrix HMMs. Efficient training and recognition algorithms for FAHMMs are presented, along with speech recognition results using various configurations.
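
For illustration only, the following hedged sketch of a FAHMM-style generative process shows the discrete Markov state selecting both the continuous state distribution and the observation process parameters, with the continuous state drawn independently each frame (piece-wise constant evolution, variation treated as noise). The names, dimensions and values are assumptions, not the thesis parameterisation.

    # Illustrative FAHMM-style generative sketch (assumed notation).
    import numpy as np

    rng = np.random.default_rng(1)
    k, p, S, T = 3, 13, 2, 50        # state dim, obs dim, discrete states, frames

    trans = np.array([[0.9, 0.1], [0.1, 0.9]])            # HMM transition matrix
    mu_x  = rng.standard_normal((S, k))                   # per-state continuous state means
    C     = rng.standard_normal((S, p, k))                # per-state loading matrices
    R     = np.stack([0.5 * np.eye(p) for _ in range(S)]) # per-state observation noise

    q = 0
    obs = []
    for t in range(T):
        q = rng.choice(S, p=trans[q])                     # discrete Markov state
        x = rng.multivariate_normal(mu_x[q], np.eye(k))   # continuous state, independent per frame
        o = rng.multivariate_normal(C[q] @ x, R[q])       # factor-analysis observation process
        obs.append(o)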

Second, the state evolution process is assumed to be a linear first-order Gauss-Markov random process. With Gaussian distributed noise sources and a factor analysis observation process, this model corresponds to a linear dynamical system (LDS). For acoustic modelling, a discrete Markov random variable is required to choose the LDS parameters. This hybrid model is called the switching linear dynamical system (SLDS). The SLDS is related to the stochastic segment model, which assumes that the segments are independent. In contrast, in the SLDS the continuous state vector is propagated across segment boundaries, thus providing a better model of co-articulation. Unfortunately, exact inference for the SLDS is intractable due to the exponential growth in the number of posterior components over time. In this work, approximate methods based on both deterministic and stochastic algorithms are described. An efficient proposal mechanism for Gibbs sampling is introduced, along with its application to parameter optimisation and N-best rescoring. The results of medium vocabulary speech recognition experiments are presented.
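
The sketch below illustrates, under assumed parameter names and values, an SLDS-style generative process: the discrete Markov state selects the LDS parameters while the continuous state vector is carried across segment boundaries, in contrast to a stochastic segment model, which would reset it. It is not the thesis implementation and does not address the (intractable) inference problem.

    # Hedged SLDS-style generative sketch (assumed parameterisation).
    import numpy as np

    rng = np.random.default_rng(2)
    k, p, S, T = 3, 13, 2, 50

    trans = np.array([[0.95, 0.05], [0.05, 0.95]])        # switch transition matrix
    A = np.stack([0.9 * np.eye(k), 0.7 * np.eye(k)])      # per-state evolution matrices
    C = rng.standard_normal((S, p, k))                    # per-state loading matrices
    Q = np.stack([0.1 * np.eye(k) for _ in range(S)])     # evolution noise covariances
    R = np.stack([0.5 * np.eye(p) for _ in range(S)])     # observation noise covariances

    q = 0
    x = rng.multivariate_normal(np.zeros(k), np.eye(k))
    obs = []
    for t in range(T):
        q = rng.choice(S, p=trans[q])                              # switch: choose LDS parameters
        x = A[q] @ x + rng.multivariate_normal(np.zeros(k), Q[q])  # state carried over boundaries
        o = C[q] @ x + rng.multivariate_normal(np.zeros(p), R[q])  # observation process
        obs.append(o)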

Keywords: Speech recognition, acoustic modelling, hidden Markov models, state space models, linear dynamical systems, expectation maximisation, Markov chain Monte Carlo methods


| (ftp:) rosti_thesis.pdf | (http:) rosti_thesis.pdf | (ftp:) rosti_thesis.ps.gz | (http:) rosti_thesis.ps.gz |
