Convergence of DLLR rapid speaker adaptation algorithms

Download: PDF.

“Convergence of DLLR rapid speaker adaptation algorithms” by A. Gunawardana and W. Byrne. In ISCA ITR-Workshop on Adaptation Methods for Automatic Speech Recognition, 2001.


Discounted Likelihood Linear Regression (DLLR) is a speaker adaptation technique for cases where there is insufficient data for MLLR adaptation. Here, we provide an alternative derivation of DLLR using a censored-EM formulation that postulates additional, hidden adaptation data. This derivation shows that DLLR, if allowed to converge, yields maximum likelihood solutions. The robustness of DLLR to small amounts of data is therefore obtained by slowing the convergence of the algorithm and terminating it before overtraining occurs. We then show that discounting the observed adaptation data by postulating additional hidden data can also be extended to MAP estimation of MLLR-type adaptation transformations.
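To illustrate the core idea (this is a hedged toy sketch, not the paper's DLLR algorithm): suppose we estimate a scalar mean by an EM-style iteration in which, alongside the n observed points, we postulate tau hidden observations whose statistics are synthesized from the current estimate. The fixed point of the iteration is the ordinary ML estimate, but a larger tau slows convergence geometrically, so stopping early keeps the estimate near its robust starting value. The function name and all parameter values below are illustrative assumptions.

```python
def discounted_updates(xbar, n, tau, m0, steps):
    """Run `steps` discounted EM-style updates of a scalar mean estimate.

    xbar  : sample mean of the n observed adaptation points
    n     : number of observed points
    tau   : number of postulated hidden observations (discount weight)
    m0    : initial (prior) estimate
    steps : number of update iterations
    """
    m = m0
    history = [m]
    for _ in range(steps):
        # The tau hidden observations are given mean m (the current
        # estimate), so each update only shrinks part way toward the
        # ML solution xbar instead of jumping there in one step.
        m = (n * xbar + tau * m) / (n + tau)
        history.append(m)
    return history

# A few iterations with a heavy discount (tau >> n): the estimate moves
# only part way from the prior m0 = 0 toward the ML value xbar = 2.0.
hist = discounted_updates(xbar=2.0, n=5, tau=20.0, m0=0.0, steps=3)
```

Each iteration contracts the gap to the ML value by a factor tau / (n + tau), so as steps grow the estimate converges to xbar, mirroring the paper's point that DLLR reaches the ML solution if run to convergence, while early termination provides robustness.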


BibTeX entry:

@inproceedings{gunawardana2001dllr,
   author = {A. Gunawardana and W. Byrne},
   title = {Convergence of {DLLR} rapid speaker adaptation algorithms},
   booktitle = {{ISCA ITR-Workshop} on Adaptation Methods for Automatic
	Speech Recognition},
   pages = {(4 pages)},
   year = {2001}
}
