Segmentation and Alignment of Parallel Text for Statistical Machine Translation

Download: PDF.

“Segmentation and Alignment of Parallel Text for Statistical Machine Translation” by Y. Deng, S. Kumar, and W. Byrne. Journal of Natural Language Engineering, vol. 13, no. 3, 2006, pp. 235–260.


We address the problem of extracting bilingual chunk pairs from parallel text to create training sets for statistical machine translation. We formulate the problem in terms of a stochastic generative process over text translation pairs, and derive two different alignment procedures based on the underlying alignment model. The first procedure is a now-standard dynamic programming alignment model, which we use to generate an initial coarse alignment of the parallel text. The second procedure is a divisive clustering parallel text alignment procedure, which we use to refine the first-pass alignments. This latter procedure is novel in that it permits the segmentation of the parallel text into sub-sentence units which are allowed to be reordered to improve the chunk alignment. The quality of chunk pairs is measured by the performance of machine translation systems trained from them. We show practical benefits of divisive clustering, as well as how system performance can be improved by exploiting portions of the parallel text that otherwise would have to be discarded. We also show that chunk alignment as a first step in word alignment can significantly reduce word alignment error rate.
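The first-pass procedure described in the abstract can be illustrated with a minimal dynamic-programming alignment sketch. This is not the paper's statistical model: the cost function here (absolute segment-length difference, with a fixed penalty for unaligned segments) and the `skip_cost` parameter are illustrative stand-ins, showing only the general shape of a 1-1/1-0/0-1 DP alignment.

```python
def dp_align(src_lens, tgt_lens, skip_cost=10):
    """Align two sequences of segment lengths with 1-1, 1-0, 0-1 moves.

    Returns the list of (src_index, tgt_index) pairs on the optimal
    path and the total alignment cost.
    """
    m, n = len(src_lens), len(tgt_lens)
    INF = float("inf")
    cost = [[INF] * (n + 1) for _ in range(m + 1)]
    back = [[None] * (n + 1) for _ in range(m + 1)]
    cost[0][0] = 0.0
    for i in range(m + 1):
        for j in range(n + 1):
            if i > 0 and j > 0:  # 1-1: pair source segment i with target j
                c = cost[i - 1][j - 1] + abs(src_lens[i - 1] - tgt_lens[j - 1])
                if c < cost[i][j]:
                    cost[i][j], back[i][j] = c, (i - 1, j - 1)
            if i > 0:            # 1-0: leave source segment i unaligned
                c = cost[i - 1][j] + skip_cost
                if c < cost[i][j]:
                    cost[i][j], back[i][j] = c, (i - 1, j)
            if j > 0:            # 0-1: leave target segment j unaligned
                c = cost[i][j - 1] + skip_cost
                if c < cost[i][j]:
                    cost[i][j], back[i][j] = c, (i, j - 1)
    # Trace back the optimal path, collecting 1-1 pairings.
    path, ij = [], (m, n)
    while ij != (0, 0):
        pi, pj = back[ij[0]][ij[1]]
        if ij == (pi + 1, pj + 1):
            path.append((pi, pj))
        ij = (pi, pj)
    return path[::-1], cost[m][n]
```

On nearly parallel inputs, e.g. `dp_align([10, 20, 30], [11, 19, 29])`, the optimal path pairs the segments monotonically. The paper's second, divisive-clustering procedure differs precisely in that it relaxes this monotonicity at the sub-sentence level, allowing chunks to be reordered.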


BibTeX entry:

@article{deng2006segmentation,
   author = {Y. Deng and S. Kumar and W. Byrne},
   title = {Segmentation and Alignment of Parallel Text for Statistical
	Machine Translation},
   journal = {Journal of Natural Language Engineering},
   volume = {13},
   number = {3},
   pages = {235--260},
   year = {2006}
}
