Abstract
Phrase-based language models have grown in popularity because they allow the speech recognition process to exploit more context when recognizing words. Previous approaches have used perplexity reduction to identify groups of words to be linked into phrases and have then used these phrases as the basis for computing the language-model probabilities. In this paper, we argue that perplexity reduction is only one of three aspects to consider when choosing phrases. We further argue that the chosen phrases should not be the basis for computing the language-model probabilities; rather, the probabilities should be derived from a language model built at the lexical level.
Original language | English (US) |
---|---|
Pages | 41-48 |
Number of pages | 8 |
State | Published - 1997 |
Event | Proceedings of the 1997 IEEE Workshop on Automatic Speech Recognition and Understanding - Santa Barbara, CA, USA |
Duration | Dec 14 1997 → Dec 17 1997 |
ASJC Scopus subject areas
- Computer Vision and Pattern Recognition