In contrast to traditional offline adaptation methods for statistical language models, this paper proposes an online, real-time, incremental adaptation method. Such a method must solve several problems: first, designing a language model structure suited to online adaptation; second, using the corpus collected online to modify the model parameters in real time. In our Chinese Pinyin-to-Hanzi conversion platform, the language model is divided into two parts: a general (background) model and a user model. For the background model, an efficient storage structure combined with parameter prefetching improves access speed; for the user model, a dynamic weighting method combined with MAP estimation adjusts the parameters on the fly. Our experiments show that this method substantially reduces the character error rate of Chinese Pinyin-to-Hanzi conversion.
Abstract
In this paper, an online incremental language model adaptation method is proposed, in contrast to traditional offline language model adaptation methods. Online incremental adaptation raises several problems: the first is how to design a flexible framework for online adaptation; the second is how to adjust the model parameters incrementally according to the corpus collected online. In our application platform, the whole model is divided into two parts: a background model and a user model. For the background model, an efficient storage structure, integrated with a parameter look-ahead technique, accelerates access; for the user model, a dynamic-weighting MAP method is proposed to adjust the parameters. Experiments show that the method achieves a considerable reduction in the Chinese character error rate of Pinyin-to-Hanzi conversion.
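The two-part design described above can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's implementation: it assumes the standard MAP (Dirichlet-prior) n-gram adaptation formula P(w|h) = (n_user(h,w) + τ·P_bg(w|h)) / (n_user(h) + τ), where the background model supplies the prior and user counts collected online shift the estimate incrementally. The class name, `tau` parameter, and method names are invented for illustration.

```python
from collections import defaultdict

class MapAdaptedBigramLM:
    """Hypothetical sketch: a fixed background bigram model adapted
    online with user counts via MAP (Dirichlet-prior) smoothing."""

    def __init__(self, background, tau=10.0):
        self.background = background          # dict: (history, word) -> P_bg(word|history)
        self.tau = tau                        # prior weight (pseudo-count mass of the background)
        self.user_bigram = defaultdict(int)   # n_user(history, word)
        self.user_context = defaultdict(int)  # n_user(history)

    def observe(self, history, word):
        # Incremental update from text collected online (e.g. user corrections).
        self.user_bigram[(history, word)] += 1
        self.user_context[history] += 1

    def prob(self, history, word):
        # P(w|h) = (n_user(h,w) + tau * P_bg(w|h)) / (n_user(h) + tau)
        p_bg = self.background.get((history, word), 1e-6)  # floor for unseen pairs
        n_hw = self.user_bigram[(history, word)]
        n_h = self.user_context[history]
        return (n_hw + self.tau * p_bg) / (n_h + self.tau)
```

With no user observations the estimate reduces to the background probability; each observed bigram then pulls the estimate toward the user's usage, with `tau` controlling how quickly the user model dominates.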
Keywords
statistical language model /
N-gram /
adaptation /
speech recognition