Abstract
At present, speech database resources for Mongolian speech recognition are relatively scarce, while large amounts of Mongolian audio with corresponding text, such as TV dramas and broadcasts, already exist. To annotate these resources automatically, this paper presents a speech-recognition-based automatic speech-text alignment method for long Mongolian audio, which expands the Mongolian speech database. In the front-end processing stage, noise segments are filtered out by voice activity detection based on a Gaussian mixture model. In the speech recognition stage, a Mongolian acoustic model based on feedforward sequential memory networks (FSMN) is constructed. Finally, based on the vector space model, the hypothesis sequence obtained from speech recognition and the reference phone sequence are matched at the sentence level with the dynamic time warping (DTW) algorithm. Experiments show that, compared with alignment based on the Needleman-Wunsch algorithm, the proposed method improves alignment accuracy by 31.09%.
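The sentence-level matching step described above can be sketched as a plain dynamic time warping recurrence over two phone sequences. This is a minimal illustrative sketch, not the paper's implementation: the function name, the edit-style costs, and the toy phone sequences are all assumptions.

```python
def dtw_align(hyp, ref, sub_cost=1.0, gap_cost=1.0):
    """Return the DTW distance between a hypothesis phone sequence
    (from speech recognition) and a reference phone sequence."""
    n, m = len(hyp), len(ref)
    # dp[i][j] = minimal cost of aligning hyp[:i] with ref[:j]
    dp = [[float("inf")] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i == 0 and j == 0:
                continue
            # diagonal step: match (cost 0) or substitute (sub_cost)
            local = 0.0 if i and j and hyp[i - 1] == ref[j - 1] else sub_cost
            candidates = []
            if i and j:
                candidates.append(dp[i - 1][j - 1] + local)
            if i:  # skip a hypothesis phone
                candidates.append(dp[i - 1][j] + gap_cost)
            if j:  # skip a reference phone
                candidates.append(dp[i][j - 1] + gap_cost)
            dp[i][j] = min(candidates)
    return dp[n][m]

# Toy example: one substitution ("ng" vs "n") gives distance 1.0.
print(dtw_align(["m", "o", "ng", "g", "o", "l"],
                ["m", "o", "n", "g", "o", "l"]))  # 1.0
```

In the paper's pipeline the comparison is made per sentence, so the best-scoring reference sentence can be attached to each recognized audio segment; the cost values here are placeholders for whatever weighting the full system uses.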
Key words
Mongolian language /
voice activity detection /
speech-text alignment /
dynamic time warping algorithm
References
[1] Nesbeitt S L. Ethnologue: Languages of the world[J]. Electronic Resources Review, 1999, 3(11): 129-131.
[2] McAuliffe M, Socolof M, Mihuc S, et al. Montreal Forced Aligner: Trainable text-speech alignment using Kaldi[C]//Proceedings of Interspeech, 2017: 498-502.
[3] Forney G D, Jr. The Viterbi algorithm[J]. Proceedings of the IEEE, 1973, 61(3): 268-278.
[4] Panayotov V, Chen G, Povey D, et al. Librispeech: An ASR corpus based on public domain audio books[C]//Proceedings of the ICASSP, 2015: 5206-5210.
[5] Malfrère F, Dutoit T. High-quality speech synthesis for phonetic speech segmentation[C]//Proceedings of the 5th European Conference on Speech Communication and Technology (EUROSPEECH), 1997: 22-25.
[6] Stan A, Bell P, King S. A grapheme-based method for automatic alignment of speech and text data[C]//Proceedings of the Spoken Language Technology Workshop (SLT), 2012: 286-290.
[7] Han L, Wang B, Duan S, et al. Research progress of voice activity detection[J]. Application Research of Computers, 2010, 27(4): 1220-1226. (in Chinese)
[8] Li D, Li X, Liang S, et al. A signal acquisition method with sampling frequency below the Nyquist frequency: China, CN201310220522.8[P]. 2013-10-09. (in Chinese)
[9] Zhang S, Lei M, Yan Z, et al. Deep-FSMN for large vocabulary continuous speech recognition[C]//Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018: 5869-5873.
[10] Wang Y, Bao F, Zhang H, et al. Research on Mongolian speech recognition based on FSMN[C]//Proceedings of the NLPCC 2017: Natural Language Processing and Chinese Computing, 2017: 243-254.
[11] Chen L. Research on DTW algorithm improvement technology based on speech recognition system[J]. Microcomputer Information, 2006, 22(2): 267-269.
[12] McFee B, Raffel C, Liang D, et al. librosa: Audio and music signal analysis in Python[C]//Proceedings of the 14th Python in Science Conference, 2015: 18-24.
[13] Povey D, Ghoshal A, Boulianne G, et al. The Kaldi speech recognition toolkit[C]//Proceedings of the IEEE 2011 Workshop on Automatic Speech Recognition and Understanding, 2011: 1-4.
[14] Boersma P D. Praat: Doing phonetics by computer[J]. Ear & Hearing, 2011, 32(2): 266.
[15] Souza G, Neto N. An automatic phonetic aligner for Brazilian Portuguese with a Praat interface[C]//Proceedings of the PROPOR 2016: Computational Processing of the Portuguese Language, 2016: 374-384.
[16] Rose J, Eisenmenger F. A fast unbiased comparison of protein structures by means of the Needleman-Wunsch algorithm[J]. Journal of Molecular Evolution, 1991, 32(4): 340-354.
Funding
National Natural Science Foundation of China (61563040, 61773224); Natural Science Foundation of Inner Mongolia (2018MS06006, 2016ZD06)