Language Model Adaptation Using Word Vector Representations and Concept Information for Mandarin Large Vocabulary Continuous Speech Recognition

Date

2015

Abstract

Research on deep learning has surged in recent years. Alongside the rapid development of deep learning techniques, various distributed representation methods have been proposed that embed the words of a vocabulary as vectors in a lower-dimensional space. Such representations not only encode words compactly but also make it possible to discover the semantic relationship between any pair of words through similarity computations on the associated word vectors. With this background, this thesis explores a novel use of distributed representations of words, or more specifically word representations, for language modeling (LM) in speech recognition. First, word vectors are employed to represent the dynamically evolving search histories and the candidate upcoming words during the speech recognition process, so that the language model can be adapted on top of these vector representations and capture richer semantic information among words. Second, we extend the recently proposed concept language model (CLM) by conducting relevant training data selection at the sentence level instead of the document level; by eliminating redundant and irrelevant information, the concept classes estimated from the adaptation corpus become more representative and thus better support dynamic language model adaptation. Moreover, since the resulting concept classes need to be dynamically selected and linearly combined to form the CLM during the speech recognition process, we estimate the relatedness of each concept class to the test utterance with word representations derived from either the continuous bag-of-words model (CBOW) or the skip-gram model, in the expectation that the word representations capture the semantic relationships among the words within each concept class. Finally, we combine the above LM adaptation methods. Extensive large vocabulary continuous speech recognition (LVCSR) experiments carried out on the MATBN (Mandarin Across Taiwan Broadcast News) corpus of Public Television Service broadcast news demonstrate that the proposed LM adaptation methods outperform several state-of-the-art baselines.
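
To make the adaptation scheme described above concrete, the following minimal Python sketch illustrates the core idea: scoring concept classes against a recognition history via cosine similarity of word vectors and linearly combining per-class probabilities. All word vectors, class names, probabilities, and the softmax normalization are hypothetical placeholders for illustration, not the thesis's actual estimator; in the thesis the vectors would come from CBOW or skip-gram training on the adaptation corpus.

```python
import numpy as np

# Toy word vectors standing in for CBOW/Skip-gram embeddings.
# All names and values here are hypothetical placeholders.
word_vecs = {
    "election": np.array([0.9, 0.1, 0.2]),
    "vote":     np.array([0.8, 0.2, 0.1]),
    "typhoon":  np.array([0.1, 0.9, 0.3]),
    "rainfall": np.array([0.2, 0.8, 0.4]),
}

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Summarize each concept class by the centroid of its member word vectors.
concept_classes = {
    "politics": np.mean([word_vecs["election"], word_vecs["vote"]], axis=0),
    "weather":  np.mean([word_vecs["typhoon"], word_vecs["rainfall"]], axis=0),
}

# Represent the partial recognition history by averaging its word vectors,
# then score each concept class by cosine similarity to that history.
history = ["election", "vote"]
h_vec = np.mean([word_vecs[w] for w in history], axis=0)
scores = {c: cosine(h_vec, v) for c, v in concept_classes.items()}

# Turn relatedness scores into interpolation weights; softmax is one
# plausible normalization, not necessarily the one used in the thesis.
exp_s = {c: np.exp(s) for c, s in scores.items()}
z = sum(exp_s.values())
weights = {c: e / z for c, e in exp_s.items()}

# Hypothetical per-class probabilities P(w | history, class) for one
# candidate word; the adapted CLM probability is their weighted sum.
p_class = {"politics": 0.012, "weather": 0.001}
p_clm = sum(weights[c] * p_class[c] for c in weights)
print(weights, p_clm)
```

In this sketch the politics class receives the larger weight because the history words lie closer to its centroid, so the dynamically combined probability is dominated by the politics-class estimate, mirroring how the CLM is adapted toward the concepts most related to the utterance.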

Keywords

speech recognition, language modeling, deep learning, word representation, concept model
