Title: Adaptation of Hybrid Deep Neural Network-hidden Markov Model Speech Recognition System using a Sub-space Approach
Committee:
Dr. Anderson, Advisor
Dr. Clements, Chair
Dr. Davenport
Abstract: The objective of the proposed research is to enhance the performance of an automatic speech recognition (ASR) system by adapting it to a particular speaker or group of speakers. In ASR, the training and testing data often do not follow the same statistics; this mismatch leads to a gap in performance. The difference between training and testing statistics can be reduced by speaker adaptation techniques, which use adaptation data from a target speaker to optimize system performance. In the past, ASR systems were based on Gaussian mixture model-hidden Markov models (GMM-HMM). A resurgence of neural networks has made hybrid deep neural network-hidden Markov models (DNN-HMM) popular for speech recognition. The adaptation techniques developed for GMM-HMM systems cannot be directly applied to DNN-HMM systems because GMMs are generative models whereas DNNs are discriminative models. In addition, DNN-HMM systems contain a large number of parameters and would require a large amount of data from the target speaker to adapt the ASR system; in most cases, only a limited amount of adaptation data is available. The proposed research suggests that the ASR system can instead be adapted by finding speakers in the training data who are similar to the target speaker and learning speaker similarity scores from a small amount of adaptation data. The novelty of this work is that, rather than modifying and retraining the DNN for speaker adaptation, which involves a large number of parameters and is computationally expensive, adaptation is performed based on the similarity between the target speaker and speakers in the training data.
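The core idea of scoring training speakers against a target speaker can be sketched in a few lines. The sketch below is purely illustrative and not the proposal's actual method: it assumes each speaker is summarized by a fixed-length embedding vector (e.g., something like an i-vector), uses cosine similarity as the comparison measure, and softmax-normalizes the scores so they could serve as weights over speaker-dependent parameters; all of these choices are assumptions for the example.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def similarity_scores(target_emb, training_embs):
    """Score each training speaker against the target speaker.

    `training_embs` maps speaker id -> embedding vector.  Raw cosine
    similarities are softmax-normalized so the resulting scores sum to
    one and could weight per-speaker model parameters.
    """
    raw = {spk: cosine_similarity(target_emb, emb)
           for spk, emb in training_embs.items()}
    # Softmax normalization (shifted by the max for numerical stability).
    mx = max(raw.values())
    exp_scores = {spk: math.exp(s - mx) for spk, s in raw.items()}
    z = sum(exp_scores.values())
    return {spk: e / z for spk, e in exp_scores.items()}

# Toy example: three hypothetical training-speaker embeddings and one
# target-speaker embedding estimated from a small amount of data.
train = {"spk_a": [1.0, 0.0, 0.2],
         "spk_b": [0.0, 1.0, 0.1],
         "spk_c": [0.7, 0.1, 0.3]}
target = [0.9, 0.05, 0.25]

scores = similarity_scores(target, train)
best = max(scores, key=scores.get)  # the most similar training speaker
```

In practice the embeddings and the similarity function would themselves be learned from the adaptation data; this toy version only shows why the approach is cheap, since no DNN parameters are retrained.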