
Paper Detail

Presentation #4
Session: ASR II
Location: Kallirhoe Hall
Session Time: Thursday, December 20, 13:30 - 15:30
Presentation Time: Thursday, December 20, 13:30 - 15:30
Presentation: Poster
Topic: Speech recognition and synthesis
Paper Title: MULTI-OBJECTIVE MULTI-TASK LEARNING ON RNNLM FOR SPEECH RECOGNITION
Authors: Minguang Song, Yunxin Zhao, University of Missouri, United States; Shaojun Wang, Ping An Technology, China
Abstract: The cross entropy (CE) loss function is commonly adopted for training neural network language models (NNLMs). Although this criterion has been largely successful, as evidenced by the rapid advance of NNLMs, minimizing CE only maximizes the likelihood of the training data; when training data are insufficient, the generalization power of the resulting LM on test data is limited. In this paper, we propose to integrate a pairwise ranking (PR) loss with the CE loss for multi-objective training of recurrent neural network language models (RNNLMs). The PR loss emphasizes discrimination between target and non-target words and also reserves probability for low-frequency correct words, which complements the distribution-learning role of the CE loss; combining the two losses may therefore improve RNNLM performance. In addition, we incorporate multi-task learning (MTL) into the proposed multi-objective learning to regularize the primary RNNLM task with an auxiliary part-of-speech (POS) tagging task. The proposed approach to RNNLM training has been evaluated on the WSJ and AMI speech recognition tasks, with encouraging word error rate reductions.
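
The abstract outlines the training objective in enough detail to sketch its general shape. Below is a minimal, hypothetical PyTorch sketch, not the authors' implementation: a shared LSTM feeds a word-prediction head and an auxiliary POS-tagging head, and the total loss combines CE with a hinge-style pairwise ranking term plus the auxiliary POS CE. The model sizes, margin, and the weights alpha and beta are illustrative assumptions, as is the exact form of the PR loss.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskRNNLM(nn.Module):
    """Shared LSTM with a primary word-prediction head and an auxiliary POS head."""
    def __init__(self, vocab_size, pos_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.word_head = nn.Linear(hid_dim, vocab_size)  # primary LM task
        self.pos_head = nn.Linear(hid_dim, pos_size)     # auxiliary POS-tagging task

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.word_head(h), self.pos_head(h)

def pairwise_ranking_loss(logits, targets, margin=1.0):
    # Hinge-style surrogate: push the target word's score above every
    # non-target word's score by at least `margin` (assumed form of the PR loss).
    tgt_score = logits.gather(-1, targets.unsqueeze(-1))          # (B, T, 1)
    violations = F.relu(margin - (tgt_score - logits))            # (B, T, V)
    # Exclude the target-vs-itself pair before averaging.
    tgt_mask = F.one_hot(targets, num_classes=logits.size(-1)).bool()
    return violations.masked_fill(tgt_mask, 0.0).mean()

def multi_objective_loss(word_logits, pos_logits, word_tgt, pos_tgt,
                         alpha=0.5, beta=0.2):
    # alpha weights the PR loss against CE; beta weights the auxiliary POS task.
    # Both weights are illustrative assumptions.
    ce = F.cross_entropy(word_logits.flatten(0, 1), word_tgt.flatten())
    pr = pairwise_ranking_loss(word_logits, word_tgt)
    pos = F.cross_entropy(pos_logits.flatten(0, 1), pos_tgt.flatten())
    return ce + alpha * pr + beta * pos

# Toy usage with random data, just to show the shapes involved.
model = MultiTaskRNNLM(vocab_size=1000, pos_size=45)
x = torch.randint(0, 1000, (4, 20))          # input word ids
word_tgt = torch.randint(0, 1000, (4, 20))   # next-word targets
pos_tgt = torch.randint(0, 45, (4, 20))      # POS tags of the target words
word_logits, pos_logits = model(x)
loss = multi_objective_loss(word_logits, pos_logits, word_tgt, pos_tgt)
loss.backward()

One design point worth noting: the POS head shares only the recurrent representation with the LM head, so the auxiliary task acts purely as a regularizer on the shared parameters and can be dropped at inference time.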