Technical Program

Paper Detail

Presentation #11
Session: ASR II
Location: Kallirhoe Hall
Session Time: Thursday, December 20, 13:30 - 15:30
Presentation Time: Thursday, December 20, 13:30 - 15:30
Presentation: Poster
Topic: Speech recognition and synthesis
Paper Title: A teacher-student learning approach for unsupervised domain adaptation of sequence-trained ASR models
Authors: Vimal Manohar, Pegah Ghahremani, Daniel Povey, Sanjeev Khudanpur, Johns Hopkins University, United States
Abstract: Teacher-student (T-S) learning is a transfer learning approach in which a teacher network is used to teach a student network to make the same predictions as the teacher. Originally formulated for model compression, this approach has also been used for domain adaptation, and is particularly effective when parallel data is available in the source and target domains. The standard approach uses a frame-level objective of minimizing the KL divergence between the frame-level posteriors of the teacher and student networks. However, for sequence-trained models for speech recognition, it is more appropriate to train the student to mimic the sequence-level posterior of the teacher network. In this work, we compare this sequence-level KL divergence objective with another semi-supervised sequence-training method, namely lattice-free MMI, for unsupervised domain adaptation. We investigate the approaches in multiple scenarios, including adaptation from clean to noisy speech, bandwidth mismatch, and channel mismatch.
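The frame-level objective described in the abstract — minimizing the KL divergence between the teacher's and student's per-frame posteriors — can be sketched as below. This is a minimal NumPy illustration, not the paper's implementation; the function name, array shapes, and use of raw logits are all assumptions for the example.

```python
import numpy as np

def frame_level_ts_kl(teacher_logits, student_logits):
    """Average per-frame KL(p_teacher || p_student) over an utterance.

    Illustrative frame-level teacher-student objective: the student is
    trained to match the teacher's posterior at every frame.
    Both inputs have shape (num_frames, num_classes); names are
    hypothetical, not from the paper.
    """
    def log_softmax(x):
        # Numerically stable log-softmax over the class axis.
        x = x - x.max(axis=-1, keepdims=True)
        return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

    t_logp = log_softmax(teacher_logits)
    s_logp = log_softmax(student_logits)
    t_p = np.exp(t_logp)
    # KL divergence per frame, then averaged over frames.
    kl = (t_p * (t_logp - s_logp)).sum(axis=-1)
    return kl.mean()
```

The sequence-level variant the paper advocates replaces these per-frame distributions with posteriors over whole label sequences (e.g., computed from lattices), which this frame-level sketch does not capture.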