SLT 2018 Paper Detail

Presentation #20
Session: ASR IV
Session Time: Friday, December 21, 13:30 - 15:30
Presentation Time: Friday, December 21, 13:30 - 15:30
Presentation: Poster
Topic: Speech recognition and synthesis
Paper Title: FAR-FIELD ASR USING LOW-RANK AND SPARSE SOFT TARGETS FROM PARALLEL DATA
Authors: Pranay Dighe; Idiap Research Institute, EPFL 
 Afsaneh Asaei; Idiap Research Institute 
 Herve Bourlard; Idiap Research Institute, EPFL 
Abstract: Far-field automatic speech recognition (ASR) of conversational speech is often considered a very challenging task due to the poor quality of the alignments available for training DNN acoustic models. A common way to alleviate this problem is to use clean alignments obtained from close-talk speech recorded in parallel. In this work, we advance the parallel data approach by obtaining enhanced low-rank and sparse soft targets from a close-talk ASR system and using them to train more accurate far-field acoustic models. Specifically, we exploit eigenposteriors and Compressive Sensing dictionaries to learn low-dimensional senone subspaces in the DNN posterior space, and enhance close-talk DNN posteriors to obtain high-quality soft targets. The enhanced soft targets encode the structural and temporal inter-relationships among senone classes, which are easily accessible in the DNN posterior space of close-talk speech but not in its noisy far-field counterpart. We exploit the enhanced soft targets to improve the mapping of far-field acoustics to close-talk senone classes. Experiments on the AMI corpus show that our approach improves DNN acoustic modeling, giving a 4.4% absolute reduction in WER compared to a system that does not use parallel data. Finally, the approach is also validated on state-of-the-art recurrent and time-delay neural network architectures.
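
Illustrative sketch (not from the paper): the snippet below shows, under simplifying assumptions, the low-rank part of the posterior-enhancement idea only: close-talk DNN posteriors are projected onto per-senone principal subspaces (an "eigenposterior"-style low-rank reconstruction) and renormalized into soft targets. The array shapes, the use of scikit-learn PCA, and the omission of the sparse (Compressive Sensing dictionary) component are all assumptions, not the authors' implementation.

    # Minimal sketch of the low-rank enhancement step, assuming close-talk DNN
    # posteriors and frame-level senone alignments are already available.
    # The sparse, dictionary-based part of the method is omitted here.
    import numpy as np
    from sklearn.decomposition import PCA

    def enhance_posteriors(posteriors, alignments, n_components=10):
        """posteriors: (T, K) close-talk DNN senone posteriors (rows sum to 1).
        alignments: (T,) hard senone labels from forced alignment.
        Returns (T, K) low-rank enhanced soft targets."""
        T, K = posteriors.shape
        enhanced = np.empty_like(posteriors)
        for senone in np.unique(alignments):
            idx = np.where(alignments == senone)[0]
            if len(idx) < 2:
                # Too few frames to estimate a subspace; keep posteriors as-is.
                enhanced[idx] = posteriors[idx]
                continue
            # Per-senone principal subspace ("eigenposteriors" in the paper).
            ncomp = min(n_components, len(idx), K)
            pca = PCA(n_components=ncomp).fit(posteriors[idx])
            # Low-rank reconstruction keeps the dominant senone structure and
            # suppresses frame-level noise in the posteriors.
            enhanced[idx] = pca.inverse_transform(pca.transform(posteriors[idx]))
        # Clip and renormalize so each frame is again a valid distribution.
        enhanced = np.clip(enhanced, 1e-8, None)
        return enhanced / enhanced.sum(axis=1, keepdims=True)

    # Toy usage with random data (K = 50 senones, T = 1000 frames).
    rng = np.random.default_rng(0)
    post = rng.dirichlet(np.ones(50), size=1000)
    ali = rng.integers(0, 50, size=1000)
    soft_targets = enhance_posteriors(post, ali)

In the paper's setup, such enhanced posteriors would then serve as soft targets (e.g., under a cross-entropy objective) for training the far-field DNN on the parallel far-field recordings.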