
Paper Detail

Presentation #4
Session: Natural Language Processing
Session Time: Thursday, December 20, 13:30 - 15:30
Presentation Time: Thursday, December 20, 13:30 - 15:30
Presentation: Poster
Topic: Natural language processing
Paper Title: INFORMATION-WEIGHTED NEURAL CACHE LANGUAGE MODELS FOR ASR
Authors: Lyan Verwimp, KU Leuven; Joris Pelemans, Apple; Hugo Van hamme, KU Leuven; Patrick Wambacq, KU Leuven
Abstract: Neural cache language models (LMs) extend the idea of regular cache language models by making the cache probability dependent on the similarity between the current context and the context of the words in the cache. We make an extensive comparison of ‘regular’ cache models with neural cache models, both in terms of perplexity and WER after rescoring first-pass ASR results. Furthermore, we propose two extensions to this neural cache model that make use of the content value/information weight of the word: firstly, combining the cache probability and LM probability with an information-weighted interpolation and secondly, selectively adding only content words to the cache. We obtain a 29.9%/32.1% (validation/test set) relative improvement in perplexity with respect to a baseline LSTM LM on the WikiText-2 dataset, outperforming previous work on neural cache LMs. Additionally, we observe significant WER reductions with respect to the baseline model on the WSJ ASR task.
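The abstract names three components without giving their formulas: the neural cache itself, an information-weighted interpolation of cache and LM probabilities, and a selective cache restricted to content words. Below is a minimal NumPy sketch of how these pieces could fit together, using the standard neural cache formulation of Grave et al. (2017), which this line of work builds on. The information weight (an IDF-like score per word) and the content-word filter (a stopword set) are placeholder assumptions, not the paper's actual definitions; all names, parameters, and the toy data are illustrative.

import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE = 50   # toy vocabulary
HIDDEN_DIM = 16   # toy LSTM hidden size

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class NeuralCache:
    """Neural cache in the style of Grave et al. (2017): stores
    (hidden state, word) pairs and scores cached words by the
    similarity of their stored context to the current context."""

    def __init__(self, max_size=100):
        self.states, self.words = [], []
        self.max_size = max_size

    def add(self, h_t, w_t, content_words=None):
        # Selective cache (second extension in the abstract): when a
        # content-word filter is supplied, skip function words.  A
        # stopword set is a stand-in; the paper's criterion may differ.
        if content_words is not None and w_t not in content_words:
            return
        self.states.append(h_t)
        self.words.append(w_t)
        if len(self.words) > self.max_size:
            self.states.pop(0)
            self.words.pop(0)

    def probs(self, h_t, theta=0.3):
        # p_cache(w | h_t) is proportional to
        # sum_i 1[w_i == w] * exp(theta * h_t . h_i)
        p = np.zeros(VOCAB_SIZE)
        if self.words:
            sims = softmax(theta * (np.stack(self.states) @ h_t))
            np.add.at(p, self.words, sims)
        return p

def interpolate(p_lm, p_cache, lam=0.2, info_weights=None):
    """Combine LM and cache distributions.  With info_weights=None this
    is the standard fixed interpolation; otherwise lam is scaled per
    word by an information weight (here an IDF-like score) -- one
    plausible reading of the paper's information-weighted interpolation,
    not its published formula."""
    if info_weights is None:
        return (1.0 - lam) * p_lm + lam * p_cache
    lam_w = np.clip(lam * info_weights, 0.0, 1.0)
    p = (1.0 - lam_w) * p_lm + lam_w * p_cache
    return p / p.sum()  # renormalise after per-word weighting

# Toy usage: random hidden states stand in for LSTM outputs, and a
# random base distribution stands in for the LSTM LM softmax.
cache = NeuralCache(max_size=10)
content_words = set(range(5, VOCAB_SIZE))         # pretend ids 0..4 are stopwords
info_weights = rng.uniform(0.5, 2.0, VOCAB_SIZE)  # stand-in IDF scores

for t in range(20):
    h_t = rng.normal(size=HIDDEN_DIM)
    p_lm = softmax(rng.normal(size=VOCAB_SIZE))
    p = interpolate(p_lm, cache.probs(h_t), lam=0.2, info_weights=info_weights)
    w_t = int(np.argmax(p))                       # greedy choice for the demo
    cache.add(h_t, w_t, content_words=content_words)

print("final distribution sums to", round(float(p.sum()), 6))

The point of the per-word weighting is visible in interpolate(): a fixed lam trusts the cache equally for every word, whereas scaling lam by an information weight lets high-content words (which are more likely to repeat meaningfully within a document) draw more probability mass from the cache than function words.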