SLT 2018

Paper Detail

Presentation #11
Session: Speaker Recognition/Verification
Session Time: Thursday, December 20, 10:00 - 12:00
Presentation Time: Thursday, December 20, 10:00 - 12:00
Presentation: Poster
Topic: Speaker/language recognition
Paper Title: Short Utterance Speaker Recognition by Reservoir with Self-Organized Mapping
Authors: Narumitsu Ikeda, The University of Tokyo; Yoshinao Sato, Fairy Devices Inc.; Hirokazu Takahashi, The University of Tokyo
Abstract: Short utterances cause performance degradation in conventional speaker recognition systems based on i-vectors, which rely on the statistics of spectral features. To overcome this difficulty, we propose a novel method that utilizes the dynamics of the spectral features as well as their distribution. Our model integrates an echo state network (ESN), a type of reservoir computing architecture, with a self-organizing map (SOM), a competitive learning network. The ESN consists of a single-hidden-layer recurrent neural network with fixed random weights, which extracts temporal patterns from the spectral features. Whereas the input weights are fixed randomly in the original ESN, the input weights of our model are trained before enrollment with the unsupervised competitive learning algorithm of the SOM, so that they capture the intrinsic structure of the spectral features. During enrollment, the output weights are trained in a supervised manner to recognize an individual in a group of speakers. Our experiment demonstrates that the proposed method outperforms or is comparable to a baseline i-vector system for text-independent speaker identification on short utterances.