Presentation #: 5
Session: Deep Learning for Speech Synthesis
Session Time: Tuesday, December 18, 14:00 - 17:00
Presentation Time: Tuesday, December 18, 14:00 - 17:00
Topic: Speech recognition and synthesis
Paper Title: Parameter Generation Algorithms for Text-to-Speech Synthesis with Recurrent Neural Networks
Authors: Viacheslav Klimkov, Alexis Moinet, Adam Nadolski, Thomas Drugman (Amazon)
Abstract:
Recurrent Neural Networks (RNNs) have recently proven effective for acoustic modeling in TTS. Various techniques, such as the Maximum Likelihood Parameter Generation (MLPG) algorithm, have been naturally inherited from the HMM-based speech synthesis framework. This paper investigates the situations in which parameter generation and variance restoration approaches help RNN-based TTS. We explore how their performance is affected by factors such as the choice of loss function, the application of regularization methods, and the amount of training data. We propose an efficient way to compute MLPG using a convolutional kernel. Our results show that the L1 loss with proper regularization outperforms any system built with the conventional L2 loss and does not require applying MLPG (which is necessary otherwise). We did not observe perceptual improvements when embedding MLPG into the acoustic model. Finally, we show that variance restoration approaches are important for cepstral features but yield only minor perceptual gains for the prediction of F0.
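The abstract's claim that MLPG can be computed with a convolutional kernel can be illustrated with a short sketch. The standard MLPG solution for the static trajectory c is c = (W'PW)^{-1} W'P mu, where W stacks the identity and the delta/delta-delta operators, P holds the precisions, and mu stacks the predicted static, delta, and delta-delta means. When the variances are time-invariant, this linear map is (away from sequence boundaries) shift-invariant, so each of its three blocks reduces to one fixed FIR filter. The NumPy sketch below derives those filters numerically; it is a minimal illustration of the general idea, not the authors' implementation, and it assumes standard HTS-style delta windows and per-stream constant variances. All function names are hypothetical.

```python
import numpy as np

# Standard HTS-style delta windows (an assumption; the paper does not
# specify its exact windows).
DELTA = np.array([-0.5, 0.0, 0.5])
DELTA2 = np.array([1.0, -2.0, 1.0])

def band_matrix(T, win):
    """T x T matrix applying `win` as a sliding window (computes deltas)."""
    M = np.zeros((T, T))
    c = len(win) // 2
    for t in range(T):
        for k, w in enumerate(win):
            j = t + k - c
            if 0 <= j < T:
                M[t, j] = w
    return M

def mlpg_kernels(T=201, half_width=25, variances=(1.0, 1.0, 1.0)):
    """Derive FIR kernels equivalent to MLPG with time-invariant variances.

    Builds the full MLPG map (W'PW)^{-1} W'P on a long dummy sequence and
    reads one kernel per stream off its middle row, where the map is
    effectively Toeplitz and the taps have decayed within `half_width`.
    """
    I = np.eye(T)
    W = np.vstack([I, band_matrix(T, DELTA), band_matrix(T, DELTA2)])
    p = np.concatenate([np.full(T, 1.0 / v) for v in variances])
    A = np.linalg.solve(W.T @ (p[:, None] * W), W.T * p)  # shape (T, 3T)
    mid, h = T // 2, half_width
    return [A[mid, b * T + mid - h : b * T + mid + h + 1] for b in range(3)]

def mlpg_conv(mu_static, mu_delta, mu_delta2, kernels):
    """Approximate MLPG as three 'same'-mode convolutions over the predicted
    mean trajectories (boundary frames are handled only approximately)."""
    streams = (mu_static, mu_delta, mu_delta2)
    return sum(np.convolve(x, k[::-1], mode="same")
               for x, k in zip(streams, kernels))
```

In this sketch the kernels would be derived once per feature stream from the training-set variances and then applied independently to each cepstral dimension, which is what makes the convolutional formulation cheap compared with solving the banded system for every utterance.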