Paper Detail

Presentation #3
Session: Voice Conversion and TTS
Location: Kallirhoe Hall
Session Time: Friday, December 21, 10:00 - 12:00
Presentation Time: Friday, December 21, 10:00 - 12:00
Presentation: Poster
Topic: Special session on Speech Synthesis
Paper Title: Adaptive WaveNet Vocoder for Residual Compensation in GAN-based Voice Conversion
Authors: Berrak Sisman, Mingyang Zhang, National University of Singapore, Singapore; Sakriani Sakti, Nara Institute of Science and Technology, Japan; Haizhou Li, National University of Singapore, Singapore; Satoshi Nakamura, Nara Institute of Science and Technology, Japan
Abstract: In this paper, we propose to use generative adversarial networks (GANs) together with a WaveNet vocoder to address the over-smoothing problem that arises in deep learning approaches to voice conversion, and to improve vocoding quality over traditional vocoders. As the GAN aims to minimize the divergence between the natural and converted speech parameters, it effectively alleviates over-smoothing in the converted speech. The WaveNet vocoder, in turn, allows us to leverage human speech from a large speaker population, thus improving the naturalness of the synthetic voice. Furthermore, for the first time, we study how to use the WaveNet vocoder for residual compensation to improve voice conversion performance. The experiments show that the proposed voice conversion framework consistently outperforms the baselines.
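
For reference, the adversarial training mentioned in the abstract can be illustrated with the standard GAN minimax objective written over speech parameters; the notation below (a generator G mapping source features x to converted features, and a discriminator D scoring natural target features y) is illustrative and does not reproduce the exact formulation used in the paper.

\min_{G} \max_{D} \; \mathbb{E}_{y \sim p_{\mathrm{nat}}}\big[\log D(y)\big] + \mathbb{E}_{x \sim p_{\mathrm{src}}}\big[\log\big(1 - D(G(x))\big)\big]

Training G to fool D pushes the converted parameters toward the distribution of natural parameters rather than toward its mean, which is how adversarial training counters the over-smoothing typical of conventional mean-squared-error conversion models.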