Paper ID | SPE-50.3
Paper Title | LITESING: TOWARDS FAST, LIGHTWEIGHT AND EXPRESSIVE SINGING VOICE SYNTHESIS
Authors | Xiaobin Zhuang, Tao Jiang, Szu-Yu Chou, Bin Wu, Peng Hu, Simon Lui, Tencent Music Entertainment, China
Session | SPE-50: Voice Conversion & Speech Synthesis: Singing Voice & Other Topics
Location | Gather.Town
Session Time | Friday, 11 June, 11:30 - 12:15
Presentation Time | Friday, 11 June, 11:30 - 12:15
Presentation | Poster
Topic | Speech Processing: [SPE-SYNT] Speech Synthesis and Generation
Abstract | LiteSing, proposed in this paper, is a fast, lightweight and expressive high-quality singing voice synthesis (SVS) system. The model stacks several non-autoregressive WaveNet blocks in the encoder and decoder under a generative adversarial architecture, predicts expressive conditions from the musical score, and generates acoustic features from the full conditions. The full conditions consist of spectrogram energy, the voiced/unvoiced (V/UV) decision and a dynamic pitch curve, which are shown to be related to expressiveness. Pitch and timbre features are predicted separately, avoiding interdependence between the two. Instead of a neural network vocoder, a parametric WORLD vocoder is employed at the end for pitch curve consistency. Experimental results show that LiteSing outperforms the baseline feed-forward Transformer model with 1.386 times faster inference, 15 times fewer training parameters, and almost the same MOS for sound quality. In an A/B test, LiteSing achieves a 67.3% preference rate over the baseline in expressiveness, which suggests its advantage over the other compared models.
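
The following is a minimal sketch of the pipeline the abstract describes, assuming a PyTorch-style implementation and the pyworld wrapper of the WORLD vocoder: a condition predictor estimates energy, V/UV and a pitch curve from score features, a decoder built from non-autoregressive WaveNet-style blocks maps the full conditions to a spectral envelope, and WORLD renders the waveform. The module names (WaveNetBlock, ConditionPredictor, TimbreDecoder), layer sizes, the zero-aperiodicity shortcut and the conditioning scheme are illustrative assumptions, not the authors' implementation; adversarial training and the score front-end are omitted.

# A minimal sketch of a LiteSing-style pipeline, assuming PyTorch and the
# pyworld wrapper of the WORLD vocoder. Names, layer sizes and the
# conditioning scheme are illustrative assumptions, not the paper's code.
import numpy as np
import torch
import torch.nn as nn
import pyworld


class WaveNetBlock(nn.Module):
    """Non-autoregressive WaveNet-style block: dilated conv + gated activation."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.conv = nn.Conv1d(channels, 2 * channels, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.res = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x):
        a, b = self.conv(x).chunk(2, dim=1)
        return x + self.res(torch.tanh(a) * torch.sigmoid(b))


class ConditionPredictor(nn.Module):
    """Predicts expressive conditions (energy, V/UV, log-F0) from score features."""
    def __init__(self, score_dim, channels=64, n_blocks=4):
        super().__init__()
        self.pre = nn.Conv1d(score_dim, channels, kernel_size=1)
        self.blocks = nn.Sequential(*[WaveNetBlock(channels, 2 ** i)
                                      for i in range(n_blocks)])
        self.out = nn.Conv1d(channels, 3, kernel_size=1)

    def forward(self, score):  # score: (B, score_dim, T)
        energy, vuv_logit, log_f0 = self.out(self.blocks(self.pre(score))).chunk(3, dim=1)
        return energy, vuv_logit, log_f0


class TimbreDecoder(nn.Module):
    """Maps the full conditions (score + energy + V/UV + pitch) to a log spectral envelope."""
    def __init__(self, score_dim, sp_dim, channels=64, n_blocks=4):
        super().__init__()
        self.pre = nn.Conv1d(score_dim + 3, channels, kernel_size=1)
        self.blocks = nn.Sequential(*[WaveNetBlock(channels, 2 ** i)
                                      for i in range(n_blocks)])
        self.out = nn.Conv1d(channels, sp_dim, kernel_size=1)

    def forward(self, score, energy, vuv_logit, log_f0):
        full_cond = torch.cat([score, energy, vuv_logit, log_f0], dim=1)
        return self.out(self.blocks(self.pre(full_cond)))


def synthesize(score, predictor, decoder, fs=24000, frame_period=5.0):
    """Pitch and timbre are predicted separately, then rendered by the WORLD vocoder."""
    energy, vuv_logit, log_f0 = predictor(score)
    log_sp = decoder(score, energy, vuv_logit, log_f0)  # (1, sp_dim, T)

    # Zero F0 on unvoiced frames so WORLD treats them as unvoiced.
    f0 = torch.exp(log_f0).squeeze()
    f0 = torch.where(torch.sigmoid(vuv_logit).squeeze() > 0.5, f0, torch.zeros_like(f0))

    # WORLD expects contiguous float64 arrays: F0 (T,), envelope/aperiodicity (T, sp_dim).
    sp = np.exp(np.ascontiguousarray(log_sp.squeeze(0).T.detach().double().numpy()))
    ap = np.zeros_like(sp)  # placeholder: fully periodic aperiodicity
    return pyworld.synthesize(np.ascontiguousarray(f0.detach().double().numpy()),
                              sp, ap, fs, frame_period)


# Usage with untrained modules and a dummy score (sp_dim = 513 implies fft_size 1024):
# predictor = ConditionPredictor(score_dim=8)
# decoder = TimbreDecoder(score_dim=8, sp_dim=513)
# wav = synthesize(torch.randn(1, 8, 200), predictor, decoder)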