| Paper ID | SPE-11.6 |
| Paper Title | ANY-TO-ONE SEQUENCE-TO-SEQUENCE VOICE CONVERSION USING SELF-SUPERVISED DISCRETE SPEECH REPRESENTATIONS |
| Authors | Wen-Chin Huang, Yi-Chiao Wu, Tomoki Hayashi, Tomoki Toda, Nagoya University, Japan |
| Session | SPE-11: Voice Conversion 1: Non-parallel Conversion |
| Location | Gather.Town |
| Session Time | Tuesday, 08 June, 16:30 - 17:15 |
| Presentation Time | Tuesday, 08 June, 16:30 - 17:15 |
| Presentation | Poster |
| Topic | Speech Processing: [SPE-SYNT] Speech Synthesis and Generation |
| Abstract | We present a novel approach to any-to-one (A2O) voice conversion (VC) in a sequence-to-sequence (seq2seq) framework. A2O VC aims to convert any speaker, including those unseen during training, to a fixed target speaker. We utilize vq-wav2vec (VQW2V), a discretized self-supervised speech representation learned from massive amounts of unlabeled data, which is assumed to be speaker-independent and to correspond well to the underlying linguistic content. Given a training dataset of the target speaker, we extract VQW2V and acoustic features and estimate a seq2seq mapping function from the former to the latter. With the help of a pretraining method and a newly designed postprocessing technique, our model generalizes well with as little as 5 minutes of training data, even outperforming the same model trained with parallel data. |
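The pipeline described in the abstract has two stages: extract discrete VQW2V indices from raw audio, then train a seq2seq model that maps those indices to the target speaker's acoustic features. The sketch below illustrates the first stage following the standard fairseq usage for vq-wav2vec; the checkpoint path is a placeholder, the waveform is a dummy input, and the downstream seq2seq stage is only indicated in comments rather than reproducing the authors' actual implementation.

```python
# Minimal sketch: extracting discrete vq-wav2vec (VQW2V) indices with fairseq.
# Assumptions: torch and fairseq are installed, and the released vq-wav2vec
# checkpoint has been downloaded to the placeholder path below.
import torch
from fairseq.models.wav2vec import Wav2VecModel

cp = torch.load('vq-wav2vec.pt')  # placeholder path to the released checkpoint
model = Wav2VecModel.build_model(cp['args'], task=None)
model.load_state_dict(cp['model'])
model.eval()

wav_input_16khz = torch.randn(1, 16000)  # dummy 1-second 16 kHz waveform
with torch.no_grad():
    z = model.feature_extractor(wav_input_16khz)     # continuous frame features
    _, idxs = model.vector_quantizer.forward_idx(z)  # discrete codebook indices
print(idxs.shape)  # (batch, frames, 2): two codebook groups per frame

# In the A2O setup, these index sequences (speaker-independent by assumption)
# would be paired with acoustic features (e.g. mel spectrograms) of the target
# speaker, and a seq2seq model trained to map the former to the latter. At
# conversion time, any source speaker's audio is reduced to VQW2V indices and
# decoded into the fixed target speaker's acoustic features.
```

Because the discrete indices discard speaker identity, target-speaker training pairs can be built from the target's own recordings alone, which is what lets the seq2seq mapping be estimated without parallel data.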