Paper ID | SPE-39.4
Paper Title | EAT: ENHANCED ASR-TTS FOR SELF-SUPERVISED SPEECH RECOGNITION
Authors | Murali Karthick Baskar, Lukáš Burget, Brno University of Technology, Czechia; Shinji Watanabe, Johns Hopkins University, United States; Ramon Astudillo, IBM T. J. Watson Research Center, United States; Jan "Honza" Cernocky, Brno University of Technology, Czechia
Session | SPE-39: Speech Recognition 13: Acoustic Modeling 1
Location | Gather.Town
Session Time | Thursday, 10 June, 15:30 - 16:15
Presentation Time | Thursday, 10 June, 15:30 - 16:15
Presentation | Poster
Topic | Speech Processing: [SPE-RECO] Acoustic Modeling for Automatic Speech Recognition
Abstract | Self-supervised ASR-TTS models suffer under out-of-domain data conditions. Here we propose an enhanced ASR-TTS (EAT) model that incorporates two main features: 1) The ASR$\rightarrow$TTS direction is equipped with a language model reward to penalize the ASR hypotheses before forwarding them to TTS. 2) In the TTS$\rightarrow$ASR direction, a hyper-parameter is introduced to scale the attention context from synthesized speech before sending it to ASR, in order to handle out-of-domain data. Training strategies and the effectiveness of the EAT model are explored under out-of-domain data conditions. The results show that EAT significantly reduces the performance gap between supervised and self-supervised training, by an absolute 2.6\% on Librispeech and 2.7\% on BABEL.
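The abstract describes two mechanisms: weighting the ASR$\rightarrow$TTS loss with a language-model reward over sampled ASR hypotheses, and scaling the attention context derived from synthesized speech in the TTS$\rightarrow$ASR direction. The sketch below is only an illustration of those two ideas, not the authors' implementation; all function names, the reward interpolation, and the parameters (`reward_weight`, `alpha`) are assumptions for exposition.

```python
# Illustrative sketch of the two EAT modifications described in the abstract.
# Not the authors' code; names and weighting scheme are hypothetical.

import numpy as np


def asr_to_tts_loss(tts_loss_per_hyp, lm_log_probs, reward_weight=0.5):
    """ASR->TTS direction: weight each sampled ASR hypothesis' TTS
    reconstruction loss by a language-model reward, so that hypotheses
    the LM considers unlikely are penalized before reaching TTS."""
    # Turn LM log-probabilities into normalized rewards (assumed form).
    rewards = np.exp(lm_log_probs - np.max(lm_log_probs))
    rewards /= rewards.sum()
    # Interpolate uniform weighting with the LM reward (assumed choice).
    n = len(tts_loss_per_hyp)
    weights = (1.0 - reward_weight) / n + reward_weight * rewards
    return float(np.dot(weights, tts_loss_per_hyp))


def tts_to_asr_context(attention_context, alpha=0.3):
    """TTS->ASR direction: scale the attention context computed from
    synthesized speech by a hyper-parameter alpha before passing it to
    the ASR decoder, limiting the influence of out-of-domain synthesis."""
    return alpha * np.asarray(attention_context)


if __name__ == "__main__":
    # Toy numbers only, to show the shape of the two operations.
    losses = np.array([2.1, 1.7, 3.0])    # TTS loss per sampled ASR hypothesis
    lm_lp = np.array([-4.0, -2.5, -6.0])  # LM log-probability per hypothesis
    print(asr_to_tts_loss(losses, lm_lp))
    print(tts_to_asr_context(np.ones(4)))
```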