Paper ID | SPE-40.6 |
Paper Title | MULTITASK LEARNING AND JOINT OPTIMIZATION FOR TRANSFORMER-RNN-TRANSDUCER SPEECH RECOGNITION |
Authors | Jae-Jin Jeon, Euisung Kim, Kakaoenterprise, South Korea |
Session | SPE-40: Speech Recognition 14: Acoustic Modeling 2 |
Location | Gather.Town |
Session Time: | Thursday, 10 June, 15:30 - 16:15 |
Presentation Time: | Thursday, 10 June, 15:30 - 16:15 |
Presentation | Poster |
Topic | Speech Processing: [SPE-ROBU] Robust Speech Recognition |
Abstract | Recently, several types of end-to-end speech recognition methods known as transformer-transducers have been introduced successfully. In these methods, the transcription network is generally modelled by a transformer-based neural network, while the prediction network can be modelled by either a transformer or a recurrent neural network (RNN). In this paper, we propose novel multitask learning, joint optimization, and joint decoding methods for transformer-RNN-transducer systems. The main advantage of the proposed methods is that the model can retain information from a large text corpus, eliminating the need for an external language model (LM). We demonstrate the effectiveness of the proposed methods through experiments using the well-known ESPnet toolkit on the widely used LibriSpeech datasets, and show that the proposed methods reduce the word error rate (WER) by 16.6% and 13.3% on the test-clean and test-other sets, respectively, without changing the overall model structure and without exploiting an external LM. |