Paper ID | SS-13.4
Paper Title | DIRECTIONAL ASR: A NEW PARADIGM FOR E2E MULTI-SPEAKER SPEECH RECOGNITION WITH SOURCE LOCALIZATION
Authors | Aswin Shanmugam Subramanian, Johns Hopkins University, United States; Chao Weng, Tencent AI Lab, China; Shinji Watanabe, Johns Hopkins University, United States; Meng Yu, Yong Xu, Shi-Xiong Zhang, Dong Yu, Tencent AI Lab, United States
Session | SS-13: Recent Advances in Multichannel and Multimodal Machine Learning for Speech Applications |
Location | Gather.Town |
Session Time | Thursday, 10 June, 16:30 - 17:15
Presentation Time | Thursday, 10 June, 16:30 - 17:15
Presentation | Poster
Topic | Special Sessions: Recent Advances in Multichannel and Multimodal Machine Learning for Speech Applications
Abstract |
This paper proposes a new paradigm for handling far-field multi-speaker data in an end-to-end (E2E) neural network manner, called directional automatic speech recognition (D-ASR), which explicitly models source speaker locations. In D-ASR, the azimuth angle of the sources with respect to the microphone array is defined as a latent variable. This angle controls the quality of separation, which in turn determines the ASR performance. All three functionalities of D-ASR (localization, separation, and recognition) are connected as a single differentiable neural network and trained solely based on ASR error minimization objectives. The advantages of D-ASR over existing methods are threefold: (1) it provides explicit speaker locations, (2) it improves the explainability factor, and (3) it achieves better ASR performance as the process is more streamlined. In addition, D-ASR does not require explicit direction of arrival (DOA) supervision like existing data-driven localization models, which makes it more appropriate for realistic data. For the case of two source mixtures, D-ASR achieves an average DOA prediction error of less than three degrees. It also outperforms a strong far-field multi-speaker end-to-end system in both separation quality and ASR performance.
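To make the paradigm concrete, the following is a minimal PyTorch-style sketch of the localize-separate-recognize chain described in the abstract: a soft azimuth posterior acts as the latent angle variable, conditions a separation module, and feeds a recognizer whose ASR loss is the only training signal. All module names, layer choices, and feature shapes here are illustrative assumptions for exposition, not the authors' actual D-ASR architecture (which uses angle features with neural beamforming and a full E2E ASR back-end).

```python
import torch
import torch.nn as nn

class DirectionalASRSketch(nn.Module):
    """Illustrative D-ASR-style pipeline: localize -> separate -> recognize, all differentiable."""

    def __init__(self, num_channels=8, feat_dim=80, num_doa_bins=360, vocab_size=500):
        super().__init__()
        # Localization net: predicts a distribution over azimuth bins (hypothetical design).
        self.localizer = nn.Sequential(
            nn.Linear(num_channels * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, num_doa_bins),
        )
        # Separation net: conditioned on the angle posterior, outputs a single-speaker feature stream.
        self.separator = nn.GRU(feat_dim + num_doa_bins, 256, batch_first=True)
        self.sep_proj = nn.Linear(256, feat_dim)
        # Recognizer: small encoder + CTC head standing in for a full E2E ASR model.
        self.encoder = nn.GRU(feat_dim, 256, batch_first=True)
        self.ctc_head = nn.Linear(256, vocab_size)
        self.ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)

    def forward(self, multichannel_feats, targets, target_lengths):
        # multichannel_feats: (batch, time, channels * feat_dim), e.g. log-mel stacked over channels.
        b, t, _ = multichannel_feats.shape
        # 1) Localization: soft azimuth posterior. The argmax gives the DOA estimate, but the
        #    soft posterior keeps the chain differentiable, so no DOA labels are needed.
        doa_logits = self.localizer(multichannel_feats.mean(dim=1))        # (b, num_doa_bins)
        doa_posterior = doa_logits.softmax(dim=-1)
        # 2) Separation: condition per-frame reference-channel features on the angle posterior.
        ref_channel = multichannel_feats[:, :, : self.sep_proj.out_features]
        angle_feat = doa_posterior.unsqueeze(1).expand(b, t, -1)
        sep_out, _ = self.separator(torch.cat([ref_channel, angle_feat], dim=-1))
        separated = self.sep_proj(sep_out)                                  # (b, t, feat_dim)
        # 3) Recognition: encoder + CTC; the ASR loss is the only training objective.
        enc_out, _ = self.encoder(separated)
        log_probs = self.ctc_head(enc_out).log_softmax(dim=-1)              # (b, t, vocab)
        input_lengths = torch.full((b,), t, dtype=torch.long)
        loss = self.ctc_loss(log_probs.transpose(0, 1), targets, input_lengths, target_lengths)
        # Return the ASR loss plus the implied DOA estimate (in azimuth bins) as a by-product.
        return loss, doa_posterior.argmax(dim=-1)
```

In this sketch, backpropagating the CTC loss through the separator and into the localizer is what lets the azimuth posterior be learned without explicit DOA supervision, mirroring the training setup the abstract describes.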