SPE-2.2
BEST OF BOTH WORLDS: MULTI-TASK AUDIO-VISUAL AUTOMATIC SPEECH RECOGNITION AND ACTIVE SPEAKER DETECTION
Otavio Braga, Olivier Siohan, Google, United States of America
Session:
Speech Recognition: Robust Speech Recognition I
Track:
Speech and Language Processing
Location:
Gather Area C
Presentation Time:
Sun, 8 May, 20:00 - 20:45 China Time (UTC +8)
Sun, 8 May, 12:00 - 12:45 UTC
Session Chair:
Satoshi Nakamura, Nara Institute of Science and Technology
Session SPE-2
SPE-2.1: AUDIO-VISUAL MULTI-CHANNEL SPEECH SEPARATION, DEREVERBERATION AND RECOGNITION
Guinan Li, Jiajun Deng, Xunying Liu, Helen Meng, The Chinese University of Hong Kong, China; Jianwei Yu, Tencent AI Lab, China
SPE-2.2: BEST OF BOTH WORLDS: MULTI-TASK AUDIO-VISUAL AUTOMATIC SPEECH RECOGNITION AND ACTIVE SPEAKER DETECTION
Otavio Braga, Olivier Siohan, Google, United States of America
SPE-2.3: END-TO-END MULTI-MODAL SPEECH RECOGNITION WITH AIR AND BONE CONDUCTED SPEECH
Junqi Chen, Mou Wang, Xiao-Lei Zhang, Northwestern Polytechnical University, China; Zhiyong Huang, Susanto Rahardja, National University of Singapore, Singapore
SPE-2.4: END-TO-END SPEECH RECOGNITION WITH JOINT DEREVERBERATION OF SUB-BAND AUTOREGRESSIVE ENVELOPES
Rohit Kumar, Anurenjan Purushothaman, Sriram Ganapathy, Indian Institute of Science, Bangalore, India; Anirudh Sreeram, University of Southern California, United States of America
SPE-2.5: IMPROVING NOISE ROBUSTNESS OF CONTRASTIVE SPEECH REPRESENTATION LEARNING WITH SPEECH RECONSTRUCTION
Heming Wang, DeLiang Wang, The Ohio State University, United States of America; Yao Qian, Xiaofei Wang, Yiming Wang, Chengyi Wang, Shujie Liu, Takuya Yoshioka, Jinyu Li, Microsoft Corporation, United States of America
SPE-2.6: MULTI-CHANNEL MULTI-SPEAKER ASR USING 3D SPATIAL FEATURE
Yiwen Shao, Johns Hopkins University, United States of America; Shi-Xiong Zhang, Dong Yu, Tencent AI Lab, United States of America