SPE-60.6
GATED MULTIMODAL FUSION WITH CONTRASTIVE LEARNING FOR TURN-TAKING PREDICTION IN HUMAN-ROBOT DIALOGUE
Jiudong Yang, Peiying Wang, Mingchao Feng, Meng Chen, Xiaodong He, JD AI, China; Yi Zhu, University of Cambridge, United Kingdom
Session: Multimodal Language Processing
Track: Speech and Language Processing
Location: Gather Area E
Presentation Time: Wed, 11 May, 22:00 - 22:45 China Time (UTC +8)
Wed, 11 May, 14:00 - 14:45 UTC
Session Chair:
David Harwath, University of Texas, Austin
Session SPE-60
SPE-60.1: AUTOMATED AUDIO CAPTIONING USING TRANSFER LEARNING AND RECONSTRUCTION LATENT SPACE SIMILARITY REGULARIZATION
Andrew Koh, Fuzhao Xue, Eng Siong Chng, Nanyang Technological University, Singapore
SPE-60.2: FAST-SLOW TRANSFORMER FOR VISUALLY GROUNDING SPEECH
Puyuan Peng, David Harwath, The University of Texas at Austin, United States of America
SPE-60.3: AUDIO-VISUAL SCENE-AWARE DIALOG AND REASONING USING AUDIO-VISUAL TRANSFORMERS WITH JOINT STUDENT-TEACHER LEARNING
Ankit Parag Shah, Carnegie Mellon University, United States of America; Shijie Geng, Rutgers University, United States of America; Gao Peng, Chinese University of Hong Kong, Hong Kong; Anoop Cherian, Takaaki Hori, Tim K. Marks, Jonathan Le Roux, Chiori Hori, Mitsubishi Electric Research Laboratories (MERL), United States of America
SPE-60.4: AIMNET: ADAPTIVE IMAGE-TAG MERGING NETWORK FOR AUTOMATIC MEDICAL REPORT GENERATION
Jijun Shi, Shanshe Wang, Ronggang Wang, Siwei Ma, Peking University, China
SPE-60.5: ADVERSARIAL INPUT ABLATION FOR AUDIO-VISUAL LEARNING
David Xu, David Harwath, The University of Texas at Austin, United States of America
SPE-60.6: GATED MULTIMODAL FUSION WITH CONTRASTIVE LEARNING FOR TURN-TAKING PREDICTION IN HUMAN-ROBOT DIALOGUE
Jiudong Yang, Peiying Wang, Mingchao Feng, Meng Chen, Xiaodong He, JD AI, China; Yi Zhu, University of Cambridge, United Kingdom