MMSP-3.4
IS CROSS-ATTENTION PREFERABLE TO SELF-ATTENTION FOR MULTI-MODAL EMOTION RECOGNITION?
Vandana Rajan, Andrea Cavallaro, Queen Mary University of London, United Kingdom; Alessio Brutti, Fondazione Bruno Kessler, Italy
Session: Emotion Recognition
Track: Multimedia Signal Processing
Location: Gather Area O
Presentation Time: Mon, 9 May, 22:00 - 22:45 China Time (UTC +8) / Mon, 9 May, 14:00 - 14:45 UTC
Session Co-Chairs: Aladine Chetouani, Université d'Orléans; Ivan Kukanov, Institute for Infocomm Research, A*STAR
Session MMSP-3
MMSP-3.1: MULTIMODAL EMOTION RECOGNITION WITH SURGICAL AND FABRIC MASKS
Ziqing Yang, Houwei Cao, New York Institute of Technology, United States of America; Katherine Nayan, Rochester Institute of Technology, United States of America; Zehao Fan, New York University Tandon School of Engineering, United States of America
MMSP-3.2: HUMAN EMOTION RECOGNITION USING MULTI-MODAL BIOLOGICAL SIGNALS BASED ON TIME LAG-CONSIDERED CORRELATION MAXIMIZATION
Yuya Moroto, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama, Hokkaido University, Japan
MMSP-3.3: MULTI-MODAL EMOTION RECOGNITION WITH SELF-GUIDED MODALITY CALIBRATION
Mixiao Hou, Zheng Zhang, Guangming Lu, Harbin Institute of Technology, Shenzhen, China
MMSP-3.4: IS CROSS-ATTENTION PREFERABLE TO SELF-ATTENTION FOR MULTI-MODAL EMOTION RECOGNITION?
Vandana Rajan, Andrea Cavallaro, Queen Mary University of London, United Kingdom; Alessio Brutti, Fondazione Bruno Kessler, Italy
MMSP-3.5: A PRE-TRAINED AUDIO-VISUAL TRANSFORMER FOR EMOTION RECOGNITION
Minh Tran, Mohammad Soleymani, University of Southern California, United States of America
MMSP-3.6: MEmoBERT: Pre-training Model with Prompt-based Learning for Multimodal Emotion Recognition
Jinming Zhao, Ruichen Li, Qin Jin, Renmin University of China, China; Xinchao Wang, Haizhou Li, National University of Singapore, Singapore