Paper ID | SPE-25.3
Paper Title | CONTRASTIVE UNSUPERVISED LEARNING FOR SPEECH EMOTION RECOGNITION
Authors | Mao Li, University of Illinois at Chicago, United States; Bo Yang, Joshua Levy, Andreas Stolcke, Viktor Rozgic, Spyros Matsoukas, Constantinos Papayiannis, Daniel Bone, Chao Wang, Amazon, United States
Session | SPE-25: Speech Emotion 3: Emotion Recognition - Representations, Data Augmentation
Location | Gather.Town
Session Time | Wednesday, 09 June, 15:30 - 16:15
Presentation Time | Wednesday, 09 June, 15:30 - 16:15
Presentation | Poster
Topic | Speech Processing: [SPE-ANLS] Speech Analysis
Abstract | Speech emotion recognition (SER) is a key technology for enabling more natural human-machine communication. However, SER has long suffered from a lack of large-scale public labeled datasets. To circumvent this problem, we investigate how unsupervised representation learning on unlabeled datasets can benefit SER. We show that the contrastive predictive coding (CPC) method can learn salient representations from unlabeled datasets, which improve emotion recognition performance. In our experiments, this method achieved state-of-the-art concordance correlation coefficient (CCC) performance for all emotion primitives (activation, valence, and dominance) on IEMOCAP. Additionally, on the MSP-Podcast dataset, our method obtained considerable performance improvements over the baselines.
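The abstract names CPC as the pretraining method but gives no implementation details. As a rough orientation only, the PyTorch sketch below shows the generic CPC objective (an encoder produces latents z_t, an autoregressive network summarizes them into a context c_t, and linear heads predict future latents under an InfoNCE loss); the layer types, sizes, prediction horizon `n_steps`, and log-mel input framing are all illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CPC(nn.Module):
    """Minimal contrastive predictive coding sketch (not the paper's model)."""

    def __init__(self, feat_dim=80, latent_dim=256, context_dim=256, n_steps=4):
        super().__init__()
        # Frame-level encoder: maps input features to latent vectors z_t.
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )
        # Autoregressive context network: summarizes z_{<=t} into c_t.
        self.context = nn.GRU(latent_dim, context_dim, batch_first=True)
        # One linear prediction head per future step k = 1..n_steps.
        self.predictors = nn.ModuleList(
            [nn.Linear(context_dim, latent_dim) for _ in range(n_steps)]
        )
        self.n_steps = n_steps

    def forward(self, x):
        # x: (batch, time, feat_dim), e.g. log-mel frames of unlabeled speech.
        z = self.encoder(x)            # (B, T, latent_dim)
        c, _ = self.context(z)         # (B, T, context_dim)
        T = z.size(1)
        loss = 0.0
        for k, head in enumerate(self.predictors, start=1):
            pred = head(c[:, : T - k])  # predict z_{t+k} from c_t
            target = z[:, k:]           # the true future latents
            # InfoNCE: score each prediction against every target in the
            # batch-time grid; the matching (positive) pair is the diagonal.
            p = pred.reshape(-1, pred.size(-1))
            t = target.reshape(-1, target.size(-1))
            logits = p @ t.t()
            labels = torch.arange(p.size(0), device=x.device)
            loss = loss + F.cross_entropy(logits, labels)
        return loss / self.n_steps

# Usage sketch: pretrain on unlabeled speech, then reuse self.context
# outputs as features for a downstream emotion regressor.
model = CPC()
x = torch.randn(8, 100, 80)  # 8 utterances, 100 frames, 80-dim features
loss = model(x)
loss.backward()
```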
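CCC here refers to Lin's concordance correlation coefficient, the standard metric for dimensional emotion attributes such as activation, valence, and dominance. A small sketch of its usual computation follows; the function name and tensors are illustrative, not taken from the paper.

```python
import torch

def ccc(pred: torch.Tensor, gold: torch.Tensor) -> torch.Tensor:
    """Concordance correlation coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    pred_mean, gold_mean = pred.mean(), gold.mean()
    pred_var = pred.var(unbiased=False)
    gold_var = gold.var(unbiased=False)
    cov = ((pred - pred_mean) * (gold - gold_mean)).mean()
    return 2 * cov / (pred_var + gold_var + (pred_mean - gold_mean) ** 2)
```

Unlike plain Pearson correlation, CCC also penalizes differences in mean and scale between predictions and gold labels, which is why it is preferred for continuous emotion regression.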