Paper ID | AUD-32.4
Paper Title | Double-DCCCAE: Estimation of Body Gestures from Speech Waveform
Authors | JinHong Lu, TianHang Liu, Shuzhuang Xu, Hiroshi Shimodaira, University of Edinburgh, United Kingdom
Session | AUD-32: Audio for Multimedia and Audio Processing Systems
Location | Gather.Town
Session Time | Friday, 11 June, 14:00 - 14:45
Presentation Time | Friday, 11 June, 14:00 - 14:45
Presentation | Poster
Topic | Audio and Acoustic Signal Processing: [AUD-AUMM] Audio for Multimedia and Audio Processing Systems
Abstract | This paper presents an approach for estimating body motion from the audio-speech waveform, in which context information in both the input and output streams is taken into account without using recurrent models. Previous works commonly use multiple frames of input to estimate one frame of motion data, giving little consideration to the temporal structure of the generated motion. To resolve these problems, we extend our previous work and propose a system, the double deep canonical-correlation-constrained autoencoder (D-DCCCAE), which encodes speech and motion segments into fixed-length embedded features that are well correlated with the segments of the other modality. The learnt motion embedding is estimated from the learnt speech embedding through a simple neural network and is then decoded back to the sequential motion. The proposed pair of embedded features showed higher correlation with the motion data than spectral features did, and our model was preferred over the baseline model (BA) in terms of human-likeness and was comparable in terms of appropriateness.
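The abstract describes a two-stage pipeline: a pair of correlation-constrained autoencoders learns fixed-length embeddings of speech and motion segments, and a simple feed-forward network maps the speech embedding to the motion embedding, which is decoded back to motion. The PyTorch sketch below illustrates that structure under stated assumptions: the layer sizes, segment dimensions, and the per-dimension Pearson correlation term (a simplified stand-in for the full deep-CCA objective, which jointly whitens both views) are illustrative choices, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SegmentAE(nn.Module):
    """Autoencoder mapping a fixed-length segment to an embedding and back."""
    def __init__(self, seg_dim, emb_dim, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(seg_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, emb_dim))
        self.decoder = nn.Sequential(
            nn.Linear(emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, seg_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def correlation_loss(z1, z2, eps=1e-8):
    """Negative mean per-dimension Pearson correlation between the two
    embeddings -- a simplified stand-in for the deep-CCA constraint."""
    z1 = z1 - z1.mean(dim=0)
    z2 = z2 - z2.mean(dim=0)
    corr = (z1 * z2).mean(dim=0) / (z1.std(dim=0) * z2.std(dim=0) + eps)
    return -corr.mean()

# Hypothetical segment sizes: e.g. 30 frames of 13-dim speech features
# and 30 frames of 50-dim motion features, both embedded into 64 dims.
speech_ae = SegmentAE(seg_dim=13 * 30, emb_dim=64)
motion_ae = SegmentAE(seg_dim=50 * 30, emb_dim=64)
regressor = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
mse = nn.MSELoss()

def dccae_step(speech_seg, motion_seg, lam=1.0):
    """Stage one: reconstruct each modality while keeping the two
    embeddings correlated (the CCA-style constraint)."""
    zs, speech_hat = speech_ae(speech_seg)
    zm, motion_hat = motion_ae(motion_seg)
    recon = mse(speech_hat, speech_seg) + mse(motion_hat, motion_seg)
    return recon + lam * correlation_loss(zs, zm)

def estimation_step(speech_seg, motion_seg):
    """Stage two: estimate the motion embedding from the speech embedding
    with a simple network, then decode it back to sequential motion."""
    with torch.no_grad():  # embeddings come from the frozen autoencoders
        zs, _ = speech_ae(speech_seg)
        zm, _ = motion_ae(motion_seg)
    zm_hat = regressor(zs)
    motion_hat = motion_ae.decoder(zm_hat)
    return mse(zm_hat, zm) + mse(motion_hat, motion_seg)

# Smoke test on random batches of 8 segments.
speech = torch.randn(8, 13 * 30)
motion = torch.randn(8, 50 * 30)
print(dccae_step(speech, motion).item())
print(estimation_step(speech, motion).item())
```

In this sketch the two stages are trained separately (autoencoders first, regressor second), which matches the abstract's description of mapping one learnt embedding to the other; whether the authors fine-tune the decoder jointly with the regressor is not stated in the abstract.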