Paper Detail

Presentation #4
Session: Dialogue
Location: Kallirhoe Hall
Session Time: Thursday, December 20, 10:00 - 12:00
Presentation Time: Thursday, December 20, 10:00 - 12:00
Presentation: Poster
Topic: Spoken dialog systems
Paper Title: PREDICTION OF DIALOGUE SUCCESS WITH SPECTRAL AND RHYTHM ACOUSTIC FEATURES USING DNNS AND SVMS
Authors: Athanasios Lykartsis, Technische Universität Berlin, Germany; Margarita Kotti, Alexandros Papangelis, Yannis Stylianou, Toshiba LTD, United Kingdom
Abstract: In this paper we investigate the novel use of audio alone to predict whether a spoken dialogue will be successful, in both a subjective and an objective sense. To achieve this, multiple spectral and rhythmic features are fed to support vector machines and deep neural networks. We report results on data from 3267 spoken dialogues, using both the full user response and parts of it. Experiments show that an average accuracy of 74% can be achieved with just 5 acoustic features when analysing only 1 user turn, which allows a real-time and fairly accurate prediction of dialogue success after a single short interaction unit. Of the features tested, those related to speech rate, signal energy and cepstrum are among the most informative. The results presented here outperform the state of the art in spoken dialogue success prediction from acoustic features alone.
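
To give a concrete picture of the kind of pipeline the abstract describes, the following is a minimal Python sketch: a handful of spectral and rhythm features (cepstral means, signal energy statistics, a speech-rate proxy) are extracted from a single user turn and fed to an SVM. The feature choices, file names, labels, and classifier settings are illustrative assumptions using librosa and scikit-learn, not the authors' actual feature set, corpus, or DNN/SVM configurations.

```python
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def turn_features(wav_path):
    """Extract a small spectral/rhythm feature vector from one user turn."""
    y, sr = librosa.load(wav_path, sr=None)
    duration = len(y) / sr
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)           # cepstral features
    rms = librosa.feature.rms(y=y)                               # frame-wise energy
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    speech_rate = len(onsets) / max(duration, 1e-6)              # onsets per second
    return np.array([
        mfcc[0].mean(),   # overall cepstral level
        mfcc[1].mean(),   # coarse spectral slope
        rms.mean(),       # average signal energy
        rms.std(),        # energy variability
        speech_rate,      # rhythm / speaking-rate proxy
    ])

# Hypothetical corpus of single user turns labelled 1 (successful dialogue)
# or 0 (unsuccessful); file names and labels are placeholders, not real data.
wav_paths = ["turn_0001.wav", "turn_0002.wav", "turn_0003.wav", "turn_0004.wav"]
labels = np.array([1, 0, 1, 0])

X = np.vstack([turn_features(p) for p in wav_paths])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, labels)

# Predict success for a new, unseen user turn.
print(clf.predict(turn_features("new_turn.wav").reshape(1, -1)))
```

In practice such a classifier would be trained and evaluated with cross-validation over a labelled corpus of dialogues; the sketch only illustrates the turn-level feature extraction and SVM classification step.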