Paper Detail

Presentation #13
Session: Detection, Paralinguistics and Coding
Location: Kallirhoe Hall
Session Time: Wednesday, December 19, 13:30 - 15:30
Presentation Time: Wednesday, December 19, 13:30 - 15:30
Presentation: Poster
Topic: Speech recognition and synthesis
Paper Title: A DEEP LEARNING APPROACH FOR DATA DRIVEN VOCAL TRACT AREA FUNCTION ESTIMATION
Authors: Sasan Asadiabadi, Engin Erzin, Koc University, Turkey
Abstract: In this paper, we present a data-driven vocal tract area function (VTAF) estimation method using deep neural networks (DNNs). We approach the VTAF estimation problem with sequence-to-sequence learning neural networks, where regression over a sliding window is used to learn an arbitrary non-linear one-to-many mapping from the input feature sequence to the target articulatory sequence. We propose two schemes for efficient estimation of the VTAF: (1) direct estimation of the area function values and (2) indirect estimation via prediction of the vocal tract boundaries. We consider acoustic speech and phone sequences as two possible input modalities for the DNN estimators. Experimental evaluations are performed over a large dataset comprising acoustic and phonetic features with parallel articulatory information from the USC-TIMIT database. Our results show that the proposed direct and indirect schemes perform VTAF estimation with mean absolute error (MAE) rates lower than 1.65 mm, and the direct estimation scheme is observed to perform better than the indirect scheme.
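
The sketch below illustrates the general idea of sliding-window DNN regression from acoustic feature frames to area function values (the direct scheme described in the abstract). It is a minimal PyTorch illustration under stated assumptions, not the authors' implementation: the feature dimension, window length, number of area-function sections, network sizes, training loop, and all names (VTAFRegressor, sliding_windows) are illustrative, not values from the paper.

# Minimal sketch (assumptions, not the authors' implementation) of sliding-window
# DNN regression from acoustic feature frames to vocal tract area function values.
import torch
import torch.nn as nn

N_FEAT = 13          # assumed acoustic feature dimension per frame (e.g. MFCCs)
WINDOW = 11          # assumed sliding-window length (frames of temporal context)
N_SECTIONS = 40      # assumed number of VTAF sections predicted per frame

class VTAFRegressor(nn.Module):
    # Feed-forward DNN mapping a window of acoustic frames to one VTAF vector.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEAT * WINDOW, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, N_SECTIONS),   # direct estimation of area values
        )

    def forward(self, x):                 # x: (batch, WINDOW * N_FEAT)
        return self.net(x)

def sliding_windows(features, window=WINDOW):
    # Stack each frame with its surrounding context frames into one flat vector,
    # replicating the edge frames so every position has a full window.
    pad = window // 2
    padded = torch.cat([features[:1].repeat(pad, 1),
                        features,
                        features[-1:].repeat(pad, 1)], dim=0)
    return torch.stack([padded[t:t + window].reshape(-1)
                        for t in range(features.shape[0])])

if __name__ == "__main__":
    frames = torch.randn(200, N_FEAT)           # dummy acoustic feature sequence
    targets = torch.rand(200, N_SECTIONS)       # dummy VTAF targets (placeholder)
    model = VTAFRegressor()
    optim = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.L1Loss()                       # MAE, matching the reported metric
    for _ in range(20):                         # tiny training loop for illustration
        optim.zero_grad()
        loss = loss_fn(model(sliding_windows(frames)), targets)
        loss.backward()
        optim.step()
    print("MAE on dummy data:", loss.item())

The L1 loss is used here only because the abstract reports results as mean absolute error; the paper's actual training objective, features, and target parameterization may differ.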