Presentation #: 4
Session: Detection, Paralinguistics and Coding
Session Time: Wednesday, December 19, 13:30 - 15:30
Presentation Time: Wednesday, December 19, 13:30 - 15:30
Presentation: Poster
Topic: Speaker/language recognition
Paper Title: Unsupervised Representation Learning of Speech for Dialect Identification
Authors:
Suwon Shon, Massachusetts Institute of Technology
Wei-Ning Hsu, Massachusetts Institute of Technology
James Glass, Massachusetts Institute of Technology
Abstract:
In this paper, we explore the use of a factorized hierarchical variational autoencoder (FHVAE) model to learn an unsupervised latent representation for dialect identification (DID). An FHVAE can learn a latent space that separates the more static attributes within an utterance from the more dynamic attributes by encoding them into two different sets of latent variables. Useful factors for dialect identification, such as phonetic or linguistic content, are encoded by a segmental latent variable, while irrelevant factors that are relatively constant within a sequence, such as channel or speaker information, are encoded by a sequential latent variable. This disentanglement property makes the segmental latent variable less susceptible to channel and speaker variation, and thus reduces degradation from channel domain mismatch. We demonstrate that on fully supervised DID tasks, an end-to-end model trained on features extracted from the FHVAE model achieves the best performance, compared to the same model trained on conventional acoustic features and to an i-vector based system. Moreover, we show that the proposed approach can leverage a large amount of unlabeled data for FHVAE training to learn domain-invariant features for DID, and significantly improve the performance in a low-resource condition, where labels for the in-domain data are not available.
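Below is a minimal conceptual sketch (in PyTorch) of the two-latent-variable idea described in the abstract: an encoder maps a fixed-length speech segment to a segment-level latent (z1, intended to capture phonetic/linguistic content) and a sequence-level latent (z2, intended to capture speaker/channel factors), and only z1 is kept as a feature for a downstream DID classifier. This is not the authors' implementation nor the full FHVAE training objective; all layer sizes, names, and the segment length are illustrative assumptions.

# Conceptual sketch only; hyperparameters below are assumed, not taken from the paper.
import torch
import torch.nn as nn

FEAT_DIM, Z1_DIM, Z2_DIM, HID = 80, 32, 32, 256  # assumed feature/latent/hidden sizes

class TwoLatentEncoder(nn.Module):
    """Encodes a speech segment into two latent Gaussians: z1 (segment-level)
    and z2 (sequence-level), mirroring the factorization described above."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(FEAT_DIM, HID, batch_first=True)
        self.z1_mu = nn.Linear(HID, Z1_DIM)
        self.z1_logvar = nn.Linear(HID, Z1_DIM)
        self.z2_mu = nn.Linear(HID, Z2_DIM)
        self.z2_logvar = nn.Linear(HID, Z2_DIM)

    def forward(self, segment):                      # segment: (batch, frames, FEAT_DIM)
        _, (h, _) = self.rnn(segment)
        h = h[-1]                                    # last hidden state: (batch, HID)
        return (self.z1_mu(h), self.z1_logvar(h),
                self.z2_mu(h), self.z2_logvar(h))

# After (hypothetical) FHVAE-style unsupervised training, the z1 means can serve
# as channel/speaker-robust features for an end-to-end dialect classifier:
encoder = TwoLatentEncoder()
segments = torch.randn(4, 20, FEAT_DIM)             # 4 segments of 20 frames each
z1_mu, _, z2_mu, _ = encoder(segments)
features_for_did = z1_mu                             # input to a downstream DID model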