SLT 2018

Paper Detail

Presentation #9
Session: Corpora and Evaluation Methodologies
Session Time: Wednesday, December 19, 13:30 - 15:30
Presentation Time: Wednesday, December 19, 13:30 - 15:30
Presentation: Poster
Topic: Evaluation methodologies: Educational
Paper Title: DNN-BASED SCORING OF LANGUAGE LEARNERS' PROFICIENCY USING LEARNERS' SHADOWINGS AND NATIVE LISTENERS' RESPONSIVE SHADOWINGS
Authors: Suguru Kabashima, Yuusuke Inoue, Daisuke Saito, Nobuaki Minematsu (The University of Tokyo)
Abstract: This paper investigates DNN-based scoring techniques applied to two tasks in foreign language education. The first is a conventional task: predicting a language learner's overall proficiency in oral communication by automatically assessing the learner's shadowing utterances. The second is a novel task: predicting the intelligibility or comprehensibility of a learner's pronunciation by assessing native listeners' responsive shadowings. Similar technical frameworks are tested for both tasks: DNN-based phoneme posteriors, posteriorgram-based DTW scores, ASR-based accuracies, shadowing latencies, etc. are used to train regression models that predict manually rated scores. Experiments show that, in both tasks, the correlation between the DNN-based predicted scores and the averaged human scores is higher than, or at least comparable to, the average correlation among the human raters' scores. This clearly indicates that the proposed automatic rating module can be introduced into language education as another human rater.
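The evaluation pipeline the abstract describes — distance features such as posteriorgram-based DTW scores plus latency features feeding a regression model, judged by Pearson correlation against averaged human ratings — can be sketched on toy data. Everything below (the synthetic posteriorgrams, the feature set, the rating formula) is illustrative and not the authors' implementation:

```python
import numpy as np

def dtw_distance(a, b):
    """Length-normalized DTW alignment cost between two posteriorgram
    sequences a (T1, D) and b (T2, D); rows are per-frame posteriors."""
    t1, t2 = len(a), len(b)
    cost = np.full((t1 + 1, t2 + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, t1 + 1):
        for j in range(1, t2 + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-level distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[t1, t2] / (t1 + t2)

rng = np.random.default_rng(0)

def posteriorgram(t, d=5):
    """Toy stand-in for DNN phoneme posteriors: t frames over d classes."""
    logits = rng.normal(size=(t, d))
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

# Toy corpus: 40 utterances compared against one reference posteriorgram.
reference = posteriorgram(20)
features, scores = [], []
for _ in range(40):
    utt = dtw_distance(posteriorgram(int(rng.integers(15, 25))), reference)
    latency = rng.uniform(0.1, 1.0)       # stand-in for shadowing latency
    features.append([utt, latency, 1.0])  # bias term for the linear model
    # Synthetic "manual rating" driven mostly by the DTW score.
    scores.append(5.0 - 10.0 * utt + rng.normal(scale=0.1))

X, y = np.array(features), np.array(scores)
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # fit the regression model
pred = X @ w
r = np.corrcoef(pred, y)[0, 1]  # Pearson correlation, as in the paper's evaluation
print(f"correlation of predicted vs. rated scores: {r:.2f}")
```

On real data the features would come from DNN posteriors of shadowing (or responsive-shadowing) speech, and the target would be the averaged human proficiency or comprehensibility ratings; the correlation `r` would then be compared against the average inter-rater correlation.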