Presentation #: 2
Session: Detection, Paralinguistics and Coding
Location: Kallirhoe Hall
Session Time: Wednesday, December 19, 13:30 - 15:30
Presentation Time: Wednesday, December 19, 13:30 - 15:30
Presentation: Poster
Topic: Speaker/language recognition
Paper Title: Analysing the predictions of a CNN-based replay spoofing detection system
Authors: Bhusan Chettri, Saumitra Mishra, Queen Mary University of London, United Kingdom; Bob L. Sturm, KTH Royal Institute of Technology, Sweden; Emmanouil Benetos, Queen Mary University of London, United Kingdom
Abstract: Playing recorded speech samples of an enrolled speaker – a "replay attack" – is a simple approach to bypassing an automatic speaker verification (ASV) system. The vulnerability of ASV systems to such attacks has been acknowledged and studied, but there has been no research into what spoofing detection systems are actually learning to discriminate. In this paper, we analyse the local behaviour of a replay spoofing detection system based on convolutional neural networks (CNN), adapted from a state-of-the-art CNN (LCNN_FFT) submitted to the ASVspoof 2017 challenge. We generate temporal and spectral explanations for the model's predictions using the SLIME algorithm. Our findings suggest that in most spoofed instances the model uses information in the first 400 milliseconds of each audio instance to make the class prediction. Knowledge of the characteristics that spoofing detection systems exploit can help build less vulnerable ASV systems, other spoofing detection systems, and better evaluation databases.
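The abstract's temporal explanations attribute a prediction to time segments of the input. As an illustration only (not the paper's actual SLIME implementation, which fits a LIME-style linear surrogate over interpretable segments), a minimal occlusion-based sketch of the same idea follows; `dummy_model` and all parameter values are hypothetical stand-ins:

```python
import numpy as np

# Hypothetical stand-in for the trained CNN: any callable mapping a
# (freq_bins, time_frames) spectrogram to a spoof-class score.
def dummy_model(spectrogram):
    # Toy scorer that responds only to energy in the first 10 frames,
    # loosely mimicking the paper's finding that early audio content
    # (first ~400 ms) drives the prediction.
    return float(spectrogram[:, :10].mean())

def temporal_explanation(spectrogram, model, n_segments=10):
    """Simplified temporal attribution: occlude each time segment in
    turn and record how much the model's score drops. Larger drops
    mean the segment mattered more to the prediction."""
    n_frames = spectrogram.shape[1]
    bounds = np.linspace(0, n_frames, n_segments + 1, dtype=int)
    base_score = model(spectrogram)
    importances = []
    for i in range(n_segments):
        occluded = spectrogram.copy()
        occluded[:, bounds[i]:bounds[i + 1]] = 0.0  # silence one segment
        importances.append(base_score - model(occluded))
    return importances

rng = np.random.default_rng(0)
spec = rng.random((257, 100))  # e.g. 257 frequency bins x 100 frames
scores = temporal_explanation(spec, dummy_model)
# With the toy scorer, only the first segment (frames 0-9) matters.
print(int(np.argmax(scores)))  # → 0
```

Under the stated assumptions, the segment whose occlusion causes the largest score drop is reported as most important, which is the kind of evidence the paper uses to localise decisions to the first 400 ms.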