Paper ID | AUD-33.4 |
Paper Title | U-CONVOLUTION BASED RESIDUAL ECHO SUPPRESSION WITH MULTIPLE ENCODERS |
Authors | Eesung Kim, Jae-Jin Jeon, Hyeji Seo, Kakao Enterprise, South Korea |
Session | AUD-33: Topics in Deep Learning for Speech and Audio |
Location | Gather.Town |
Session Time | Friday, 11 June, 14:00 - 14:45 |
Presentation Time | Friday, 11 June, 14:00 - 14:45 |
Presentation | Poster |
Topic | Audio and Acoustic Signal Processing: [AUD-NEFR] Active Noise Control, Echo Reduction and Feedback Reduction |
Abstract | In this paper, we propose an efficient end-to-end neural network that estimates near-end speech with a U-convolution block, exploiting various signals to achieve residual echo suppression (RES). Specifically, the proposed model employs multiple encoders and an integration block to utilize the complete signal information available in an acoustic echo cancellation system, and applies U-convolution blocks to separate near-end speech efficiently. Compared with the baselines, the proposed network improves perceptual evaluation of speech quality (PESQ) and short-time objective intelligibility (STOI) in scenarios involving smart audio devices. The experimental results show that the proposed method outperforms the baselines under various mismatched background-noise and reverberation conditions, while requiring low computational resources. |
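The abstract outlines an architecture built from one encoder per signal in the echo-cancellation system, an integration block that fuses the encoded streams, and stacked U-convolution blocks that estimate the near-end speech. The following is a minimal, hypothetical PyTorch sketch of that kind of design, assuming three input signals (microphone, far-end reference, AEC output), 1-D convolutional encoders, a 1x1-convolution integration stage, and a mask-based decoder; all layer sizes and the names UConvBlock and MultiEncoderRES are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch of a multi-encoder RES network with U-convolution blocks.
# Signal choices, layer sizes, and the masking strategy are assumptions for
# illustration only; they do not reproduce the paper's actual model.
import torch
import torch.nn as nn


class UConvBlock(nn.Module):
    """Tiny U-shaped 1-D conv block: downsample, bottleneck, upsample with a skip."""
    def __init__(self, channels: int):
        super().__init__()
        self.down = nn.Conv1d(channels, channels * 2, kernel_size=8, stride=4, padding=2)
        self.bottleneck = nn.Conv1d(channels * 2, channels * 2, kernel_size=3, padding=1)
        self.up = nn.ConvTranspose1d(channels * 2, channels, kernel_size=8, stride=4, padding=2)
        self.act = nn.ReLU()

    def forward(self, x):
        skip = x
        h = self.act(self.down(x))
        h = self.act(self.bottleneck(h))
        h = self.up(h, output_size=skip.shape)   # match the input's temporal length
        return self.act(h + skip)                # residual/skip connection


class MultiEncoderRES(nn.Module):
    """Multiple encoders + integration block + stacked U-conv blocks (illustrative)."""
    def __init__(self, channels: int = 64, num_ublocks: int = 2):
        super().__init__()
        # One learned encoder per available AEC signal (mic, far-end, AEC output).
        self.encoders = nn.ModuleList(
            nn.Conv1d(1, channels, kernel_size=16, stride=8, padding=4) for _ in range(3)
        )
        # Integration block: fuse the three encoded streams with a 1x1 convolution.
        self.integrate = nn.Conv1d(3 * channels, channels, kernel_size=1)
        self.ublocks = nn.Sequential(*[UConvBlock(channels) for _ in range(num_ublocks)])
        # Estimate a mask over the encoded AEC output, then decode back to waveform.
        self.mask = nn.Conv1d(channels, channels, kernel_size=1)
        self.decoder = nn.ConvTranspose1d(channels, 1, kernel_size=16, stride=8, padding=4)

    def forward(self, mic, far_end, aec_out):
        feats = [enc(sig) for enc, sig in zip(self.encoders, (mic, far_end, aec_out))]
        fused = torch.relu(self.integrate(torch.cat(feats, dim=1)))
        h = self.ublocks(fused)
        masked = torch.sigmoid(self.mask(h)) * feats[2]   # suppress residual echo
        return self.decoder(masked)                       # estimated near-end speech


if __name__ == "__main__":
    x = torch.randn(2, 1, 16000)          # 1 s of 16 kHz audio, batch of 2
    model = MultiEncoderRES()
    est_near_end = model(x, x, x)
    print(est_near_end.shape)             # torch.Size([2, 1, 16000])
```

In this sketch the mask is applied to the encoded AEC output rather than the raw microphone features, reflecting the RES setting where a linear canceller has already removed most of the echo; whether the paper masks, regresses the waveform directly, or uses a different target is not stated in the abstract.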