Paper ID | AUD-19.4
Paper Title | UNIFIED GRADIENT REWEIGHTING FOR MODEL BIASING WITH APPLICATIONS TO SOURCE SEPARATION
Authors | Efthymios Tzinis, University of Illinois at Urbana-Champaign, United States; Dimitrios Bralios, University of Illinois at Urbana-Champaign, National Technical University of Athens, United States; Paris Smaragdis, University of Illinois at Urbana-Champaign, Adobe Research, United States
Session | AUD-19: Audio and Speech Source Separation 6: Topics in Source Separation
Location | Gather.Town
Session Time | Thursday, 10 June, 13:00 - 13:45
Presentation Time | Thursday, 10 June, 13:00 - 13:45
Presentation | Poster
Topic | Audio and Acoustic Signal Processing: [AUD-SEP] Audio and Speech Source Separation
Abstract | Recent deep learning approaches have shown great improvement in audio source separation tasks. However, the vast majority of such work focuses on improving average separation performance, often neglecting to examine or control the distribution of the results. In this paper, we propose a simple, unified gradient reweighting scheme, requiring only a lightweight modification, that biases the learning process of a model and steers it towards a certain distribution of results. More specifically, we reweight the gradient updates of each batch using a user-specified probability distribution. We apply this method to various source separation tasks in order to shift the operating point of the models towards different objectives. We demonstrate that different parameterizations of our unified reweighting scheme can be used to address several real-world problems, such as unreliable separation estimates. Our framework enables the user to control a robustness trade-off between worst-case and average performance. Moreover, we show experimentally that our unified reweighting scheme can also shift the focus of the model towards user-specified sound classes, or towards easier examples in order to enable faster convergence.
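The core idea the abstract describes — averaging per-example gradients under a user-specified probability distribution over the batch instead of uniformly — can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's implementation: the function name, the `alpha` parameter, and the softmax-over-loss choice of weighting distribution are all assumptions made for the example.

```python
import numpy as np

def reweighted_grad(per_example_grads, per_example_losses, alpha=0.0):
    """Combine per-example gradients under a user-specified batch distribution.

    Illustrative weighting: a softmax over (alpha * loss). alpha > 0 emphasizes
    hard (high-loss) examples, alpha < 0 emphasizes easy ones, and alpha = 0
    recovers the ordinary uniform batch average. (`alpha` is a parameter
    invented for this sketch, not the paper's notation.)
    """
    w = np.exp(alpha * (per_example_losses - per_example_losses.max()))
    w /= w.sum()  # weights now form a probability distribution over the batch
    return (w[:, None] * per_example_grads).sum(axis=0)

# Toy batch: gradients and losses for 4 examples of a 2-parameter model.
grads = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [2.0, 2.0],
                  [4.0, 0.0]])
losses = np.array([0.1, 0.2, 1.0, 3.0])

g_uniform = reweighted_grad(grads, losses, alpha=0.0)   # plain batch average
g_worst = reweighted_grad(grads, losses, alpha=10.0)    # dominated by hardest example
```

With `alpha=0.0` the update equals the usual mean gradient, while a large positive `alpha` concentrates nearly all the weight on the worst-performing example, biasing training towards the worst-case end of the robustness trade-off the abstract mentions.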