Paper ID | AUD-22.4
Paper Title | MULTI-VIEW AUDIO AND MUSIC CLASSIFICATION
Authors | Huy Phan, Queen Mary University of London, United Kingdom; Huy Le Nguyen, HCM City University of Technology, Vietnam; Oliver Chén, University of Oxford, United Kingdom; Lam Pham, University of Surrey, United Kingdom; Philipp Koch, University of Lübeck, Germany; Ian McLoughlin, Singapore Institute of Technology, Singapore; Alfred Mertins, University of Lübeck, Germany
Session | AUD-22: Detection and Classification of Acoustic Scenes and Events 3: Multimodal Scenes and Events
Location | Gather.Town
Session Time | Thursday, 10 June, 15:30 - 16:15
Presentation Time | Thursday, 10 June, 15:30 - 16:15
Presentation | Poster
Topic | Audio and Acoustic Signal Processing: [AUD-CLAS] Detection and Classification of Acoustic Scenes and Events
Abstract | We propose in this work a multi-view learning approach for audio and music classification. Considering four typical low-level representations (i.e. different views) commonly used for audio and music recognition tasks, the proposed multi-view network consists of four subnetworks, each handling one input type. The embeddings learned by the subnetworks are then concatenated to form the multi-view embedding for classification, similar to a simple concatenation network. However, apart from the joint classification branch, the network also maintains four classification branches on the single-view embeddings of the subnetworks. A novel method is then proposed to keep track of the learning behavior of the classification branches and adapt their weights to proportionally blend their gradients for network training. The weights are adapted such that learning on a branch that is generalizing well is encouraged, whereas learning on a branch that is overfitting is slowed down. Experiments on three different audio and music classification tasks show that the proposed multi-view network not only outperforms the single-view baselines but is also superior to the multi-view baselines based on concatenation and late fusion.
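The following is a minimal PyTorch sketch of the architecture the abstract describes: one subnetwork per view, a joint classification branch on the concatenated multi-view embedding, and four single-view classification branches whose losses are blended with adaptive weights. The subnetwork bodies, dimensions, and the weight-adaptation rule hinted at in the comments are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of the multi-view network described in the abstract.
# Layer choices, dimensions, and the branch-weighting rule are assumptions.
import torch
import torch.nn as nn

class MultiViewNet(nn.Module):
    def __init__(self, num_views=4, in_dim=128, emb_dim=64, num_classes=10):
        super().__init__()
        # One subnetwork per low-level input representation (view).
        self.subnets = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU())
            for _ in range(num_views)
        )
        # Single-view classification branches, one per subnetwork.
        self.view_heads = nn.ModuleList(
            nn.Linear(emb_dim, num_classes) for _ in range(num_views)
        )
        # Joint classification branch on the concatenated embedding.
        self.joint_head = nn.Linear(num_views * emb_dim, num_classes)

    def forward(self, views):
        # `views` is a list of tensors, one per input representation.
        embeddings = [net(x) for net, x in zip(self.subnets, views)]
        view_logits = [head(e) for head, e in zip(self.view_heads, embeddings)]
        joint_logits = self.joint_head(torch.cat(embeddings, dim=-1))
        return joint_logits, view_logits

def blended_loss(joint_logits, view_logits, targets, weights):
    # Blend the joint loss with the weighted single-view branch losses.
    # `weights` would be adapted during training: raised for a branch that
    # is generalizing well, lowered for one that is overfitting
    # (a hypothetical realization of the adaptation the abstract describes).
    ce = nn.functional.cross_entropy
    loss = ce(joint_logits, targets)
    for w, logits in zip(weights, view_logits):
        loss = loss + w * ce(logits, targets)
    return loss

# Usage (shapes are illustrative):
# views = [torch.randn(8, 128) for _ in range(4)]
# targets = torch.randint(0, 10, (8,))
# model = MultiViewNet()
# joint, per_view = model(views)
# loss = blended_loss(joint, per_view, targets, weights=[0.25] * 4)
```

In practice, the branch weights could be re-estimated periodically, for instance from the gap between a branch's training and validation losses, so that gradients from overfitting branches contribute less to the shared subnetworks.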