| Paper ID | IVMSP-9.2 |
| Paper Title | REPRESENTATIVE LOCAL FEATURE MINING FOR FEW-SHOT LEARNING |
| Authors | Kun Yan, Peking University, China; Lingbo Liu, Sun Yat-Sen University, China; Jun Hou, Sensetime, China; Ping Wang, Peking University, China |
| Session | IVMSP-9: Zero and Few Shot Learning |
| Location | Gather.Town |
| Session Time | Wednesday, 09 June, 13:00 - 13:45 |
| Presentation Time | Wednesday, 09 June, 13:00 - 13:45 |
| Presentation | Poster |
| Topic | Image, Video, and Multidimensional Signal Processing: [IVTEC] Image & Video Processing Techniques |
| Abstract | Few-shot learning aims to recognize images of new classes from only a few training examples. While deep learning has driven great progress, most metric-based methods rely on measurements computed over global feature representations of images, which are sensitive to background factors due to the scarcity of training data. To address this, we propose a novel method that selects representative local features to facilitate few-shot learning. Specifically, we propose a "task-specific guided" strategy to mine local features that are task-specific and discriminative. For each task, we first mine representative local features from labeled images via a loss-guided mechanism. These local features are then used to guide a classifier in mining representative local features from unlabeled images. In this way, task-specific representative local features are selected for better classification. We empirically show that our method effectively alleviates the negative effects introduced by background factors. Extensive experiments on two few-shot benchmarks demonstrate the effectiveness of the proposed method. |
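The abstract describes the pipeline only at a high level. The sketch below is a minimal, hypothetical illustration of how loss-guided selection of support local features and confidence-guided selection of query local features could be implemented; the tensor shapes, `keep_ratio`, cosine-similarity prototypes, and all function names are illustrative assumptions, not the authors' actual method or code.

```python
# Hypothetical sketch of "representative local feature" selection, assuming a
# CNN backbone that yields (N, C, H, W) feature maps for an episode.
import torch
import torch.nn.functional as F


def to_local_features(feature_maps):
    """Flatten CNN feature maps (N, C, H, W) into sets of local descriptors (N, H*W, C)."""
    n, c, h, w = feature_maps.shape
    return feature_maps.view(n, c, h * w).permute(0, 2, 1)


def select_support_locals(support_maps, support_labels, n_way, keep_ratio=0.5):
    """Loss-guided selection: keep the support local features with the lowest classification loss.

    support_labels is a LongTensor of class indices in [0, n_way).
    """
    locals_ = F.normalize(to_local_features(support_maps), dim=-1)        # (Ns, L, C)
    n_s, n_loc, c = locals_.shape
    # Initial class prototypes from all local features (mean over images and locations).
    protos = torch.stack([locals_[support_labels == cls].mean(dim=(0, 1))
                          for cls in range(n_way)])                       # (way, C)
    logits = locals_.reshape(-1, c) @ F.normalize(protos, dim=-1).t()     # (Ns*L, way)
    loss = F.cross_entropy(logits, support_labels.repeat_interleave(n_loc),
                           reduction="none").view(n_s, n_loc)
    n_keep = max(1, int(keep_ratio * n_loc))
    idx = loss.topk(n_keep, dim=1, largest=False).indices                 # lowest-loss locations
    kept = torch.gather(locals_, 1, idx.unsqueeze(-1).expand(-1, -1, c))  # (Ns, n_keep, C)
    # Rebuild prototypes from the representative local features only.
    return torch.stack([kept[support_labels == cls].mean(dim=(0, 1)) for cls in range(n_way)])


def classify_queries(query_maps, protos, keep_ratio=0.5):
    """Score query local features against prototypes and keep only the most confident ones."""
    locals_ = F.normalize(to_local_features(query_maps), dim=-1)          # (Nq, L, C)
    logits = locals_ @ F.normalize(protos, dim=-1).t()                    # (Nq, L, way)
    conf = logits.max(dim=-1).values                                      # confidence per location
    n_keep = max(1, int(keep_ratio * logits.shape[1]))
    idx = conf.topk(n_keep, dim=1).indices                                # most confident locations
    kept = torch.gather(logits, 1, idx.unsqueeze(-1).expand(-1, -1, logits.shape[-1]))
    return kept.mean(dim=1)                                               # (Nq, way) image-level scores
```

In a 5-way 1-shot episode, for example, `support_maps` would be the (5, C, H, W) backbone outputs for the labeled images and `query_maps` the outputs for the unlabeled ones; `keep_ratio` controls how aggressively background locations are discarded. The actual paper's selection criterion and classifier may differ from this simplified prototype-based variant.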