Paper ID | MLR-APPL-IVSMR-2.1
Paper Title | Hard Samples Rectification for Unsupervised Cross-Domain Person Re-Identification
Authors | Chih-Ting Liu, Man-Yu Lee, Tsai-Shien Chen, Shao-Yi Chien, National Taiwan University, Taiwan
Session | MLR-APPL-IVSMR-2: Machine learning for image and video sensing, modeling and representation 2
Location | Area D
Session Time | Tuesday, 21 September, 15:30 - 17:00
Presentation Time | Tuesday, 21 September, 15:30 - 17:00
Presentation | Poster
Topic | Applications of Machine Learning: Machine learning for image & video sensing, modeling, and representation
IEEE Xplore Open Preview | Available
Abstract | Person re-identification (re-ID) has achieved great success with supervised learning methods, but unsupervised cross-domain re-ID remains challenging. In this paper, we propose a Hard Samples Rectification (HSR) learning scheme that addresses a weakness of clustering-based methods: their vulnerability to hard positive and hard negative samples in the unlabelled target dataset. HSR consists of two parts: an inter-camera mining method that helps the model recognize the same person under different camera views (hard positives), and a part-based homogeneity technique that makes the model discriminate between different persons with similar appearance (hard negatives). By rectifying these two hard cases, the re-ID model learns effectively and achieves promising results on two large-scale benchmarks.
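The clustering-based pipeline that HSR builds on assigns pseudo-labels to unlabelled target-domain images by clustering their feature embeddings, then fine-tunes the re-ID model on those labels; the hard positives and negatives the abstract mentions are exactly the samples that corrupt such clusters. Below is a minimal, illustrative sketch of that pseudo-labeling step using DBSCAN — the function name, `eps` value, and toy features are assumptions for demonstration, not the authors' implementation:

```python
# Illustrative sketch of clustering-based pseudo-labeling for unsupervised
# re-ID (not the authors' code). Assumes L2-normalized embeddings from a
# re-ID backbone; noise points (label -1) are typically discarded before
# fine-tuning on the pseudo-labels.
import numpy as np
from sklearn.cluster import DBSCAN

def assign_pseudo_labels(features: np.ndarray, eps: float = 0.3) -> np.ndarray:
    """Cluster target-domain features; each cluster id acts as a pseudo-identity."""
    return DBSCAN(eps=eps, min_samples=2, metric="euclidean").fit_predict(features)

# Toy example: two tight groups of unit vectors stand in for two identities.
feats = np.array([[1.0, 0.0], [0.99, 0.14], [0.0, 1.0], [0.14, 0.99]])
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
labels = assign_pseudo_labels(feats)
```

A hard positive (the same person seen from a different camera) tends to fall outside `eps` of its true cluster and be split off, while a hard negative (a different, similar-looking person) tends to fall inside `eps` and be merged in — the two failure modes HSR's inter-camera mining and part-based homogeneity are designed to rectify.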