Paper ID | MLSP-5.3
Paper Title | ADVERSARIAL ATTACKS ON COARSE-TO-FINE CLASSIFIERS
Authors | Ismail Alkhouri, George Atia, University of Central Florida, United States
Session | MLSP-5: Machine Learning for Classification Applications 2
Location | Gather.Town
Session Time | Tuesday, 08 June, 14:00 - 14:45
Presentation Time | Tuesday, 08 June, 14:00 - 14:45
Presentation | Poster
Topic | Machine Learning for Signal Processing: [MLR-PRCL] Pattern recognition and classification
Abstract | Adversarial attacks have exposed the vulnerability of one-stage classifiers to carefully crafted perturbations that drastically alter their predictions while remaining imperceptible. In this paper, we examine the susceptibility of coarse-to-fine hierarchical classifiers to such attacks. We formulate convex programs to generate perturbations that attack these models and propose a generic solution based on the Alternating Direction Method of Multipliers (ADMM). We evaluate the performance of the proposed attacks in terms of the degradation in classification accuracy and imperceptibility measures, in comparison with perturbations generated to fool one-stage classifiers.
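To illustrate the kind of ADMM-based solution the abstract refers to, below is a minimal sketch of ADMM applied to a convex perturbation program. It assumes the misclassification condition has been linearized into a single half-space constraint a^T(x + delta) >= c; the function name `admm_attack` and all parameter values are hypothetical, and the paper's actual formulation for coarse-to-fine classifiers is not reproduced here.

```python
# Sketch: minimum-norm perturbation via ADMM under a linearized attack constraint.
# Splitting:  minimize (1/2)||delta||^2 + I_C(z)  subject to  delta = z,
# where C = {z : a^T z >= c} encodes the (assumed) linearized fooling condition.

import numpy as np

def admm_attack(a, c, rho=1.0, iters=200):
    """Return a small perturbation delta satisfying a^T delta >= c."""
    n = a.size
    delta = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    for _ in range(iters):
        # delta-update: argmin (1/2)||delta||^2 + (rho/2)||delta - z + u||^2
        delta = rho * (z - u) / (1.0 + rho)
        # z-update: project delta + u onto the half-space {z : a^T z >= c}
        v = delta + u
        gap = c - a @ v
        z = v + max(gap, 0.0) / (a @ a) * a
        # dual update on the consensus constraint delta = z
        u = u + delta - z
    return delta

# Usage with hypothetical numbers: a plays the role of the gradient of a
# classifier margin w.r.t. the input, c the margin the perturbation must overcome.
rng = np.random.default_rng(0)
a = rng.standard_normal(10)
c = 1.5
delta = admm_attack(a, c)
print(a @ delta, np.linalg.norm(delta))  # constraint value and perturbation size
```

The variable splitting isolates the quadratic objective from the constraint set, so each ADMM step has a closed form: a scaled averaging for the delta-update and a Euclidean projection onto a half-space for the z-update. A hierarchical (coarse-to-fine) attack would replace the single constraint with one per stage of the classifier.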