IEEE ICASSP 2022

2022 IEEE International Conference on Acoustics, Speech and Signal Processing

7-13 May 2022
  • Virtual (all paper presentations)
22-27 May 2022
  • Main Venue: Marina Bay Sands Expo & Convention Center, Singapore
27-28 October 2022
  • Satellite Venue: Crowne Plaza Shenzhen Longgang City Centre, Shenzhen, China

IEP-12: Holistic Adversarial Robustness of Deep Learning
Wed, 11 May, 22:00 - 22:45 China Time (UTC +8)
Wed, 11 May, 14:00 - 14:45 UTC
Location: Gather Area P (Gather.Town, Virtual)
Session Type: Industry Expert Presentation
Presented by: Pin-Yu Chen, IBM Research

1. Overview and technical contents

Despite achieving high standard accuracy on a variety of machine learning tasks, deep learning models built upon neural networks have recently been shown to lack adversarial robustness. The decisions of well-trained deep learning models can be easily falsified and manipulated, raising ever-increasing concerns in safety-critical and security-sensitive applications that require certified robustness and guaranteed reliability. In recent years, there has been a surge of interest in understanding and strengthening the adversarial robustness of AI models across the phases of their life cycle, including data collection, model training, model deployment (inference), and system-level (software + hardware) vulnerabilities, giving rise to different robustness factors and threat assessment schemes.

This presentation will provide an overview of recent advances in adversarial robustness research together with industrial perspectives, featuring both comprehensive research topics and technical depth. We will cover three fundamental pillars of adversarial robustness: attack, defense, and verification. Attack refers to the efficient generation of adversarial examples or poisoned data samples for robustness assessment under different attack assumptions (e.g., white-box vs. black-box attacks, prediction evasion vs. model stealing). Defense refers to adversary detection and robust training algorithms that enhance model robustness. Verification refers to attack-agnostic metrics and certification algorithms for the proper evaluation and standardization of adversarial robustness. For each pillar, we will emphasize the tight connection between signal processing techniques and adversarial robustness, ranging from fundamental techniques such as first-order and zeroth-order optimization, minimax optimization, geometric analysis, model compression, data filtering and quantization, subspace analysis, active sampling, and frequency-component analysis, to specific applications such as computer vision, automatic speech recognition, natural language processing, and data regression. Furthermore, we will also cover new applications originating from adversarial robustness research, such as data-efficient transfer learning and model watermarking and fingerprinting.
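To make the attack pillar and its signal processing connection concrete, below is a minimal sketch of a black-box evasion step driven by zeroth-order (gradient-free) optimization, where the input gradient is estimated from model queries alone. The toy model, dummy data, query budget, and step sizes are illustrative assumptions rather than material from the presentation.

```python
# A minimal sketch of a black-box evasion attack step using zeroth-order
# optimization: the input gradient is estimated purely from model queries
# via random-direction finite differences (no backpropagation).
import torch
import torch.nn as nn

def zeroth_order_grad(loss_fn, x, num_queries=50, mu=1e-3):
    """Estimate d loss / d x from function evaluations only."""
    grad_est = torch.zeros_like(x)
    f0 = loss_fn(x)
    for _ in range(num_queries):
        u = torch.randn_like(x)             # random probing direction
        f1 = loss_fn(x + mu * u)            # one extra model query
        grad_est += (f1 - f0) / mu * u      # forward-difference estimate
    return grad_est / num_queries

if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
    x = torch.rand(1, 1, 28, 28)                                 # dummy input in [0, 1]
    y = torch.tensor([3])                                        # dummy label

    # The attacker only observes losses/scores, never internal gradients.
    loss_fn = lambda inp: nn.functional.cross_entropy(model(inp), y)
    with torch.no_grad():
        g = zeroth_order_grad(loss_fn, x)
        # Sign step with the estimated gradient, clipped to the valid data range.
        x_adv = (x + 0.1 * g.sign()).clamp(0.0, 1.0)
    print("clean vs. adversarial prediction:",
          model(x).argmax(dim=1).item(), model(x_adv).argmax(dim=1).item())
```

A white-box attacker would replace the query-based estimate with the exact input gradient obtained by backpropagation, which corresponds to the first-order setting mentioned above.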

2. Relevance and attractiveness to ICASSP

Many topics in adversarial robustness are closely related to signal processing methods and techniques, such as (adversary) detection, sparse signal processing, data recovery, and robust machine learning and signal processing. The presentation covers both advanced research topics and open-source libraries (e.g., the IBM Adversarial Robustness Toolbox), making it suitable for ICASSP attendees, including both researchers and practitioners.
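As a pointer to the tooling mentioned above, the following sketch shows one common usage pattern of the open-source Adversarial Robustness Toolbox: wrap a model in an ART estimator, run a built-in evasion attack, and compare clean versus adversarial accuracy. The toy model, random data, and eps value are illustrative assumptions, and exact class signatures may vary across ART releases.

```python
# A minimal sketch, assuming a recent release of the Adversarial Robustness
# Toolbox is installed (pip install adversarial-robustness-toolbox).
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Wrap a (toy) PyTorch model so ART attacks and defenses can operate on it.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Generate adversarial examples with a built-in evasion attack on dummy data.
x_test = np.random.rand(8, 1, 28, 28).astype(np.float32)
y_test = np.random.randint(0, 10, size=8)
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)

# Compare accuracy on clean versus adversarially perturbed inputs.
clean_acc = (classifier.predict(x_test).argmax(axis=1) == y_test).mean()
adv_acc = (classifier.predict(x_adv).argmax(axis=1) == y_test).mean()
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```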

3. Novelty, inspirations, and motivations to the audience

a) Help the audience quickly grasp the research progress and existing tools in the fast-growing field of adversarial robustness
b) Provide gateways for interested researchers with signal processing backgrounds to contribute to this research field and expand the impact of signal processing
c) Create synergies between signal processing and adversarial robustness to identify and solve challenging tasks in adversarial robustness and deep learning
d) Offer unique perspectives from industrial researchers studying trustworthy machine learning

4. References (videos on this topic)

a) IBM Research YouTube: https://youtu.be/9B2jKXGUZtc
b) MLSS 2021: https://youtu.be/rrQi86VQiuc
c) CVPR 2021 tutorial: https://youtu.be/ZmkU1YO4X7U

Biography

Dr. Pin-Yu Chen is a research staff member at the IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. He is also the chief scientist of the RPI-IBM AI Research Collaboration and PI of ongoing MIT-IBM Watson AI Lab projects. Dr. Chen received his Ph.D. degree in electrical engineering and computer science from the University of Michigan, Ann Arbor, USA, in 2016. His recent research focuses on adversarial machine learning and the robustness of neural networks, and his long-term research vision is to build trustworthy machine learning systems. At IBM Research, he has received the honor of IBM Master Inventor and several research accomplishment awards, including an IBM Corporate Technical Award in 2021. His research contributes to IBM open-source libraries, including the Adversarial Robustness Toolbox (ART 360) and AI Explainability 360 (AIX 360). He has published more than 40 papers on trustworthy machine learning at major AI and machine learning conferences, given tutorials at AAAI’22, IJCAI’21, CVPR (’20, ’21), ECCV’20, ICASSP’20, KDD’19, and Big Data’18, and organized several workshops on adversarial machine learning. He received a NeurIPS 2017 Best Reviewer Award and the IEEE GLOBECOM 2010 GOLD Best Paper Award. More details can be found at his personal website: www.pinyuchen.com.