
IEEE ICASSP 2022

2022 IEEE International Conference on Acoustics, Speech and Signal Processing

7-13 May 2022
  • Virtual (all paper presentations)
22-27 May 2022
  • Main Venue: Marina Bay Sands Expo & Convention Center, Singapore
27-28 October 2022
  • Satellite Venue: Crowne Plaza Shenzhen Longgang City Centre, Shenzhen, China

IEP-6: Speaker Voice Verification at the Edge
Mon, 9 May, 22:00 - 22:45 China Time (UTC +8)
Mon, 9 May, 14:00 - 14:45 UTC
Location: Gather Area P
Virtual
Gather.Town
Expert
Presented by: Jennifer Williams, MyVoice AI

The age of voice computing promises to open up exciting new opportunities that once seemed possible only in science-fiction movies. But it also brings new challenges and implications for privacy and security, as well as accessibility and diversity. MyVoice AI specializes in speech technology that can run inference at "the edge": on very small, low-resource devices that are not connected to the cloud. These environments require deep neural networks optimized for ultra-low power and ultra-low memory, capable of running directly on a chip or a battery-powered device. A familiar example of ultra-low-power speech technology is the "wake word" detection used by Alexa, Siri, Google, and others. While wake word technology has already gone to market, there are many other opportunities for important speech technology innovation at the edge. MyVoice AI is developing several of these technologies, with a focus on speaker verification. Speaker verification at the edge means that user data is never transferred off the device, enhancing user privacy. This technology enables smart devices to respond only to authorized users, for example unlocking a car door or accessing personalized settings with a remote control.
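To make the idea concrete, here is a minimal, purely illustrative sketch of on-device speaker verification (not MyVoice AI's implementation): a fixed-size speaker embedding extracted from the user's speech is compared against an enrolled template, and access is granted only if the similarity clears a threshold. In a real edge system the embeddings would come from a compact neural network running on the chip; here they are given as plain vectors, and the threshold value is an assumption.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(enrolled, candidate, threshold=0.7):
    """Accept the speaker only if similarity exceeds the threshold.
    All computation stays on the device; no audio or embedding leaves it."""
    return cosine_similarity(enrolled, candidate) >= threshold

# Toy vectors: a close match is accepted, a dissimilar one is rejected.
enrolled = [0.9, 0.1, 0.4]
same_speaker = [0.85, 0.15, 0.38]
impostor = [-0.2, 0.9, -0.5]
print(verify(enrolled, same_speaker))  # True
print(verify(enrolled, impostor))      # False
```

Keeping both the template and the comparison on the device is what provides the privacy property described above: nothing biometric ever needs to reach the cloud.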

This presentation will give an overview of speech signal processing applications at the edge and describe some of the engineering challenges involved in creating solutions that operate in ultra-low-resource environments. We will discuss techniques for achieving state-of-the-art performance despite using smaller, more compact neural networks. We will also highlight the need for new standards and testing protocols, to be developed by the signal processing community, to streamline innovation in this area. We will present our vision for how speech signal processing at the edge will transform the everyday lives of consumers while enhancing privacy and accessibility.

MyVoice AI is a privately held company and a pioneer and leader in conversational AI. MyVoice AI is building the most secure end-to-end voice intelligence platform using advanced machine learning technologies. MyVoice AI licenses software and services to bring speaker verification to the edge, enabling a more seamless and privacy-enhanced authentication experience. We specialize in state-of-the-art deep neural network and deep learning techniques, delivering the world's smallest-footprint, most power-efficient training and inference engines. Our customers include financial institutions and edge AI embedded platform leaders.

Biography

Dr. Jennifer Williams is an internationally recognized innovator in speech technology. She has more than a decade of experience developing speech and text applications. She spent five years on staff at MIT Lincoln Laboratory as a US Department of Defense civilian contractor. She holds a PhD in data science from the University of Edinburgh, with a specialization in speech processing using deep learning. She is a committee member of the ISCA PECRAC and helps organize events for the ISCA speech privacy and security special interest group. Dr. Williams also serves as a reviewer for numerous speech and language conferences.