
IEEE ICASSP 2022

2022 IEEE International Conference on Acoustics, Speech and Signal Processing

7-13 May 2022
  • Virtual (all paper presentations)
22-27 May 2022
  • Main Venue: Marina Bay Sands Expo & Convention Center, Singapore
27-28 October 2022
  • Satellite Venue: Crowne Plaza Shenzhen Longgang City Centre, Shenzhen, China

IEP-1: When signal processing meets user experience: how to turn a regular user into an audio systems engineer in 60 seconds
Sun, 8 May, 20:00 - 20:45 China Time (UTC +8)
Sun, 8 May, 12:00 - 12:45 UTC
Location: Gather Area P
Virtual
Gather.Town
Expert
Presented by: Adib Mehrabi, Sonos, Inc.

Digital audio signal processing technology is often implemented so that it requires little or no user interaction to function as intended. The user of a teleconferencing device or music playback system is often unaware of the acoustic echo cancellation, noise suppression, speech enhancement, or various limiters and equalizers being applied to improve the quality of the audio. Sometimes, however, the user is ideally placed to provide inputs or measurements that will improve the system's performance, or indeed enable it to function at all. This is where DSP meets user experience, presenting new and sometimes challenging considerations for both the signal processing methods and the interaction design.

In this presentation I will discuss the development and design of a feature called Trueplay, which exists on all Sonos products today. Trueplay is a user-facing audio feature used to adapt the sound of our speakers to the listening environment. Early in the design of Trueplay, it was recognised that in order to estimate how a loudspeaker sounds in a room, there really is no substitute for in-room measurements at multiple locations, including measurements made away from the speaker itself (i.e. not only from on-board microphones). This raised the question of whether it would be possible to get a regular user to make a room-average acoustic measurement in their home (something that would normally be performed by acoustics or audio systems engineers) whilst maintaining the simplicity and quality of the Sonos user experience. I will discuss how the entire process, including the measurement method, stimulus tones, user guidance, and feedback, was informed by balancing the objectives of premium sound quality with human-centric user design to achieve not only a performant result, but also a pleasant, and perhaps even magical, end-user experience.
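
To make the multi-position measurement idea concrete, below is a minimal, hypothetical Python sketch: magnitude responses measured at several listening positions are power-averaged, and a gain-limited EQ correction toward a flat target is derived from the average. All function names, parameters, and limits here are illustrative assumptions for the general technique; this is not the actual Trueplay algorithm, whose measurement method and tuning targets the talk describes separately.

    # Hypothetical sketch: room-average response and a clamped EQ correction.
    # Not the Trueplay implementation; names and limits are illustrative.
    import numpy as np

    def room_average_response(impulse_responses, n_fft=4096):
        """Power-average (RMS) the magnitude responses of per-position IRs."""
        mags = [np.abs(np.fft.rfft(ir, n_fft)) for ir in impulse_responses]
        return np.sqrt(np.mean(np.square(mags), axis=0))

    def eq_correction(avg_mag, max_boost_db=6.0, max_cut_db=12.0, eps=1e-9):
        """Correction toward a flat target, clamped to safe gain limits."""
        correction_db = -20.0 * np.log10(avg_mag + eps)
        correction_db -= np.median(correction_db)   # ignore overall level
        return np.clip(correction_db, -max_cut_db, max_boost_db)

    # Example: three simulated positions (decaying noise stands in for the
    # room measurements a user would capture while moving around the room).
    rng = np.random.default_rng(0)
    irs = [rng.standard_normal(2048) * np.exp(-np.arange(2048) / 500.0)
           for _ in range(3)]
    avg = room_average_response(irs)
    print(np.round(eq_correction(avg)[:8], 2))  # first few gains in dB

Averaging in the power domain, rather than averaging correction filters per position, is one common way to avoid over-correcting position-specific notches that a single-point measurement would exaggerate.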

Biography

Adib Mehrabi is a Senior Manager in the Advanced Technology group at Sonos, Inc., and an Honorary Lecturer at Queen Mary University of London, UK. He received his PhD from the Centre for Digital Music at Queen Mary University of London, UK, and a BSc in Audio Engineering from the University of the West of England, UK. Prior to working at Sonos, Adib was Head of Research at Chirp, a company that developed audio signal processing and machine learning methods for transmitting data between devices using sound. He currently leads the Advanced Rendering and Immersive Audio research group at Sonos, Inc.