
IEEE ICASSP 2022

2022 IEEE International Conference on Acoustics, Speech and Signal Processing

7-13 May 2022
  • Virtual (all paper presentations)
22-27 May 2022
  • Main Venue: Marina Bay Sands Expo & Convention Center, Singapore
27-28 October 2022
  • Satellite Venue: Crowne Plaza Shenzhen Longgang City Centre, Shenzhen, China

ICASSP 2022
EXP-8: Speech as a disease biomarker
Thu, 26 May, 17:00 - 18:00 China Time (UTC +8)
Thu, 26 May, 09:00 - 10:00 UTC
Location: Sands Ballroom E - L
In-Person
Live-Stream
Expert
¹Catarina Botelho and ²Ayimnisagul Ablimit
¹Instituto Superior Técnico, Portugal; ²Universität Bremen, Germany

Chair: Tanja Schultz, University of Bremen, Germany

Today’s overburdened health systems worldwide face numerous challenges, aggravated by increasingly aging populations. Speech emerges as a rich and ubiquitous biomarker with strong potential for the development of low-cost, widespread, and remote casual testing tools for several diseases. In fact, speech encodes information about a plethora of diseases that go beyond the so-called speech and language disorders, including neurodegenerative diseases, mood and anxiety-related disorders, and diseases that affect the respiratory organs.

Recent advances in speech processing and machine learning have enabled the automatic detection of these diseases. Despite exciting results, this active research area faces several challenges that arise mostly from the limitations of current datasets: they are typically very small, recorded under very specific conditions, in a single language, and for a single disease.
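To make the setting concrete, below is a minimal sketch of one such detection pipeline: utterance-level acoustic features (here, simple MFCC statistics) fed to a standard classifier. The file paths, labels, and feature choices are hypothetical placeholders for illustration only, not the methods presented in this talk.

import librosa
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def utterance_features(path):
    """Load a recording and summarise frame-level MFCCs as one fixed-length vector."""
    signal, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    # Mean and standard deviation over time: a crude but common utterance-level summary.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical labelled data: recording paths and 0/1 diagnosis labels.
paths = ["rec_001.wav", "rec_002.wav", "rec_003.wav", "rec_004.wav"]
labels = np.array([0, 1, 0, 1])

X = np.vstack([utterance_features(p) for p in paths])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# With datasets this small, fold-to-fold variance is large --
# exactly the data-scarcity problem discussed above.
print("fold accuracies:", cross_val_score(clf, X, labels, cv=2))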

These challenges guide our research: How to deal with data scarcity? How to disentangle the effects of aging or other coexisting diseases in small, cross-sectional datasets? How to deal with changing recording conditions, namely across longitudinal studies? How to transfer results across different corpora, often in different languages? Can other modalities (e.g., visual speech, EMG) provide information complementary to the acoustic speech signal? Are the results generalizable, explainable, and fair?
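One common way to probe the cross-corpus question is leave-one-corpus-out evaluation: train on all corpora but one and test on the held-out one. The sketch below uses scikit-learn's LeaveOneGroupOut on randomly generated stand-in features and labels; the corpus names and data are hypothetical, purely to show the evaluation mechanics.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 26))                 # 60 utterances, 26-dim feature vectors (stand-ins)
y = rng.integers(0, 2, size=60)               # binary disease labels (stand-ins)
corpus = np.repeat(["corpus_A", "corpus_B", "corpus_C"], 20)  # which dataset each utterance came from

logo = LeaveOneGroupOut()
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Train on two corpora, test on the held-out one; the gap between in-corpus and
# held-out accuracy gives a rough measure of how well results transfer.
for train_idx, test_idx in logo.split(X, y, groups=corpus):
    clf.fit(X[train_idx], y[train_idx])
    held_out = corpus[test_idx][0]
    print(held_out, "accuracy:", clf.score(X[test_idx], y[test_idx]))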

In this talk, we will illustrate these challenges for different diseases, in particular through our work on the detection of Alzheimer’s disease in longitudinal and cross-corpus analyses. We will also explore multimodal approaches for the prediction of obstructive sleep apnea.
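As an illustration of how modalities can be combined, the sketch below shows simple late fusion: separate classifiers for acoustic and visual-speech features whose predicted probabilities are averaged. All arrays, labels, and the equal fusion weights are hypothetical placeholders, not the approach used in the talk.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_acoustic = rng.normal(size=(40, 26))   # e.g., utterance-level acoustic features (stand-ins)
X_visual = rng.normal(size=(40, 10))     # e.g., lip-movement / visual-speech features (stand-ins)
y = rng.integers(0, 2, size=40)          # hypothetical apnea vs. control labels

acoustic_clf = LogisticRegression(max_iter=1000).fit(X_acoustic, y)
visual_clf = LogisticRegression(max_iter=1000).fit(X_visual, y)

# Late fusion: combine the two modalities at the probability level.
# (In practice the fused scores would be evaluated on held-out data.)
p_acoustic = acoustic_clf.predict_proba(X_acoustic)[:, 1]
p_visual = visual_clf.predict_proba(X_visual)[:, 1]
p_fused = 0.5 * p_acoustic + 0.5 * p_visual
print("fused predictions:", (p_fused > 0.5).astype(int)[:10])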

Speaker Biographies

Catarina Botelho has been a PhD student at Instituto Superior Técnico / INESC-ID, Universidade de Lisboa, since 2019. Her research topic is "Speech as a biomarker for speech-affecting diseases", focusing on the use of speech for medical diagnosis, monitoring, and therapy. In particular, she has worked on obstructive sleep apnea, Parkinson's disease, Alzheimer's disease, and COVID-19, and with multimodal signals including EMG and visual speech. She was a research intern at Google AI, Toronto. She has been involved in the Student Advisory Committee of the International Speech Communication Association (ISCA-SAC) since 2020 and currently serves as its General Coordinator. She is also an IEEE student member.

Ayimnisagul Ablimit received her master's degree in computer science from Universität Bremen, Germany, and has been a PhD student at the Cognitive Systems Lab, Universität Bremen, since July 2019. Her research topic is "speech-based cognitive impairment screening", focusing on developing automatic speech recognition systems for spontaneous speech corpora, developing multilingual automatic speech recognition systems, screening for Alzheimer's disease and age-associated cognitive decline from conversational speech, and speech-based cognitive performance detection. She has been an IEEE Student Member since March 2019.