Paper ID | HLT-10.2 |
Paper Title | LIFI: TOWARDS LINGUISTICALLY INFORMED FRAME INTERPOLATION |
Authors | Aradhya Mathur, IIIT Delhi, India; Devansh Batra, IIIT Delhi, India; Yaman Kumar Singla, IIIT Delhi, India / Adobe / State University of New York at Buffalo, United States; Rajiv Ratn Shah, IIIT Delhi, India; Changyou Chen, State University of New York at Buffalo, United States; Roger Zimmermann, National University of Singapore, Singapore |
Session | HLT-10: Multi-modality in Language |
Location | Gather.Town |
Session Time | Wednesday, 09 June, 16:30 - 17:15 |
Presentation Time | Wednesday, 09 June, 16:30 - 17:15 |
Presentation | Poster |
Topic | Human Language Technology: [HLT-MMPL] Multimodal Processing of Language |
Abstract | Here we explore the problem of speech video interpolation. Accounting for close to 70% of web traffic, such content today forms the primary medium of online communication and entertainment. Despite high performance on conventional metrics like MSE, PSNR, and SSIM, we find that state-of-the-art frame interpolation models fail to produce faithful speech interpolation. For instance, we observe that in most interpolated frames the lips stay static even while the person is still speaking. With this motivation, using the information of words, sub-words, and visemes, we provide a new set of linguistically informed metrics targeted explicitly at the problem of speech video interpolation. We release several datasets to test video interpolation models on their speech understanding. We also design linguistically informed deep learning video interpolation algorithms to generate the missing frames. |
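The abstract contrasts conventional per-frame metrics with the paper's linguistically informed ones. As a rough illustration only (this is not the authors' code; the function name, frame format, and use of scikit-image are assumptions), the sketch below computes the conventional scores the abstract mentions over aligned ground-truth and interpolated frames:

```python
# Minimal sketch of the conventional frame-interpolation metrics (MSE, PSNR,
# SSIM) named in the abstract. Assumes each video is a list of uint8 RGB
# numpy arrays of identical shape; all names here are illustrative.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def conventional_scores(gt_frames, pred_frames):
    """Average MSE, PSNR, and SSIM over aligned frame pairs."""
    mse, psnr, ssim = [], [], []
    for gt, pred in zip(gt_frames, pred_frames):
        diff = gt.astype(np.float64) - pred.astype(np.float64)
        mse.append(np.mean(diff ** 2))
        psnr.append(peak_signal_noise_ratio(gt, pred, data_range=255))
        ssim.append(structural_similarity(gt, pred, channel_axis=-1, data_range=255))
    return {"MSE": np.mean(mse), "PSNR": np.mean(psnr), "SSIM": np.mean(ssim)}
```

Because these scores average over entire frames, a small but linguistically crucial region such as the mouth contributes little to them, which is consistent with the abstract's observation that models can score well on MSE, PSNR, and SSIM while the lips stay static during speech.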