Paper ID: MMSP-3.2
Paper Title: COLLABORATIVE LEARNING TO GENERATE AUDIO-VIDEO JOINTLY
Authors: Vinod Kurmi, Vipul Bajaj, Badri Patro, Venkatesh K Subramanian, Indian Institute of Technology, Kanpur, India; Vinay P Namboodiri, University of Bath, United Kingdom; Preethi Jyothi, Indian Institute of Technology, Bombay, India
Session: MMSP-3: Multimedia Synthesis and Enhancement
Location: Gather.Town
Session Time: Wednesday, 09 June, 14:00 - 14:45
Presentation Time: Wednesday, 09 June, 14:00 - 14:45
Presentation: Poster
Topic: Multimedia Signal Processing: Signal Processing for Multimedia Applications
Abstract: A number of techniques have demonstrated GAN-based generation of multimedia data for a single modality at a time, such as images, videos, or audio. However, multi-modal generation, specifically the joint generation of both audio and video, has so far not been well explored. To address this problem, we propose a method that generates naturalistic video and audio samples through the joint, correlated generation of the two modalities. The proposed method uses multiple discriminators to ensure that the audio, the video, and their joint output are each indistinguishable from real-world samples. We present a dataset for this task and show that we can generate realistic samples. The method is validated using standard metrics such as the Inception Score and the Fréchet Inception Distance (FID), as well as through human evaluation.
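The multi-discriminator objective described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a standard non-saturating GAN generator loss and uses hypothetical discriminator logits for the audio-only, video-only, and joint audio-video outputs.

```python
import numpy as np

def sigmoid(x):
    """Map a raw discriminator logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def generator_loss(d_logit):
    """Non-saturating generator loss -log D(G(z)) for one discriminator."""
    return -np.log(sigmoid(d_logit) + 1e-8)

# Hypothetical logits produced by the three discriminators on a
# generated (audio, video) pair; real training would compute these
# from the discriminator networks.
audio_logit = 0.3   # audio discriminator on generated audio
video_logit = -0.5  # video discriminator on generated video
joint_logit = 0.1   # joint discriminator on the paired output

# The generator is trained against all three adversarial terms at once,
# so its objective is the sum of the per-discriminator losses.
total = (generator_loss(audio_logit)
         + generator_loss(video_logit)
         + generator_loss(joint_logit))
print(round(total, 4))
```

In practice each term could also carry a weighting coefficient to balance the modalities; the unweighted sum above is the simplest instance of training one generator against several discriminators simultaneously.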