Paper ID: IFS-1.4
Paper Title: Subjective and objective evaluation of deepfake videos
Authors: Pavel Korshunov, Sébastien Marcel, Idiap Research Institute, Switzerland
Session: IFS-1: Multimedia Forensics 1
Location: Gather.Town
Session Time: Tuesday, 08 June, 13:00 - 13:45
Presentation Time: Tuesday, 08 June, 13:00 - 13:45
Presentation: Poster
Topic: Information Forensics and Security: [MMF] Multimedia Forensics
Abstract |
Practically anyone can generate a realistic-looking deepfake, and the online prevalence of such fake videos threatens to erode societal trust in video evidence. To counter this looming threat, the research community has recently proposed methods to detect deepfakes. However, it is still unclear how realistic deepfake videos appear to an average person, and whether algorithms are significantly better than humans at detecting them. Therefore, this paper presents a subjective study with 60 naive subjects that evaluates how difficult it is for humans to recognize a deepfake. For the study, 120 videos (60 deepfakes and 60 originals) were manually selected from the Facebook database used in Kaggle's Deepfake Detection Challenge 2020. The results of the subjective evaluation were compared with two state-of-the-art deepfake detection methods, based on Xception and EfficientNet neural networks pre-trained on two other public databases: the Google and Jigsaw subset of FaceForensics++ and the Celeb-DF v2 dataset. The experiments demonstrate that while human perception is very different from the perception of a machine, both can be successfully fooled by deepfakes, albeit in different ways. Specifically, algorithms struggle to detect deepfake videos that humans found very easy to spot.
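The comparison described above can be sketched in code. The following is a minimal, hypothetical illustration of the evaluation protocol, not the paper's implementation: per-frame fake probabilities from a CNN detector are averaged into a video-level score, thresholded, and the resulting accuracy is compared with the accuracy of human annotators on the same videos. All scores, votes, and the 0.5 threshold below are made-up stand-ins, not data from the study.

```python
def video_score(frame_scores):
    """Aggregate per-frame fake probabilities into one video-level score."""
    return sum(frame_scores) / len(frame_scores)

def accuracy(predictions, labels):
    """Fraction of videos classified correctly (1 = deepfake, 0 = original)."""
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    return correct / len(labels)

# Toy example: two deepfakes (label 1) and two originals (label 0).
labels = [1, 1, 0, 0]

# Hypothetical per-frame probabilities from a detector (e.g. Xception-based).
detector_frames = [
    [0.9, 0.8, 0.95],  # a deepfake the machine flags easily
    [0.3, 0.4, 0.2],   # a deepfake the detector misses
    [0.1, 0.2, 0.1],   # an original, correctly passed
    [0.6, 0.7, 0.5],   # an original the detector falsely flags
]
detector_preds = [int(video_score(f) >= 0.5) for f in detector_frames]

# Hypothetical majority votes from human subjects on the same four videos.
human_preds = [1, 1, 0, 0]

print("detector accuracy:", accuracy(detector_preds, labels))  # 0.5
print("human accuracy:", accuracy(human_preds, labels))        # 1.0
```

The toy numbers deliberately mirror the abstract's finding: the detector and the humans disagree on which videos are hard, so their accuracies diverge even on the same material.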