Technical Program

Paper Detail

Session: Image and Video Quality Assessment with Industry Applications
Location: Lecture Room
Session Time: Tuesday, June 26, 10:20 - 12:40
Presentation Time: Tuesday, June 26, 11:40 - 12:00
Presentation: Special Session Lecture
Paper Title: A SIMPLE PREDICTION FUSION IMPROVES DATA-DRIVEN FULL-REFERENCE VIDEO QUALITY ASSESSMENT MODELS
Authors: Christos Bampis; The University of Texas at Austin, United States
 Alan Bovik; The University of Texas at Austin, United States
 Zhi Li; Netflix, United States
Abstract: When developing data-driven video quality assessment algorithms, the limited size of the available ground-truth subjective data may hamper the generalization capabilities of the trained models. Nevertheless, if the application context is known a priori, leveraging data-driven approaches for video quality prediction can deliver promising results. Towards achieving high-performing video quality prediction for compression and scaling artifacts, Netflix developed the Video Multi-method Assessment Fusion (VMAF) framework, a full-reference prediction system that uses a regression scheme to integrate multiple perception-motivated features into a single video quality prediction. However, the current version of VMAF does not fully capture the temporal video features relevant to temporal distortions. To address this limitation, we developed Ensemble VMAF (E-VMAF): a video quality predictor that fuses two models: VMAF and a model based on entropic differencing features calculated on video frames and frame differences. We demonstrate the improved performance of E-VMAF on various subjective video databases. The proposed model will become available as part of the open-source package at https://github.com/Netflix/vmaf.
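The abstract describes combining two predictors' outputs but does not state the fusion rule. The sketch below illustrates one simple possibility, a convex combination of per-video scores; the function name, the `weight` parameter, and the averaging rule are assumptions for illustration, not the paper's actual method.

```python
def fuse_predictions(vmaf_scores, ed_scores, weight=0.5):
    """Fuse two lists of per-video quality predictions by weighted averaging.

    Hypothetical sketch: `weight` is the contribution of the first model's
    predictions; the remaining (1 - weight) comes from the second model
    (e.g., one based on entropic differencing features).
    """
    return [weight * v + (1.0 - weight) * e for v, e in zip(vmaf_scores, ed_scores)]

# Example: equal-weight fusion of two models' scores for two videos.
fused = fuse_predictions([80.0, 60.0], [90.0, 70.0])
```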