Paper ID | TEC-7.11
Paper Title | GPG-NET: FACE INPAINTING WITH GENERATIVE PARSING GUIDANCE
Authors | Yuelong Li, Jialiang Yan, Jianming Wang, Tiangong University, China
Session | TEC-7: Interpolation, Enhancement, Inpainting
Location | Area G
Session Time | Tuesday, 21 September, 08:00 - 09:30
Presentation Time | Tuesday, 21 September, 08:00 - 09:30
Presentation | Poster
Topic | Image and Video Processing: Restoration and enhancement
Abstract | Face inpainting is a meaningful but challenging task in the fields of computer vision and image processing. As is well known, restoring the overall structural information is critical to successful image inpainting. Hence, in this paper, we employ face parsing to assist facial image reconstruction. Intact facial images contain extensive details, which may be difficult to recover perfectly when the images are severely damaged, whereas their corresponding parsing maps are much simpler, accommodating only the overall structural information. Therefore, recovering the face parsing map is a comparatively simple and tractable task. Based on this idea, a two-stage face inpainting framework, namely the Generative Parsing Guidance Network (GPG-Net), is developed. Moreover, a Semantic Compensation Module (SCM) is incorporated to ensure effective aggregation of contextual information, while a Contextual Attention Module (CAM) is introduced to further improve appearance rationality. Extensive experiments are conducted on the publicly available CelebA-HQ dataset to verify the effectiveness of the proposed approach.
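The abstract only outlines the two-stage design, so the following is a minimal PyTorch sketch of how a parsing-guided pipeline of this kind could be wired: stage 1 predicts a parsing map from the masked face, and stage 2 inpaints the image conditioned on that map. The module internals, channel counts, and the 19 parsing classes (a common CelebAMask-HQ convention) are assumptions for illustration; the SCM and CAM blocks are not reproduced, and this is not the authors' implementation.

```python
# Hypothetical sketch of a two-stage, parsing-guided inpainting pipeline.
# Placeholder conv stacks stand in for the real GPG-Net generators.
import torch
import torch.nn as nn

class ParsingGenerator(nn.Module):
    """Stage 1: predict a face parsing map from the masked image and mask."""
    def __init__(self, in_ch=4, n_classes=19):  # 19 classes is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, n_classes, 1),
        )

    def forward(self, masked_img, mask):
        return self.net(torch.cat([masked_img, mask], dim=1))

class InpaintingGenerator(nn.Module):
    """Stage 2: reconstruct the face conditioned on the predicted parsing map."""
    def __init__(self, img_ch=3, n_classes=19):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch + 1 + n_classes, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, img_ch, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, masked_img, mask, parsing_logits):
        parsing = torch.softmax(parsing_logits, dim=1)  # soft parsing guidance
        return self.net(torch.cat([masked_img, mask, parsing], dim=1))

# Usage: masked image (holes zeroed) + binary mask -> parsing map -> inpainted face.
img = torch.rand(1, 3, 256, 256)
mask = (torch.rand(1, 1, 256, 256) > 0.5).float()  # 1 marks missing pixels
masked = img * (1 - mask)
stage1, stage2 = ParsingGenerator(), InpaintingGenerator()
parsing_logits = stage1(masked, mask)
output = stage2(masked, mask, parsing_logits)
composited = masked + output * mask  # keep known pixels, fill only the holes
```

The key design point reflected here is that the parsing map carries only the coarse facial structure, so stage 1 solves an easier problem whose output then constrains stage 2's texture synthesis.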