Paper ID | IVMSP-34.1
Paper Title | SEMANTIC-AWARE CONTEXT AGGREGATION FOR IMAGE INPAINTING
Authors | Zhilin Huang, Chujun Qin, Ruixin Liu, Zhenyu Weng, Yuesheng Zhu, Peking University, China
Session | IVMSP-34: Inpainting and Occlusions Handling
Location | Gather.Town
Session Time | Friday, 11 June, 14:00 - 14:45
Presentation Time | Friday, 11 June, 14:00 - 14:45
Presentation | Poster
Topic | Image, Video, and Multidimensional Signal Processing: [IVTEC] Image & Video Processing Techniques
Abstract | Recent attention-based image inpainting methods have made inspiring progress by propagating distant contextual information into holes. However, they tend to generate blurry content because the propagation process is often misled by preliminarily recovered hole features that are not well inferred. To address this problem, we propose a novel semantic-aware context aggregation module (SACA) that aggregates distant contextual information from a semantic perspective by exploiting the internal semantic similarity of the input feature map. Compared with existing attention mechanisms that model the relations of all pixel pairs, SACA suppresses the impact of misleading hole features during context aggregation and significantly reduces the computational burden by learning the relations between pixels and semantics. In addition, we apply SACA to both high-level and low-level feature maps in our model to generate semantically and visually plausible results. Extensive experiments on the Outdoor Scenes, CelebA, and Paris StreetView datasets validate the superiority of our method over existing methods.
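
The abstract contrasts pixel-to-pixel attention with pixel-to-semantic aggregation. The sketch below (PyTorch) illustrates that general idea only, under stated assumptions: the paper's exact SACA architecture is not given here, so the class name `SemanticContextAggregation`, the parameter `num_semantics`, and the 1x1-convolution projections are illustrative choices, not the authors' implementation. The point of the sketch is the complexity contrast: each pixel attends to K pooled semantic descriptors rather than to all N pixels, so the relation tensor is O(N*K) instead of O(N^2).

```python
# Minimal sketch of pixel-to-semantic context aggregation (illustrative,
# not the paper's implementation). Each pixel is softly assigned to K
# semantic groups; features are pooled into K descriptors and then
# redistributed to pixels through the same assignment map.
import torch
import torch.nn as nn


class SemanticContextAggregation(nn.Module):
    """Aggregate context via pixel-to-semantic relations (assumed design)."""

    def __init__(self, channels: int, num_semantics: int = 8):
        super().__init__()
        # 1x1 convs produce per-pixel semantic-assignment logits and values.
        self.assign = nn.Conv2d(channels, num_semantics, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.out = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        n = h * w
        # Soft assignment of each pixel to K semantic groups: (B, K, N).
        a = self.assign(x).view(b, -1, n).softmax(dim=1)
        v = self.value(x).view(b, c, n)  # (B, C, N)
        # Pool pixel features into K semantic descriptors: (B, C, K).
        semantics = torch.einsum('bkn,bcn->bck', a, v)
        semantics = semantics / (a.sum(dim=2).unsqueeze(1) + 1e-6)
        # Redistribute: each pixel gathers context from the K descriptors
        # it is assigned to, instead of from all N pixels.
        agg = torch.einsum('bck,bkn->bcn', semantics, a).view(b, c, h, w)
        return x + self.out(agg)  # residual connection


# Usage on a feature map, e.g. from an inpainting encoder:
feat = torch.randn(2, 64, 32, 32)
saca_like = SemanticContextAggregation(64, num_semantics=8)
print(saca_like(feat).shape)  # torch.Size([2, 64, 32, 32])
```

Because hole pixels interact with context only through the K semantic descriptors, poorly inferred hole features contribute less to the aggregation than they would in dense pixel-pair attention, which is the intuition the abstract describes.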