Paper ID | IVMSP-34.2
Paper Title | BISHIFT-NET FOR IMAGE INPAINTING
Authors | Xue Zhou, Tao Dai, Yong Jiang, Shutao Xia, Tsinghua University, China
Session | IVMSP-34: Inpainting and Occlusions Handling
Location | Gather.Town
Session Time | Friday, 11 June, 14:00 - 14:45
Presentation Time | Friday, 11 June, 14:00 - 14:45
Presentation | Poster
Topic | Image, Video, and Multidimensional Signal Processing: [IVSMR] Image & Video Sensing, Modeling, and Representation
Abstract | Image inpainting, which aims to fill the missing region of a corrupted image with plausible content using information from the known region, remains a challenging task in computer vision. Most existing methods generate content with blurry textures because they propagate convolutional features through a fully connected layer. To address this problem, Shift-Net shifts encoder features from the known region to serve as an estimation of the missing parts; however, it ignores the decoder features, which carry newly encoded information. Inspired by this, we propose a new inpainting model called BiShift-Net. BiShift-Net adopts the U-Net structure, into which we introduce a BiShift layer. The BiShift layer captures information from both encoder and decoder features and rearranges them to generate sharp textures. Experiments show that BiShift-Net outperforms other state-of-the-art CNN-based methods while producing more faithful results.
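The abstract describes a shift-style layer that fills in missing regions by matching features from the known region. A minimal NumPy sketch of that underlying idea, assuming a nearest-neighbour match on decoder features that copies the corresponding encoder feature across (the function name, cosine matching, and array layout are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def bishift_fill(enc_feat, dec_feat, mask):
    """Illustrative sketch of a shift-style fill (not the paper's code).

    enc_feat, dec_feat : (H, W, C) feature maps from a U-Net encoder/decoder.
    mask               : (H, W) boolean, True where the region is missing.

    For each missing location, find the known location whose decoder
    feature is most similar (cosine similarity), then copy the encoder
    feature from that known location into the missing one.
    """
    known = np.argwhere(~mask)      # coordinates of known locations
    missing = np.argwhere(mask)     # coordinates of missing locations
    out = enc_feat.copy()

    # Normalize decoder features so a dot product is cosine similarity.
    d = dec_feat / (np.linalg.norm(dec_feat, axis=-1, keepdims=True) + 1e-8)
    known_vecs = d[known[:, 0], known[:, 1]]          # (K, C)

    for i, j in missing:
        sims = known_vecs @ d[i, j]                   # similarity to each known loc
        ki, kj = known[np.argmax(sims)]               # best-matching known location
        out[i, j] = enc_feat[ki, kj]                  # shift its encoder feature in
    return out
```

The known region is left untouched; only locations under the mask are overwritten with shifted encoder features, which is what lets this kind of layer produce sharper textures than blending features through a fully connected layer.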