A Generative Method for Image Inpainting Using Semantic-Aware Conditioning
Abstract
Image inpainting is the task of filling in the missing regions of a masked image. Modern approaches to inpainting are unable to effectively utilize the contextual information present within the image, which results in color, texture, or boundary artifacts in the reconstructed result. We propose a semantic-aware generative method for image inpainting. We inject semantic information into the generator through conditional feature modulation and demonstrate our method's ability to distinguish objects and synthesize color and texture that are consistent with the semantic labels.
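Conditional feature modulation can be sketched as follows, as a minimal illustration only: it assumes a SPADE-style spatially adaptive design in PyTorch in which the semantic map predicts per-pixel scale and shift parameters for the generator's features, and all module names, channel counts, and layer choices here are hypothetical rather than the thesis's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticModulation(nn.Module):
    """Sketch of semantics-conditioned feature modulation (SPADE-style):
    a one-hot semantic map predicts per-pixel gamma/beta that modulate
    the normalized generator features. Names and sizes are assumptions."""

    def __init__(self, feature_channels, num_classes, hidden=128):
        super().__init__()
        # Parameter-free normalization of the incoming generator features.
        self.norm = nn.BatchNorm2d(feature_channels, affine=False)
        # Shared trunk over the semantic map, then per-pixel scale and shift.
        self.shared = nn.Sequential(
            nn.Conv2d(num_classes, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.to_gamma = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)

    def forward(self, features, seg_onehot):
        # Resize the semantic map to this feature level's spatial size.
        seg = F.interpolate(seg_onehot, size=features.shape[2:], mode="nearest")
        h = self.shared(seg)
        gamma, beta = self.to_gamma(h), self.to_beta(h)
        # Denormalize with semantics-conditioned scale and shift.
        return self.norm(features) * (1 + gamma) + beta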
To generate visually pleasing images, we introduce a dual-discriminator training framework comprising an input-consistency discriminator, which evaluates how well the inpainted region matches the surrounding unmasked areas, and a semantic-consistency discriminator, which assesses whether the generated image is consistent with the semantic labels. To obtain the complete segmentation map, we use a pre-trained network to compute the semantic map in the unmasked areas and inpaint its masked region using a network trained in an adversarial manner.
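The dual-discriminator generator objective and the segmentation-map completion step can be sketched as follows, as a minimal illustration assuming PyTorch and a non-saturating GAN loss; the discriminator interfaces, loss weights, and helper names are hypothetical, not taken from the thesis.

import torch
import torch.nn.functional as F

def generator_adversarial_loss(d_input, d_semantic, completed, seg_onehot,
                               w_input=1.0, w_semantic=1.0):
    # Input-consistency discriminator scores the completed image alone,
    # rewarding inpainted regions that blend with the unmasked areas.
    logits_in = d_input(completed)
    # Semantic-consistency discriminator scores the image paired with its
    # semantic map, encouraging agreement with the semantic labels.
    logits_sem = d_semantic(torch.cat([completed, seg_onehot], dim=1))
    loss_in = F.binary_cross_entropy_with_logits(
        logits_in, torch.ones_like(logits_in))
    loss_sem = F.binary_cross_entropy_with_logits(
        logits_sem, torch.ones_like(logits_sem))
    return w_input * loss_in + w_semantic * loss_sem

def complete_segmentation_map(pretrained_seg_net, seg_inpainter,
                              masked_image, mask):
    # A pre-trained network labels the visible pixels; an adversarially
    # trained inpainting network then fills in the labels under the mask
    # (mask == 1 marks missing pixels).
    visible_labels = pretrained_seg_net(masked_image)
    return seg_inpainter(visible_labels * (1 - mask), mask)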
We compare our approach against existing state-of-the-art methods and show a significant improvement in the visual quality of the results. To illustrate the representational power of our method, we perform our experiments on complex, publicly available datasets: the COCO-Stuff dataset and the CelebAMask-HQ dataset. Further, we show an extension of the technique that allows user interactivity: a user can manually edit the semantic content of the image to obtain the desired result.
Citation
Chanda, Deepankar (2020). A Generative Method for Image Inpainting Using Semantic-Aware Conditioning. Master's thesis, Texas A&M University. Available electronically from https://hdl.handle.net/1969.1/193037.