Segmentation-aware Image Denoising Without Knowing True Segmentation
Abstract
Recent works have explored application-driven image restoration networks that not only remove noise from images but also preserve their semantic-aware details, making them suitable as a pre-processing step for various high-level computer vision tasks. However, such approaches require extra annotations for their high-level vision tasks in order to train the joint pipeline with hybrid losses, and those annotations are available for only a few image sets, which restricts the general applicability of these methods to denoising unseen, unannotated images. Motivated by this, we propose a segmentation-aware image denoising model, dubbed U-SAID, based on a novel unsupervised approach with a pixel-wise uncertainty loss. U-SAID requires no ground-truth segmentation map and can therefore be applied to any image dataset. It generates denoised images of quality comparable to or even better than that of its supervised counterpart and of more general "application-agnostic" denoisers, and its denoised results are more robust inputs for subsequent semantic segmentation tasks. Moreover, by plugging in its "universal" denoiser without fine-tuning, we demonstrate the superior generalizability of U-SAID in three ways: (1) denoising unseen types of images; (2) denoising as pre-processing for segmenting unseen noisy images; and (3) denoising for unseen high-level tasks. Extensive experiments on various popular image sets verify the effectiveness and robustness of the proposed U-SAID model.
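The abstract does not spell out how the pixel-wise uncertainty loss is computed. As one plausible illustration only, not the thesis's exact formulation, a per-pixel uncertainty can be measured as the normalized entropy of a segmentation head's class probabilities on the denoised image: confident (low-entropy) predictions suggest the denoiser preserved segmentation-relevant structure, and no ground-truth segmentation map is needed. The function names and the NumPy setting below are hypothetical.

```python
import numpy as np

def pixelwise_uncertainty(logits):
    """Normalized per-pixel entropy of segmentation logits.

    logits: array of shape (C, H, W) -- class scores per pixel
            (hypothetical segmentation-head output).
    Returns an (H, W) map in [0, 1]: 0 = fully confident,
    1 = uniform (maximally uncertain) prediction.
    """
    # numerically stable softmax over the class axis
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    p = e / e.sum(axis=0, keepdims=True)
    # per-pixel entropy, normalized by log(C) so the range is [0, 1]
    ent = -(p * np.log(p + 1e-12)).sum(axis=0)
    return ent / np.log(logits.shape[0])

def uncertainty_loss(logits):
    """Scalar loss: mean uncertainty over all pixels."""
    return float(pixelwise_uncertainty(logits).mean())
```

In such a scheme the loss would be back-propagated through a (frozen or jointly trained) segmentation module into the denoiser; uniform logits yield a loss of 1, sharply peaked logits a loss near 0.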
Citation
Wang, Sicheng (2020). Segmentation-aware Image Denoising Without Knowing True Segmentation. Master's thesis, Texas A&M University. Available electronically from https://hdl.handle.net/1969.1/192818.