dc.description.abstract | Dehazing is an important pre-processing step in almost all computer vision systems deployed in outdoor settings. Existing dehazing methods are based either on heuristic image priors or on models trained with hazy-clear image pairs of the same scene. In practice, however, obtaining paired images is not feasible, so researchers often add synthetic haze to clean images to create paired datasets. This can result in a domain shift when models trained on synthetic images are applied to real-world outdoor scenes. In this work, we propose UD-GAN (UnPaired Dehaze GAN), a novel dehazing model based on generative adversarial networks that can generate clean images using only unpaired data. UD-GAN can not only be trained on a large repository of real-world clear and hazy images, but can also learn the characteristics of true haze better than models trained on synthetic data. Moreover, our method is model-agnostic and performs well even when the assumptions made by the physical haze model do not hold. UD-GAN uses an attention-based generator, and we explore two types of attention maps that can be used with this generator. Finally, we evaluate our approach using full-reference metrics, no-reference metrics, and object-detection accuracy. The qualitative and quantitative results produced by UD-GAN are on par with current state-of-the-art dehazing methods. | en |