The full text of this item is not available at this time because the student has placed this item under an embargo for a period of time. The Libraries are not authorized to provide a copy of this work during the embargo period, even for Texas A&M users with NetID.
Underwater Image Enhancement Using Domain-Adversarial Learning
Clean underwater images have a variety of applications, such as marine research and autonomous underwater vehicles. Enhancing underwater images is especially difficult because of the diversity of conditions under which they are captured; for example, images captured in deep water look different from those captured in shallow water. It is therefore hard to obtain clean underwater images, since no single algorithm handles this diversity. In this work, we address this diversity by learning the scene-specific features of an image while discarding the features that encode the water type, and we generate clean underwater images from these learned domain-agnostic features. We train our model on a dataset synthesized from the NYU Depth Dataset V2. Our model outperforms existing methods on quantitative metrics for almost all water types and also generalizes well to real-world datasets. Underwater images preprocessed with our model also show improved performance on high-level vision tasks such as object detection.
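The mechanism commonly used to discard domain-identifying features (here, the water type) in domain-adversarial learning is a gradient reversal layer: it acts as the identity in the forward pass, but negates and scales the gradient flowing back from the domain classifier, so the encoder is pushed to produce features the classifier cannot use. The sketch below is a minimal, framework-free illustration of that layer's behavior under assumed names (`GradientReversal`, `lam`); it is not the thesis's actual implementation.

```python
import numpy as np

class GradientReversal:
    """Gradient reversal layer (GRL) used in domain-adversarial training.

    Forward pass: identity, so the domain classifier sees the features
    unchanged. Backward pass: multiplies the incoming gradient by -lam,
    so the encoder receives a gradient that *increases* the domain
    classifier's loss, encouraging domain-agnostic features.
    """

    def __init__(self, lam=1.0):
        # lam controls how strongly the domain signal is suppressed.
        self.lam = lam

    def forward(self, x):
        # Identity: features pass through untouched.
        return x

    def backward(self, grad_output):
        # Reverse and scale the gradient from the domain classifier.
        return -self.lam * grad_output

# Tiny demonstration with a hypothetical feature vector.
grl = GradientReversal(lam=0.5)
features = np.array([1.0, 2.0, 3.0])
out = grl.forward(features)            # identical to the input
grad_from_classifier = np.ones(3)
grad_to_encoder = grl.backward(grad_from_classifier)  # [-0.5, -0.5, -0.5]
```

In a full model, the same encoder output would feed both the enhancement decoder (with a normal reconstruction loss) and the domain classifier through this layer, so one backward pass trains both objectives adversarially.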
underwater image enhancement
Uplavikar, Pritish Milind (2019). Underwater Image Enhancement Using Domain-Adversarial Learning. Master's thesis, Texas A&M University. Available electronically from