The full text of this item is not available at this time because the student has placed this item under an embargo for a period of time. The Libraries are not authorized to provide a copy of this work during the embargo period, even for Texas A&M users with NetID.
Delving into the Robustness, Efficiency and Practicality of Self-Supervised Learning
Abstract
Contrastive learning, a class of self-supervised learning methods, learns strong visual representations by pulling views of the same image together while pushing different images apart. Although contrastive learning has achieved success in many downstream tasks, several problems still hinder its use in real-world applications, namely vulnerability to imbalanced data, high training cost, and limited practicality. In this dissertation, several techniques are proposed to tackle these challenges. Specifically, an implicit balancing method and an active learning algorithm are proposed to improve robustness to imbalanced data. An efficient scaling method is developed to reduce the training cost of large networks. In addition, practicality is improved from two aspects: i) an adversarial contrastive pre-training framework is proposed to address vulnerability to adversarial attacks; ii) a decomposition and alignment strategy is developed to boost the transferability of downstream few-shot learning. We believe the proposed methods can benefit real-world applications of contrastive learning.
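The pull-together/push-apart objective described in the abstract is commonly formalized as the InfoNCE loss: for each embedding, an augmented view of the same image serves as the positive, and all other images in the batch serve as negatives. The NumPy sketch below is illustrative only (the function name, temperature, and toy data are assumptions, not details from the dissertation):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE contrastive loss (illustrative sketch).

    z1, z2: (N, D) arrays of L2-normalised embeddings. Row i of z2 is
    the positive view for row i of z1; all other rows act as negatives.
    """
    # Cosine similarities between every pair of views, scaled by temperature.
    logits = z1 @ z2.T / temperature                       # (N, N)
    # Log-softmax over each row; the positive sits on the diagonal.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy check: matched positive pairs should yield a lower loss
# than mismatched (shuffled) ones.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)              # L2-normalise rows
aligned = info_nce_loss(z, z)                              # positives identical
shuffled = info_nce_loss(z, z[::-1].copy())                # positives mismatched
print(aligned < shuffled)  # → True
```

Minimizing this loss drives each embedding toward its positive view and away from the other images in the batch, which is the "pull similar / push distinct" behavior the abstract describes.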
Subject
Self-supervised learning
Contrastive learning
Long-tail distribution
Adversarial robustness
Transferability
Citation
Jiang, Ziyu (2022). Delving into the Robustness, Efficiency and Practicality of Self-Supervised Learning. Doctoral dissertation, Texas A&M University. Available electronically from https://hdl.handle.net/1969.1/198768.