Description
Recent advances in computer vision have largely been driven by supervised training on labeled data, yet labeling datasets remains costly and time-intensive. This dissertation explores how to improve the performance of deep neural networks when labeling information is limited or absent. I address this challenge through four primary methodologies: domain adaptation, self-supervision, input regularization, and label regularization. When labels are unavailable for the target dataset but a similar labeled dataset exists, domain adaptation is a valuable strategy for transferring knowledge from the labeled dataset to the target dataset. This dissertation introduces three novel domain adaptation methods that operate at the pixel, feature, and output levels. Another approach to the absence of labels is a novel self-supervision technique tailored to training Vision Transformers to extract rich features.
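The abstract does not spell out these methods. As a generic point of reference only, the sketch below illustrates one standard feature-level idea from this family, adversarial domain confusion via gradient reversal (in the style of DANN), written in PyTorch; the toy layer sizes, batch construction, and loss weighting are illustrative assumptions, not the dissertation's models.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        # Identity in the forward pass; flips the gradient sign on the way
        # back, so the feature extractor learns to confuse the domain classifier.
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    features = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # toy feature extractor
    domain_clf = nn.Linear(64, 2)                           # domain discriminator

    source = torch.randn(8, 32)  # labeled source batch (inputs only, for brevity)
    target = torch.randn(8, 32)  # unlabeled target batch
    x = torch.cat([source, target])
    domain_labels = torch.cat([torch.zeros(8), torch.ones(8)]).long()

    # Reversed gradients push the shared features toward domain invariance;
    # in practice this loss is combined with a supervised loss on the source.
    logits = domain_clf(GradReverse.apply(features(x), 1.0))
    loss = nn.CrossEntropyLoss()(logits, domain_labels)
    loss.backward()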
The third and fourth approaches focus on scenarios where only a limited amount of labeled data is available. In such cases, I present novel regularization techniques designed to mitigate overfitting by modifying the input data and the target labels, respectively.
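The specific regularizers are the dissertation's contributions and are not described in this abstract. For orientation, here is a minimal sketch of two well-known techniques from the same two families, mixup (modifying inputs) and label smoothing (modifying targets), in PyTorch; both mitigate overfitting by preventing the network from fitting hard, isolated targets.

    import torch
    import torch.nn.functional as F

    def mixup(x, y, num_classes, alpha=0.2):
        # Input regularization: train on convex combinations of example pairs.
        lam = torch.distributions.Beta(alpha, alpha).sample().item()
        perm = torch.randperm(x.size(0))
        y1 = F.one_hot(y, num_classes).float()
        return lam * x + (1 - lam) * x[perm], lam * y1 + (1 - lam) * y1[perm]

    def smooth_labels(y, num_classes, eps=0.1):
        # Label regularization: soften one-hot targets toward uniform.
        return (1 - eps) * F.one_hot(y, num_classes).float() + eps / num_classes

    x = torch.randn(16, 3, 32, 32)   # a toy image batch
    y = torch.randint(0, 10, (16,))  # integer class labels
    x_mix, y_mix = mixup(x, y, num_classes=10)
    y_soft = smooth_labels(y, num_classes=10)
    # Either soft target is trained with a soft-target cross-entropy, e.g.
    # -(y_soft * F.log_softmax(model(x), dim=1)).sum(1).mean()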
Details
Title
- Making the Best of What We Have: Novel Strategies for Training Neural Networks under Restricted Labeling Information
Contributors
- Chhabra, Sachin (Author)
- Li, Baoxin (Thesis advisor)
- Venkateswara, Hemanth (Committee member)
- Yang, Yezhou (Committee member)
- Wu, Teresa (Committee member)
- Yang, Yingzhen (Committee member)
- Arizona State University (Publisher)
Date Created
2024
Note
- Partial requirement for: Ph.D., Arizona State University, 2024
- Field of study: Computer Science