Efficient Neural Architecture Search for Automated Deep Learning
Abstract
Deep learning has been widely adopted because of its success in many real-world applications. To adopt deep learning, however, people often face a non-trivial learning curve, such as learning the foundations of machine learning theory and how to use deep learning libraries. Automated deep learning has emerged as an important research topic aimed at reducing the prerequisites for adopting deep learning. Neural architecture search (NAS), the most important component of the automated deep learning process, addresses the problem of automatically finding a good neural architecture. However, existing NAS methods suffer from several problems: they usually demand substantial computational resources and cannot be efficiently and jointly tuned with other parts of the deep learning solution, such as the preprocessing steps or the optimizer hyperparameters. This dissertation aims to improve the efficiency of NAS both as a stand-alone process and as an important step in the overall automated deep learning process. We propose a series of methods and frameworks for extracting information from neural architectures, improving the search and evaluation efficiency of NAS, enabling joint tuning with other hyperparameters, and automatically selecting data augmentation strategies.
Citation
Jin, Haifeng (2021). Efficient Neural Architecture Search for Automated Deep Learning. Doctoral dissertation, Texas A&M University. Available electronically from https://hdl.handle.net/1969.1/193093.