dc.contributor.advisor | Wang, Zhangyang | |
dc.creator | Wu, Junru | |
dc.date.accessioned | 2023-09-19T16:26:48Z | |
dc.date.available | 2023-09-19T16:26:48Z | |
dc.date.created | 2023-05 | |
dc.date.issued | 2023-05-03 | |
dc.date.submitted | May 2023 | |
dc.identifier.uri | https://hdl.handle.net/1969.1/198822 | |
dc.description.abstract | Deep learning has attracted considerable interest due to its record-breaking performance in a variety of domains, including computer vision, natural language processing, and multimodal understanding. Meanwhile, deep neural networks are usually parameter-heavy, inefficient, and highly specialized. As a result, there has been a growing demand, motivated by different needs, to improve the efficiency and interoperability of deep neural networks. In this dissertation, we propose to address these problems through a series of approaches: (a) reducing memory storage and energy footprint via parameter sharing; (b) improving the trade-off between performance and computation via neural architecture search; and (c) unifying neural architectures across different modalities via cross-modality gradient harmonization. | |
dc.format.mimetype | application/pdf | |
dc.language.iso | en | |
dc.subject | Efficient Deep Learning | |
dc.subject | Neural Network Compression | |
dc.subject | Neural Architecture Search | |
dc.subject | Neural Architecture Unification | |
dc.title | Towards Efficient Deep Learning: From Compression, Search to Unification | |
dc.type | Thesis | |
thesis.degree.department | Computer Science and Engineering | |
thesis.degree.discipline | Computer Science | |
thesis.degree.grantor | Texas A&M University | |
thesis.degree.name | Doctor of Philosophy | |
thesis.degree.level | Doctoral | |
dc.contributor.committeeMember | Ji, Shuiwang | |
dc.contributor.committeeMember | Qian, Xiaoning | |
dc.contributor.committeeMember | Kalantari, Nima | |
dc.type.material | text | |
dc.date.updated | 2023-09-19T16:26:49Z | |
local.etdauthor.orcid | 0000-0003-4443-0873 | |