dc.contributor.advisor  Wang, Zhangyang
dc.creator  Wu, Junru
dc.date.accessioned  2023-09-19T16:26:48Z
dc.date.available  2023-09-19T16:26:48Z
dc.date.created  2023-05
dc.date.issued  2023-05-03
dc.date.submitted  May 2023
dc.identifier.uri  https://hdl.handle.net/1969.1/198822
dc.description.abstract  Deep learning has gained considerable interest due to its record-breaking performance in a variety of domains, including computer vision, natural language processing, and multimodal understanding. Meanwhile, deep neural networks are usually parameter-heavy, inefficient, and highly specialized. As a result, there has been a growing demand, motivated by different needs, to improve the efficiency and interoperability of deep neural networks. In this dissertation, we propose to address these problems through a series of approaches: (a) reducing memory storage and energy footprint via parameter sharing; (b) improving the trade-off between performance and computation via neural architecture search; and (c) unifying neural architectures across different modalities via cross-modality gradient harmonization.
dc.format.mimetype  application/pdf
dc.language.iso  en
dc.subject  Efficient Deep learning
dc.subject  Neural Network Compression
dc.subject  Neural Architecture Search
dc.subject  Neural Architecture Unification
dc.title  Towards Efficient Deep Learning: From Compression, Search to Unification
dc.type  Thesis
thesis.degree.department  Computer Science and Engineering
thesis.degree.discipline  Computer Science
thesis.degree.grantor  Texas A&M University
thesis.degree.name  Doctor of Philosophy
thesis.degree.level  Doctoral
dc.contributor.committeeMember  Ji, Shuiwang
dc.contributor.committeeMember  Qian, Xiaoning
dc.contributor.committeeMember  Kalantari, Nima
dc.type.material  text
dc.date.updated  2023-09-19T16:26:49Z
local.etdauthor.orcid  0000-0003-4443-0873
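
As one concrete illustration of point (a) in the abstract above, the sketch below shows cross-layer parameter sharing in PyTorch: a single hidden block is reused across several layers, so the hidden-layer parameter count shrinks roughly by the tying factor. This is a minimal, hypothetical example of the general technique, not code from the dissertation; the class name, dimensions, and depth are illustrative assumptions.

import torch
import torch.nn as nn

class SharedLayerMLP(nn.Module):
    """Toy model that reuses one hidden block across several layers.

    Storing a single shared block instead of `depth` independent ones
    reduces the hidden-layer parameter count by a factor of `depth`,
    at the cost of reduced per-layer expressivity.
    (Illustrative sketch only; not the dissertation's method.)
    """
    def __init__(self, dim: int = 256, depth: int = 6, num_classes: int = 10):
        super().__init__()
        # One block, applied `depth` times (weights tied across layers).
        self.shared_block = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
        )
        self.depth = depth
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for _ in range(self.depth):
            x = self.shared_block(x)  # same parameters every iteration
        return self.head(x)

if __name__ == "__main__":
    model = SharedLayerMLP()
    n_params = sum(p.numel() for p in model.parameters())
    print(f"parameters: {n_params}")            # far fewer than an untied 6-layer MLP
    print(model(torch.randn(4, 256)).shape)     # torch.Size([4, 10])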

