
dc.contributor.advisor: Huang, Jianhua
dc.contributor.advisor: Pati, Debdeep
dc.creator: Armandpour, Mohammadreza
dc.date.accessioned: 2023-02-07T16:11:25Z
dc.date.available: 2024-05-01T06:07:38Z
dc.date.created: 2022-05
dc.date.issued: 2022-04-11
dc.date.submitted: May 2022
dc.identifier.uri: https://hdl.handle.net/1969.1/197220
dc.description.abstract: The promise of deep learning is to discover rich, hierarchical models that represent probability distributions over the kinds of data encountered in artificial intelligence applications, such as natural images, audio waveforms containing speech, and symbols in natural language corpora. In recent years, the most striking successes in deep learning have involved generative models. However, in their vanilla forms, generative models have a number of shortcomings and failure modes that can hinder their application: they can be difficult to train on high-dimensional data, and they can fail at tasks such as the generation of realistic artificial data. In this thesis, we first explore the reasons for these failures in adversarially trained generative models and propose a novel approach to alleviate these shortcomings. Then, we discuss how a learned generative model can be employed for a downstream task such as speech recognition.
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: GANs
dc.subject: Generative Models
dc.subject: Deep Learning
dc.subject: ASR
dc.subject: Speech recognition
dc.title: Deep Generative Models: Pitfalls and Fixes
dc.type: Thesis
thesis.degree.department: Statistics
thesis.degree.discipline: Statistics
thesis.degree.grantor: Texas A&M University
thesis.degree.name: Doctor of Philosophy
thesis.degree.level: Doctoral
dc.contributor.committeeMember: Hu, Xia Ben
dc.contributor.committeeMember: Ni, Yang
dc.type.material: text
dc.date.updated: 2023-02-07T16:11:26Z
local.embargo.terms: 2024-05-01
local.etdauthor.orcid: 0000-0002-4258-8688

