Show simple item record

dc.contributor.advisor: Qian, Xiaoning
dc.creator: Ardywibowo, Randy
dc.date.accessioned: 2023-09-18T16:13:28Z
dc.date.created: 2022-12
dc.date.issued: 2022-08-18
dc.date.submitted: December 2022
dc.identifier.uri: https://hdl.handle.net/1969.1/198478
dc.description.abstract: Artificial Intelligence and Machine Learning (AI/ML) systems have been widely adopted with the increasing availability of data in a variety of applications, including computer vision, activity recognition, autonomous driving, healthcare, and many other areas of science and engineering. Several challenges arise in translating these systems into effective and reliable decision making. Besides common challenges in analyzing sensor and behavioral data, such as missing values and outliers, growing concerns about overfitting arise in widely used ML models such as Deep Neural Networks (DNNs). This is exacerbated when considering their robustness and generalizability in real-world safety-critical applications such as autonomous driving and healthcare. It is therefore important to have accurate predictions as well as uncertainty estimates in the presence of data defects and anomalies. Bayesian learning is a promising field that works with probabilistic models that explicitly account for uncertainty. In this field, models such as Gaussian Processes (GPs) and, more recently, Bayesian Neural Networks (BNNs), which define probability distributions over functions, are used to generalize from observed data while accounting for the uncertainty of that generalization in a principled way. Moreover, such an accurate characterization of the data uncertainty allows us to make accurate predictions in the face of irregularities. Such a system can also benefit other practical aspects of deploying AI/ML systems, such as reducing the resources used to collect features, the energy used for inference, and the time spent on experiments. Indeed, accurate model uncertainty estimates allow us to selectively deploy resources to cases where uncertainty is high, while remaining efficient in cases with high certainty. In this work, we present a robust framework for enabling uncertainty-aware AI/ML through differentiable reparameterizations of discrete variational distributions. This enables expressive distributions to be used to tractably approximate the posterior model distribution, especially in BNNs. We apply this framework to develop systems that handle missing values and outliers, quantify uncertainty, detect outliers, achieve resource-efficient machine learning, and continually learn novel concepts from a stream of data.
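To illustrate the core technique named in the abstract, the sketch below shows one common way to build a differentiable reparameterization of a discrete variational distribution: the Gumbel-Softmax (Concrete) relaxation. This is a minimal illustrative sketch, not the thesis's actual implementation; the function name, the use of PyTorch, and the toy objective are all assumptions.

    # Minimal sketch (illustrative only): Gumbel-Softmax (Concrete) relaxation,
    # one common differentiable reparameterization of a discrete (categorical)
    # variational distribution. Framework (PyTorch) and names are assumptions.
    import torch
    import torch.nn.functional as F

    def gumbel_softmax_sample(logits, temperature=0.5):
        # Sample Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1).
        u = torch.rand_like(logits)
        gumbel = -torch.log(-torch.log(u + 1e-20) + 1e-20)
        # Relax the argmax over the perturbed logits into a softmax so gradients
        # can flow back to the variational parameters (the logits).
        return F.softmax((logits + gumbel) / temperature, dim=-1)

    # Toy usage: a learnable categorical distribution over 4 discrete choices,
    # e.g. whether to keep or drop a component of a Bayesian neural network.
    variational_logits = torch.zeros(4, requires_grad=True)
    sample = gumbel_softmax_sample(variational_logits)   # approximately one-hot
    loss = (sample * torch.arange(4.0)).sum()            # placeholder objective
    loss.backward()                                      # gradients reach the logits
    print(sample, variational_logits.grad)

In practice the temperature is typically annealed toward zero during training so that the relaxed samples approach one-hot vectors while gradients remain well defined.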
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: Bayesian inference
dc.subject: variational inference
dc.subject: uncertainty quantification
dc.subject: deep learning
dc.subject: Bayesian neural networks
dc.title: Learning under Data Irregularity and Uncertainty
dc.type: Thesis
thesis.degree.department: Electrical and Computer Engineering
thesis.degree.discipline: Electrical Engineering
thesis.degree.grantor: Texas A&M University
thesis.degree.name: Doctor of Philosophy
thesis.degree.level: Doctoral
dc.contributor.committeeMember: Wang, Zhangyang
dc.contributor.committeeMember: Braga-Neto, Ulisses
dc.contributor.committeeMember: Kumar, Panganamala
dc.type.material: text
dc.date.updated: 2023-09-18T16:13:28Z
local.embargo.terms: 2024-12-01
local.embargo.lift: 2024-12-01
local.etdauthor.orcid: 0000-0002-6590-9026

