
dc.contributor.advisor: Narayanan, Krishna
dc.contributor.advisor: Duffield, Nick
dc.creator: Hasanzadehmoghimi, Arman
dc.date.accessioned: 2022-07-27T16:47:25Z
dc.date.available: 2023-12-01T09:23:03Z
dc.date.created: 2021-12
dc.date.issued: 2021-12-03
dc.date.submitted: December 2021
dc.identifier.uri: https://hdl.handle.net/1969.1/196407
dc.description.abstract: In this dissertation, we propose novel Bayesian machine learning models to solve various graph analytics problems, including graph representation learning, graph generative modeling, structured semi-supervised learning, and relational learning. Our proposed methods model the different components of a graph, including nodes, node attributes, and the graph structure, as distributions. More specifically, our proposed methods are Bayesian generative models with robust variational inference and hence come equipped with natural uncertainty estimates. First, we propose Semi-Implicit Graph Variational Autoencoders (SIG-VAE) (Chapter 3) for probabilistic representation learning in graph-structured data. SIG-VAE employs a hierarchical variational framework that enables neighboring nodes to share distribution information for better generative modeling of the graph dependency structure, together with a Bernoulli-Poisson link decoder. SIG-VAE integrates a carefully designed generative model, well suited to real-world sparse graphs, with a sophisticated semi-implicit variational inference network that propagates graph structural information and distribution uncertainty to capture complex posteriors that may exhibit heavy tails, multiple modes, and skewness. SIG-VAE provides highly interpretable latent representations and significantly outperforms state-of-the-art methods on several graph analytic tasks. Second, we propose Bayesian Graph Neural Networks with Graph DropConnect (Chapter 4), a unified framework for adaptive connection sampling in graph neural networks (GNNs), called Graph DropConnect (GDC), that generalizes existing stochastic regularization methods for training GNNs. The proposed framework not only alleviates the over-smoothing and overfitting tendencies of deep GNNs, but also enables learning with uncertainty in graph analytic tasks. Instead of using fixed sampling rates or hand-tuning them as model hyperparameters, as in existing stochastic regularization methods, GDC sampling rates can be trained jointly with the GNN model parameters. GNN training with GDC is shown to be mathematically equivalent to an efficient approximation of training Bayesian GNNs. Finally, we propose MoReL: Multi-modal Relational Learning (Chapter 5) to infer hidden relations among features in heterogeneous views using a fused Gromov-Wasserstein (FGW) regularization between the latent representations of corresponding views. Such optimal transport regularization in a deep Bayesian generative model not only allows incorporating view-specific side information, whether graph-structured or unstructured, but also increases model flexibility through distribution-based regularization. We apply MoReL to the integrative analysis of multi-omics data to infer molecular interactions.
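The adaptive connection sampling behind GDC, as described in the abstract, can be illustrated with a minimal sketch. This is not the dissertation's implementation: the actual GDC learns its drop rates jointly with the GNN parameters via variational inference, whereas here the rate `drop_prob` is a fixed illustrative hyperparameter and `graph_dropconnect` is a hypothetical helper name. The sketch only shows the key generalization over DropEdge/Dropout, namely drawing an independent binary mask over adjacency entries per feature channel, so each channel propagates over its own sparsified graph.

```python
import numpy as np

rng = np.random.default_rng(0)

def graph_dropconnect(adj, features, drop_prob=0.5):
    """Sketch of adaptive connection sampling: instead of dropping
    whole edges (DropEdge) or whole feature channels (Dropout),
    draw one Bernoulli mask per (edge, channel) pair."""
    n, f = features.shape
    # one binary adjacency mask per feature channel
    masks = rng.binomial(1, 1.0 - drop_prob, size=(f, n, n))
    out = np.zeros_like(features, dtype=float)
    for c in range(f):
        # propagate channel c through its own masked adjacency
        out[:, c] = (masks[c] * adj) @ features[:, c]
    # rescale so the expected output matches full propagation,
    # as in inverted dropout
    return out / (1.0 - drop_prob)
```

With `drop_prob=0` this reduces to ordinary message passing (`adj @ features`); at sampling time, repeated stochastic forward passes yield the uncertainty estimates the abstract refers to.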
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: Bayesian statistics
dc.subject: machine learning
dc.subject: graph deep learning
dc.subject: representation learning
dc.subject: relation learning
dc.subject: Bayesian deep learning
dc.title: Bayesian Machine Learning on Graphs
dc.type: Thesis
thesis.degree.department: Electrical and Computer Engineering
thesis.degree.discipline: Electrical Engineering
thesis.degree.grantor: Texas A&M University
thesis.degree.name: Doctor of Philosophy
thesis.degree.level: Doctoral
dc.contributor.committeeMember: Chamberland, Jean-Francois
dc.contributor.committeeMember: Katzfuss, Matthias
dc.type.material: text
dc.date.updated: 2022-07-27T16:47:26Z
local.embargo.terms: 2023-12-01
local.etdauthor.orcid: 0000-0003-1694-2249

