Bayesian Machine Learning on Graphs
Abstract
In this dissertation, we propose novel Bayesian machine learning models for a range of graph analytics problems, including graph representation learning, graph generative modeling, structured semi-supervised learning, and relational learning. Our methods model the components of a graph, namely its nodes, node attributes, and structure, as distributions. More specifically, they are Bayesian generative models equipped with robust variational inference and hence provide natural uncertainty estimates.
First, we propose Semi-Implicit Graph Variational Autoencoders (SIG-VAE) (Chapter 3) for probabilistic representation learning on graph-structured data. SIG-VAE employs a hierarchical variational framework that lets neighboring nodes share distribution information, enabling better generative modeling of the graph dependency structure, together with a Bernoulli-Poisson link decoder. It integrates a carefully designed generative model, well suited to real-world sparse graphs, with a semi-implicit variational inference network that propagates graph structural information and distributional uncertainty to capture complex posteriors exhibiting heavy tails, multiple modes, and skewness. SIG-VAE provides highly interpretable latent representations and significantly outperforms state-of-the-art methods on several graph analytic tasks.
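The two ingredients named above can be sketched in a few lines. This is a minimal illustrative sketch, not the dissertation's implementation: the inference "network" is a random linear map standing in for SIG-VAE's graph neural network, and `delta` is an assumed scale parameter. What it shows is the semi-implicit structure (noise injected before computing the Gaussian mean, making the mixing distribution implicit) and the Bernoulli-Poisson link, p(A_ij = 1) = 1 - exp(-delta * <z_i, z_j>), which naturally favors sparse graphs.

```python
import numpy as np

rng = np.random.default_rng(0)

def semi_implicit_sample(x, latent_dim=3):
    """Illustrative semi-implicit posterior draw: injected noise eps makes
    the distribution over mu implicit, while the final layer stays an
    explicit Gaussian. The linear map w is a placeholder for the
    graph inference network."""
    eps = rng.normal(size=x.shape)                 # injected noise
    w = rng.normal(size=(x.shape[1], latent_dim))  # placeholder "network"
    mu = np.tanh((x + eps) @ w)                    # implicit mixing over mu
    z = mu + 0.1 * rng.normal(size=mu.shape)       # explicit Gaussian layer
    return np.abs(z)  # nonnegative embeddings for the Bernoulli-Poisson link

def bernoulli_poisson_edge_prob(z, delta=1.0):
    """Bernoulli-Poisson link: p(A_ij = 1) = 1 - exp(-delta * <z_i, z_j>).
    Small inner products give small edge probabilities, matching sparsity."""
    return 1.0 - np.exp(-delta * (z @ z.T))

x = rng.normal(size=(5, 4))  # toy node features
probs = bernoulli_poisson_edge_prob(semi_implicit_sample(x))
```

Drawing `probs` several times with fresh noise gives different posterior samples of the edge-probability matrix, which is where the model's uncertainty estimates come from.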
Next, we propose Bayesian Graph Neural Networks with Graph DropConnect (Chapter 4), a unified framework for adaptive connection sampling in graph neural networks (GNNs), called Graph DropConnect (GDC), that generalizes existing stochastic regularization methods for training GNNs. The framework not only alleviates the over-smoothing and overfitting tendencies of deep GNNs, but also enables learning with uncertainty in graph analytic tasks. Instead of fixing the sampling rates or hand-tuning them as model hyperparameters, as in existing stochastic regularization methods, GDC learns them jointly with the GNN model parameters. We show that GNN training with GDC is mathematically equivalent to an efficient approximation of training Bayesian GNNs.
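The connection-sampling idea can be sketched as follows. This is a simplified sketch, not the dissertation's formulation: `drop_prob` is held fixed here, whereas in GDC the drop rate is a learned parameter trained jointly with the weights, and the renormalization below is a basic mean-aggregation stand-in for the paper's graph convolution.

```python
import numpy as np

rng = np.random.default_rng(1)

def gdc_layer(adj, h, weight, drop_prob=0.3):
    """One graph-convolution layer with GDC-style adaptive connection
    sampling: a fresh Bernoulli mask over the edges is drawn for every
    output channel, so messages flow over a different random subgraph
    per channel. drop_prob is fixed here for illustration only."""
    msgs = h @ weight                  # (n, f_out) transformed features
    out = np.empty_like(msgs)
    for c in range(msgs.shape[1]):
        mask = adj * (rng.random(adj.shape) > drop_prob)   # sampled edges
        deg = mask.sum(axis=1, keepdims=True).clip(min=1)  # renormalize
        out[:, c] = (mask / deg) @ msgs[:, c]              # mean aggregation
    return out

adj = (rng.random((6, 6)) > 0.5).astype(float)  # toy adjacency matrix
np.fill_diagonal(adj, 1.0)                      # keep self-loops
h = rng.normal(size=(6, 4))                     # toy node features
w = rng.normal(size=(4, 2))                     # layer weights
out = gdc_layer(adj, h, w)
```

Running the forward pass repeatedly with fresh masks and averaging the outputs is the Monte Carlo view that connects this sampling scheme to approximate Bayesian inference over the GNN.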
Finally, we propose MoReL: Multi-modal Relational Learning (Chapter 5), which infers hidden relations among features in heterogeneous views using a fused Gromov-Wasserstein (FGW) regularization between the latent representations of corresponding views. This optimal transport regularization in a deep Bayesian generative model not only allows incorporating view-specific side information, whether the data in a view are graph-structured or unstructured, but also increases model flexibility through its distribution-based regularization. We apply MoReL to integrative analysis of multi-omics data, inferring molecular interactions.
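To make the FGW regularizer concrete, the sketch below evaluates the fused Gromov-Wasserstein objective for a fixed coupling T between two views: a Wasserstein term comparing cross-view feature costs M, plus a Gromov term comparing intra-view structure matrices C1 and C2. This is only the objective for a given T, assuming squared structural loss; MoReL additionally optimizes over the coupling inside a deep Bayesian generative model, which is not reproduced here.

```python
import numpy as np

def fgw_cost(T, M, C1, C2, alpha=0.5):
    """Fused Gromov-Wasserstein objective for a fixed coupling T (n1 x n2):
    (1 - alpha) * <M, T>                                  (feature transport)
    + alpha * sum_ijkl (C1[i,k] - C2[j,l])^2 T[i,j] T[k,l] (structure distortion)
    with squared loss on the structure term."""
    feat = np.sum(M * T)
    D = C1[:, None, :, None] - C2[None, :, None, :]  # D[i,j,k,l] = C1[i,k]-C2[j,l]
    struct = np.einsum('ij,ijkl,kl->', T, D ** 2, T)
    return (1.0 - alpha) * feat + alpha * struct

n = 4
C = np.abs(np.random.default_rng(2).normal(size=(n, n)))
C = (C + C.T) / 2          # a toy symmetric intra-view distance matrix
T = np.eye(n) / n          # coupling that matches two identical views
cost_same = fgw_cost(T, np.zeros((n, n)), C, C)  # zero: no distortion
```

With identical views, zero feature cost, and a matching coupling, the objective vanishes, which is the sense in which FGW rewards latent representations whose geometry agrees across views.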
Subject
Bayesian statistics
machine learning
graph deep learning
representation learning
relational learning
Bayesian deep learning
Citation
Hasanzadehmoghimi, Arman (2021). Bayesian Machine Learning on Graphs. Doctoral dissertation, Texas A&M University. Available electronically from https://hdl.handle.net/1969.1/196407.
Related items
Showing items related by title, author, creator and subject.
- Peterson, Cheryl (2012-10-19) PlantingScience (PS) is a unique web-based learning system designed to develop secondary students' scientific practices and proficiencies as they engage in hands-on classroom investigations while being mentored online by ...
- Guimaraes Goecks, Vinicius (2020-03-17) Recent successes combine reinforcement learning algorithms and deep neural networks, despite reinforcement learning not being widely applied to robotics and real world scenarios. This can be attributed to the fact that ...
- Rengarajan, Desik (2023-07-07) Reinforcement learning is a powerful approach for training intelligent agents to make decisions in complex environments. However, these algorithms often struggle when faced with challenging scenarios, such as sparse reward ...