Show simple item record

dc.contributor.advisor    Mallick, Bani K
dc.contributor.advisor    Pati, Debdeep
dc.creator    Niu, Yabo
dc.date.accessioned    2019-11-25T22:27:17Z
dc.date.available    2021-08-01T07:35:45Z
dc.date.created    2019-08
dc.date.issued    2019-07-01
dc.date.submitted    August 2019
dc.identifier.uri    https://hdl.handle.net/1969.1/186527
dc.description.abstract    Gaussian graphical models (GGMs) are a popular tool for learning, in the form of a graph, the dependence structure among variables of interest. Bayesian methods have gained popularity in the last two decades due to their ability to simultaneously learn the covariance and the graph and to characterize uncertainty in the selection. In this study, I first develop a Bayesian method to incorporate covariate information in the GGM setup within a nonlinear seemingly unrelated regression framework. I propose a joint predictor and graph selection model and develop an efficient collapsed Gibbs sampler to search the joint model space. Furthermore, I investigate its theoretical variable selection properties. I demonstrate the proposed method on a variety of simulated data sets, concluding with a real data set from The Cancer Proteome Atlas (TCPA) project. For scalability of Markov chain Monte Carlo algorithms, decomposability is commonly imposed on the graph space. A wide variety of graphical conjugate priors have been proposed jointly on the covariance matrix and the graph, with improved algorithms to search the space of decomposable graphs, rendering these methods extremely popular in the context of multivariate dependence modeling. An open problem in Bayesian decomposable structure learning is whether the posterior distribution is able to select a meaningful decomposable graph that is “close,” in an appropriate sense, to the true non-decomposable graph when the dimension of the variables increases with the sample size. In the second part of this study, I explore specific conditions on the true precision matrix and the graph which result in an affirmative answer to this question, using a commonly used hyper-inverse Wishart prior on the covariance matrix and a suitable complexity prior on the graph space, in both the well-specified and misspecified settings. In the absence of structural sparsity assumptions, the strong selection consistency holds in a high-dimensional setting where p = O(n^α) for α < 1/3. I show that when the true graph is non-decomposable, the posterior distribution on the graph concentrates on a set of graphs that are minimal triangulations of the true graph.    en
dc.format.mimetype    application/pdf
dc.language.iso    en
dc.subject    Gaussian Graphical Models    en
dc.subject    Selection Consistency    en
dc.subject    Hyper-inverse Wishart    en
dc.title    Topics On Bayesian Gaussian Graphical Models    en
dc.type    Thesis    en
thesis.degree.department    Statistics    en
thesis.degree.discipline    Statistics    en
thesis.degree.grantor    Texas A&M University    en
thesis.degree.name    Doctor of Philosophy    en
thesis.degree.level    Doctoral    en
dc.contributor.committeeMember    Bhattacharya, Anirban
dc.contributor.committeeMember    Sang, Huiyan
dc.contributor.committeeMember    Ni, Yang
dc.contributor.committeeMember    Ding, Yu
dc.type.material    text    en
dc.date.updated    2019-11-25T22:27:17Z
local.embargo.terms    2021-08-01
local.etdauthor.orcid    0000-0001-8087-6747
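
The misspecified-case result in the abstract concerns minimal triangulations of a non-decomposable true graph. As a minimal sketch of that notion (not taken from the thesis; the networkx-based example below is purely illustrative), the 4-cycle is the smallest non-decomposable graph, and adding a single chord yields a minimal triangulation, i.e., a decomposable graph that is “close” to the truth in the sense described above.

import networkx as nx

# True graph: the 4-cycle 0-1-2-3-0, the smallest non-chordal
# (hence non-decomposable) graph.
true_graph = nx.cycle_graph(4)
print(nx.is_chordal(true_graph))     # False

# Adding one chord breaks the chordless 4-cycle, giving a chordal
# (decomposable) graph.
triangulated = true_graph.copy()
triangulated.add_edge(0, 2)
print(nx.is_chordal(triangulated))   # True

# The fill-in is a single edge; removing it recreates the chordless
# cycle, so this triangulation is minimal.
print(set(triangulated.edges()) - set(true_graph.edges()))  # {(0, 2)}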

