dc.contributor.advisor: Mallick, Bani Kumar
dc.contributor.advisor: Bhattacharya, Anirban
dc.creator: Chakraborty, Antik
dc.date.accessioned: 2019-01-18T16:46:22Z
dc.date.available: 2020-08-01T06:37:11Z
dc.date.created: 2018-08
dc.date.issued: 2018-08-06
dc.date.submitted: August 2018
dc.identifier.uri: https://hdl.handle.net/1969.1/174156
dc.description.abstract: Sparsity is a standard structural assumption made when modeling high-dimensional statistical parameters. The assumption essentially entails a lower-dimensional embedding of the high-dimensional parameter, which enables sound statistical inference. Beyond this statistical motivation, in many modern applications of statistics, such as genomics and neuroscience, the parameters of interest are indeed of this nature. For almost two decades, spike-and-slab priors have been the Bayesian gold standard for modeling sparsity; owing to their computational bottlenecks, however, shrinkage priors have emerged as a powerful alternative. Members of this family can almost always be represented as scale mixtures of Gaussian distributions, so posterior Markov chain Monte Carlo (MCMC) updates of the related parameters are relatively easy to design. Although shrinkage priors were tipped as computationally scalable in high dimensions, when the number of parameters is in the thousands or more they come with computational challenges of their own. Standard MCMC algorithms implementing shrinkage priors generally scale cubically in the dimension of the parameter, which severely limits their real-life application. The first chapter of this document addresses this computational issue and proposes an alternative exact posterior sampling algorithm whose complexity scales linearly in the ambient dimension (a sketch of such a sampler follows this record). The algorithm developed in the first chapter is designed specifically for regression problems, but simple modifications allow it to tackle other high-dimensional problems where these priors have found little application. In the second chapter, we develop a Bayesian method based on shrinkage priors for high-dimensional multiple-response regression and show how proper shrinkage may be used to model high-dimensional low-rank matrices. Unlike spike-and-slab priors, shrinkage priors cannot produce exact zeros in the posterior, so in this chapter we also devise two independent post-MCMC processing schemes based on the idea of soft-thresholding, with default choices of tuning parameters; these post-processing steps provide exact estimates of the row and rank sparsity of the parameter matrix. The theoretical study of posterior convergence rates under shrinkage priors is relatively underdeveloped. While we do not attempt to provide a unifying foundation for studying these properties, in chapter three we choose a specific member of the shrinkage family, the horseshoe prior, and study its convergence rates in several high-dimensional models. These results are new to the literature and establish the horseshoe prior's optimality in the minimax sense in high-dimensional problems. [en]
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: High-dimensional [en]
dc.subject: Sparsity [en]
dc.subject: Shrinkage priors [en]
dc.subject: Low-rank [en]
dc.subject: Convergence rates [en]
dc.subject: Factor models [en]
dc.subject: Regression [en]
dc.title: Bayesian Shrinkage: Computation, Methods and Theory [en]
dc.type: Thesis [en]
thesis.degree.department: Statistics [en]
thesis.degree.discipline: Statistics [en]
thesis.degree.grantor: Texas A&M University [en]
thesis.degree.name: Doctor of Philosophy [en]
thesis.degree.level: Doctoral [en]
dc.contributor.committeeMember: Carroll, Raymond Jerome
dc.contributor.committeeMember: Sivakumar, Natarajan
dc.type.material: text [en]
dc.date.updated: 2019-01-18T16:46:35Z
local.embargo.terms: 2020-08-01
local.etdauthor.orcid: 0000-0002-4369-9232
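
As a concrete illustration of the linear-complexity exact sampler mentioned in the abstract, the following is a minimal sketch, assuming the target is the standard Gaussian scale-mixture conditional posterior N((Phi'Phi + D^-1)^-1 Phi'alpha, (Phi'Phi + D^-1)^-1), with D the diagonal matrix of current mixture variances. The function name and NumPy implementation are illustrative, not taken from the thesis; the key idea is to solve an n x n linear system instead of factorizing the p x p matrix Phi'Phi + D^-1, so for n much smaller than p the per-draw cost is O(n^2 p), linear in the ambient dimension p rather than cubic.

    import numpy as np

    def sample_scale_mixture_posterior(Phi, alpha, d, rng=None):
        """Draw one exact sample from N(A^-1 Phi.T alpha, A^-1),
        where A = Phi.T Phi + D^-1 and D = diag(d).

        Phi   : (n, p) design matrix
        alpha : (n,)   scaled response vector
        d     : (p,)   current scale-mixture variances (diagonal of D)
        """
        rng = np.random.default_rng() if rng is None else rng
        n, p = Phi.shape
        u = np.sqrt(d) * rng.standard_normal(p)      # u ~ N(0, D)
        delta = rng.standard_normal(n)               # delta ~ N(0, I_n)
        v = Phi @ u + delta                          # v ~ N(0, Phi D Phi.T + I_n)
        # Solve the n x n system (Phi D Phi.T + I_n) w = alpha - v;
        # forming the matrix costs O(n^2 p), the solve costs O(n^3).
        M = Phi @ (d[:, None] * Phi.T) + np.eye(n)
        w = np.linalg.solve(M, alpha - v)
        # By the Woodbury identity, u + D Phi.T w has exactly the target law.
        return u + d * (Phi.T @ w)

Embedded in a Gibbs sampler, such a step would be called once per iteration with d refreshed from the current mixture-variance updates of the chosen shrinkage prior.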

