Show simple item record

dc.contributor.advisor: Wang, Suojin
dc.contributor.advisor: Huang, Jianhua
dc.creator: Xu, Ganggang
dc.date.accessioned: 2012-02-14T22:20:06Z
dc.date.accessioned: 2012-02-16T16:18:31Z
dc.date.available: 2014-01-15T07:05:30Z
dc.date.created: 2011-12
dc.date.issued: 2012-02-14
dc.date.submitted: December 2011
dc.identifier.uri: https://hdl.handle.net/1969.1/ETD-TAMU-2011-12-10451
dc.description.abstract: Penalized methods are becoming increasingly popular in statistical research. This dissertation covers two major applications of penalized methods: variable selection and nonparametric function estimation. The following two paragraphs briefly introduce each topic.

Infinite-variance autoregressive models are important for modeling heavy-tailed time series. We use a penalty method to conduct model selection for autoregressive models with innovations in the domain of attraction of a stable law indexed by α ∈ (0, 2). We show that by combining the least absolute deviation loss function with the adaptive lasso penalty, we can consistently identify the true model. At the same time, the resulting coefficient estimator converges at a rate of n^(−1/α). The proposed approach gives a unified variable selection procedure for both finite- and infinite-variance autoregressive models.

While automatic smoothing parameter selection for nonparametric function estimation has been extensively researched for independent data, it has received much less attention for clustered and longitudinal data. Although leave-subject-out cross-validation (CV) has been widely used, its theoretical properties are unknown and its minimization is computationally expensive, especially when there are multiple smoothing parameters. Focusing on penalized modeling methods, we show that leave-subject-out CV is optimal in the sense that its minimization is asymptotically equivalent to minimization of the true loss function. We develop an efficient Newton-type algorithm to compute the smoothing parameters that minimize the CV criterion. Furthermore, we derive a simplification of the leave-subject-out CV that leads to a more efficient algorithm for selecting the smoothing parameters. We show that the simplified CV criterion is asymptotically equivalent to the unsimplified one and thus enjoys the same optimality property. This CV criterion also provides a completely data-driven approach to selecting a working covariance structure when using generalized estimating equations in longitudinal data analysis. Our results are applicable to additive, linear, varying-coefficient, and nonlinear models with data from exponential families. [en]
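The abstract's first contribution combines least absolute deviation (LAD) loss with an adaptive lasso penalty for autoregressive model selection. The sketch below illustrates that combination on simulated data; it is not the dissertation's code, and the simulation setup, the Student-t stand-in for stable-law innovations, the penalty level, and the use of a general-purpose optimizer are all illustrative assumptions.

```python
# Illustrative sketch: LAD loss + adaptive lasso for selecting the lags
# of an autoregressive model (assumptions noted in the lead-in above).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate an AR(2) series with heavy-tailed innovations; a stable law
# could be used, here Student-t(1.5) serves as a heavy-tailed stand-in.
n, true_phi = 500, np.array([0.5, -0.3])
eps = rng.standard_t(df=1.5, size=n + 100)
y = np.zeros(n + 100)
for t in range(2, n + 100):
    y[t] = true_phi @ y[t - 2:t][::-1] + eps[t]
y = y[100:]  # drop burn-in

p_max = 5  # candidate maximum autoregressive order
X = np.column_stack([y[p_max - j - 1:n - j - 1] for j in range(p_max)])
z = y[p_max:]

# Pilot (unpenalized) LAD fit supplies adaptive weights w_j = 1/|phi_j|.
lad = lambda b: np.abs(z - X @ b).sum()
pilot = minimize(lad, np.zeros(p_max), method="Powell").x
w = 1.0 / np.maximum(np.abs(pilot), 1e-8)

lam = 2.0  # penalty level; in practice chosen by a criterion such as BIC
obj = lambda b: lad(b) + lam * (w * np.abs(b)).sum()
phi = minimize(obj, pilot, method="Powell").x
phi[np.abs(phi) < 1e-3] = 0.0  # zero out numerical noise

print("selected lags:", np.nonzero(phi)[0] + 1)
```

Because the adaptive weights blow up for lags whose pilot estimate is near zero, the penalty suppresses spurious lags far more strongly than genuine ones, which is the mechanism behind the consistency result described in the abstract.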
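The second contribution concerns leave-subject-out CV, which holds out one entire subject (cluster) at a time rather than one observation. A minimal sketch of that criterion for a single smoothing parameter follows; the simulated clustered data, the polynomial basis standing in for penalized splines, and the grid search (the dissertation instead develops a Newton-type algorithm) are all assumptions for illustration.

```python
# Illustrative sketch: leave-subject-out cross-validation for a single
# smoothing parameter in a ridge-type penalized fit to clustered data.
import numpy as np

rng = np.random.default_rng(1)

# 20 subjects, 10 observations each, sharing a smooth trend plus noise.
subjects, m = 20, 10
x = np.tile(np.linspace(0, 1, m), subjects)
sid = np.repeat(np.arange(subjects), m)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.4, size=x.size)

# Simple polynomial basis standing in for a penalized-spline basis.
B = np.vander(x, 8, increasing=True)
D = np.eye(B.shape[1])
D[0, 0] = 0.0  # leave the intercept unpenalized

def fit(Btr, ytr, lam):
    """Penalized least-squares coefficients for smoothing parameter lam."""
    return np.linalg.solve(Btr.T @ Btr + lam * D, Btr.T @ ytr)

def lso_cv(lam):
    """Leave one whole subject out at a time and accumulate squared error."""
    err = 0.0
    for s in range(subjects):
        tr, te = sid != s, sid == s
        beta = fit(B[tr], y[tr], lam)
        err += np.sum((y[te] - B[te] @ beta) ** 2)
    return err / y.size

grid = 10.0 ** np.arange(-6, 3)
scores = [lso_cv(l) for l in grid]
best = grid[int(np.argmin(scores))]
print("leave-subject-out CV chooses lambda =", best)
```

Deleting a whole subject at a time respects the within-subject correlation, which is why this criterion, unlike ordinary leave-one-out CV, remains sensible for clustered and longitudinal data; the abstract's optimality result says its minimizer asymptotically tracks the minimizer of the true loss.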
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.subject: Adaptive lasso [en]
dc.subject: Autoregressive model [en]
dc.subject: Infinite variance [en]
dc.subject: Least absolute deviation [en]
dc.subject: Cross-validation [en]
dc.subject: Generalized estimating equations [en]
dc.subject: Multiple smoothing parameters [en]
dc.subject: Penalized splines [en]
dc.subject: Working covariance matrix [en]
dc.title: Variable Selection and Function Estimation Using Penalized Methods [en]
dc.type: Thesis [en]
thesis.degree.department: Statistics [en]
thesis.degree.discipline: Statistics [en]
thesis.degree.grantor: Texas A&M University [en]
thesis.degree.name: Doctor of Philosophy [en]
thesis.degree.level: Doctoral [en]
dc.contributor.committeeMember: Carroll, Raymond J.
dc.contributor.committeeMember: Zhou, Jianxin
dc.type.genre: thesis [en]
dc.type.material: text [en]
local.embargo.terms: 2014-01-15

