dc.description.abstract | We consider the asymptotic behavior of the l^1-regularized least squares estimator (LASSO) for the linear regression model
Y = X beta + xi
with training data (X,Y) in R^{nxp}xR^n, true parameter beta in R^p, and observation noise xi in R^n. The LASSO estimator, defined by
betahat in argmin_{u in R^p} ||Xu - Y||^2 + lambda ||u||_1,
introduces a bias toward 0 to encourage sparse estimates. LASSO has become a staple of the statistician's toolkit: it performs well in practice and can be computed quickly.
In the case that the xi_i are i.i.d. with tail probabilities P{|xi_i| > t} = t^{-alpha} for some 1 < alpha < 2, Chatterjee and Lahiri found the exact almost-sure rate at which the LASSO estimator betahat tends to beta. We consider instead xi_i that are i.i.d., possess all moments of order less than alpha, and eventually nearly follow a Pareto tail P{|xi_i| > t} = t^{-alpha}. Specifically, we require only that the tails of xi_i be regularly varying.
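The setting above can be simulated numerically. The following is a minimal sketch, not taken from the thesis: it solves the LASSO objective ||Xu - Y||^2 + lambda ||u||_1 by proximal gradient descent (ISTA) in plain numpy, with Pareto(alpha)-tailed noise satisfying P{|xi_i| > t} = t^{-alpha} for t >= 1. The solver, the seed, and the particular choice of lambda are illustrative assumptions, not the estimator or tuning studied in the thesis.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (componentwise soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, Y, lam, n_iter=2000):
    # Minimize ||X u - Y||^2 + lam * ||u||_1 by proximal gradient (ISTA).
    L = 2.0 * np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the smooth part
    u = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * X.T @ (X @ u - Y)
        u = soft_threshold(u - grad / L, lam / L)
    return u

rng = np.random.default_rng(0)
n, p, alpha = 200, 10, 1.5          # 1 < alpha < 2: infinite variance
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]         # sparse true parameter

# Symmetrized Pareto noise: P{|xi_i| > t} = t^{-alpha} for t >= 1.
xi = (rng.pareto(alpha, n) + 1.0) * rng.choice([-1.0, 1.0], n)
Y = X @ beta + xi

# Hypothetical tuning choice, for illustration only.
betahat = lasso_ista(X, Y, lam=2.0 * np.sqrt(n * np.log(p)))
```

Note that soft thresholding is exactly where the bias toward 0 enters: coordinates whose gradient step stays below lambda/L are set to zero, which is what produces sparse estimates.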
We center and scale both the quantity inside the arg min and betahat itself to prepare for a CLT. We find conditions that guarantee both convergence (uniformly over a class of designs X) of the quantity inside the arg min and uniform tightness of the centered, scaled betahat. Then, we use a standard theorem to pass to uniform convergence of the centered, scaled betahat. Finally, we use a basic inequality to prove rate consistency for betahat when p is allowed to increase with n. | en |