Advances in information technology are making large-scale data collection possible in most, if not all, fields of science and engineering and beyond. Data reduction or feature selection is often the first step toward solving these modern data problems. However, data reduction through model selection or l_0-constrained least squares (LS) optimization leads to a combinatorial search that is computationally infeasible for massive data problems. A computationally efficient alternative is l_1-constrained LS optimization, known as the Lasso.
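As a point of reference (the notation here is assumed rather than taken from the abstract: response y, design matrix X, coefficient vector beta, budget t, penalty level lambda), the constrained and penalized forms of the Lasso are

    \min_{\beta} \|y - X\beta\|_2^2 \quad \text{subject to} \quad \|\beta\|_1 \le t,

or, in the equivalent penalized form,

    \min_{\beta} \|y - X\beta\|_2^2 + \lambda \|\beta\|_1,

while the l_0 version replaces \|\beta\|_1 by the number of nonzero coefficients, which is what makes exhaustive model search combinatorial.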
In this talk, we first study the model selection property of the Lasso in linear regression models. We show that an Irrepresentable Condition on the design matrix is almost necessary and sufficient for the model selection consistency of the Lasso, both for fixed p and for p >> n, provided that the true model is sparse. Moreover, we describe the Boosted Lasso (BLasso) algorithm, which produces an approximation to the complete regularization path of the Lasso. BLasso consists of both a forward step and a backward step. The forward step is similar to Boosting and Forward Stagewise Fitting, but the backward step is new and crucial for BLasso to approximate the Lasso path in all situations. For cases with a finite number of base learners, when the step size goes to zero, the BLasso path is shown to converge to the Lasso path. Finally, the BLasso algorithm is extended to give an approximate path for the case of a convex loss function plus a convex penalty.
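The abstract does not spell out the update rules, so the following is only a rough sketch of the forward/backward idea for a squared-error loss, with simplified bookkeeping of the regularization level; the names (X, y, eps, xi, blasso_path) and the stopping rule are illustrative assumptions, not the speakers' exact algorithm.

import numpy as np

def squared_loss(X, y, beta):
    # 0.5 * residual sum of squares
    r = y - X @ beta
    return 0.5 * (r @ r)

def blasso_path(X, y, eps=0.01, n_steps=500, xi=1e-10):
    _, p = X.shape
    beta = np.zeros(p)

    # Initial forward step: coordinate/sign move of size eps that most reduces the loss.
    best_j, best_s, best_loss = 0, eps, np.inf
    for j in range(p):
        for s in (eps, -eps):
            trial = beta.copy(); trial[j] += s
            l = squared_loss(X, y, trial)
            if l < best_loss:
                best_j, best_s, best_loss = j, s, l
    beta[best_j] += best_s
    lam = (squared_loss(X, y, np.zeros(p)) - best_loss) / eps  # initial regularization level
    path = [beta.copy()]

    for _ in range(n_steps):
        cur = squared_loss(X, y, beta)

        # Candidate backward step: shrink one active coefficient toward zero by eps.
        back_j, back_loss = None, np.inf
        for j in np.nonzero(beta)[0]:
            trial = beta.copy(); trial[j] -= np.sign(beta[j]) * eps
            l = squared_loss(X, y, trial)
            if l < back_loss:
                back_j, back_loss = j, l

        # Take the backward step only if it lowers loss + lam * ||beta||_1 (penalty drops by lam*eps).
        if back_j is not None and back_loss - cur < lam * eps - xi:
            beta[back_j] -= np.sign(beta[back_j]) * eps
        else:
            # Otherwise take a forward step, same search as the initial one.
            fwd_j, fwd_s, fwd_loss = 0, eps, np.inf
            for j in range(p):
                for s in (eps, -eps):
                    trial = beta.copy(); trial[j] += s
                    l = squared_loss(X, y, trial)
                    if l < fwd_loss:
                        fwd_j, fwd_s, fwd_loss = j, s, l
            beta[fwd_j] += fwd_s
            # Relax lam when even the best forward step cannot pay for the larger penalty.
            lam = min(lam, (cur - fwd_loss) / eps)
        path.append(beta.copy())
    return path

The returned list of coefficient vectors traces an approximate regularization path from the empty model toward the unpenalized fit; in this sketch the step size eps plays the role of the discretization parameter that, per the abstract, must go to zero for the path to converge to the Lasso path.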