Abstract

An abundance of high-dimensional data has made L1-penalized regression, known as the lasso, an indispensable tool of the practitioner. A feature of the lasso is a "tuning" parameter that controls the amount of shrinkage applied to the coefficients. In practice, a value for the tuning parameter is chosen by cross-validation. It is shown that the model selected by the lasso can be extremely sensitive to the fold assignment used for cross-validation. A consequence of this sensitivity is that the results of a lasso analysis can lack interpretability. To overcome this model-selection instability of the lasso, a method called the percentile-lasso is introduced. The model selected by the percentile-lasso corresponds to the model selected by the lasso when the lasso is fitted using an appropriate percentile of the possible "optimal" tuning parameter values. It is demonstrated that the percentile-lasso can achieve substantial improvements in both model-selection stability and model-selection error compared with the lasso. Importantly, when applied to real data the percentile-lasso, unlike the lasso, produces interpretable results, that is, results that are robust to the assignment of observations to folds for cross-validation. The percentile-lasso is easily applied to extensions of the lasso and to penalized generalized linear models.
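As a rough illustration of the procedure described above, the following minimal Python sketch repeats cross-validation over random fold assignments, collects the "optimal" tuning parameter from each repeat, and refits the lasso at a percentile of those values. It assumes scikit-learn for the per-repeat cross-validation; the function name `percentile_lasso` and the parameters `n_repeats`, `q`, `n_folds`, and `seed` are hypothetical choices, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso, LassoCV
from sklearn.model_selection import KFold

def percentile_lasso(X, y, n_repeats=100, q=75, n_folds=10, seed=0):
    """Fit the lasso at the q-th percentile of the CV-optimal tuning
    parameters collected over repeated random fold assignments."""
    rng = np.random.RandomState(seed)
    optimal_alphas = []
    for _ in range(n_repeats):
        # Each repeat reassigns observations to folds at random; the
        # CV-optimal tuning parameter can differ markedly across repeats.
        folds = KFold(n_splits=n_folds, shuffle=True,
                      random_state=rng.randint(2**31 - 1))
        optimal_alphas.append(LassoCV(cv=folds).fit(X, y).alpha_)
    # Replace any single fold-assignment-dependent choice with a
    # percentile of the collected "optimal" values, then refit.
    alpha_q = np.percentile(optimal_alphas, q)
    return Lasso(alpha=alpha_q).fit(X, y)
```

A higher percentile `q` corresponds to a larger tuning parameter and hence a sparser model; the percentile actually recommended should be taken from the article itself.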

  • Publication date: 2014-02