Linear Model Selection by Cross-validation

We show that the inconsistency of leave-one-out cross-validation can be rectified by using a leave-n_v-out cross-validation with n_v, the number of observations reserved for validation, satisfying n_v/n → 1 as n → ∞.

Jun Shao

1993

Scholarcy highlights

  • We consider the problem of selecting a model having the best predictive ability among a class of linear models
  • The popular leave-one-out cross-validation method, which is asymptotically equivalent to many other model selection methods such as the Akaike information criterion, Mallows's C_p, and the bootstrap, is asymptotically inconsistent: the probability that it selects the model with the best predictive ability does not converge to 1 as the total number of observations n → ∞
  • This inconsistency can be rectified by a leave-n_v-out cross-validation in which n_v, the number of observations reserved for validation, satisfies n_v/n → 1 as n → ∞ (a sketch of the procedure follows this list)
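As an illustration, here is a minimal sketch of a leave-n_v-out (Monte Carlo) cross-validation for linear model selection, in the spirit of the result above. Everything concrete here is an assumption for illustration, not taken from the paper: the data are simulated, the helper `mccv_error` is hypothetical, the candidate models are all subsets of five predictors plus an intercept, the number of random splits (200) is arbitrary, and the construction-set size n_c = n^(3/4) is just one choice compatible with the requirement n_v/n → 1.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def mccv_error(X, y, cols, n_v, n_splits=200):
    """Average squared prediction error of the OLS fit on columns `cols`,
    estimated by Monte Carlo leave-n_v-out cross-validation."""
    n = len(y)
    errs = []
    for _ in range(n_splits):
        val = rng.choice(n, size=n_v, replace=False)   # validation indices
        tr = np.setdiff1d(np.arange(n), val)           # construction indices
        beta, *_ = np.linalg.lstsq(X[np.ix_(tr, cols)], y[tr], rcond=None)
        errs.append(np.mean((y[val] - X[np.ix_(val, cols)] @ beta) ** 2))
    return float(np.mean(errs))

# Simulated data: of five predictors, only the first two enter the true model.
n, p = 200, 5
X = np.column_stack([np.ones(n), rng.standard_normal((n, p))])
y = 2.0 + 1.5 * X[:, 1] - 1.0 * X[:, 2] + rng.standard_normal(n)

# Candidate models: the intercept column plus every nonempty predictor subset.
candidates = [(0,) + c
              for k in range(1, p + 1)
              for c in itertools.combinations(range(1, p + 1), k)]

# n_v/n -> 1: the construction set grows slower than n (here n_c = n^(3/4)).
n_c = int(round(n ** 0.75))
n_v = n - n_c

best = min(candidates, key=lambda cols: mccv_error(X, y, list(cols), n_v))
print("selected columns:", best)   # ideally (0, 1, 2): intercept, x1, x2
```

The design choice that matters is the split size: because the construction sample is asymptotically negligible relative to n, the validation error penalizes superfluous parameters strongly enough that, per the result above, the probability of selecting the model with the best predictive ability tends to 1; with leave-one-out (n_v = 1) that penalty vanishes and the method stays inconsistent.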
