1. High-dimensional bolstered error estimation
- Authors
Chao Sima, Ulisses Braga-Neto, and Edward R. Dougherty
- Subjects
Statistics and Probability, Feature vector, Breast Neoplasms, Feature selection, Biochemistry, Statistics, Humans, Molecular Biology, Oligonucleotide Array Sequence Analysis, Mathematics, Gene Expression Profiling, Reproducibility of Results, Variance (accounting), Original Papers, Computer Science Applications, Computational Mathematics, Computational Theory and Mathematics, Sample size determination, Feature (computer vision), Data Interpretation, Statistical, Sample Size, Classification rule, Kernel (statistics), Key (cryptography), Female, Multiple Myeloma, Algorithm, Algorithms
- Abstract
Motivation: In small-sample settings, bolstered error estimation has been shown to perform better than cross-validation and competitively with the bootstrap with regard to various criteria. The key issue for bolstering performance is the variance setting of the bolstering kernel. Heretofore, this variance has been determined non-parametrically from the data. Although bolstering based on this variance setting works well for small feature sets, results can deteriorate in high-dimensional feature spaces.

Results: This article computes an optimal kernel variance that depends on the classification rule, sample size, model and feature size, both the original number of features and the number remaining after feature selection. A key point is that the optimal variance is robust with respect to the model. This allows us to develop a method for selecting a suitable variance in real-world applications where the model is not known but the other factors determining the optimal kernel are known.

Availability: Companion website at http://compbio.tgen.org/paper_supp/high_dim_bolstering

Contact: edward@mail.ece.tamu.edu
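For orientation, the following is a minimal Monte Carlo sketch of bolstered resubstitution error estimation with a spherical Gaussian bolstering kernel, which is the quantity whose kernel variance the paper studies. The function name `bolstered_resubstitution_error`, the `sigma` and `n_mc` parameters, and the scikit-learn-style classifier with a `predict` method are illustrative assumptions, not the authors' implementation; in particular, the paper's contribution is how the kernel variance should be chosen given the classification rule, sample size, model and feature size, whereas here `sigma` is simply supplied by the caller.

```python
import numpy as np

def bolstered_resubstitution_error(clf, X, y, sigma, n_mc=100, rng=None):
    """Monte Carlo estimate of the bolstered resubstitution error.

    A zero-mean spherical Gaussian bolstering kernel with standard
    deviation `sigma` is centered at each training point, and the error
    of the trained classifier `clf` is averaged over points drawn from
    these kernels rather than over the training points themselves.
    """
    rng = np.random.default_rng(rng)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n, d = X.shape
    total_error = 0.0
    for xi, yi in zip(X, y):
        # Draw n_mc points from the bolstering kernel around xi.
        samples = xi + sigma * rng.standard_normal((n_mc, d))
        # Fraction of bolstered samples the classifier misclassifies.
        total_error += np.mean(clf.predict(samples) != yi)
    return total_error / n
```

As a usage sketch, one could fit any classifier exposing `predict` (e.g. scikit-learn's `LinearDiscriminantAnalysis`) on the training data and then call `bolstered_resubstitution_error(clf, X_train, y_train, sigma=0.5)`; larger `n_mc` reduces the Monte Carlo variance of the estimate at proportional computational cost.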
- Published
2011