- Title
- Utilizing stability criteria in choosing feature selection methods yields reproducible results in microbiome data
- Author
Shi Huang, Loki Natarajan, Ho-Cheol Kim, Rob Knight, Lingjing Jiang, Yoshiki Vázquez-Baeza, Austin D. Swafford, Laxmi Parida, Niina Haiminen, and Anna Paola Carrieri
- Subjects
feature selection, stability, reproducibility, microbiome, prediction, classification, machine learning, quantitative methods
- Abstract
Feature selection is indispensable in microbiome data analysis, but it can be particularly challenging because microbiome data sets are high-dimensional, underdetermined, sparse, and compositional. Considerable effort has recently gone into developing new feature selection methods that handle these data characteristics, but almost all methods have been evaluated on the performance of model predictions. Little attention has been paid to a fundamental question: how appropriate are those evaluation criteria? Most feature selection methods control the model fit, but the ability to identify meaningful subsets of features cannot be evaluated simply on prediction accuracy. If tiny changes to the data lead to large changes in the chosen feature subset, then many selected features are likely to be data artifacts rather than real biological signal. This crucial need to identify relevant and reproducible features motivated reproducibility criteria such as Stability, which quantifies how robust a method is to perturbations in the data. In our paper, we compare the performance of popular model prediction metrics (MSE or AUC) with the proposed reproducibility criterion, Stability, in evaluating four widely used feature selection methods in both simulations and experimental microbiome applications with continuous or binary outcomes. We conclude that Stability is preferable to model prediction metrics as a feature selection criterion because it better quantifies the reproducibility of the feature selection method.
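To make the idea of a stability criterion concrete, here is a minimal sketch (not the paper's exact estimator, which follows the more general framework of Nogueira et al.): it measures the average pairwise Jaccard similarity between the feature sets a selector picks on bootstrap resamples of the data. The correlation-based top-k selector is a hypothetical stand-in for any feature selection method.

```python
import numpy as np

def select_top_k(X, y, k):
    """Hypothetical selector: keep the k features most
    correlated (in absolute value) with the outcome."""
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return set(np.argsort(corr)[-k:])

def stability(X, y, k=10, n_boot=30, seed=0):
    """Average pairwise Jaccard similarity of the feature sets
    selected on bootstrap resamples; 1.0 = perfectly stable."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    selected = []
    for _ in range(n_boot):
        idx = rng.choice(n, size=n, replace=True)  # bootstrap resample
        selected.append(select_top_k(X[idx], y[idx], k))
    sims = [len(a & b) / len(a | b)
            for i, a in enumerate(selected) for b in selected[i + 1:]]
    return float(np.mean(sims))
```

A selector that chases noise will return very different feature sets across resamples and score near the similarity expected by chance, whereas a selector locked onto genuine signal will score close to 1.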
- Published
- 2020