1. Interpretable genotype-to-phenotype classifiers with performance guarantees
- Author
Mario Marchand, Alexandre Drouin, Jacques Corbeil, Frédéric Raymond, Gaël Letarte, and François Laviolette
- Subjects
Computer Science, Machine Learning, Artificial Intelligence, Interpretability, Generalization, Precision Medicine, Genomics, Genome, Genetic Association Studies, Genotype to Phenotype, Microbiology, Humans, Algorithms, Software
- Abstract
Understanding the relationship between the genome of a cell and its phenotype is a central problem in precision medicine. Nonetheless, genotype-to-phenotype prediction poses great challenges for machine learning algorithms, limiting their use in this setting. The high dimensionality of the data tends to hinder generalization and challenges the scalability of most learning algorithms. Additionally, most algorithms produce models that are complex and difficult to interpret. We alleviate these limitations by proposing strong performance guarantees, based on sample compression theory, for rule-based learning algorithms that produce highly interpretable models. We show that these guarantees can be leveraged to accelerate learning and improve model interpretability. Our approach is validated through an application to the genomic prediction of antimicrobial resistance, an important public health concern. Highly accurate models were obtained for 12 species and 56 antibiotics, and their interpretation revealed known resistance mechanisms, as well as some potentially new ones. An open-source disk-based implementation that is efficient in both memory and computation is provided with this work. The implementation is turnkey, requires no prior knowledge of machine learning, and is complemented by comprehensive tutorials.
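The abstract describes rule-based models learned from genomic features. As a rough illustration only, the sketch below implements a greedy conjunction learner in the spirit of a Set Covering Machine over binary k-mer presence/absence features; all names, parameters, and the synthetic data are hypothetical assumptions and do not reflect the authors' open-source implementation or their exact algorithm.

```python
# Illustrative sketch only (not the authors' implementation): a greedy
# Set Covering Machine-style conjunction over binary k-mer
# presence/absence features. Function names and parameters are assumptions.

import numpy as np


def fit_conjunction(X, y, max_rules=10, penalty=1.0):
    """Greedily select features whose presence is required to predict
    the positive class (e.g., resistant).

    X : (n_samples, n_features) binary matrix (k-mer present = 1)
    y : (n_samples,) binary labels (1 = resistant, 0 = susceptible)
    Returns the indices of the selected features; the model predicts 1
    only when every selected feature is present.
    """
    rules = []
    neg = np.flatnonzero(y == 0)   # negatives not yet ruled out
    pos = np.flatnonzero(y == 1)   # positives still predicted positive
    for _ in range(max_rules):
        if neg.size == 0:
            break
        # Requiring feature j rules out negatives lacking it, but also
        # misclassifies positives lacking it; score the trade-off.
        neg_removed = (X[neg] == 0).sum(axis=0)
        pos_lost = (X[pos] == 0).sum(axis=0)
        scores = neg_removed - penalty * pos_lost
        j = int(np.argmax(scores))
        if scores[j] <= 0:
            break
        rules.append(j)
        neg = neg[X[neg, j] == 1]
        pos = pos[X[pos, j] == 1]
    return rules


def predict(X, rules):
    """Predict 1 iff all selected k-mers are present."""
    if not rules:
        return np.ones(X.shape[0], dtype=int)
    return X[:, rules].all(axis=1).astype(int)


# Tiny synthetic usage example: a phenotype driven by two k-mers.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 100))
y = (X[:, 3] & X[:, 42]).astype(int)
rules = fit_conjunction(X, y)
print("selected k-mer columns:", rules)
print("training accuracy:", (predict(X, rules) == y).mean())
```

Because the learned model is a short list of required k-mers, it stays directly inspectable; the sample compression guarantees discussed in the abstract apply to such models precisely because they are specified by a small subset of the training examples and a few rules.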
- Published
- 2019