1. Pathogenicity Prediction of Gene Fusion in Structural Variations: A Knowledge Graph-Infused Explainable Artificial Intelligence (XAI) Framework.
- Authors
Murakami, Katsuhiko; Tago, Shin-ichiro; Takishita, Sho; Morikawa, Hiroaki; Kojima, Rikuhiro; Yokoyama, Kazuaki; Ogawa, Miho; Fukushima, Hidehito; Takamori, Hiroyuki; Nannya, Yasuhito; Imoto, Seiya; and Fuji, Masaru
- Subjects
INTELLECT, MICROBIAL virulence, PREDICTION models, GENOMICS, ARTIFICIAL intelligence, MICRORNA, DECISION making, CONCEPTUAL structures, MOLECULAR biology, GENETICS, ALGORITHMS
- Abstract
Simple Summary: Cancer genome analysis often reveals structural variants (SVs) involving fusion genes that are difficult to classify as drivers or passengers. Obtaining accurate AI predictions and explanations, which are crucial for a reliable diagnosis, is challenging. We developed an explainable AI (XAI) system that predicts the pathogenicity of SVs with gene fusions, providing reasons for its predictions. Our XAI achieved high accuracy, comparable to existing tools, and generated plausible explanations based on pathogenic mechanisms. This research represents a promising step towards AI-supported decision making in genomic medicine, enabling efficient and accurate diagnosis.

When cancer sample genomes are analyzed in clinical practice, many structural variants (SVs), in addition to single nucleotide variants (SNVs), are identified. To identify driver variants, the leading candidates must be narrowed down. When fusion genes are involved, selection is particularly difficult, and highly accurate AI predictions are important. Furthermore, we also wanted to understand the reasons behind each prediction so that diagnoses could be made more reliably. Here, we developed an explainable AI (XAI) system suitable for SVs with gene fusions, based on the XAI technology we previously developed for predicting SNV pathogenicity. To handle gene fusion variants, we added new SV-related data to the previous knowledge graph and improved the algorithm. Its prediction accuracy was as high as that of existing tools. Moreover, our XAI could explain the reasons for its predictions. We used several variant examples to demonstrate that these reasons are plausible in terms of basic pathogenic mechanisms. These results represent a promising step toward a future of genomic medicine in which efficient and correct decisions can be made with the support of AI. [ABSTRACT FROM AUTHOR]
- Published
- 2024
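The abstract describes a knowledge-graph-infused XAI that both scores the pathogenicity of a gene fusion and returns the reasoning behind the score. The record does not include implementation details, so the sketch below is only a minimal illustration of the general idea under stated assumptions: a toy knowledge graph of fusion-related facts is traversed to find a path from a fusion to a known oncogenic mechanism, and that path is returned as the explanation. All graph content, node names (e.g. GENE_X::GENE_Y), and the search routine are hypothetical and are not taken from the paper.

```python
from collections import deque

# Toy knowledge graph: each edge is (subject, relation, object).
# Entities and relations here are illustrative only, not from the paper.
EDGES = [
    ("BCR::ABL1", "retains_domain", "ABL1_kinase_domain"),
    ("ABL1_kinase_domain", "drives", "constitutive_kinase_activity"),
    ("constitutive_kinase_activity", "is_a", "oncogenic_mechanism"),
    ("GENE_X::GENE_Y", "retains_domain", "GENE_Y_tail"),
]

def neighbors(node):
    """Yield (relation, object) pairs for edges starting at node."""
    for subj, rel, obj in EDGES:
        if subj == node:
            yield rel, obj

def predict_with_explanation(fusion, target="oncogenic_mechanism"):
    """Breadth-first search from a fusion node toward an oncogenic mechanism.

    Returns ("pathogenic", path) if a supporting path exists, otherwise
    ("uncertain", None). The path doubles as a human-readable explanation.
    """
    queue = deque([(fusion, [fusion])])
    visited = {fusion}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return "pathogenic", path
        for rel, obj in neighbors(node):
            if obj not in visited:
                visited.add(obj)
                queue.append((obj, path + [f"--{rel}-->", obj]))
    return "uncertain", None

if __name__ == "__main__":
    for fusion in ("BCR::ABL1", "GENE_X::GENE_Y"):
        label, path = predict_with_explanation(fusion)
        reason = " ".join(path) if path else "no supporting path found"
        print(f"{fusion}: {label} | {reason}")
```

In the actual system, the graph would hold curated biological knowledge (retained domains, pathways, known driver fusions) and the predictor would be a trained model rather than a simple path search; the sketch only shows why an explicit knowledge graph helps, since the explanation becomes a traceable chain of facts rather than an opaque score.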