Simulating clinical features on chest radiographs for medical image exploration and CNN explainability using a style-based generative adversarial autoencoder.
- Author
- Hasenstab, Kyle; Hahn, Lewis; Chao, Nick; Hsiao, Albert
- Subjects
- Autoencoder; Chest radiographs; Convolutional neural network; Explainable artificial intelligence; Generative adversarial network; Humans; Radiography, Thoracic; Neural Networks, Computer; Male; Female; Natriuretic Peptide, Brain; Middle Aged; Aged; Peptide Fragments; Heart Failure; Image Processing, Computer-Assisted
- Abstract
- Explainability of convolutional neural networks (CNNs) is integral to their adoption into radiological practice. Commonly used attribution methods localize image regions important to a CNN's prediction but do not characterize the imaging features underlying those regions, which remains a barrier to clinical use. We therefore propose Semantic Exploration and Explainability using a Style-based Generative Adversarial Autoencoder Network (SEE-GAAN), an explainability framework that uses latent space manipulation to generate sequences of synthetic images that semantically visualize how clinical and CNN features manifest in medical images. Visual analysis of the changes across these sequences then facilitates interpretation of the features, thereby improving explainability. SEE-GAAN was first developed on a cohort of 26,664 chest radiographs from 15,409 patients at our institution. SEE-GAAN sequences were then generated for several clinical features and for CNN predictions of NT-pro B-type natriuretic peptide (BNPP) as a proxy for acute heart failure. Radiological interpretations indicated that SEE-GAAN sequences captured relevant changes in anatomical and pathological morphology associated with clinical features and CNN predictions, and clarified ambiguous areas highlighted by commonly used attribution methods. Our study demonstrates that SEE-GAAN can facilitate understanding of clinical features for imaging biomarker exploration and improve CNN transparency beyond commonly used explainability methods.
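The latent-space manipulation the abstract describes can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: the `encoder` and `generator` callables, the 512-dimensional latent size, and the difference-of-means construction of a clinical-feature direction (e.g., for BNPP) are all assumptions made for the sake of example.

```python
# Minimal sketch of latent-space manipulation for SEE-GAAN-style sequences.
# All names, dimensions, and the direction estimate are illustrative
# assumptions, not the paper's actual API or method.
import torch

LATENT_DIM = 512  # assumed style-latent size, as in StyleGAN-like models


def feature_direction(latents_high: torch.Tensor,
                      latents_low: torch.Tensor) -> torch.Tensor:
    """Estimate a latent direction for a clinical feature as the difference
    between mean latent codes of high- and low-feature cohorts (one simple
    choice; the paper's exact construction may differ)."""
    d = latents_high.mean(dim=0) - latents_low.mean(dim=0)
    return d / d.norm()


def manipulation_sequence(encoder, generator, image: torch.Tensor,
                          direction: torch.Tensor, steps: int = 8,
                          scale: float = 3.0) -> list:
    """Encode an image, walk its latent code along `direction`, and decode a
    sequence of synthetic images that visualizes how the feature manifests."""
    with torch.no_grad():
        w = encoder(image)  # image -> latent code
        alphas = torch.linspace(-scale, scale, steps)
        return [generator(w + a * direction) for a in alphas]
```

Inspecting the decoded sequence frame by frame is what allows a radiologist to see which anatomical or pathological changes track with the feature, rather than only seeing a heatmap over the original image.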
- Published
- 2024