101. A scoping review of reporting gaps in FDA-approved AI medical devices.
- Author
- Muralidharan, Vijaytha; Adewale, Boluwatife Adeleye; Huang, Caroline J.; Nta, Mfon Thelma; Ademiju, Peter Oluwaduyilemi; Pathmarajah, Pirunthan; Hang, Man Kien; Adesanya, Oluwafolajimi; Abdullateef, Ridwanullah Olamide; Babatunde, Abdulhammed Opeyemi; Ajibade, Abdulquddus; Onyeka, Sonia; Cai, Zhou Ran; Daneshjou, Roxana; Olatunji, Tobi
- Subjects
- PATIENT safety; SOCIOECONOMIC factors; SEX distribution; AGE distribution; DESCRIPTIVE statistics; SYSTEMATIC reviews; RACE; MEDICAL equipment; LITERATURE reviews; COMMUNICATION; SOCIODEMOGRAPHIC factors; HEALTH equity; EQUIPMENT & supplies; NEW product development laws
- Abstract
- Machine learning and artificial intelligence (AI/ML) models in healthcare may exacerbate health biases. Regulatory oversight is critical in evaluating the safety and effectiveness of AI/ML devices in clinical settings. We conducted a scoping review of the 692 AI/ML-enabled medical devices approved by the FDA between 1995 and 2023 to examine transparency, safety reporting, and sociodemographic representation. Only 3.6% of approvals reported race/ethnicity, and 99.1% provided no socioeconomic data; 81.6% did not report the age of study subjects. Only 46.1% provided comprehensive, detailed results of performance studies, and only 1.9% included a link to a scientific publication with safety and efficacy data. Only 9.0% contained a prospective study for post-market surveillance. Despite the growing number of market-approved medical devices, our data show that FDA reporting remains inconsistent: demographic and socioeconomic characteristics are underreported, exacerbating the risk of algorithmic bias and health disparity.
- Published
- 2024