Who does the fairness in health AI community represent?
- Authors
Isabelle Rose I. Alberto, Nicole Rose I. Alberto, Yuksel Altinel, Sarah Blacker, William Warr Binotti, Leo Anthony Celi, Tiffany Chua, Amelia Fiske, Molly Griffin, Gulce Karaca, Nkiruka Mokolo, David Kojo N. Naawu, Jonathan Patscheider, Anton Petushkov, Justin Quion, Charles Senteio, Simon Taisbak, İsmail Tırnova, Harumi Tokashiki, Adrian Velasquez, Antonio Yaghy, and Keagan Yap
- Abstract
OBJECTIVE: Artificial intelligence (AI) and machine learning are central components of today’s medical environment. The fairness of AI, i.e., its freedom from bias, has repeatedly come into question. This study investigates the diversity of the members of academia whose scholarship poses questions about the fairness of AI.

METHODS: Articles combining the topics of fairness, artificial intelligence, and medicine were selected from PubMed, Google Scholar, and Embase using keywords. Eligibility screening and data extraction were done manually and cross-checked by another author for accuracy. 375 articles were selected for further analysis, cleaned, and organized in Microsoft Excel; spatial diagrams were generated using Tableau Public, and additional graphs were generated using Matplotlib and Seaborn. Linear and logistic regressions were analyzed using Python.

RESULTS: We identified 375 eligible publications, including research and review articles concerning AI and fairness in healthcare. Of the 1984 authors overall, 794 were female and 1190 were male. Of the 375 first authors, 155 (41.33%) were female and 220 (58.67%) were male; among last authors, 110 (31.16%) were female and 243 (68.84%) were male. Regarding ethnicity, 234 (62.40%) of the first authors were white, 103 (27.47%) were Asian, 24 (6.40%) were black, and 14 (3.73%) were Hispanic. Of the last authors, 234 (66.29%) were white, 96 (27.20%) were Asian, 12 (3.40%) were black, and 11 (3.11%) were Hispanic. Most authors were from the USA, Canada, and the United Kingdom; the trend continued for the first and last authors of the articles. In the overall distribution, 1631 (82.2%) authors were based in high-income countries, 209 (10.5%) in upper-middle-income countries, 135 (6.8%) in lower-middle-income countries, and 9 (0.5%) in low-income countries.

CONCLUSIONS: Analysis of the bibliographic data revealed an overrepresentation of white authors and male authors, especially in the roles of first and last author. The more male authors a paper had, the more likely it was to be cited. Additionally, papers whose authors were based in higher-income countries were more likely to be cited often and to be published in higher-impact journals. These findings highlight the lack of diversity among the authors in the AI fairness community whose work gains the largest readership, potentially compromising the very impartiality that the AI fairness community is working towards.
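The logistic-regression analysis mentioned in METHODS could, for example, take a form like the following minimal sketch: regressing a binary "highly cited" outcome on the fraction of male authors per paper. Everything here is illustrative and assumed, not the authors' actual pipeline: the dataset is synthetic, the positive association is built into the simulation, and a hand-rolled gradient-descent fit stands in for whatever Python library the study used.

```python
import math
import random

random.seed(0)

# Synthetic dataset of (fraction of male authors, highly_cited flag).
# A positive association is deliberately baked in for illustration only;
# these are NOT the study's data.
papers = []
for _ in range(500):
    male_frac = random.random()
    true_logit = -1.0 + 2.0 * male_frac            # assumed relationship
    p = 1.0 / (1.0 + math.exp(-true_logit))
    papers.append((male_frac, 1 if random.random() < p else 0))

def fit_logistic(data, lr=0.5, epochs=3000):
    """Fit y ~ sigmoid(b0 + b1*x) by batch gradient descent on log-loss."""
    b0, b1 = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y            # gradient w.r.t. intercept
            g1 += (p - y) * x      # gradient w.r.t. slope
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

b0, b1 = fit_logistic(papers)
print(f"intercept={b0:.2f}, slope={b1:.2f}")
```

A positive fitted slope would correspond to the abstract's finding that papers with more male authors were more likely to be cited; in a real analysis the same regression would be run on the extracted bibliographic data, typically with a library such as statsmodels or scikit-learn rather than a hand-written fit.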
- Published
- 2023