191 results for "Mongan, John"
Search Results
2. Clinical, Cultural, Computational, and Regulatory Considerations to Deploy AI in Radiology: Perspectives of RSNA and MICCAI Experts.
- Author
-
Linguraru, Marius, Bakas, Spyridon, Aboian, Mariam, Chang, Peter, Flanders, Adam, Kalpathy-Cramer, Jayashree, Kitamura, Felipe, Lungren, Matthew, Mongan, John, Prevedello, Luciano, Summers, Ronald, Wu, Carol, Adewole, Maruf, and Kahn, Charles
- Subjects
Adults and Pediatrics, Computer Applications–General (Informatics), Diagnosis, Prognosis, Artificial Intelligence, Humans, Radiology, Societies, Medical
- Abstract
The Radiological Society of North America (RSNA) and the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society have led a series of joint panels and seminars focused on the present impact and future directions of artificial intelligence (AI) in radiology. These conversations have collected viewpoints from multidisciplinary experts in radiology, medical imaging, and machine learning on the current clinical penetration of AI technology in radiology and how it is impacted by trust, reproducibility, explainability, and accountability. The collective points, both practical and philosophical, define the cultural changes for radiologists and AI scientists working together and describe the challenges ahead for AI technologies to meet broad approval. This article presents the perspectives of experts from MICCAI and RSNA on the clinical, cultural, computational, and regulatory considerations, coupled with recommended reading materials, essential to adopting AI technology successfully in radiology and, more generally, in clinical practice. The report emphasizes the importance of collaboration to improve clinical deployment, highlights the need to integrate clinical and medical imaging data, and introduces strategies to ensure smooth and incentivized integration. Keywords: Adults and Pediatrics, Computer Applications-General (Informatics), Diagnosis, Prognosis © RSNA, 2024.
- Published
- 2024
3. The University of California San Francisco Adult Longitudinal Post-Treatment Diffuse Glioma MRI Dataset.
- Author
-
Fields, Brandon KK, Calabrese, Evan, Mongan, John, Cha, Soonmee, Hess, Christopher P, Sugrue, Leo P, Chang, Susan M, Luks, Tracy L, Villanueva-Meyer, Javier E, Rauschecker, Andreas M, and Rudie, Jeffrey D
- Subjects
Language, Communication and Culture, Linguistics, Human Society, Neurosciences, Brain Cancer, Cancer, Brain Disorders, Rare Diseases, Humans, Glioma, Magnetic Resonance Imaging, Brain Neoplasms, Female, Male, Middle Aged, Adult, Longitudinal Studies, San Francisco, Aged, Artificial Intelligence, Deep Learning, Diffuse Glioma, Neuro-Oncology, Resection Cavity
- Abstract
Supplemental material is available for this article.
- Published
- 2024
4. The RSNA Abdominal Traumatic Injury CT (RATIC) Dataset
- Author
-
Rudie, Jeffrey D., Lin, Hui-Ming, Ball, Robyn L., Jalal, Sabeena, Prevedello, Luciano M., Nicolaou, Savvas, Marinelli, Brett S., Flanders, Adam E., Magudia, Kirti, Shih, George, Davis, Melissa A., Mongan, John, Chang, Peter D., Berger, Ferco H., Hermans, Sebastiaan, Law, Meng, Richards, Tyler, Grunz, Jan-Peter, Kunz, Andreas Steven, Mathur, Shobhit, Galea-Soler, Sandro, Chung, Andrew D., Afat, Saif, Kuo, Chin-Chi, Aweidah, Layal, Campos, Ana Villanueva, Somasundaram, Arjuna, Tijmes, Felipe Antonio Sanchez, Jantarangkoon, Attaporn, Bittencourt, Leonardo Kayat, Brassil, Michael, Hajjami, Ayoub El, Dogan, Hakan, Becircic, Muris, Bharatkumar, Agrahara G., Farina, Eduardo Moreno Júdice de Mattos, Dataset Curator Group, Dataset Contributor Group, Dataset Annotator Group, and Colak, Errol
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
The RSNA Abdominal Traumatic Injury CT (RATIC) dataset is the largest publicly available collection of adult abdominal CT studies annotated for traumatic injuries. This dataset includes 4,274 studies from 23 institutions across 14 countries. The dataset is freely available for non-commercial use via Kaggle at https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection. Created for the RSNA 2023 Abdominal Trauma Detection competition, the dataset encourages the development of advanced machine learning models for detecting abdominal injuries on CT scans. The dataset encompasses detection and classification of traumatic injuries across multiple organs, including the liver, spleen, kidneys, bowel, and mesentery. Annotations were created by expert radiologists from the American Society of Emergency Radiology (ASER) and Society of Abdominal Radiology (SAR). The dataset is annotated at multiple levels, including the presence of injuries in three solid organs with injury grading, image-level annotations for active extravasations and bowel injury, and voxelwise segmentations of each of the potentially injured organs. With the release of this dataset, we hope to facilitate research and development in machine learning and abdominal trauma that can lead to improved patient care and outcomes. Comment: 40 pages, 2 figures, 3 tables
- Published
- 2024
5. The 2024 Brain Tumor Segmentation (BraTS) Challenge: Glioma Segmentation on Post-treatment MRI
- Author
-
de Verdier, Maria Correia, Saluja, Rachit, Gagnon, Louis, LaBella, Dominic, Baid, Ujjwal, Tahon, Nourel Hoda, Foltyn-Dumitru, Martha, Zhang, Jikai, Alafif, Maram, Baig, Saif, Chang, Ken, D'Anna, Gennaro, Deptula, Lisa, Gupta, Diviya, Haider, Muhammad Ammar, Hussain, Ali, Iv, Michael, Kontzialis, Marinos, Manning, Paul, Moodi, Farzan, Nunes, Teresa, Simon, Aaron, Sollmann, Nico, Vu, David, Adewole, Maruf, Albrecht, Jake, Anazodo, Udunna, Chai, Rongrong, Chung, Verena, Faghani, Shahriar, Farahani, Keyvan, Kazerooni, Anahita Fathi, Iglesias, Eugenio, Kofler, Florian, Li, Hongwei, Linguraru, Marius George, Menze, Bjoern, Moawad, Ahmed W., Velichko, Yury, Wiestler, Benedikt, Altes, Talissa, Basavasagar, Patil, Bendszus, Martin, Brugnara, Gianluca, Cho, Jaeyoung, Dhemesh, Yaseen, Fields, Brandon K. K., Garrett, Filip, Gass, Jaime, Hadjiiski, Lubomir, Hattangadi-Gluth, Jona, Hess, Christopher, Houk, Jessica L., Isufi, Edvin, Layfield, Lester J., Mastorakos, George, Mongan, John, Nedelec, Pierre, Nguyen, Uyen, Oliva, Sebastian, Pease, Matthew W., Rastogi, Aditya, Sinclair, Jason, Smith, Robert X., Sugrue, Leo P., Thacker, Jonathan, Vidic, Igor, Villanueva-Meyer, Javier, White, Nathan S., Aboian, Mariam, Conte, Gian Marco, Dale, Anders, Sabuncu, Mert R., Seibert, Tyler M., Weinberg, Brent, Abayazeed, Aly, Huang, Raymond, Turk, Sevcan, Rauschecker, Andreas M., Farid, Nikdokht, Vollmuth, Philipp, Nada, Ayman, Bakas, Spyridon, Calabrese, Evan, and Rudie, Jeffrey D.
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Gliomas are the most common malignant primary brain tumors in adults and one of the deadliest types of cancer. There are many challenges in treatment and monitoring due to the genetic diversity and high intrinsic heterogeneity in appearance, shape, histology, and treatment response. Treatments include surgery, radiation, and systemic therapies, with magnetic resonance imaging (MRI) playing a key role in treatment planning and post-treatment longitudinal assessment. The 2024 Brain Tumor Segmentation (BraTS) challenge on post-treatment glioma MRI will provide a community standard and benchmark for state-of-the-art automated segmentation models based on the largest expert-annotated post-treatment glioma MRI dataset. Challenge competitors will develop automated segmentation models to predict four distinct tumor sub-regions consisting of enhancing tissue (ET), surrounding non-enhancing T2/fluid-attenuated inversion recovery (FLAIR) hyperintensity (SNFH), non-enhancing tumor core (NETC), and resection cavity (RC). Models will be evaluated on separate validation and test datasets using standardized performance metrics utilized across the BraTS 2024 cluster of challenges, including lesion-wise Dice Similarity Coefficient and Hausdorff Distance. Models developed during this challenge will advance the field of automated MRI segmentation and contribute to their integration into clinical practice, ultimately enhancing patient care. Comment: 10 pages, 4 figures, 1 table
- Published
- 2024
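For reference, the two evaluation metrics named in this record have standard definitions (stated here as a reading aid, not quoted from the paper). For a predicted segmentation P and ground truth G with boundaries ∂P and ∂G:

```latex
\mathrm{DSC}(P,G) = \frac{2\,\lvert P \cap G \rvert}{\lvert P \rvert + \lvert G \rvert}
\qquad
\mathrm{HD}(P,G) = \max\left\{ \max_{p \in \partial P} \min_{g \in \partial G} \lVert p-g \rVert,\ \max_{g \in \partial G} \min_{p \in \partial P} \lVert g-p \rVert \right\}
```

The lesion-wise variants used across the BraTS 2024 cluster apply these per matched lesion rather than over the whole volume; the exact lesion-matching rules are defined by the challenge organizers.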
6. Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge
- Author
-
LaBella, Dominic, Baid, Ujjwal, Khanna, Omaditya, McBurney-Lin, Shan, McLean, Ryan, Nedelec, Pierre, Rashid, Arif, Tahon, Nourel Hoda, Altes, Talissa, Bhalerao, Radhika, Dhemesh, Yaseen, Godfrey, Devon, Hilal, Fathi, Floyd, Scott, Janas, Anastasia, Kazerooni, Anahita Fathi, Kirkpatrick, John, Kent, Collin, Kofler, Florian, Leu, Kevin, Maleki, Nazanin, Menze, Bjoern, Pajot, Maxence, Reitman, Zachary J., Rudie, Jeffrey D., Saluja, Rachit, Velichko, Yury, Wang, Chunhao, Warman, Pranav, Adewole, Maruf, Albrecht, Jake, Anazodo, Udunna, Anwar, Syed Muhammad, Bergquist, Timothy, Chen, Sully Francis, Chung, Verena, Conte, Gian-Marco, Dako, Farouk, Eddy, James, Ezhov, Ivan, Khalili, Nastaran, Iglesias, Juan Eugenio, Jiang, Zhifan, Johanson, Elaine, Van Leemput, Koen, Li, Hongwei Bran, Linguraru, Marius George, Liu, Xinyang, Mahtabfar, Aria, Meier, Zeke, Moawad, Ahmed W., Mongan, John, Piraud, Marie, Shinohara, Russell Takeshi, Wiggins, Walter F., Abayazeed, Aly H., Akinola, Rachel, Jakab, András, Bilello, Michel, de Verdier, Maria Correia, Crivellaro, Priscila, Davatzikos, Christos, Farahani, Keyvan, Freymann, John, Hess, Christopher, Huang, Raymond, Lohmann, Philipp, Moassefi, Mana, Pease, Matthew W., Vollmuth, Philipp, Sollmann, Nico, Diffley, David, Nandolia, Khanak K., Warren, Daniel I., Hussain, Ali, Fehringer, Pascal, Bronstein, Yulia, Deptula, Lisa, Stein, Evan G., Taherzadeh, Mahsa, de Oliveira, Eduardo Portela, Haughey, Aoife, Kontzialis, Marinos, Saba, Luca, Turner, Benjamin, Brüßeler, Melanie M. T., Ansari, Shehbaz, Gkampenis, Athanasios, Weiss, David Maximilian, Mansour, Aya, Shawali, Islam H., Yordanov, Nikolay, Stein, Joel M., Hourani, Roula, Moshebah, Mohammed Yahya, Abouelatta, Ahmed Magdy, Rizvi, Tanvir, Willms, Klara, Martin, Dann C., Okar, Abdullah, D'Anna, Gennaro, Taha, Ahmed, Sharifi, Yasaman, Faghani, Shahriar, Kite, Dominic, Pinho, Marco, Haider, Muhammad Ammar, Aristizabal, Alejandro, Karargyris, Alexandros, Kassem, Hasan, Pati, Sarthak, Sheller, Micah, Alonso-Basanta, Michelle, Villanueva-Meyer, Javier, Rauschecker, Andreas M., Nada, Ayman, Aboian, Mariam, Flanders, Adam E., Wiestler, Benedikt, Bakas, Spyridon, and Calabrese, Evan
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning
- Abstract
We describe the design and results from the BraTS 2023 Intracranial Meningioma Segmentation Challenge. The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas, which are typically benign extra-axial tumors with diverse radiologic and anatomical presentation and a propensity for multiplicity. Nine participating teams each developed deep-learning automated segmentation models using image data from the largest multi-institutional, systematically expert-annotated, multilabel, multi-sequence meningioma MRI dataset to date, which included 1000 training set cases, 141 validation set cases, and 283 hidden test set cases. Each case included T2, T2/FLAIR, T1, and T1Gd brain MRI sequences with associated tumor compartment labels delineating enhancing tumor, non-enhancing tumor, and surrounding non-enhancing T2/FLAIR hyperintensity. Participant automated segmentation models were evaluated and ranked based on a scoring system evaluating lesion-wise metrics including Dice similarity coefficient (DSC) and 95% Hausdorff Distance. The top-ranked team had a lesion-wise median DSC of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor, respectively, and a corresponding average DSC of 0.899, 0.904, and 0.871. These results serve as state-of-the-art benchmarks for future pre-operative meningioma automated segmentation algorithms. Additionally, we found that 1286 of 1424 cases (90.3%) had at least 1 compartment voxel abutting the edge of the skull-stripped image, which requires further investigation into optimal pre-processing face anonymization steps. Comment: 16 pages, 11 tables, 10 figures, MICCAI
- Published
- 2024
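A minimal sketch of how such lesion metrics can be computed for binary 3D masks, assuming one common surface-distance formulation of the 95% Hausdorff distance; this is illustrative, not the challenge's official scoring code, which additionally performs per-lesion matching:

```python
# Illustrative Dice similarity coefficient and 95th-percentile Hausdorff
# distance for two binary 3D masks (not the challenge's official scorer).
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def surface(mask):
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)  # boundary voxels of the mask

def hd95(pred, truth):
    # Pool surface-to-surface distances in both directions, take the 95th pct.
    dt_truth = distance_transform_edt(~surface(truth))
    dt_pred = distance_transform_edt(~surface(pred))
    dists = np.concatenate([dt_truth[surface(pred)], dt_pred[surface(truth)]])
    return float(np.percentile(dists, 95)) if dists.size else 0.0

pred = np.zeros((64, 64, 64), bool); pred[20:40, 20:40, 20:40] = True
truth = np.zeros_like(pred); truth[22:42, 20:40, 20:40] = True
print(f"DSC={dice(pred, truth):.3f}, HD95={hd95(pred, truth):.1f} voxels")
```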
7. Lessons Learned in Building Expertly Annotated Multi-Institution Datasets and Hosting the RSNA AI Challenges.
- Author
-
Kitamura, Felipe, Prevedello, Luciano, Colak, Errol, Halabi, Safwan, Lungren, Matthew, Ball, Robyn, Kalpathy-Cramer, Jayashree, Kahn, Charles, Richards, Tyler, Shih, George, Lin, Hui, Andriole, Katherine, Vazirabad, Maryam, Erickson, Bradley, Flanders, Adam, Talbott, Jason, and Mongan, John
- Subjects
Artificial Intelligence, Use of AI in Education, Humans, Artificial Intelligence, Radiology, Diagnostic Imaging, Societies, Medical, North America
- Abstract
The Radiological Society of North America (RSNA) has held artificial intelligence competitions to tackle real-world medical imaging problems at least annually since 2017. This article examines the challenges and processes involved in organizing these competitions, with a specific emphasis on the creation and curation of high-quality datasets. The collection of diverse and representative medical imaging data involves dealing with issues of patient privacy and data security. Furthermore, ensuring quality and consistency in data, which includes expert labeling and accounting for various patient and imaging characteristics, necessitates substantial planning and resources. Overcoming these obstacles requires meticulous project management and adherence to strict timelines. The article also highlights the potential of crowdsourced annotation to advance medical imaging research. Through the RSNA competitions, effective global engagement has been realized, resulting in innovative solutions to complex medical imaging problems, thus potentially transforming health care by enhancing diagnostic accuracy and patient outcomes. Keywords: Use of AI in Education, Artificial Intelligence © RSNA, 2024.
- Published
- 2024
8. The University of California San Francisco Brain Metastases Stereotactic Radiosurgery (UCSF-BMSR) MRI Dataset.
- Author
-
Rudie, Jeffrey, Saluja, Rachit, Weiss, David, Nedelec, Pierre, Calabrese, Evan, Colby, John, Laguna, Benjamin, Rauschecker, Andreas, Sugrue, Leo, Hess, Christopher, Mongan, John, Braunstein, Steve, and Villanueva-Meyer, Javier
- Subjects
Artificial Intelligence, Brain Metastases, MRI, Public Datasets, Humans, Radiosurgery, San Francisco, Brain Neoplasms, Magnetic Resonance Imaging
- Abstract
Supplemental material is available for this article.
- Published
- 2024
9. A Generalizable Deep Learning System for Cardiac MRI
- Author
-
Shad, Rohan, Zakka, Cyril, Kaur, Dhamanpreet, Fong, Robyn, Filice, Ross Warren, Mongan, John, Kallianos, Kimberly, Khandwala, Nishith, Eng, David, Leipzig, Matthew, Witschey, Walter, de Feria, Alejandro, Ferrari, Victor, Ashley, Euan, Acker, Michael A., Langlotz, Curtis, and Hiesinger, William
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning, I.2.10
- Abstract
Cardiac MRI allows for a comprehensive assessment of myocardial structure, function, and tissue characteristics. Here we describe a foundational vision system for cardiac MRI, capable of representing the breadth of human cardiovascular disease and health. Our deep learning model is trained via self-supervised contrastive learning, by which visual concepts in cine-sequence cardiac MRI scans are learned from the raw text of the accompanying radiology reports. We train and evaluate our model on data from four large academic clinical institutions in the United States. We additionally showcase the performance of our models on the UK Biobank and two additional publicly available external datasets. We explore emergent zero-shot capabilities of our system and demonstrate remarkable performance across a range of tasks, including left ventricular ejection fraction regression and the diagnosis of 35 different conditions such as cardiac amyloidosis and hypertrophic cardiomyopathy. We show that our deep learning system is not only capable of understanding the staggering complexity of human cardiovascular disease but can also be directed towards clinical problems of interest, yielding impressive, clinical-grade diagnostic accuracy with a fraction of the training data typically required for such tasks. Comment: 21 page main manuscript, 4 figures. Supplementary Appendix and code will be made available on publication
- Published
- 2023
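The self-supervised contrastive training described in this record pairs cine MRI with report text. Below is a minimal CLIP-style sketch of a symmetric contrastive objective; it is not the authors' implementation, and the embedding width, batch, and temperature are stand-ins:

```python
# Minimal CLIP-style contrastive objective over paired image/report embeddings.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    img = F.normalize(img_emb, dim=-1)       # (N, D) unit vectors
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature     # (N, N) pairwise similarities
    targets = torch.arange(img.size(0))      # matched pairs lie on the diagonal
    # Symmetric cross-entropy: image-to-text plus text-to-image.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Stand-ins for the outputs of a video encoder and a text encoder.
loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```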
10. FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare
- Author
-
Lekadir, Karim, Feragen, Aasa, Fofanah, Abdul Joseph, Frangi, Alejandro F, Buyx, Alena, Emelie, Anais, Lara, Andrea, Porras, Antonio R, Chan, An-Wen, Navarro, Arcadi, Glocker, Ben, Botwe, Benard O, Khanal, Bishesh, Beger, Brigit, Wu, Carol C, Cintas, Celia, Langlotz, Curtis P, Rueckert, Daniel, Mzurikwao, Deogratias, Fotiadis, Dimitrios I, Zhussupov, Doszhan, Ferrante, Enzo, Meijering, Erik, Weicken, Eva, González, Fabio A, Asselbergs, Folkert W, Prior, Fred, Krestin, Gabriel P, Collins, Gary, Tegenaw, Geletaw S, Kaissis, Georgios, Misuraca, Gianluca, Tsakou, Gianna, Dwivedi, Girish, Kondylakis, Haridimos, Jayakody, Harsha, Woodruff, Henry C, Mayer, Horst Joachim, Aerts, Hugo JWL, Walsh, Ian, Chouvarda, Ioanna, Buvat, Irène, Tributsch, Isabell, Rekik, Islem, Duncan, James, Kalpathy-Cramer, Jayashree, Zahir, Jihad, Park, Jinah, Mongan, John, Gichoya, Judy W, Schnabel, Julia A, Kushibar, Kaisar, Riklund, Katrine, Mori, Kensaku, Marias, Kostas, Amugongo, Lameck M, Fromont, Lauren A, Maier-Hein, Lena, Alberich, Leonor Cerdá, Rittner, Leticia, Phiri, Lighton, Marrakchi-Kacem, Linda, Donoso-Bach, Lluís, Martí-Bonmatí, Luis, Cardoso, M Jorge, Bobowicz, Maciej, Shabani, Mahsa, Tsiknakis, Manolis, Zuluaga, Maria A, Bielikova, Maria, Fritzsche, Marie-Christine, Camacho, Marina, Linguraru, Marius George, Wenzel, Markus, De Bruijne, Marleen, Tolsgaard, Martin G, Ghassemi, Marzyeh, Ashrafuzzaman, Md, Goisauf, Melanie, Yaqub, Mohammad, Abadía, Mónica Cano, Mahmoud, Mukhtar M E, Elattar, Mustafa, Rieke, Nicola, Papanikolaou, Nikolaos, Lazrak, Noussair, Díaz, Oliver, Salvado, Olivier, Pujol, Oriol, Sall, Ousmane, Guevara, Pamela, Gordebeke, Peter, Lambin, Philippe, Brown, Pieta, Abolmaesumi, Purang, Dou, Qi, Lu, Qinghua, Osuala, Richard, Nakasi, Rose, Zhou, S Kevin, Napel, Sandy, Colantonio, Sara, Albarqouni, Shadi, Joshi, Smriti, Carter, Stacy, Klein, Stefan, Petersen, Steffen E, Aussó, Susanna, Awate, Suyash, Raviv, Tammy Riklin, Cook, Tessa, Mutsvangwa, Tinashe E M, Rogers, Wendy A, Niessen, Wiro J, Puig-Bosch, Xènia, Zeng, Yi, Mohammed, Yunusa G, Aquino, Yves Saint James, Salahuddin, Zohaib, and Starmans, Martijn P A
- Subjects
Computer Science - Computers and Society, Computer Science - Artificial Intelligence, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning, I.2.0, I.4.0, I.5.0
- Abstract
Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework was established based on 6 guiding principles for trustworthy AI in healthcare, i.e., Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline which provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account in proof-of-concept stages to facilitate future translation of medical AI towards clinical practice.
- Published
- 2023
11. Developing, purchasing, implementing and monitoring AI tools in radiology: practical considerations. A multi-society statement from the ACR, CAR, ESR, RANZCR & RSNA
- Author
-
Brady, Adrian P., Allen, Bibb, Chong, Jaron, Kotter, Elmar, Kottler, Nina, Mongan, John, Oakden-Rayner, Lauren, dos Santos, Daniel Pinto, Tang, An, Wald, Christoph, and Slavotinek, John
- Published
- 2024
- Full Text
- View/download PDF
12. The Brain Tumor Segmentation (BraTS) Challenge 2023: Brain MR Image Synthesis for Tumor Segmentation (BraSyn)
- Author
-
Li, Hongwei Bran, Conte, Gian Marco, Anwar, Syed Muhammad, Kofler, Florian, Ezhov, Ivan, van Leemput, Koen, Piraud, Marie, Diaz, Maria, Cole, Byrone, Calabrese, Evan, Rudie, Jeff, Meissen, Felix, Adewole, Maruf, Janas, Anastasia, Kazerooni, Anahita Fathi, LaBella, Dominic, Moawad, Ahmed W., Farahani, Keyvan, Eddy, James, Bergquist, Timothy, Chung, Verena, Shinohara, Russell Takeshi, Dako, Farouk, Wiggins, Walter, Reitman, Zachary, Wang, Chunhao, Liu, Xinyang, Jiang, Zhifan, Familiar, Ariana, Johanson, Elaine, Meier, Zeke, Davatzikos, Christos, Freymann, John, Kirby, Justin, Bilello, Michel, Fathallah-Shaykh, Hassan M., Wiest, Roland, Kirschke, Jan, Colen, Rivka R., Kotrotsou, Aikaterini, Lamontagne, Pamela, Marcus, Daniel, Milchenko, Mikhail, Nazeri, Arash, Weber, Marc André, Mahajan, Abhishek, Mohan, Suyash, Mongan, John, Hess, Christopher, Cha, Soonmee, Villanueva-Meyer, Javier, Colak, Errol, Crivellaro, Priscila, Jakab, Andras, Albrecht, Jake, Anazodo, Udunna, Aboian, Mariam, Yu, Thomas, Baid, Ujjwal, Bakas, Spyridon, Linguraru, Marius George, Menze, Bjoern, Iglesias, Juan Eugenio, and Wiestler, Benedikt
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Computer Vision and Pattern Recognition
- Abstract
Automated brain tumor segmentation methods have become well-established and reached performance levels offering clear clinical utility. These methods typically rely on four input magnetic resonance imaging (MRI) modalities: T1-weighted images with and without contrast enhancement, T2-weighted images, and FLAIR images. However, some sequences are often missing in clinical practice due to time constraints or image artifacts, such as patient motion. Consequently, the ability to substitute missing modalities and gain segmentation performance is highly desirable and necessary for the broader adoption of these algorithms in the clinical routine. In this work, we present the establishment of the Brain MR Image Synthesis Benchmark (BraSyn) in conjunction with the Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2023. The primary objective of this challenge is to evaluate image synthesis methods that can realistically generate missing MRI modalities when multiple available images are provided. The ultimate aim is to facilitate automated brain tumor segmentation pipelines. The image dataset used in the benchmark is diverse and multi-modal, created through collaboration with various hospitals and research institutions. Comment: Technical report of BraSyn
- Published
- 2023
13. The Brain Tumor Segmentation (BraTS) Challenge: Local Synthesis of Healthy Brain Tissue via Inpainting
- Author
-
Kofler, Florian, Meissen, Felix, Steinbauer, Felix, Graf, Robert, Ehrlich, Stefan K, Reinke, Annika, Oswald, Eva, Waldmannstetter, Diana, Hoelzl, Florian, Horvath, Izabela, Turgut, Oezguen, Shit, Suprosanna, Bukas, Christina, Yang, Kaiyuan, Paetzold, Johannes C., de la Rosa, Ezequiel, Mekki, Isra, Vinayahalingam, Shankeeth, Kassem, Hasan, Zhang, Juexin, Chen, Ke, Weng, Ying, Durrer, Alicia, Cattin, Philippe C., Wolleb, Julia, Sadique, M. S., Rahman, M. M., Farzana, W., Temtam, A., Iftekharuddin, K. M., Adewole, Maruf, Anwar, Syed Muhammad, Baid, Ujjwal, Janas, Anastasia, Kazerooni, Anahita Fathi, LaBella, Dominic, Li, Hongwei Bran, Moawad, Ahmed W, Conte, Gian-Marco, Farahani, Keyvan, Eddy, James, Sheller, Micah, Pati, Sarthak, Karargyris, Alexandros, Aristizabal, Alejandro, Bergquist, Timothy, Chung, Verena, Shinohara, Russell Takeshi, Dako, Farouk, Wiggins, Walter, Reitman, Zachary, Wang, Chunhao, Liu, Xinyang, Jiang, Zhifan, Johanson, Elaine, Meier, Zeke, Familiar, Ariana, Davatzikos, Christos, Freymann, John, Kirby, Justin, Bilello, Michel, Fathallah-Shaykh, Hassan M, Wiest, Roland, Kirschke, Jan, Colen, Rivka R, Kotrotsou, Aikaterini, Lamontagne, Pamela, Marcus, Daniel, Milchenko, Mikhail, Nazeri, Arash, Weber, Marc-André, Mahajan, Abhishek, Mohan, Suyash, Mongan, John, Hess, Christopher, Cha, Soonmee, Villanueva-Meyer, Javier, Colak, Errol, Crivellaro, Priscila, Jakab, Andras, Fatade, Abiodun, Omidiji, Olubukola, Lagos, Rachel Akinola, Olatunji, O O, Khanna, Goldey, Kirkpatrick, John, Alonso-Basanta, Michelle, Rashid, Arif, Bornhorst, Miriam, Nabavizadeh, Ali, Lepore, Natasha, Palmer, Joshua, Porras, Antonio, Albrecht, Jake, Anazodo, Udunna, Aboian, Mariam, Calabrese, Evan, Rudie, Jeffrey David, Linguraru, Marius George, Iglesias, Juan Eugenio, Van Leemput, Koen, Bakas, Spyridon, Wiestler, Benedikt, Ezhov, Ivan, Piraud, Marie, and Menze, Bjoern H
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning
- Abstract
A myriad of algorithms for the automatic analysis of brain MR images is available to support clinicians in their decision-making. For brain tumor patients, the image acquisition time series typically starts with an already pathological scan. This poses problems, as many algorithms are designed to analyze healthy brains and provide no guarantee for images featuring lesions. Examples include, but are not limited to, algorithms for brain anatomy parcellation, tissue segmentation, and brain extraction. To solve this dilemma, we introduce the BraTS inpainting challenge. Here, the participants explore inpainting techniques to synthesize healthy brain scans from lesioned ones. The following manuscript contains the task formulation, dataset, and submission procedure. Later, it will be updated to summarize the findings of the challenge. The challenge is organized as part of the ASNR-BraTS MICCAI challenge. Comment: 14 pages, 6 figures
- Published
- 2023
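A toy sketch of the masked-reconstruction setup that inpainting approaches of this kind commonly use; the network, volume size, and mask are invented placeholders, not drawn from any challenge entry:

```python
# Toy inpainting step: hide a "lesion" region, reconstruct it, and score the
# reconstruction only inside the mask.
import torch
import torch.nn as nn

model = nn.Sequential(  # placeholder; real entries use far larger networks
    nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),
)
scan = torch.rand(1, 1, 32, 32, 32)                  # healthy reference volume
mask = torch.zeros_like(scan)
mask[..., 10:20, 10:20, 10:20] = 1.0                 # synthetic lesion region
masked_scan = scan * (1 - mask)                      # remove the region
pred = model(torch.cat([masked_scan, mask], dim=1))  # condition on the mask
loss = ((pred - scan).abs() * mask).sum() / mask.sum()  # L1 inside mask only
loss.backward()
print(f"masked L1 loss: {loss.item():.4f}")
```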
14. The ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge 2023: Intracranial Meningioma
- Author
-
LaBella, Dominic, Adewole, Maruf, Alonso-Basanta, Michelle, Altes, Talissa, Anwar, Syed Muhammad, Baid, Ujjwal, Bergquist, Timothy, Bhalerao, Radhika, Chen, Sully, Chung, Verena, Conte, Gian-Marco, Dako, Farouk, Eddy, James, Ezhov, Ivan, Godfrey, Devon, Hilal, Fathi, Familiar, Ariana, Farahani, Keyvan, Iglesias, Juan Eugenio, Jiang, Zhifan, Johanson, Elaine, Kazerooni, Anahita Fathi, Kent, Collin, Kirkpatrick, John, Kofler, Florian, Van Leemput, Koen, Li, Hongwei Bran, Liu, Xinyang, Mahtabfar, Aria, McBurney-Lin, Shan, McLean, Ryan, Meier, Zeke, Moawad, Ahmed W, Mongan, John, Nedelec, Pierre, Pajot, Maxence, Piraud, Marie, Rashid, Arif, Reitman, Zachary, Shinohara, Russell Takeshi, Velichko, Yury, Wang, Chunhao, Warman, Pranav, Wiggins, Walter, Aboian, Mariam, Albrecht, Jake, Anazodo, Udunna, Bakas, Spyridon, Flanders, Adam, Janas, Anastasia, Khanna, Goldey, Linguraru, Marius George, Menze, Bjoern, Nada, Ayman, Rauschecker, Andreas M, Rudie, Jeff, Tahon, Nourel Hoda, Villanueva-Meyer, Javier, Wiestler, Benedikt, and Calabrese, Evan
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence, Computer Science - Machine Learning, Statistics - Machine Learning
- Abstract
Meningiomas are the most common primary intracranial tumor in adults and can be associated with significant morbidity and mortality. Radiologists, neurosurgeons, neuro-oncologists, and radiation oncologists rely on multiparametric MRI (mpMRI) for diagnosis, treatment planning, and longitudinal treatment monitoring; yet automated, objective, and quantitative tools for non-invasive assessment of meningiomas on mpMRI are lacking. The BraTS meningioma 2023 challenge will provide a community standard and benchmark for state-of-the-art automated intracranial meningioma segmentation models based on the largest expert-annotated multilabel meningioma mpMRI dataset to date. Challenge competitors will develop automated segmentation models to predict three distinct meningioma sub-regions on MRI, including enhancing tumor, non-enhancing tumor core, and surrounding non-enhancing T2/FLAIR hyperintensity. Models will be evaluated on separate validation and held-out test datasets using standardized metrics utilized across the BraTS 2023 series of challenges, including the Dice similarity coefficient and Hausdorff distance. The models developed during the course of this challenge will aid the incorporation of automated meningioma MRI segmentation into clinical practice, which will ultimately improve care of patients with meningioma.
- Published
- 2023
15. The University of California San Francisco Brain Metastases Stereotactic Radiosurgery (UCSF-BMSR) MRI Dataset
- Author
-
Rudie, Jeffrey D., Saluja, Rachit, Weiss, David A., Nedelec, Pierre, Calabrese, Evan, Colby, John B., Laguna, Benjamin, Mongan, John, Braunstein, Steve, Hess, Christopher P., Rauschecker, Andreas M., Sugrue, Leo P., and Villanueva-Meyer, Javier E.
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Computer Vision and Pattern Recognition
- Abstract
The University of California San Francisco Brain Metastases Stereotactic Radiosurgery (UCSF-BMSR) dataset is a public, clinical, multimodal brain MRI dataset consisting of 560 brain MRIs from 412 patients with expert annotations of 5136 brain metastases. Data consists of registered and skull stripped T1 post-contrast, T1 pre-contrast, FLAIR and subtraction (T1 pre-contrast - T1 post-contrast) images and voxelwise segmentations of enhancing brain metastases in NIfTI format. The dataset also includes patient demographics, surgical status and primary cancer types. The UCSF-BMSR has been made publicly available in the hopes that researchers will use these data to push the boundaries of AI applications for brain metastases. The dataset is freely available for non-commercial use at https://imagingdatasets.ucsf.edu/dataset/1. Comment: 15 pages, 2 tables, 2 figures
- Published
- 2023
- Full Text
- View/download PDF
16. Deep Learning to Simulate Contrast-enhanced Breast MRI of Invasive Breast Cancer.
- Author
-
Chung, Maggie, Calabrese, Evan, Mongan, John, Hayward, Jessica, Sieberg, Ryan, Joe, Bonnie, Hylton, Nola, Lee, Amie, Kelil, Tatiana, and Ray, Kimberly
- Subjects
Female, Humans, Middle Aged, Breast Neoplasms, Deep Learning, Retrospective Studies, Breast, Magnetic Resonance Imaging, Contrast Media
- Abstract
Background There is increasing interest in noncontrast breast MRI alternatives for tumor visualization to increase the accessibility of breast MRI. Purpose To evaluate the feasibility and accuracy of generating simulated contrast-enhanced T1-weighted breast MRI scans from precontrast MRI sequences in biopsy-proven invasive breast cancer with use of deep learning. Materials and Methods Women with invasive breast cancer and a contrast-enhanced breast MRI examination that was performed for initial evaluation of the extent of disease between January 2015 and December 2019 at a single academic institution were retrospectively identified. A three-dimensional, fully convolutional deep neural network simulated contrast-enhanced T1-weighted breast MRI scans from five precontrast sequences (T1-weighted non-fat-suppressed [FS], T1-weighted FS, T2-weighted FS, apparent diffusion coefficient, and diffusion-weighted imaging). For qualitative assessment, four breast radiologists (with 3-15 years of experience), blinded to whether the contrast enhancement was real or simulated, assessed image quality (excellent, acceptable, good, poor, or unacceptable), presence of tumor enhancement, and maximum index mass size by using 22 pairs of real and simulated contrast-enhanced MRI scans. Quantitative comparison was performed using whole-breast similarity and error metrics and Dice coefficient analysis of enhancing tumor overlap. Results Ninety-six MRI examinations in 96 women (mean age, 52 years ± 12 [SD]) were evaluated. The readers assessed all simulated MRI scans as having the appearance of a real MRI scan with tumor enhancement. Index mass sizes on real and simulated MRI scans demonstrated good to excellent agreement (intraclass correlation coefficient, 0.73-0.86; P < .001) without significant differences (mean differences, -0.8 to 0.8 mm; P = .36-.80). Almost all simulated MRI scans (84 of 88 [95%]) were considered of diagnostic quality (ratings of excellent, acceptable, or good). Quantitative analysis demonstrated strong similarity (structural similarity index, 0.88 ± 0.05), low voxel-wise error (symmetric mean absolute percent error, 3.26%), and a Dice coefficient of enhancing tumor overlap of 0.75 ± 0.25. Conclusion It is feasible to generate simulated contrast-enhanced breast MRI scans with use of deep learning. Simulated and real contrast-enhanced MRI scans demonstrated comparable tumor sizes, areas of tumor enhancement, and image quality without significant qualitative or quantitative differences. © RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Slanetz in this issue. An earlier incorrect version appeared online. This article was corrected on January 17, 2023.
- Published
- 2023
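Two of the quantitative metrics reported in this abstract, structural similarity and symmetric mean absolute percent error, can be computed as below. This sketch uses one common SMAPE definition and synthetic arrays; it is not the study's code or data:

```python
# Illustrative SSIM and SMAPE between a "real" and a "simulated" image.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
real = rng.random((64, 64)).astype(np.float32)        # stand-in for a real scan
simulated = real + 0.05 * rng.standard_normal((64, 64)).astype(np.float32)

ssim = structural_similarity(real, simulated,
                             data_range=float(real.max() - real.min()))
smape = 100.0 * np.mean(np.abs(real - simulated) /
                        ((np.abs(real) + np.abs(simulated)) / 2 + 1e-8))
print(f"SSIM={ssim:.3f}, SMAPE={smape:.2f}%")
```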
17. Behavioral nudges in the electronic health record to reduce waste and misuse: 3 interventions.
- Author
-
Grouse, Carrie, Waung, Maggie, Holmgren, A, Mongan, John, Neinstein, Aaron, Josephson, S, and Khanna, Raman
- Subjects
choice architecture, computerized tomography, decision support, nudge, overuse, waste, Electronic Health Records, Workflow
- Abstract
Electronic health records (EHRs) offer decision support in the form of alerts, which are often though not always interruptive. These alerts, though sometimes effective, can come at the cost of high cognitive burden and workflow disruption. Less well studied is the design of the EHR itself, the ordering provider's choice architecture, which nudges users toward alternatives, sometimes unintentionally toward waste and misuse, but ideally intentionally toward better practice. We studied 3 different workflows at our institution where the existing choice architecture was potentially nudging providers toward erroneous decisions, waste, and misuse in the form of inappropriate laboratory work, incorrectly specified computerized tomographic imaging, and excessive benzodiazepine dosing for imaging-related sedation. We changed the architecture to nudge providers toward better practice and found that the 3 nudges were successful to varying degrees in reducing erroneous decision-making and mitigating waste and misuse.
- Published
- 2023
18. Automated detection of IVC filters on radiographs with deep convolutional neural networks.
- Author
-
Mongan, John, Kohli, Marc D, Houshyar, Roozbeh, Chang, Peter D, Glavis-Bloom, Justin, and Taylor, Andrew G
- Subjects
Humans, Radiography, Retrospective Studies, Vena Cava Filters, Algorithms, Neural Networks, Computer, Artificial intelligence, Deep learning, Inferior vena cava filter, Screening
- Abstract
Purpose: To create an algorithm able to accurately detect IVC filters on radiographs without human assistance, capable of being used to screen radiographs to identify patients needing IVC filter retrieval. Methods: A primary dataset of 5225 images, 30% of which included IVC filters, was assembled and annotated. 85% of the data was used to train a Cascade R-CNN (Region-Based Convolutional Neural Network) object detection network incorporating a pre-trained ResNet-50 backbone. The remaining 15% of the data, independently annotated by three radiologists, was used as a test set to assess performance. The algorithm was also assessed on an independently constructed 1424-image dataset, drawn from a different institution than the primary dataset. Results: On the primary test set, the algorithm achieved a sensitivity of 96.2% (95% CI 92.7-98.1%) and a specificity of 98.9% (95% CI 97.4-99.5%). Results were similar on the external test set: sensitivity 97.9% (95% CI 96.2-98.9%), specificity 99.6% (95% CI 98.9-99.9%). Conclusion: Fully automated detection of IVC filters on radiographs with the high sensitivity and excellent specificity required for an automated screening system can be achieved using object detection neural networks. Further work will develop a system for identifying patients for IVC filter retrieval based on this algorithm.
- Published
- 2023
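The detector described here is a Cascade R-CNN with a pre-trained ResNet-50 backbone; torchvision does not ship Cascade R-CNN (it lives in libraries such as MMDetection), so this sketch substitutes torchvision's Faster R-CNN to show the same detect-then-flag screening pattern. The model is untrained here and the score threshold is arbitrary:

```python
# Stand-in detector (Faster R-CNN, not the paper's Cascade R-CNN) screening a
# radiograph for an IVC filter.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2)  # bg + filter
model.eval()

radiograph = torch.rand(3, 512, 512)     # placeholder image tensor (C, H, W)
with torch.no_grad():
    pred = model([radiograph])[0]        # dict with boxes, labels, scores

THRESHOLD = 0.5                          # operating point is illustrative only
has_filter = bool((pred["scores"] > THRESHOLD).any())
print("flag study for IVC filter retrieval review:", has_filter)
```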
19. The University of California San Francisco Preoperative Diffuse Glioma MRI Dataset.
- Author
-
Calabrese, Evan, Villanueva-Meyer, Javier E, Rudie, Jeffrey D, Rauschecker, Andreas M, Baid, Ujjwal, Bakas, Spyridon, Cha, Soonmee, Mongan, John T, and Hess, Christopher P
- Subjects
Brain/Brain Stem, CNS, Informatics, MR Diffusion Tensor Imaging, MR Imaging, MR Perfusion, Neuro-Oncology, Oncology, Radiogenomics, Radiology-Pathology Integration, Rare Diseases, Brain Cancer, Biomedical Imaging, Cancer, Neurosciences, Brain Disorders, Neurological, Good Health and Well Being
- Abstract
Supplemental material is available for this article. Keywords: Informatics, MR Diffusion Tensor Imaging, MR Perfusion, MR Imaging, Neuro-Oncology, CNS, Brain/Brain Stem, Oncology, Radiogenomics, Radiology-Pathology Integration © RSNA, 2022.
- Published
- 2022
20. The University of California San Francisco Preoperative Diffuse Glioma MRI (UCSF-PDGM) Dataset
- Author
-
Calabrese, Evan, Villanueva-Meyer, Javier E., Rudie, Jeffrey D., Rauschecker, Andreas M., Baid, Ujjwal, Bakas, Spyridon, Cha, Soonmee, Mongan, John T., and Hess, Christopher P.
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Here we present the University of California San Francisco Preoperative Diffuse Glioma MRI (UCSF-PDGM) dataset. The UCSF-PDGM dataset includes 500 subjects with histopathologically-proven diffuse gliomas who were imaged with a standardized 3 Tesla preoperative brain tumor MRI protocol featuring predominantly 3D imaging, as well as advanced diffusion and perfusion imaging techniques. The dataset also includes isocitrate dehydrogenase (IDH) mutation status for all cases and O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status for World Health Organization (WHO) grade III and IV gliomas. The UCSF-PDGM has been made publicly available in the hopes that researchers around the world will use these data to continue to push the boundaries of AI applications for diffuse gliomas. Comment: 7 pages, 2 figures, 2 tables
- Published
- 2021
- Full Text
- View/download PDF
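Datasets like UCSF-PDGM distribute volumes as NIfTI files; a minimal loading sketch using nibabel follows. The filename is hypothetical; consult the dataset's documentation for its actual layout:

```python
# Load one NIfTI volume and inspect its geometry with nibabel.
import nibabel as nib

img = nib.load("UCSF-PDGM-0001_FLAIR.nii.gz")  # hypothetical filename
data = img.get_fdata()                         # voxel array as float64
print(data.shape, img.header.get_zooms())      # dimensions and voxel spacing
```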
21. The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification
- Author
-
Baid, Ujjwal, Ghodasara, Satyam, Mohan, Suyash, Bilello, Michel, Calabrese, Evan, Colak, Errol, Farahani, Keyvan, Kalpathy-Cramer, Jayashree, Kitamura, Felipe C., Pati, Sarthak, Prevedello, Luciano M., Rudie, Jeffrey D., Sako, Chiharu, Shinohara, Russell T., Bergquist, Timothy, Chai, Rong, Eddy, James, Elliott, Julia, Reade, Walter, Schaffter, Thomas, Yu, Thomas, Zheng, Jiaxin, Moawad, Ahmed W., Coelho, Luiz Otavio, McDonnell, Olivia, Miller, Elka, Moron, Fanny E., Oswood, Mark C., Shih, Robert Y., Siakallis, Loizos, Bronstein, Yulia, Mason, James R., Miller, Anthony F., Choudhary, Gagandeep, Agarwal, Aanchal, Besada, Cristina H., Derakhshan, Jamal J., Diogo, Mariana C., Do-Dai, Daniel D., Farage, Luciano, Go, John L., Hadi, Mohiuddin, Hill, Virginia B., Iv, Michael, Joyner, David, Lincoln, Christie, Lotan, Eyal, Miyakoshi, Asako, Sanchez-Montano, Mariana, Nath, Jaya, Nguyen, Xuan V., Nicolas-Jilwan, Manal, Jimenez, Johanna Ortiz, Ozturk, Kerem, Petrovic, Bojan D., Shah, Chintan, Shah, Lubdha M., Sharma, Manas, Simsek, Onur, Singh, Achint K., Soman, Salil, Statsevych, Volodymyr, Weinberg, Brent D., Young, Robert J., Ikuta, Ichiro, Agarwal, Amit K., Cambron, Sword C., Silbergleit, Richard, Dusoi, Alexandru, Postma, Alida A., Letourneau-Guillon, Laurent, Perez-Carrillo, Gloria J. Guzman, Saha, Atin, Soni, Neetu, Zaharchuk, Greg, Zohrabian, Vahe M., Chen, Yingming, Cekic, Milos M., Rahman, Akm, Small, Juan E., Sethi, Varun, Davatzikos, Christos, Mongan, John, Hess, Christopher, Cha, Soonmee, Villanueva-Meyer, Javier, Freymann, John B., Kirby, Justin S., Wiestler, Benedikt, Crivellaro, Priscila, Colen, Rivka R., Kotrotsou, Aikaterini, Marcus, Daniel, Milchenko, Mikhail, Nazeri, Arash, Fathallah-Shaykh, Hassan, Wiest, Roland, Jakab, Andras, Weber, Marc-Andre, Mahajan, Abhishek, Menze, Bjoern, Flanders, Adam E., and Bakas, Spyridon
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
The BraTS 2021 challenge celebrates its 10th anniversary and is jointly organized by the Radiological Society of North America (RSNA), the American Society of Neuroradiology (ASNR), and the Medical Image Computing and Computer Assisted Interventions (MICCAI) society. Since its inception, BraTS has been focusing on being a common benchmarking venue for brain glioma segmentation algorithms, with well-curated multi-institutional multi-parametric magnetic resonance imaging (mpMRI) data. Gliomas are the most common primary malignancies of the central nervous system, with varying degrees of aggressiveness and prognosis. The RSNA-ASNR-MICCAI BraTS 2021 challenge targets the evaluation of computational algorithms assessing the same tumor compartmentalization, as well as the underlying tumor's molecular characterization, in pre-operative baseline mpMRI data from 2,040 patients. Specifically, the two tasks that BraTS 2021 focuses on are: a) the segmentation of the histologically distinct brain tumor sub-regions, and b) the classification of the tumor's O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status. The performance evaluation of all participating algorithms in BraTS 2021 will be conducted through the Sage Bionetworks Synapse platform (Task 1) and Kaggle (Task 2), concluding in distributing to the top ranked participants monetary awards of $60,000 collectively. Comment: 19 pages, 2 figures, 1 table
- Published
- 2021
22. Machine Learning Tools for Image-Based Glioma Grading and the Quality of Their Reporting: Challenges and Opportunities
- Author
-
Merkaj, Sara, Bahar, Ryan C, Zeevi, Tal, Lin, MingDe, Ikuta, Ichiro, Bousabarah, Khaled, Petersen, Gabriel I Cassinelli, Staib, Lawrence, Payabvash, Seyedmehdi, Mongan, John T, Cha, Soonmee, and Aboian, Mariam S
- Subjects
Biomedical and Clinical Sciences, Clinical Sciences, Brain Disorders, Networking and Information Technology R&D (NITRD), Brain Cancer, Neurosciences, Bioengineering, Rare Diseases, Cancer, artificial intelligence, glioma, machine learning, deep learning, reporting quality, Oncology and Carcinogenesis, Oncology and carcinogenesis
- Abstract
Technological innovation has enabled the development of machine learning (ML) tools that aim to improve the practice of radiologists. In the last decade, ML applications to neuro-oncology have expanded significantly, with the pre-operative prediction of glioma grade using medical imaging as a specific area of interest. We introduce the subject of ML models for glioma grade prediction by remarking upon the models reported in the literature as well as by describing their characteristic developmental workflow and widely used classifier algorithms. The challenges facing these models, including data sources, external validation, and glioma grade classification methods, are highlighted. We also discuss the quality of how these models are reported, explore the present and future of reporting guidelines and risk of bias tools, and provide suggestions for the reporting of prospective works. Finally, this review offers insights into next steps that the field of ML glioma grade prediction can take to facilitate clinical implementation.
- Published
- 2022
23. RSNA-MICCAI Panel Discussion: Machine Learning for Radiology from Challenges to Clinical Applications.
- Author
-
Mongan, John, Kalpathy-Cramer, Jayashree, Flanders, Adam, and Linguraru, Marius George
- Subjects
Artificial Neural Network Algorithms, Back-Propagation, Machine Learning Algorithms
- Abstract
On October 5, 2020, the Medical Image Computing and Computer Assisted Intervention Society (MICCAI) 2020 conference hosted a virtual panel discussion with members of the Machine Learning Steering Subcommittee of the Radiological Society of North America. The MICCAI Society brings together scientists, engineers, physicians, educators, and students from around the world. Both societies share a vision to develop radiologic and medical imaging techniques through advanced quantitative imaging biomarkers and artificial intelligence. The panel elaborated on how collaborations between radiologists and machine learning scientists facilitate the creation and clinical success of imaging technology for radiology. This report presents structured highlights of the moderated dialogue at the panel. Keywords: Back-Propagation, Artificial Neural Network Algorithms, Machine Learning Algorithms © RSNA, 2021.
- Published
- 2021
24. Automated coronary calcium scoring using deep learning with multicenter external validation.
- Author
-
Eng, David, Chute, Christopher, Khandwala, Nishith, Rajpurkar, Pranav, Long, Jin, Shleifer, Sam, Khalaf, Mohamed H, Sandhu, Alexander T, Rodriguez, Fatima, Maron, David J, Seyyedi, Saeed, Marin, Daniele, Golub, Ilana, Budoff, Matthew, Kitamura, Felipe, Takahashi, Marcelo Straus, Filice, Ross W, Shah, Rajesh, Mongan, John, Kallianos, Kimberly, Langlotz, Curtis P, Lungren, Matthew P, Ng, Andrew Y, and Patel, Bhavik N
- Subjects
Heart Disease - Coronary Heart Disease, Prevention, Atherosclerosis, Heart Disease, Clinical Research, Cardiovascular, Health Services, Biomedical Imaging, Bioengineering, Detection, screening and diagnosis, 4.2 Evaluation of markers and technologies, Good Health and Well Being
- Abstract
Coronary artery disease (CAD), the most common manifestation of cardiovascular disease, remains the most common cause of mortality in the United States. Risk assessment is key for primary prevention of coronary events, and coronary artery calcium (CAC) scoring using computed tomography (CT) is one such non-invasive tool. Despite the proven clinical value of CAC, its current clinical practice implementation has limitations such as the lack of insurance coverage for the test, need for capital-intensive CT machines, specialized imaging protocols, and accredited 3D imaging labs for analysis (including personnel and software). Perhaps the greatest gap is the millions of patients who undergo routine chest CT exams and demonstrate coronary artery calcification, but its presence is often not reported, or quantitation is not feasible. We present two deep learning models that automate CAC scoring, demonstrating advantages in automated scoring for both dedicated gated coronary CT exams and routine non-gated chest CTs performed for other reasons, allowing opportunistic screening. First, we trained a gated coronary CT model for CAC scoring that showed near-perfect agreement (mean difference in scores = -2.86; Cohen's Kappa = 0.89, P
- Published
- 2021
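Cohen's kappa, the agreement statistic quoted in this abstract, is available in scikit-learn. The labels below are invented, and the study may have used a weighted variant for ordinal risk categories:

```python
# Agreement between model-assigned and expert-assigned CAC risk categories.
from sklearn.metrics import cohen_kappa_score

model_bins  = [0, 1, 2, 3, 1, 0, 2, 3, 3, 0]   # hypothetical risk categories
expert_bins = [0, 1, 2, 3, 1, 0, 1, 3, 3, 0]
print(cohen_kappa_score(model_bins, expert_bins))
```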
25. The RSNA International COVID-19 Open Radiology Database (RICORD).
- Author
-
Tsai, Emily, Simpson, Scott, Lungren, Matthew, Hershman, Michelle, Roshkovan, Leonid, Colak, Errol, Erickson, Bradley, Shih, George, Stein, Anouk, Kalpathy-Cramer, Jayashree, Shen, Jody, Hafez, Mona, John, Susan, Rajiah, Prabhakar, Pogatchnik, Brian, Altinmakas, Emre, Ranschaert, Erik, Kitamura, Felipe, Topff, Laurens, Moy, Linda, Kanne, Jeffrey, Wu, Carol, and Mongan, John
- Subjects
COVID-19, Databases, Factual, Global Health, Humans, Internationality, Lung, Radiography, Thoracic, Radiology, SARS-CoV-2, Societies, Medical, Tomography, X-Ray Computed
- Abstract
The coronavirus disease 2019 (COVID-19) pandemic is a global health care emergency. Although reverse-transcription polymerase chain reaction testing is the reference standard method to identify patients with COVID-19 infection, chest radiography and CT play a vital role in the detection and management of these patients. Prediction models for COVID-19 imaging are rapidly being developed to support medical decision making. However, inadequate availability of a diverse annotated data set has limited the performance and generalizability of existing models. To address this unmet need, the RSNA and Society of Thoracic Radiology collaborated to develop the RSNA International COVID-19 Open Radiology Database (RICORD). This database is the first multi-institutional, multinational, expert-annotated COVID-19 imaging data set. It is made freely available to the machine learning community as a research and educational resource for COVID-19 chest imaging. Pixel-level volumetric segmentation with clinical annotations was performed by thoracic radiology subspecialists for all COVID-19-positive thoracic CT scans. The labeling schema was coordinated with other international consensus panels and COVID-19 data annotation efforts, the European Society of Medical Imaging Informatics, the American College of Radiology, and the American Association of Physicists in Medicine. Study-level COVID-19 classification labels for chest radiographs were annotated by three radiologists, with majority vote adjudication by board-certified radiologists. RICORD consists of 240 thoracic CT scans and 1000 chest radiographs contributed from four international sites. It is anticipated that RICORD will ideally lead to prediction models that can demonstrate sustained performance across populations and health care systems.
- Published
- 2021
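The chest radiograph labels in RICORD were adjudicated by majority vote across three annotators; a minimal sketch of that aggregation step, with made-up study IDs and label values:

```python
# Majority-vote adjudication of per-study labels from three annotators.
from collections import Counter

annotations = {
    "study-001": ["typical", "typical", "indeterminate"],
    "study-002": ["negative", "negative", "negative"],
}
for study, votes in annotations.items():
    label, count = Counter(votes).most_common(1)[0]
    # With three raters, any label seen at least twice wins; a three-way
    # split would need expert adjudication.
    print(study, "->", label if count >= 2 else "needs adjudication")
```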
26. Effect of shelter-in-place on emergency department radiology volumes during the COVID-19 pandemic.
- Author
-
Houshyar, Roozbeh, Tran-Harding, Karen, Glavis-Bloom, Justin, Nguyentat, Michael, Mongan, John, Chahine, Chantal, Loehfelm, Thomas W, Kohli, Marc D, Zaragoza, Edward J, Murphy, Paul M, and Kampalath, Rony
- Subjects
Humans, Pneumonia, Viral, Coronavirus Infections, Diagnostic Imaging, Quarantine, Emergency Service, Hospital, Utilization Review, California, Female, Male, Pandemics, Betacoronavirus, COVID-19, SARS-CoV-2, Coronavirus, ER, Healthcare utilization, Predictive model, Trauma, Clinical Research, Emergency Care, Health Services, Good Health and Well Being, Clinical Sciences, Nuclear Medicine & Medical Imaging
- Abstract
Purpose: The coronavirus disease 2019 (COVID-19) pandemic has led to significant disruptions in the healthcare system including surges of infected patients exceeding local capacity, closures of primary care offices, and delays of non-emergent medical care. Government-initiated measures to decrease healthcare utilization (i.e., "flattening the curve") have included shelter-in-place mandates and social distancing, which have taken effect across most of the USA. We evaluate the immediate impact of the public health messaging and shelter-in-place mandates on Emergency Department (ED) demand for radiology services. Methods: We analyzed ED radiology volumes from the five University of California health systems during a 2-week time period following the shelter-in-place mandate and compared those volumes with March 2019 and early April 2019 volumes. Results: ED radiology volumes declined from the 2019 baseline by 32 to 40% (p < 0.001) across the five health systems, with a total decrease in volumes across all 5 systems of 35% (p < 0.001). Stratifying by subspecialty, the smallest declines were seen in non-trauma thoracic imaging, which decreased 18% (p value < 0.001), while all other non-trauma studies decreased by 48% (p < 0.001). Conclusion: Total ED radiology demand may be a marker for public adherence to shelter-in-place mandates, though ED chest radiology demand may increase with an increase in COVID-19 cases.
- Published
- 2020
27. Erratum: Construction of a Machine Learning Dataset through Collaboration: The RSNA 2019 Brain CT Hemorrhage Challenge.
- Author
-
Flanders, Adam, Prevedello, Luciano, Shih, George, Halabi, Safwan, Kalpathy-Cramer, Jayashree, Ball, Robyn, Stein, Anouk, Kitamura, Felipe, Lungren, Matthew, Choudhary, Gagandeep, Cala, Lesley, Coelho, Luiz, Mogensen, Monique, Morón, Fanny, Miller, Elka, Ikuta, Ichiro, Zohrabian, Vahe, McDonnell, Olivia, Lincoln, Christie, Shah, Lubdha, Joyner, David, Agarwal, Amit, Lee, Ryan, Nath, Jaya, and Mongan, John
- Abstract
[This corrects the article DOI: 10.1148/ryai.2020190211.].
- Published
- 2020
28. Construction of a Machine Learning Dataset through Collaboration: The RSNA 2019 Brain CT Hemorrhage Challenge.
- Author
-
Flanders, Adam, Prevedello, Luciano, Shih, George, Halabi, Safwan, Kalpathy-Cramer, Jayashree, Ball, Robyn, Mongan, John, Stein, Anouk, Kitamura, Felipe, Lungren, Matthew, Choudhary, Gagandeep, Cala, Lesley, Coelho, Luiz, Mogensen, Monique, Morón, Fanny, Miller, Elka, Ikuta, Ichiro, Zohrabian, Vahe, McDonnell, Olivia, Lincoln, Christie, Shah, Lubdha, Joyner, David, Agarwal, Amit, Lee, Ryan, and Nath, Jaya
- Abstract
This dataset is composed of annotations of the five hemorrhage subtypes (subarachnoid, intraventricular, subdural, epidural, and intraparenchymal hemorrhage) typically encountered at brain CT.
- Published
- 2020
29. The University of California San Francisco Adult Longitudinal Post-Treatment Diffuse Glioma (UCSF-ALPTDG) MRI Dataset
- Author
-
Fields, Brandon K. K., Calabrese, Evan, Mongan, John, Cha, Soonmee, Hess, Christopher P., Sugrue, Leo P., Chang, Susan M., Luks, Tracy L., Villanueva-Meyer, Javier E., Rauschecker, Andreas M., and Rudie, Jeffrey D.
- Published
- 2024
- Full Text
- View/download PDF
30. Automated detection of moderate and large pneumothorax on frontal chest X-rays using deep convolutional neural networks: A retrospective study.
- Author
-
Taylor, Andrew G, Mielke, Clinton, and Mongan, John
- Subjects
Humans, Pneumothorax, Diagnosis, Computer-Assisted, Radiographic Image Interpretation, Computer-Assisted, Radiography, Thoracic, Prognosis, Retrospective Studies, Reproducibility of Results, Predictive Value of Tests, Automation, Databases, Factual, Deep Learning, Diagnosis, Computer-Assisted, Radiographic Image Interpretation, Radiography, Thoracic, Databases, Factual, General & Internal Medicine, Medical and Health Sciences
- Abstract
Background: Pneumothorax can precipitate a life-threatening emergency due to lung collapse and respiratory or circulatory distress. Pneumothorax is typically detected on chest X-ray; however, treatment is reliant on timely review of radiographs. Since current imaging volumes may result in long worklists of radiographs awaiting review, an automated method of prioritizing X-rays with pneumothorax may reduce time to treatment. Our objective was to create a large human-annotated dataset of chest X-rays containing pneumothorax and to train deep convolutional networks to screen for potentially emergent moderate or large pneumothorax at the time of image acquisition. Methods and findings: In all, 13,292 frontal chest X-rays (3,107 with pneumothorax) were visually annotated by radiologists. This dataset was used to train and evaluate multiple network architectures. Images showing large- or moderate-sized pneumothorax were considered positive, and those with trace or no pneumothorax were considered negative. Images showing small pneumothorax were excluded from training. Using an internal validation set (n = 1,993), we selected the 2 top-performing models; these models were then evaluated on a held-out internal test set based on area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and positive predictive value (PPV). The final internal test was performed initially on a subset with small pneumothorax excluded (as in training; n = 1,701), then on the full test set (n = 1,990), with small pneumothorax included as positive. External evaluation was performed using the National Institutes of Health (NIH) ChestX-ray14 set, a public dataset labeled for chest pathology based on text reports. All images labeled with pneumothorax were considered positive, because the NIH set does not classify pneumothorax by size. In internal testing, our "high sensitivity model" produced a sensitivity of 0.84 (95% CI 0.78-0.90), specificity of 0.90 (95% CI 0.89-0.92), and AUC of 0.94 for the test subset with small pneumothorax excluded. Our "high specificity model" showed sensitivity of 0.80 (95% CI 0.72-0.86), specificity of 0.97 (95% CI 0.96-0.98), and AUC of 0.96 for this set. PPVs were 0.45 (95% CI 0.39-0.51) and 0.71 (95% CI 0.63-0.77), respectively. Internal testing on the full set showed expected decreased performance (sensitivity 0.55, specificity 0.90, and AUC 0.82 for high sensitivity model and sensitivity 0.45, specificity 0.97, and AUC 0.86 for high specificity model). External testing using the NIH dataset showed some further performance decline (sensitivity 0.28-0.49, specificity 0.85-0.97, and AUC 0.75 for both). Due to labeling differences between internal and external datasets, these findings represent a preliminary step towards external validation. Conclusions: We trained automated classifiers to detect moderate and large pneumothorax in frontal chest X-rays at high levels of performance on held-out test data. These models may provide a high specificity screening solution to detect moderate or large pneumothorax on images collected when human review might be delayed, such as overnight. They are not intended for unsupervised diagnosis of all pneumothoraces, as many small pneumothoraces (and some larger ones) are not detected by the algorithm. Implementation studies are warranted to develop appropriate, effective clinician alerts for the potentially critical finding of pneumothorax, and to assess their impact on reducing time to treatment.
- Published
- 2018
31. Impact of PACS-EMR Integration on Radiologist Usage of the EMR
- Author
-
Mongan, John and Avrin, David
- Subjects
Biomedical and Clinical Sciences ,Clinical Sciences ,Clinical Research ,Good Health and Well Being ,Electronic Health Records ,Humans ,Radiology Information Systems ,Retrospective Studies ,Systems Integration ,Time ,PACS ,EMR ,Context integration ,Clinical history ,Nuclear Medicine & Medical Imaging ,Clinical sciences - Abstract
The purpose of this study was to objectively quantify the impact of implementing picture archiving and communication system-electronic medical record (PACS-EMR) integration on the time required to access data in the EMR and the frequency with which data are accessed by radiologists. Time to access a clinic note in the EMR was measured before and after integration with a stopwatch and compared by t test. An IRB-approved, HIPAA-compliant retrospective review of EMR access data from security audit logs was conducted for a 14-month period spanning the integration. Correlation of these data with report signatures identified the studies in which the radiologist accessed the EMR to obtain additional clinical data. Proportions of studies with EMR access were plotted and compared before and after integration using a chi-square test. Time to access the EMR decreased from 52 to 6 s (p
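The two comparisons described, a t test on access times and a chi-square test on the proportion of studies with EMR access before versus after integration, are straightforward to sketch. The timing samples and counts below are placeholders for illustration, not the study's data.

```python
# Illustrative sketch of the two comparisons described above; the timing
# samples and counts are placeholders, not the study's data.
from scipy import stats

# t test: time (s) to open a clinic note, before vs after integration
before_times = [50, 54, 52, 49, 55]          # hypothetical measurements
after_times = [6, 5, 7, 6, 6]
t_stat, p_time = stats.ttest_ind(before_times, after_times)

# chi-square: studies with any radiologist EMR access, before vs after
#                accessed  not accessed
table = [[120, 880],    # before integration (hypothetical counts)
         [310, 690]]    # after integration
chi2, p_prop, dof, _ = stats.chi2_contingency(table)
print(f"t test p={p_time:.3g}; chi-square p={p_prop:.3g}")
```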
- Published
- 2018
32. Contrast Enhanced Ultrasound as a Radiation-Free Alternative to Fluoroscopic Nephrostogram for Evaluating Ureteral Patency
- Author
-
Chi, Thomas, Usawachintachit, Manint, Weinstein, Stefanie, Kohi, Maureen P, Taylor, Andrew, Tzou, David T, Chang, Helena C, Stoller, Marshall, and Mongan, John
- Subjects
Biomedical and Clinical Sciences ,Clinical Sciences ,Biomedical Imaging ,Kidney Disease ,Clinical Research ,Patient Safety ,4.1 Discovery and preclinical testing of markers and technologies ,Detection ,screening and diagnosis ,Contrast Media ,Female ,Fluoroscopy ,Humans ,Kidney Calculi ,Male ,Middle Aged ,Nephrolithotomy ,Percutaneous ,Prospective Studies ,Treatment Outcome ,Ultrasonography ,Ureter ,Ureteral Calculi ,kidney calculi ,ultrasonography ,contrast media ,fluoroscopy ,diagnostic imaging ,Urology & Nephrology ,Clinical sciences - Abstract
Purpose: We compared contrast enhanced ultrasound and fluoroscopic nephrostography in the evaluation of ureteral patency following percutaneous nephrolithotomy. Materials and methods: This prospective cohort, noninferiority study was performed after obtaining institutional review board approval. We enrolled eligible patients with kidney and proximal ureteral stones who underwent percutaneous nephrolithotomy at our center. On postoperative day 1 patients received contrast enhanced ultrasound and fluoroscopic nephrostograms within 2 hours of each other to evaluate ureteral patency, which was the primary outcome of this study. Results: A total of 92 pairs of imaging studies were performed in 82 patients during the study period. Five study pairs were excluded due to technical errors that prevented imaging interpretation. Females slightly outnumbered males; mean ± SD age was 50.5 ± 15.9 years and mean body mass index was 29.6 ± 8.6 kg/m2. Of the remaining 87 sets of studies, 69 (79.3%) demonstrated concordant findings regarding ureteral patency for the 2 imaging techniques and 18 (20.7%) were discordant. The nephrostomy tube was removed on the same day in 15 of the 17 patients who demonstrated antegrade urine flow only on contrast enhanced ultrasound, and they had no subsequent adverse events. No adverse events were noted related to ultrasound contrast injection. While contrast enhanced ultrasound used no ionizing radiation, fluoroscopic nephrostograms delivered a mean radiation exposure dose of 2.8 ± 3.7 mGy. Conclusions: A contrast enhanced ultrasound nephrostogram can be safely performed to evaluate for ureteral patency following percutaneous nephrolithotomy. This imaging technique was mostly concordant with fluoroscopic findings. Most discordance was likely attributable to the higher sensitivity for patency of contrast enhanced ultrasound compared to fluoroscopy.
- Published
- 2017
33. PRISMA AI reporting guidelines for systematic reviews and meta-analyses on AI in healthcare
- Author
-
Cacciamani, Giovanni E., Chu, Timothy N., Sanford, Daniel I., Abreu, Andre, Duddalwar, Vinay, Oberai, Assad, Kuo, C.-C. Jay, Liu, Xiaoxuan, Denniston, Alastair K., Vasey, Baptiste, McCulloch, Peter, Wolff, Robert F., Mallett, Sue, Mongan, John, Kahn, Charles E., Jr., Sounderajah, Viknesh, Darzi, Ara, Dahm, Philipp, Moons, Karel G. M., Topol, Eric, Collins, Gary S., Moher, David, Gill, Inderbir S., and Hung, Andrew J.
- Published
- 2023
- Full Text
- View/download PDF
34. Feasibility of Antegrade Contrast-enhanced US Nephrostograms to Evaluate Ureteral Patency.
- Author
-
Chi, Thomas, Usawachintachit, Manint, Mongan, John, Kohi, Maureen P, Taylor, Andrew, Jha, Priyanka, Chang, Helena C, Stoller, Marshall, Goldstein, Ruth, and Weinstein, Stefanie
- Subjects
Ureter ,Humans ,Contrast Media ,Image Enhancement ,Ultrasonography ,Nephrostomy ,Percutaneous ,Prospective Studies ,Feasibility Studies ,Microbubbles ,Adult ,Aged ,Middle Aged ,Female ,Male ,Biomedical Imaging ,4.2 Evaluation of markers and technologies ,Nuclear Medicine & Medical Imaging ,Medical and Health Sciences - Abstract
Purpose To demonstrate the feasibility of contrast material-enhanced ultrasonographic (US) nephrostograms to assess ureteral patency after percutaneous nephrolithotomy (PCNL) in this proof-of-concept study. Materials and Methods For this HIPAA-compliant, institutional review board-approved prospective blinded pilot study, patients undergoing PCNL provided consent to undergo contrast-enhanced US and fluoroscopic nephrostograms on postoperative day 1. For contrast-enhanced US, 1.5 mL of Optison (GE Healthcare, Oslo, Norway) microbubble contrast agent solution (perflutren protein-type A microspheres) was injected via the nephrostomy tube. Unobstructed antegrade ureteral flow was defined by the presence of contrast material in the bladder. Contrast-enhanced US results were compared against those of fluoroscopic nephrostograms for concordance. Results Ten studies were performed in nine patients (four women, five men). Contrast-enhanced US demonstrated ureteral patency in eight studies and obstruction in two. One patient underwent two studies, one showing obstruction and the second showing patency. Contrast-enhanced US results were perfectly concordant with fluoroscopic nephrostogram results; concordance, evaluated with a Clopper-Pearson exact binomial test, had a 95% confidence interval of 69.2% to 100%. No complications or adverse events related to contrast-enhanced US occurred. Conclusion Contrast-enhanced US nephrostograms are simple to perform and are capable of demonstrating both patency and obstruction of the ureter. The perfect concordance with fluoroscopic results across 10 studies demonstrated here is not sufficient to establish the diagnostic accuracy of this technique, but it motivates further, larger scale investigation. If subsequent larger studies confirm these preliminary results, contrast-enhanced US may provide a safer, more convenient way to evaluate ureteral patency than fluoroscopy. © RSNA, 2016 Online supplemental material is available for this article.
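The reported 69.2%-100% interval is exactly what a Clopper-Pearson exact binomial CI gives for 10 successes in 10 trials. A minimal sketch, assuming SciPy's `binomtest` (whose `method="exact"` option implements Clopper-Pearson):

```python
# Reproducing the reported 95% CI for 10 of 10 concordant studies with a
# Clopper-Pearson exact binomial interval (a sketch, not the authors' code).
from scipy.stats import binomtest

result = binomtest(k=10, n=10)                     # 10/10 concordant
ci = result.proportion_ci(confidence_level=0.95, method="exact")
print(f"95% CI: {ci.low:.3f} to {ci.high:.3f}")    # ~0.692 to 1.000
```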
- Published
- 2017
35. Feasibility of Retrograde Ureteral Contrast Injection to Guide Ultrasonographic Percutaneous Renal Access in the Nondilated Collecting System
- Author
-
Usawachintachit, Manint, Tzou, David T, Mongan, John, Taguchi, Kazumi, Weinstein, Stefanie, and Chi, Thomas
- Subjects
Biomedical and Clinical Sciences ,Clinical Sciences ,Biomedical Imaging ,Kidney Disease ,Urologic Diseases ,Bioengineering ,Adult ,Aged ,Contrast Media ,Feasibility Studies ,Female ,Humans ,Injections ,Kidney Calculi ,Male ,Middle Aged ,Nephrostomy ,Percutaneous ,Pilot Projects ,Ultrasonography ,Interventional ,contrast-enhanced ultrasound ,image-guided therapy ,percutaneous nephrolithotomy ,renal stone ,Urology & Nephrology ,Clinical sciences - Abstract
Objectives: Ultrasound-guided percutaneous nephrolithotomy (PCNL) has become increasingly utilized. Patients with nondilated collecting systems represent a challenge: the target calix is often difficult to visualize. Here we report pilot study results for retrograde ultrasound contrast injection to aid percutaneous renal access during ultrasound-guided PCNL. Patients and methods: From April to July 2016, consecutive patients over the age of 18 years with nondilated collecting systems on preoperative imaging who presented for PCNL were enrolled. B-mode ultrasound imaging was compared with contrast-enhanced mode with simultaneous retrograde injection of Optison™ via an ipsilateral ureteral catheter. Results: Five patients (four males and one female) with renal stones underwent PCNL with retrograde ultrasound contrast injection during the study period. Mean body mass index was 28.3 ± 5.6 kg/m2 and mean stone size was 24.5 ± 12.0 mm. Under B-mode ultrasound, all patients demonstrated nondilated renal collecting systems that appeared as hyperechoic areas in which it was difficult to identify a target calix for puncture. Retrograde contrast injection facilitated delineation of all renal calices initially difficult to visualize under B-mode ultrasound. Renal puncture was then performed effectively in all cases with a mean puncture time of 55.4 ± 44.8 seconds. All PCNL procedures were completed without intraoperative complications, and no adverse events related to ultrasound contrast injection occurred. Conclusion: Retrograde ultrasound contrast injection as an aid to renal puncture during PCNL is a feasible technique. By improving visualization of the collecting system, it facilitates needle placement in challenging patients without hydronephrosis. Future larger scale studies comparing its use to the standard ultrasound-guided technique will be required to validate this concept.
- Published
- 2017
36. High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks
- Author
-
Rajkomar, Alvin, Lingam, Sneha, Taylor, Andrew G, Blum, Michael, and Mongan, John
- Subjects
Biomedical and Clinical Sciences ,Clinical Sciences ,Humans ,Neural Networks ,Computer ,Radiography ,Radiography ,Thoracic ,Random Allocation ,Retrospective Studies ,Chest radiographs ,Machine learning ,Artificial neural networks ,Computer vision ,Deep learning ,Convolutional neural network ,Nuclear Medicine & Medical Imaging ,Clinical sciences - Abstract
The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs on 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with the highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100% (95% CI 99.73-100%) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images and an augmented set of radiographs is effective in highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.
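The two techniques named above, fine-tuning an ImageNet-pretrained GoogLeNet and setting the binary cutoff with the Youden index, can be sketched as follows. This uses the current torchvision API and a placeholder data loader; it is not the authors' original pipeline.

```python
# A minimal sketch of the two ideas above: fine-tuning an ImageNet-
# pretrained GoogLeNet for binary frontal/lateral classification and
# setting the decision cutoff with the Youden index. Uses today's
# torchvision API; the authors' original pipeline is not reproduced here.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_curve

model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.aux_logits = False                  # skip auxiliary heads during training
model.aux1 = model.aux2 = None
model.fc = nn.Linear(model.fc.in_features, 1)   # one logit: P(lateral)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def fine_tune(loader):                    # loader yields (images, labels)
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images).squeeze(1), labels.float())
        loss.backward()
        optimizer.step()

def youden_threshold(val_labels, val_scores):
    # Youden index J = sensitivity + specificity - 1 = TPR - FPR;
    # pick the validation-set threshold that maximizes it.
    fpr, tpr, thresholds = roc_curve(val_labels, val_scores)
    return thresholds[(tpr - fpr).argmax()]
```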
- Published
- 2017
37. The University of California San Francisco Adult Longitudinal Post-Treatment Diffuse Glioma (UCSF-ALPTDG) MRI Dataset.
- Author
-
Fields, Brandon KK, Calabrese, Evan, Mongan, John, Cha, Soonmee, Hess, Christopher P, Sugrue, Leo P, Chang, Susan M, Luks, Tracy L, Villanueva-Meyer, Javier E, Rauschecker, Andreas M, and Rudie, Jeffrey D
- Abstract
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. The University of California San Francisco Adult Longitudinal Post-Treatment Diffuse Glioma (UCSF-ALPTDG) MRI dataset is a publicly available annotated dataset featuring multimodal brain MRIs from 298 patients with diffuse gliomas taken at two consecutive follow-ups (596 scans total), with corresponding clinical history and expert voxelwise annotations. ©RSNA, 2024.
- Published
- 2024
38. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement from the ACR, CAR, ESR, RANZCR and RSNA
- Author
-
Brady, Adrian P., primary, Allen, Bibb, additional, Chong, Jaron, additional, Kotter, Elmar, additional, Kottler, Nina, additional, Mongan, John, additional, Oakden-Rayner, Lauren, additional, dos Santos, Daniel Pinto, additional, Tang, An, additional, Wald, Christoph, additional, and Slavotinek, John, additional
- Published
- 2024
- Full Text
- View/download PDF
39. In vivo comparison of tantalum, tungsten, and bismuth enteric contrast agents to complement intravenous iodine for double‐contrast dual‐energy CT of the bowel
- Author
-
Rathnayake, Samira, Mongan, John, Torres, Andrew S, Colborn, Robert, Gao, Dong-Wei, Yeh, Benjamin M, and Fu, Yanjun
- Subjects
Engineering ,Biomedical Engineering ,Biomedical Imaging ,Digestive Diseases ,Animals ,Bismuth ,Contrast Media ,Gastrointestinal Tract ,Intestine ,Small ,Iodine ,Rabbits ,Tantalum ,Tomography ,X-Ray Computed ,Tungsten ,dual-energy CT ,CT enterography ,contrast agent ,tungsten ,bismuth ,tantalum ,small bowel ,Medicinal and Biomolecular Chemistry ,Medical Biotechnology ,Nuclear Medicine & Medical Imaging ,Biomedical engineering - Abstract
To assess the ability of dual-energy CT (DECT) to separate intravenous contrast of the bowel wall from intraluminal contrast, we scanned 16 rabbits on a clinical DECT scanner: n = 3 with only iodinated intravenous contrast, and n = 13 with double-contrast-enhanced scans using iodinated intravenous contrast and experimental non-iodinated enteric contrast agents in the bowel lumen (five bismuth, four tungsten, and four tantalum based). Representative image pairs from conventional CT images and DECT iodine density maps of small bowel (116 pairs from 232 images) were viewed by four abdominal imaging attending radiologists, who independently scored each comparison pair on a visual analog scale (-100 to +100%) for (1) preference in small bowel wall visualization and (2) preference in completeness of intraluminal enteric contrast subtraction. Median small bowel wall visualization was scored 39 and 42 percentage points (95% CI 30-44% and 36-45%, both p
- Published
- 2016
40. On the Centrality of Data: Data Resources in Radiologic Artificial Intelligence
- Author
-
Mongan, John, primary and Halabi, Safwan S., additional
- Published
- 2023
- Full Text
- View/download PDF
41. TotalSegmentator: A Gift to the Biomedical Imaging Community
- Author
-
Sebro, Ronnie, primary and Mongan, John, additional
- Published
- 2023
- Full Text
- View/download PDF
42. Can Radiologists Learn From Airport Baggage Screening?: A Survey About Using Fictional Patients for Quality Assurance
- Author
-
Phelps, Andrew, Callen, Andrew L., Marcovici, Peter, Naeger, David M., Mongan, John, and Webb, Emily M.
- Published
- 2018
- Full Text
- View/download PDF
43. Extravasated Contrast Material in Penetrating Abdominopelvic Trauma: Dual-Contrast Dual-Energy CT for Improved Diagnosis—Preliminary Results in an Animal Model
- Author
-
Mongan, John, Rathnayake, Samira, Fu, Yanjun, Gao, Dong-Wei, and Yeh, Benjamin M
- Subjects
Biomedical and Clinical Sciences ,Clinical Sciences ,Biomedical Imaging ,Abdominal Injuries ,Absorptiometry ,Photon ,Animals ,Contrast Media ,Extravasation of Diagnostic and Therapeutic Materials ,Female ,Humans ,Iodine ,Pelvis ,Pilot Projects ,Rabbits ,Radiographic Image Enhancement ,Reproducibility of Results ,Sensitivity and Specificity ,Tomography ,X-Ray Computed ,Wounds ,Penetrating ,Medical and Health Sciences ,Nuclear Medicine & Medical Imaging ,Clinical sciences - Abstract
Purpose: To compare the diagnostic performance of dual-energy (DE) computed tomography (CT) with two simultaneously administered contrast agents (hereafter, dual contrast) with that of conventional CT in the evaluation of the presence and source of extravasation in penetrating abdominopelvic trauma. Materials and methods: Institutional animal care and use committee approval was obtained, and the study was performed in accordance with National Institutes of Health guidelines for the care and use of laboratory animals. Five rabbits with bowel trauma, vascular penetrating trauma, or both were imaged with simultaneous iodinated intravenous and bismuth subsalicylate enteric contrast material at DE CT. Four attending radiologists and six radiology residents without prior DE CT experience each evaluated 10 extraluminal collections to identify the vascular and/or enteric origin of extravasation and assess their level of diagnostic confidence, first with virtual monochromatic images simulating conventional CT and then with DE CT material decomposition attenuation maps. Results: Overall accuracy of identification of source of extravasation increased from 78% with conventional CT to 92% with DE CT (157 of 200 diagnoses vs 184 of 200 diagnoses, respectively; P < .001). Nine radiologists were more accurate with DE CT; one had no change. Mean confidence increased from 67% to 81% with DE CT (P < .001). Conclusion: In a rabbit abdominopelvic trauma model, dual-contrast DE CT significantly increased accuracy and confidence in the diagnosis of vascular versus enteric extravasated contrast material.
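For paired designs like this, where the same readers evaluate the same collections under both techniques, McNemar's test is a standard way to compare accuracies (the abstract does not state which test the authors used). In the sketch below the discordant-pair counts are placeholders chosen only to match the reported marginal totals of 157 and 184 correct diagnoses.

```python
# Hypothetical sketch: comparing paired per-diagnosis accuracy (the same
# readers and collections under both techniques) with McNemar's test.
# The cell counts are placeholders consistent with the reported marginals
# (157/200 correct with conventional CT, 184/200 with DE CT).
from statsmodels.stats.contingency_tables import mcnemar

#                DECT correct   DECT incorrect
table = [[155, 2],    # conventional CT correct   (placeholder split)
         [29, 14]]    # conventional CT incorrect (placeholder split)
result = mcnemar(table, exact=True)
print(f"McNemar p = {result.pvalue:.4g}")
```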
- Published
- 2013
44. In Vivo Differentiation of Complementary Contrast Media at Dual-Energy CT
- Author
-
Mongan, John, Rathnayake, Samira, Fu, Yanjun, Wang, Runtang, Jones, Ella F, Gao, Dong-Wei, and Yeh, Benjamin M
- Subjects
Biomedical and Clinical Sciences ,Clinical Sciences ,Biomedical Imaging ,Administration ,Oral ,Animals ,Bismuth ,Contrast Media ,Feasibility Studies ,Image Processing ,Computer-Assisted ,Injections ,Intravenous ,Iohexol ,Organometallic Compounds ,Phantoms ,Imaging ,Rabbits ,Salicylates ,Tomography ,X-Ray Computed ,Tungsten Compounds ,Medical and Health Sciences ,Nuclear Medicine & Medical Imaging ,Clinical sciences - Abstract
Purpose: To evaluate the feasibility of using a commercially available clinical dual-energy computed tomographic (CT) scanner to differentiate the in vivo enhancement due to two simultaneously administered contrast media with complementary x-ray attenuation ratios. Materials and methods: Approval from the institutional animal care and use committee was obtained, and National Institutes of Health guidelines for the care and use of laboratory animals were observed. Dual-energy CT was performed in a set of iodine and tungsten solution phantoms and in a rabbit in which iodinated intravenous and bismuth subsalicylate oral contrast media were administered. In addition, a second rabbit was studied after intravenous administration of iodinated and tungsten cluster contrast media. Images were processed to produce virtual monochromatic images that simulated the appearance of conventional single-energy scans, as well as material decomposition images that separate the attenuation due to each contrast medium. Results: Clear separation of each of the contrast media pairs was seen in the phantom and in both in vivo animal models. Separation of bowel lumen from vascular contrast medium allowed visualization of bowel wall enhancement that was obscured by intraluminal bowel contrast medium on conventional CT scans. Separation of two vascular contrast media in different vascular phases enabled acquisition of a perfectly coregistered CT angiogram and venous phase-enhanced CT scan simultaneously in a single examination. Conclusion: Commercially available clinical dual-energy CT scanners can help differentiate the enhancement of selected pairs of complementary contrast media in vivo.
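Material decomposition of the kind described rests on a small linear model: at each voxel, the measured attenuation at the low and high tube energies is treated as a weighted sum of the two agents' energy-dependent attenuation, and inverting that 2x2 system yields per-material concentration maps. A conceptual sketch, with illustrative coefficients rather than scanner calibration values:

```python
# Conceptual sketch of two-material decomposition: at each voxel the
# measured attenuation at low and high kVp is modeled as a linear mix of
# the two agents, and the 2x2 system is inverted. The coefficients are
# illustrative placeholders, not scanner calibration values.
import numpy as np

# Rows: [low-kVp, high-kVp] attenuation per unit concentration;
# columns: [iodine, tungsten]. Iodine responds more at low energy,
# tungsten relatively more at high energy (complementary ratios).
A = np.array([[30.0, 12.0],
              [15.0, 20.0]])

# Measured dual-energy attenuation for a few voxels: [mu_low, mu_high]
voxels = np.array([[42.0, 35.0],
                   [60.0, 30.0]])

# Solve A @ c = mu for each voxel to get per-material concentrations
concentrations = np.linalg.solve(A, voxels.T).T
for mu, c in zip(voxels, concentrations):
    print(f"mu={mu} -> iodine={c[0]:.2f}, tungsten={c[1]:.2f}")
```

Because the two agents attenuate most strongly at different energies, the system is well conditioned and the agents separate cleanly, which is what allows bowel wall enhancement to be read through intraluminal contrast.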
- Published
- 2012
45. PRISMA AI reporting guidelines for systematic reviews and meta-analyses on AI in healthcare
- Author
-
Cacciamani, Giovanni E., Chu, Timothy N., Sanford, Daniel I., Abreu, Andre, Duddalwar, Vinay, Oberai, Assad, Kuo, C.-C. Jay, Liu, Xiaoxuan, Denniston, Alastair K., Vasey, Baptiste, McCulloch, Peter, Wolff, Robert F., Mallett, Sue, Mongan, John, Kahn, Charles E., Jr., Sounderajah, Viknesh, Darzi, Ara, Dahm, Philipp, Moons, Karel G. M., Topol, Eric, Collins, Gary S., Moher, David, Gill, Inderbir S., and Hung, Andrew J.
- Published
- 2023
46. The Brain Tumor Segmentation (BraTS) Challenge 2023: Local Synthesis of Healthy Brain Tissue via Inpainting
- Author
-
Kofler, Florian, Meissen, Felix, Steinbauer, Felix, Graf, Robert, Oswald, Eva, de la Rosa, Ezequiel, Li, Hongwei Bran, Baid, Ujjwal, Hoelzl, Florian, Turgut, Oezguen, Horvath, Izabela, Waldmannstetter, Diana, Bukas, Christina, Adewole, Maruf, Anwar, Syed Muhammad, Janas, Anastasia, Kazerooni, Anahita Fathi, LaBella, Dominic, Moawad, Ahmed W, Farahani, Keyvan, Eddy, James, Bergquist, Timothy, Chung, Verena, Shinohara, Russell Takeshi, Dako, Farouk, Wiggins, Walter, Reitman, Zachary, Wang, Chunhao, Liu, Xinyang, Jiang, Zhifan, Familiar, Ariana, Conte, Gian-Marco, Johanson, Elaine, Meier, Zeke, Davatzikos, Christos, Freymann, John, Kirby, Justin, Bilello, Michel, Fathallah-Shaykh, Hassan M, Wiest, Roland, Kirschke, Jan, Colen, Rivka R, Kotrotsou, Aikaterini, Lamontagne, Pamela, Marcus, Daniel, Milchenko, Mikhail, Nazeri, Arash, Weber, Marc-André, Mahajan, Abhishek, Mohan, Suyash, Mongan, John, Hess, Christopher, Cha, Soonmee, Villanueva-Meyer, Javier, Colak, Errol, Crivellaro, Priscila, Jakab, Andras, Albrecht, Jake, Anazodo, Udunna, Aboian, Mariam, Iglesias, Juan Eugenio, Van Leemput, Koen, Bakas, Spyridon, Rueckert, Daniel, Wiestler, Benedikt, Ezhov, Ivan, Piraud, Marie, and Menze, Bjoern
- Abstract
A myriad of algorithms for the automatic analysis of brain MR images is available to support clinicians in their decision-making. For brain tumor patients, the image acquisition time series typically starts with a scan that is already pathological. This poses problems, as many algorithms are designed to analyze healthy brains and provide no guarantees for images featuring lesions. Examples include but are not limited to algorithms for brain anatomy parcellation, tissue segmentation, and brain extraction. To solve this dilemma, we introduce the BraTS 2023 inpainting challenge. Here, the participants' task is to explore inpainting techniques to synthesize healthy brain scans from lesioned ones. The following manuscript contains the task formulation, dataset, and submission procedure. Later it will be updated to summarize the findings of the challenge. The challenge is organized as part of the BraTS 2023 challenge hosted at the MICCAI 2023 conference in Vancouver, Canada.
- Published
- 2023
47. Behavioral “nudges” in the electronic health record to reduce waste and misuse: 3 interventions
- Author
-
Grouse, Carrie K, primary, Waung, Maggie W, additional, Holmgren, A Jay, additional, Mongan, John, additional, Neinstein, Aaron, additional, Josephson, S Andrew, additional, and Khanna, Raman R, additional
- Published
- 2022
- Full Text
- View/download PDF
48. Deep Learning to Simulate Contrast-enhanced Breast MRI of Invasive Breast Cancer
- Author
-
Chung, Maggie, primary, Calabrese, Evan, additional, Mongan, John, additional, Ray, Kimberly M., additional, Hayward, Jessica H., additional, Kelil, Tatiana, additional, Sieberg, Ryan, additional, Hylton, Nola, additional, Joe, Bonnie N., additional, and Lee, Amie Y., additional
- Published
- 2022
- Full Text
- View/download PDF
49. Behavioral "nudges" in the electronic health record to reduce waste and misuse: 3 interventions.
- Author
-
Grouse, Carrie K, Waung, Maggie W, Holmgren, A Jay, Mongan, John, Neinstein, Aaron, Josephson, S Andrew, and Khanna, Raman R
- Abstract
Electronic health records (EHRs) offer decision support in the form of alerts, which are often though not always interruptive. These alerts, though sometimes effective, can come at the cost of high cognitive burden and workflow disruption. Less well studied is the design of the EHR itself—the ordering provider's "choice architecture"—which "nudges" users toward alternatives, sometimes unintentionally toward waste and misuse, but ideally intentionally toward better practice. We studied 3 different workflows at our institution where the existing choice architecture was potentially nudging providers toward erroneous decisions, waste, and misuse in the form of inappropriate laboratory work, incorrectly specified computerized tomographic imaging, and excessive benzodiazepine dosing for imaging-related sedation. We changed the architecture to nudge providers toward better practice and found that the 3 nudges were successful to varying degrees in reducing erroneous decision-making and mitigating waste and misuse.
- Published
- 2023
- Full Text
- View/download PDF
50. Imaging AI in Practice: Introducing the Special Issue
- Author
-
Mongan, John, primary, Vagal, Achala, additional, and Wu, Carol C., additional
- Published
- 2022
- Full Text
- View/download PDF