47 results on '"Mongan, John"'
Search Results
2. Clinical, Cultural, Computational, and Regulatory Considerations to Deploy AI in Radiology: Perspectives of RSNA and MICCAI Experts.
- Author
Linguraru, Marius, Bakas, Spyridon, Aboian, Mariam, Chang, Peter, Flanders, Adam, Kalpathy-Cramer, Jayashree, Kitamura, Felipe, Lungren, Matthew, Mongan, John, Prevedello, Luciano, Summers, Ronald, Wu, Carol, Adewole, Maruf, and Kahn, Charles
- Subjects
Adults and Pediatrics, Computer Applications–General (Informatics), Diagnosis, Prognosis, Artificial Intelligence, Humans, Radiology, Societies, Medical
- Abstract
The Radiological Society of North America (RSNA) and the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society have led a series of joint panels and seminars focused on the present impact and future directions of artificial intelligence (AI) in radiology. These conversations have collected viewpoints from multidisciplinary experts in radiology, medical imaging, and machine learning on the current clinical penetration of AI technology in radiology and how it is impacted by trust, reproducibility, explainability, and accountability. The collective points, both practical and philosophical, define the cultural changes for radiologists and AI scientists working together and describe the challenges ahead for AI technologies to meet broad approval. This article presents the perspectives of experts from MICCAI and RSNA on the clinical, cultural, computational, and regulatory considerations, coupled with recommended reading materials, essential to adopt AI technology successfully in radiology and, more generally, in clinical practice. The report emphasizes the importance of collaboration to improve clinical deployment, highlights the need to integrate clinical and medical imaging data, and introduces strategies to ensure smooth and incentivized integration. Keywords: Adults and Pediatrics, Computer Applications-General (Informatics), Diagnosis, Prognosis © RSNA, 2024.
- Published
- 2024
3. The University of California San Francisco Adult Longitudinal Post-Treatment Diffuse Glioma MRI Dataset.
- Author
Fields, Brandon KK, Calabrese, Evan, Mongan, John, Cha, Soonmee, Hess, Christopher P, Sugrue, Leo P, Chang, Susan M, Luks, Tracy L, Villanueva-Meyer, Javier E, Rauschecker, Andreas M, and Rudie, Jeffrey D
- Subjects
Language, Communication and Culture, Linguistics, Human Society, Neurosciences, Brain Cancer, Cancer, Brain Disorders, Rare Diseases, Humans, Glioma, Magnetic Resonance Imaging, Brain Neoplasms, Female, Male, Middle Aged, Adult, Longitudinal Studies, San Francisco, Aged, Artificial Intelligence, Deep Learning, Diffuse Glioma, Neuro-Oncology, Resection Cavity
- Abstract
Supplemental material is available for this article.
- Published
- 2024
4. The RSNA Abdominal Traumatic Injury CT (RATIC) Dataset
- Author
Rudie, Jeffrey D., Lin, Hui-Ming, Ball, Robyn L., Jalal, Sabeena, Prevedello, Luciano M., Nicolaou, Savvas, Marinelli, Brett S., Flanders, Adam E., Magudia, Kirti, Shih, George, Davis, Melissa A., Mongan, John, Chang, Peter D., Berger, Ferco H., Hermans, Sebastiaan, Law, Meng, Richards, Tyler, Grunz, Jan-Peter, Kunz, Andreas Steven, Mathur, Shobhit, Galea-Soler, Sandro, Chung, Andrew D., Afat, Saif, Kuo, Chin-Chi, Aweidah, Layal, Campos, Ana Villanueva, Somasundaram, Arjuna, Tijmes, Felipe Antonio Sanchez, Jantarangkoon, Attaporn, Bittencourt, Leonardo Kayat, Brassil, Michael, Hajjami, Ayoub El, Dogan, Hakan, Becircic, Muris, Bharatkumar, Agrahara G., Farina, Eduardo Moreno Júdice de Mattos, Group, Dataset Curator, Group, Dataset Contributor, Group, Dataset Annotator, and Colak, Errol
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
The RSNA Abdominal Traumatic Injury CT (RATIC) dataset is the largest publicly available collection of adult abdominal CT studies annotated for traumatic injuries. This dataset includes 4,274 studies from 23 institutions across 14 countries. The dataset is freely available for non-commercial use via Kaggle at https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection. Created for the RSNA 2023 Abdominal Trauma Detection competition, the dataset encourages the development of advanced machine learning models for detecting abdominal injuries on CT scans. The dataset encompasses detection and classification of traumatic injuries across multiple organs, including the liver, spleen, kidneys, bowel, and mesentery. Annotations were created by expert radiologists from the American Society of Emergency Radiology (ASER) and Society of Abdominal Radiology (SAR). The dataset is annotated at multiple levels, including the presence of injuries in three solid organs with injury grading, image-level annotations for active extravasations and bowel injury, and voxelwise segmentations of each of the potentially injured organs. With the release of this dataset, we hope to facilitate research and development in machine learning and abdominal trauma that can lead to improved patient care and outcomes., Comment: 40 pages, 2 figures, 3 tables
- Published
- 2024
5. The 2024 Brain Tumor Segmentation (BraTS) Challenge: Glioma Segmentation on Post-treatment MRI
- Author
de Verdier, Maria Correia, Saluja, Rachit, Gagnon, Louis, LaBella, Dominic, Baid, Ujjwall, Tahon, Nourel Hoda, Foltyn-Dumitru, Martha, Zhang, Jikai, Alafif, Maram, Baig, Saif, Chang, Ken, D'Anna, Gennaro, Deptula, Lisa, Gupta, Diviya, Haider, Muhammad Ammar, Hussain, Ali, Iv, Michael, Kontzialis, Marinos, Manning, Paul, Moodi, Farzan, Nunes, Teresa, Simon, Aaron, Sollmann, Nico, Vu, David, Adewole, Maruf, Albrecht, Jake, Anazodo, Udunna, Chai, Rongrong, Chung, Verena, Faghani, Shahriar, Farahani, Keyvan, Kazerooni, Anahita Fathi, Iglesias, Eugenio, Kofler, Florian, Li, Hongwei, Linguraru, Marius George, Menze, Bjoern, Moawad, Ahmed W., Velichko, Yury, Wiestler, Benedikt, Altes, Talissa, Basavasagar, Patil, Bendszus, Martin, Brugnara, Gianluca, Cho, Jaeyoung, Dhemesh, Yaseen, Fields, Brandon K. K., Garrett, Filip, Gass, Jaime, Hadjiiski, Lubomir, Hattangadi-Gluth, Jona, Hess, Christopher, Houk, Jessica L., Isufi, Edvin, Layfield, Lester J., Mastorakos, George, Mongan, John, Nedelec, Pierre, Nguyen, Uyen, Oliva, Sebastian, Pease, Matthew W., Rastogi, Aditya, Sinclair, Jason, Smith, Robert X., Sugrue, Leo P., Thacker, Jonathan, Vidic, Igor, Villanueva-Meyer, Javier, White, Nathan S., Aboian, Mariam, Conte, Gian Marco, Dale, Anders, Sabuncu, Mert R., Seibert, Tyler M., Weinberg, Brent, Abayazeed, Aly, Huang, Raymond, Turk, Sevcan, Rauschecker, Andreas M., Farid, Nikdokht, Vollmuth, Philipp, Nada, Ayman, Bakas, Spyridon, Calabrese, Evan, and Rudie, Jeffrey D.
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Gliomas are the most common malignant primary brain tumors in adults and one of the deadliest types of cancer. There are many challenges in treatment and monitoring due to the genetic diversity and high intrinsic heterogeneity in appearance, shape, histology, and treatment response. Treatments include surgery, radiation, and systemic therapies, with magnetic resonance imaging (MRI) playing a key role in treatment planning and post-treatment longitudinal assessment. The 2024 Brain Tumor Segmentation (BraTS) challenge on post-treatment glioma MRI will provide a community standard and benchmark for state-of-the-art automated segmentation models based on the largest expert-annotated post-treatment glioma MRI dataset. Challenge competitors will develop automated segmentation models to predict four distinct tumor sub-regions consisting of enhancing tissue (ET), surrounding non-enhancing T2/fluid-attenuated inversion recovery (FLAIR) hyperintensity (SNFH), non-enhancing tumor core (NETC), and resection cavity (RC). Models will be evaluated on separate validation and test datasets using standardized performance metrics utilized across the BraTS 2024 cluster of challenges, including lesion-wise Dice Similarity Coefficient and Hausdorff Distance. Models developed during this challenge will advance the field of automated MRI segmentation and contribute to their integration into clinical practice, ultimately enhancing patient care., Comment: 10 pages, 4 figures, 1 table
- Published
- 2024
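The lesion-wise Dice Similarity Coefficient named as an evaluation metric in the BraTS abstract above reduces, for a single lesion, to the standard voxel-overlap formula. The following NumPy sketch illustrates that formula on toy binary masks; the `dice` helper and the masks are assumptions for exposition, not the official challenge evaluation code.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement by convention
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)

# Toy 2D "segmentations": a 4-voxel square vs. a 6-voxel rectangle overlapping it.
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True  # 6 voxels, 4 shared with a
print(round(dice(a, b), 3))  # 2*4 / (4 + 6) = 0.8
```

In the lesion-wise variant used across the BraTS 2024 challenges, this overlap is computed per matched lesion rather than over the whole volume, penalizing missed and spurious lesions separately.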
6. Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge
- Author
LaBella, Dominic, Baid, Ujjwal, Khanna, Omaditya, McBurney-Lin, Shan, McLean, Ryan, Nedelec, Pierre, Rashid, Arif, Tahon, Nourel Hoda, Altes, Talissa, Bhalerao, Radhika, Dhemesh, Yaseen, Godfrey, Devon, Hilal, Fathi, Floyd, Scott, Janas, Anastasia, Kazerooni, Anahita Fathi, Kirkpatrick, John, Kent, Collin, Kofler, Florian, Leu, Kevin, Maleki, Nazanin, Menze, Bjoern, Pajot, Maxence, Reitman, Zachary J., Rudie, Jeffrey D., Saluja, Rachit, Velichko, Yury, Wang, Chunhao, Warman, Pranav, Adewole, Maruf, Albrecht, Jake, Anazodo, Udunna, Anwar, Syed Muhammad, Bergquist, Timothy, Chen, Sully Francis, Chung, Verena, Conte, Gian-Marco, Dako, Farouk, Eddy, James, Ezhov, Ivan, Khalili, Nastaran, Iglesias, Juan Eugenio, Jiang, Zhifan, Johanson, Elaine, Van Leemput, Koen, Li, Hongwei Bran, Linguraru, Marius George, Liu, Xinyang, Mahtabfar, Aria, Meier, Zeke, Moawad, Ahmed W., Mongan, John, Piraud, Marie, Shinohara, Russell Takeshi, Wiggins, Walter F., Abayazeed, Aly H., Akinola, Rachel, Jakab, András, Bilello, Michel, de Verdier, Maria Correia, Crivellaro, Priscila, Davatzikos, Christos, Farahani, Keyvan, Freymann, John, Hess, Christopher, Huang, Raymond, Lohmann, Philipp, Moassefi, Mana, Pease, Matthew W., Vollmuth, Phillipp, Sollmann, Nico, Diffley, David, Nandolia, Khanak K., Warren, Daniel I., Hussain, Ali, Fehringer, Pascal, Bronstein, Yulia, Deptula, Lisa, Stein, Evan G., Taherzadeh, Mahsa, de Oliveira, Eduardo Portela, Haughey, Aoife, Kontzialis, Marinos, Saba, Luca, Turner, Benjamin, Brüßeler, Melanie M. 
T., Ansari, Shehbaz, Gkampenis, Athanasios, Weiss, David Maximilian, Mansour, Aya, Shawali, Islam H., Yordanov, Nikolay, Stein, Joel M., Hourani, Roula, Moshebah, Mohammed Yahya, Abouelatta, Ahmed Magdy, Rizvi, Tanvir, Willms, Klara, Martin, Dann C., Okar, Abdullah, D'Anna, Gennaro, Taha, Ahmed, Sharifi, Yasaman, Faghani, Shahriar, Kite, Dominic, Pinho, Marco, Haider, Muhammad Ammar, Aristizabal, Alejandro, Karargyris, Alexandros, Kassem, Hasan, Pati, Sarthak, Sheller, Micah, Alonso-Basanta, Michelle, Villanueva-Meyer, Javier, Rauschecker, Andreas M., Nada, Ayman, Aboian, Mariam, Flanders, Adam E., Wiestler, Benedikt, Bakas, Spyridon, and Calabrese, Evan
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning
- Abstract
We describe the design and results from the BraTS 2023 Intracranial Meningioma Segmentation Challenge. The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas, which are typically benign extra-axial tumors with diverse radiologic and anatomical presentation and a propensity for multiplicity. Nine participating teams each developed deep-learning automated segmentation models using image data from the largest multi-institutional systematically expert annotated multilabel multi-sequence meningioma MRI dataset to date, which included 1000 training set cases, 141 validation set cases, and 283 hidden test set cases. Each case included T2, T2/FLAIR, T1, and T1Gd brain MRI sequences with associated tumor compartment labels delineating enhancing tumor, non-enhancing tumor, and surrounding non-enhancing T2/FLAIR hyperintensity. Participant automated segmentation models were evaluated and ranked based on a scoring system evaluating lesion-wise metrics including dice similarity coefficient (DSC) and 95% Hausdorff Distance. The top ranked team had a lesion-wise median dice similarity coefficient (DSC) of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor, respectively and a corresponding average DSC of 0.899, 0.904, and 0.871, respectively. These results serve as state-of-the-art benchmarks for future pre-operative meningioma automated segmentation algorithms. Additionally, we found that 1286 of 1424 cases (90.3%) had at least 1 compartment voxel abutting the edge of the skull-stripped image edge, which requires further investigation into optimal pre-processing face anonymization steps., Comment: 16 pages, 11 tables, 10 figures, MICCAI
- Published
- 2024
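The 95% Hausdorff Distance used alongside Dice to rank the meningioma challenge entries above is the 95th percentile of nearest-neighbour distances between two segmentations' surface points. A simplified point-set illustration follows; the `hd95` helper and the toy point sets are assumptions for exposition (the challenge pipeline operates on extracted mask boundaries and may differ in detail).

```python
import numpy as np

def hd95(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between point sets of shape (N, 2) and (M, 2)."""
    # Pairwise Euclidean distances between every point in A and every point in B.
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # distance from each point in A to its nearest point in B
    b_to_a = d.min(axis=0)  # and vice versa
    return float(np.percentile(np.concatenate([a_to_b, b_to_a]), 95))

square = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
shifted = square + [0.5, 0.0]  # same square, shifted right by half a unit
print(hd95(square, shifted))   # every corner is 0.5 from its nearest neighbour
```

Taking the 95th percentile rather than the maximum makes the metric robust to a few outlier boundary voxels, which is why BraTS reports it instead of the classical Hausdorff distance.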
7. Lessons Learned in Building Expertly Annotated Multi-Institution Datasets and Hosting the RSNA AI Challenges.
- Author
Kitamura, Felipe, Prevedello, Luciano, Colak, Errol, Halabi, Safwan, Lungren, Matthew, Ball, Robyn, Kalpathy-Cramer, Jayashree, Kahn, Charles, Richards, Tyler, Shih, George, Lin, Hui, Andriole, Katherine, Vazirabad, Maryam, Erickson, Bradley, Flanders, Adam, Talbott, Jason, and Mongan, John
- Subjects
Artificial Intelligence, Use of AI in Education, Humans, Artificial Intelligence, Radiology, Diagnostic Imaging, Societies, Medical, North America
- Abstract
The Radiological Society of North America (RSNA) has held artificial intelligence competitions to tackle real-world medical imaging problems at least annually since 2017. This article examines the challenges and processes involved in organizing these competitions, with a specific emphasis on the creation and curation of high-quality datasets. The collection of diverse and representative medical imaging data involves dealing with issues of patient privacy and data security. Furthermore, ensuring quality and consistency in data, which includes expert labeling and accounting for various patient and imaging characteristics, necessitates substantial planning and resources. Overcoming these obstacles requires meticulous project management and adherence to strict timelines. The article also highlights the potential of crowdsourced annotation to progress medical imaging research. Through the RSNA competitions, an effective global engagement has been realized, resulting in innovative solutions to complex medical imaging problems, thus potentially transforming health care by enhancing diagnostic accuracy and patient outcomes. Keywords: Use of AI in Education, Artificial Intelligence © RSNA, 2024.
- Published
- 2024
8. The University of California San Francisco Brain Metastases Stereotactic Radiosurgery (UCSF-BMSR) MRI Dataset.
- Author
Rudie, Jeffrey, Saluja, Rachit, Weiss, David, Nedelec, Pierre, Calabrese, Evan, Colby, John, Laguna, Benjamin, Rauschecker, Andreas, Sugrue, Leo, Hess, Christopher, Mongan, John, Braunstein, Steve, and Villanueva-Meyer, Javier
- Subjects
Artificial Intelligence, Brain Metastases, MRI, Public Datasets, Humans, Radiosurgery, San Francisco, Brain Neoplasms, Magnetic Resonance Imaging
- Abstract
Supplemental material is available for this article.
- Published
- 2024
9. A Generalizable Deep Learning System for Cardiac MRI
- Author
Shad, Rohan, Zakka, Cyril, Kaur, Dhamanpreet, Fong, Robyn, Filice, Ross Warren, Mongan, John, Kalianos, Kimberly, Khandwala, Nishith, Eng, David, Leipzig, Matthew, Witschey, Walter, de Feria, Alejandro, Ferrari, Victor, Ashley, Euan, Acker, Michael A., Langlotz, Curtis, and Hiesinger, William
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning, I.2.10
- Abstract
Cardiac MRI allows for a comprehensive assessment of myocardial structure, function, and tissue characteristics. Here we describe a foundational vision system for cardiac MRI, capable of representing the breadth of human cardiovascular disease and health. Our deep learning model is trained via self-supervised contrastive learning, by which visual concepts in cine-sequence cardiac MRI scans are learned from the raw text of the accompanying radiology reports. We train and evaluate our model on data from four large academic clinical institutions in the United States. We additionally showcase the performance of our models on the UK BioBank, and two additional publicly available external datasets. We explore emergent zero-shot capabilities of our system, and demonstrate remarkable performance across a range of tasks; including the problem of left ventricular ejection fraction regression, and the diagnosis of 35 different conditions such as cardiac amyloidosis and hypertrophic cardiomyopathy. We show that our deep learning system is capable of not only understanding the staggering complexity of human cardiovascular disease, but can be directed towards clinical problems of interest yielding impressive, clinical grade diagnostic accuracy with a fraction of the training data typically required for such tasks., Comment: 21 page main manuscript, 4 figures. Supplementary Appendix and code will be made available on publication
- Published
- 2023
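The self-supervised contrastive training described in the cardiac MRI abstract above pairs each imaging embedding with the embedding of its report text and pulls matched pairs together in a shared space. An InfoNCE-style loss of the kind commonly used for such image-text pretraining can be sketched as follows; the `info_nce` helper, batch size, and temperature are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def info_nce(img_emb: np.ndarray, txt_emb: np.ndarray, temp: float = 0.07) -> float:
    """Symmetric contrastive loss over a batch of matched image/text embedding pairs."""
    # L2-normalise so dot products are cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temp          # (batch, batch); matched pairs sit on the diagonal
    idx = np.arange(len(logits))
    # Cross-entropy against the diagonal, in both retrieval directions.
    ls_i2t = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    ls_t2i = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    return float(-(ls_i2t[idx, idx].mean() + ls_t2i[idx, idx].mean()) / 2)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))   # stand-in "image" embeddings
y = x[::-1].copy()            # reversed rows: every image paired with the wrong text
print(info_nce(x, x) < info_nce(x, y))  # correctly matched pairs give a lower loss: True
```

The loss rewards the model only when each scan is more similar to its own report than to every other report in the batch, which is what yields the zero-shot behaviour the abstract describes.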
10. FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare
- Author
Lekadir, Karim, Feragen, Aasa, Fofanah, Abdul Joseph, Frangi, Alejandro F, Buyx, Alena, Emelie, Anais, Lara, Andrea, Porras, Antonio R, Chan, An-Wen, Navarro, Arcadi, Glocker, Ben, Botwe, Benard O, Khanal, Bishesh, Beger, Brigit, Wu, Carol C, Cintas, Celia, Langlotz, Curtis P, Rueckert, Daniel, Mzurikwao, Deogratias, Fotiadis, Dimitrios I, Zhussupov, Doszhan, Ferrante, Enzo, Meijering, Erik, Weicken, Eva, González, Fabio A, Asselbergs, Folkert W, Prior, Fred, Krestin, Gabriel P, Collins, Gary, Tegenaw, Geletaw S, Kaissis, Georgios, Misuraca, Gianluca, Tsakou, Gianna, Dwivedi, Girish, Kondylakis, Haridimos, Jayakody, Harsha, Woodruf, Henry C, Mayer, Horst Joachim, Aerts, Hugo JWL, Walsh, Ian, Chouvarda, Ioanna, Buvat, Irène, Tributsch, Isabell, Rekik, Islem, Duncan, James, Kalpathy-Cramer, Jayashree, Zahir, Jihad, Park, Jinah, Mongan, John, Gichoya, Judy W, Schnabel, Julia A, Kushibar, Kaisar, Riklund, Katrine, Mori, Kensaku, Marias, Kostas, Amugongo, Lameck M, Fromont, Lauren A, Maier-Hein, Lena, Alberich, Leonor Cerdá, Rittner, Leticia, Phiri, Lighton, Marrakchi-Kacem, Linda, Donoso-Bach, Lluís, Martí-Bonmatí, Luis, Cardoso, M Jorge, Bobowicz, Maciej, Shabani, Mahsa, Tsiknakis, Manolis, Zuluaga, Maria A, Bielikova, Maria, Fritzsche, Marie-Christine, Camacho, Marina, Linguraru, Marius George, Wenzel, Markus, De Bruijne, Marleen, Tolsgaard, Martin G, Ghassemi, Marzyeh, Ashrafuzzaman, Md, Goisauf, Melanie, Yaqub, Mohammad, Abadía, Mónica Cano, Mahmoud, Mukhtar M E, Elattar, Mustafa, Rieke, Nicola, Papanikolaou, Nikolaos, Lazrak, Noussair, Díaz, Oliver, Salvado, Olivier, Pujol, Oriol, Sall, Ousmane, Guevara, Pamela, Gordebeke, Peter, Lambin, Philippe, Brown, Pieta, Abolmaesumi, Purang, Dou, Qi, Lu, Qinghua, Osuala, Richard, Nakasi, Rose, Zhou, S Kevin, Napel, Sandy, Colantonio, Sara, Albarqouni, Shadi, Joshi, Smriti, Carter, Stacy, Klein, Stefan, Petersen, Steffen E, Aussó, Susanna, Awate, Suyash, Raviv, Tammy Riklin, Cook, Tessa, Mutsvangwa, Tinashe E M, Rogers, 
Wendy A, Niessen, Wiro J, Puig-Bosch, Xènia, Zeng, Yi, Mohammed, Yunusa G, Aquino, Yves Saint James, Salahuddin, Zohaib, and Starmans, Martijn P A
- Subjects
Computer Science - Computers and Society, Computer Science - Artificial Intelligence, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning, I.2.0, I.4.0, I.5.0
- Abstract
Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 inter-disciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework was established based on 6 guiding principles for trustworthy AI in healthcare, i.e. Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices were defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline which provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account in proof-of-concept stages to facilitate future translation towards clinical practice of medical AI.
- Published
- 2023
11. The Brain Tumor Segmentation (BraTS) Challenge 2023: Brain MR Image Synthesis for Tumor Segmentation (BraSyn)
- Author
Li, Hongwei Bran, Conte, Gian Marco, Anwar, Syed Muhammad, Kofler, Florian, Ezhov, Ivan, van Leemput, Koen, Piraud, Marie, Diaz, Maria, Cole, Byrone, Calabrese, Evan, Rudie, Jeff, Meissen, Felix, Adewole, Maruf, Janas, Anastasia, Kazerooni, Anahita Fathi, LaBella, Dominic, Moawad, Ahmed W., Farahani, Keyvan, Eddy, James, Bergquist, Timothy, Chung, Verena, Shinohara, Russell Takeshi, Dako, Farouk, Wiggins, Walter, Reitman, Zachary, Wang, Chunhao, Liu, Xinyang, Jiang, Zhifan, Familiar, Ariana, Johanson, Elaine, Meier, Zeke, Davatzikos, Christos, Freymann, John, Kirby, Justin, Bilello, Michel, Fathallah-Shaykh, Hassan M., Wiest, Roland, Kirschke, Jan, Colen, Rivka R., Kotrotsou, Aikaterini, Lamontagne, Pamela, Marcus, Daniel, Milchenko, Mikhail, Nazeri, Arash, Weber, Marc André, Mahajan, Abhishek, Mohan, Suyash, Mongan, John, Hess, Christopher, Cha, Soonmee, Villanueva-Meyer, Javier, Colak, Errol, Crivellaro, Priscila, Jakab, Andras, Albrecht, Jake, Anazodo, Udunna, Aboian, Mariam, Yu, Thomas, Baid, Ujjwal, Bakas, Spyridon, Linguraru, Marius George, Menze, Bjoern, Iglesias, Juan Eugenio, and Wiestler, Benedikt
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Computer Vision and Pattern Recognition
- Abstract
Automated brain tumor segmentation methods have become well-established and reached performance levels offering clear clinical utility. These methods typically rely on four input magnetic resonance imaging (MRI) modalities: T1-weighted images with and without contrast enhancement, T2-weighted images, and FLAIR images. However, some sequences are often missing in clinical practice due to time constraints or image artifacts, such as patient motion. Consequently, the ability to substitute missing modalities and gain segmentation performance is highly desirable and necessary for the broader adoption of these algorithms in the clinical routine. In this work, we present the establishment of the Brain MR Image Synthesis Benchmark (BraSyn) in conjunction with the Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2023. The primary objective of this challenge is to evaluate image synthesis methods that can realistically generate missing MRI modalities when multiple available images are provided. The ultimate aim is to facilitate automated brain tumor segmentation pipelines. The image dataset used in the benchmark is diverse and multi-modal, created through collaboration with various hospitals and research institutions., Comment: Technical report of BraSyn
- Published
- 2023
12. The Brain Tumor Segmentation (BraTS) Challenge 2023: Local Synthesis of Healthy Brain Tissue via Inpainting
- Author
Kofler, Florian, Meissen, Felix, Steinbauer, Felix, Graf, Robert, Oswald, Eva, de da Rosa, Ezequiel, Li, Hongwei Bran, Baid, Ujjwal, Hoelzl, Florian, Turgut, Oezguen, Horvath, Izabela, Waldmannstetter, Diana, Bukas, Christina, Adewole, Maruf, Anwar, Syed Muhammad, Janas, Anastasia, Kazerooni, Anahita Fathi, LaBella, Dominic, Moawad, Ahmed W, Farahani, Keyvan, Eddy, James, Bergquist, Timothy, Chung, Verena, Shinohara, Russell Takeshi, Dako, Farouk, Wiggins, Walter, Reitman, Zachary, Wang, Chunhao, Liu, Xinyang, Jiang, Zhifan, Familiar, Ariana, Conte, Gian-Marco, Johanson, Elaine, Meier, Zeke, Davatzikos, Christos, Freymann, John, Kirby, Justin, Bilello, Michel, Fathallah-Shaykh, Hassan M, Wiest, Roland, Kirschke, Jan, Colen, Rivka R, Kotrotsou, Aikaterini, Lamontagne, Pamela, Marcus, Daniel, Milchenko, Mikhail, Nazeri, Arash, Weber, Marc-André, Mahajan, Abhishek, Mohan, Suyash, Mongan, John, Hess, Christopher, Cha, Soonmee, Villanueva-Meyer, Javier, Colak, Errol, Crivellaro, Priscila, Jakab, Andras, Albrecht, Jake, Anazodo, Udunna, Aboian, Mariam, Iglesias, Juan Eugenio, Van Leemput, Koen, Bakas, Spyridon, Rueckert, Daniel, Wiestler, Benedikt, Ezhov, Ivan, Piraud, Marie, and Menze, Bjoern
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning
- Abstract
A myriad of algorithms for the automatic analysis of brain MR images is available to support clinicians in their decision-making. For brain tumor patients, the image acquisition time series typically starts with a scan that is already pathological. This poses problems, as many algorithms are designed to analyze healthy brains and provide no guarantees for images featuring lesions. Examples include but are not limited to algorithms for brain anatomy parcellation, tissue segmentation, and brain extraction. To solve this dilemma, we introduce the BraTS 2023 inpainting challenge. Here, the participants' task is to explore inpainting techniques to synthesize healthy brain scans from lesioned ones. The following manuscript contains the task formulation, dataset, and submission procedure. Later it will be updated to summarize the findings of the challenge. The challenge is organized as part of the BraTS 2023 challenge hosted at the MICCAI 2023 conference in Vancouver, Canada., Comment: 5 pages, 1 figure
- Published
- 2023
13. The ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge 2023: Intracranial Meningioma
- Author
LaBella, Dominic, Adewole, Maruf, Alonso-Basanta, Michelle, Altes, Talissa, Anwar, Syed Muhammad, Baid, Ujjwal, Bergquist, Timothy, Bhalerao, Radhika, Chen, Sully, Chung, Verena, Conte, Gian-Marco, Dako, Farouk, Eddy, James, Ezhov, Ivan, Godfrey, Devon, Hilal, Fathi, Familiar, Ariana, Farahani, Keyvan, Iglesias, Juan Eugenio, Jiang, Zhifan, Johanson, Elaine, Kazerooni, Anahita Fathi, Kent, Collin, Kirkpatrick, John, Kofler, Florian, Van Leemput, Koen, Li, Hongwei Bran, Liu, Xinyang, Mahtabfar, Aria, McBurney-Lin, Shan, McLean, Ryan, Meier, Zeke, Moawad, Ahmed W, Mongan, John, Nedelec, Pierre, Pajot, Maxence, Piraud, Marie, Rashid, Arif, Reitman, Zachary, Shinohara, Russell Takeshi, Velichko, Yury, Wang, Chunhao, Warman, Pranav, Wiggins, Walter, Aboian, Mariam, Albrecht, Jake, Anazodo, Udunna, Bakas, Spyridon, Flanders, Adam, Janas, Anastasia, Khanna, Goldey, Linguraru, Marius George, Menze, Bjoern, Nada, Ayman, Rauschecker, Andreas M, Rudie, Jeff, Tahon, Nourel Hoda, Villanueva-Meyer, Javier, Wiestler, Benedikt, and Calabrese, Evan
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence, Computer Science - Machine Learning, Statistics - Machine Learning
- Abstract
Meningiomas are the most common primary intracranial tumor in adults and can be associated with significant morbidity and mortality. Radiologists, neurosurgeons, neuro-oncologists, and radiation oncologists rely on multiparametric MRI (mpMRI) for diagnosis, treatment planning, and longitudinal treatment monitoring; yet automated, objective, and quantitative tools for non-invasive assessment of meningiomas on mpMRI are lacking. The BraTS meningioma 2023 challenge will provide a community standard and benchmark for state-of-the-art automated intracranial meningioma segmentation models based on the largest expert annotated multilabel meningioma mpMRI dataset to date. Challenge competitors will develop automated segmentation models to predict three distinct meningioma sub-regions on MRI including enhancing tumor, non-enhancing tumor core, and surrounding nonenhancing T2/FLAIR hyperintensity. Models will be evaluated on separate validation and held-out test datasets using standardized metrics utilized across the BraTS 2023 series of challenges including the Dice similarity coefficient and Hausdorff distance. The models developed during the course of this challenge will aid in incorporation of automated meningioma MRI segmentation into clinical practice, which will ultimately improve care of patients with meningioma.
- Published
- 2023
14. The University of California San Francisco Brain Metastases Stereotactic Radiosurgery (UCSF-BMSR) MRI Dataset
- Author
Rudie, Jeffrey D., Saluja, Rachit, Weiss, David A., Nedelec, Pierre, Calabrese, Evan, Colby, John B., Laguna, Benjamin, Mongan, John, Braunstein, Steve, Hess, Christopher P., Rauschecker, Andreas M., Sugrue, Leo P., and Villanueva-Meyer, Javier E.
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Computer Vision and Pattern Recognition
- Abstract
The University of California San Francisco Brain Metastases Stereotactic Radiosurgery (UCSF-BMSR) dataset is a public, clinical, multimodal brain MRI dataset consisting of 560 brain MRIs from 412 patients with expert annotations of 5136 brain metastases. Data consists of registered and skull stripped T1 post-contrast, T1 pre-contrast, FLAIR and subtraction (T1 pre-contrast - T1 post-contrast) images and voxelwise segmentations of enhancing brain metastases in NIfTI format. The dataset also includes patient demographics, surgical status and primary cancer types. The UCSF-BMSR has been made publicly available in the hopes that researchers will use these data to push the boundaries of AI applications for brain metastases. The dataset is freely available for non-commercial use at https://imagingdatasets.ucsf.edu/dataset/1, Comment: 15 pages, 2 tables, 2 figures
- Published
- 2023
15. Deep Learning to Simulate Contrast-enhanced Breast MRI of Invasive Breast Cancer.
- Author
Chung, Maggie, Calabrese, Evan, Mongan, John, Hayward, Jessica, Sieberg, Ryan, Joe, Bonnie, Hylton, Nola, Lee, Amie, Kelil, Tatiana, and Ray, Kimberly
- Subjects
Female, Humans, Middle Aged, Breast Neoplasms, Deep Learning, Retrospective Studies, Breast, Magnetic Resonance Imaging, Contrast Media
- Abstract
Background There is increasing interest in noncontrast breast MRI alternatives for tumor visualization to increase the accessibility of breast MRI. Purpose To evaluate the feasibility and accuracy of generating simulated contrast-enhanced T1-weighted breast MRI scans from precontrast MRI sequences in biopsy-proven invasive breast cancer with use of deep learning. Materials and Methods Women with invasive breast cancer and a contrast-enhanced breast MRI examination that was performed for initial evaluation of the extent of disease between January 2015 and December 2019 at a single academic institution were retrospectively identified. A three-dimensional, fully convolutional deep neural network simulated contrast-enhanced T1-weighted breast MRI scans from five precontrast sequences (T1-weighted non-fat-suppressed [FS], T1-weighted FS, T2-weighted FS, apparent diffusion coefficient, and diffusion-weighted imaging). For qualitative assessment, four breast radiologists (with 3-15 years of experience) blinded to whether the method of contrast was real or simulated assessed image quality (excellent, acceptable, good, poor, or unacceptable), presence of tumor enhancement, and maximum index mass size by using 22 pairs of real and simulated contrast-enhanced MRI scans. Quantitative comparison was performed using whole-breast similarity and error metrics and Dice coefficient analysis of enhancing tumor overlap. Results Ninety-six MRI examinations in 96 women (mean age, 52 years ± 12 [SD]) were evaluated. The readers assessed all simulated MRI scans as having the appearance of a real MRI scan with tumor enhancement. Index mass sizes on real and simulated MRI scans demonstrated good to excellent agreement (intraclass correlation coefficient, 0.73-0.86; P < .001) without significant differences (mean differences, -0.8 to 0.8 mm; P = .36-.80). Almost all simulated MRI scans (84 of 88 [95%]) were considered of diagnostic quality (ratings of excellent, acceptable, or good). 
Quantitative analysis demonstrated strong similarity (structural similarity index, 0.88 ± 0.05), low voxel-wise error (symmetric mean absolute percent error, 3.26%), and Dice coefficient of enhancing tumor overlap of 0.75 ± 0.25. Conclusion It is feasible to generate simulated contrast-enhanced breast MRI scans with use of deep learning. Simulated and real contrast-enhanced MRI scans demonstrated comparable tumor sizes, areas of tumor enhancement, and image quality without significant qualitative or quantitative differences. © RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Slanetz in this issue. An earlier incorrect version appeared online. This article was corrected on January 17, 2023.
- Published
- 2023
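The quantitative metrics named in the abstract above (Dice coefficient of enhancing-tumor overlap and symmetric mean absolute percent error) can be sketched in plain Python. This is an illustrative toy version operating on flat lists rather than 3D MRI volumes; the abstract does not state which SMAPE variant the paper used, so the common half-sum denominator is assumed here.

```python
# Toy versions of the overlap and voxel-wise error metrics reported in
# the breast MRI simulation abstract. Flat 1D lists stand in for
# flattened image volumes.

def dice_coefficient(mask_a, mask_b):
    """Dice = 2 * |A intersect B| / (|A| + |B|) for binary masks."""
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0

def smape(real, simulated):
    """Symmetric mean absolute percent error, in percent.

    Assumed form: mean(|r - s| / ((|r| + |s|) / 2)) * 100.
    """
    terms = []
    for r, s in zip(real, simulated):
        denom = (abs(r) + abs(s)) / 2.0
        terms.append(abs(r - s) / denom if denom else 0.0)
    return 100.0 * sum(terms) / len(terms)

# Identical masks give Dice = 1.0; disjoint masks give 0.0.
assert dice_coefficient([1, 1, 0, 0], [1, 1, 0, 0]) == 1.0
assert dice_coefficient([1, 0], [0, 1]) == 0.0
```

A real evaluation would compute these over segmented 3D arrays (e.g. with NumPy), but the arithmetic is the same.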
16. Behavioral nudges in the electronic health record to reduce waste and misuse: 3 interventions.
- Author
-
Grouse, Carrie, Waung, Maggie, Holmgren, A, Mongan, John, Neinstein, Aaron, Josephson, S, and Khanna, Raman
- Subjects
choice architecture ,computerized tomography ,decision support ,nudge ,overuse ,waste ,Electronic Health Records ,Workflow - Abstract
Electronic health records (EHRs) offer decision support in the form of alerts, which are often though not always interruptive. These alerts, though sometimes effective, can come at the cost of high cognitive burden and workflow disruption. Less well studied is the design of the EHR itself—the ordering provider's "choice architecture"—which "nudges" users toward alternatives, sometimes unintentionally toward waste and misuse, but ideally intentionally toward better practice. We studied 3 different workflows at our institution where the existing choice architecture was potentially nudging providers toward erroneous decisions, waste, and misuse in the form of inappropriate laboratory work, incorrectly specified computerized tomographic imaging, and excessive benzodiazepine dosing for imaging-related sedation. We changed the architecture to nudge providers toward better practice and found that the 3 nudges were successful to varying degrees in reducing erroneous decision-making and mitigating waste and misuse.
- Published
- 2023
17. Automated detection of IVC filters on radiographs with deep convolutional neural networks.
- Author
-
Mongan, John, Kohli, Marc D, Houshyar, Roozbeh, Chang, Peter D, Glavis-Bloom, Justin, and Taylor, Andrew G
- Subjects
Humans ,Radiography ,Retrospective Studies ,Vena Cava Filters ,Algorithms ,Neural Networks ,Computer ,Artificial intelligence ,Deep learning ,Inferior vena cava filter ,Screening - Abstract
Purpose To create an algorithm able to accurately detect IVC filters on radiographs without human assistance, capable of being used to screen radiographs to identify patients needing IVC filter retrieval. Methods A primary dataset of 5225 images, 30% of which included IVC filters, was assembled and annotated. 85% of the data was used to train a Cascade R-CNN (Region Based Convolutional Neural Network) object detection network incorporating a pre-trained ResNet-50 backbone. The remaining 15% of the data, independently annotated by three radiologists, was used as a test set to assess performance. The algorithm was also assessed on an independently constructed 1424-image dataset, drawn from a different institution than the primary dataset. Results On the primary test set, the algorithm achieved a sensitivity of 96.2% (95% CI 92.7-98.1%) and a specificity of 98.9% (95% CI 97.4-99.5%). Results were similar on the external test set: sensitivity 97.9% (95% CI 96.2-98.9%), specificity 99.6% (95% CI 98.9-99.9%). Conclusion Fully automated detection of IVC filters on radiographs with the high sensitivity and excellent specificity required for an automated screening system can be achieved using object detection neural networks. Further work will develop a system for identifying patients for IVC filter retrieval based on this algorithm.
- Published
- 2023
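The abstract above reports sensitivity and specificity with 95% confidence intervals. As a minimal sketch, the point estimates and a Wilson score interval can be computed in plain Python; note that the abstract does not state which interval method the paper used, so Wilson is an assumption here.

```python
import math

def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a proportion (z = 1.96 for ~95% CI).

    A common choice for sensitivity/specificity intervals; assumed
    here, not confirmed by the abstract.
    """
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# Hypothetical confusion-matrix counts for illustration only.
sensitivity, specificity = sens_spec(tp=96, fn=4, tn=99, fp=1)
```

The same Wilson interval applied to the true test-set counts (not given in the abstract) would reproduce intervals of the kind reported.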
18. The University of California San Francisco Preoperative Diffuse Glioma MRI Dataset.
- Author
-
Calabrese, Evan, Villanueva-Meyer, Javier E, Rudie, Jeffrey D, Rauschecker, Andreas M, Baid, Ujjwal, Bakas, Spyridon, Cha, Soonmee, Mongan, John T, and Hess, Christopher P
- Subjects
Brain/Brain Stem ,CNS ,Informatics ,MR Diffusion Tensor Imaging ,MR Imaging ,MR Perfusion ,Neuro-Oncology ,Oncology ,Radiogenomics ,Radiology-Pathology Integration ,Rare Diseases ,Brain Cancer ,Biomedical Imaging ,Cancer ,Neurosciences ,Brain Disorders ,Neurological ,Good Health and Well Being - Abstract
Supplemental material is available for this article. Keywords: Informatics, MR Diffusion Tensor Imaging, MR Perfusion, MR Imaging, Neuro-Oncology, CNS, Brain/Brain Stem, Oncology, Radiogenomics, Radiology-Pathology Integration © RSNA, 2022.
- Published
- 2022
19. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement From the ACR, CAR, ESR, RANZCR & RSNA
- Author
-
Brady, Adrian P., Allen, Bibb, Chong, Jaron, Kotter, Elmar, Kottler, Nina, Mongan, John, Oakden-Rayner, Lauren, Pinto dos Santos, Daniel, Tang, An, Wald, Christoph, and Slavotinek, John
- Published
- 2024
- Full Text
- View/download PDF
20. Proceedings From the 2022 ACR-RSNA Workshop on Safety, Effectiveness, Reliability, and Transparency in AI
- Author
-
Larson, David B., Doo, Florence X., Allen, Bibb, Jr, Mongan, John, Flanders, Adam E., and Wald, Christoph
- Published
- 2024
- Full Text
- View/download PDF
21. Machine Learning Tools for Image-Based Glioma Grading and the Quality of Their Reporting: Challenges and Opportunities
- Author
-
Merkaj, Sara, Bahar, Ryan C, Zeevi, Tal, Lin, MingDe, Ikuta, Ichiro, Bousabarah, Khaled, Petersen, Gabriel I Cassinelli, Staib, Lawrence, Payabvash, Seyedmehdi, Mongan, John T, Cha, Soonmee, and Aboian, Mariam S
- Subjects
Biomedical and Clinical Sciences ,Clinical Sciences ,Brain Disorders ,Networking and Information Technology R&D (NITRD) ,Brain Cancer ,Neurosciences ,Bioengineering ,Rare Diseases ,Cancer ,artificial intelligence ,glioma ,machine learning ,deep learning ,reporting quality ,Oncology and Carcinogenesis ,Oncology and carcinogenesis - Abstract
Technological innovation has enabled the development of machine learning (ML) tools that aim to improve the practice of radiologists. In the last decade, ML applications to neuro-oncology have expanded significantly, with the pre-operative prediction of glioma grade using medical imaging as a specific area of interest. We introduce the subject of ML models for glioma grade prediction by remarking upon the models reported in the literature as well as by describing their characteristic developmental workflow and widely used classifier algorithms. The challenges facing these models—including data sources, external validation, and glioma grade classification methods—are highlighted. We also discuss the quality of how these models are reported, explore the present and future of reporting guidelines and risk of bias tools, and provide suggestions for the reporting of prospective works. Finally, this review offers insights into next steps that the field of ML glioma grade prediction can take to facilitate clinical implementation.
- Published
- 2022
22. RSNA-MICCAI Panel Discussion: Machine Learning for Radiology from Challenges to Clinical Applications.
- Author
-
Mongan, John, Kalpathy-Cramer, Jayashree, Flanders, Adam, and George Linguraru, Marius
- Subjects
Artificial Neural Network Algorithms ,Back-Propagation ,Machine Learning Algorithms - Abstract
On October 5, 2020, the Medical Image Computing and Computer Assisted Intervention Society (MICCAI) 2020 conference hosted a virtual panel discussion with members of the Machine Learning Steering Subcommittee of the Radiological Society of North America. The MICCAI Society brings together scientists, engineers, physicians, educators, and students from around the world. Both societies share a vision to develop radiologic and medical imaging techniques through advanced quantitative imaging biomarkers and artificial intelligence. The panel elaborated on how collaborations between radiologists and machine learning scientists facilitate the creation and clinical success of imaging technology for radiology. This report presents structured highlights of the moderated dialogue at the panel. Keywords: Back-Propagation, Artificial Neural Network Algorithms, Machine Learning Algorithms © RSNA, 2021.
- Published
- 2021
23. Abstract 13588: A Generalizable Deep Learning System for Cardiac MRI
- Author
-
Shad, Rohan, Zakka, Cyril R, Kaur, Dhamanpreet, MONGAN, JOHN, Kallianos, Kimberly G, Filice, Ross, Khandwala, Nishith, Eng, David, Langlotz, Curtis, and Hiesinger, William
- Published
- 2023
- Full Text
- View/download PDF
24. Updating the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) for reporting AI research
- Author
-
Tejani, Ali S., Klontzas, Michail E., Gatti, Anthony A., Mongan, John, Moy, Linda, Park, Seong Ho, and Kahn, Jr, Charles E.
- Published
- 2023
- Full Text
- View/download PDF
25. The University of California San Francisco Adult Longitudinal Post-Treatment Diffuse Glioma (UCSF-ALPTDG) MRI Dataset
- Author
-
Fields, Brandon K. K., primary, Calabrese, Evan, additional, Mongan, John, additional, Cha, Soonmee, additional, Hess, Christopher P., additional, Sugrue, Leo P., additional, Chang, Susan M., additional, Luks, Tracy L., additional, Villanueva-Meyer, Javier E., additional, Rauschecker, Andreas M., additional, and Rudie, Jeffrey D., additional
- Published
- 2024
- Full Text
- View/download PDF
26. The Global Reading Room: Purchasing a Radiology Artificial Intelligence System
- Author
-
Huisman, Merel, primary, Kitamura, Felipe Campos, additional, Mongan, John, additional, and Yanagawa, Masahiro, additional
- Published
- 2024
- Full Text
- View/download PDF
27. PRISMA AI reporting guidelines for systematic reviews and meta-analyses on AI in healthcare
- Author
-
Cacciamani, Giovanni E., Chu, Timothy N., Sanford, Daniel I., Abreu, Andre, Duddalwar, Vinay, Oberai, Assad, Kuo, C.-C. Jay, Liu, Xiaoxuan, Denniston, Alastair K., Vasey, Baptiste, McCulloch, Peter, Wolff, Robert F., Mallett, Sue, Mongan, John, Kahn, Jr, Charles E., Sounderajah, Viknesh, Darzi, Ara, Dahm, Philipp, Moons, Karel G. M., Topol, Eric, Collins, Gary S., Moher, David, Gill, Inderbir S., and Hung, Andrew J.
- Published
- 2023
- Full Text
- View/download PDF
28. Developing, purchasing, implementing and monitoring AI tools in radiology: Practical considerations. A multi‐society statement from the ACR, CAR, ESR, RANZCR & RSNA
- Author
-
Brady, Adrian P, primary, Allen, Bibb, additional, Chong, Jaron, additional, Kotter, Elmar, additional, Kottler, Nina, additional, Mongan, John, additional, Oakden‐Rayner, Lauren, additional, Pinto dos Santos, Daniel, additional, Tang, An, additional, Wald, Christoph, additional, and Slavotinek, John, additional
- Published
- 2024
- Full Text
- View/download PDF
29. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement From the ACR, CAR, ESR, RANZCR & RSNA
- Author
-
Brady, Adrian P., primary, Allen, Bibb, additional, Chong, Jaron, additional, Kotter, Elmar, additional, Kottler, Nina, additional, Mongan, John, additional, Oakden-Rayner, Lauren, additional, dos Santos, Daniel Pinto, additional, Tang, An, additional, Wald, Christoph, additional, and Slavotinek, John, additional
- Published
- 2024
- Full Text
- View/download PDF
30. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement from the ACR, CAR, ESR, RANZCR and RSNA
- Author
-
Brady, Adrian P., primary, Allen, Bibb, additional, Chong, Jaron, additional, Kotter, Elmar, additional, Kottler, Nina, additional, Mongan, John, additional, Oakden-Rayner, Lauren, additional, dos Santos, Daniel Pinto, additional, Tang, An, additional, Wald, Christoph, additional, and Slavotinek, John, additional
- Published
- 2024
- Full Text
- View/download PDF
31. Automated coronary calcium scoring using deep learning with multicenter external validation
- Author
-
Eng, David, Chute, Christopher, Khandwala, Nishith, Rajpurkar, Pranav, Long, Jin, Shleifer, Sam, Khalaf, Mohamed H., Sandhu, Alexander T., Rodriguez, Fatima, Maron, David J., Seyyedi, Saeed, Marin, Daniele, Golub, Ilana, Budoff, Matthew, Kitamura, Felipe, Takahashi, Marcelo Straus, Filice, Ross W., Shah, Rajesh, Mongan, John, Kallianos, Kimberly, Langlotz, Curtis P., Lungren, Matthew P., Ng, Andrew Y., and Patel, Bhavik N.
- Published
- 2021
- Full Text
- View/download PDF
32. On the Centrality of Data: Data Resources in Radiologic Artificial Intelligence
- Author
-
Mongan, John, primary and Halabi, Safwan S., additional
- Published
- 2023
- Full Text
- View/download PDF
33. TotalSegmentator: A Gift to the Biomedical Imaging Community
- Author
-
Sebro, Ronnie, primary and Mongan, John, additional
- Published
- 2023
- Full Text
- View/download PDF
34. Radiologists staunchly support patient safety and autonomy, in opposition to the SCOTUS decision to overturn Roe v Wade
- Author
-
Karandikar, Aditya, primary, Solberg, Agnieszka, additional, Fung, Alice, additional, Lee, Amie Y., additional, Farooq, Amina, additional, Taylor, Amy C., additional, Oliveira, Amy, additional, Narayan, Anand, additional, Senter, Andi, additional, Majid, Aneesa, additional, Tong, Angela, additional, McGrath, Anika L., additional, Malik, Anjali, additional, Brown, Ann Leylek, additional, Roberts, Anne, additional, Fleischer, Arthur, additional, Vettiyil, Beth, additional, Zigmund, Beth, additional, Park, Brian, additional, Curran, Bruce, additional, Henry, Cameron, additional, Jaimes, Camilo, additional, Connolly, Cara, additional, Robson, Caroline, additional, Meltzer, Carolyn C., additional, Phillips, Catherine H., additional, Dove, Christine, additional, Glastonbury, Christine, additional, Pomeranz, Christy, additional, Kirsch, Claudia F.E., additional, Burgan, Constantine M., additional, Scher, Courtney, additional, Tomblinson, Courtney, additional, Fuss, Cristina, additional, Santillan, Cynthia, additional, Daye, Dania, additional, Brown, Daniel B., additional, Young, Daniel J., additional, Kopans, Daniel, additional, Vargas, Daniel, additional, Martin, Dann, additional, Thompson, David, additional, Jordan, David W., additional, Shatzkes, Deborah, additional, Sun, Derek, additional, Mastrodicasa, Domenico, additional, Smith, Elainea, additional, Korngold, Elena, additional, Dibble, Elizabeth H., additional, Arleo, Elizabeth K., additional, Hecht, Elizabeth M., additional, Morris, Elizabeth, additional, Maltin, Elizabeth P., additional, Cooke, Erin A., additional, Schwartz, Erin Simon, additional, Lehrman, Evan, additional, Sodagari, Faezeh, additional, Shah, Faisal, additional, Doo, Florence X., additional, Rigiroli, Francesca, additional, Vilanilam, George K., additional, Landinez, Gina, additional, Kim, Grace Gwe-Ya, additional, Rahbar, Habib, additional, Choi, Hailey, additional, Bandesha, Harmanpreet, additional, Ojeda-Fournier, Haydee, additional, Ikuta, 
Ichiro, additional, Dragojevic, Irena, additional, Schroeder, Jamie Lee Twist, additional, Ivanidze, Jana, additional, Katzen, Janine T., additional, Chiang, Jason, additional, Nguyen, Jeffers, additional, Robinson, Jeffrey D., additional, Broder, Jennifer C., additional, Kemp, Jennifer, additional, Weaver, Jennifer S., additional, Conyers, Jesse M., additional, Robbins, Jessica B., additional, Leschied, Jessica R., additional, Wen, Jessica, additional, Park, Jocelyn, additional, Mongan, John, additional, Perchik, Jordan, additional, Barbero, José Pablo Martínez, additional, Jacob, Jubin, additional, Ledbetter, Karyn, additional, Macura, Katarzyna J., additional, Maturen, Katherine E., additional, Frederick-Dyer, Katherine, additional, Dodelzon, Katia, additional, Cort, Kayla, additional, Kisling, Kelly, additional, Babagbemi, Kemi, additional, McGill, Kevin C., additional, Chang, Kevin J., additional, Feigin, Kimberly, additional, Winsor, Kimberly S., additional, Seifert, Kimberly, additional, Patel, Kirang, additional, Porter, Kristin K., additional, Foley, Kristin M., additional, Patel-Lippmann, Krupa, additional, McIntosh, Lacey J., additional, Padilla, Laura, additional, Groner, Lauren, additional, Harry, Lauren M., additional, Ladd, Lauren M., additional, Wang, Lisa, additional, Spalluto, Lucy B., additional, Mahesh, M., additional, Marx, M. 
Victoria, additional, Sugi, Mark D., additional, Sammer, Marla B.K., additional, Sun, Maryellen, additional, Barkovich, Matthew J., additional, Miller, Matthew J., additional, Vella, Maya, additional, Davis, Melissa A., additional, Englander, Meridith J., additional, Durst, Michael, additional, Oumano, Michael, additional, Wood, Monica J., additional, McBee, Morgan P., additional, Fischbein, Nancy J., additional, Kovalchuk, Nataliya, additional, Lall, Neil, additional, Eclov, Neville, additional, Madhuripan, Nikhil, additional, Ariaratnam, Nikki S., additional, Vincoff, Nina S., additional, Kothary, Nishita, additional, Yahyavi-Firouz-Abadi, Noushin, additional, Brook, Olga R., additional, Glenn, Orit A., additional, Woodard, Pamela K., additional, Mazaheri, Parisa, additional, Rhyner, Patricia, additional, Eby, Peter R., additional, Raghu, Preethi, additional, Gerson, Rachel F., additional, Patel, Rina, additional, Gutierrez, Robert L., additional, Gebhard, Robyn, additional, Andreotti, Rochelle F., additional, Masum, Rukya, additional, Woods, Ryan, additional, Mandava, Sabala, additional, Harrington, Samantha G., additional, Parikh, Samir, additional, Chu, Sammy, additional, Arora, Sandeep S., additional, Meyers, Sandra M., additional, Prabhu, Sanjay, additional, Shams, Sara, additional, Pittman, Sarah, additional, Patel, Sejal N., additional, Payne, Shelby, additional, Hetts, Steven W., additional, Hijaz, Tarek A., additional, Chapman, Teresa, additional, Loehfelm, Thomas W., additional, Juang, Titania, additional, Clark, Toshimasa J., additional, Potigailo, Valeria, additional, Shah, Vinil, additional, Planz, Virginia, additional, Kalia, Vivek, additional, DeMartini, Wendy, additional, Dillon, William P., additional, Gupta, Yasha, additional, Koethe, Yilun, additional, Hartley-Blossom, Zachary, additional, Wang, Zhen Jane, additional, McGinty, Geraldine, additional, Haramati, Adina, additional, Allen, Laveil M., additional, and Germaine, Pauline, additional
- Published
- 2023
- Full Text
- View/download PDF
35. Behavioral “nudges” in the electronic health record to reduce waste and misuse: 3 interventions
- Author
-
Grouse, Carrie K, primary, Waung, Maggie W, additional, Holmgren, A Jay, additional, Mongan, John, additional, Neinstein, Aaron, additional, Josephson, S Andrew, additional, and Khanna, Raman R, additional
- Published
- 2022
- Full Text
- View/download PDF
36. Deep Learning to Simulate Contrast-enhanced Breast MRI of Invasive Breast Cancer
- Author
-
Chung, Maggie, primary, Calabrese, Evan, additional, Mongan, John, additional, Ray, Kimberly M., additional, Hayward, Jessica H., additional, Kelil, Tatiana, additional, Sieberg, Ryan, additional, Hylton, Nola, additional, Joe, Bonnie N., additional, and Lee, Amie Y., additional
- Published
- 2022
- Full Text
- View/download PDF
37. Automated detection of IVC filters on radiographs with deep convolutional neural networks
- Author
-
Mongan, John, primary, Kohli, Marc D., additional, Houshyar, Roozbeh, additional, Chang, Peter D., additional, Glavis-Bloom, Justin, additional, and Taylor, Andrew G., additional
- Published
- 2022
- Full Text
- View/download PDF
38. Effect of an ultrasound-first clinical decision tool in emergency department patients with suspected nephrolithiasis: A randomized trial
- Author
-
Wang, Ralph C., primary, Fahimi, Jahan, additional, Dillon, David, additional, Shyy, William, additional, Mongan, John, additional, McCulloch, Charles, additional, and Smith-Bindman, Rebecca, additional
- Published
- 2022
- Full Text
- View/download PDF
39. Behavioral "nudges" in the electronic health record to reduce waste and misuse: 3 interventions.
- Author
-
Grouse, Carrie K, Waung, Maggie W, Holmgren, A Jay, Mongan, John, Neinstein, Aaron, Josephson, S Andrew, and Khanna, Raman R
- Abstract
Electronic health records (EHRs) offer decision support in the form of alerts, which are often though not always interruptive. These alerts, though sometimes effective, can come at the cost of high cognitive burden and workflow disruption. Less well studied is the design of the EHR itself—the ordering provider's "choice architecture"—which "nudges" users toward alternatives, sometimes unintentionally toward waste and misuse, but ideally intentionally toward better practice. We studied 3 different workflows at our institution where the existing choice architecture was potentially nudging providers toward erroneous decisions, waste, and misuse in the form of inappropriate laboratory work, incorrectly specified computerized tomographic imaging, and excessive benzodiazepine dosing for imaging-related sedation. We changed the architecture to nudge providers toward better practice and found that the 3 nudges were successful to varying degrees in reducing erroneous decision-making and mitigating waste and misuse. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
40. Imaging AI in Practice: Introducing the Special Issue
- Author
-
Mongan, John, primary, Vagal, Achala, additional, and Wu, Carol C., additional
- Published
- 2022
- Full Text
- View/download PDF
41. Gastrointestinal Stromal Tumors: Radiomics may Increase the Role of Imaging in Malignant Risk Assessment
- Author
-
Webb, Emily M., primary and Mongan, John, additional
- Published
- 2022
- Full Text
- View/download PDF
42. The 2021 SIIM-FISABIO-RSNA Machine Learning COVID-19 Challenge: Annotation and Standard Exam Classification of COVID-19 Chest Radiographs.
- Author
-
Lakhani, Paras, primary, Mongan, John, additional, Singhal, Chinmay, additional, Zhou, Quan, additional, Andriole, Katherine P., additional, Auffermann, William F., additional, Prasanna, Prasanth, additional, Pham, Tessie, additional, Peterson, Michael, additional, Bergquist, Peter J., additional, Cook, Tessa S., additional, Ferraciolli, Suely Fazio, additional, de Antonio Corradi, Gustavo César, additional, Takahashi, Marcelo, additional, Workman, Spencer S, additional, Parekh, Maansi, additional, Kamel, Sarah, additional, Galant, Joaquin Herrero, additional, Mas-Sanchez, Alba, additional, Benítez, Emi C., additional, Sánchez-Valverde, Mariola, additional, Jaques, Lara, additional, Panadero, María, additional, Vidal, Marta, additional, Culiáñez-Casas, María, additional, Angulo-Gonzalez, Diego M., additional, Langer, Steve G., additional, de la Iglesia Vaya, Maria, additional, and Shih, George, additional
- Published
- 2021
- Full Text
- View/download PDF
43. On CSO’s 10-Year Anniversary: Stabilization Operations in Perspective.
- Author
-
FAUCHER, ROBERT J. and MONGAN, JOHN H.
- Subjects
- *
CONFLICT management , *ECONOMIC stabilization , *ORGANIZATIONAL resilience - Abstract
The article discusses the U.S. Department of State's Bureau of Conflict and Stabilization Operations (CSO). Topics include how the bureau's experience and expertise have provided leadership on stabilization and resilience policy. Joseph Biden and Richard Lugar proposed the Reconstruction and Stabilization Civilian Management Act to provide for the continued development of an effective expert civilian response capability for reconstruction and stabilization operations.
- Published
- 2021
44. The Global Reading Room: Purchasing a Radiology Artificial Intelligence System.
- Author
-
Huisman M, Kitamura FC, Mongan J, and Yanagawa M
- Published
- 2024
- Full Text
- View/download PDF
45. Developing, purchasing, implementing and monitoring AI tools in radiology: practical considerations. A multi-society statement from the ACR, CAR, ESR, RANZCR & RSNA.
- Author
-
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, Dos Santos DP, Tang A, Wald C, and Slavotinek J
- Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools. Key points: • The incorporation of artificial intelligence (AI) in radiological practice demands increased monitoring of its utility and safety. • Cooperation between developers, clinicians, and regulators will allow all involved to address ethical issues and monitor AI performance. • AI can fulfil its promise to advance patient well-being if all steps from development to integration in healthcare are rigorously evaluated. (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
46. The Brain Tumor Segmentation (BraTS) Challenge 2023: Brain MR Image Synthesis for Tumor Segmentation (BraSyn) .
- Author
-
Li HB, Conte GM, Anwar SM, Kofler F, Ezhov I, van Leemput K, Piraud M, Diaz M, Cole B, Calabrese E, Rudie J, Meissen F, Adewole M, Janas A, Kazerooni AF, LaBella D, Moawad AW, Farahani K, Eddy J, Bergquist T, Chung V, Shinohara RT, Dako F, Wiggins W, Reitman Z, Wang C, Liu X, Jiang Z, Familiar A, Johanson E, Meier Z, Davatzikos C, Freymann J, Kirby J, Bilello M, Fathallah-Shaykh HM, Wiest R, Kirschke J, Colen RR, Kotrotsou A, Lamontagne P, Marcus D, Milchenko M, Nazeri A, Weber MA, Mahajan A, Mohan S, Mongan J, Hess C, Cha S, Villanueva-Meyer J, Colak E, Crivellaro P, Jakab A, Albrecht J, Anazodo U, Aboian M, Yu T, Chung V, Bergquist T, Eddy J, Albrecht J, Baid U, Bakas S, Linguraru MG, Menze B, Iglesias JE, and Wiestler B
- Abstract
Automated brain tumor segmentation methods have become well-established and reached performance levels offering clear clinical utility. These methods typically rely on four input magnetic resonance imaging (MRI) modalities: T1-weighted images with and without contrast enhancement, T2-weighted images, and FLAIR images. However, some sequences are often missing in clinical practice due to time constraints or image artifacts, such as patient motion. Consequently, the ability to substitute missing modalities and gain segmentation performance is highly desirable and necessary for the broader adoption of these algorithms in the clinical routine. In this work, we present the establishment of the Brain MR Image Synthesis Benchmark (BraSyn) in conjunction with the Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2023. The primary objective of this challenge is to evaluate image synthesis methods that can realistically generate missing MRI modalities when multiple available images are provided. The ultimate aim is to facilitate automated brain tumor segmentation pipelines. The image dataset used in the benchmark is diverse and multi-modal, created through collaboration with various hospitals and research institutions.
- Published
- 2023
47. The ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge 2023: Intracranial Meningioma.
- Author
-
LaBella D, Adewole M, Alonso-Basanta M, Altes T, Anwar SM, Baid U, Bergquist T, Bhalerao R, Chen S, Chung V, Conte GM, Dako F, Eddy J, Ezhov I, Godfrey D, Hilal F, Familiar A, Farahani K, Iglesias JE, Jiang Z, Johanson E, Kazerooni AF, Kent C, Kirkpatrick J, Kofler F, Leemput KV, Li HB, Liu X, Mahtabfar A, McBurney-Lin S, McLean R, Meier Z, Moawad AW, Mongan J, Nedelec P, Pajot M, Piraud M, Rashid A, Reitman Z, Shinohara RT, Velichko Y, Wang C, Warman P, Wiggins W, Aboian M, Albrecht J, Anazodo U, Bakas S, Flanders A, Janas A, Khanna G, Linguraru MG, Menze B, Nada A, Rauschecker AM, Rudie J, Tahon NH, Villanueva-Meyer J, Wiestler B, and Calabrese E
- Abstract
Meningiomas are the most common primary intracranial tumor in adults and can be associated with significant morbidity and mortality. Radiologists, neurosurgeons, neuro-oncologists, and radiation oncologists rely on multiparametric MRI (mpMRI) for diagnosis, treatment planning, and longitudinal treatment monitoring; yet automated, objective, and quantitative tools for non-invasive assessment of meningiomas on mpMRI are lacking. The BraTS meningioma 2023 challenge will provide a community standard and benchmark for state-of-the-art automated intracranial meningioma segmentation models based on the largest expert annotated multilabel meningioma mpMRI dataset to date. Challenge competitors will develop automated segmentation models to predict three distinct meningioma sub-regions on MRI including enhancing tumor, non-enhancing tumor core, and surrounding nonenhancing T2/FLAIR hyperintensity. Models will be evaluated on separate validation and held-out test datasets using standardized metrics utilized across the BraTS 2023 series of challenges including the Dice similarity coefficient and Hausdorff distance. The models developed during the course of this challenge will aid in incorporation of automated meningioma MRI segmentation into clinical practice, which will ultimately improve care of patients with meningioma.
- Published
- 2023
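The BraTS evaluation metrics named in the abstract above (Dice similarity coefficient and Hausdorff distance) are standard segmentation measures. As a minimal sketch of the second, the symmetric Hausdorff distance between two point sets can be written in plain Python; the challenge's actual evaluation operates on 3D label maps (and typically uses a 95th-percentile variant), which this toy version does not attempt.

```python
import math

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two finite point sets.

    The directed distance from A to B is the largest nearest-neighbor
    distance from any point of A to the set B; the symmetric distance
    is the maximum of the two directed distances.
    """
    def directed(src, dst):
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(points_a, points_b), directed(points_b, points_a))

# Boundary voxels of two segmentations would be passed as coordinate
# tuples; here tiny 2D sets illustrate the computation.
d = hausdorff([(0.0, 0.0), (0.0, 1.0)], [(0.0, 0.0)])
```

This brute-force version is O(|A|·|B|); practical tools (e.g. KD-tree-based implementations) are used for full-resolution masks.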