141 results for "U. Bagci"
Search Results
2. MULTI-INSTITUTIONAL LARGE-SCALE VALIDATION OF 8 METHODS FOR AUTOMATIC KNEE MRI SEGMENTATION FOR USE IN CLINICAL TRIALS
- Author
- E.B. Dam, A. Desai, C. Deniz, H. Rajamohan, R. Regatte, C. Iriondo, V. Pedoia, S. Majumdar, M. Perslev, C. Igel, A. Pai, S. Gaj, M. Yang, K. Nakamura, X. Li, H. Maqbool, I. Irmakci, S-E. Song, U. Bagci, B. Hargreaves, G. Gold, and A. Chaudhari
- Subjects
- Rheumatology, Biomedical Engineering, Orthopedics and Sports Medicine
- Published
- 2022
3. Validating polyp and instrument segmentation methods in colonoscopy through Medico 2020 and MedAI 2021 Challenges.
- Author
- Jha D, Sharma V, Banik D, Bhattacharya D, Roy K, Hicks SA, Tomar NK, Thambawita V, Krenzer A, Ji GP, Poudel S, Batchkala G, Alam S, Ahmed AMA, Trinh QH, Khan Z, Nguyen TP, Shrestha S, Nathan S, Gwak J, Jha RK, Zhang Z, Schlaefer A, Bhattacharjee D, Bhuyan MK, Das PK, Fan DP, Parasa S, Ali S, Riegler MA, Halvorsen P, de Lange T, and Bagci U
- Abstract
Automatic analysis of colonoscopy images has been an active field of research motivated by the importance of early detection of precancerous polyps. However, detecting polyps during a live examination can be challenging due to factors such as variation in skill and experience among endoscopists, lack of attentiveness, and fatigue, leading to a high polyp miss-rate. Therefore, there is a need for an automated system that can flag missed polyps during the examination and improve patient care. Deep learning has emerged as a promising solution to this challenge, as it can assist endoscopists in detecting and classifying overlooked polyps and abnormalities in real time, improving the accuracy of diagnosis and enhancing treatment. In addition to the algorithm's accuracy, transparency and interpretability are crucial for explaining the whys and hows of the algorithm's prediction. Further, conclusions based on incorrect decisions may be fatal, especially in medicine. Despite these pitfalls, most algorithms are developed on private data, as closed source, or with proprietary software, and lack reproducibility. Therefore, to promote the development of efficient and transparent methods, we organized the "Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image Segmentation (MedAI 2021)" competitions. The Medico 2020 challenge received submissions from 17 teams, and the MedAI 2021 challenge gathered submissions from another 17 distinct teams the following year. We present a comprehensive summary, analyze each contribution, highlight the strengths of the best-performing methods, and discuss the potential for clinical translation of such methods.
Our analysis revealed that the participants improved the Dice coefficient from 0.8607 in 2020 to 0.8993 in 2021, despite the addition of diverse and challenging frames (containing irregular, smaller, sessile, or flat polyps), which are frequently missed during a routine clinical examination. For the instrument segmentation task, the best team obtained a mean Intersection over Union (IoU) of 0.9364. For the transparency task, a multi-disciplinary team, including expert gastroenterologists, assessed each submission and evaluated the teams on open-source practices, failure-case analysis, ablation studies, and the usability and understandability of their evaluations, to gain a deeper understanding of the models' credibility for clinical deployment. The best team obtained a final transparency score of 21 out of 25. Through this comprehensive analysis of the challenge, we not only highlight the advancements in polyp and surgical instrument segmentation but also encourage subjective evaluation for building more transparent and understandable AI-based colonoscopy systems. Moreover, we discuss the need for multi-center and out-of-distribution testing to address the current limitations of the methods, reduce the cancer burden, and improve patient care., Competing Interests: Declaration of competing interest 1. Financial Interests: The authors have no financial interests, direct or indirect, in the research or its outcomes presented in the manuscript. 2. Non-Financial Interests: The authors have no non-financial interests that could be perceived as having influenced the research or its presentation in the manuscript. 3. Conflicts of Interest: The authors confirm that there are no known conflicts of interest that could potentially bias the results, analysis, or conclusions presented in the manuscript., (Copyright © 2024 The Author(s). Published by Elsevier B.V. All rights reserved.)
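The Dice coefficient and mean IoU reported for this challenge are standard overlap metrics for segmentation. As a minimal illustration (toy flattened masks, not the challenge data), they can be computed as:

```python
def dice_and_iou(pred, truth):
    """Compute the Dice coefficient and Intersection over Union (IoU)
    for two binary masks given as flat sequences of 0/1 values."""
    inter = sum(p * t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

# Toy example: a 1-D "mask" standing in for a flattened segmentation.
pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 0, 0]
dice, iou = dice_and_iou(pred, truth)
# Dice = 2*2/(3+2) = 0.8, IoU = 2/3
```

The two metrics are monotonically related (Dice = 2·IoU/(1+IoU)), which is why challenges often report either one.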
- Published
- 2024
4. Deep Learning-Based Detection and Classification of Bone Lesions on Staging Computed Tomography in Prostate Cancer: A Development Study.
- Author
- Belue MJ, Harmon SA, Yang D, An JY, Gaur S, Law YM, Turkbey E, Xu Z, Tetreault J, Lay NS, Yilmaz EC, Phelps TE, Simon B, Lindenberg L, Mena E, Pinto PA, Bagci U, Wood BJ, Citrin DE, Dahut WL, Madan RA, Gulley JL, Xu D, Choyke PL, and Turkbey B
- Subjects
- Humans, Male, Retrospective Studies, Aged, Middle Aged, Radiographic Image Interpretation, Computer-Assisted methods, Prostatic Neoplasms diagnostic imaging, Prostatic Neoplasms pathology, Deep Learning, Bone Neoplasms diagnostic imaging, Bone Neoplasms secondary, Tomography, X-Ray Computed methods, Neoplasm Staging
- Abstract
Rationale and Objectives: Efficiently detecting and characterizing metastatic bone lesions on staging CT is crucial for prostate cancer (PCa) care. However, it demands significant expert time and additional imaging such as PET/CT. We aimed to develop an ensemble of two automated deep learning AI models for 1) bone lesion detection and segmentation and 2) benign vs. metastatic lesion classification on staging CTs, and to compare its performance with that of radiologists., Materials and Methods: This retrospective study developed two AI models using 297 staging CT scans (81 metastatic) with 4601 benign and 1911 metastatic lesions in PCa patients. Metastases were validated by follow-up scans, bone biopsy, or PET/CT. The segmentation AI (3DAISeg) was developed using lesion contours delineated by a radiologist. 3DAISeg performance was evaluated with the Dice similarity coefficient, and classification AI (3DAIClass) performance on AI and radiologist contours was assessed with the F1-score and accuracy. Training/validation/testing data partitions of 70:15:15 were used. A multi-reader study was performed with two junior and two senior radiologists within a subset of the testing dataset (n = 36)., Results: In 45 unseen staging CT scans (12 metastatic PCa) with 669 benign and 364 metastatic lesions, 3DAISeg detected 73.1% of metastatic (266/364) and 72.4% of benign lesions (484/669). Each scan averaged 12 extra segmentations (range: 1-31). All metastatic scans had at least one detected metastatic lesion, achieving 100% patient-level detection. The mean Dice score for 3DAISeg was 0.53 (median: 0.59, range: 0-0.87). The F1-score for 3DAIClass was 94.8% (radiologist contours) and 92.4% (3DAISeg contours), with a median false-positive count of 0 (range: 0-3). Using radiologist contours, 3DAIClass had PPV and NPV rates comparable to junior and senior radiologists: PPV (semi-automated approach AI 40.0% vs. Juniors 32.0% vs. Seniors 50.0%) and NPV (AI 96.2% vs. Juniors 95.7% vs. Seniors 91.9%).
When using 3DAISeg, 3DAIClass mimicked junior radiologists in PPV (pure-AI 20.0% vs. Juniors 32.0% vs. Seniors 50.0%) but surpassed seniors in NPV (pure-AI 93.8% vs. Juniors 95.7% vs. Seniors 91.9%)., Conclusion: Our lesion detection and classification AI model performs on par with junior and senior radiologists in discerning benign and metastatic lesions on staging CTs obtained for PCa., Competing Interests: Declaration of Competing Interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Bradford J. Wood: Principal investigator on cooperative research and development agreement (CRADA) between National Institutes of Health (NIH) and Philips and CRADAs with industry partners unrelated to this work; travel support related to CRADAs; royalties from NIH related to Philips licensing agreement; patents planned, issued, or pending. Peter L. Choyke: Receives payment from royalties paid to the U.S. government for patents on MRI US fusion biopsy licensed to Philips Medical. Peter A. Pinto: Institutional CRADA with Philips; royalties from NIH related to Philips licensing agreement; NIH-related patents planned, issued, or pending (U.S. patent nos. 8 447 384 and 10 215 830). Baris Turkbey: CRADAs with NVIDIA and Philips; royalties from NIH; patents planned, issued, or pending in the field of artificial intelligence. Dong Yang, Ziyue Xu, Jesse Tetreault, Daguang Xu: employee of NVIDIA Corporation. If there are other authors, they declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Published by Elsevier Inc.)
- Published
- 2024
5. RETHINKING INTERMEDIATE LAYERS DESIGN IN KNOWLEDGE DISTILLATION FOR KIDNEY AND LIVER TUMOR SEGMENTATION.
- Author
- Gorade V, Mittal S, Jha D, and Bagci U
- Abstract
Knowledge distillation (KD) has demonstrated remarkable success across various domains, but its application to medical imaging tasks, such as kidney and liver tumor segmentation, has encountered challenges. Many existing KD methods are not specifically tailored for these tasks. Moreover, prevalent KD methods often lack a careful consideration of 'what' and 'from where' to distill knowledge from the teacher to the student. This oversight may lead to issues like the accumulation of training bias within shallower student layers, potentially compromising the effectiveness of KD. To address these challenges, we propose Hierarchical Layer-selective Feedback Distillation (HLFD). HLFD strategically distills knowledge from a combination of middle layers to earlier layers and transfers final layer knowledge to intermediate layers at both the feature and pixel levels. This design allows the model to learn higher-quality representations from earlier layers, resulting in a robust and compact student model. Extensive quantitative evaluations reveal that HLFD outperforms existing methods by a significant margin. For example, in the kidney segmentation task, HLFD surpasses the student model (without KD) by over 10%, significantly improving its focus on tumor-specific features. From a qualitative standpoint, the student model trained using HLFD excels at suppressing irrelevant information and can focus sharply on tumor-specific details, which opens a new pathway for more efficient and accurate diagnostic tools. Code is available here., Competing Interests: 7.CONFLICTS OF INTEREST The authors have no relevant financial or non-financial interests to disclose.
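HLFD distills knowledge between teacher and student layers at the feature and pixel levels. The paper's exact layer-selective scheme is not reproduced here, but a generic feature-level distillation loss of the kind such methods build on can be sketched as follows (the projection matrix, shapes, and values are illustrative assumptions):

```python
import numpy as np

def feature_distillation_loss(student_feat, teacher_feat, proj):
    """Generic feature-level distillation: project the student feature map
    (Cs, H, W) into the teacher's channel space with a 1x1-style linear
    projection (Ct, Cs), then penalize the mean squared difference from
    the teacher feature map (Ct, H, W)."""
    projected = np.einsum('ts,shw->thw', proj, student_feat)
    return float(np.mean((projected - teacher_feat) ** 2))

rng = np.random.default_rng(0)
student = rng.standard_normal((8, 4, 4))           # shallow, narrow student layer
proj = rng.standard_normal((16, 8)) / np.sqrt(8)   # learnable projection in practice
teacher = np.einsum('ts,shw->thw', proj, student)  # perfectly aligned case

loss_aligned = feature_distillation_loss(student, teacher, proj)
loss_perturbed = feature_distillation_loss(student, teacher + 1.0, proj)
# loss_aligned is 0.0; any mismatch with the teacher raises the loss
```

In a real KD setup this loss would be summed over the selected (teacher layer, student layer) pairs and added to the segmentation loss during student training.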
- Published
- 2024
6. Omicron detection with large language models and YouTube audio data.
- Author
- Anibal JT, Landa AJ, Hang NTT, Song MJ, Peltekian AK, Shin A, Huth HB, Hazen LA, Christou AS, Rivera J, Morhard RA, Bagci U, Li M, Bensoussan Y, Clifton DA, and Wood BJ
- Abstract
Publicly available audio data presents a unique opportunity for the development of digital health technologies with large language models (LLMs). In this study, YouTube was mined to collect audio data from individuals with self-declared positive COVID-19 tests, as well as from those with other upper respiratory infections (URI) and healthy subjects discussing a diverse range of topics. The resulting dataset was transcribed with the Whisper model and used to assess the capacity of LLMs for detecting self-reported COVID-19 cases and performing variant classification. Following prompt optimization, LLMs achieved accuracies of 0.89 and 0.97, respectively, in the tasks of identifying self-reported COVID-19 cases and other respiratory illnesses. The model also obtained a mean accuracy of 0.77 at identifying the variant of self-reported COVID-19 cases using only symptoms and other health-related factors described in the YouTube videos. In comparison with past studies, which used scripted, standardized voice samples to capture biomarkers, this study focused on extracting meaningful information from public online audio data. This work introduced novel design paradigms for pandemic management tools, showing the potential of audio data in clinical and public health applications., Competing Interests: Disclosures / Conflicts of Interest: The authors declare no competing non-financial interests but the following competing financial interests. NIH may own intellectual property in the field. NIH and BJW receive royalties for licensed patents from Philips, unrelated to this work. BJW is Principal Investigator on the following CRADAs (Cooperative Research & Development Agreements) between NIH and industry: Philips, Philips Research, Celsion Corp, BTG Biocompatibles / Boston Scientific, Siemens, NVIDIA, XACT Robotics, and Promaxo (in progress).
The following industry partners also support research in CIO lab via equipment, personnel, devices and/ or drugs: 3T Technologies (devices), Exact Imaging (data), AngioDynamics (equipment), AstraZeneca (pharmaceuticals, NCI CRADA), ArciTrax (devices and equipment), Imactis (Equipment), Johnson & Johnson (equipment), Medtronic (equipment), Theromics (Supplies), Profound Medical (equipment and supplies), QT Imaging (equipment and supplies). The content of this manuscript does not necessarily reflect the views, policies, or opinions of the National Institutes of Health (NIH), the U.S. Department of Health and Human Services, the U.K. National Health Service, the U.K. National Institute for Health Research, the U.K. Department of Health, InnoHK – ITC, or the University of Oxford. The mention of commercial products, their source, or their use in connection with material reported herein is not to be construed as an actual or implied endorsement of such products by the U.S. government.
- Published
- 2024
7. COVID-19 Detection From Respiratory Sounds With Hierarchical Spectrogram Transformers.
- Author
- Aytekin I, Dalmaz O, Gonc K, Ankishan H, Saritas EU, Bagci U, Celik H, and Cukur T
- Subjects
- Humans, Auscultation, Cough, Electric Power Supplies, Respiratory Sounds diagnosis, COVID-19 diagnosis
- Abstract
Monitoring of prevalent airborne diseases such as COVID-19 characteristically involves respiratory assessments. While auscultation is a mainstream method for preliminary screening of disease symptoms, its utility is hampered by the need for dedicated hospital visits. Remote monitoring based on recordings of respiratory sounds on portable devices is a promising alternative, which can assist in early assessment of COVID-19 that primarily affects the lower respiratory tract. In this study, we introduce a novel deep learning approach to distinguish patients with COVID-19 from healthy controls given audio recordings of cough or breathing sounds. The proposed approach leverages a novel hierarchical spectrogram transformer (HST) on spectrogram representations of respiratory sounds. HST embodies self-attention mechanisms over local windows in spectrograms, and window size is progressively grown over model stages to capture local to global context. HST is compared against state-of-the-art conventional and deep-learning baselines. Demonstrations on crowd-sourced multi-national datasets indicate that HST outperforms competing methods, achieving over 90% area under the receiver operating characteristic curve (AUC) in detecting COVID-19 cases.
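HST operates on spectrogram representations of respiratory sounds. A minimal sketch of how a magnitude spectrogram is obtained from raw audio via a windowed short-time Fourier transform (illustrative parameters and test tone, not the authors' preprocessing):

```python
import numpy as np

def magnitude_spectrogram(signal, win_len=256, hop=128):
    """Short-time Fourier transform magnitude: slide a Hann window over
    the 1-D audio signal and take |FFT| of each frame.
    Returns an array of shape (n_frames, win_len // 2 + 1)."""
    window = np.hanning(win_len)
    frames = [signal[s:s + win_len] * window
              for s in range(0, len(signal) - win_len + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1))

# A pure 1 kHz tone sampled at 8 kHz should concentrate energy in one bin:
# bin spacing is sr / win_len = 31.25 Hz, so the peak sits at bin 1000/31.25 = 32.
sr, f0 = 8000, 1000
t = np.arange(sr) / sr
spec = magnitude_spectrogram(np.sin(2 * np.pi * f0 * t))
peak_bin = int(spec.mean(axis=0).argmax())
```

A transformer like HST then treats such a time-frequency array as a 2-D input, attending over local spectrogram windows whose size grows across model stages.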
- Published
- 2024
8. Evaluation of pan-Immuno-Inflammation value for In-hospital mortality in acute pulmonary embolism patients.
- Author
- Çiçek V, Yavuz S, Şaylık F, Taşlıçukur Ş, Öz A, Babaoğlu M, Erdem A, Yılmaz İ, Bagci U, and Cinar T
- Subjects
- Humans, Male, Female, Aged, Middle Aged, Acute Disease, Prognosis, Risk Factors, Tomography, X-Ray Computed, Aged, 80 and over, Natriuretic Peptide, Brain blood, Peptide Fragments blood, L-Lactate Dehydrogenase blood, Biomarkers, Predictive Value of Tests, Logistic Models, Pulmonary Embolism mortality, Hospital Mortality, Inflammation, Severity of Illness Index
- Abstract
Background: The pan-immune-inflammation value (PIV) is a new and comprehensive index that reflects both the immune response and systemic inflammation in the body., Objective: The aim of this study was to investigate the prognostic relevance of the PIV in predicting in-hospital mortality in acute pulmonary embolism (PE) patients and to compare it with the well-known risk scoring system, the PE severity index (PESI), which is commonly used for short-term mortality prediction in such patients., Methods: In total, 373 acute PE patients diagnosed with contrast-enhanced computed tomography were included in the study. A detailed cardiac evaluation of each patient was performed, and the PESI and PIV were calculated., Results: In total, 60 patients died during their hospital stay. The multivariable logistic regression analysis revealed that baseline heart rate, N-terminal pro-B-type natriuretic peptide, lactate dehydrogenase, PIV, and PESI were independent risk factors for in-hospital mortality in acute PE patients. Compared with the PESI, the PIV was non-inferior in terms of predicting the survival status of patients with acute PE., Conclusion: In our study, we found that the PIV was statistically significant in predicting in-hospital mortality in acute PE patients and was non-inferior to the PESI.
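For reference, the PIV is commonly defined in the literature as the product of the neutrophil, platelet, and monocyte counts divided by the lymphocyte count. A minimal sketch with hypothetical complete-blood-count values (not data from this study):

```python
def pan_immune_inflammation_value(neutrophils, platelets, monocytes, lymphocytes):
    """PIV as commonly defined in the literature:
    (neutrophil count x platelet count x monocyte count) / lymphocyte count.
    Counts are typically given in 10^3 cells per microliter."""
    return neutrophils * platelets * monocytes / lymphocytes

# Illustrative (hypothetical) values:
piv = pan_immune_inflammation_value(neutrophils=5.0, platelets=250.0,
                                    monocytes=0.6, lymphocytes=1.5)
# 5.0 * 250.0 * 0.6 / 1.5 = 500.0
```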
- Published
- 2024
9. Domain Generalization with Correlated Style Uncertainty.
- Author
- Zhang Z, Wang B, Jha D, Demir U, and Bagci U
- Abstract
Domain generalization (DG) approaches intend to extract domain-invariant features that can lead to a more robust deep learning model. In this regard, style augmentation is a strong DG method that takes advantage of instance-specific feature statistics, which contain informative style characteristics, to synthesize novel domains. While it is one of the state-of-the-art methods, prior works on style augmentation have either disregarded the interdependence amongst distinct feature channels or have solely constrained style augmentation to linear interpolation. To address these research gaps, in this work, we introduce a novel augmentation approach, named Correlated Style Uncertainty (CSU), which surpasses the limitations of linear interpolation in style statistic space while preserving vital correlation information. Our method's efficacy is established through extensive experimentation on diverse cross-domain computer vision and medical imaging classification tasks: the PACS, Office-Home, and Camelyon17 datasets and the Duke-Market1501 instance retrieval task. The results showcase a remarkable improvement margin over existing state-of-the-art techniques. The source code is available at https://github.com/freshman97/CSU.
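The linear-interpolation style augmentation that CSU generalizes mixes per-channel feature statistics between two instances. A minimal NumPy sketch of that baseline idea (not CSU itself; shapes and values are illustrative):

```python
import numpy as np

def mix_style(x, x_ref, lam, eps=1e-6):
    """Linear-interpolation style augmentation (the baseline CSU extends):
    replace the per-channel mean/std of feature map x (C, H, W) with a
    convex combination of its own statistics and those of x_ref."""
    mu, sig = x.mean(axis=(1, 2), keepdims=True), x.std(axis=(1, 2), keepdims=True)
    mu_r, sig_r = x_ref.mean(axis=(1, 2), keepdims=True), x_ref.std(axis=(1, 2), keepdims=True)
    mu_mix = lam * mu + (1 - lam) * mu_r
    sig_mix = lam * sig + (1 - lam) * sig_r
    # Normalize x, then re-style it with the mixed statistics.
    return sig_mix * (x - mu) / (sig + eps) + mu_mix

rng = np.random.default_rng(1)
x, x_ref = rng.standard_normal((4, 8, 8)), rng.standard_normal((4, 8, 8))
out = mix_style(x, x_ref, lam=1.0)    # lam=1 keeps the original style
out0 = mix_style(x, x_ref, lam=0.0)   # lam=0 adopts the reference style
```

CSU's contribution is precisely to go beyond this per-channel, linearly interpolated treatment by modeling the correlation between channels in the style statistics.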
- Published
- 2024
10. The past, current, and future of neonatal intensive care units with artificial intelligence: a systematic review.
- Author
- Keles E and Bagci U
- Abstract
Machine learning and deep learning are two subsets of artificial intelligence that involve teaching computers to learn and make decisions from data. Most recent developments in artificial intelligence come from deep learning, which has proven revolutionary in almost all fields, from computer vision to the health sciences. The effects of deep learning in medicine have changed the conventional ways of clinical application significantly. Although some sub-fields of medicine, such as pediatrics, have been relatively slow in receiving the critical benefits of deep learning, related research in pediatrics has started to accumulate to a significant level. Hence, in this paper, we review recently developed machine learning and deep learning-based solutions for neonatology applications. We systematically evaluate the roles of both classical machine learning and deep learning in neonatology applications, define the methodologies, including algorithmic developments, and describe the remaining challenges in the assessment of neonatal diseases, following the PRISMA 2020 guidelines. To date, the primary areas of focus in neonatology regarding AI applications have included survival analysis, neuroimaging, analysis of vital parameters and biosignals, and diagnosis of retinopathy of prematurity. We categorically summarize 106 research articles from 1996 to 2022 and discuss their pros and cons, aiming to further enhance the comprehensiveness of this systematic review. We also discuss possible directions for new AI models and the future of neonatology with the rising power of AI, suggesting roadmaps for the integration of AI into neonatal intensive care units., (© 2023. The Author(s).)
- Published
- 2023
11. A multi-institutional pediatric dataset of clinical radiology MRIs by the Children's Brain Tumor Network.
- Author
- Familiar AM, Kazerooni AF, Anderson H, Lubneuski A, Viswanathan K, Breslow R, Khalili N, Bagheri S, Haldar D, Kim MC, Arif S, Madhogarhia R, Nguyen TQ, Frenkel EA, Helili Z, Harrison J, Farahani K, Linguraru MG, Bagci U, Velichko Y, Stevens J, Leary S, Lober RM, Campion S, Smith AA, Morinigo D, Rood B, Diamond K, Pollack IF, Williams M, Vossough A, Ware JB, Mueller S, Storm PB, Heath AP, Waanders AJ, Lilly J, Mason JL, Resnick AC, and Nabavizadeh A
- Abstract
Pediatric brain and spinal cancers remain the leading cause of cancer-related death in children. Advancements in clinical decision support for pediatric neuro-oncology utilizing the wealth of radiology imaging data collected through standard care, however, have significantly lagged behind other domains. Such data are ripe for use with predictive analytics such as artificial intelligence (AI) methods, which require large datasets. To address this unmet need, we provide a multi-institutional, large-scale pediatric dataset of 23,101 multi-parametric MRI exams acquired through routine care for 1,526 brain tumor patients, as part of the Children's Brain Tumor Network. This includes longitudinal MRIs across various cancer diagnoses, with associated patient-level clinical information, digital pathology slides, and tissue genotype and omics data. To facilitate downstream analysis, treatment-naïve images for 370 subjects were processed and released through the NCI Childhood Cancer Data Initiative via the Cancer Data Service. Through ongoing efforts to continuously build these imaging repositories, we aim to accelerate discovery and translational AI models with real-world data, ultimately empowering precision medicine for children., Competing Interests: Competing interests The authors have no conflicts of interest to declare.
- Published
- 2023
12. Radiomics Boosts Deep Learning Model for IPMN Classification.
- Author
- Yao L, Zhang Z, Demir U, Keles E, Vendrami C, Agarunov E, Bolan C, Schoots I, Bruno M, Keswani R, Miller F, Gonda T, Yazici C, Tirkes T, Wallace M, Spampinato C, and Bagci U
- Abstract
Intraductal Papillary Mucinous Neoplasm (IPMN) cysts are pre-malignant pancreatic lesions that can progress into pancreatic cancer. Therefore, detecting them and stratifying their risk level is of the utmost importance for effective treatment planning and disease control. However, this is a highly challenging task because of the diverse and irregular shape, texture, and size of the IPMN cysts as well as of the pancreas itself. In this study, we propose a novel computer-aided diagnosis pipeline for IPMN risk classification from multi-contrast MRI scans. Our proposed analysis framework includes an efficient volumetric self-adapting segmentation strategy for pancreas delineation, followed by a newly designed deep learning-based classification scheme combined with a radiomics-based predictive approach. In a series of rigorous experiments, we test our proposed decision-fusion model on multi-center data sets of 246 multi-contrast MRI scans from five centers and obtain performance superior to the state of the art (SOTA) in this field. Our ablation studies demonstrate the significance of both the radiomics and deep learning modules for achieving the new SOTA performance compared to international guidelines and published studies (81.9% vs. 61.3% accuracy). Our findings have important implications for clinical decision-making. The code is available upon publication.
- Published
- 2023
13. Self-supervised Semantic Segmentation: Consistency over Transformation.
- Author
- Karimijafarbigloo S, Azad R, Kazerouni A, Velichko Y, Bagci U, and Merhof D
- Abstract
Accurate medical image segmentation is of utmost importance for enabling automated clinical decision procedures. However, prevailing supervised deep learning approaches for medical image segmentation encounter significant challenges due to their heavy dependence on extensive labeled training data. To tackle this issue, we propose a novel self-supervised algorithm, S3-Net, which integrates a robust framework based on the proposed Inception Large Kernel Attention (I-LKA) modules. This architectural enhancement makes it possible to comprehensively capture contextual information while preserving local intricacies, thereby enabling precise semantic segmentation. Furthermore, considering that lesions in medical images often exhibit deformations, we leverage deformable convolution as an integral component to effectively capture and delineate lesion deformations for superior object boundary definition. Additionally, our self-supervised strategy emphasizes the acquisition of invariance to affine transformations, which are commonly encountered in medical scenarios. This emphasis on robustness with respect to geometric distortions significantly enhances the model's ability to accurately model and handle such distortions. To enforce spatial consistency and promote the grouping of spatially connected image pixels with similar feature representations, we introduce a spatial consistency loss term. This aids the network in effectively capturing the relationships among neighboring pixels and enhances the overall segmentation quality. The S3-Net approach iteratively learns pixel-level feature representations for image content clustering in an end-to-end manner. Our experimental results on skin lesion and lung organ segmentation tasks show the superior performance of our method compared to SOTA approaches.
- Published
- 2023
14. Laplacian-Former: Overcoming the Limitations of Vision Transformers in Local Texture Detection.
- Author
- Azad R, Kazerouni A, Azad B, Aghdam EK, Velichko Y, Bagci U, and Merhof D
- Abstract
Vision Transformer (ViT) models have demonstrated a breakthrough in a wide range of computer vision tasks. However, compared to Convolutional Neural Network (CNN) models, ViT models have been observed to struggle to capture the high-frequency components of images, which can limit their ability to detect local textures and edge information. As abnormalities in human tissue, such as tumors and lesions, may vary greatly in structure, texture, and shape, high-frequency information such as texture is crucial for effective semantic segmentation. To address this limitation in ViT models, we propose a new technique, Laplacian-Former, which enhances the self-attention map by adaptively re-calibrating the frequency information in a Laplacian pyramid. More specifically, our proposed method utilizes a dual attention mechanism via efficient attention and frequency attention: the efficient attention mechanism reduces the complexity of self-attention to linear while producing the same output, selectively intensifying the contribution of shape and texture features. Furthermore, we introduce a novel efficient enhancement multi-scale bridge that effectively transfers spatial information from the encoder to the decoder while preserving the fundamental features. We demonstrate the efficacy of Laplacian-Former on multi-organ and skin lesion segmentation tasks, with +1.87% and +0.76% Dice score improvements over SOTA approaches, respectively. Our implementation is publicly available at GitHub.
- Published
- 2023
15. Selecting the best optimizers for deep learning-based medical image segmentation.
- Author
- Mortazi A, Cicek V, Keles E, and Bagci U
- Abstract
Purpose: The goal of this work is to explore the best optimizers for deep learning in the context of medical image segmentation and to provide guidance on how to design segmentation networks with effective optimization strategies., Approach: Most successful deep learning networks are trained using two types of stochastic gradient descent (SGD) algorithms: adaptive learning and accelerated schemes. Adaptive learning helps with fast convergence by starting with a larger learning rate (LR) and gradually decreasing it. Momentum optimizers are particularly effective at quickly optimizing neural networks within the accelerated-schemes category. By revealing the potential interplay between these two types of algorithms (LR schedules and momentum optimizers, with momentum rate abbreviated as MR), in this article, we explore the two variants of SGD algorithms in a single setting. We suggest using cyclic learning as the base optimizer and integrating optimal values of the learning rate and momentum rate. The new optimization function proposed in this work is based on the Nesterov accelerated gradient optimizer, which is more efficient computationally and has better generalization capabilities compared to other adaptive optimizers., Results: We investigated the relationship of LR and MR on the important problem of medical image segmentation of cardiac structures from MRI and CT scans. We conducted experiments using the cardiac imaging dataset from the ACDC challenge of MICCAI 2017, and four different architectures were shown to be successful for cardiac image segmentation problems.
Our comprehensive evaluations demonstrated that the proposed optimizer achieved better results (over a 2% improvement in the Dice metric) than other optimizers in the deep learning literature, with similar or lower computational cost in both single- and multi-object segmentation settings., Conclusions: We hypothesized that the combination of accelerated and adaptive optimization methods can have a drastic effect on medical image segmentation performance. To this end, we proposed a new cyclic optimization method (Cyclic Learning/Momentum Rate) to address the efficiency and accuracy problems in deep learning-based medical image segmentation. The proposed strategy yielded better generalization in comparison to adaptive optimizers., Competing Interests: AM was employed by the company Volastra Therapeutics. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The authors AM and UB declared that they were editorial board members of Frontiers at the time of submission. This had no impact on the peer review process or the final decision., (© 2023 Mortazi, Cicek, Keles and Bagci.)
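A triangular cyclic schedule that raises the learning rate while lowering the momentum rate over each cycle, of the kind this abstract discusses, can be sketched as follows (the exact schedule, bounds, and coupling used in the paper may differ; the values here are illustrative):

```python
def cyclic_lr_momentum(step, step_size, lr_min=1e-4, lr_max=1e-2,
                       mr_min=0.85, mr_max=0.95):
    """Triangular cyclic schedule: the learning rate rises from lr_min to
    lr_max and falls back over one cycle of 2 * step_size steps, while the
    momentum rate (MR) moves in the opposite direction, a common pairing
    in cyclical-training practice."""
    cycle_pos = step % (2 * step_size)
    frac = cycle_pos / step_size if cycle_pos < step_size else 2 - cycle_pos / step_size
    lr = lr_min + (lr_max - lr_min) * frac
    mr = mr_max - (mr_max - mr_min) * frac
    return lr, mr

lr0, mr0 = cyclic_lr_momentum(0, step_size=100)           # start of a cycle
lr_peak, mr_low = cyclic_lr_momentum(100, step_size=100)  # mid-cycle peak
# lr0 == 1e-4 with mr0 == 0.95; lr_peak == 1e-2 with mr_low == 0.85
```

Cycling the two rates in opposite directions keeps the effective step size roughly bounded: when the LR is at its peak, the lowered momentum damps the accumulated velocity, and vice versa.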
- Published
- 2023
16. A review of deep learning and radiomics approaches for pancreatic cancer diagnosis from medical imaging.
- Author
- Yao L, Zhang Z, Keles E, Yazici C, Tirkes T, and Bagci U
- Subjects
- Humans, Artificial Intelligence, Pancreas, Tomography, X-Ray Computed, Deep Learning, Pancreatic Neoplasms diagnostic imaging
- Abstract
Purpose of Review: Early and accurate diagnosis of pancreatic cancer is crucial for improving patient outcomes, and artificial intelligence (AI) algorithms have the potential to play a vital role in computer-aided diagnosis of pancreatic cancer. In this review, we aim to provide the latest and relevant advances in AI, specifically deep learning (DL) and radiomics approaches, for pancreatic cancer diagnosis using cross-sectional imaging examinations such as computed tomography (CT) and magnetic resonance imaging (MRI)., Recent Findings: This review highlights the recent developments in DL techniques applied to medical imaging, including convolutional neural networks (CNNs), transformer-based models, and novel deep learning architectures that focus on multitype pancreatic lesions, multiorgan and multitumor segmentation, as well as incorporating auxiliary information. We also discuss advancements in radiomics, such as improved imaging feature extraction, optimized machine learning classifiers and integration with clinical data. Furthermore, we explore implementing AI-based clinical decision support systems for pancreatic cancer diagnosis using medical imaging in practical settings., Summary: Deep learning and radiomics with medical imaging have demonstrated strong potential to improve diagnostic accuracy of pancreatic cancer, facilitate personalized treatment planning, and identify prognostic and predictive biomarkers. However, challenges remain in translating research findings into clinical practice. More studies are required focusing on refining these methods, addressing significant limitations, and developing integrative approaches for data analysis to further advance the field of pancreatic cancer diagnosis., (Copyright © 2023 Wolters Kluwer Health, Inc. All rights reserved.)
- Published
- 2023
- Full Text
- View/download PDF
17. TransResU-Net: A Transformer based ResU-Net for Real-Time Colon Polyp Segmentation.
- Author
-
Tomar NK, Shergill A, Rieders B, Bagci U, and Jha D
- Subjects
- Humans, Early Detection of Cancer, Colorectal Neoplasms diagnosis, Colonic Polyps diagnostic imaging, Colonic Neoplasms diagnostic imaging, Adenoma diagnostic imaging
- Abstract
Colorectal cancer (CRC) is one of the most common cancers and causes of cancer-related mortality worldwide. Performing colon cancer screening in a timely fashion is key to early detection. Colonoscopy is the primary modality used to diagnose colon cancer. However, the miss rate of polyps, adenomas, and advanced adenomas remains significantly high. Early detection of polyps at the precancerous stage can help reduce the mortality rate and the economic burden associated with colorectal cancer. A deep learning-based computer-aided diagnosis (CADx) system may help gastroenterologists identify polyps that might otherwise be missed, thereby improving the polyp detection rate. Additionally, a CADx system could prove to be a cost-effective tool for long-term colorectal cancer prevention. In this study, we propose a deep learning-based architecture for automatic polyp segmentation called Transformer ResU-Net (TransResU-Net). The proposed architecture is built upon residual blocks with ResNet-50 as the backbone and takes advantage of the transformer self-attention mechanism as well as dilated convolutions. Experimental results on two publicly available polyp segmentation benchmark datasets show that TransResU-Net obtains highly promising Dice scores at real-time speed. Given its strong performance on these metrics, we conclude that TransResU-Net could be a strong benchmark for building a real-time polyp detection system for the early diagnosis, treatment, and prevention of colorectal cancer. The source code of the proposed TransResU-Net is publicly available at https://github.com/nikhilroxtomar/TransResUNet.
- Published
- 2023
- Full Text
- View/download PDF
18. An Efficient Multi-Scale Fusion Network for 3D Organs at Risk (OARs) Segmentation.
- Author
-
Srivastava A, Jha D, Keles E, Aydogan B, Abazeed M, and Bagci U
- Subjects
- Radiotherapy Planning, Computer-Assisted methods, Organs at Risk, Tomography, X-Ray Computed methods
- Abstract
Accurate segmentation of organs-at-risk (OARs) is a precursor to optimizing radiation therapy planning. Existing deep learning-based multi-scale fusion architectures have demonstrated a tremendous capacity for 2D medical image segmentation. The key to their success is aggregating global context while maintaining high-resolution representations. However, when translated to 3D segmentation problems, existing multi-scale fusion architectures may underperform due to their heavy computational overhead and substantial data requirements. To address this issue, we propose a new OAR segmentation framework, called OARFocalFuseNet, which fuses multi-scale features and employs focal modulation to capture global-local context across multiple scales. Each resolution stream is enriched with features from different resolution scales, and multi-scale information is aggregated to model diverse contextual ranges, further boosting the feature representations. Comprehensive comparisons in our experimental setup on OAR segmentation as well as multi-organ segmentation show that the proposed OARFocalFuseNet outperforms recent state-of-the-art methods on the publicly available OpenKBP dataset and the Synapse multi-organ segmentation dataset. Both of the proposed methods (3D-MSF and OARFocalFuseNet) showed promising performance in terms of standard evaluation metrics. Our best-performing method (OARFocalFuseNet) obtained a Dice coefficient of 0.7995 and a Hausdorff distance of 5.1435 on the OpenKBP dataset, and a Dice coefficient of 0.8137 on the Synapse multi-organ segmentation dataset. Our code is available at https://github.com/NoviceMAn-prog/OARFocalFuse.
- Published
- 2023
- Full Text
- View/download PDF
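The Dice coefficient and Hausdorff distance reported in the abstract above are standard segmentation metrics. A minimal sketch of both, on tiny hypothetical binary masks represented as sets of foreground pixel coordinates (illustrative only, not the authors' code):

```python
# Toy implementations of the two evaluation metrics used above.
def dice(a, b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for sets of foreground pixel coordinates."""
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (Euclidean)."""
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(s, t):
        # Worst-case distance from a point in s to its nearest point in t.
        return max(min(d(p, q) for q in t) for p in s)
    return max(directed(a, b), directed(b, a))

pred = {(0, 0), (0, 1), (1, 0), (1, 1)}   # hypothetical predicted mask pixels
gt   = {(0, 1), (1, 0), (1, 1), (2, 1)}   # hypothetical ground-truth pixels
print(round(dice(pred, gt), 4))           # → 0.75
print(hausdorff(pred, gt))                # → 1.0
```

Dice rewards overlap (1.0 is perfect), while the Hausdorff distance penalizes the worst boundary error, which is why papers such as the one above report both.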
19. Current State of Diffusion-Weighted Imaging and Diffusion Tensor Imaging for Traumatic Brain Injury Prognostication.
- Author
-
Grant M, Liu J, Wintermark M, Bagci U, and Douglas D
- Subjects
- Humans, Diffusion Tensor Imaging methods, Diffusion Magnetic Resonance Imaging, Prognosis, Brain diagnostic imaging, Brain Injuries, Traumatic diagnostic imaging, Brain Concussion
- Abstract
Advanced imaging techniques are needed to assist in providing a prognosis for patients with traumatic brain injury (TBI), particularly mild TBI (mTBI). Diffusion tensor imaging (DTI) is one promising advanced imaging technique, but has shown variable results in patients with TBI and is not without limitations, especially when considering individual patients. Efforts to resolve these limitations are being explored and include developing advanced diffusion techniques, creating a normative database, improving study design, and testing machine learning algorithms. This article will review the fundamentals of DTI, providing an overview of the current state of its utility in evaluating and providing prognosis in patients with TBI., (Copyright © 2023 Elsevier Inc. All rights reserved.)
- Published
- 2023
- Full Text
- View/download PDF
20. Towards Automatic Cartilage Quantification in Clinical Trials - Continuing from the 2019 IWOAI Knee Segmentation Challenge.
- Author
-
Dam EB, Desai AD, Deniz CM, Rajamohan HR, Regatte R, Iriondo C, Pedoia V, Majumdar S, Perslev M, Igel C, Pai A, Gaj S, Yang M, Nakamura K, Li X, Maqbool H, Irmakci I, Song SE, Bagci U, Hargreaves B, Gold G, and Chaudhari A
- Abstract
Objective: To evaluate whether the deep learning (DL) segmentation methods from the six teams that participated in the IWOAI 2019 Knee Cartilage Segmentation Challenge are appropriate for quantifying cartilage loss in longitudinal clinical trials., Design: We included 556 subjects from the Osteoarthritis Initiative study with manually read cartilage volume scores for the baseline and 1-year visits. The teams used their methods, originally trained for the IWOAI 2019 challenge, to segment the 1130 knee MRIs. These scans were anonymized, and the teams were blinded to any subject or visit identifiers. Two teams also submitted updated methods. The resulting 9,040 segmentations are available online. The segmentations included tibial, femoral, and patellar compartments. In post-processing, we extracted medial and lateral tibial compartments and geometrically defined central medial and lateral femoral sub-compartments. The primary study outcome was the sensitivity to measure cartilage loss as defined by the standardized response mean (SRM)., Results: For the tibial compartments, several of the DL segmentation methods had SRMs similar to the gold-standard manual method. The highest DL SRM was for the lateral tibial compartment at 0.38 (the gold standard had 0.34). For the femoral compartments, the gold standard had higher SRMs than the automatic methods at 0.31/0.30 for the medial/lateral compartments., Conclusion: The lower SRMs for the DL methods in the femoral compartments at 0.2 were possibly due to the simple sub-compartment extraction done during post-processing. The study demonstrated that state-of-the-art DL segmentation methods may be used in standardized longitudinal single-scanner clinical trials for well-defined cartilage compartments.
- Published
- 2023
- Full Text
- View/download PDF
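The standardized response mean (SRM) used as the sensitivity measure in the study above is the mean longitudinal change divided by the standard deviation of that change. A minimal worked sketch, using made-up 1-year cartilage-volume changes (the numbers are hypothetical, not from the study):

```python
# Standardized response mean: mean(change) / SD(change).
def srm(changes):
    n = len(changes)
    mean = sum(changes) / n
    sd = (sum((c - mean) ** 2 for c in changes) / (n - 1)) ** 0.5  # sample SD
    return mean / sd

changes = [-30.0, -10.0, -25.0, -5.0, -20.0]  # hypothetical cartilage losses (mm^3)
print(round(srm(changes), 2))                 # → -1.74
```

A larger |SRM| means the measured change is large relative to its variability, so a segmentation method with an SRM magnitude near the manual gold standard (0.34-0.38 for the tibial compartments above) is similarly sensitive to cartilage loss.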
21. Relational reasoning network for anatomical landmarking.
- Author
-
Torosdagli N, Anwar S, Verma P, Liberton DK, Lee JS, Han WW, and Bagci U
- Abstract
Purpose: We perform anatomical landmarking for craniomaxillofacial (CMF) bones without explicitly segmenting them. Toward this, we propose a simple, yet efficient, deep network architecture, called relational reasoning network (RRN), to accurately learn the local and the global relations among the landmarks in CMF bones; specifically, the mandible, maxilla, and nasal bones., Approach: The proposed RRN works in an end-to-end manner, utilizing learned relations of the landmarks based on dense-block units. Given a few landmarks as input, RRN treats the landmarking process as a data imputation problem in which the predicted landmarks are considered missing., Results: We applied RRN to cone-beam computed tomography scans obtained from 250 patients. With a fourfold cross-validation technique, we obtained an average root mean squared error of < 2 mm per landmark. Our proposed RRN has revealed unique relationships among the landmarks that help us infer the informativeness of the landmark points. The proposed system identifies the missing landmark locations accurately even when severe pathology or deformations are present in the bones., Conclusions: Accurately identifying anatomical landmarks is a crucial step in deformation analysis and surgical planning for CMF surgeries. Achieving this goal without the need for explicit bone segmentation addresses a major limitation of segmentation-based approaches, where segmentation failure (as is often the case in bones with severe pathology or deformation) could easily lead to incorrect landmarking. To the best of our knowledge, this is the first-of-its-kind algorithm for finding anatomical relations of objects using deep learning., (© 2023 The Authors.)
- Published
- 2023
- Full Text
- View/download PDF
22. An automatic segmentation framework for computer-assisted renal scintigraphy procedure.
- Author
-
Rahimi A, Hosntalab M, Babapour Mofrad F, Amoui M, and Bagci U
- Subjects
- Abdomen, Liver, Computers, Image Processing, Computer-Assisted methods, Algorithms, Kidney diagnostic imaging
- Abstract
Renal scintigraphy is one of the techniques for obtaining unique and reliable information in medicine, and a key step for quantitative renal scintigraphy is segmentation of the kidneys. Here, an automatic segmentation framework is proposed for computer-aided renal scintigraphy procedures. To extract the kidney boundary in dynamic renal scintigraphic images, a multi-step approach was proposed, featuring two key steps: localization and segmentation. First, the ROI of each kidney was estimated automatically using Otsu's thresholding, an anatomical constraint, and integral projection. The kidney ROIs were then used as initial contours to create the final kidney contours using geometric active contours; at this segmentation step, an improved variational level set was utilized through the Mumford-Shah formulation. Using an e.cam gamma camera system (SIEMENS), 30 data sets were used to assess the proposed method. Performance was evaluated against manually outlined borders using several measures. The proposed segmentation method successfully extracted the kidney boundary in renal scintigraphic images, achieving a sensitivity of 95.15% and a specificity of 95.33%; the area under the curve in the ROC analysis was 0.974. The proposed technique successfully segmented the renal contour in dynamic renal scintigraphy, producing a correct segmentation of the kidney on all data sets. In addition, the technique was successful on noisy and low-resolution images and in challenging cases with close interfering activities, such as liver and spleen activity., (© 2022. International Federation for Medical and Biological Engineering.)
- Published
- 2023
- Full Text
- View/download PDF
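The Otsu thresholding used above for kidney ROI localization picks the intensity threshold that maximizes the between-class variance of the image histogram. A minimal sketch on a made-up bimodal histogram (an illustration of the general technique, not the paper's implementation):

```python
# Otsu's method: choose the threshold t that maximizes the between-class
# variance w0*w1*(mu0 - mu1)^2 of the bins at or below t vs. those above it.
def otsu_threshold(hist):
    total = sum(hist)
    total_mean = sum(i * h for i, h in enumerate(hist)) / total
    best_t, best_var = 0, -1.0
    w0 = mean0_sum = 0.0
    for t in range(len(hist) - 1):
        w0 += hist[t]                      # weight of the low-intensity class
        mean0_sum += t * hist[t]
        w1 = total - w0                    # weight of the high-intensity class
        if w0 == 0 or w1 == 0:
            continue
        mu0 = mean0_sum / w0
        mu1 = (total_mean * total - mean0_sum) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Hypothetical bimodal histogram: background counts in low bins, organ in high bins.
hist = [10, 30, 40, 10, 2, 1, 5, 25, 35, 15]
print(otsu_threshold(hist))  # → 4 (the valley between the two modes)
```

In a scintigraphy pipeline like the one described, the binary map from such a threshold would then be refined by the anatomical constraint, integral projection, and the level-set segmentation.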
23. Extended Reality (XR) for Condition Assessment of Civil Engineering Structures: A Literature Review.
- Author
-
Catbas FN, Luleci F, Zakaria M, Bagci U, LaViola JJ Jr, Cruz-Neira C, and Reiners D
- Subjects
- Engineering, Technology, Augmented Reality, Virtual Reality
- Abstract
Condition assessment of civil engineering structures has been an active research area due to growing concerns over the safety of aged as well as new civil structures. Utilization of emerging immersive visualization technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) in the architectural, engineering, and construction (AEC) industry has demonstrated that these visualization tools can be paradigm-shifting. Extended Reality (XR), an umbrella term for VR, AR, and MR technologies, has found many diverse use cases in the AEC industry. Despite this exciting trend, there is no review study on the usage of XR technologies for the condition assessment of civil structures. Thus, the present paper aims to fill this gap by presenting a literature review encompassing the utilization of XR technologies for the condition assessment of civil structures. This study aims to provide essential information and guidelines for practitioners and researchers on using XR technologies to maintain the integrity and safety of civil structures.
- Published
- 2022
- Full Text
- View/download PDF
24. Out of Distribution Detection, Generalization, and Robustness Triangle with Maximum Probability Theorem.
- Author
-
Marvasti AE, Marvasti EE, and Bagci U
- Abstract
The Maximum Probability Framework, powered by the Maximum Probability Theorem (MPT), is a recent theoretical development in artificial intelligence that aims to formally define probabilistic models, guide the development of objective functions, and regularize probabilistic models. MPT uses the probability distribution that a model assumes on random variables to provide an upper bound on the probability of the model. We apply MPT to challenging out-of-distribution (OOD) detection problems in computer vision by incorporating it as a regularization scheme in the training of CNNs and their energy-based variants. We demonstrate the effectiveness of the proposed method on 1080 trained models with varying hyperparameters, and conclude that the MPT-based regularization strategy stabilizes and improves the generalization and robustness of the base models, in addition to enhancing OOD performance on the CIFAR10, CIFAR100, and MNIST datasets.
- Published
- 2022
- Full Text
- View/download PDF
25. Dynamic Linear Transformer for 3D Biomedical Image Segmentation.
- Author
-
Zhang Z and Bagci U
- Abstract
Transformer-based neural networks have achieved promising performance on many biomedical image segmentation tasks due to the better global information modeling afforded by the self-attention mechanism. However, most methods are still designed for 2D medical images and ignore the essential 3D volume information. The main challenge for 3D Transformer-based segmentation methods is the quadratic complexity introduced by the self-attention mechanism [17]. In this paper, we address these two research gaps, the lack of 3D methods and the computational complexity of Transformers, by proposing a novel Transformer architecture with an encoder-decoder style and linear complexity. Furthermore, we introduce a dynamic token concept to further reduce the number of tokens used in the self-attention calculation. Taking advantage of the global information modeling, we provide uncertainty maps from different hierarchy stages. We evaluate this method on multiple challenging CT pancreas segmentation datasets. Our results show that our novel 3D Transformer-based segmentation model provides promising segmentation performance and accurate uncertainty quantification using a single annotation. Code is available at https://github.com/freshman97/LinTransUNet.
- Published
- 2022
- Full Text
- View/download PDF
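Linear-complexity attention of the general kind referenced above replaces softmax(QKᵀ)V with a kernelized form φ(Q)(φ(K)ᵀV), so the cost grows linearly in the number of tokens N instead of quadratically. A minimal pure-Python sketch of this general technique (the feature map and toy matrices are assumptions for illustration, not the paper's architecture):

```python
# Kernelized linear attention: out_i = phi(q_i)·(sum_j phi(k_j) v_j^T)
#                                      / (phi(q_i)·sum_j phi(k_j)).
# The d x d summary K^T V is built once, so cost is O(N*d*dv), not O(N^2).
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def linear_attention(Q, K, V, phi=lambda x: max(x, 0.0) + 1.0):  # positive map
    Qf = [[phi(x) for x in row] for row in Q]        # N x d mapped queries
    Kf = [[phi(x) for x in row] for row in K]        # N x d mapped keys
    KtV = matmul([list(c) for c in zip(*Kf)], V)     # d x dv, independent of N
    num = matmul(Qf, KtV)                            # N x dv numerators
    Ksum = [sum(col) for col in zip(*Kf)]            # shared normalizer term
    den = [sum(q * k for q, k in zip(row, Ksum)) for row in Qf]
    return [[x / d for x in row] for row, d in zip(num, den)]

Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0], [3.0]]
print(linear_attention(Q, K, V))  # each row is a weighted average of V's rows
```

Because the weights φ(qᵢ)·φ(kⱼ) are non-negative and normalized, each output row stays a convex combination of the value rows, mirroring what softmax attention produces at quadratic cost.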
26. TGANet: Text-guided attention for improved polyp segmentation.
- Author
-
Tomar NK, Jha D, Bagci U, and Ali S
- Abstract
Colonoscopy is a gold-standard procedure but is highly operator-dependent. Automated segmentation of polyps, which are precancerous precursors, can minimize miss rates and enable timely treatment of colon cancer at an early stage. Even though deep learning methods have been developed for this task, variability in polyp size can impact model training, biasing a model toward the size attribute of the majority of samples in the training dataset and yielding sub-optimal results on differently sized polyps. In this work, we exploit size-related and polyp-number-related features in the form of text attention during training. We introduce an auxiliary classification task to weight the text-based embedding, which allows the network to learn additional feature representations that can distinctly adapt to differently sized polyps and to cases with multiple polyps. Our experimental results demonstrate that these added text embeddings improve the overall performance of the model compared to state-of-the-art segmentation methods. We explore four different datasets and provide insights into size-specific improvements. Our proposed text-guided attention network (TGANet) generalizes well to variable-sized polyps across datasets. Codes are available at https://github.com/nikhilroxtomar/TGANet.
- Published
- 2022
- Full Text
- View/download PDF
27. Musculoskeletal MR Image Segmentation with Artificial Intelligence.
- Author
-
Keles E, Irmakci I, and Bagci U
- Published
- 2022
- Full Text
- View/download PDF
28. Overall Survival Prediction of Glioma Patients With Multiregional Radiomics.
- Author
-
Shaheen A, Bukhari ST, Nadeem M, Burigat S, Bagci U, and Mohy-Ud-Din H
- Abstract
Radiomics-guided prediction of overall survival (OS) in brain gliomas is seen as a significant problem in neuro-oncology. The ultimate goal is to develop a robust MRI-based approach (i.e., a radiomics model) that can accurately classify a novel subject as a short-term, medium-term, or long-term survivor. The BraTS 2020 challenge provides radiological imaging and clinical data (178 subjects) to develop and validate radiomics-based methods for OS classification in brain gliomas. In this study, we empirically evaluated the efficacy of four multiregional radiomic models for OS classification and quantified the robustness of their predictions to variations in automatic segmentation of brain tumor volume. More specifically, we evaluated four radiomic models: the whole tumor (WT) radiomics model, the 3-subregions radiomics model, the 6-subregions radiomics model, and the 21-subregions radiomics model. The 3-subregions radiomics model is based on a physiological segmentation of the whole tumor volume (WT) into three non-overlapping subregions. The 6-subregions and 21-subregions radiomic models are based on an anatomical segmentation of the brain tumor into 6 and 21 anatomical regions, respectively. Moreover, we employed six segmentation schemes (five CNNs and one STAPLE-fusion method) to quantify the robustness of the radiomic models. Our experiments revealed that the 3-subregions radiomics model had the best predictive performance (mean AUC = 0.73) but poor robustness (RSD = 1.99), whereas the 6-subregions and 21-subregions radiomics models were more robust (RSD 1.39) with lower predictive performance (mean AUC 0.71). The poor robustness of the 3-subregions radiomics model was associated with highly variable and inferior segmentation of the tumor core and active tumor subregions, as quantified by the Hausdorff distance metric (4.4-6.5 mm) across the six segmentation schemes.
Failure analysis revealed that the WT, 6-subregions, and 21-subregions radiomics models failed for the same subjects, which is attributed to their common requirement of accurate segmentation of the WT volume. Moreover, short-term survivors were largely misclassified by the radiomic models and had large segmentation errors (average Hausdorff distance of 7.09 mm). Lastly, we concluded that while STAPLE-fusion can reduce segmentation errors, it is not a solution to learning accurate and robust radiomic models., Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2022 Shaheen, Bukhari, Nadeem, Burigat, Bagci and Mohy-ud-Din.)
- Published
- 2022
- Full Text
- View/download PDF
29. Neural Transformers for Intraductal Papillary Mucosal Neoplasms (IPMN) Classification in MRI images.
- Author
-
Salanitri FP, Bellitto G, Palazzo S, Irmakci I, Wallace M, Bolan C, Engels M, Hoogenboom S, Aldinucci M, Bagci U, Giordano D, and Spampinato C
- Subjects
- Electric Power Supplies, Humans, Magnetic Resonance Imaging, Records, Artificial Intelligence, Pancreatic Intraductal Neoplasms
- Abstract
Early detection of precancerous cysts or neoplasms, i.e., Intraductal Papillary Mucosal Neoplasms (IPMN), in the pancreas is a challenging and complex task, and it may lead to a more favourable outcome. Once detected, grading IPMNs accurately is also necessary, since low-risk IPMNs can be placed under a surveillance program, while high-risk IPMNs have to be surgically resected before they turn into cancer. Current standards (Fukuoka and others) for IPMN classification show significant intra- and inter-operator variability, besides being error-prone, making a proper diagnosis unreliable. The established progress in artificial intelligence, through the deep learning paradigm, may provide a key tool for effective support of medical decision-making for pancreatic cancer. In this work, we follow this trend by proposing a novel AI-based IPMN classifier that leverages the recent success of transformer networks in generalizing across a wide variety of tasks, including vision ones. We specifically show that our transformer-based model exploits pre-training better than standard convolutional neural networks, thus supporting the sought architectural universalism of transformers in vision, including the medical image domain, and allows for a better interpretation of the obtained results.
- Published
- 2022
- Full Text
- View/download PDF
30. Multi-Contrast MRI Segmentation Trained on Synthetic Images.
- Author
-
Irmakci I, Unel ZE, Ikizler-Cinbis N, and Bagci U
- Subjects
- Records, Algorithms, Magnetic Resonance Imaging methods
- Abstract
In comprehensive experiments and evaluations, we show that it is possible to generate multiple contrasts (even all of them synthetically) and use the synthetically generated images to train an image segmentation engine. We obtained promising segmentation results on real multi-contrast MRI scans when delineating muscle, fat, bone, and bone marrow, all with training on synthetic images. Based on synthetic-image training, our segmentation results were as high as 93.91%, 94.11%, 91.63%, and 95.33% for muscle, fat, bone, and bone marrow delineation, respectively. These results were not significantly different from the ones obtained when real images were used for segmentation training: 94.68%, 94.67%, 95.91%, and 96.82%, respectively. Clinical relevance: Synthetically generated images could potentially be used in large-scale training of deep networks for segmentation purposes, and the small-dataset problem of many clinical imaging applications can potentially be addressed with the proposed algorithm.
- Published
- 2022
- Full Text
- View/download PDF
31. Video Capsule Endoscopy Classification using Focal Modulation Guided Convolutional Neural Network.
- Author
-
Srivastava A, Tomar NK, Bagci U, and Jha D
- Abstract
Video capsule endoscopy is an active topic in computer vision and medicine, and deep learning can have a positive impact on the future of the technology: it can improve the anomaly detection rate, reduce physicians' screening time, and aid real-world clinical analysis. A computer-aided diagnosis (CADx) classification system for video capsule endoscopy has shown great promise for further improvement. For example, detection of cancerous polyps and bleeding can lead to a swift medical response and improve patients' survival rates. To this end, an automated CADx system must have high throughput and decent accuracy. In this study, we propose FocalConvNet, a focal modulation network integrated with lightweight convolutional layers for the classification of small bowel anatomical landmarks and luminal findings. FocalConvNet leverages focal modulation to attain global context and allows global-local spatial interactions throughout the forward pass. Moreover, the convolutional block, with its intrinsic inductive bias and capacity to extract hierarchical features, allows FocalConvNet to achieve favourable results with high throughput. We compare FocalConvNet with other state-of-the-art (SOTA) methods on Kvasir-Capsule, a large-scale VCE dataset with 44,228 frames and 13 classes of different anomalies. We achieved a weighted F1-score, recall, and Matthews correlation coefficient (MCC) of 0.6734, 0.6373, and 0.2974, respectively, outperforming SOTA methodologies. Further, we obtained the highest throughput, at 148.02 images per second, establishing the potential of FocalConvNet in a real-time clinical environment. The code of the proposed FocalConvNet is available at https://github.com/NoviceMAn-prog/FocalConvNet.
- Published
- 2022
- Full Text
- View/download PDF
32. Automatic Polyp Segmentation with Multiple Kernel Dilated Convolution Network.
- Author
-
Tomar NK, Srivastava A, Bagci U, and Jha D
- Abstract
The detection and removal of precancerous polyps through colonoscopy is the primary technique for the prevention of colorectal cancer worldwide. However, the miss rate of colorectal polyps varies significantly among endoscopists. It is well known that a computer-aided diagnosis (CAD) system can assist endoscopists in detecting colon polyps and minimize this variation. In this study, we introduce a novel deep learning architecture, named MKDCNet, for automatic polyp segmentation that is robust to significant changes in polyp data distribution. MKDCNet is simply an encoder-decoder neural network that uses a pre-trained ResNet50 as the encoder and a novel multiple kernel dilated convolution (MKDC) block that expands the field of view to learn more robust and heterogeneous representations. Extensive experiments on four publicly available polyp datasets and a cell nuclei dataset show that the proposed MKDCNet outperforms state-of-the-art methods when trained and tested on the same dataset, as well as when tested on unseen polyp datasets from different distributions. These results demonstrate the robustness of the proposed architecture. From an efficiency perspective, our algorithm can process ≈45 frames per second on an RTX 3090 GPU. MKDCNet can be a strong benchmark for building real-time systems for clinical colonoscopies. The code of the proposed MKDCNet is available at https://github.com/nikhilroxtomar/MKDCNet.
- Published
- 2022
- Full Text
- View/download PDF
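The dilated convolutions at the heart of the MKDC block described above widen a kernel's field of view by sampling the input with gaps. A minimal 1D toy sketch of the general mechanism (the function, kernel, and data are hypothetical illustrations, not MKDCNet code):

```python
# A 3-tap kernel with dilation r covers a span of 2*r + 1 input samples:
# dilation 1 -> field of view 3, dilation 2 -> field of view 5, etc.
def dilated_conv1d(x, kernel, dilation):
    k = len(kernel)
    span = (k - 1) * dilation                     # extra inputs the kernel spans
    return [sum(kernel[j] * x[i + j * dilation] for j in range(k))
            for i in range(len(x) - span)]        # valid (no-padding) positions

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
kernel = [1.0, 0.0, -1.0]                         # simple edge-like filter
print(dilated_conv1d(x, kernel, 1))               # → [-2.0, -2.0, -2.0, -2.0, -2.0]
print(dilated_conv1d(x, kernel, 2))               # → [-4.0, -4.0, -4.0]
# A multiple-kernel block runs several dilation rates in parallel and fuses
# the resulting feature maps, mixing narrow and wide contexts.
```

The same parameter count thus captures context at several scales, which is the motivation for combining multiple dilation rates in one block.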
33. Design and Rationale for the Use of Magnetic Resonance Imaging Biomarkers to Predict Diabetes After Acute Pancreatitis in the Diabetes RElated to Acute Pancreatitis and Its Mechanisms Study: From the Type 1 Diabetes in Acute Pancreatitis Consortium.
- Author
-
Tirkes T, Chinchilli VM, Bagci U, Parker JG, Zhao X, Dasyam AK, Feranec N, Grajo JR, Shah ZK, Poullos PD, Spilseth B, Zaheer A, Xie KL, Wachsman AM, Campbell-Thompson M, Conwell DL, Fogel EL, Forsmark CE, Hart PA, Pandol SJ, Park WG, Pratley RE, Yazici C, Laughlin MR, Andersen DK, Serrano J, Bellin MD, and Yadav D
- Subjects
- Acute Disease, Artificial Intelligence, Biomarkers, Humans, Magnetic Resonance Imaging methods, Diabetes Mellitus, Type 1 complications, Diabetes Mellitus, Type 1 diagnosis, Pancreatitis diagnostic imaging, Pancreatitis etiology
- Abstract
Abstract: This core component of the Diabetes RElated to Acute pancreatitis and its Mechanisms (DREAM) study will examine the hypothesis that advanced magnetic resonance imaging (MRI) techniques can reflect underlying pathophysiologic changes and provide imaging biomarkers that predict diabetes mellitus (DM) after acute pancreatitis (AP). A subset of participants in the DREAM study will enroll and undergo serial MRI examinations using a specific research protocol. The aim of the study is to differentiate at-risk individuals from those who remain euglycemic by identifying parenchymal features after AP. Performing longitudinal MRI will enable us to observe and understand the natural history of post-AP DM. We will compare MRI parameters obtained by interrogating tissue properties in euglycemic, prediabetic, and incident diabetes subjects and correlate them with metabolic, genetic, and immunological phenotypes. Differentiating imaging parameters will be combined to develop a quantitative composite risk score. This composite risk score will potentially have the ability to monitor the risk of DM in clinical practice or trials. We will use artificial intelligence, specifically deep learning, algorithms to optimize the predictive ability of MRI. In addition to the research MRI, the DREAM study will also correlate clinical computed tomography and MRI scans with DM development., Competing Interests: Besides the funding support from NIH listed above, C.E.F. receives consultant fees or honoraria from Nestle HealthCare Nutrition, Inc, Parexel International Corp, and Medialis, Ltd. M.D.B. is an advisory board member of Insulet and receives research support from Dexcom and Viacyte. S.J.P. owns stock options of Avenzoar Pharmaceuticals, Phyteau, and Lucid Sciences. W.G.P. is a consultant for AbbVie. B.S. is a consultant for Francis Medical and Botimage. T.T. receives royalties from Springer Nature. J.R.G. receives royalties from Elsevier, Inc. D.K.A. receives royalties from McGraw-Hill. R.E.P. is a consultant for Bayer AG, Corcept Therapeutics Incorporated, Dexcom, Gasherbrum Bio, Inc, Hanmi Pharmaceutical Co, Hengrui (USA) Ltd, Merck, Novo Nordisk, Pfizer, Rivus Pharmaceuticals, Inc, Sanofi, Scohia Pharma Inc, and Sun Pharmaceutical Industries, and receives speaker fees from Novo Nordisk. The other authors declare no conflict of interest., (Copyright © 2022 Wolters Kluwer Health, Inc. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
34. Video Analytics in Elite Soccer: A Distributed Computing Perspective.
- Author
-
Jha D, Rauniyar A, Johansen HD, Johansen D, Riegler MA, Halvorsen P, and Bagci U
- Abstract
Ubiquitous sensors and Internet of Things (IoT) technologies have revolutionized the sports industry, providing new methodologies for planning, effective coordination of training, and post-game match analysis. New methods, including machine learning and image and video processing, have been developed for performance evaluation, allowing analysts to track a player's performance in real time. Following FIFA's 2015 approval of electronic performance and tracking systems during games, performance data on a single player or the entire team may be collected using GPS-based wearables, and data from practice sessions outside the sporting arena are being collected in greater volumes than ever before. Recognizing the significance of data in professional soccer, this paper presents video analytics, examines recent state-of-the-art literature in elite soccer, and summarizes existing real-time video analytics algorithms. We also discuss real-time crowdsourcing of the obtained data, tactical and technical performance, and distributed computing and its importance in video analytics, and we propose a future research perspective.
- Published
- 2022
- Full Text
- View/download PDF
35. Nanoencapsulation of Origanum vulgare essential oil into liposomes with anticancer potential.
- Author
-
Kryeziu TL, Haloci E, Loshaj-Shala A, Bagci U, Oral A, Stefkov GJ, Zimmer A, and Basholli-Salihu M
- Subjects
- Antioxidants chemistry, Antioxidants pharmacology, Liposomes, Oils, Volatile chemistry, Oils, Volatile pharmacology, Origanum chemistry
- Abstract
Origanum vulgare L. essential oil possesses a wide spectrum of biological activities. Nanoencapsulation of O. vulgare essential oil into liposomes seems to be a promising strategy to maintain and improve these biological properties. This research was carried out to develop a suitable liposomal formulation for the effective encapsulation of O. vulgare essential oil in order to improve its antioxidant and cytotoxic activities. The characterization of the liposomal nanocarriers was conducted in terms of size, zeta potential, and encapsulation efficiency. An MTT assay was used to assess the cytotoxic activity of the prepared and characterized O. vulgare essential oil liposomes in MCF-7 cancer cell lines. Antioxidant activity was determined by assessing DPPH scavenging activity. O. vulgare essential oil exerted cytotoxic activity with an IC50 of 50 μg/ml. The essential oil of O. vulgare was effectively encapsulated in liposomes, with no significant change observed among the formulations. The antioxidant activity was significantly enhanced after encapsulating the essential oil in liposomes. Origanum vulgare essential-oil-loaded Phospholipon 90H liposomes demonstrated considerably increased cytotoxic activity against MCF-7 cells, whereas Lipoid S100 liposomes showed no significant differences from the non-encapsulated essential oil. Phospholipon 85G liposomes had the least cytotoxic impact. As a result, liposomes containing O. vulgare essential oil may be promising nanocarriers for the development of anticancer agents.
- Published
- 2022
- Full Text
- View/download PDF
36. Transformer based Generative Adversarial Network for Liver Segmentation.
- Author
-
Demir U, Zhang Z, Wang B, Antalek M, Keles E, Jha D, Borhani A, Ladner D, and Bagci U
- Abstract
Automated liver segmentation from radiology scans (CT, MRI) can improve surgery and therapy planning and follow-up assessment, in addition to its conventional use in diagnosis and prognosis. Although convolutional neural networks (CNNs) have become the standard for image segmentation tasks, the field has recently started to shift toward Transformer-based architectures, which exploit the long-range dependency modeling capability of the so-called attention mechanism. In this study, we propose a new segmentation approach that combines Transformers with a Generative Adversarial Network (GAN). The premise behind this choice is that the self-attention mechanism of Transformers allows the network to aggregate high-dimensional features and provide global information modeling, yielding better segmentation performance than traditional methods. Furthermore, we embed this generator in a GAN-based architecture so that the discriminator network can assess the credibility of the generated segmentation masks against real masks from human (expert) annotations. This allows us to exploit the high-dimensional topological information in the masks and provide more reliable segmentation results. Our model achieved a Dice coefficient of 0.9433, recall of 0.9515, and precision of 0.9376, outperforming other Transformer-based approaches. The implementation details of the proposed architecture can be found at https://github.com/UgurDemir/tranformer_liver_segmentation.
- Published
- 2022
- Full Text
- View/download PDF
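The Dice coefficient reported above (0.9433) is the standard overlap metric for segmentation masks; a minimal NumPy sketch of how it is computed (illustrative only, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Identical masks overlap perfectly
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
print(round(dice_coefficient(mask, mask), 4))  # → 1.0
```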
37. Predicting RF Heating of Conductive Leads During Magnetic Resonance Imaging at 1.5 T: A Machine Learning Approach .
- Author
-
Zheng C, Chen X, Nguyen BT, Sanpitak P, Vu J, Bagci U, and Golestanirad L
- Subjects
- Humans, Machine Learning, Magnetic Resonance Imaging, Phantoms, Imaging, Heating, Hot Temperature
- Abstract
The number of patients with active implantable medical devices continues to rise in the United States and around the world. It is estimated that 50-75% of patients with conductive implants will need magnetic resonance imaging (MRI) in their lifetime. A major risk of performing MRI in patients with elongated conductive implants is the radiofrequency (RF) heating of the tissue surrounding the implant's tip due to the antenna effect. Currently, applying full-wave electromagnetic simulation is the standard way to predict the interaction of MRI RF fields with the human body in the presence of conductive implants; however, these simulations are notoriously demanding in terms of memory requirements and computational time. Here we present a proof-of-concept simulation study to demonstrate the feasibility of applying machine learning to predict MRI-induced power deposition in the tissue surrounding conductive wires. We generated 600 clinically relevant lead trajectories as observed in patients with cardiac conductive implants and trained a feedforward neural network to predict the 1g-averaged SAR at the lead tips knowing only the background field of the MRI RF coil and the coordinates of points along the lead trajectory. Training of the network was completed in 11.54 seconds and predictions were made within a second, with R² = 0.87 and root mean squared error (RMSE) = 14.5 W/kg. Our results suggest that machine learning could provide a promising approach for safety assessment of MRI in patients with conductive leads. Clinical Relevance: Machine learning can potentially allow real-time assessment of MRI RF safety in patients with conductive leads when only the lead's trajectory (image-based) and MRI RF coil features (vendor-specific) are known.
- Published
- 2021
- Full Text
- View/download PDF
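The kind of feedforward regressor described, mapping trajectory-derived features to a scalar SAR-like value, can be sketched as follows; the data, shapes, and learning rate here are made-up stand-ins, not the paper's actual network or dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the task: predict a scalar target from a flattened
# vector of lead-trajectory coordinates (30 features, 200 samples).
X = rng.normal(size=(200, 30))
true_w = rng.normal(size=(30,))
y = X @ true_w + 0.1 * rng.normal(size=200)  # synthetic target

# One hidden ReLU layer, trained by full-batch gradient descent on MSE.
W1 = rng.normal(scale=0.1, size=(30, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16,));    b2 = 0.0
lr = 1e-2

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)
    return h, h @ W2 + b2

losses = []
for _ in range(500):
    h, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the MSE loss through the two layers
    gpred = 2 * err / len(y)
    gW2 = h.T @ gpred; gb2 = gpred.sum()
    gh = np.outer(gpred, W2) * (h > 0)
    gW1 = X.T @ gh; gb1 = gh.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"MSE: {losses[0]:.2f} -> {losses[-1]:.2f}")
```

The real model predicted 1g-averaged SAR from the RF coil's background field plus trajectory coordinates; this toy only illustrates the training mechanics of a small feedforward regressor.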
38. Machine learning-based prediction of MRI-induced power absorption in the tissue in patients with simplified deep brain stimulation lead models.
- Author
-
Vu J, Nguyen BT, Bhusal B, Baraboo J, Rosenow J, Bagci U, Bright MG, and Golestanirad L
- Abstract
Interaction of an active electronic implant such as a deep brain stimulation (DBS) system with MRI RF fields can induce excessive tissue heating, limiting MRI accessibility. Efforts to quantify RF heating mostly rely on electromagnetic (EM) simulations to assess the individualized specific absorption rate (SAR), but such simulations require extensive computational resources. Here, we investigate whether a predictive model using machine learning (ML) can predict the local SAR in the tissue around the tips of implanted leads from the distribution of the tangential component of the MRI incident electric field, Etan. A dataset of 260 unique patient-derived and artificial DBS lead trajectories was constructed, and the 1g-averaged SAR, 1gSARmax, at the lead tip during 1.5 T MRI was determined by EM simulations. Etan values along each lead's trajectory and the simulated SAR values were used to train and test the ML algorithm. The resulting predictions indicated that the distribution of Etan could effectively predict 1gSARmax at the DBS lead tip (R = 0.82). Our results indicate that ML has the potential to provide a fast method for predicting MR-induced power absorption in the tissue around the tips of implanted leads such as those in active electronic medical devices.
- Published
- 2021
- Full Text
- View/download PDF
39. Information Bottleneck Attribution for Visual Explanations of Diagnosis and Prognosis.
- Author
-
Demir U, Irmakci I, Keles E, Topcu A, Xu Z, Spampinato C, Jambawalikar S, Turkbey E, Turkbey B, and Bagci U
- Abstract
Visual explanation methods play an important role in patient prognosis when annotated data are limited or unavailable. There have been several attempts to use gradient-based attribution methods to localize pathology from medical scans without using segmentation labels, but this research direction has been impeded by a lack of robustness and reliability: such methods are highly sensitive to the network parameters. In this study, we introduce a robust visual explanation method to address this problem for medical applications. We provide an innovative general-purpose visual explanation algorithm and, as an example application, demonstrate its effectiveness for quantifying COVID-19 lung lesions with high accuracy and robustness without using dense segmentation labels. This approach overcomes the drawbacks of the commonly used Grad-CAM and its extended versions. The premise behind our proposed strategy is that the information flow is minimized while ensuring that the classifier prediction stays similar. Our findings indicate that the bottleneck condition provides a more stable severity estimation than similar attribution methods. The source code will be publicly available upon publication.
- Published
- 2021
- Full Text
- View/download PDF
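The premise stated above, minimizing information flow while keeping the classifier prediction similar, is usually formalized as an information-bottleneck trade-off; a sketch of the standard objective (the paper's exact loss and weighting may differ):

```latex
% Information-bottleneck objective for an intermediate representation Z
% of input X with label Y: compress away the input while preserving
% label-relevant information; beta trades compression against fidelity.
\min_{p(z \mid x)} \;
\underbrace{I(X; Z)}_{\text{information flow}}
\;-\; \beta \, \underbrace{I(Z; Y)}_{\text{prediction fidelity}}
```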
40. Hierarchical 3D Feature Learning for Pancreas Segmentation.
- Author
-
Salanitri FP, Bellitto G, Irmakci I, Palazzo S, Bagci U, and Spampinato C
- Abstract
We propose a novel 3D fully convolutional deep network for automated pancreas segmentation from both MRI and CT scans. More specifically, the proposed model consists of a 3D encoder that learns to extract volume features at different scales; features taken at different points of the encoder hierarchy are then sent to multiple 3D decoders that individually predict intermediate segmentation maps. Finally, all segmentation maps are combined to obtain a unique detailed segmentation mask. We test our model on both CT and MRI imaging data: the publicly available NIH Pancreas-CT dataset (consisting of 82 contrast-enhanced CTs) and a private MRI dataset (consisting of 40 MRI scans). Experimental results show that our model outperforms existing methods on CT pancreas segmentation, obtaining an average Dice score of about 88%, and yields promising segmentation performance on a very challenging MRI data set (average Dice score is about 77%). Additional control experiments demonstrate that the achieved performance is due to the combination of our 3D fully-convolutional deep network and the hierarchical representation decoding, thus substantiating our architectural design.
- Published
- 2021
- Full Text
- View/download PDF
41. Morphometric and Functional Brain Connectivity Differentiates Chess Masters From Amateur Players.
- Author
-
RaviPrakash H, Anwar SM, Biassou NM, and Bagci U
- Abstract
A common task in brain image analysis is the diagnosis of a certain medical condition, wherein groups of healthy controls and diseased subjects are analyzed and compared. On the other hand, for two groups of healthy participants with different proficiency in a certain skill, a distinctive analysis of brain function remains a challenging problem. In this study, we develop new computational tools to explore the functional and anatomical differences that could exist between the brains of healthy individuals identified on the basis of different levels of task experience/proficiency. Toward this end, we look at a dataset of amateur and professional chess players, where we utilize resting-state functional magnetic resonance images to generate functional connectivity (FC) information. In addition, we utilize T1-weighted magnetic resonance imaging to estimate morphometric connectivity (MC) information. We combine functional and anatomical features into a new connectivity matrix, which we term the functional morphometric similarity connectome (FMSC). Since both the FC and MC information are susceptible to redundancy, the size of this information is reduced using statistical feature selection. We employ an off-the-shelf machine learning classifier, the support vector machine, for both single- and multi-modality classifications. From our experiments, we establish that the saliency and ventral attention network of the brain is functionally and anatomically different between the two groups of healthy subjects (chess players). We argue that, since chess involves many aspects of higher-order cognition such as systematic thinking and spatial reasoning, and the identified network is task-positive for cognitive tasks requiring a response, our results are valid and support the feasibility of the proposed computational pipeline.
Moreover, we quantitatively validate an existing neuroscience hypothesis that learning a certain skill could cause a change in the brain (functional connectivity and anatomy) and this can be tested via our novel FMSC algorithm., Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2021 RaviPrakash, Anwar, Biassou and Bagci.)
- Published
- 2021
- Full Text
- View/download PDF
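Functional connectivity (FC) of the kind used above is commonly computed as a region-by-region Pearson correlation matrix over resting-state time series; a minimal sketch with synthetic signals (hypothetical sizes, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 5 brain regions x 120 time points of resting-state signal.
n_regions, n_timepoints = 5, 120
timeseries = rng.normal(size=(n_regions, n_timepoints))
# Couple regions 0 and 1 strongly to illustrate a high FC entry.
timeseries[1] = 0.9 * timeseries[0] + 0.1 * rng.normal(size=n_timepoints)

# Functional connectivity: Pearson correlation between every pair of regions.
fc = np.corrcoef(timeseries)

print(fc.shape)            # (5, 5) region-by-region matrix
print(round(float(fc[0, 1]), 2))  # strong coupling between regions 0 and 1
```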
42. The International Workshop on Osteoarthritis Imaging Knee MRI Segmentation Challenge: A Multi-Institute Evaluation and Analysis Framework on a Standardized Dataset.
- Author
-
Desai AD, Caliva F, Iriondo C, Mortazi A, Jambawalikar S, Bagci U, Perslev M, Igel C, Dam EB, Gaj S, Yang M, Li X, Deniz CM, Juras V, Regatte R, Gold GE, Hargreaves BA, Pedoia V, and Chaudhari AS
- Abstract
Purpose: To organize a multi-institute knee MRI segmentation challenge for characterizing the semantic and clinical efficacy of automatic segmentation methods relevant for monitoring osteoarthritis progression., Materials and Methods: A dataset partition consisting of three-dimensional knee MRI from 88 retrospective patients at two time points (baseline and 1-year follow-up) with ground truth articular (femoral, tibial, and patellar) cartilage and meniscus segmentations was standardized. Challenge submissions and a majority-vote ensemble were evaluated against ground truth segmentations using Dice score, average symmetric surface distance, volumetric overlap error, and coefficient of variation on a holdout test set. Similarities in automated segmentations were measured using pairwise Dice coefficient correlations. Articular cartilage thickness was computed longitudinally and with scans. Correlation between thickness error and segmentation metrics was measured using the Pearson correlation coefficient. Two empirical upper bounds for ensemble performance were computed using combinations of model outputs that consolidated true positives and true negatives., Results: Six teams (T1-T6) submitted entries for the challenge. No differences were observed across any segmentation metrics for any tissues (P = .99) among the four top-performing networks (T2, T3, T4, T6). Dice coefficient correlations between network pairs were high (> 0.85). Per-scan thickness errors were negligible among networks T1-T4 (P = .99), and longitudinal changes showed minimal bias (< 0.03 mm). Low correlations (ρ < 0.41) were observed between segmentation metrics and thickness error. The majority-vote ensemble was comparable to top-performing networks (P = .99). Empirical upper-bound performances were similar for both combinations (P = .99)., Conclusion: Diverse networks learned to segment the knee similarly, where high segmentation accuracy did not correlate with cartilage thickness accuracy and voting ensembles did not exceed individual network performance. See also the commentary by Elhalawani and Mak in this issue. Keywords: Cartilage, Knee, MR-Imaging, Segmentation © RSNA, 2020 Supplemental material is available for this article., Competing Interests: Disclosures of Conflicts of Interest: A.D.D. Activities related to the present article: grants and travel support from the National Science Foundation, the National Institute of Arthritis and Musculoskeletal and Skin Diseases, the National Institute of Biomedical Imaging and Bioengineering, GE Healthcare, and Philips. Activities not related to the present article: grants from the National Institutes of Health. Other relationships: disclosed no relevant relationships. F.C. disclosed no relevant relationships. C. Iriondo disclosed no relevant relationships. A.M. disclosed no relevant relationships. S.J. disclosed no relevant relationships. U.B. disclosed no relevant relationships. M.P. Activities related to the present article: grant from the Independent Research Fund Denmark. Activities not related to the present article: disclosed no relevant relationships.
Other relationships: disclosed no relevant relationships. C. Igel Activities related to the present article: grant from the Danish Council for Independent Research. Activities not related to the present article: disclosed no relevant relationships. Other relationships: disclosed no relevant relationships. E.B.D. Activities related to the present article: disclosed no relevant relationships. Activities not related to the present article: stockholder in Biomediq and Cerebriu. Other relationships: disclosed no relevant relationships. S.G. disclosed no relevant relationships. M.Y. disclosed no relevant relationships. X.L. disclosed no relevant relationships. C.M.D. Activities related to the present article: grant from the National Institute of Arthritis and Musculoskeletal and Skin Diseases. Activities not related to the present article: disclosed no relevant relationships. Other relationships: disclosed no relevant relationships. V.J. disclosed no relevant relationships. R.R. disclosed no relevant relationships. G.E.G. Activities related to the present article: grants from the National Institutes of Health. Activities not related to the present article: board member for HeartVista; consultant for Canon; grants from GE Healthcare. Other relationships: disclosed no relevant relationships. B.A.H. Activities related to the present article: grant from the National Institutes of Health. Activities not related to the present article: royalties from patents licensed by Siemens and GE Healthcare; stockholder in LVIS. Other relationships: disclosed no relevant relationships. V.P. disclosed no relevant relationships. A.S.C. Activities related to the present article: grants from the National Institutes of Health, GE Healthcare, and Philips. 
Activities not related to the present article: board member for BrainKey and Chondrometrics; consultant for Skope, Subtle Medical, Chondrometrics, Image Analysis Group, Edge Analytics, ICM, and Culvert Engineering; stockholder in Subtle Medical, LVIS, and BrainKey; travel support from Paracelsus Medical Private University. Other relationships: disclosed no relevant relationships., (2021 by the Radiological Society of North America, Inc.)
- Published
- 2021
- Full Text
- View/download PDF
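The majority-vote ensemble evaluated in the challenge can be sketched as a per-voxel vote over the submitted binary masks; a toy NumPy example (not the challenge code):

```python
import numpy as np

def majority_vote(masks: np.ndarray) -> np.ndarray:
    """Ensemble binary segmentation masks of shape (models, H, W)
    by keeping each pixel where more than half the models agree."""
    votes = masks.sum(axis=0)
    return votes > (masks.shape[0] / 2)

# Three toy 2x2 masks from three hypothetical models
masks = np.array([
    [[1, 0], [1, 1]],
    [[1, 0], [0, 1]],
    [[0, 1], [1, 1]],
])
print(majority_vote(masks).astype(int))
# [[1 0]
#  [1 1]]
```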
43. Capsules for biomedical image segmentation.
- Author
-
LaLonde R, Xu Z, Irmakci I, Jain S, and Bagci U
- Subjects
- Capsules, Humans, Magnetic Resonance Imaging, Tomography, X-Ray Computed, Image Processing, Computer-Assisted, Neural Networks, Computer
- Abstract
Our work expands the use of capsule networks to the task of object segmentation for the first time in the literature. This is made possible via the introduction of locally-constrained routing and transformation matrix sharing, which reduces the parameter/memory burden and allows for the segmentation of objects at large resolutions. To compensate for the loss of global information in constraining the routing, we propose the concept of "deconvolutional" capsules to create a deep encoder-decoder style network, called SegCaps. We extend the masked reconstruction regularization to the task of segmentation and perform thorough ablation experiments on each component of our method. The proposed convolutional-deconvolutional capsule network, SegCaps, shows state-of-the-art results while using a fraction of the parameters of popular segmentation networks. To validate our proposed method, we perform experiments segmenting pathological lungs from clinical and pre-clinical thoracic computed tomography (CT) scans and segmenting muscle and adipose (fat) tissue from magnetic resonance imaging (MRI) scans of human subjects' thighs. Notably, our experiments in lung segmentation represent the largest-scale study in pathological lung segmentation in the literature, where we conduct experiments across five extremely challenging datasets, containing both clinical and pre-clinical subjects, and nearly 2000 computed tomography scans. Our newly developed segmentation platform outperforms other methods across all datasets while utilizing less than 5% of the parameters in the popular U-Net for biomedical image segmentation. Further, we demonstrate capsules' ability to handle unseen rotations/reflections on natural images., Competing Interests: Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Published by Elsevier B.V.)
- Published
- 2021
- Full Text
- View/download PDF
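SegCaps constrains routing to local windows; the underlying routing-by-agreement step it modifies can be sketched as follows, with made-up capsule counts and dimensions:

```python
import numpy as np

def squash(v, axis=-1, eps=1e-9):
    """Capsule nonlinearity: shrink vector norm into [0, 1) keeping direction."""
    sq = np.sum(v ** 2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * v / np.sqrt(sq + eps)

def dynamic_routing(u_hat, n_iters=3):
    """Routing-by-agreement. `u_hat` has shape (n_in, n_out, dim): predictions
    from each lower capsule for each output capsule. Returns (n_out, dim)."""
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))  # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coeffs
        s = (c[..., None] * u_hat).sum(axis=0)  # weighted sum over inputs
        v = squash(s)                           # output capsule vectors
        b = b + (u_hat * v[None]).sum(axis=-1)  # reward agreeing predictions
    return v

rng = np.random.default_rng(0)
u_hat = rng.normal(size=(6, 3, 4))  # 6 input capsules, 3 outputs, 4-D poses
v = dynamic_routing(u_hat)
print(v.shape)  # (3, 4)
```

SegCaps replaces the global sum over all lower capsules with sums over local spatial windows and shares transformation matrices, which is what makes routing tractable at image resolution.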
44. A machine learning-based prediction of the micropapillary/solid growth pattern in invasive lung adenocarcinoma with radiomics.
- Author
-
He B, Song Y, Wang L, Wang T, She Y, Hou L, Zhang L, Wu C, Babu BA, Bagci U, Waseem T, Yang M, Xie D, and Chen C
- Abstract
Background: Micropapillary/solid (MP/S) growth patterns of lung adenocarcinoma are vital for making clinical decisions regarding surgical intervention. This study aimed to predict the presence of an MP/S component in lung adenocarcinoma using radiomics analysis., Methods: Between January 2011 and December 2013, patients undergoing curative resection of invasive lung adenocarcinoma were included. Using the "PyRadiomics" package, we extracted 90 radiomics features from the preoperative computed tomography (CT) images. Subsequently, four prediction models were built by applying conventional machine learning approaches to the radiomics features: a generalized linear model (GLM), Naïve Bayes, support vector machine (SVM), and random forest classifiers. The models' accuracy was assessed using receiver operating characteristic (ROC) analysis, and the models' stability was validated both internally and externally., Results: A total of 268 patients were included as a primary cohort, and 36.6% (98/268) of them had lung adenocarcinoma with an MP/S component. Patients with an MP/S component had a higher rate of lymph node metastasis (18.4% versus 5.3%) and worse recurrence-free and overall survival. Five radiomics features were selected for model building, and in the internal validation, the four models achieved comparable performance of MP/S prediction in terms of area under the curve (AUC): GLM, 0.74 [95% confidence interval (CI): 0.65-0.83]; Naïve Bayes, 0.75 (95% CI: 0.65-0.85); SVM, 0.73 (95% CI: 0.61-0.83); and random forest, 0.72 (95% CI: 0.63-0.81).
External validation was performed using a test cohort of 193 patients, and the AUC values were 0.70, 0.72, 0.73, and 0.69 for Naïve Bayes, SVM, random forest, and GLM, respectively., Conclusions: A radiomics-based machine learning approach is a strong tool for preoperatively predicting the presence of MP/S growth patterns in lung adenocarcinoma, and can help customize treatment and surveillance strategies., Competing Interests: Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at http://dx.doi.org/10.21037/tlcr-21-44). The authors have no conflicts of interest to declare., (2021 Translational Lung Cancer Research. All rights reserved.)
- Published
- 2021
- Full Text
- View/download PDF
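The AUC values reported above come from ROC analysis; AUC equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one, which gives a compact way to compute it (the scores below are illustrative, not the study's data):

```python
import numpy as np

def auc(labels: np.ndarray, scores: np.ndarray) -> float:
    """ROC AUC via the Mann-Whitney U statistic: fraction of (positive,
    negative) pairs where the positive is scored higher (ties count half)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((greater + 0.5 * ties) / (len(pos) * len(neg)))

labels = np.array([0, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])
print(round(auc(labels, scores), 3))  # → 0.889
```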
45. Quick guide on radiology image pre-processing for deep learning applications in prostate cancer research.
- Author
-
Masoudi S, Harmon SA, Mehralivand S, Walker SM, Raviprakash H, Bagci U, Choyke PL, and Turkbey B
- Abstract
Purpose: Deep learning has achieved major breakthroughs during the past decade in almost every field. There are plenty of publicly available algorithms, each designed to address a different task of computer vision in general. However, most of these algorithms cannot be directly applied to images in the medical domain. Herein, we focus on the required preprocessing steps that should be applied to medical images prior to deep neural networks. Approach: To be able to employ the publicly available algorithms for clinical purposes, we must make a meaningful pixel/voxel representation from medical images that facilitates the learning process. Based on the ultimate goal expected from an algorithm (classification, detection, or segmentation), one may infer the required pre-processing steps that can ideally improve the performance of that algorithm. Required pre-processing steps for computed tomography (CT) and magnetic resonance (MR) images are discussed in detail in their correct order. We further supported our discussion with relevant experiments to investigate the efficiency of the listed preprocessing steps. Results: Our experiments confirmed how using appropriate image pre-processing in the right order can improve the performance of deep neural networks in terms of better classification and segmentation. Conclusions: This work investigates the appropriate pre-processing steps for CT and MR images of prostate cancer patients, supported by several experiments that can be useful for educating those new to the field (https://github.com/NIH-MIP/Radiology_Image_Preprocessing_for_Deep_Learning)., (© 2021 The Authors.)
- Published
- 2021
- Full Text
- View/download PDF
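A typical pair of CT pre-processing steps discussed in such guides is intensity windowing followed by normalization; a minimal sketch (the HU window here is an arbitrary illustrative choice, not a recommendation from the paper):

```python
import numpy as np

def preprocess_ct(volume_hu: np.ndarray, window=(-100, 300)) -> np.ndarray:
    """Clip CT intensities to a Hounsfield-unit window, then z-score
    normalize. The (-100, 300) window is an illustrative soft-tissue range."""
    lo, hi = window
    clipped = np.clip(volume_hu, lo, hi)
    return (clipped - clipped.mean()) / (clipped.std() + 1e-8)

rng = np.random.default_rng(2)
ct = rng.integers(-1000, 1500, size=(8, 64, 64)).astype(float)  # fake HU volume
out = preprocess_ct(ct)
print(out.shape)  # shape is preserved; mean ≈ 0, std ≈ 1 after normalization
```

Windowing discards irrelevant intensity ranges (e.g., air, dense bone) before normalization so the network's input distribution concentrates on the tissue of interest.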
46. Deep Learning Based Staging of Bone Lesions From Computed Tomography Scans.
- Author
-
Masoudi S, Mehralivand S, Harmon SA, Lay N, Lindenberg L, Mena E, Pinto PA, Citrin DE, Gulley JL, Wood BJ, Dahut WL, Madan RA, Bagci U, Choyke PL, and Turkbey B
- Abstract
In this study, we formulated an efficient deep learning-based classification strategy for characterizing metastatic bone lesions using computed tomography (CT) scans of prostate cancer patients. For this purpose, 2,880 annotated bone lesions from CT scans of 114 patients diagnosed with prostate cancer were used for training, validation, and final evaluation. These annotations comprised full lesion segmentations, lesion types, and benign/malignant labels. In this work, we present our approach to developing a state-of-the-art model to classify bone lesions as benign or malignant, where (1) we introduce a valuable dataset to address a clinically important problem, (2) we increase the reliability of our model by patient-level stratification of our dataset, following a lesion-aware distribution at each of the training, validation, and test splits, (3) we explore the impact of lesion texture, morphology, size, location, and volumetric information on classification performance, and (4) we investigate lesion classification using different algorithms, including lesion-based average 2D ResNet-50, lesion-based average 2D ResNeXt-50, 3D ResNet-18, 3D ResNet-50, as well as an ensemble of 2D ResNet-50 and 3D ResNet-18. For this purpose, we employed a train/validation/test split equal to 75%/12%/13%, with several data augmentation methods applied to the training dataset to avoid overfitting and to increase reliability. We achieved an accuracy of 92.2% for correct classification of benign vs. malignant bone lesions in the test set using an ensemble of lesion-based average 2D ResNet-50 and 3D ResNet-18, with texture, volumetric information, and morphology having the greatest discriminative power, in that order. To the best of our knowledge, this is the highest lesion-level accuracy yet achieved on such a comprehensive dataset for this clinically important problem.
This level of classification performance in the early stages of metastasis development bodes well for clinical translation of this strategy.
- Published
- 2021
- Full Text
- View/download PDF
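Patient-level stratification as described, so that no patient's lesions leak across the train/validation/test boundary, can be sketched as follows; the helper and data below are hypothetical, with fractions mirroring the 75%/12%/13% split:

```python
import random
from collections import defaultdict

def patient_level_split(lesions, fractions=(0.75, 0.12, 0.13), seed=42):
    """Split lesion records into train/val/test by PATIENT, not by lesion,
    so no patient contributes to more than one split. `lesions` is a list
    of (patient_id, lesion) pairs."""
    by_patient = defaultdict(list)
    for pid, lesion in lesions:
        by_patient[pid].append(lesion)
    patients = sorted(by_patient)
    random.Random(seed).shuffle(patients)
    n = len(patients)
    n_train = round(fractions[0] * n)
    n_val = round(fractions[1] * n)
    groups = (patients[:n_train],
              patients[n_train:n_train + n_val],
              patients[n_train + n_val:])
    return [[(p, l) for p in group for l in by_patient[p]] for group in groups]

# Toy example: 10 patients with 1-3 lesions each
data = [(p, f"lesion_{p}_{i}") for p in range(10) for i in range(p % 3 + 1)]
train, val, test = patient_level_split(data)
print(len(train) + len(val) + len(test))  # → 19, every lesion lands in one split
```

Splitting by lesion instead would let correlated lesions from one patient appear in both train and test, inflating the measured accuracy.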
47. Integrating Eye Tracking and Speech Recognition Accurately Annotates MR Brain Images for Deep Learning: Proof of Principle.
- Author
-
Stember JN, Celik H, Gutman D, Swinburne N, Young R, Eskreis-Winkler S, Holodny A, Jambawalikar S, Wood BJ, Chang PD, Krupinski E, and Bagci U
- Abstract
Purpose: To generate and assess an algorithm combining eye tracking and speech recognition to extract brain lesion location labels automatically for deep learning (DL)., Materials and Methods: In this retrospective study, 700 two-dimensional brain tumor MRI scans from the Brain Tumor Segmentation database were clinically interpreted. For each image, a single radiologist dictated a standard phrase describing the lesion into a microphone, simulating clinical interpretation. Eye-tracking data were recorded simultaneously. Using speech recognition, gaze points corresponding to each lesion were obtained. Lesion locations were used to train a keypoint detection convolutional neural network to find new lesions. A network was trained to localize lesions for an independent test set of 85 images. The statistical measure to evaluate our method was percent accuracy., Results: Eye tracking with speech recognition was 92% accurate in labeling lesion locations from the training dataset, thereby demonstrating that fully simulated interpretation can yield reliable tumor location labels. These labels were then used to train the DL network. The detection network trained on these labels predicted lesion locations in a separate testing set with 85% accuracy., Conclusion: The DL network was able to locate brain tumors on the basis of training data that were labeled automatically from simulated clinical image interpretation. © RSNA, 2020., Competing Interests: Disclosures of Conflicts of Interest: J.N.S. disclosed no relevant relationships. H.C. disclosed no relevant relationships. D.G. disclosed no relevant relationships. N.S. disclosed no relevant relationships. R.Y. Activities related to the present article: disclosed no relevant relationships. Activities not related to the present article: author paid consultant for Agios, Puma, ICON, and NordicNeuroLab; institution has grant from Agios. Other relationships: disclosed no relevant relationships. S.E.
disclosed no relevant relationships. A.H. Activities related to the present article: disclosed no relevant relationships. Activities not related to the present article: fMRI Consultants (purely educational entity). Other relationships: disclosed no relevant relationships. S.J. disclosed no relevant relationships. B.W. Activities related to the present article: institution receives NIH Intramural grants (work supported in part by NIH Center for Interventional Oncology and the Intramural Research Program of the NIH. Activities not related to the present article: eye tracking patents pending for imaging regarding 2D and 3D transformations. Other relationships: NIH and University of Central Florida may own intellectual property in the space; NIH and NVIDIA have a cooperative research and development agreement; NIH and Siemens have a cooperative research and development agreement; NIH and Philips have a cooperative research and development agreement. P.C. Activities related to the present article: disclosed no relevant relationships. Activities not related to the present article: author has stock in Avicenna.ai and is cofounder; author received travel accommodations from Canon Medical as a consultant. Other relationships: disclosed no relevant relationships. E.K. disclosed no relevant relationships. U.B. disclosed no relevant relationships., (2020 by the Radiological Society of North America, Inc.)
- Published
- 2020
- Full Text
- View/download PDF
48. The Impact of COVID-19 on African American Communities in the United States.
- Author
-
Cyrus E, Clarke R, Hadley D, Bursac Z, Trepka MJ, Dévieux JG, Bagci U, Furr-Holden D, Coudray M, Mariano Y, Kiplagat S, Noel I, Ravelo G, Paley M, and Wagner EF
- Abstract
Purpose: The purpose of this ecological study was to understand the impact of the density of African American (AA) communities on coronavirus disease 2019 (COVID-19) prevalence and death rate within the three most populous counties in each U.S. state and territory (n = 152). Methods: An ecological design was employed for the study. The top three most populous counties of each U.S. state and territory were included in analyses, for a final sample size of n = 152 counties. Confirmed COVID-19 cases and deaths accumulated between January 22, 2020 and April 12, 2020 in each of the three most populous counties in each U.S. state and territory were included. Linear regression was used to determine the association between AA density and COVID-19 prevalence (defined as the percentage of cases for the county population) and death rate (defined as the number of deaths per 100,000 population). The models were adjusted for median age and poverty. Results: There was a direct association between AA density and COVID-19 prevalence; COVID-19 prevalence increased 5% for every 1% increase in county AA density (p < 0.01). There was also an association between county AA density and COVID-19 deaths; the death rate increased by 2 per 100,000 for every percentage increase in county AA density (p = 0.02). Conclusion: These findings indicate that communities with a high AA density have been disproportionately burdened with COVID-19. To help develop effective interventions and programs that address this disparity, further study is needed to understand the social determinants of health driving inequities for this community., Competing Interests: No competing financial interests exist., (© Elena Cyrus et al. 2020; Published by Mary Ann Liebert, Inc.)
- Published
- 2020
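The adjusted model described in the abstract above (entry 48) is an ordinary least-squares regression of county-level COVID-19 prevalence on AA population density, controlling for median age and poverty. A minimal sketch of that model, using entirely synthetic data and hypothetical coefficients (this is not the authors' code or data):

```python
import numpy as np

# Synthetic county-level data mimicking the study's design (n=152 counties).
rng = np.random.default_rng(0)
n = 152
aa_density = rng.uniform(0, 60, n)   # % AA population (hypothetical)
median_age = rng.uniform(30, 45, n)  # years (hypothetical)
poverty = rng.uniform(5, 25, n)      # % below poverty line (hypothetical)

# Synthetic outcome: prevalence rises 0.05 points per 1% AA density,
# plus smaller contributions from the adjustment covariates and noise.
prevalence = (0.05 * aa_density + 0.02 * median_age
              + 0.03 * poverty + rng.normal(0, 0.1, n))

# Design matrix with an intercept column; fit OLS via least squares.
X = np.column_stack([np.ones(n), aa_density, median_age, poverty])
beta, *_ = np.linalg.lstsq(X, prevalence, rcond=None)
print(f"adjusted AA-density coefficient: {beta[1]:.3f}")
```

The coefficient on `aa_density` is the adjusted association the study reports; with this synthetic data it recovers the planted value of 0.05.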
49. Proceedings from the First Global Artificial Intelligence in Gastroenterology and Endoscopy Summit.
- Author
-
Parasa S, Wallace M, Bagci U, Antonino M, Berzin T, Byrne M, Celik H, Farahani K, Golding M, Gross S, Jamali V, Mendonca P, Mori Y, Ninh A, Repici A, Rex D, Skrinak K, Thakkar SJ, van Hooft JE, Vargo J, Yu H, Xu Z, and Sharma P
- Subjects
- Diagnostic Imaging, Endoscopy, Humans, Machine Learning, Artificial Intelligence, Gastroenterology
- Abstract
Background and Aims: Artificial intelligence (AI), specifically deep learning, offers the potential to enhance the field of GI endoscopy in areas ranging from lesion detection and classification to quality metrics and documentation. Progress in this field will be measured by whether AI implementation can lead to improved patient outcomes and more efficient clinical workflow for GI endoscopists. The aims of this article are to report the findings of a multidisciplinary group of experts on issues in AI research and applications related to gastroenterology and endoscopy, to review the current status of the field, and to produce recommendations for investigators developing and studying new AI technologies for gastroenterology. Methods: A multidisciplinary meeting was held on September 28, 2019, bringing together academic, industry, and regulatory experts in diverse fields, including gastroenterology, computer and imaging sciences, machine learning, and computer vision, as well as the U.S. Food and Drug Administration and the National Institutes of Health. Recent and ongoing studies in gastroenterology and current AI technology were presented and discussed, key gaps in knowledge were identified, and recommendations were made for the research that would have the highest impact in advancing and implementing AI in gastroenterology. Results: There was consensus that AI will transform the field of gastroenterology, particularly endoscopy and image interpretation. Powered by advanced machine learning algorithms, the use of computer vision in endoscopy has the potential to yield better prediction and treatment outcomes for patients with gastroenterological disorders and cancer. Large libraries of endoscopic images ("EndoNet") will be important to facilitate the development and application of AI systems. The regulatory environment for the implementation of AI systems is evolving, but common outcomes such as colon polyp detection have been highlighted as potential clinical trial endpoints. Other threshold outcomes will be important, as will clarity on the iterative improvement of clinical systems. Conclusions: Gastroenterology is a prime candidate for early adoption of AI. AI is rapidly moving from an experimental phase to a clinical implementation phase in gastroenterology. It is anticipated that the implementation of AI in gastroenterology over the next decade will have a significant and positive impact on patient care and clinical workflows. Ongoing collaboration among gastroenterologists, industry experts, and regulatory agencies will be important to ensure that progress is rapid and clinically meaningful. However, several constraints remain, and several areas will benefit from further exploration, including potential clinical applications, implementation, structure and governance, the role of gastroenterologists, and the potential impact of AI in gastroenterology. (Copyright © 2020 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.)
- Published
- 2020
50. Artificial intelligence for the detection of COVID-19 pneumonia on chest CT using multinational datasets.
- Author
-
Harmon SA, Sanford TH, Xu S, Turkbey EB, Roth H, Xu Z, Yang D, Myronenko A, Anderson V, Amalou A, Blain M, Kassin M, Long D, Varble N, Walker SM, Bagci U, Ierardi AM, Stellato E, Plensich GG, Franceschelli G, Girlando C, Irmici G, Labella D, Hammoud D, Malayeri A, Jones E, Summers RM, Choyke PL, Xu D, Flores M, Tamura K, Obinata H, Mori H, Patella F, Cariati M, Carrafiello G, An P, Wood BJ, and Turkbey B
- Subjects
- Adolescent, Adult, Aged, Aged, 80 and over, Algorithms, Betacoronavirus isolation & purification, COVID-19, COVID-19 Testing, Child, Child, Preschool, Coronavirus Infections diagnosis, Coronavirus Infections virology, Deep Learning, Female, Humans, Imaging, Three-Dimensional methods, Lung diagnostic imaging, Male, Middle Aged, Pandemics, Pneumonia, Viral virology, Radiographic Image Interpretation, Computer-Assisted methods, SARS-CoV-2, Young Adult, Artificial Intelligence, Clinical Laboratory Techniques methods, Coronavirus Infections diagnostic imaging, Pneumonia, Viral diagnostic imaging, Tomography, X-Ray Computed methods
- Abstract
Chest CT is emerging as a valuable diagnostic tool for the clinical management of COVID-19-associated lung disease. Artificial intelligence (AI) has the potential to aid in the rapid evaluation of CT scans to differentiate COVID-19 findings from other clinical entities. Here we show that a series of deep learning algorithms, trained on a diverse multinational cohort of 1280 patients to localize the parietal pleura/lung parenchyma and then classify COVID-19 pneumonia, can achieve up to 90.8% accuracy, with 84% sensitivity and 93% specificity, as evaluated on an independent test set (not included in training and validation) of 1337 patients. Normal controls included chest CTs from oncology, emergency, and pneumonia-related indications. The false positive rate in 140 patients with laboratory-confirmed other (non-COVID-19) pneumonias was 10%. AI-based algorithms can readily identify CT scans with COVID-19-associated pneumonia, as well as distinguish non-COVID-related pneumonias with high specificity in diverse patient populations.
- Published
- 2020
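The test-set metrics reported in the abstract above (entry 50: accuracy, sensitivity, specificity) are standard functions of a binary confusion matrix. A minimal sketch of how they are computed; the counts below are illustrative and are not the study's actual confusion matrix:

```python
def classification_metrics(tp, fn, tn, fp):
    """Return (accuracy, sensitivity, specificity) for a binary classifier."""
    sensitivity = tp / (tp + fn)            # true positive rate (recall)
    specificity = tn / (tn + fp)            # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical test set: 100 COVID-19-positive and 200 negative scans.
acc, sens, spec = classification_metrics(tp=84, fn=16, tn=186, fp=14)
print(f"accuracy={acc:.3f} sensitivity={sens:.3f} specificity={spec:.3f}")
# → accuracy=0.900 sensitivity=0.840 specificity=0.930
```

Note that with imbalanced classes, accuracy is dominated by the larger class, which is why the study reports sensitivity and specificity separately.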