9,472 results for "COMPUTER-AIDED DIAGNOSIS"
Search Results
2. ItpCtrl-AI: End-to-end interpretable and controllable artificial intelligence by modeling radiologists’ intentions
- Author
Pham, Trong-Thang, Brecheisen, Jacob, Wu, Carol C., Nguyen, Hien, Deng, Zhigang, Adjeroh, Donald, Doretto, Gianfranco, Choudhary, Arabinda, and Le, Ngan
- Published
- 2025
- Full Text
- View/download PDF
3. PsyneuroNet architecture for multi-class prediction of neurological disorders
- Author
Rawat, Kavita and Sharma, Trapti
- Published
- 2025
- Full Text
- View/download PDF
4. Wireless capsule endoscopy anomaly classification via dynamic multi-task learning
- Author
Li, Xingcun, Wu, Qinghua, and Wu, Kun
- Published
- 2025
- Full Text
- View/download PDF
5. FocalNeXt: A ConvNeXt augmented FocalNet architecture for lung cancer classification from CT-scan images
- Author
Gulsoy, Tolgahan and Baykal Kablan, Elif
- Published
- 2025
- Full Text
- View/download PDF
6. Computer-aided diagnosis of pituitary microadenoma on dynamic contrast-enhanced MRI based on spatio-temporal features
- Author
Guo, Te, Luan, Jixin, Gao, Jingyuan, Liu, Bing, Shen, Tianyu, Yu, Hongwei, Ma, Guolin, and Wang, Kunfeng
- Published
- 2025
- Full Text
- View/download PDF
7. A deep neural network model with spectral correlation function for electrocardiogram classification and diagnosis of atrial fibrillation
- Author
Mihandoost, Sara
- Published
- 2024
- Full Text
- View/download PDF
8. Class distance weighted cross entropy loss for classification of disease severity
- Author
Polat, Gorkem, Çağlar, Ümit Mert, and Temizel, Alptekin
- Published
- 2025
- Full Text
- View/download PDF
9. An efficient vision transformer for Alzheimer’s disease classification using magnetic resonance images
- Author
Lu, Si-Yuan, Zhang, Yu-Dong, and Yao, Yu-Dong
- Published
- 2025
- Full Text
- View/download PDF
10. Application of computer-aided diagnosis to predict malignancy in BI-RADS 3 breast lesions
- Author
He, Ping, Chen, Wen, Bai, Ming-Yu, Li, Jun, Wang, Qing-Qing, Fan, Li-Hong, Zheng, Jian, Liu, Chun-Tao, Zhang, Xiao-Rong, Yuan, Xi-Rong, Song, Peng-Jie, and Cui, Li-Gang
- Published
- 2024
- Full Text
- View/download PDF
11. YOLO and residual network for colorectal cancer cell detection and counting
- Author
Haq, Inayatul, Mazhar, Tehseen, Asif, Rizwana Naz, Ghadi, Yazeed Yasin, Ullah, Najib, Khan, Muhammad Amir, and Al-Rasheed, Amal
- Published
- 2024
- Full Text
- View/download PDF
12. Computer-aided Diagnosis of Sarcoidosis Based on CT Images
- Author
Prokop, Paweł
- Published
- 2024
- Full Text
- View/download PDF
13. Fully-automatic end-to-end approaches for 3D drusen segmentation in Optical Coherence Tomography images
- Author
Goyanes, Elena, Leyva, Saúl, Herrero, Paula, de Moura, Joaquim, Novo, Jorge, and Ortega, Marcos
- Published
- 2024
- Full Text
- View/download PDF
14. Predicting survival of Iranian COVID-19 patients infected by various variants including omicron from CT Scan images and clinical data using deep neural networks
- Author
Ghafoori, Mahyar, Hamidi, Mehrab, Modegh, Rassa Ghavami, Aziz-Ahari, Alireza, Heydari, Neda, Tavafizadeh, Zeynab, Pournik, Omid, Emdadi, Sasan, Samimi, Saeed, Mohseni, Amir, Khaleghi, Mohammadreza, Dashti, Hamed, and Rabiee, Hamid R.
- Published
- 2023
- Full Text
- View/download PDF
15. Diagnostic performance of ultrasound with computer-aided diagnostic system in detecting breast cancer
- Author
Song, Pengjie, Zhang, Li, Bai, Longmei, Wang, Qing, and Wang, Yanlei
- Published
- 2023
- Full Text
- View/download PDF
16. Computer aided diagnosis of neurodevelopmental disorders and genetic syndromes based on facial images – A systematic literature review
- Author
Rosindo Daher de Barros, Fábio, Novais F. da Silva, Caio, de Castro Michelassi, Gabriel, Brentani, Helena, Nunes, Fátima L.S., and Machado-Lima, Ariane
- Published
- 2023
- Full Text
- View/download PDF
17. MIDRC-MetricTree: a decision tree-based tool for recommending performance metrics in artificial intelligence-assisted medical image analysis.
- Author
Drukker, Karen, Sahiner, Berkman, Hu, Tingting, Kim, Grace Hyun, Whitney, Heather M, Baughan, Natalie, Myers, Kyle J, Giger, Maryellen L, and McNitt-Gray, Michael
- Subjects
artificial intelligence, computer-aided diagnosis, machine learning, performance evaluation, Clinical sciences, Biomedical engineering - Abstract
PURPOSE: The Medical Imaging and Data Resource Center (MIDRC) was created to facilitate medical imaging machine learning (ML) research for tasks including early detection, diagnosis, prognosis, and assessment of treatment response related to the coronavirus disease 2019 pandemic and beyond. The purpose of this work was to create a publicly available metrology resource to assist researchers in evaluating the performance of their medical image analysis ML algorithms. APPROACH: An interactive decision tree, called MIDRC-MetricTree, has been developed, organized by the type of task that the ML algorithm was trained to perform. The criteria for this decision tree were that (1) users can select information such as the type of task, the nature of the reference standard, and the type of the algorithm output and (2) based on the user input, recommendations are provided regarding appropriate performance evaluation approaches and metrics, including literature references and, when possible, links to publicly available software/code as well as short tutorial videos. RESULTS: Five types of tasks were identified for the decision tree: (a) classification, (b) detection/localization, (c) segmentation, (d) time-to-event (TTE) analysis, and (e) estimation. As an example, the classification branch of the decision tree includes two-class (binary) and multiclass classification tasks and provides suggestions for methods, metrics, software/code recommendations, and literature references for situations where the algorithm produces either binary or non-binary (e.g., continuous) output and for reference standards with negligible or non-negligible variability and unreliability. CONCLUSIONS: The publicly available decision tree is a resource to assist researchers in conducting task-specific performance evaluations, including classification, detection/localization, segmentation, TTE, and estimation tasks.
- Published
- 2024
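The task-to-metric mapping idea described in entry 17 can be illustrated with a small lookup sketch. The function name, task keys, and metric lists below are our own illustration of the concept, not the actual MIDRC-MetricTree logic or output.

```python
# Hypothetical sketch of a task-driven metric lookup, loosely mirroring the
# decision-tree idea in the abstract (mapping and names are ours).
METRICS_BY_TASK = {
    ("classification", "binary"): ["ROC AUC", "sensitivity/specificity"],
    ("classification", "multiclass"): ["macro F1", "confusion matrix"],
    ("detection", "any"): ["FROC", "average precision"],
    ("segmentation", "any"): ["Dice coefficient", "Hausdorff distance"],
    ("time-to-event", "any"): ["concordance index (C-index)"],
    ("estimation", "any"): ["mean absolute error", "Bland-Altman analysis"],
}

def recommend_metrics(task: str, output_type: str = "any") -> list[str]:
    """Return suggested performance metrics for a given ML task type."""
    key = (task, output_type)
    if key in METRICS_BY_TASK:
        return METRICS_BY_TASK[key]
    # Fall back to the task-level entry; unknown tasks yield no suggestions.
    return METRICS_BY_TASK.get((task, "any"), [])
```

The real tool additionally branches on the nature of the reference standard and links out to literature, software, and tutorials.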
18. Intelligent mask image reconstruction for cardiac image segmentation through local–global fusion.
- Author
Boukhamla, Assia, Azizi, Nabiha, and Belhaouari, Samir Brahim
- Subjects
CARDIAC magnetic resonance imaging, COMPUTER-aided diagnosis, CARDIOVASCULAR disease diagnosis, TRANSFORMER models, IMAGE processing - Abstract
Accurate segmentation of cardiac structures in magnetic resonance imaging (MRI) is essential for reliable diagnosis and management of cardiovascular disease. Although numerous robust models have been proposed, no single segmentation model consistently outperforms others across all cases, and models that excel on one dataset may not achieve similar accuracy on others or when the same dataset is expanded. This study introduces FCTransNet, an ensemble-based computer-aided diagnosis system that leverages the complementary strengths of Vision Transformer (ViT) models (specifically TransUNet, SwinUNet, and SegFormer) to address these challenges. To achieve this, we propose a novel pixel-level fusion technique, the Intelligent Weighted Summation Technique (IWST), which reconstructs the final segmentation mask by integrating the outputs of the ViT models and accounting for their diversity. First, a dedicated U-Net module isolates the region of interest (ROI) from cine MRI images, which is then processed by each ViT to generate preliminary segmentation masks. The IWST subsequently fuses these masks to produce a refined final segmentation. By using a local window around each pixel, IWST captures specific neighborhood details while incorporating global context to enhance segmentation accuracy. Experimental validation on the ACDC dataset shows that FCTransNet significantly outperforms individual ViTs and other deep learning-based methods, achieving a Dice Score (DSC) of 0.985 and a mean Intersection over Union (IoU) of 0.914 in the end-diastolic phase. In addition, FCTransNet maintains high accuracy in the end-systolic phase with a DSC of 0.989 and an IoU of 0.908. These results underscore FCTransNet's ability to improve cardiac MRI segmentation accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
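Entry 18's pixel-level fusion of several models' masks can be sketched as a weighted vote. This is a simplified stand-in (the function name and fixed per-model weights are ours): the paper's IWST instead derives weights from a local window around each pixel plus global context, which is not reproduced here.

```python
import numpy as np

def fuse_masks(masks: list[np.ndarray], weights: list[float]) -> np.ndarray:
    """Pixel-level weighted fusion of binary segmentation masks.

    Each model's mask votes with a fixed weight; the fused mask keeps
    pixels whose weighted vote exceeds half the total weight.
    """
    stacked = np.stack([m.astype(float) for m in masks])   # (n_models, H, W)
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    votes = (stacked * w).sum(axis=0)                      # weighted vote map
    return (votes > w.sum() / 2).astype(np.uint8)
```

With equal weights this reduces to a plain majority vote over the ensemble members.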
19. Needle tracking and segmentation in breast ultrasound imaging based on spatio-temporal memory network.
- Author
Zhang, Qiyun, Chen, Jiawei, Wang, Jinhong, Wang, Haolin, He, Yi, Li, Bin, Zhuang, Zhemin, and Zeng, Huancheng
- Subjects
BREAST biopsy, OPTICAL flow, ULTRASONIC imaging, NEEDLE biopsy, CANCER diagnosis - Abstract
Introduction: Ultrasound-guided needle biopsy is a commonly employed technique in modern medicine for obtaining tissue samples, such as those from breast tumors, for pathological analysis. However, it is limited by the low signal-to-noise ratio and the complex background of breast ultrasound imaging. In order to assist physicians in accurately performing needle biopsies on pathological tissues, minimize complications, and avoid damage to surrounding tissues, computer-aided needle segmentation and tracking has garnered increasing attention, with notable progress made in recent years. Nevertheless, challenges remain, including poor ultrasound image quality, high computational resource requirements, and various needle shapes. Methods: This study introduces a novel Spatio-Temporal Memory Network designed for ultrasound-guided breast tumor biopsy. The proposed network integrates a hybrid encoder that employs CNN-Transformer architectures, along with an optical flow estimation method. From the Ultrasound Imaging Department at the First Affiliated Hospital of Shantou University, we developed a real-time segmentation dataset specifically designed for ultrasound-guided needle puncture procedures in breast tumors, which includes ultrasound biopsy video data collected from 11 patients. Results: Experimental results demonstrate that this model significantly outperforms existing methods in improving the positioning accuracy of the needle and enhancing the tracking stability. Specifically, the performance metrics of the proposed model are as follows: IoU is 0.731, Dice is 0.817, Precision is 0.863, Recall is 0.803, and F1 score is 0.832. By advancing the precision of needle localization, this model contributes to enhanced reliability in ultrasound-guided breast tumor biopsy, ultimately supporting safer and more effective clinical outcomes.
Discussion: The model proposed in this paper demonstrates robust performance in the computer-aided tracking and segmentation of biopsy needles in ultrasound imaging, specifically for ultrasound-guided breast tumor biopsy, offering dependable technical support for clinical procedures. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
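The IoU and Dice figures reported in entry 19 follow the standard overlap definitions, which a short helper makes concrete (the helper is ours, not the paper's code):

```python
import numpy as np

def iou_dice(pred: np.ndarray, target: np.ndarray) -> tuple[float, float]:
    """IoU = |A∩B| / |A∪B|; Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = inter / union if union else 1.0      # empty masks count as a match
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)
```

Dice is always at least as large as IoU for the same pair of masks, which is consistent with the abstract's 0.817 Dice vs. 0.731 IoU.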
20. A quantum-optimized approach for breast cancer detection using SqueezeNet-SVM.
- Author
Bilal, Anas, Alkhathlan, Ali, Kateb, Faris A., Tahir, Alishba, Shafiq, Muhammad, and Long, Haixia
- Subjects
GREY Wolf Optimizer algorithm, COMPUTER-aided diagnosis, MEDICAL sciences, IMAGE analysis, SUPPORT vector machines - Abstract
Breast cancer is one of the most aggressive types of cancer, and its early diagnosis is crucial for reducing mortality rates and ensuring timely treatment. Computer-aided diagnosis systems provide automated mammography image processing, interpretation, and grading. However, since the currently existing methods suffer from such issues as overfitting, lack of adaptability, and dependence on massive annotated datasets, the present work introduces a hybrid approach to enhance breast cancer classification accuracy. The proposed Q-BGWO-SQSVM approach utilizes an improved quantum-inspired binary Grey Wolf Optimizer and combines it with SqueezeNet and Support Vector Machines to exhibit sophisticated performance. SqueezeNet's fire modules and complex bypass mechanisms extract distinct features from mammography images. Then, these features are optimized by the Q-BGWO for determining the best SVM parameters. Since the current CAD system is more reliable, accurate, and sensitive, its application is advantageous for healthcare. The proposed Q-BGWO-SQSVM was evaluated using diverse databases: MIAS, INbreast, DDSM, and CBIS-DDSM, analyzing its performance regarding accuracy, sensitivity, specificity, precision, F1 score, and MCC. Notably, on the CBIS-DDSM dataset, the Q-BGWO-SQSVM achieved remarkable results at 99% accuracy, 98% sensitivity, and 100% specificity in 15-fold cross-validation. Finally, it can be observed that the performance of the designed Q-BGWO-SQSVM model is excellent, and its potential realization in other datasets and imaging conditions is promising. The novel Q-BGWO-SQSVM model outperforms the state-of-the-art classification methods and offers accurate and reliable early breast cancer detection, which is essential for further healthcare development. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
21. Classification of CT scan and X-ray dataset based on deep learning and particle swarm optimization.
- Author
Liu, Honghua, Zhao, Mingwei, She, Chang, Peng, Han, Liu, Mailan, and Li, Bo
- Subjects
FEATURE extraction, COMPUTER-aided diagnosis, COMPUTED tomography, PARTICLE swarm optimization, IMAGE processing - Abstract
In 2019, the novel coronavirus swept the world, exposing the monitoring and early warning problems of the medical system. Computer-aided diagnosis models based on deep learning have good universality and can well alleviate these problems. However, traditional image processing methods may lead to high false positive rates, which is unacceptable in disease monitoring and early warning. This paper proposes a low false positive rate disease detection method based on COVID-19 lung images and establishes a two-stage optimization model. In the first stage, the model is trained using classical gradient descent, and relevant features are extracted; in the second stage, an objective function that minimizes the false positive rate is constructed to obtain a network model with high accuracy and low false positive rate. Therefore, the proposed method has the potential to effectively classify medical images. The proposed model was verified using a public COVID-19 radiology dataset and a public COVID-19 lung CT scan dataset. The results show that the model has made significant progress, with the false positive rate reduced to 11.3% and 7.5%, and the area under the ROC curve increased to 92.8% and 97.01%. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
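Entry 21's second-stage objective, which minimizes the false positive rate, might be rendered schematically as a cross-entropy term plus an explicit FPR penalty. The exact objective, the penalty weight, and the threshold below are our assumptions, not the paper's formulation.

```python
import numpy as np

def fpr_penalized_loss(y_true, y_prob, alpha=2.0, thresh=0.5):
    """Toy two-term objective: binary cross-entropy + alpha * false positive rate.

    Returns (loss, fpr). A schematic reading of the abstract's second
    training stage; alpha trades accuracy against false alarms.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.clip(np.asarray(y_prob, dtype=float), 1e-7, 1 - 1e-7)
    ce = -np.mean(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))
    pred = (y_prob >= thresh).astype(float)
    neg = y_true == 0
    fpr = pred[neg].mean() if neg.any() else 0.0   # false alarms among negatives
    return ce + alpha * fpr, fpr
```

In practice the hard threshold makes the FPR term non-differentiable, so a real implementation would use a smooth surrogate; this sketch only shows the shape of the trade-off.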
22. Alzheimer's disease diagnosis by 3D-SEConvNeXt.
- Author
Hu, Zhongyi, Wang, Yuhang, and Xiao, Lei
- Subjects
CONVOLUTIONAL neural networks, COMPUTER-aided diagnosis, IMAGE recognition (Computer vision), ALZHEIMER'S disease, MAGNETIC resonance imaging - Abstract
Alzheimer's disease (AD) constitutes a fatal neurodegenerative disorder and represents the most prevalent form of dementia among the elderly population. Traditional manual AD classification methods, such as clinical diagnosis, are known to be time-consuming and labor-intensive, with relatively low accuracy. Therefore, our work aims to develop a new deep learning framework to tackle this challenge. Our proposed model integrates ConvNeXt with three-dimensional (3D) convolution and incorporates a 3D Squeeze-and-Excitation (3D-SE) attention mechanism to enhance early classification of AD. The experimental data is sourced from the publicly accessible Alzheimer's disease Neuroimaging Initiative (ADNI) database, with raw Magnetic Resonance Imaging (MRI) data preprocessed using SPM12 software. Subsequently, the preprocessed data is input into the 3D-SEConvNeXt network to perform four classification tasks: distinguishing between AD and Normal Control (NC), Mild Cognitive Impairment (MCI) and NC, AD and MCI, as well as AD, MCI, and NC. The experimental results indicate that the 3D-SEConvNeXt model consistently outperforms alternative models in terms of accuracy, achieving commendable outcomes in early AD diagnostic tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
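The 3D squeeze-and-excitation gating mentioned in entry 22 follows the generic SE recipe: global-average-pool each channel over its spatial volume, pass the pooled vector through a small bottleneck, and rescale the channels by sigmoid weights. A minimal NumPy sketch (plain weight arrays and layer sizes are our simplification; the paper's exact configuration is not given in the abstract):

```python
import numpy as np

def se_gate_3d(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Minimal 3D squeeze-and-excitation.

    x: feature map (C, D, H, W); w1: (C, C//r); w2: (C//r, C), where r is
    the reduction ratio of the bottleneck.
    """
    squeeze = x.mean(axis=(1, 2, 3))             # global average pool -> (C,)
    hidden = np.maximum(squeeze @ w1, 0.0)       # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))  # sigmoid channel weights
    return x * gate[:, None, None, None]         # rescale each channel
```

The gate learns which channels carry diagnostically useful signal; everything else is attenuated before the next convolutional stage.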
23. An exploration of distinguishing subjective cognitive decline and mild cognitive impairment based on resting-state prefrontal functional connectivity assessed by functional near-infrared spectroscopy.
- Author
Pu, Zhengping, Huang, Hongna, Li, Man, Li, Hongyan, Shen, Xiaoyan, Wu, Qingfeng, Ni, Qin, Lin, Yong, and Cui, Donghong
- Subjects
COGNITION disorders diagnosis, CROSS-sectional method, MILD cognitive impairment, FUNCTIONAL connectivity, RESEARCH funding, LOGISTIC regression analysis, NEAR infrared spectroscopy, LONGITUDINAL method, SUPPORT vector machines, NEUROPSYCHOLOGICAL tests, COMPUTER-aided diagnosis, MACHINE learning, COMPARATIVE studies, CONFIDENCE intervals, SENSITIVITY & specificity (Statistics), DISCRIMINANT analysis - Abstract
Purpose: Functional near-infrared spectroscopy (fNIRS) has shown feasibility in evaluating cognitive function and brain functional connectivity (FC). Therefore, this fNIRS study aimed to develop a screening method for subjective cognitive decline (SCD) and mild cognitive impairment (MCI) based on resting-state prefrontal FC and neuropsychological tests via machine learning. Methods: Functional connectivity data measured by fNIRS were collected from 55 normal controls (NCs), 80 SCD individuals, and 111 MCI individuals. Differences in FC were analyzed among the groups. FC strength and neuropsychological test scores were extracted as features to build classification and predictive models through machine learning. Model performance was assessed based on accuracy, specificity, sensitivity, and area under the curve (AUC) with 95% confidence interval (CI) values. Results: Statistical analysis revealed a trend toward compensatory enhanced prefrontal FC in SCD and MCI individuals. The models showed a satisfactory ability to differentiate among the three groups, especially those employing linear discriminant analysis, logistic regression, and support vector machine. Accuracies of 94.9% for MCI vs. NC, 79.4% for MCI vs. SCD, and 77.0% for SCD vs. NC were achieved, and the highest AUC values were 97.5% (95% CI: 95.0%–100.0%) for MCI vs. NC, 83.7% (95% CI: 77.5%–89.8%) for MCI vs. SCD, and 80.6% (95% CI: 72.7%–88.4%) for SCD vs. NC. Conclusion: The developed screening method based on resting-state prefrontal FC measured by fNIRS and machine learning may help predict early-stage cognitive impairment. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
24. Artificial Intelligence for Adenoma and Polyp Detection During Screening and Surveillance Colonoscopy: A Randomized-Controlled Trial.
- Author
Alali, Ali A., Alhashmi, Ahmad, Alotaibi, Nawal, Ali, Nargess, Alali, Maryam, and Alfadhli, Ahmad
- Subjects
COMPUTER-aided diagnosis, ADENOMATOUS polyps, INFLAMMATORY bowel diseases, ADENOMA, COLON cancer - Abstract
Background: Colorectal cancer (CRC) is the second leading cause of cancer death in Kuwait. The effectiveness of colonoscopy in preventing CRC is dependent on a high adenoma detection rate (ADR). Computer-aided detection (CADe) can identify and characterize polyps in real time and differentiate benign from neoplastic polyps, but its role remains unclear in screening colonoscopy. Methods: This was a randomized-controlled trial (RCT) enrolling patients 45 years of age or older presenting for outpatient screening or surveillance colonoscopy (Kuwait clinical trial registration number 2047/2022). Patients with a history of inflammatory bowel disease, alarm symptoms, familial polyposis syndrome, colon resection, or poor bowel preparation were excluded. Patients were randomly assigned to either high-definition white-light (HD-WL) colonoscopy (standard of care) or HD-WL colonoscopy with the CADe system. The primary outcome was ADR. The secondary outcomes included polyp detection rate (PDR), adenoma per colonoscopy (APC), polyp per colonoscopy (PPC), and accuracy of polyp characterization. Results: From 1 September 2022 to 1 March 2023, 102 patients were included and allocated to either the HD-WL colonoscopy group (n = 51) or CADe group (n = 51). The mean age was 52.8 years (SD 8.2), and males represented 50% of the cohort. Screening for CRC accounted for 94.1% of all examinations, while the remaining patients underwent surveillance colonoscopy. A total of 121 polyps were detected with an average size of 4.18 mm (SD 5.1), the majority being tubular adenomas with low-grade dysplasia (47.1%) and hyperplastic polyps (46.3%). There was no difference in the overall bowel preparation, insertion and withdrawal times, and adverse events between the two arms. ADR (primary outcome) was non-significantly higher in the CADe group compared to the HD colonoscopy group (47.1% vs. 37.3%, p = 0.3). Among the secondary outcomes, PDR (78.4% vs. 56.8%, p = 0.02) and PPC (1.35 vs. 0.96, p = 0.04) were significantly higher in the CADe group, but APC was not (0.75 vs. 0.51, p = 0.09). Accuracy in characterizing polyp histology was similar in both groups. Conclusions: In this RCT, the artificial intelligence system showed a non-significant trend towards improving ADR among Kuwaiti patients undergoing screening or surveillance colonoscopy compared to HD-WL colonoscopy alone, while it significantly improved the detection of diminutive polyps. A larger multicenter study is required to detect the true effect of CADe on the detection of adenomas. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
25. Deep Transfer Learning for Classification of Late Gadolinium Enhancement Cardiac MRI Images into Myocardial Infarction, Myocarditis, and Healthy Classes: Comparison with Subjective Visual Evaluation.
- Author
Ben Khalifa, Amani, Mili, Manel, Maatouk, Mezri, Ben Abdallah, Asma, Abdellali, Mabrouk, Gaied, Sofiene, Ben Ali, Azza, Lahouel, Yassir, Bedoui, Mohamed Hedi, and Zrig, Ahmed
- Subjects
COMPUTER-aided diagnosis, CARDIAC magnetic resonance imaging, MAGNETIC resonance imaging, MYOCARDIAL infarction, CARDIAC imaging - Abstract
Background/Objectives: To develop a computer-aided diagnosis (CAD) method for the classification of late gadolinium enhancement (LGE) cardiac MRI images into myocardial infarction (MI), myocarditis, and healthy classes using a fine-tuned VGG16 model hybridized with multi-layer perceptron (MLP) (VGG16-MLP) and assess our model's performance in comparison to various pre-trained base models and MRI readers. Methods: This study included 361 LGE images for MI, 222 for myocarditis, and 254 for the healthy class. The left ventricle was extracted automatically using a U-net segmentation model on LGE images. Fine-tuned VGG16 was performed for feature extraction. A spatial attention mechanism was implemented as a part of the neural network architecture. The MLP architecture was used for the classification. The evaluation metrics were calculated using a separate test set. To compare the VGG16 model's performance in feature extraction, various pre-trained base models were evaluated: VGG19, DenseNet121, DenseNet201, MobileNet, InceptionV3, and InceptionResNetV2. The Support Vector Machine (SVM) classifier was evaluated and compared to MLP for the classification task. The performance of the VGG16-MLP model was compared with a subjective visual analysis conducted by two blinded independent readers. Results: The VGG16-MLP model allowed high-performance differentiation between MI, myocarditis, and healthy LGE cardiac MRI images. It outperformed the other tested models with 96% accuracy, 97% precision, 96% sensitivity, and 96% F1-score. Our model surpassed the accuracy of Reader 1 by 27% and Reader 2 by 17%. Conclusions: Our study demonstrated that the VGG16-MLP model permits accurate classification of MI, myocarditis, and healthy LGE cardiac MRI images and could be considered a reliable computer-aided diagnosis approach specifically for radiologists with limited experience in cardiovascular imaging. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
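The accuracy, precision, sensitivity, and F1 figures in entry 25 come from the standard confusion-matrix definitions. A per-class helper (ours; the paper's three-class setting averages these over classes):

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int):
    """Accuracy, precision, sensitivity (recall), and F1 from a binary
    confusion matrix for one class-vs-rest split."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return accuracy, precision, sensitivity, f1
```

F1 is the harmonic mean of precision and sensitivity, so when the two are equal (as in the abstract's roughly balanced 96-97% figures) F1 lands at the same value.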
26. Explainable AI in Diagnostic Radiology for Neurological Disorders: A Systematic Review, and What Doctors Think About It.
- Author
Hafeez, Yasir, Memon, Khuhed, AL-Quraishi, Maged S., Yahya, Norashikin, Elferik, Sami, and Ali, Syed Saad Azhar
- Subjects
COMPUTER-aided diagnosis, MEDICAL personnel, POSITRON emission tomography, MAGNETIC resonance imaging, ARTIFICIAL intelligence - Abstract
Background: Artificial intelligence (AI) has recently made unprecedented contributions in every walk of life, but it has not been able to work its way into diagnostic medicine and standard clinical practice yet. Although data scientists, researchers, and medical experts have been working in the direction of designing and developing computer aided diagnosis (CAD) tools to serve as assistants to doctors, their large-scale adoption and integration into the healthcare system still seems far-fetched. Diagnostic radiology is no exception. Imaging techniques like magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET) scans have been widely and very effectively employed by radiologists and neurologists for the differential diagnoses of neurological disorders for decades, yet no AI-powered systems to analyze such scans have been incorporated into the standard operating procedures of healthcare systems. Why? It is absolutely understandable that in diagnostic medicine, precious human lives are on the line, and hence there is no room even for the tiniest of mistakes. Nevertheless, with the advent of explainable artificial intelligence (XAI), the old-school black boxes of deep learning (DL) systems have been unraveled. Would XAI be the turning point for medical experts to finally embrace AI in diagnostic radiology? This review is a humble endeavor to find the answers to these questions. Methods: In this review, we present the journey and contributions of AI in developing systems to recognize, preprocess, and analyze brain MRI scans for differential diagnoses of various neurological disorders, with special emphasis on CAD systems embedded with explainability. A comprehensive review of the literature from 2017 to 2024 was conducted using host databases.
We also present medical domain experts' opinions and summarize the challenges up ahead that need to be addressed in order to fully exploit the tremendous potential of XAI in its application to medical diagnostics and serve humanity. Results: Forty-seven studies were summarized and tabulated with information about the XAI technology and datasets employed, along with performance accuracies. The strengths and weaknesses of the studies have also been discussed. In addition, the opinions of seven medical experts from around the world have been presented to guide engineers and data scientists in developing such CAD tools. Conclusions: Current CAD research was observed to be focused on the enhancement of the performance accuracies of the DL regimens, with less attention being paid to the authenticity and usefulness of explanations. A shortage of ground truth data for explainability was also observed. Visual explanation methods were found to dominate; however, they might not be enough, and more thorough and human professor-like explanations would be required to build the trust of healthcare professionals. Special attention to these factors along with the legal, ethical, safety, and security issues can bridge the current gap between XAI and routine clinical practice. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
27. Artificial intelligence performance in ultrasound-based lymph node diagnosis: a systematic review and meta-analysis.
- Author
Han, Xinyang, Qu, Jingguo, Chui, Man-Lik, Gunda, Simon Takadiyi, Chen, Ziman, Qin, Jing, King, Ann Dorothy, Chu, Winnie Chiu-Wing, Cai, Jing, and Ying, Michael Tin-Cheung
- Subjects
CLINICAL decision support systems, COMPUTER-aided diagnosis, MACHINE learning, ARTIFICIAL intelligence, LYMPH nodes - Abstract
Background and objectives: Accurate classification of lymphadenopathy is essential for determining the pathological nature of lymph nodes (LNs), which plays a crucial role in treatment selection. The biopsy method is invasive and carries the risk of sampling failure, while the utilization of non-invasive approaches such as ultrasound can minimize the probability of iatrogenic injury and infection. With the advancement of artificial intelligence (AI) and machine learning, the diagnostic efficiency of LNs is further enhanced. This study evaluates the performance of ultrasound-based AI applications in the classification of benign and malignant LNs. Methods: The literature search was conducted using the PubMed, EMBASE, and Cochrane Library databases as of June 2024. The quality of the included studies was evaluated using the QUADAS-2 tool. The pooled sensitivity, specificity, and diagnostic odds ratio (DOR) were calculated to assess the diagnostic efficacy of ultrasound-based AI in classifying benign and malignant LNs. Subgroup analyses were also conducted to identify potential sources of heterogeneity. Results: A total of 1,355 studies were identified and reviewed. Among these studies, 19 studies met the inclusion criteria, and 2,354 cases were included in the analysis. The pooled sensitivity, specificity, and DOR of ultrasound-based machine learning in classifying benign and malignant LNs were 0.836 (95% CI [0.805, 0.863]), 0.850 (95% CI [0.805, 0.886]), and 33.331 (95% CI [22.873, 48.57]), respectively, indicating no publication bias (p = 0.12). Subgroup analyses may suggest that the location of lymph nodes, validation methods, and type of primary tumor are the sources of heterogeneity. Conclusion: AI can accurately differentiate benign from malignant LNs.
Given the widespread use of ultrasonography in diagnosing malignant LNs in cancer patients, there is significant potential for integrating AI-based decision support systems into clinical practice to enhance the diagnostic accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
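Entry 27's diagnostic odds ratio relates to sensitivity and specificity through a standard single-study identity, sketched below. Note that the abstract's pooled DOR of 33.331 comes from a meta-analytic model, so plugging the pooled sensitivity and specificity into this formula will not reproduce it exactly.

```python
def diagnostic_odds_ratio(sensitivity: float, specificity: float) -> float:
    """DOR = (sens / (1 - sens)) / ((1 - spec) / spec).

    The odds of a positive test in the diseased group divided by the odds
    of a positive test in the non-diseased group; DOR = 1 means the test
    is uninformative.
    """
    return (sensitivity / (1.0 - sensitivity)) / ((1.0 - specificity) / specificity)
```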
28. Advancements in Obstructive Sleep Apnea Diagnosis and Screening Through Artificial Intelligence: A Systematic Review.
- Author
Giorgi, Lucrezia, Nardelli, Domiziana, Moffa, Antonio, Iafrati, Francesco, Di Giovanni, Simone, Olszewska, Ewa, Baptista, Peter, Sabatino, Lorenzo, and Casale, Manuele
- Subjects
RECEIVER operating characteristic curves, LOGISTIC regression analysis, EVALUATION of medical care, DESCRIPTIVE statistics, SYSTEMATIC reviews, MEDLINE, SLEEP apnea syndromes, DETECTION algorithms, COMPUTER-aided diagnosis, INTRACLASS correlation, MEDICAL screening, MACHINE learning, EARLY diagnosis, ONLINE information services, QUALITY assurance, ALGORITHMS, SENSITIVITY & specificity (Statistics) - Abstract
Background: Obstructive sleep apnea (OSA) is a prevalent yet underdiagnosed condition associated with a major healthcare burden. Current diagnostic tools, such as full-night polysomnography (PSG), limit accessibility to diagnosis due to their elevated costs. Recent advances in Artificial Intelligence (AI), including Machine Learning (ML) and deep learning (DL) algorithms, offer novel potential tools for accurate OSA screening and diagnosis. This systematic review evaluates articles employing AI-powered models for OSA screening and diagnosis in the last decade. Methods: A comprehensive electronic search was performed on PubMed/MEDLINE, Google Scholar, and SCOPUS databases. The included studies were original articles written in English, reporting the use of ML algorithms to diagnose and predict OSA in suspected patients. The last search was performed in June 2024. This systematic review is registered in PROSPERO (Registration ID: CRD42024563059). Results: Sixty-five articles, involving data from 109,046 patients, met the inclusion criteria. Due to the heterogeneity of the algorithms, outcomes were analyzed in six sections (anthropometric indexes, imaging, electrocardiographic signals, respiratory signals, and oximetry and miscellaneous signals). AI algorithms demonstrated significant improvements in OSA detection, with accuracy, sensitivity, and specificity often exceeding traditional tools. In particular, anthropometric indexes were most widely used, especially in logistic regression-powered algorithms. Conclusions: The application of AI algorithms to OSA diagnosis and screening has great potential to improve patient outcomes, increase early detection, and lessen the load on healthcare systems. However, rigorous validation and standardization efforts must be made to standardize datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
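The accuracy, sensitivity, and specificity figures that studies like those reviewed above report all derive from the same confusion-matrix counts. A minimal sketch (the patient counts below are hypothetical, not taken from any reviewed study):

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics used to compare OSA screeners."""
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical screening results on 200 suspected-OSA patients.
sens, spec, acc = screening_metrics(tp=85, fp=12, tn=88, fn=15)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} accuracy={acc:.3f}")
```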
29. A Radiograph Dataset for the Classification, Localization, and Segmentation of Primary Bone Tumors.
- Author
-
Yao, Shunhan, Huang, Yuanxiang, Wang, Xiaoyu, Zhang, Yiwen, Paixao, Ian Costa, Wang, Zhikang, Chai, Charla Lu, Wang, Hongtao, Lu, Dinggui, Webb, Geoffrey I, Li, Shanshan, Guo, Yuming, Chen, Qingfeng, and Song, Jiangning
- Subjects
MACHINE learning ,COMPUTER-aided diagnosis ,DEEP learning ,MEDICAL sciences ,CANCER-related mortality - Abstract
Primary malignant bone tumors are the third highest cause of cancer-related mortality among patients under the age of 20. X-ray scan is the primary tool for detecting bone tumors. However, due to the varying morphologies of bone tumors, it is challenging for radiologists to make a definitive diagnosis based on radiographs. With the recent advancement in deep learning algorithms, there is a surge of interest in computer-aided diagnosis of primary bone tumors. Nonetheless, the development in this field has been hindered by the lack of publicly available X-ray datasets for bone tumors. To tackle this challenge, we established the Bone Tumor X-ray Radiograph dataset (termed BTXRD) in collaboration with multiple medical institutes and hospitals. The BTXRD dataset comprises 3,746 bone images (1,879 normal and 1,867 tumor), with clinical information and global labels available for each image, and distinct mask and annotated bounding box for each tumor instance. This publicly available dataset can support the development and evaluation of deep learning algorithms for the diagnosis of primary bone tumors. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
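BTXRD provides both a segmentation mask and an annotated bounding box per tumor instance; a box of this kind is recoverable from a binary mask. A minimal sketch (toy mask, not BTXRD data):

```python
def mask_to_bbox(mask):
    """Derive an (x_min, y_min, x_max, y_max) box from a binary mask
    given as a list of rows; returns None for an all-background mask."""
    coords = [(x, y) for y, row in enumerate(mask)
                     for x, v in enumerate(row) if v]
    if not coords:
        return None
    xs, ys = zip(*coords)
    return (min(xs), min(ys), max(xs), max(ys))

# Toy 5x5 mask with a small "tumor" blob.
mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 0, 0, 0]]
print(mask_to_bbox(mask))  # -> (1, 1, 3, 3)
```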
30. Effectiveness of a novel artificial intelligence-assisted colonoscopy system for adenoma detection: a prospective, propensity score-matched, non-randomized controlled study in Korea.
- Author
-
Park, Jung-Bin and Bae, Jung Ho
- Subjects
- *
COMPUTER-aided diagnosis , *ADENOMA , *ARTIFICIAL intelligence , *TERTIARY care , *COLONOSCOPY - Abstract
Background/Aims: The real-world effectiveness of computer-aided detection (CADe) systems during colonoscopies remains uncertain. We assessed the effectiveness of the novel CADe system, ENdoscopy as AI-powered Device (ENAD), in enhancing the adenoma detection rate (ADR) and other quality indicators in real-world clinical practice. Methods: We enrolled patients who underwent elective colonoscopies between May 2022 and October 2022 at a tertiary healthcare center. Standard colonoscopy (SC) was compared to ENAD-assisted colonoscopy. Eight experienced endoscopists performed the procedures in randomly assigned CADe- and non-CADe-assisted rooms. The primary outcome was a comparison of ADR between the ENAD and SC groups. Results: A total of 1,758 sex- and age-matched patients were included and evenly distributed into two groups. The ENAD group had a significantly higher ADR (45.1% vs. 38.8%, p=0.010), higher sessile serrated lesion detection rate (SSLDR) (5.7% vs. 2.5%, p=0.001), higher mean number of adenomas per colonoscopy (APC) (0.78±1.17 vs. 0.61±0.99; incidence risk ratio, 1.27; 95% confidence interval, 1.13–1.42), and longer withdrawal time (9.0±3.4 vs. 8.3±3.1, p<0.001) than the SC group. However, the mean withdrawal times were not significantly different between the two groups in cases where no polyps were detected (6.9±1.7 vs. 6.7±1.7, p=0.058). Conclusions: ENAD-assisted colonoscopy significantly improved the ADR, APC, and SSLDR in real-world clinical practice, particularly for smaller and nonpolypoid adenomas. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
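The incidence risk ratio for adenomas per colonoscopy reported above can be reproduced from the published group means. A sketch in plain arithmetic (the abstract's 1.27 was presumably computed from unrounded means, so the rounded inputs give 1.28):

```python
# Reported group means from the abstract (rounded as published).
apc_enad, apc_sc = 0.78, 0.61      # adenomas per colonoscopy
adr_enad, adr_sc = 45.1, 38.8      # adenoma detection rate, %

irr = apc_enad / apc_sc            # incidence risk ratio for APC
adr_gain = adr_enad - adr_sc       # absolute ADR gain, percentage points
print(f"IRR ~ {irr:.2f}, ADR gain = {adr_gain:.1f} points")
```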
31. Ischemic Stroke Lesion Segmentation on Multiparametric CT Perfusion Maps Using Deep Neural Network.
- Author
-
Kandpal, Ankit, Gupta, Rakesh Kumar, and Singh, Anup
- Subjects
- *
COMPUTER-aided diagnosis , *ARTIFICIAL neural networks , *IMAGE segmentation , *ISCHEMIC stroke , *COMPUTED tomography - Abstract
Background: Accurate delineation of lesions in acute ischemic stroke is important for determining the extent of tissue damage and for identifying potentially salvageable brain tissue. Automatic segmentation on CT images is challenging due to the poor contrast-to-noise ratio. Quantitative CT perfusion images improve the estimation of the perfusion deficit regions; however, they are limited by a poor signal-to-noise ratio. The study aims to investigate the potential of deep learning (DL) algorithms for the improved segmentation of ischemic lesions. Methods: This study proposes a novel DL architecture, DenseResU-NetCTPSS, for stroke segmentation using multiparametric CT perfusion images. The proposed network is benchmarked against state-of-the-art DL models. Its performance is assessed using the ISLES-2018 challenge dataset, a widely recognized dataset for stroke segmentation in CT images. The proposed network was evaluated on both training and test datasets. Results: The final optimized network takes three image sequences, namely CT, cerebral blood volume (CBV), and time to max (Tmax), as input to perform segmentation. The network achieved Dice scores of 0.65 ± 0.19 and 0.45 ± 0.32 on the training and testing datasets, respectively. The model demonstrated a notable improvement over existing state-of-the-art DL models. Conclusions: The optimized model combines CT, CBV, and Tmax images, enabling automatic lesion identification with reasonable accuracy and aiding radiologists in faster, more objective assessments. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
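The Dice score used to evaluate segmentation above is the standard overlap measure between a predicted and a reference mask. A minimal sketch on toy flattened binary masks:

```python
def dice(pred, truth):
    """Dice similarity between two binary masks given as flat 0/1 lists:
    2*|intersection| / (|pred| + |truth|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 1, 1, 0]
print(round(dice(pred, truth), 3))  # -> 0.75
```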
32. Advanced Brain Tumor Classification in MR Images Using Transfer Learning and Pre-Trained Deep CNN Models.
- Author
-
Disci, Rukiye, Gurcan, Fatih, and Soylu, Ahmet
- Subjects
- *
GLIOMAS , *DIAGNOSTIC imaging , *CLINICAL decision support systems , *MAGNETIC resonance imaging , *CONVOLUTIONAL neural networks , *COMPUTER-aided diagnosis , *MENINGIOMA , *DEEP learning , *COMPUTERS in medicine , *AUTOMATION , *PITUITARY tumors , *MACHINE learning , *BRAIN tumors , *ALGORITHMS - Abstract
Simple Summary: This study explores the use of pre-trained deep learning models for classifying brain MRI images into four categories: Glioma, Meningioma, Pituitary, and No Tumor. The study uses a publicly available Brain Tumor MRI dataset and applies transfer learning to improve diagnostic accuracy and efficiency by fine-tuning pre-trained models. Xception achieved the highest performance with a weighted accuracy of 98.73%. While the models showed promise in addressing class imbalances, challenges in improving recall for certain tumor types remain. The study highlights the potential of deep learning in transforming medical imaging and clinical diagnostics. Background/Objectives: Brain tumor classification is a crucial task in medical diagnostics, as early and accurate detection can significantly improve patient outcomes. This study investigates the effectiveness of pre-trained deep learning models in classifying brain MRI images into four categories: Glioma, Meningioma, Pituitary, and No Tumor, aiming to enhance the diagnostic process through automation. Methods: A publicly available Brain Tumor MRI dataset containing 7023 images was used in this research. The study employs state-of-the-art pre-trained models, including Xception, MobileNetV2, InceptionV3, ResNet50, VGG16, and DenseNet121, which are fine-tuned using transfer learning, in combination with advanced preprocessing and data augmentation techniques. Transfer learning was applied to fine-tune the models and optimize classification accuracy while minimizing computational requirements, ensuring efficiency in real-world applications. Results: Among the tested models, Xception emerged as the top performer, achieving a weighted accuracy of 98.73% and a weighted F1 score of 95.29%, demonstrating exceptional generalization capabilities. 
These models proved particularly effective in addressing class imbalances and delivering consistent performance across various evaluation metrics, thus demonstrating their suitability for clinical adoption. However, challenges persist in improving recall for the Glioma and Meningioma categories, and the black-box nature of deep learning models requires further attention to enhance interpretability and trust in medical settings. Conclusions: The findings underscore the transformative potential of deep learning in medical imaging, offering a pathway toward more reliable, scalable, and efficient diagnostic tools. Future research will focus on expanding dataset diversity, improving model explainability, and validating model performance in real-world clinical settings to support the widespread adoption of AI-driven systems in healthcare and ensure their integration into clinical workflows. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
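Weighted metrics such as the weighted accuracy and weighted F1 reported above average per-class scores by class support. A sketch contrasting macro (unweighted) and support-weighted recall on a hypothetical imbalanced four-class split (labels below are illustrative, not the paper's data):

```python
from collections import Counter

def per_class_recall(y_true, y_pred):
    """Recall for each class in a multi-class problem."""
    out = {}
    for cls in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == cls]
        out[cls] = sum(y_pred[i] == cls for i in idx) / len(idx)
    return out

def macro_and_weighted(y_true, y_pred):
    """Macro average vs. support-weighted average of per-class recall."""
    r = per_class_recall(y_true, y_pred)
    support = Counter(y_true)
    macro = sum(r.values()) / len(r)
    weighted = sum(support[c] / len(y_true) * r[c] for c in r)
    return macro, weighted

# Hypothetical imbalanced predictions over the four classes.
y_true = ["glioma"] * 4 + ["meningioma"] * 3 + ["pituitary"] * 2 + ["none"]
y_pred = ["glioma"] * 3 + ["meningioma"] * 4 + ["pituitary"] * 2 + ["none"]
print(macro_and_weighted(y_true, y_pred))
```

Note that support-weighted recall coincides with plain accuracy when every sample has exactly one label, so weighting matters mainly for precision and F1.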
33. The Three-Class Annotation Method Improves the AI Detection of Early-Stage Osteosarcoma on Plain Radiographs: A Novel Approach for Rare Cancer Diagnosis.
- Author
-
Hasei, Joe, Nakahara, Ryuichi, Otsuka, Yujiro, Nakamura, Yusuke, Ikuta, Kunihiro, Osaki, Shuhei, Hironari, Tamiya, Miwa, Shinji, Ohshika, Shusa, Nishimura, Shunji, Kahara, Naoaki, Yoshida, Aki, Fujiwara, Tomohiro, Nakata, Eiji, Kunisada, Toshiyuki, and Ozaki, Toshifumi
- Subjects
- *
OSTEOSARCOMA , *STATISTICAL models , *RESEARCH funding , *RECEIVER operating characteristic curves , *ACADEMIC medical centers , *RARE diseases , *ARTIFICIAL intelligence , *EARLY detection of cancer , *DATA curation , *CANCER patients , *DESCRIPTIVE statistics , *MAGNETIC resonance imaging , *COMPUTER-aided diagnosis , *SENSITIVITY & specificity (Statistics) - Abstract
Simple Summary: Developing effective artificial intelligence (AI) systems for rare diseases such as osteosarcoma is challenging owing to the limited available data. This study introduces a novel approach for preparing training data for AI systems that detect osteosarcoma using X-rays. Traditional methods label tumor areas as a single entity; however, our new approach divides tumor regions into three distinct classes: intramedullary, cortical, and extramedullary. This three-class annotation method enables AI systems to learn more effectively from limited datasets by incorporating detailed anatomical knowledge. This new approach to data preparation resulted in more robust AI models that could detect subtle tumor changes at lower threshold values, demonstrating how strategic data annotation methods can enhance AI performance even with limited training samples. This methodological innovation in data preparation offers a new paradigm for developing AI systems for rare diseases, for which traditional data-driven approaches often fall short. Background/Objectives: Developing high-performance artificial intelligence (AI) models for rare diseases is challenging owing to limited data availability. This study aimed to evaluate whether a novel three-class annotation method for preparing training data could enhance AI model performance in detecting osteosarcoma on plain radiographs compared to conventional single-class annotation. Methods: We developed two annotation methods for the same dataset of 468 osteosarcoma X-rays and 378 normal radiographs: a conventional single-class annotation (1C model) and a novel three-class annotation method (3C model) that separately labeled intramedullary, cortical, and extramedullary tumor components. Both models used identical U-Net-based architectures, differing only in their annotation approaches. Performance was evaluated using an independent validation dataset. Results: Although both models achieved high diagnostic accuracy (AUC: 0.99 vs. 
0.98), the 3C model demonstrated superior operational characteristics. At a standardized cutoff value of 0.2, the 3C model maintained balanced performance (sensitivity: 93.28%, specificity: 92.21%), whereas the 1C model showed compromised specificity (83.58%) despite high sensitivity (98.88%). Notably, at the 25th percentile threshold, both models showed identical false-negative rates despite significantly different cutoff values (3C: 0.661 vs. 1C: 0.985), indicating the ability of the 3C model to maintain diagnostic accuracy at substantially lower thresholds. Conclusions: This study demonstrated that anatomically informed three-class annotation can enhance AI model performance for rare disease detection without requiring additional training data. The improved stability at lower thresholds suggests that thoughtful annotation strategies can optimize the AI model training, particularly in contexts where training data are limited. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
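The cutoff analysis above, sensitivity and specificity evaluated at a fixed threshold on the model's output score, can be sketched as follows (scores and labels are hypothetical, not the study's data):

```python
def sens_spec_at(scores, labels, cutoff):
    """Sensitivity/specificity of a detector that flags score >= cutoff."""
    tp = sum(s >= cutoff and l for s, l in zip(scores, labels))
    fn = sum(s < cutoff and l for s, l in zip(scores, labels))
    tn = sum(s < cutoff and not l for s, l in zip(scores, labels))
    fp = sum(s >= cutoff and not l for s, l in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical model scores; label 1 = osteosarcoma present.
scores = [0.95, 0.88, 0.40, 0.25, 0.15, 0.70, 0.10, 0.05]
labels = [1,    1,    1,    1,    0,    0,    0,    0]
print(sens_spec_at(scores, labels, 0.2))
```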
34. Neoplasms in the Nasal Cavity Identified and Tracked with an Artificial Intelligence-Assisted Nasal Endoscopic Diagnostic System.
- Author
-
Xu, Xiayue, Yun, Boxiang, Zhao, Yumin, Jin, Ling, Zong, Yanning, Yu, Guanzhen, Zhao, Chuanliang, Fan, Kai, Zhang, Xiaolin, Tan, Shiwang, Zhang, Zimu, Wang, Yan, Li, Qingli, and Yu, Shaoqing
- Subjects
- *
COMPUTER-aided diagnosis , *ENDOSCOPIC surgery , *NASAL surgery , *COMPUTER-assisted surgery , *ARTIFICIAL intelligence , *DEEP learning - Abstract
Objective: We aim to construct an artificial intelligence (AI)-assisted nasal endoscopy diagnostic system capable of preliminary differentiation and identification of nasal neoplasm properties, as well as intraoperative tracking, providing an important basis for nasal endoscopic surgery. Methods: We retrospectively analyzed 1050 video recordings of nasal endoscopic surgeries involving four types of nasal neoplasms. Using Deep Snake, U-Net, and Att-Res2-UNet, we developed a nasal neoplasm detection network based on endoscopic images. After deep learning, the optimal network was selected as the initialization model and trained to optimize the SiamMask online tracking algorithm. Results: The Att-Res2-UNet network demonstrated the highest accuracy and precision, with the most accurate recognition results. Our model achieved an overall accuracy similar to that of residents (0.9707 ± 0.00984), while slightly lower than that of rhinologists (0.9790 ± 0.00348). SiamMask's segmentation range was consistent with rhinologists, with a 99% compliance rate and a neoplasm probability value ≥ 0.5. Conclusions: This study successfully established an AI-assisted nasal endoscopic diagnostic system that can preliminarily identify nasal neoplasms from endoscopic images and automatically track them in real time during surgery, enhancing the efficiency of endoscopic diagnosis and surgery. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
35. Isfahan Artificial Intelligence Event 2023: Macular Pathology Detection Competition.
- Author
-
Sedighin, Farnaz, Monemian, Maryam, Zojaji, Zahra, Montazerolghaem, Ahmadreza, Asadinia, Mohammad Amin, Mirghaderi, Seyed Mojtaba, Esfahani, Seyed Amin Naji, Kazemi, Mohammad, Mokhtari, Reza, Mohammadi, Maryam, Ramezani, Mohadese, Tajmirriahi, Mahnoosh, and Rabbani, Hossein
- Subjects
- *
MACULAR degeneration , *OPTICAL coherence tomography , *COMPUTER-aided diagnosis , *MACULAR edema , *ARTIFICIAL intelligence , *DEEP learning - Abstract
Background: Computer-aided diagnosis (CAD) methods have become of great interest for diagnosing macular diseases over the past few decades. Artificial intelligence (AI)-based CADs offer several benefits, including speed, objectivity, and thoroughness. They are utilized as assistance systems in various ways, such as highlighting relevant disease indicators to doctors, providing diagnosis suggestions, and presenting similar past cases for comparison. Methods: More specifically, retinal AI-CADs have been developed to assist ophthalmologists in analyzing optical coherence tomography (OCT) images, making retinal diagnostics simpler and more accurate than before. Retinal AI-CAD technology could provide new insight for the care of patients who do not have access to a specialist. AI-based classification methods are critical tools in developing improved retinal AI-CAD technology. The Isfahan AI-2023 challenge organized a competition to provide objective formal evaluations of alternative tools in this area. In this study, we describe the challenge and the most successful algorithms. Results: A dataset of OCT images, acquired from normal subjects, patients with diabetic macular edema, and patients with other macular disorders, was provided in a documented format. The dataset, including the labeled training set and unlabeled test set, was made accessible to the participants. The aim of this challenge was to maximize the performance measures for the test labels. Researchers tested their algorithms and competed for the best classification results. Conclusions: The competition was organized to evaluate current AI-based classification methods in macular pathology detection. We received several submissions to our posted datasets, which indicates the growing interest in AI-CAD technology. 
The results demonstrated that deep learning-based methods can learn essential features of pathologic images, but much care has to be taken in choosing and adapting appropriate models for imbalanced small datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
36. Diagnosis of Autism Spectrum Disorder (ASD) by Dynamic Functional Connectivity Using GNN-LSTM.
- Author
-
Tang, Jun, Chen, Jie, Hu, Miaojun, Hu, Yao, Zhang, Zixi, and Xiao, Liuming
- Subjects
- *
GRAPH neural networks , *LONG short-term memory , *COMPUTER-aided diagnosis , *AUTISM spectrum disorders , *LARGE-scale brain networks - Abstract
Early detection of autism spectrum disorder (ASD) is particularly important given its insidious qualities and the high cost of the diagnostic process. Currently, static functional connectivity studies have achieved significant results in the field of ASD detection. However, as clinical research has deepened, growing evidence suggests that dynamic functional connectivity analysis can more comprehensively reveal the complex and variable characteristics of brain networks and their underlying mechanisms, thus providing more solid scientific support for computer-aided diagnosis of ASD. To overcome the lack of time-scale information in static functional connectivity analysis, this paper proposes an innovative GNN-LSTM model, which combines the advantages of long short-term memory (LSTM) and graph neural networks (GNNs). The model captures the spatial features in fMRI data with the GNN and aggregates the temporal information of dynamic functional connectivity using the LSTM, generating a more comprehensive spatio-temporal feature representation of the fMRI data. Further, a dynamic graph pooling method is proposed to extract the final node representations from the dynamic graph representations for classification tasks. To address the variable dependence of dynamic functional connectivity on time scales, the model introduces a skip-connection mechanism to enhance information extraction between internal units and capture features at different time scales. The model achieves remarkable results on the ABIDE dataset, with accuracies of 80.4% on ABIDE I and 79.63% on ABIDE II, which strongly demonstrates the effectiveness and potential of the model for ASD detection. This study not only provides new perspectives and methods for computer-aided diagnosis of ASD but also provides useful references for research in related fields. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
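Dynamic functional connectivity of the kind described above is commonly estimated with sliding-window correlations between ROI time series, each window yielding one graph for the GNN. A minimal sketch for two ROIs (window length and step are arbitrary choices here, not the paper's settings):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def dynamic_fc(roi_a, roi_b, win=4, step=2):
    """Sliding-window correlation: one edge weight per window."""
    return [pearson(roi_a[i:i + win], roi_b[i:i + win])
            for i in range(0, len(roi_a) - win + 1, step)]

# Two toy ROI time series that de-couple mid-scan.
a = [0, 1, 2, 3, 2, 1, 0, 1]
b = [0, 1, 2, 3, 0, -1, -2, -1]
print([round(r, 2) for r in dynamic_fc(a, b)])
```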
37. Automated CAD system for early detection and classification of pancreatic cancer using deep learning model.
- Author
-
Nadeem, Abubakar, Ashraf, Rahan, Mahmood, Toqeer, and Parveen, Sajida
- Subjects
- *
COMPUTER-aided diagnosis , *PANCREATIC tumors , *PANCREATIC cancer , *COMPUTED tomography , *GRAYSCALE model - Abstract
Accurate diagnosis of pancreatic cancer from CT scan images is critical for early detection and treatment, potentially saving numerous lives globally. Manual identification of pancreatic tumors by radiologists is challenging and time-consuming due to the complex nature of CT scan images; variations in tumor shape, size, and location also make it difficult to detect and classify different types of tumors. To address this challenge, we propose a four-stage computer-aided diagnosis framework. In the preprocessing stage, the input image is resized to 227 × 227, converted from RGB to grayscale, and enhanced by anisotropic diffusion filtering, which removes noise without blurring edges. In the segmentation stage, a binary image is created from the preprocessed grayscale image by thresholding, edges are highlighted with Sobel filtering, and watershed segmentation delineates the tumor region; a U-Net method is also implemented for segmentation. The geometric structure of the image is then refined using morphological operations, and texture features are extracted with a gray-level co-occurrence matrix, computed by counting the occurrences of pixel pairs with specific intensity values and spatial relationships in the refined image. The detection stage analyzes the extracted features by labeling connected components and selecting the region with the highest density to locate the tumor area, achieving an accuracy of 99.64%. In the classification stage, the system first classifies the detected region as normal or pancreatic tumor, and then grades tumors as benign, pre-malignant, or malignant using a proposed reduced 11-layer AlexNet model. 
The classification stage attained an accuracy level of 98.72%, an AUC of 0.9979, and an overall system average processing time of 1.51 seconds, demonstrating the capability of the system to effectively and efficiently identify and classify pancreatic cancers. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
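The gray-level co-occurrence matrix described above counts how often pixel-intensity pairs occur at a fixed spatial offset. A minimal sketch on a toy 3x3 image (a sparse Counter stands in for the full matrix):

```python
from collections import Counter

def glcm(img, dx=1, dy=0):
    """Gray-level co-occurrence counts for one offset (dx, dy):
    how often the intensity pair (i, j) appears at that displacement."""
    h, w = len(img), len(img[0])
    pairs = Counter()
    for y in range(h):
        for x in range(w):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                pairs[(img[y][x], img[ny][nx])] += 1
    return pairs

img = [[0, 0, 1],
       [0, 1, 1],
       [2, 2, 2]]
m = glcm(img)  # horizontal neighbours
print(m[(0, 0)], m[(0, 1)], m[(2, 2)])
```

Texture descriptors (contrast, homogeneity, etc.) are then computed as statistics over these counts.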
38. Research on Medical Image Classification Based on Triple Fusion Attention.
- Author
-
Wang, Y. G., Wang, L., and Geng, Y. X.
- Subjects
- *
IMAGE recognition (Computer vision) , *COMPUTER-aided diagnosis , *MEDICAL coding , *IMAGE fusion , *DIAGNOSTIC imaging - Abstract
The attention mechanism is very important in medical image classification. Different types of medical images may present lesions of different morphology and size, and lesion characteristics are often not obvious; existing attention mechanisms, however, suffer from insufficient feature diversity and ignore small lesions, which seriously degrades classification performance. To solve these problems, Triple Fusion Attention (TFA) is proposed. Through convolutional fusion, attention fusion, and adaptive fusion, TFA improves the model's perception of subtle structures and features in medical images, suppresses image noise, and enhances the representation of key features, effectively addressing the insufficient sensitivity to important features in medical images. Experiments show that TFA enables the model to focus on the lesion area more accurately, so that multi-scale features can be fused and classified, significantly improving overall performance and outperforming other attention mechanisms. In addition, TFA improves training efficiency and ease of deployment while maintaining good performance, and improves the accuracy and effectiveness of computer-aided diagnosis. [ABSTRACT FROM AUTHOR]
- Published
- 2025
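The three fusion steps are not specified in the abstract; as an illustration only, one common way to fuse several candidate feature maps adaptively is a softmax-gated weighted sum (all names and numbers below are hypothetical, not TFA's actual formulation):

```python
from math import exp

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    e = [exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def fuse(features, gates):
    """Adaptively fuse candidate feature vectors: softmax the gate
    scores, then take the element-wise weighted sum."""
    w = softmax(gates)
    return [sum(wi * f[k] for wi, f in zip(w, features))
            for k in range(len(features[0]))]

# Three hypothetical 4-element feature vectors from different branches.
conv_f  = [1.0, 0.0, 2.0, 0.5]
attn_f  = [0.5, 1.5, 1.0, 0.0]
adapt_f = [0.0, 0.2, 0.4, 1.0]
fused = fuse([conv_f, attn_f, adapt_f], gates=[2.0, 1.0, 0.5])
print([round(x, 2) for x in fused])
```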
39. Osteoarthritis Classification Algorithm Using CNN and Image Edge Detections.
- Author
-
Ali, Hanafy M.
- Subjects
- *
MACHINE learning , *DEEP learning , *COMPUTER-aided diagnosis , *KNEE osteoarthritis , *CONVOLUTIONAL neural networks , *THUMB , *KNEE - Abstract
The paper presents a comprehensive computer-aided diagnosis (CAD) system designed for early detection of knee osteoarthritis (OA) using knee medical imaging and machine learning algorithms. Osteoarthritis is a prevalent chronic disease affecting various joints, primarily the fingers, thumbs, spine, hips, knees, and big toes, with secondary occurrences linked to pre-existing joint anomalies. Although more common among older individuals, OA can develop in adults of any age and is characterized by degenerative changes in joints. Traditional diagnosis involves examining joint scans, typically X-ray analyses conducted by trained radiologists and orthopaedists, which can be time-consuming and subject to precision loss due to manual segmentation. Automatic segmentation and interpretation of joint X-ray scans are thus necessary to enhance clinical outcomes and the precision of bone measurements. The advent of deep learning technologies in medical systems has facilitated this transition, enabling efficient processing of large data volumes with improved accuracy. In particular, Convolutional Neural Networks (CNNs), among other deep learning methods, have proven effective in automating X-ray scan segmentation. The paper provides an overview of various deep learning and image processing techniques employed for automatic segmentation and interpretation of X-ray scans, facilitating disease diagnosis from image data, along with a proposed improved Visual Geometry Group (VGG-16) model with edge detection on X-ray images. A classification algorithm based on CNNs and image edge detection is proposed, demonstrating promising results with predictive accuracies exceeding 90% across all suggested models. Particularly notable is the performance of the proposed VGG-16 after training with edge detection, which attained a training accuracy of 100% and a testing accuracy of 98.2%. 
This highlights the efficacy of deep learning approaches in enhancing diagnostic accuracy and efficiency in knee OA detection. A comparative evaluation of the proposed algorithm against other techniques based on performance metrics is reported. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
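The edge-detection preprocessing referred to above is classically done with Sobel kernels: the image is convolved with two 3x3 filters and the responses combined. A minimal sketch using the |Gx| + |Gy| magnitude approximation (toy image, interior pixels only):

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| with 3x3 Sobel kernels,
    computed for interior pixels only (border left at zero)."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out

# Vertical step edge: left half dark, right half bright.
img = [[0, 0, 10, 10]] * 4
print(sobel_magnitude(img))
```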
40. The usefulness of automated high frequency ultrasound image analysis in atopic dermatitis staging.
- Author
-
Czajkowska, Joanna, Polańska, Adriana, Slian, Anna, and Dańczak-Pazdrowska, Aleksandra
- Subjects
- *
COMPUTER-aided diagnosis , *CLINICAL decision support systems , *IMAGE processing , *ATOPIC dermatitis , *ARTIFICIAL intelligence - Abstract
The last decades have brought interest in ultrasound applications in dermatology. Ultrasound is of particular interest in atopic dermatitis, where the formation of a subepidermal low echogenic band (SLEB) may serve as an independent indicator of treatment effects. This study proposes and evaluates a computer-aided diagnosis method for assessing atopic dermatitis (AD). The fully automated image processing framework combines advanced machine learning techniques for fast, reliable, and repeatable HFUS image analysis, supporting clinical decisions. The proposed methodology comprises accurate SLEB segmentation followed by a classification step. The dataset includes 20 MHz images of 80 patients diagnosed with AD according to the Hanifin and Rajka criteria, evaluated before and after treatment. Ground-truth labeling, consisting of clinical evaluation based on the Investigator Global Assessment (IGA) score together with ultrasound skin examination, was performed. For a reliability analysis, in further experiments two experts annotated the HFUS images twice at two-week intervals. The analysis aimed to verify whether the fully automated method can classify the HFUS images at the expert level. The Dice coefficient values for segmentation reached 0.908 for the SLEB and 0.936 for the entry echo layer. The accuracy of SLEB presence detection (IGA 0) is 98%, slightly outperforming the experts' assessment, which reaches 96%. The overall accuracy of the AD assessment was 69% (Cohen's kappa 0.78), comparable with the experts' assessments, which ranged between 64% and 70% (Cohen's kappa 0.73–0.79). The results indicate that the automated method can be applied to AD assessment, and that combining it with standard diagnosis may benefit repeatable analysis, a better understanding of the processes that take place within the skin, and treatment monitoring. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
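Cohen's kappa, used above to compare the automated method against expert IGA gradings, corrects raw agreement for the agreement expected by chance. A minimal sketch (the gradings below are hypothetical, not the study's annotations):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: (observed - chance agreement) / (1 - chance)."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical IGA gradings by the automated method and one expert.
auto   = [0, 1, 2, 2, 3, 1, 0, 2, 3, 1]
expert = [0, 1, 2, 1, 3, 1, 0, 2, 2, 1]
print(round(cohens_kappa(auto, expert), 3))
```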
41. Neural Memory State Space Models for Medical Image Segmentation.
- Author
-
Wang, Zhihua, Gu, Jingjun, Zhou, Wang, He, Quansong, Zhao, Tianli, Guo, Jialong, Lu, Li, He, Tao, and Bu, Jiajun
- Subjects
- *
COMPUTER-aided diagnosis , *IMAGE segmentation , *ORDINARY differential equations , *COMPUTATIONAL complexity , *DIAGNOSTIC imaging - Abstract
With the rapid advancement of deep learning, computer-aided diagnosis and treatment have become crucial in medicine. UNet is a widely used architecture for medical image segmentation, and various methods for improving UNet have been extensively explored. One popular approach is incorporating transformers, though their quadratic computational complexity poses challenges. Recently, State-Space Models (SSMs), exemplified by Mamba, have gained significant attention as a promising alternative due to their linear computational complexity. Another approach, neural memory Ordinary Differential Equations (nmODEs), exhibits similar principles and achieves good results. In this paper, we explore the respective strengths and weaknesses of nmODEs and SSMs and propose a novel architecture, the nmSSM decoder, which combines the advantages of both approaches. This architecture possesses powerful nonlinear representation capabilities while retaining the ability to preserve input and process global information. We construct nmSSM-UNet using the nmSSM decoder and conduct comprehensive experiments on the PH2, ISIC2018, and BU-COCO datasets to validate its effectiveness in medical image segmentation. The results demonstrate the promising application value of nmSSM-UNet. Additionally, we conducted ablation experiments to verify the effectiveness of our proposed improvements on SSMs and nmODEs. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
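At the core of SSM layers such as Mamba is a discrete linear state-space recurrence; the real layers use learned, input-dependent matrices over vector states, but the scalar sketch below shows why the scan is linear in sequence length, unlike quadratic self-attention:

```python
def ssm_scan(A, B, C, u_seq):
    """Discrete linear state-space model: x_{k+1} = A*x_k + B*u_k,
    y_k = C*x_k, scanned over an input sequence (scalar state for clarity)."""
    x, ys = 0.0, []
    for u in u_seq:
        x = A * x + B * u    # one O(1) state update per time step
        ys.append(C * x)
    return ys

# |A| < 1 gives an exponentially decaying memory of past inputs.
print(ssm_scan(A=0.5, B=1.0, C=2.0, u_seq=[1, 0, 0, 1]))
```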
42. Fine_Denseiganet: Automatic Medical Image Classification in Chest CT Scan Using Hybrid Deep Learning Framework.
- Author
-
Sahu, Hemlata P. and Kashyap, Ramgopal
- Subjects
- *
COMPUTER-aided diagnosis , *COMPUTED tomography , *COMPUTER-assisted image analysis (Medicine) , *IMAGE recognition (Computer vision) , *IMAGE analysis , *DEEP learning - Abstract
Medical image classification is one of the most significant tasks in computer-aided diagnosis. In the era of modern healthcare, digitalized medical images have come to play a crucial role in medical image analysis. Accurate disease recognition from medical Computed Tomography (CT) images remains challenging, yet it is important for rendering effective treatment to patients. The infectious COVID-19 disease is highly contagious and leads to a rapid increase in infected individuals. Drawbacks of RT-PCR kits include a high false negative rate (FNR) and a shortage of test kits. Hence, chest CT scans have been introduced as an alternative to RT-PCR and play an important role in diagnosing and screening COVID-19 infections. However, manual examination of CT scans by radiologists can be time-consuming, and a manual review of each individual CT image may not be feasible in emergencies. Therefore, there is a need for automated COVID-19 detection that builds on advances in AI-based models. This work presents effective and automatic Deep Learning (DL)-based COVID-19 detection using chest CT images. Initially, the data are gathered and pre-processed through a Spatial Weighted Bilateral Filter (SWBF) to eradicate unwanted distortions. Deep features are extracted using a Fine_Dense Convolutional Network (Fine_DenseNet). For classification, the softmax layer of Fine_DenseNet is replaced with an Improved Generative Adversarial Network_Artificial Hummingbird (IGAN_AHb) model in order to train on both labeled and unlabeled data. The network loss is optimized using the Artificial Hummingbird (AHb) optimization algorithm. Here, the proposed DL model (Fine_DenseIGANet) is used to perform automated multi-class classification of COVID-19 using CT scan images and attained a superior classification accuracy of 95.73% over other DL models. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
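The SWBF pre-processing step above is a spatially weighted variant of the bilateral filter. The paper's exact SWBF formulation is not given in the abstract, so the sketch below is the textbook bilateral filter on a small grayscale image, in pure Python for clarity:

```python
import math

def bilateral_filter(img, radius=1, sigma_s=1.0, sigma_r=25.0):
    """Edge-preserving smoothing: each output pixel is a weighted mean of
    its neighbours, with weights that decay with both spatial distance
    (sigma_s) and intensity difference (sigma_r)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        spatial = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        rangew = math.exp(-((img[ny][nx] - img[y][x]) ** 2) / (2 * sigma_r ** 2))
                        weight = spatial * rangew
                        num += weight * img[ny][nx]
                        den += weight
            out[y][x] = num / den
    return out
```

On a flat region the filter averages out noise; across a sharp intensity edge the range weight suppresses neighbours from the far side, so the edge survives, which is what makes this family of filters attractive for CT pre-processing.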
43. Automatic Breast Mass Lesion Detection in Mammogram Image.
- Author
-
Bania, Rubul Kumar and Halder, Anindya
- Subjects
- *
COMPUTER-aided diagnosis , *IMAGE processing , *MAMMOGRAMS , *SELECTION (Plant breeding) , *DATABASES , *BREAST - Abstract
Mammography imaging is one of the most successful techniques for breast cancer screening and detection of breast lesions. Detection of the Region of Interest (ROI), where possible abnormalities may be present, is the backbone of any Computer-Aided Detection or Diagnosis (CADx) system. In this paper, to assist the CADx system, a computational model is proposed to detect breast mass lesions in mammogram images. As a pre-processing step, pectoral muscles are first removed from the mammograms. An automatic thresholding scheme, together with the required image-processing techniques, then ranks different regions of breast tissue to locate the suspected region and refine the subsequent segmentation task. A seeded region-growing approach with an automatic seed-selection criterion is proposed to segment the ROI within the suspected region. The model requires very little user intervention, as most of its parameters are computed automatically. Its performance is compared with four different methods using six evaluation metrics, viz. Jaccard and Dice coefficients, relative error, segmentation accuracy, error, and the Fowlkes–Mallows index (FMI). The model is tested on 57 mammogram images covering four different cases, collected from a publicly available benchmark database, and evaluated both qualitatively and quantitatively. The best Dice coefficient, Jaccard coefficient, accuracy, error, and FMI values observed are 0.9506, 0.9471, 95.62%, 4.38% and 0.932, respectively. The superiority of the model over the state-of-the-art compared methods is well evident from the experimental results. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
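The Jaccard, Dice, accuracy and Fowlkes–Mallows figures reported above follow standard definitions over binary masks; a minimal sketch, assuming the masks are given as flat 0/1 sequences:

```python
import math

def seg_metrics(pred, truth):
    """Jaccard, Dice, pixel accuracy, and Fowlkes-Mallows index (FMI) for
    two binary masks given as flat 0/1 sequences (standard definitions)."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)
    tn = len(pred) - tp - fp - fn
    jaccard = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    accuracy = (tp + tn) / len(pred)
    fmi = tp / math.sqrt((tp + fp) * (tp + fn))
    return jaccard, dice, accuracy, fmi
```

Note that Dice and Jaccard are monotonically related (D = 2J / (1 + J)), which is why papers that report both, as this one does, show them moving together.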
44. Influence of next-generation artificial intelligence on headache research, diagnosis and treatment: the junior editorial board members' vision – part 2.
- Author
-
Petrušić, Igor, Chiang, Chia-Chun, Garcia-Azorin, David, Ha, Woo-Seok, Ornello, Raffaele, Pellesi, Lanfranco, Rubio-Beltrán, Eloisa, Ruscheweyh, Ruth, Waliszewska-Prosół, Marta, and Wells-Gatnik, William
- Subjects
- *
HEADACHE diagnosis , *HEADACHE treatment , *THERAPEUTICS , *ARTIFICIAL intelligence , *HEADACHE , *MULTIOMICS , *WEARABLE technology , *BIOINFORMATICS , *MEDICAL research , *COMPUTER-aided diagnosis , *COMPUTERS in medicine , *PAIN management , *CONCEPTUAL structures , *NEURORADIOLOGY , *INDIVIDUALIZED medicine , *DRUG discovery , *CHATBOTS , *DISEASE risk factors - Abstract
Part 2 explores the transformative potential of artificial intelligence (AI) in addressing the complexities of headache disorders through innovative approaches, including digital twin models, wearable healthcare technologies and biosensors, and AI-driven drug discovery. Digital twins, as dynamic digital representations of patients, offer opportunities for personalized headache management by integrating diverse datasets such as neuroimaging, multiomics, and wearable sensor data to advance headache research, optimize treatment, and enable virtual trials. In addition, AI-driven wearable devices equipped with next-generation biosensors combined with multi-agent chatbots could enable real-time physiological and biochemical monitoring, diagnosing, facilitating early headache attack forecasting and prevention, disease tracking, and personalized interventions. Furthermore, AI-driven advances in drug discovery leverage machine learning and generative AI to accelerate the identification of novel therapeutic targets and optimize treatment strategies for migraine and other headache disorders. Despite these advances, challenges such as data standardization, model explainability, and ethical considerations remain pivotal. Collaborative efforts between clinicians, biomedical and biotechnological engineers, AI scientists, legal representatives and bioethics experts are essential to overcoming these barriers and unlocking AI's full potential in transforming headache research and healthcare. This is a call to action in proposing novel frameworks for integrating AI-based technologies into headache care. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
45. Evaluation of an enhanced ResNet-18 classification model for rapid On-site diagnosis in respiratory cytology.
- Author
-
Gong, Wei, Vaishnani, Deep K., Jin, Xuan-Chen, Zeng, Jing, Chen, Wei, Huang, Huixia, Zhou, Yu-Qing, Hla, Khaing Wut Yi, Geng, Chen, and Ma, Jun
- Abstract
Objective: Rapid on-site evaluation (ROSE) of respiratory cytology specimens is a critical technique for accurate and timely diagnosis of lung cancer. However, in China, limited familiarity with the Diff-Quik staining method and a shortage of trained cytopathologists hamper the utilization of ROSE. Developing an improved deep learning model to help clinicians promptly and accurately evaluate Diff-Quik-stained cytology samples during ROSE therefore has important clinical value. Methods: Retrospectively, 116 digital images of Diff-Quik-stained cytology samples were obtained from whole-slide scans, covering six diagnostic categories: carcinoid, normal cells, adenocarcinoma, squamous cell carcinoma, non-small cell carcinoma, and small cell carcinoma. All malignant diagnoses were confirmed by histopathology and immunohistochemistry. The test image set was presented as single-choice questions to three cytopathologists from different hospitals with varying levels of experience, as well as to an artificial intelligence system. Results: The diagnostic accuracy of the cytopathologists correlated with their years of practice and hospital setting. The AI model demonstrated proficiency comparable to that of the human cytopathologists. Importantly, every combination of AI assistance and human cytopathologist increased diagnostic efficiency to some degree. Conclusions: This deep learning model shows promising capability as an aid for on-site diagnosis of respiratory cytology samples; however, human expertise remains essential to the diagnostic process. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
46. Enhanced Detection of Fetal Congenital Cardiac Abnormalities through Hybrid Deep Learning Using Hunter-Prey Optimization.
- Author
-
PASUPATHY, VIJAYALAKSHMI and KHILAR, RASHMITA
- Subjects
CONVOLUTIONAL neural networks ,FETAL diseases ,CONGENITAL heart disease ,COMPUTER-aided diagnosis ,MACHINE learning ,DEEP learning - Abstract
Congenital heart disease (CHD) is one of the most common birth defects, affecting approximately 1% of live births, and is the largest birth-defect-related contributor to infant mortality in developing countries. Prenatal diagnosis of critical CHD allows delivery planning for optimal neonatal intervention and medical care, decision-making, and family preparation. Children diagnosed prenatally have less preoperative brain injury, lower morbidity, more robust microstructural brain development, and, for some lesions, lower mortality than those diagnosed postnatally. More successful prognoses and better treatment depend on earlier detection during the embryonic phase of development. Lately, Deep Learning and Machine Learning methods are most commonly used for automatic detection and classification of CHD. This manuscript offers the design of the Hunter-Prey Optimization with Hybrid Deep Learning-based Congenital Heart Disease Detection of Fetus (HPOHDL-CHDDF) technique, whose goal is to improve the accuracy and efficiency of CHD detection. To accomplish this, the HPOHDL-CHDDF technique follows two major phases of operation: hybrid deep-learning-based classification and hyperparameter tuning. In the first stage, a Convolutional Neural Network with Long Short-Term Memory (CNN-LSTM) model is designed for classification. In the second stage, the HPO algorithm is employed for optimal selection of the hyperparameter values of the CNN-LSTM system.
The performance of the HPOHDL-CHDDF algorithm is validated on medical datasets. The experimental values show that the HPOHDL-CHDDF technique reaches enhanced performance over other baseline models. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
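Hunter-Prey Optimization is a population-based metaheuristic whose update rules are not given in the abstract, so the sketch below substitutes plain random search to illustrate the hyperparameter-tuning role such algorithms play; the toy objective standing in for CNN-LSTM validation loss, and its hyperparameter ranges, are hypothetical:

```python
import random

def random_search(objective, space, n_iter=200, seed=0):
    """Minimal hyperparameter search: sample candidate settings uniformly
    from the given ranges and keep the lowest-scoring one. A plain
    stand-in for population-based metaheuristics such as Hunter-Prey
    Optimization, which serve the same tuning purpose with more
    elaborate update rules."""
    rng = random.Random(seed)
    best, best_score = None, float("inf")
    for _ in range(n_iter):
        cand = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        score = objective(cand)
        if score < best_score:
            best, best_score = cand, score
    return best, best_score

# Hypothetical toy objective standing in for CNN-LSTM validation loss,
# minimized at lr = 0.01, dropout = 0.3.
space = {"lr": (1e-4, 1e-1), "dropout": (0.0, 0.5)}
objective = lambda p: (p["lr"] - 0.01) ** 2 + (p["dropout"] - 0.3) ** 2
best, best_loss = random_search(objective, space)
```

In the paper's pipeline, evaluating one candidate would mean training and validating a CNN-LSTM with those hyperparameters, so the search budget, not the update rule, usually dominates the cost.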
47. Deep learning system for the differential diagnosis of oral mucosal lesions through clinical photographic imaging.
- Author
-
Su, An-Yu, Wu, Ming-Long, and Wu, Yu-Hsueh
- Subjects
COMPUTER-aided diagnosis ,CONVOLUTIONAL neural networks ,IMAGE recognition (Computer vision) ,ARTIFICIAL intelligence ,DEEP learning - Abstract
Oral mucosal lesions are associated with a variety of pathological conditions. Most deep-learning-based convolutional neural network (CNN) systems for computer-aided diagnosis of oral lesions have typically concentrated on limited aspects of differential diagnosis. This study aimed to develop a CNN-based diagnostic model capable of classifying clinical photographs of oral ulcerative and associated lesions into five different diagnoses, thereby assisting clinicians in making accurate differential diagnoses. A set of 506 clinical images covering the five diagnoses was selected. The images were pre-processed and randomly divided into two sets for training and testing the CNN model. The model architecture was composed of convolutional layers, batch normalization layers, max pooling layers, a dropout layer and fully-connected layers. Evaluation metrics included weighted precision, weighted recall, weighted F1 score, average specificity, Cohen's kappa coefficient, the normalized confusion matrix and AUC. The overall image-classification performance showed a weighted precision of 88.8%, a weighted recall of 88.2%, a weighted F1 score of 0.878, an average specificity of 97.0%, a kappa coefficient of 0.851, and an average AUC of 0.985. The model achieved a decent classification performance (overall AUC = 0.985), showing the capacity to discern between benign and potentially malignant lesions, and lays the foundation for a novel tool to aid the clinical differential diagnosis of oral mucosal lesions. The main challenges were the small and imbalanced dataset. Enlarging the minority classes, incorporating more oral mucosal lesion diagnoses, and employing transfer learning and cross-validation might be included in future work to optimize the image classification model. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
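The weighted F1 and Cohen's kappa figures reported above follow standard definitions; a minimal sketch computing both from label sequences:

```python
from collections import Counter

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from the marginal label frequencies."""
    n = len(y_true)
    p_obs = sum(1 for t, p in zip(y_true, y_pred) if t == p) / n
    ct, cp = Counter(y_true), Counter(y_pred)
    p_exp = sum(ct[c] * cp[c] for c in set(y_true) | set(y_pred)) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

def weighted_f1(y_true, y_pred):
    """Support-weighted mean of per-class F1 scores."""
    total = 0.0
    for c in set(y_true):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
        total += f1 * sum(1 for t in y_true if t == c)
    return total / len(y_true)
```

Support weighting matters precisely in the imbalanced-dataset setting the abstract flags: a majority class cannot mask poor minority-class F1 in kappa, and weighted F1 makes each class's contribution proportional to its prevalence.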
48. Efficient Generative-Adversarial U-Net for Multi-Organ Medical Image Segmentation.
- Author
-
Wang, Haoran, Wu, Gengshen, and Liu, Yi
- Subjects
COMPUTER-assisted image analysis (Medicine) ,COMPUTER-aided diagnosis ,IMAGE analysis ,DEEP learning ,DIAGNOSTIC imaging - Abstract
Manual labeling of lesions in medical image analysis presents a significant challenge due to its labor-intensive and inefficient nature, which ultimately strains essential medical resources and impedes the advancement of computer-aided diagnosis. This paper introduces a novel medical image-segmentation framework named Efficient Generative-Adversarial U-Net (EGAUNet), designed to facilitate rapid and accurate multi-organ labeling. To enhance the model's capability to comprehend spatial information, we propose the Global Spatial-Channel Attention Mechanism (GSCA). This mechanism enables the model to concentrate more effectively on regions of interest. Additionally, we have integrated Efficient Mapping Convolutional Blocks (EMCB) into the feature-learning process, allowing for the extraction of multi-scale spatial information and the adjustment of feature map channels through optimized weight values. Moreover, the proposed framework progressively enhances its performance by utilizing a generative-adversarial learning strategy, which contributes to improvements in segmentation accuracy. Consequently, EGAUNet demonstrates exemplary segmentation performance on public multi-organ datasets while maintaining high efficiency. For instance, in evaluations on the CHAOS T2SPIR dataset, EGAUNet achieves approximately 2% higher performance on the Jaccard metric, 1% higher on the Dice metric, and nearly 3% higher on the precision metric in comparison to advanced networks such as Swin-Unet and TransUnet. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
49. Automatic Lower-Limb Length Measurement Network (A3LMNet): A Hybrid Framework for Automated Lower-Limb Length Measurement in Orthopedic Diagnostics.
- Author
-
Rhyou, Seyeol, Cho, Yongjin, Yoo, Jaechern, Hong, Sanghoon, Bae, Sunghoon, Bae, Hyunjae, and Yu, Minyung
- Subjects
MEASUREMENT errors ,LUMBAR pain ,COMPUTER-aided diagnosis ,LENGTH measurement ,X-ray imaging - Abstract
Limb Length Discrepancy (LLD) is a common condition that can result in gait abnormalities, pain, and an increased risk of early degenerative osteoarthritis in the lower extremities. Epidemiological studies indicate that mild LLD, defined as a discrepancy of 10 mm or less, affects approximately 60–90% of the population. While more severe cases are less frequent, they are associated with secondary conditions such as low back pain, scoliosis, and osteoarthritis of the hip or knee. LLD not only impacts daily activities, but may also lead to long-term complications, making early detection and precise measurement essential. Current LLD measurement methods include physical examination and imaging techniques, with physical exams being simple and non-invasive but prone to operator-dependent errors. To address these limitations and reduce measurement errors, we have developed an AI-based automated lower-limb length measurement system. This method employs semantic segmentation to accurately identify the positions of the femur and tibia and extracts key anatomical landmarks, achieving a margin of error within 4 mm. By automating the measurement process, this system reduces the time and effort required for manual measurements, enabling clinicians to focus more on treatment and improving the overall quality of care. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
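Once the anatomical landmarks are extracted, the length measurement itself reduces to a Euclidean distance in pixel space scaled by the radiograph's pixel spacing. A minimal sketch; the landmark coordinates and the 0.4 mm spacing are hypothetical, and isotropic spacing is assumed:

```python
import math

def limb_length_mm(p1, p2, pixel_spacing_mm):
    """Euclidean distance between two landmark pixels, scaled to
    millimetres by the radiograph's pixel spacing (isotropic spacing
    assumed)."""
    return math.dist(p1, p2) * pixel_spacing_mm

# Hypothetical landmark coordinates (femoral head and ankle centres for
# each limb, in pixels) and a hypothetical 0.4 mm pixel spacing.
left = limb_length_mm((512, 120), (530, 2080), 0.4)
right = limb_length_mm((1500, 118), (1488, 2102), 0.4)
discrepancy = abs(left - right)  # under 10 mm here: mild LLD by the criterion above
```

A 4 mm model error on each limb, as the abstract reports, can compound in the discrepancy, which is why per-landmark accuracy rather than per-limb accuracy is the quantity that matters clinically.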
50. Peering into the Heart: A Comprehensive Exploration of Semantic Segmentation and Explainable AI on the MnMs-2 Cardiac MRI Dataset.
- Author
-
Ayoob, Mohamed, Nettasinghe, Oshan, Sylvester, Vithushan, Bowala, Helmini, and Mohideen, Hamdaan
- Subjects
CARDIAC magnetic resonance imaging ,COMPUTER-aided diagnosis ,TRANSFORMER models ,COMPUTER-assisted image analysis (Medicine) ,DIAGNOSTIC imaging - Abstract
Accurate and interpretable segmentation of medical images is crucial for computer-aided diagnosis and image-guided interventions. This study explores the integration of semantic segmentation and explainable AI techniques on the MnMs-2 Cardiac MRI dataset. We propose a segmentation model that achieves competitive Dice scores (nearly 90%) and Hausdorff distances (less than 70), demonstrating its effectiveness for cardiac MRI analysis. Furthermore, we leverage Grad-CAM and Feature Ablation, two explainable AI techniques, to visualise the regions of interest guiding the model predictions for a target class. This integration enhances interpretability, allowing us to gain insight into the model's decision-making process and build trust in its predictions. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
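The Hausdorff distance reported above is the standard symmetric definition over segmentation boundaries; a brute-force sketch over two boundary point sets:

```python
import math

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets: the largest
    distance from any point in one set to the nearest point in the other
    (brute force, O(len(a) * len(b)))."""
    def directed(src, dst):
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(a, b), directed(b, a))
```

Unlike Dice, which measures overlap, the Hausdorff distance penalises the single worst boundary deviation, so the two metrics together (as reported in this study) capture both bulk agreement and outlier contours.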