20,462 results for "COMPUTER-AIDED DIAGNOSIS"
Search Results
2. Cellular spatial-semantic embedding for multi-label classification of cell clusters in thyroid fine needle aspiration biopsy whole slide images
- Author
Gao, Juntao, Zhang, Jing, Sun, Meng, and Zhuo, Li
- Published
- 2025
- Full Text
- View/download PDF
3. ItpCtrl-AI: End-to-end interpretable and controllable artificial intelligence by modeling radiologists’ intentions
- Author
Pham, Trong-Thang, Brecheisen, Jacob, Wu, Carol C., Nguyen, Hien, Deng, Zhigang, Adjeroh, Donald, Doretto, Gianfranco, Choudhary, Arabinda, and Le, Ngan
- Published
- 2025
- Full Text
- View/download PDF
4. Computer vision algorithms in healthcare: Recent advancements and future challenges
- Author
Kabir, Md Mohsin, Rahman, Ashifur, Hasan, Md Nahid, and Mridha, M.F.
- Published
- 2025
- Full Text
- View/download PDF
5. PsyneuroNet architecture for multi-class prediction of neurological disorders
- Author
Rawat, Kavita and Sharma, Trapti
- Published
- 2025
- Full Text
- View/download PDF
6. Wireless capsule endoscopy anomaly classification via dynamic multi-task learning
- Author
Li, Xingcun, Wu, Qinghua, and Wu, Kun
- Published
- 2025
- Full Text
- View/download PDF
7. FocalNeXt: A ConvNeXt augmented FocalNet architecture for lung cancer classification from CT-scan images
- Author
Gulsoy, Tolgahan and Baykal Kablan, Elif
- Published
- 2025
- Full Text
- View/download PDF
8. Computer-aided diagnosis of pituitary microadenoma on dynamic contrast-enhanced MRI based on spatio-temporal features
- Author
Guo, Te, Luan, Jixin, Gao, Jingyuan, Liu, Bing, Shen, Tianyu, Yu, Hongwei, Ma, Guolin, and Wang, Kunfeng
- Published
- 2025
- Full Text
- View/download PDF
9. A deep neural network model with spectral correlation function for electrocardiogram classification and diagnosis of atrial fibrillation
- Author
Mihandoost, Sara
- Published
- 2024
- Full Text
- View/download PDF
10. Application of computer-aided diagnosis to predict malignancy in BI-RADS 3 breast lesions
- Author
He, Ping, Chen, Wen, Bai, Ming-Yu, Li, Jun, Wang, Qing-Qing, Fan, Li-Hong, Zheng, Jian, Liu, Chun-Tao, Zhang, Xiao-Rong, Yuan, Xi-Rong, Song, Peng-Jie, and Cui, Li-Gang
- Published
- 2024
- Full Text
- View/download PDF
11. YOLO and residual network for colorectal cancer cell detection and counting
- Author
Haq, Inayatul, Mazhar, Tehseen, Asif, Rizwana Naz, Ghadi, Yazeed Yasin, Ullah, Najib, Khan, Muhammad Amir, and Al-Rasheed, Amal
- Published
- 2024
- Full Text
- View/download PDF
12. Computer-aided Diagnosis of Sarcoidosis Based on CT Images
- Author
Prokop, Paweł
- Published
- 2024
- Full Text
- View/download PDF
13. Fully-automatic end-to-end approaches for 3D drusen segmentation in Optical Coherence Tomography images
- Author
Goyanes, Elena, Leyva, Saúl, Herrero, Paula, de Moura, Joaquim, Novo, Jorge, and Ortega, Marcos
- Published
- 2024
- Full Text
- View/download PDF
14. GOLF-Net: Global and local association fusion network for COVID-19 lung infection segmentation
- Author
Xu, Xinyu, Gao, Lin, and Yu, Liang
- Published
- 2023
- Full Text
- View/download PDF
15. Automated Versus Traditional Scoring Agreeability During the Balance Error Scoring System.
- Author
Bruce Leicht, Amelia S., Patrie, James T., Sutherlin, Mark A., Smart, Madeline, and Hart, Joe M.
- Subjects
POSTURAL balance, RESEARCH methodology, CROSS-sectional method, COMPARATIVE studies, DESCRIPTIVE statistics, COMPUTER-aided diagnosis, VIDEO recording
- Abstract
Context: The Balance Error Scoring System (BESS) is a commonly used clinical tool to evaluate postural control that is traditionally performed through visual assessment and subjective evaluation of balance errors. The purpose of this study was to evaluate an automated computer-based scoring system using an instrumented pressure mat compared to the traditional human-based manual assessment. Design: A descriptive cross-sectional study design was used to evaluate the performance of the automated versus human BESS scoring methodology in healthy individuals. Methods: Fifty-one healthy active participants performed BESS trials following standard BESS procedures on an instrumented pressure mat (MobileMat, Tekscan Inc). Trained evaluators manually scored balance errors from frontal and sagittal plane video recordings for comparison to errors scored using center of force measurements and an automated scoring software (SportsAT, version 2.0.2, Tekscan Inc). A linear mixed model was used to determine measurement discrepancies across the 2 methods. Bland–Altman analyses were conducted to determine limit of agreement for the automated and manual scoring methods. Results: Significant differences between the automated and manual errors scored were observed across all conditions (P <.05), excluding bilateral firm stance. The greatest discrepancy between scoring methods was during the tandem foam stance, while the smallest discrepancy was during the tandem firm stance. Conclusion: The 2 methods of BESS scoring are different with wide limits of agreement. The benefits and risks of each approach to error scoring should be considered when selecting the most appropriate metric for clinical use or research studies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
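Entry 15 above reports Bland-Altman limits of agreement between automated and manual BESS error counts. As a point of reference only (the study's own data and analysis are not reproduced here), below is a minimal sketch of how 95% limits of agreement are typically computed from paired scores; the `auto` and `manual` arrays are hypothetical.

```python
import numpy as np

# Hypothetical paired error counts for the same trials (not study data).
auto = np.array([8, 12, 15, 9, 20, 11, 14, 7, 18, 10], dtype=float)
manual = np.array([7, 10, 16, 9, 17, 12, 13, 8, 15, 9], dtype=float)

diff = auto - manual                 # per-trial disagreement
bias = diff.mean()                   # mean difference (systematic bias)
sd = diff.std(ddof=1)                # SD of the differences

# 95% limits of agreement: bias +/- 1.96 * SD of the differences.
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
print(f"bias={bias:.2f}, limits of agreement=({loa_low:.2f}, {loa_high:.2f})")
```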
16. A Multi-phase Multi-graph Approach for Focal Liver Lesion Classification on CT Scans
- Author
Sam, Tran Bao, Huy, Ta Duc, Dao, Cong Tuyen, Lam, Thanh Tin, Tang, Van Ha, Truong, Steven Q. H., Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Cho, Minsu, editor, Laptev, Ivan, editor, Tran, Du, editor, Yao, Angela, editor, and Zha, Hongbin, editor
- Published
- 2025
- Full Text
- View/download PDF
17. An Attention Transformer-Based Method for the Modelling of Functional Connectivity and the Diagnosis of Autism Spectrum Disorder
- Author
Yang, Ge, Qing, Linbo, Zhang, Yanteng, Gao, Feng, Gao, Li, He, Xiaohai, Peng, Yonghong, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Antonacopoulos, Apostolos, editor, Chaudhuri, Subhasis, editor, Chellappa, Rama, editor, Liu, Cheng-Lin, editor, Bhattacharya, Saumik, editor, and Pal, Umapada, editor
- Published
- 2025
- Full Text
- View/download PDF
18. Calibrated Diverse Ensemble Entropy Minimization for Robust Test-Time Adaptation in Prostate Cancer Detection
- Author
Gilany, Mahdi, Harmanani, Mohamed, Wilson, Paul, To, Minh Nguyen Nhat, Jamzad, Amoon, Fooladgar, Fahimeh, Wodlinger, Brian, Abolmaesumi, Purang, Mousavi, Parvin, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Xu, Xuanang, editor, Cui, Zhiming, editor, Rekik, Islem, editor, Ouyang, Xi, editor, and Sun, Kaicong, editor
- Published
- 2025
- Full Text
- View/download PDF
19. Classification of Cervical Spine Fracture Using Deep Learning
- Author
Tiwari, Arunesh, Singh, Swapnil, Pandey, Adarsh, Singh, Brijendra Pratap, Kumar, Dinesh, Kumar, Dharmendra, Das, Swagatam, Series Editor, Bansal, Jagdish Chand, Series Editor, Jaiswal, Ajay, editor, Anand, Sameer, editor, Hassanien, Aboul Ella, editor, and Azar, Ahmad Taher, editor
- Published
- 2025
- Full Text
- View/download PDF
20. Positive-Sum Fairness: Leveraging Demographic Attributes to Achieve Fair AI Outcomes Without Sacrificing Group Gains
- Author
Belhadj, Samia, Park, Sanguk, Seth, Ambika, Dar, Hesham, Kooi, Thijs, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Puyol-Antón, Esther, editor, Zamzmi, Ghada, editor, Feragen, Aasa, editor, King, Andrew P., editor, Cheplygina, Veronika, editor, Ganz-Benjaminsen, Melanie, editor, Ferrante, Enzo, editor, Glocker, Ben, editor, Petersen, Eike, editor, Baxter, John S. H., editor, Rekik, Islem, editor, and Eagleson, Roy, editor
- Published
- 2025
- Full Text
- View/download PDF
21. SelectiveKD: A Semi-supervised Framework for Cancer Detection in DBT Through Knowledge Distillation and Pseudo-labeling
- Author
Dillard, Laurent, Lee, Hyeonsoo, Lee, Weonsuk, Kim, Tae Soo, Diba, Ali, Kooi, Thijs, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Ali, Sharib, editor, van der Sommen, Fons, editor, Papież, Bartłomiej Władysław, editor, Ghatwary, Noha, editor, Jin, Yueming, editor, and Kolenbrander, Iris, editor
- Published
- 2025
- Full Text
- View/download PDF
22. Multi-center Ovarian Tumor Classification Using Hierarchical Transformer-Based Multiple-Instance Learning
- Author
H.B. Claessens, Cris, W.R. Schultz, Eloy, Koch, Anna, Nies, Ingrid, A.E. Hellström, Terese, Nederend, Joost, Niers-Stobbe, Ilse, Bruining, Annemarie, M.J. Piek, Jurgen, H.N. De With, Peter, van der Sommen, Fons, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Ali, Sharib, editor, van der Sommen, Fons, editor, Papież, Bartłomiej Władysław, editor, Ghatwary, Noha, editor, Jin, Yueming, editor, and Kolenbrander, Iris, editor
- Published
- 2025
- Full Text
- View/download PDF
23. Fractal texture analysis for automated breast cancer detection.
- Author
Chamundeeswari, V. Vijaya and Gowri, V.
- Subjects
COMPUTER-aided diagnosis, BREAST cancer, FRACTAL analysis, EARLY detection of cancer, FRACTAL dimensions
- Abstract
Breast cancer is the most common type of cancer in women all over the world. In 2023, approximately 2.2 million new cases of breast cancer were expected, with 662,000 deaths worldwide. As of the end of 2021, 7.9 million women had been diagnosed with breast cancer in the previous five years, making it the world's second most common cancer. Detection of primary breast cancer is critical for increasing survival rates. Five-year breast cancer survival rates range from more than 93% in developed countries to 63% in India and 42% in South Africa. Early detection and treatment have been shown in developed countries to be effective and should be expanded to other countries [1]. Mammography is the most effective method currently available for detecting early-stage breast cancer. Typically, a radiologist will look for signs of cancer in a mammogram. A computer-aided diagnosis system directs radiologists to re-examine the mammogram for any suspicious areas; the computer program can analyse the mammogram and detect abnormalities. The goal of this paper is to investigate the value of textural properties, specifically fractal textures, in describing benign and malignant microcalcifications, as well as their role in improving classification accuracy. Textural measures were used to investigate classification and labelling. For fractal analysis, the differential box counting method, fractal dimensions, and the Gray Level Difference Method (GLDM) were used. Fractal measures were used to classify and label benign and malignant microcalcifications. This paper emphasises the importance of fractal texture measures in achieving automated breast cancer detection and classification accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
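Entry 23 bases its texture features on the differential box-counting (DBC) estimate of fractal dimension. The paper's exact implementation is not given; the sketch below is a simplified DBC on a synthetic grayscale patch, estimating the dimension as the slope of log N(s) versus log(1/s).

```python
import numpy as np

def dbc_fractal_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Simplified differential box counting on a square grayscale image."""
    m = img.shape[0]
    gmax = img.max() if img.max() > 0 else 1.0
    counts = []
    for s in sizes:
        h = s * gmax / m                      # box height in intensity units
        n = 0
        for i in range(0, m - m % s, s):
            for j in range(0, m - m % s, s):
                block = img[i:i + s, j:j + s]
                # number of intensity boxes spanned by this block
                n += int(np.ceil((block.max() - block.min()) / h)) + 1
        counts.append(n)
    # fit log N(s) = D * log(1/s) + c and return the slope D
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
patch = rng.random((64, 64))                  # synthetic stand-in for a mammogram ROI
print(f"estimated fractal dimension: {dbc_fractal_dimension(patch):.3f}")
```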
24. Cervical cancer screening based on automated CN network.
- Author
Francis, Divya and Subramani, Bharath
- Subjects
HUMAN papillomavirus, COMPUTER-aided diagnosis, PAP test, CERVICAL cancer, PRECANCEROUS conditions
- Abstract
Cervical cancer continues to pose a substantial public health burden, ranking as the fourth most frequent cause of female cancer deaths worldwide. Developing countries bear a disproportionate burden, accounting for roughly 80% of cases. Human Papillomavirus (HPV) is the major culprit, emphasizing the importance of preventive measures and early detection. While Pap smears are a cornerstone of screening, manual analysis has limitations: subjectivity, time constraints, and potential human error can lead to missed diagnoses and delayed treatment. To address these shortcomings, this paper proposes a novel computer-aided diagnosis (CAD) system, an automated approach that directly classifies cervical cancer cells within Pap smear images by leveraging machine learning and computer vision. This tool has the potential to significantly enhance the accuracy, consistency, and efficiency of cervical cancer screening programs. Earlier detection of precancerous lesions could lead to timely intervention and ultimately reduce cervical cancer mortality rates. Additionally, automating Pap smear analysis could free up valuable time for pathologists, allowing them to focus their expertise on more complex cases and potentially streamline overall workflow efficiency within healthcare systems. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
25. Computer aided diagnosis of prostate cancer.
- Author
Shete, Vaishnavi, Wankhade, Sakshi, Pethe, Anil, Hatwalne, Prasad, Shahane, Dev, and Waghmare, Gopal
- Subjects
COMPUTER-aided diagnosis, RECEIVER operating characteristic curves, CANCER diagnosis, MAGNETIC resonance imaging, PROSTATE
- Abstract
In the US, prostate cancer (PC) is the cancer that strikes males most frequently. In this study, we review prostate cancer detection and diagnosis approaches utilising computer-assisted diagnosis (CAD) and multiparametric magnetic resonance imaging (MP-MRI). We cover the most widely used mainstream methods for image segmentation, registration, feature extraction, and classification. The area under the receiver operating characteristic curve (AUC) has been employed to compare the performances of 15 advanced prostate CAD systems. In this work, we explore obstacles and potential strategies for prostate CAD research. Further improvements should be investigated to make prostate CAD systems usable in clinical practice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Chapter Three - Machine learning-based techniques for computer-aided diagnosis
- Author
Lakshminarayana, M., Dhananjay, B., Hiremath, B.V., Narayanappa, C.K., Neelapu, Bala Chakravarthy, and Sivaraman, J.
- Published
- 2025
- Full Text
- View/download PDF
27. MIDRC-MetricTree: a decision tree-based tool for recommending performance metrics in artificial intelligence-assisted medical image analysis.
- Author
Drukker, Karen, Sahiner, Berkman, Hu, Tingting, Kim, Grace Hyun, Whitney, Heather M, Baughan, Natalie, Myers, Kyle J, Giger, Maryellen L, and McNitt-Gray, Michael
- Subjects
artificial intelligence, computer-aided diagnosis, machine learning, performance evaluation, Clinical sciences, Biomedical engineering
- Abstract
PURPOSE: The Medical Imaging and Data Resource Center (MIDRC) was created to facilitate medical imaging machine learning (ML) research for tasks including early detection, diagnosis, prognosis, and assessment of treatment response related to the coronavirus disease 2019 pandemic and beyond. The purpose of this work was to create a publicly available metrology resource to assist researchers in evaluating the performance of their medical image analysis ML algorithms. APPROACH: An interactive decision tree, called MIDRC-MetricTree, has been developed, organized by the type of task that the ML algorithm was trained to perform. The criteria for this decision tree were that (1) users can select information such as the type of task, the nature of the reference standard, and the type of the algorithm output and (2) based on the user input, recommendations are provided regarding appropriate performance evaluation approaches and metrics, including literature references and, when possible, links to publicly available software/code as well as short tutorial videos. RESULTS: Five types of tasks were identified for the decision tree: (a) classification, (b) detection/localization, (c) segmentation, (d) time-to-event (TTE) analysis, and (e) estimation. As an example, the classification branch of the decision tree includes two-class (binary) and multiclass classification tasks and provides suggestions for methods, metrics, software/code recommendations, and literature references for situations where the algorithm produces either binary or non-binary (e.g., continuous) output and for reference standards with negligible or non-negligible variability and unreliability. CONCLUSIONS: The publicly available decision tree is a resource to assist researchers in conducting task-specific performance evaluations, including classification, detection/localization, segmentation, TTE, and estimation tasks.
- Published
- 2024
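Entry 27 describes MIDRC-MetricTree, an interactive decision tree that maps a task type and output type to recommended performance metrics. The real tool is the publicly available MIDRC resource; the toy lookup below only illustrates the decision-tree idea, and the specific recommendations listed are common textbook choices, not necessarily MIDRC's.

```python
# Toy metric-recommendation lookup keyed on (task, output type); illustrative only.
RECOMMENDATIONS = {
    ("classification", "binary"): ["sensitivity/specificity", "ROC AUC"],
    ("classification", "continuous"): ["ROC AUC", "calibration curve"],
    ("detection", "any"): ["FROC", "average precision"],
    ("segmentation", "any"): ["Dice coefficient", "Hausdorff distance"],
    ("time-to-event", "any"): ["concordance index (C-index)"],
    ("estimation", "any"): ["bias", "limits of agreement"],
}

def recommend(task, output="any"):
    """Return candidate metrics for a task, falling back to the generic entry."""
    return (RECOMMENDATIONS.get((task, output))
            or RECOMMENDATIONS.get((task, "any"))
            or ["consult the MIDRC-MetricTree tool"])

print(recommend("classification", "binary"))
print(recommend("segmentation"))
```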
28. Intelligent mask image reconstruction for cardiac image segmentation through local–global fusion.
- Author
Boukhamla, Assia, Azizi, Nabiha, and Belhaouari, Samir Brahim
- Subjects
CARDIAC magnetic resonance imaging, COMPUTER-aided diagnosis, CARDIOVASCULAR disease diagnosis, TRANSFORMER models, IMAGE processing
- Abstract
Accurate segmentation of cardiac structures in magnetic resonance imaging (MRI) is essential for reliable diagnosis and management of cardiovascular disease. Although numerous robust models have been proposed, no single segmentation model consistently outperforms others across all cases, and models that excel on one dataset may not achieve similar accuracy on others or when the same dataset is expanded. This study introduces FCTransNet, an ensemble-based computer-aided diagnosis system that leverages the complementary strengths of Vision Transformer (ViT) models (specifically TransUNet, SwinUNet, and SegFormer) to address these challenges. To achieve this, we propose a novel pixel-level fusion technique, the Intelligent Weighted Summation Technique (IWST), which reconstructs the final segmentation mask by integrating the outputs of the ViT models and accounting for their diversity. First, a dedicated U-Net module isolates the region of interest (ROI) from cine MRI images, which is then processed by each ViT to generate preliminary segmentation masks. The IWST subsequently fuses these masks to produce a refined final segmentation. By using a local window around each pixel, IWST captures specific neighborhood details while incorporating global context to enhance segmentation accuracy. Experimental validation on the ACDC dataset shows that FCTransNet significantly outperforms individual ViTs and other deep learning-based methods, achieving a Dice Score (DSC) of 0.985 and a mean Intersection over Union (IoU) of 0.914 in the end-diastolic phase. In addition, FCTransNet maintains high accuracy in the end-systolic phase with a DSC of 0.989 and an IoU of 0.908. These results underscore FCTransNet's ability to improve cardiac MRI segmentation accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
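Entry 28 fuses TransUNet, SwinUNet, and SegFormer outputs with an Intelligent Weighted Summation Technique (IWST) that weighs models per pixel using a local window. The abstract does not specify the weighting, so the sketch below shows only a generic weighted pixel-wise fusion of probability maps; the per-model weights and array shapes are assumptions, not the paper's IWST.

```python
import numpy as np

def fuse_probability_maps(prob_maps, weights):
    """Weighted pixel-wise fusion of segmentation probability maps.

    prob_maps: array of shape (n_models, H, W) with values in [0, 1].
    weights:   array of shape (n_models,); here hypothetically derived from
               each model's validation Dice, not the paper's IWST weights.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalize weights
    fused = np.tensordot(w, prob_maps, axes=1)        # (H, W) weighted sum
    return (fused >= 0.5).astype(np.uint8)            # final binary mask

rng = np.random.default_rng(1)
maps = rng.random((3, 128, 128))                      # stand-ins for the three ViT outputs
mask = fuse_probability_maps(maps, weights=[0.5, 0.3, 0.2])
print(mask.shape, mask.mean())
```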
29. A Narrative Review of Image Processing Techniques Related to Prostate Ultrasound.
- Author
Wang, Haiqiao, Wu, Hong, Wang, Zhuoyuan, Yue, Peiyan, Ni, Dong, Heng, Pheng-Ann, and Wang, Yi
- Subjects
COMPUTER-aided diagnosis, IMAGE processing, IMAGE segmentation, COMPUTER-assisted image analysis (Medicine), IMAGE analysis
- Abstract
Prostate cancer (PCa) poses a significant threat to men's health, with early diagnosis being crucial for improving prognosis and reducing mortality rates. Transrectal ultrasound (TRUS) plays a vital role in the diagnosis and image-guided intervention of PCa. To facilitate physicians with more accurate and efficient computer-assisted diagnosis and interventions, many image processing algorithms in TRUS have been proposed and achieved state-of-the-art performance in several tasks, including prostate gland segmentation, prostate image registration, PCa classification and detection and interventional needle detection. The rapid development of these algorithms over the past 2 decades necessitates a comprehensive summary. As a consequence, this survey provides a narrative review of this field, outlining the evolution of image processing methods in the context of TRUS image analysis and meanwhile highlighting their relevant contributions. Furthermore, this survey discusses current challenges and suggests future research directions to possibly advance this field further. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
30. Enhancing pancreatic cancer classification through dynamic weighted ensemble: a game theory approach.
- Author
S., Dhanasekaran, D., Silambarasan, P., Vivek Karthick, and K., Sudhakar
- Subjects
FUZZY sets, ROUGH sets, COMPUTER-aided diagnosis, PANCREATIC cancer, TUMOR classification
- Abstract
The significant research carried out on medical healthcare networks is giving computing innovations ample space to produce the latest advances. Pancreatic cancer ranks among the most common tumors considered fatal and often goes unsuspected, since it is positioned in the region of the abdomen behind the stomach and cannot be adequately treated once diagnosed. In radiological imaging, such as MRI and CT, computer-aided diagnosis (CAD), quantitative evaluations, and automated pancreatic cancer classification approaches are routinely provided. This study provides a dynamic weighted ensemble framework for pancreatic cancer classification inspired by game theory. The Grey Level Co-occurrence Matrix (GLCM) is utilized for feature extraction, together with Gaussian kernel-based fuzzy rough sets theory (GKFRST) for feature reduction and the Random Forest (RF) classifier for categorization. ResNet50 and VGG16 are used in the transfer learning (TL) paradigm. An innovative ensemble classifier based on the game theory method is proposed to combine the outcomes of the TL paradigm and the RF classifier paradigm. When compared with current models, the ensemble technique considerably increases pancreatic cancer classification accuracy and yields exceptional performance. The study improves the categorization of pancreatic cancer by using game theory, a mathematical paradigm that simulates strategic interactions. Because game theory has not frequently been used in the discipline of cancer categorization, this research is distinctive in its methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
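Entry 30 builds its classical branch from GLCM texture features and a Random Forest classifier. Assuming scikit-image (0.19 or later, for `graycomatrix`/`graycoprops`) and scikit-learn are available, a minimal sketch of that step on synthetic patches and labels follows; the GKFRST reduction and the game-theoretic ensemble are omitted.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(patch, levels=32):
    """Haralick-style features from a gray-level co-occurrence matrix."""
    q = (patch / patch.max() * (levels - 1)).astype(np.uint8)   # quantize intensities
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(2)
patches = rng.random((40, 64, 64)) + 1e-6          # synthetic image patches, not study data
labels = rng.integers(0, 2, size=40)               # synthetic class labels

X = np.vstack([glcm_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```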
31. Automated diagnosis of COVID-19 using chest X-ray image processing by a Convolutional Neural Network.
- Author
Alotaibi, Reem, Alharbi, Abir, Algethami, Abdulaziz, and Alkenawi, Abdulkader
- Subjects
CONVOLUTIONAL neural networks, COMPUTER-aided diagnosis, RADIOSCOPIC diagnosis, MACHINE learning, DIAGNOSIS
- Abstract
The COVID-19 pandemic has severely impacted global healthcare and financial systems, highlighting the need for an automatic computer-aided diagnosis system using image recognition for chest X-rays (CXR). This study aims to classify COVID-19, normal, and pneumonia patients from CXR images via a modified ResNet-50 pre-trained CNN model. Our experiments are based on Dataset-1, which contains CXR images of COVID-19 and normal cases, while Dataset-2 also includes pneumonia. Dataset-1 and Dataset-2 were collected from King Abdul-Aziz Medical City in National Guard, Jeddah, Saudi Arabia. Moreover, a sample from the Kaggle repository was added to Dataset-1 and 2 to make two more datasets. Our results for the diagnosis of respiratory diseases have shown reliability and high accuracy of 95.28% (97.66% sensitivity and 93.12% specificity), which will be beneficial in aiding physicians and healthcare centres in the global fight against harmful spreading viruses by employing AI and ML techniques in X-ray medical diagnosis. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
32. A review of deep learning methods for gastrointestinal diseases classification applied in computer-aided diagnosis system.
- Author
Jiang, Qianru, Yu, Yulin, Ren, Yipei, Li, Sheng, and He, Xiongxiong
- Subjects
COMPUTER-aided diagnosis, IMAGE recognition (Computer vision), GASTROINTESTINAL diseases, NOSOLOGY, DEEP learning
- Abstract
Recent advancements in deep learning have significantly improved the intelligent classification of gastrointestinal (GI) diseases, particularly in aiding clinical diagnosis. This paper seeks to review a computer-aided diagnosis (CAD) system for GI diseases, aligning with the actual clinical diagnostic process. It offers a comprehensive survey of deep learning (DL) techniques tailored for classifying GI diseases, addressing challenges inherent in complex scenes, clinical constraints, and technical obstacles encountered in GI imaging. Firstly, the esophagus, stomach, small intestine, and large intestine were located to determine the organs where the lesions were located. Secondly, location detection and classification of a single disease are performed on the premise that the organ's location corresponding to the image is known. Finally, comprehensive classification for multiple diseases is carried out. The results of single and multi-classification are compared to achieve more accurate classification outcomes, and a more effective computer-aided diagnosis system for gastrointestinal diseases was further constructed. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
33. Facial micro-expression classification through an optimized convolutional neural network using genetic algorithm.
- Author
Naidana, Krishna Santosh, Yarra, Yaswanth, and Divvela, Lakshmi Prasanna
- Subjects
CONVOLUTIONAL neural networks, COMPUTER-aided diagnosis, FEATURE extraction, GENETIC algorithms, FACIAL expression
- Abstract
Computer vision enables machines to interpret the visual world using various computer-aided detection (CAD)-based techniques and plays a crucial role in automatic micro-expression classification. A micro-expression is a brief facial movement that reveals a genuine emotion a person tries to conceal; it usually lasts for a short duration and is imperceptible to normal vision. To reveal people's genuine emotions, automatic micro-expression screening using a convolutional neural network (CNN) is in great demand. Traditional methods for micro-expression recognition (MER) suffer from low classification accuracy due to inadequate selection of CNN hyperparameters. The proposed approach addresses these challenges by using an optimized CNN with an adequate learning rate, batch size, number of epochs, and dropout rate. A real-coded genetic algorithm (RCGA) has been employed for the hyperparameter optimization. In this experimentation, features are extracted from the onset and apex frames of micro-expression video clips of the CASME II dataset. The proposed model's performance is measured using various metrics, including accuracy, precision, and recall, and is then compared with an optimized CNN using a random search algorithm. The empirical investigation of existing CNN-based methods has demonstrated the efficacy of our proposed model. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
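Entry 33 tunes a CNN's learning rate, batch size, epochs, and dropout with a real-coded genetic algorithm. Training the CNN is out of scope here, so the sketch below shows only an RCGA loop over those four hyperparameters with a stand-in fitness function where validation accuracy would go; the population size, operators, and bounds are assumptions rather than the paper's settings, and batch size and epochs would be rounded before use.

```python
import numpy as np

rng = np.random.default_rng(3)
# bounds per gene: learning rate, batch size, epochs, dropout rate
LOW = np.array([1e-4, 16, 5, 0.1])
HIGH = np.array([1e-1, 128, 50, 0.6])

def fitness(h):
    # Placeholder: in the real setting, train/validate the CNN with
    # hyperparameters h and return the validation accuracy.
    lr, batch, epochs, dropout = h
    return -((np.log10(lr) + 2.5) ** 2) - (dropout - 0.3) ** 2

def rcga(pop_size=20, generations=30, mut_sigma=0.1):
    pop = rng.uniform(LOW, HIGH, size=(pop_size, 4))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        children = []
        for _ in range(pop_size):
            i, j = rng.integers(0, pop_size, 2)       # tournament selection, parent A
            a = pop[i] if scores[i] >= scores[j] else pop[j]
            k, l = rng.integers(0, pop_size, 2)       # tournament selection, parent B
            b = pop[k] if scores[k] >= scores[l] else pop[l]
            alpha = rng.random(4)                      # blend (arithmetic) crossover
            child = alpha * a + (1 - alpha) * b
            child += rng.normal(0, mut_sigma, 4) * (HIGH - LOW)  # Gaussian mutation
            children.append(np.clip(child, LOW, HIGH))
        pop = np.array(children)
    return pop[np.argmax([fitness(ind) for ind in pop])]

print("best hyperparameters [lr, batch, epochs, dropout]:", rcga())
```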
34. AN APPROACH FOR LUNG CANCER DETECTION AND CLASSIFICATION USING LENET-DENSENET.
- Author
Mathew, Ann, Grace, K. S. Vijula, and Preetha, M. Mary Synthuja Jain
- Subjects
ARTIFICIAL neural networks, LUNG cancer, CLINICAL decision support systems, CONVOLUTIONAL neural networks, COMPUTER-aided diagnosis, DEEP learning
- Published
- 2025
- Full Text
- View/download PDF
35. Computer-Aided Diagnosis of Graphomotor Difficulties Utilizing Direction-Based Fractional Order Derivatives.
- Author
Gavenciak, Michal, Mucha, Jan, Mekyska, Jiri, Galaz, Zoltan, Zvoncakova, Katarina, and Faundez-Zanuy, Marcos
- Abstract
Children who do not sufficiently develop the graphomotor skills essential for handwriting often develop graphomotor disabilities (GD), impacting the self-esteem and academic performance of the individual. Current examination methods for GD consist of scales and questionnaires, which lack objectivity, rely on the perceptual abilities of the examiner, and may lead to inadequately targeted remediation. Nowadays, one way to address the factor of subjectivity is to incorporate supportive machine learning (ML)-based assessment. However, even with the increasing popularity of decision-support systems facilitating the diagnosis and assessment of GD, this field still lacks an understanding of deficient kinematics concerning the direction of pen movement. This study aims to explore the impact of movement direction on the manifestations of graphomotor difficulties in school-aged children. We introduced a new fractional-order derivative-based approach enabling quantification of kinematic aspects of handwriting concerning the direction of movement using a polar plot representation. We validated the novel features in a barrage of machine learning scenarios, testing various training configurations based on extreme gradient boosting trees (XGBoost) with Bayesian and random search hyperparameter tuning. Results show that our novel features outperformed the baseline and provided a balanced accuracy of 87% (sensitivity = 82%, specificity = 92%) when performing binary classification (children with/without graphomotor difficulties). The final model peaked when using only 43 out of 250 novel features, showing that XGBoost can benefit from feature selection methods. The proposed features provide additional information to an automated classifier with the potential of human interpretability thanks to the possibility of easy visualization using polar plots. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
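Entry 35 derives its features from direction-dependent fractional-order derivatives of the pen trajectory. A standard discrete approximation is the Grünwald-Letnikov form, D^a x[n] ~ h^(-a) * sum_k c_k x[n-k] with c_0 = 1 and c_k = c_(k-1) * (1 - (a+1)/k); the sketch below applies that operator to a synthetic displacement signal and leaves out the direction binning and XGBoost stage described in the paper.

```python
import numpy as np

def gl_fractional_derivative(x, alpha, h=1.0, memory=64):
    """Grünwald-Letnikov fractional derivative of a 1-D signal (truncated memory)."""
    k = np.arange(1, memory + 1)
    coeffs = np.concatenate(([1.0], np.cumprod(1.0 - (alpha + 1.0) / k)))
    out = np.zeros_like(x, dtype=float)
    for n in range(len(x)):
        m = min(n + 1, memory + 1)                   # how many past samples to use
        out[n] = coeffs[:m] @ x[n::-1][:m]           # weighted sum of x[n], x[n-1], ...
    return out / h**alpha

t = np.linspace(0, 2 * np.pi, 200)
displacement = np.sin(t)                             # synthetic pen-displacement signal
half_derivative = gl_fractional_derivative(displacement, alpha=0.5, h=t[1] - t[0])
print(half_derivative[:5])
```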
36. A quantum-optimized approach for breast cancer detection using SqueezeNet-SVM.
- Author
Bilal, Anas, Alkhathlan, Ali, Kateb, Faris A., Tahir, Alishba, Shafiq, Muhammad, and Long, Haixia
- Subjects
GREY Wolf Optimizer algorithm, COMPUTER-aided diagnosis, MEDICAL sciences, IMAGE analysis, SUPPORT vector machines
- Abstract
Breast cancer is one of the most aggressive types of cancer, and its early diagnosis is crucial for reducing mortality rates and ensuring timely treatment. Computer-aided diagnosis systems provide automated mammography image processing, interpretation, and grading. However, since currently existing methods suffer from issues such as overfitting, lack of adaptability, and dependence on massive annotated datasets, the present work introduces a hybrid approach to enhance breast cancer classification accuracy. The proposed Q-BGWO-SQSVM approach utilizes an improved quantum-inspired binary Grey Wolf Optimizer and combines it with SqueezeNet and Support Vector Machines to achieve sophisticated performance. SqueezeNet's fire modules and complex bypass mechanisms extract distinct features from mammography images. These features are then optimized by the Q-BGWO to determine the best SVM parameters. Because the resulting CAD system is more reliable, accurate, and sensitive, its application is advantageous for healthcare. The proposed Q-BGWO-SQSVM was evaluated using diverse databases: MIAS, INbreast, DDSM, and CBIS-DDSM, analyzing its performance regarding accuracy, sensitivity, specificity, precision, F1 score, and MCC. Notably, on the CBIS-DDSM dataset, the Q-BGWO-SQSVM achieved remarkable results of 99% accuracy, 98% sensitivity, and 100% specificity in 15-fold cross-validation. Overall, the performance of the designed Q-BGWO-SQSVM model is excellent, and its potential application to other datasets and imaging conditions is promising. The novel Q-BGWO-SQSVM model outperforms state-of-the-art classification methods and offers accurate and reliable early breast cancer detection, which is essential for further healthcare development. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
37. Classification of CT scan and X-ray dataset based on deep learning and particle swarm optimization.
- Author
Liu, Honghua, Zhao, Mingwei, She, Chang, Peng, Han, Liu, Mailan, and Li, Bo
- Subjects
FEATURE extraction, COMPUTER-aided diagnosis, COMPUTED tomography, PARTICLE swarm optimization, IMAGE processing
- Abstract
In 2019, the novel coronavirus swept the world, exposing the monitoring and early warning problems of the medical system. Computer-aided diagnosis models based on deep learning have good universality and can well alleviate these problems. However, traditional image processing methods may lead to high false positive rates, which is unacceptable in disease monitoring and early warning. This paper proposes a low false positive rate disease detection method based on COVID-19 lung images and establishes a two-stage optimization model. In the first stage, the model is trained using classical gradient descent, and relevant features are extracted; in the second stage, an objective function that minimizes the false positive rate is constructed to obtain a network model with high accuracy and low false positive rate. Therefore, the proposed method has the potential to effectively classify medical images. The proposed model was verified using a public COVID-19 radiology dataset and a public COVID-19 lung CT scan dataset. The results show that the model has made significant progress, with the false positive rate reduced to 11.3% and 7.5%, and the area under the ROC curve increased to 92.8% and 97.01%. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
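Entry 37 builds its second stage around minimizing the false positive rate. The paper's two-stage objective is not reproduced here; as a much simpler illustration of the same concern, the sketch below picks a decision threshold from a ROC curve so that the FPR stays under a target while sensitivity stays as high as possible, using synthetic scores and labels.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(4)
labels = rng.integers(0, 2, size=500)                       # synthetic ground truth
scores = labels * 0.4 + rng.random(500) * 0.6               # synthetic model scores

fpr, tpr, thresholds = roc_curve(labels, scores)
target_fpr = 0.10
ok = fpr <= target_fpr                                      # operating points within the FPR budget
best = np.argmax(tpr[ok])                                   # highest sensitivity among them
print(f"threshold={thresholds[ok][best]:.3f}, "
      f"FPR={fpr[ok][best]:.3f}, sensitivity={tpr[ok][best]:.3f}")
```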
38. Alzheimer's disease diagnosis by 3D-SEConvNeXt.
- Author
Hu, Zhongyi, Wang, Yuhang, and Xiao, Lei
- Abstract
Alzheimer's disease (AD) constitutes a fatal neurodegenerative disorder and represents the most prevalent form of dementia among the elderly population. Traditional manual AD classification methods, such as clinical diagnosis, are known to be time-consuming and labor-intensive, with relatively low accuracy. Therefore, our work aims to develop a new deep learning framework to tackle this challenge. Our proposed model integrates ConvNeXt with three-dimensional (3D) convolution and incorporates a 3D Squeeze-and-Excitation (3D-SE) attention mechanism to enhance early classification of AD. The experimental data is sourced from the publicly accessible Alzheimer's disease Neuroimaging Initiative (ADNI) database, with raw Magnetic Resonance Imaging (MRI) data preprocessed using SPM12 software. Subsequently, the preprocessed data is input into the 3D-SEConvNeXt network to perform four classification tasks: distinguishing between AD and Normal Control (NC), Mild Cognitive Impairment (MCI) and NC, AD and MCI, as well as AD, MCI, and NC. The experimental results indicate that the 3D-SEConvNeXt model consistently outperforms alternative models in terms of accuracy, achieving commendable outcomes in early AD diagnostic tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
39. An exploration of distinguishing subjective cognitive decline and mild cognitive impairment based on resting-state prefrontal functional connectivity assessed by functional near-infrared spectroscopy.
- Author
Pu, Zhengping, Huang, Hongna, Li, Man, Li, Hongyan, Shen, Xiaoyan, Wu, Qingfeng, Ni, Qin, Lin, Yong, and Cui, Donghong
- Subjects
COGNITION disorders diagnosis, CROSS-sectional method, MILD cognitive impairment, FUNCTIONAL connectivity, RESEARCH funding, LOGISTIC regression analysis, NEAR infrared spectroscopy, LONGITUDINAL method, SUPPORT vector machines, NEUROPSYCHOLOGICAL tests, COMPUTER-aided diagnosis, MACHINE learning, COMPARATIVE studies, CONFIDENCE intervals, SENSITIVITY & specificity (Statistics), DISCRIMINANT analysis
- Abstract
Purpose: Functional near-infrared spectroscopy (fNIRS) has shown feasibility in evaluating cognitive function and brain functional connectivity (FC). Therefore, this fNIRS study aimed to develop a screening method for subjective cognitive decline (SCD) and mild cognitive impairment (MCI) based on resting-state prefrontal FC and neuropsychological tests via machine learning. Methods: Functional connectivity data measured by fNIRS were collected from 55 normal controls (NCs), 80 SCD individuals, and 111 MCI individuals. Differences in FC were analyzed among the groups. FC strength and neuropsychological test scores were extracted as features to build classification and predictive models through machine learning. Model performance was assessed based on accuracy, specificity, sensitivity, and area under the curve (AUC) with 95% confidence interval (CI) values. Results: Statistical analysis revealed a trend toward compensatory enhanced prefrontal FC in SCD and MCI individuals. The models showed a satisfactory ability to differentiate among the three groups, especially those employing linear discriminant analysis, logistic regression, and support vector machine. Accuracies of 94.9% for MCI vs. NC, 79.4% for MCI vs. SCD, and 77.0% for SCD vs. NC were achieved, and the highest AUC values were 97.5% (95% CI: 95.0%–100.0%) for MCI vs. NC, 83.7% (95% CI: 77.5%–89.8%) for MCI vs. SCD, and 80.6% (95% CI: 72.7%–88.4%) for SCD vs. NC. Conclusion: The developed screening method based on resting-state prefrontal FC measured by fNIRS and machine learning may help predict early-stage cognitive impairment. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
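Entry 39 classifies groups from resting-state prefrontal functional connectivity plus neuropsychological scores. A minimal sketch of that kind of pipeline on synthetic data follows: channel-wise correlation as FC strength, upper-triangle values concatenated with test scores, and a cross-validated linear SVM. The channel count, subject count, two-group labels, and classifier settings are assumptions, not the study's.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n_subjects, n_channels, n_timepoints = 60, 16, 300

X, y = [], []
for s in range(n_subjects):
    signals = rng.standard_normal((n_channels, n_timepoints))   # synthetic fNIRS time series
    fc = np.corrcoef(signals)                                    # functional connectivity matrix
    iu = np.triu_indices(n_channels, k=1)                        # unique channel pairs
    neuro_scores = rng.random(3)                                 # synthetic neuropsychological scores
    X.append(np.concatenate([fc[iu], neuro_scores]))
    y.append(s % 2)                                              # synthetic two-group labels
X, y = np.array(X), np.array(y)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```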
40. Artificial Intelligence for Adenoma and Polyp Detection During Screening and Surveillance Colonoscopy: A Randomized-Controlled Trial.
- Author
Alali, Ali A., Alhashmi, Ahmad, Alotaibi, Nawal, Ali, Nargess, Alali, Maryam, and Alfadhli, Ahmad
- Subjects
COMPUTER-aided diagnosis, ADENOMATOUS polyps, INFLAMMATORY bowel diseases, ADENOMA, COLON cancer
- Abstract
Background: Colorectal cancer (CRC) is the second leading cause of cancer death in Kuwait. The effectiveness of colonoscopy in preventing CRC is dependent on a high adenoma detection rate (ADR). Computer-aided detection (CADe) can identify and characterize polyps in real time and differentiate benign from neoplastic polyps, but its role in screening colonoscopy remains unclear. Methods: This was a randomized-controlled trial (RCT) enrolling patients 45 years of age or older presenting for outpatient screening or surveillance colonoscopy (Kuwait clinical trial registration number 2047/2022). Patients with a history of inflammatory bowel disease, alarm symptoms, familial polyposis syndrome, colon resection, or poor bowel preparation were excluded. Patients were randomly assigned to either high-definition white-light (HD-WL) colonoscopy (standard of care) or HD-WL colonoscopy with the CADe system. The primary outcome was ADR. The secondary outcomes included polyp detection rate (PDR), adenoma per colonoscopy (APC), polyp per colonoscopy (PPC), and accuracy of polyp characterization. Results: From 1 September 2022 to 1 March 2023, 102 patients were included and allocated to either the HD-WL colonoscopy group (n = 51) or CADe group (n = 51). The mean age was 52.8 years (SD 8.2), and males represented 50% of the cohort. Screening for CRC accounted for 94.1% of all examinations, while the remaining patients underwent surveillance colonoscopy. A total of 121 polyps were detected with an average size of 4.18 mm (SD 5.1), the majority being tubular adenomas with low-grade dysplasia (47.1%) and hyperplastic polyps (46.3%). There was no difference in the overall bowel preparation, insertion and withdrawal times, and adverse events between the two arms. ADR (primary outcome) was non-significantly higher in the CADe group compared to the HD colonoscopy group (47.1% vs. 37.3%, p = 0.3). Among the secondary outcomes, PDR (78.4% vs. 56.8%, p = 0.02) and PPC (1.35 vs. 0.96, p = 0.04) were significantly higher in the CADe group, but APC was not (0.75 vs. 0.51, p = 0.09). Accuracy in characterizing polyp histology was similar in both groups. Conclusions: In this RCT, the artificial intelligence system showed a non-significant trend towards improving ADR among Kuwaiti patients undergoing screening or surveillance colonoscopy compared to HD-WL colonoscopy alone, while it significantly improved the detection of diminutive polyps. A larger multicenter study is required to detect the true effect of CADe on the detection of adenomas. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
41. Deep Transfer Learning for Classification of Late Gadolinium Enhancement Cardiac MRI Images into Myocardial Infarction, Myocarditis, and Healthy Classes: Comparison with Subjective Visual Evaluation.
- Author
Ben Khalifa, Amani, Mili, Manel, Maatouk, Mezri, Ben Abdallah, Asma, Abdellali, Mabrouk, Gaied, Sofiene, Ben Ali, Azza, Lahouel, Yassir, Bedoui, Mohamed Hedi, and Zrig, Ahmed
- Subjects
COMPUTER-aided diagnosis, CARDIAC magnetic resonance imaging, MAGNETIC resonance imaging, MYOCARDIAL infarction, CARDIAC imaging
- Abstract
Background/Objectives: To develop a computer-aided diagnosis (CAD) method for the classification of late gadolinium enhancement (LGE) cardiac MRI images into myocardial infarction (MI), myocarditis, and healthy classes using a fine-tuned VGG16 model hybridized with multi-layer perceptron (MLP) (VGG16-MLP) and assess our model's performance in comparison to various pre-trained base models and MRI readers. Methods: This study included 361 LGE images for MI, 222 for myocarditis, and 254 for the healthy class. The left ventricle was extracted automatically using a U-net segmentation model on LGE images. Fine-tuned VGG16 was performed for feature extraction. A spatial attention mechanism was implemented as a part of the neural network architecture. The MLP architecture was used for the classification. The evaluation metrics were calculated using a separate test set. To compare the VGG16 model's performance in feature extraction, various pre-trained base models were evaluated: VGG19, DenseNet121, DenseNet201, MobileNet, InceptionV3, and InceptionResNetV2. The Support Vector Machine (SVM) classifier was evaluated and compared to MLP for the classification task. The performance of the VGG16-MLP model was compared with a subjective visual analysis conducted by two blinded independent readers. Results: The VGG16-MLP model allowed high-performance differentiation between MI, myocarditis, and healthy LGE cardiac MRI images. It outperformed the other tested models with 96% accuracy, 97% precision, 96% sensitivity, and 96% F1-score. Our model surpassed the accuracy of Reader 1 by 27% and Reader 2 by 17%. Conclusions: Our study demonstrated that the VGG16-MLP model permits accurate classification of MI, myocarditis, and healthy LGE cardiac MRI images and could be considered a reliable computer-aided diagnosis approach specifically for radiologists with limited experience in cardiovascular imaging. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
42. Explainable AI in Diagnostic Radiology for Neurological Disorders: A Systematic Review, and What Doctors Think About It.
- Author
Hafeez, Yasir, Memon, Khuhed, AL-Quraishi, Maged S., Yahya, Norashikin, Elferik, Sami, and Ali, Syed Saad Azhar
- Subjects
COMPUTER-aided diagnosis, MEDICAL personnel, POSITRON emission tomography, MAGNETIC resonance imaging, ARTIFICIAL intelligence
- Abstract
Background: Artificial intelligence (AI) has recently made unprecedented contributions in every walk of life, but it has not been able to work its way into diagnostic medicine and standard clinical practice yet. Although data scientists, researchers, and medical experts have been working in the direction of designing and developing computer aided diagnosis (CAD) tools to serve as assistants to doctors, their large-scale adoption and integration into the healthcare system still seem far-fetched. Diagnostic radiology is no exception. Imaging techniques like magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET) scans have been widely and very effectively employed by radiologists and neurologists for the differential diagnoses of neurological disorders for decades, yet no AI-powered systems to analyze such scans have been incorporated into the standard operating procedures of healthcare systems. Why? It is absolutely understandable that in diagnostic medicine, precious human lives are on the line, and hence there is no room even for the tiniest of mistakes. Nevertheless, with the advent of explainable artificial intelligence (XAI), the old-school black boxes of deep learning (DL) systems have been unraveled. Would XAI be the turning point for medical experts to finally embrace AI in diagnostic radiology? This review is a humble endeavor to find the answers to these questions. Methods: In this review, we present the journey and contributions of AI in developing systems to recognize, preprocess, and analyze brain MRI scans for differential diagnoses of various neurological disorders, with special emphasis on CAD systems embedded with explainability. A comprehensive review of the literature from 2017 to 2024 was conducted using host databases. We also present medical domain experts' opinions and summarize the challenges up ahead that need to be addressed in order to fully exploit the tremendous potential of XAI in its application to medical diagnostics and serve humanity. Results: Forty-seven studies were summarized and tabulated with information about the XAI technology and datasets employed, along with performance accuracies. The strengths and weaknesses of the studies have also been discussed. In addition, the opinions of seven medical experts from around the world have been presented to guide engineers and data scientists in developing such CAD tools. Conclusions: Current CAD research was observed to be focused on the enhancement of the performance accuracies of the DL regimens, with less attention being paid to the authenticity and usefulness of explanations. A shortage of ground truth data for explainability was also observed. Visual explanation methods were found to dominate; however, they might not be enough, and more thorough and human professor-like explanations would be required to build the trust of healthcare professionals. Special attention to these factors along with the legal, ethical, safety, and security issues can bridge the current gap between XAI and routine clinical practice. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
43. Artificial intelligence performance in ultrasound-based lymph node diagnosis: a systematic review and meta-analysis.
- Author
Han, Xinyang, Qu, Jingguo, Chui, Man-Lik, Gunda, Simon Takadiyi, Chen, Ziman, Qin, Jing, King, Ann Dorothy, Chu, Winnie Chiu-Wing, Cai, Jing, and Ying, Michael Tin-Cheung
- Subjects
CLINICAL decision support systems, COMPUTER-aided diagnosis, MACHINE learning, ARTIFICIAL intelligence, LYMPH nodes
- Abstract
Background and objectives: Accurate classification of lymphadenopathy is essential for determining the pathological nature of lymph nodes (LNs), which plays a crucial role in treatment selection. The biopsy method is invasive and carries the risk of sampling failure, while the utilization of non-invasive approaches such as ultrasound can minimize the probability of iatrogenic injury and infection. With the advancement of artificial intelligence (AI) and machine learning, the diagnostic efficiency of LNs is further enhanced. This study evaluates the performance of ultrasound-based AI applications in the classification of benign and malignant LNs. Methods: The literature research was conducted using the PubMed, EMBASE, and Cochrane Library databases as of June 2024. The quality of the included studies was evaluated using the QUADAS-2 tool. The pooled sensitivity, specificity, and diagnostic odds ratio (DOR) were calculated to assess the diagnostic efficacy of ultrasound-based AI in classifying benign and malignant LNs. Subgroup analyses were also conducted to identify potential sources of heterogeneity. Results: A total of 1,355 studies were identified and reviewed. Among these studies, 19 studies met the inclusion criteria, and 2,354 cases were included in the analysis. The pooled sensitivity, specificity, and DOR of ultrasound-based machine learning in classifying benign and malignant LNs were 0.836 (95% CI [0.805, 0.863]), 0.850 (95% CI [0.805, 0.886]), and 33.331 (95% CI [22.873, 48.57]), respectively, indicating no publication bias (p = 0.12). Subgroup analyses may suggest that the location of lymph nodes, validation methods, and type of primary tumor are the sources of heterogeneity. Conclusion: AI can accurately differentiate benign from malignant LNs. Given the widespread use of ultrasonography in diagnosing malignant LNs in cancer patients, there is significant potential for integrating AI-based decision support systems into clinical practice to enhance the diagnostic accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
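Entry 43 pools sensitivity, specificity, and the diagnostic odds ratio (DOR) across 19 studies. Such meta-analyses usually rely on bivariate or random-effects models; as a deliberately simplified illustration, the sketch below computes per-study DORs from 2x2 counts and a fixed-effect, inverse-variance pooled estimate on the log scale, with invented counts rather than the review's data.

```python
import numpy as np

# Hypothetical per-study 2x2 counts (TP, FP, FN, TN), with a 0.5 continuity correction.
studies = np.array([
    [80, 12, 15, 90],
    [45,  8, 10, 60],
    [120, 20, 18, 140],
], dtype=float) + 0.5

tp, fp, fn, tn = studies.T
log_dor = np.log((tp * tn) / (fp * fn))          # per-study log diagnostic odds ratio
var = 1 / tp + 1 / fp + 1 / fn + 1 / tn          # variance of each log DOR
w = 1 / var                                      # inverse-variance weights

pooled_log_dor = np.sum(w * log_dor) / np.sum(w) # fixed-effect pooled estimate
se = np.sqrt(1 / np.sum(w))
ci = np.exp(pooled_log_dor + np.array([-1.96, 1.96]) * se)
print(f"pooled DOR = {np.exp(pooled_log_dor):.1f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```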
44. A Radiograph Dataset for the Classification, Localization, and Segmentation of Primary Bone Tumors.
- Author
Yao, Shunhan, Huang, Yuanxiang, Wang, Xiaoyu, Zhang, Yiwen, Paixao, Ian Costa, Wang, Zhikang, Chai, Charla Lu, Wang, Hongtao, Lu, Dinggui, Webb, Geoffrey I, Li, Shanshan, Guo, Yuming, Chen, Qingfeng, and Song, Jiangning
- Subjects
MACHINE learning, COMPUTER-aided diagnosis, DEEP learning, MEDICAL sciences, CANCER-related mortality
- Abstract
Primary malignant bone tumors are the third highest cause of cancer-related mortality among patients under the age of 20. X-ray scan is the primary tool for detecting bone tumors. However, due to the varying morphologies of bone tumors, it is challenging for radiologists to make a definitive diagnosis based on radiographs. With the recent advancement in deep learning algorithms, there is a surge of interest in computer-aided diagnosis of primary bone tumors. Nonetheless, the development in this field has been hindered by the lack of publicly available X-ray datasets for bone tumors. To tackle this challenge, we established the Bone Tumor X-ray Radiograph dataset (termed BTXRD) in collaboration with multiple medical institutes and hospitals. The BTXRD dataset comprises 3,746 bone images (1,879 normal and 1,867 tumor), with clinical information and global labels available for each image, and distinct mask and annotated bounding box for each tumor instance. This publicly available dataset can support the development and evaluation of deep learning algorithms for the diagnosis of primary bone tumors. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
45. Pairwise hemorrhage-brain region interaction-driven hemorrhagic stroke assessment in CT.
- Author
Liang, Wei, Wu, Haixiong, Guo, Hongbin, Huang, Zhanyao, Liang, Shibin, Zhang, Jinhuang, Zhang, Huiling, Ma, Xiangyuan, and Xu, Zibi
- Subjects
INTRACRANIAL hemorrhage, HEMORRHAGIC stroke, COMPUTER-aided diagnosis, INTRACLASS correlation, COMPUTED tomography
- Abstract
Objective. Hemorrhagic stroke is a major global health problem requiring rapid and accurate diagnosis for effective treatment. Despite advances, current computer-aided diagnosis (CAD) frameworks rarely account for the functional impacts of hemorrhages on specific critical brain regions and lack the detailed assessments essential for precise treatment. To provide detailed insights into hemorrhages, we aim to propose a CAD framework for in-depth hemorrhagic stroke assessment in computed tomography (CT). The framework includes segmenting hemorrhages and critical brain regions, classifying intraparenchymal hemorrhage (IPH) to identify hemorrhages in critical brain regions, and detecting hemorrhage volume. Approach. To capture the complex interactions between hemorrhages and critical brain regions, we developed the pairwise hemorrhage-brain region interaction (PHRI) Network. It emphasizes a novel interaction head that integrates features of hemorrhages, brain regions, and their interdependencies, enabling the model to learn these relationships during training. In addition, a Global-Local Fusion Unit was introduced to provide image-wide contextual information, and an uncertainty-weighted loss method was utilized to simultaneously optimise the multitask framework. With institutional review board approval, an in-house hemorrhagic stroke dataset was collected, including 2,764 CT slices from 99 patients. A five-fold cross-validation was used to train and test the models. Main Results. The proposed PHRI network was experimentally validated to effectively extract hemorrhage-brain region interactions, thereby significantly outperforming several state-of-the-art models in both hemorrhage and critical brain region segmentation, with an average Dice of 0.9064 ± 0.1079 (P < 0.05), as well as in IPH classification, with an F1-Score of 0.8366. Additionally, the framework demonstrated good performance in hemorrhage volume detection, with an intraclass correlation coefficient of 0.981. Significance. This study introduces a CAD framework for hemorrhagic stroke assessment, offering a novel approach that emphasizes relationships between hemorrhages and critical brain regions. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
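Entry 45 trains its segmentation and classification heads jointly with an uncertainty-weighted loss. The abstract does not give the formulation; a common choice is the homoscedastic-uncertainty weighting of Kendall et al., where each task loss is scaled by exp(-s_i) and a learnable log-variance s_i is penalised. Below is a minimal PyTorch sketch of that generic weighting, not necessarily the PHRI network's exact version.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Combine task losses with learnable homoscedastic-uncertainty weights."""

    def __init__(self, n_tasks: int):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))  # s_i = log sigma_i^2 per task

    def forward(self, task_losses):
        total = 0.0
        for i, loss in enumerate(task_losses):
            # exp(-s_i) down-weights noisy tasks; + s_i keeps s_i from growing unbounded
            total = total + torch.exp(-self.log_vars[i]) * loss + self.log_vars[i]
        return total

# Toy usage with stand-in segmentation and classification losses.
seg_loss = torch.tensor(0.7, requires_grad=True)
cls_loss = torch.tensor(0.4, requires_grad=True)
criterion = UncertaintyWeightedLoss(n_tasks=2)
total = criterion([seg_loss, cls_loss])
total.backward()
print(float(total), criterion.log_vars.grad)
```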
46. Ischemic Stroke Lesion Segmentation on Multiparametric CT Perfusion Maps Using Deep Neural Network.
- Author
Kandpal, Ankit, Gupta, Rakesh Kumar, and Singh, Anup
- Subjects
COMPUTER-aided diagnosis, ARTIFICIAL neural networks, IMAGE segmentation, ISCHEMIC stroke, COMPUTED tomography
- Abstract
Background: Accurate delineation of lesions in acute ischemic stroke is important for determining the extent of tissue damage and the identification of potentially salvageable brain tissues. Automatic segmentation on CT images is challenging due to the poor contrast-to-noise ratio. Quantitative CT perfusion images improve the estimation of the perfusion deficit regions; however, they are limited by a poor signal-to-noise ratio. The study aims to investigate the potential of deep learning (DL) algorithms for the improved segmentation of ischemic lesions. Methods: This study proposes a novel DL architecture, DenseResU-NetCTPSS, for stroke segmentation using multiparametric CT perfusion images. The proposed network is benchmarked against state-of-the-art DL models. Its performance is assessed using the ISLES-2018 challenge dataset, a widely recognized dataset for stroke segmentation in CT images. The proposed network was evaluated on both training and test datasets. Results: The final optimized network takes three image sequences, namely CT, cerebral blood volume (CBV), and time to max (Tmax), as input to perform segmentation. The network achieved a dice score of 0.65 ± 0.19 and 0.45 ± 0.32 on the training and testing datasets. The model demonstrated a notable improvement over existing state-of-the-art DL models. Conclusions: The optimized model combines CT, CBV, and Tmax images, enabling automatic lesion identification with reasonable accuracy and aiding radiologists in faster, more objective assessments. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
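Entry 46 reports Dice scores of 0.65 and 0.45 on the ISLES-2018 training and test sets. For reference, the Dice coefficient between a predicted mask A and a ground-truth mask B is 2|A∩B| / (|A| + |B|); a minimal sketch with synthetic masks follows.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

rng = np.random.default_rng(6)
ground_truth = rng.random((128, 128)) > 0.7        # synthetic lesion mask, not ISLES data
prediction = rng.random((128, 128)) > 0.7          # synthetic model output
print(f"Dice = {dice_score(prediction, ground_truth):.3f}")
```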
47. Advanced Brain Tumor Classification in MR Images Using Transfer Learning and Pre-Trained Deep CNN Models.
- Author
Disci, Rukiye, Gurcan, Fatih, and Soylu, Ahmet
- Subjects
GLIOMAS, DIAGNOSTIC imaging, CLINICAL decision support systems, MAGNETIC resonance imaging, CONVOLUTIONAL neural networks, COMPUTER-aided diagnosis, MENINGIOMA, DEEP learning, COMPUTERS in medicine, AUTOMATION, PITUITARY tumors, MACHINE learning, BRAIN tumors, ALGORITHMS
- Abstract
Simple Summary: This study explores the use of pre-trained deep learning models for classifying brain MRI images into four categories: Glioma, Meningioma, Pituitary, and No Tumor. The study uses a publicly available Brain Tumor MRI dataset and applies transfer learning to improve diagnostic accuracy and efficiency by fine-tuning pre-trained models. Xception achieved the highest performance with a weighted accuracy of 98.73%. While the models showed promise in addressing class imbalances, challenges in improving recall for certain tumor types remain. The study highlights the potential of deep learning in transforming medical imaging and clinical diagnostics. Background/Objectives: Brain tumor classification is a crucial task in medical diagnostics, as early and accurate detection can significantly improve patient outcomes. This study investigates the effectiveness of pre-trained deep learning models in classifying brain MRI images into four categories: Glioma, Meningioma, Pituitary, and No Tumor, aiming to enhance the diagnostic process through automation. Methods: A publicly available Brain Tumor MRI dataset containing 7023 images was used in this research. The study employs state-of-the-art pre-trained models, including Xception, MobileNetV2, InceptionV3, ResNet50, VGG16, and DenseNet121, which are fine-tuned using transfer learning, in combination with advanced preprocessing and data augmentation techniques. Transfer learning was applied to fine-tune the models and optimize classification accuracy while minimizing computational requirements, ensuring efficiency in real-world applications. Results: Among the tested models, Xception emerged as the top performer, achieving a weighted accuracy of 98.73% and a weighted F1 score of 95.29%, demonstrating exceptional generalization capabilities. These models proved particularly effective in addressing class imbalances and delivering consistent performance across various evaluation metrics, thus demonstrating their suitability for clinical adoption. However, challenges persist in improving recall for the Glioma and Meningioma categories, and the black-box nature of deep learning models requires further attention to enhance interpretability and trust in medical settings. Conclusions: The findings underscore the transformative potential of deep learning in medical imaging, offering a pathway toward more reliable, scalable, and efficient diagnostic tools. Future research will focus on expanding dataset diversity, improving model explainability, and validating model performance in real-world clinical settings to support the widespread adoption of AI-driven systems in healthcare and ensure their integration into clinical workflows. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
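The transfer-learning recipe described above (a pre-trained Xception backbone adapted to four tumor classes) can be illustrated with a short, hedged sketch. The input size, rescaling, dropout rate, optimizer settings, and two-stage fine-tuning schedule below are assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch of transfer learning with a pre-trained Xception backbone for a
# 4-class brain MRI problem (Glioma, Meningioma, Pituitary, No Tumor). Dataset
# loading, augmentation, and the exact fine-tuning schedule are assumptions.
import tensorflow as tf

NUM_CLASSES = 4  # Glioma, Meningioma, Pituitary, No Tumor

base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # first stage: train only the new classification head

inputs = tf.keras.Input(shape=(299, 299, 3))
x = tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1)(inputs)  # Xception expects [-1, 1]
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# A later fine-tuning stage would unfreeze some top blocks of `base` and
# re-compile with a much smaller learning rate (e.g., 1e-5); the abstract does
# not specify these hyperparameters.
```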
48. The Three-Class Annotation Method Improves the AI Detection of Early-Stage Osteosarcoma on Plain Radiographs: A Novel Approach for Rare Cancer Diagnosis.
- Author
-
Hasei, Joe, Nakahara, Ryuichi, Otsuka, Yujiro, Nakamura, Yusuke, Ikuta, Kunihiro, Osaki, Shuhei, Hironari, Tamiya, Miwa, Shinji, Ohshika, Shusa, Nishimura, Shunji, Kahara, Naoaki, Yoshida, Aki, Fujiwara, Tomohiro, Nakata, Eiji, Kunisada, Toshiyuki, and Ozaki, Toshifumi
- Subjects
- *
OSTEOSARCOMA , *STATISTICAL models , *RESEARCH funding , *RECEIVER operating characteristic curves , *ACADEMIC medical centers , *RARE diseases , *ARTIFICIAL intelligence , *EARLY detection of cancer , *DATA curation , *CANCER patients , *DESCRIPTIVE statistics , *MAGNETIC resonance imaging , *COMPUTER-aided diagnosis , *SENSITIVITY & specificity (Statistics) - Abstract
Simple Summary: Developing effective artificial intelligence (AI) systems for rare diseases such as osteosarcoma is challenging owing to the limited available data. This study introduces a novel approach for preparing training data for AI systems that detect osteosarcoma on X-rays. Traditional methods label tumor areas as a single entity; however, our new approach divides tumor regions into three distinct classes: intramedullary, cortical, and extramedullary. This three-class annotation method enables AI systems to learn more effectively from limited datasets by incorporating detailed anatomical knowledge. This approach to data preparation resulted in more robust AI models that could detect subtle tumor changes at lower threshold values, demonstrating how strategic data annotation can enhance AI performance even with limited training samples. This methodological innovation in data preparation offers a new paradigm for developing AI systems for rare diseases, for which traditional data-driven approaches often fall short. Background/Objectives: Developing high-performance artificial intelligence (AI) models for rare diseases is challenging owing to limited data availability. This study aimed to evaluate whether a novel three-class annotation method for preparing training data could enhance AI model performance in detecting osteosarcoma on plain radiographs compared to conventional single-class annotation. Methods: We developed two annotation methods for the same dataset of 468 osteosarcoma X-rays and 378 normal radiographs: a conventional single-class annotation (1C model) and a novel three-class annotation method (3C model) that separately labeled intramedullary, cortical, and extramedullary tumor components. Both models used identical U-Net-based architectures, differing only in their annotation approaches. Performance was evaluated using an independent validation dataset. Results: Although both models achieved high diagnostic accuracy (AUC: 0.99 vs. 0.98), the 3C model demonstrated superior operational characteristics. At a standardized cutoff value of 0.2, the 3C model maintained balanced performance (sensitivity: 93.28%, specificity: 92.21%), whereas the 1C model showed compromised specificity (83.58%) despite high sensitivity (98.88%). Notably, at the 25th percentile threshold, both models showed identical false-negative rates despite markedly different cutoff values (3C: 0.661 vs. 1C: 0.985), indicating the ability of the 3C model to maintain diagnostic accuracy at substantially lower thresholds. Conclusions: This study demonstrated that anatomically informed three-class annotation can enhance AI model performance for rare disease detection without requiring additional training data. The improved stability at lower thresholds suggests that thoughtful annotation strategies can optimize AI model training, particularly in contexts where training data are limited. [ABSTRACT FROM AUTHOR] (See the illustrative sketch after this entry.)
- Published
- 2025
- Full Text
- View/download PDF
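A small sketch of the three-class annotation idea described above follows: the same toy label map is prepared either as a single tumor-vs-background target (1C) or as separate intramedullary, cortical, and extramedullary targets (3C), and the three-class output is collapsed to one tumor probability before applying the 0.2 cutoff mentioned in the entry. The label layout and the random logits are purely illustrative; the actual U-Net models are not shown.

```python
# Hedged sketch: 1C vs. 3C target preparation and collapsing a 4-channel (background
# plus three tumor components) softmax output into a single detection score.
import torch
import torch.nn.functional as F

BACKGROUND, INTRAMEDULLARY, CORTICAL, EXTRAMEDULLARY = 0, 1, 2, 3

# Toy annotation map (H x W) with the three tumor components drawn separately.
labels = torch.zeros(64, 64, dtype=torch.long)
labels[20:40, 20:30] = INTRAMEDULLARY
labels[20:40, 30:34] = CORTICAL
labels[20:40, 34:44] = EXTRAMEDULLARY

target_1c = (labels != BACKGROUND).long()              # single-class target
target_3c = F.one_hot(labels, num_classes=4).float()   # (H, W, 4) three-class target

# Stand-in 4-channel logits from a U-Net-style 3C model (random here).
logits_3c = torch.randn(1, 4, 64, 64)
probs = torch.softmax(logits_3c, dim=1)

# Collapse the three tumor channels into one detection score per pixel.
tumor_prob = probs[:, 1:].sum(dim=1)
detection = tumor_prob > 0.2      # standardized cutoff reported for the 3C model
print(target_1c.sum().item(), target_3c.shape, detection.sum().item())
```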
49. Isfahan Artificial Intelligence Event 2023: Macular Pathology Detection Competition.
- Author
-
Sedighin, Farnaz, Monemian, Maryam, Zojaji, Zahra, Montazerolghaem, Ahmadreza, Asadinia, Mohammad Amin, Mirghaderi, Seyed Mojtaba, Esfahani, Seyed Amin Naji, Kazemi, Mohammad, Mokhtari, Reza, Mohammadi, Maryam, Ramezani, Mohadese, Tajmirriahi, Mahnoosh, and Rabbani, Hossein
- Subjects
- *
MACULAR degeneration , *OPTICAL coherence tomography , *COMPUTER-aided diagnosis , *MACULAR edema , *ARTIFICIAL intelligence , *DEEP learning - Abstract
Background: Computer-aided diagnosis (CAD) methods have attracted great interest for diagnosing macular diseases over the past few decades. Artificial intelligence (AI)-based CADs offer several benefits, including speed, objectivity, and thoroughness. They are used as assistance systems in various ways, such as highlighting relevant disease indicators for doctors, providing diagnosis suggestions, and presenting similar past cases for comparison. Methods: More specifically, retinal AI-CADs have been developed to assist ophthalmologists in analyzing optical coherence tomography (OCT) images, making retinal diagnostics simpler and more accurate than before. Retinal AI-CAD technology could also extend care to patients who do not have access to a specialist. AI-based classification methods are critical tools for developing improved retinal AI-CAD technology. The Isfahan AI-2023 event organized a competition to provide objective, formal evaluations of alternative tools in this area. In this study, we describe the challenge and the most successful methods. Results: A dataset of OCT images, acquired from normal subjects, patients with diabetic macular edema, and patients with other macular disorders, was provided in a documented format. The dataset, including a labeled training set and an unlabeled test set, was made accessible to the participants. The aim of the challenge was to maximize the performance measures on the test labels. Researchers tested their algorithms and competed for the best classification results. Conclusions: The competition was organized to evaluate current AI-based classification methods in macular pathology detection. We received several submissions to our posted datasets, indicating the growing interest in AI-CAD technology. The results demonstrated that deep learning-based methods can learn the essential features of pathologic images, but considerable care must be taken in choosing and adapting appropriate models for small, imbalanced datasets. [ABSTRACT FROM AUTHOR] (See the illustrative sketch after this entry.)
- Published
- 2025
- Full Text
- View/download PDF
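The conclusion above stresses the care needed with small, imbalanced OCT datasets. One common mitigation, shown in the hedged sketch below, is inverse-frequency class weighting of the cross-entropy loss; this is a generic technique, not the approach of any particular challenge entry, and the class names and counts are hypothetical rather than taken from the challenge data.

```python
# Hedged sketch: weighting the cross-entropy loss inversely to class frequency for
# an imbalanced three-class OCT problem. Class names and counts are illustrative.
import torch
import torch.nn as nn

classes = ["normal", "diabetic_macular_edema", "other_macular_disorder"]
counts = torch.tensor([900.0, 150.0, 60.0])      # hypothetical per-class image counts

weights = counts.sum() / (len(counts) * counts)  # inverse-frequency class weights
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, len(classes))            # stand-in model outputs (batch of 8)
labels = torch.randint(0, len(classes), (8,))    # stand-in ground-truth labels
loss = criterion(logits, labels)
print({c: round(w.item(), 2) for c, w in zip(classes, weights)}, loss.item())
```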
50. Diagnosis of Autism Spectrum Disorder (ASD) by Dynamic Functional Connectivity Using GNN-LSTM.
- Author
-
Tang, Jun, Chen, Jie, Hu, Miaojun, Hu, Yao, Zhang, Zixi, and Xiao, Liuming
- Subjects
- *
GRAPH neural networks , *LONG short-term memory , *COMPUTER-aided diagnosis , *AUTISM spectrum disorders , *LARGE-scale brain networks - Abstract
Early detection of autism spectrum disorder (ASD) is particularly important given its insidious onset and the high cost of the diagnostic process. Static functional connectivity studies have achieved significant results in ASD detection. However, as clinical research deepens, growing evidence suggests that dynamic functional connectivity analysis can more comprehensively reveal the complex, variable characteristics of brain networks and their underlying mechanisms, providing more solid scientific support for computer-aided diagnosis of ASD. To overcome the lack of time-scale information in static functional connectivity analysis, this paper proposes an innovative GNN-LSTM model that combines the advantages of long short-term memory (LSTM) and graph neural networks (GNNs). The model captures the spatial features of fMRI data with a GNN and aggregates the temporal information of dynamic functional connectivity with an LSTM, generating a more comprehensive spatio-temporal feature representation of the fMRI data. A dynamic graph pooling method is then proposed to extract the final node representations from the dynamic graph representations for the classification task. To address the dependence of dynamic functional connectivity on time scale, the model introduces a jump (skip) connection mechanism to enhance information extraction between internal units and capture features at different time scales. The model achieves remarkable results on the ABIDE dataset, with accuracies of 80.4% on ABIDE I and 79.63% on ABIDE II, which strongly demonstrates the effectiveness and potential of the model for ASD detection. This study not only provides new perspectives and methods for computer-aided diagnosis of ASD but also offers useful references for research in related fields. [ABSTRACT FROM AUTHOR] (See the illustrative sketch after this entry.)
- Published
- 2025
- Full Text
- View/download PDF
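The pipeline described above (dynamic functional connectivity fed to a GNN and aggregated by an LSTM) can be sketched in simplified form: sliding-window correlation matrices from ROI time series, a plain linear encoder standing in for the paper's GNN and dynamic graph pooling, and an LSTM followed by a binary ASD/control classifier. Window length, stride, ROI count, and hidden sizes are assumptions for illustration only.

```python
# Hedged sketch of a dynamic-FC pipeline: sliding-window correlations -> per-window
# encoder (a linear layer standing in for the GNN) -> LSTM -> binary classifier.
import torch
import torch.nn as nn

def dynamic_fc(ts: torch.Tensor, win: int = 30, stride: int = 10) -> torch.Tensor:
    """ts: (T, R) ROI time series -> (num_windows, R*R) flattened correlation matrices."""
    windows = []
    for start in range(0, ts.shape[0] - win + 1, stride):
        seg = ts[start:start + win]                        # (win, R)
        corr = torch.corrcoef(seg.T)                       # (R, R) window correlation
        windows.append(torch.nan_to_num(corr).flatten())
    return torch.stack(windows)                            # (num_windows, R*R)

class DynFCClassifier(nn.Module):
    def __init__(self, n_rois: int = 20, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Linear(n_rois * n_rois, hidden)  # stand-in for the GNN
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)                   # ASD vs. control logits

    def forward(self, fc_seq: torch.Tensor) -> torch.Tensor:
        x = torch.relu(self.encoder(fc_seq))               # (B, W, hidden)
        _, (h_n, _) = self.lstm(x)                         # last hidden state
        return self.head(h_n[-1])                          # (B, 2)

ts = torch.randn(200, 20)                  # toy fMRI: 200 time points, 20 ROIs
fc_seq = dynamic_fc(ts).unsqueeze(0)       # (1, num_windows, 400)
print(DynFCClassifier()(fc_seq).shape)     # torch.Size([1, 2])
```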