73,934 results for "medical imaging"
Search Results
2. EndoDepth: A Benchmark for Assessing Robustness in Endoscopic Depth Prediction
- Author
-
Reyes-Amezcua, Ivan, Espinosa, Ricardo, Daul, Christian, Ochoa-Ruiz, Gilberto, Mendez-Vazquez, Andres, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Bhattarai, Binod, editor, Ali, Sharib, editor, Rau, Anita, editor, Caramalau, Razvan, editor, Nguyen, Anh, editor, Gyawali, Prashnna, editor, Namburete, Ana, editor, and Stoyanov, Danail, editor
- Published
- 2025
- Full Text
- View/download PDF
3. Leveraging Pre-trained Models for Robust Federated Learning for Kidney Stone Type Recognition
- Author
-
Reyes-Amezcua, Ivan, Rojas-Ruiz, Michael, Ochoa-Ruiz, Gilberto, Mendez-Vazquez, Andres, Daul, Christian, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Martínez-Villaseñor, Lourdes, editor, and Ochoa-Ruiz, Gilberto, editor
- Published
- 2025
- Full Text
- View/download PDF
4. Implementation of Morphological Fractional Order Darwinian Operator for Brain Tumour Localization
- Author
-
Ansah, Kwabena, Adevu, Wisdom Benedictus, Mensah, Joseph Agyapong, Appati, Justice Kwame, Ghosh, Ashish, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Weber, Gerhard-Wilhelm, editor, Martinez Trinidad, Jose Francisco, editor, Sheng, Michael, editor, Ramachand, Raghavendra, editor, Kharb, Latika, editor, and Chahal, Deepak, editor
- Published
- 2025
- Full Text
- View/download PDF
5. Practical and Ethical Considerations for Generative AI in Medical Imaging
- Author
-
Jha, Debesh, Rauniyar, Ashish, Hagos, Desta Haileselassie, Sharma, Vanshali, Tomar, Nikhil Kumar, Zhang, Zheyuan, Isler, Ilkin, Durak, Gorkem, Wallace, Michael, Yazici, Cemal, Berzin, Tyler, Biswas, Koushik, Bagci, Ulas, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Puyol-Antón, Esther, editor, Zamzmi, Ghada, editor, Feragen, Aasa, editor, King, Andrew P., editor, Cheplygina, Veronika, editor, Ganz-Benjaminsen, Melanie, editor, Ferrante, Enzo, editor, Glocker, Ben, editor, Petersen, Eike, editor, Baxter, John S. H., editor, Rekik, Islem, editor, and Eagleson, Roy, editor
- Published
- 2025
- Full Text
- View/download PDF
6. All You Need Is a Guiding Hand: Mitigating Shortcut Bias in Deep Learning Models for Medical Imaging
- Author
-
Boland, Christopher, Anderson, Owen, Goatman, Keith A., Hipwell, John, Tsaftaris, Sotirios A., Dahdouh, Sonia, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Puyol-Antón, Esther, editor, Zamzmi, Ghada, editor, Feragen, Aasa, editor, King, Andrew P., editor, Cheplygina, Veronika, editor, Ganz-Benjaminsen, Melanie, editor, Ferrante, Enzo, editor, Glocker, Ben, editor, Petersen, Eike, editor, Baxter, John S. H., editor, Rekik, Islem, editor, and Eagleson, Roy, editor
- Published
- 2025
- Full Text
- View/download PDF
7. Mitigating Overdiagnosis Bias in CNN-Based Alzheimer’s Disease Diagnosis for the Elderly
- Author
-
Dang, Vien Ngoc, Casamitjana, Adrià, Hernández-González, Jerónimo, Lekadir, Karim, Disease Neuroimaging Initiative, for the Alzheimer’s, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Puyol-Antón, Esther, editor, Zamzmi, Ghada, editor, Feragen, Aasa, editor, King, Andrew P., editor, Cheplygina, Veronika, editor, Ganz-Benjaminsen, Melanie, editor, Ferrante, Enzo, editor, Glocker, Ben, editor, Petersen, Eike, editor, Baxter, John S. H., editor, Rekik, Islem, editor, and Eagleson, Roy, editor
- Published
- 2025
- Full Text
- View/download PDF
8. Non-reference Quality Assessment for Medical Imaging: Application to Synthetic Brain MRIs
- Author
-
Van Eeden Risager, Karl, Gholamalizadeh, Torkan, Mehdipour Ghazi, Mostafa, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Mukhopadhyay, Anirban, editor, Oksuz, Ilkay, editor, Engelhardt, Sandy, editor, Mehrof, Dorit, editor, and Yuan, Yixuan, editor
- Published
- 2025
- Full Text
- View/download PDF
9. Diffusion Models for Unsupervised Anomaly Detection in Fetal Brain Ultrasound
- Author
-
Mykula, Hanna, Gasser, Lisa, Lobmaier, Silvia, Schnabel, Julia A., Zimmer, Veronika, Bercea, Cosmin I., Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Gomez, Alberto, editor, Khanal, Bishesh, editor, King, Andrew, editor, and Namburete, Ana, editor
- Published
- 2025
- Full Text
- View/download PDF
10. Assessing the Efficacy of Foundation Models in Pancreas Segmentation
- Author
-
Rapisarda, Emanuele, Gravagno, Alessandro Giuseppe, Calcagno, Salvatore, Giordano, Daniela, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Proietto Salanitri, Federica, editor, Viriri, Serestina, editor, Bağcı, Ulaş, editor, Tiwari, Pallavi, editor, Gong, Boqing, editor, Spampinato, Concetto, editor, Palazzo, Simone, editor, Bellitto, Giovanni, editor, Zlatintsi, Nancy, editor, Filntisis, Panagiotis, editor, Lee, Cecilia S., editor, and Lee, Aaron Y., editor
- Published
- 2025
- Full Text
- View/download PDF
11. Radiative Gaussian Splatting for Efficient X-Ray Novel View Synthesis
- Author
-
Cai, Yuanhao, Liang, Yixun, Wang, Jiahao, Wang, Angtian, Zhang, Yulun, Yang, Xiaokang, Zhou, Zongwei, Yuille, Alan, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Leonardis, Aleš, editor, Ricci, Elisa, editor, Roth, Stefan, editor, Russakovsky, Olga, editor, Sattler, Torsten, editor, and Varol, Gül, editor
- Published
- 2025
- Full Text
- View/download PDF
12. Fully convolutional neural network-based segmentation of brain metastases: a comprehensive approach for accurate detection and localization.
- Author
-
Farghaly, Omar and Deshpande, Priya
- Subjects
MAGNETIC resonance imaging, BRAIN metastasis, COMPUTER-assisted image analysis (Medicine), CANCER cells, DIAGNOSTIC imaging
- Abstract
Brain metastases present a formidable challenge in cancer management due to the infiltration of malignant cells from distant sites into the brain. Precise segmentation of brain metastases (BM) in medical imaging is vital for treatment planning and assessment. Leveraging deep learning techniques has shown promise in automating BM identification, facilitating faster and more accurate detection. This paper aims to develop a novel deep learning model tailored for BM segmentation, addressing the limitations of current approaches. Utilizing a comprehensive dataset of annotated magnetic resonance imaging (MRI) from Stanford University, the proposed model will undergo thorough evaluation using standard performance metrics. Comparative analysis with existing segmentation methods will highlight the superior performance and efficacy of our model. The anticipated outcome of this research is a highly accurate and efficient deep learning model for brain metastasis segmentation. Such a model holds potential to enhance treatment planning and monitoring and, ultimately, to improve patient care and clinical outcomes in managing brain metastases. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
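As context for the "standard performance metrics" mentioned in the record above, the following is a minimal NumPy sketch of the Dice similarity coefficient commonly reported for segmentation tasks. It is illustrative only, not the authors' code; the mask shapes and values are assumptions.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks (1 = lesion)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Hypothetical 2D masks standing in for one MRI slice and its annotation.
prediction = np.zeros((128, 128), dtype=np.uint8)
ground_truth = np.zeros((128, 128), dtype=np.uint8)
prediction[40:80, 40:80] = 1
ground_truth[45:85, 45:85] = 1
print(f"Dice = {dice_coefficient(prediction, ground_truth):.3f}")
```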
13. A complete benchmark for polyp detection, segmentation and classification in colonoscopy images.
- Author
-
Tudela, Yael, Majó, Mireia, de la Fuente, Neil, Galdran, Adrian, Krenzer, Adrian, Puppe, Frank, Yamlahi, Amine, Tran, Thuy Nuong, Matuszewski, Bogdan J., Fitzgerald, Kerr, Bian, Cheng, Pan, Junwen, Liu, Shijle, Fernández-Esparrach, Gloria, Histace, Aymeric, and Bernal, Jorge
- Abstract
Introduction: Colorectal cancer (CRC) is one of the main causes of death worldwide. Early detection and diagnosis of its precursor lesion, the polyp, is key to reducing its mortality and improving procedure efficiency. During the last two decades, several computational methods have been proposed to assist clinicians in detection, segmentation and classification tasks, but the lack of a common public validation framework makes it difficult to determine which of them is ready to be deployed in the procedure room. Methods: This study presents a complete validation framework, and we compare several methodologies for each of the polyp characterization tasks. Results: Results show that the majority of the approaches are able to provide good performance for the detection and segmentation tasks, but that there is room for improvement regarding polyp classification. Discussion: While studies show promising results in assisting polyp detection and segmentation tasks, further research should be done on the classification task to obtain reliable results that assist clinicians during the procedure. The presented framework provides a standardized method for evaluating and comparing different approaches, which could facilitate the identification of clinically ready assisting methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Novel large empirical study of deep transfer learning for COVID-19 classification based on CT and X-ray images.
- Author
-
Almutaani, Mansour, Turki, Turki, and Taguchi, Y.-H.
- Abstract
The early and highly accurate prediction of COVID-19 based on medical images can speed up the diagnostic process and thereby mitigate disease spread; therefore, developing AI-based models is an inevitable endeavor. The presented work, to our knowledge, is the first to expand the model space and identify a better performing model among 10,000 constructed deep transfer learning (DTL) models as follows. First, we downloaded and processed 4481 CT and X-ray images pertaining to COVID-19 and non-COVID-19 patients, obtained from the Kaggle repository. Second, we provided the processed images as inputs to four deep learning models (ConvNeXt, EfficientNetV2, DenseNet121, and ResNet34) pre-trained on more than a million images from the ImageNet database, in which we froze the convolutional and pooling layers pertaining to the feature extraction part while unfreezing and training the densely connected classifier with the Adam optimizer. Third, we generated and took a majority vote over all two-, three-, and four-model combinations of the four DTL models, resulting in 11 DTL models. Then, we combined the 11 DTL models, followed by consecutively generating and taking majority votes of the resulting DTL models. Finally, we selected the best-performing DTL models from the full model space. Experimental results on the whole datasets using five-fold cross-validation demonstrate that the best generated DTL model, named HC, achieves the best AUC of 0.909 when applied to the CT dataset, while ConvNeXt yielded a marginally higher AUC of 0.933, compared to 0.93 for HX, when considering the X-ray dataset. These promising results set the foundation for promoting the large generation of models (LGM) in AI. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
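The record above builds its ensembles by majority voting over model predictions. The sketch below shows a generic hard majority vote over per-model class labels; it is an illustration of the general technique, not the authors' pipeline, and the prediction values are placeholders.

```python
import numpy as np

def majority_vote(predictions: np.ndarray) -> np.ndarray:
    """Hard majority vote.

    predictions: array of shape (n_models, n_samples) holding integer class
    labels (e.g. 0 = non-COVID-19, 1 = COVID-19). Ties resolve to the
    smallest label via argmax on the bincount.
    """
    n_models, n_samples = predictions.shape
    n_classes = int(predictions.max()) + 1
    voted = np.empty(n_samples, dtype=int)
    for j in range(n_samples):
        voted[j] = np.bincount(predictions[:, j], minlength=n_classes).argmax()
    return voted

# Three hypothetical base models voting on five images.
preds = np.array([[1, 0, 1, 1, 0],
                  [1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0]])
print(majority_vote(preds))   # -> [1 1 1 1 0]
```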
15. On the estimation of hip joint centre location with incomplete bone ossification for foetus-specific neuromusculoskeletal modeling.
- Author
-
Ferrandini, Morgane and Dao, Tien-Tuan
- Subjects
RIGID dynamics, HIP joint, FEMUR head, CARTESIAN coordinates, COMPUTED tomography
- Abstract
Childbirth is a complex physiological process in which a foetal neuromusculoskeletal model is of great importance to develop realistic delivery simulations and associated complication analyses. However, the estimation of the hip joint centre (HJC) in foetuses remains a challenging issue. Thus, this paper aims to propose and evaluate a new approach to locate the HJC in foetuses. Hip CT-scans from 25 children (F = 11, age = 5.5 ± 2.6 years, height = 117 ± 21 cm, mass = 26 ± 9.5 kg) were used to propose and evaluate the novel acetabulum sphere fitting process to locate the HJC. This new approach using the acetabulum surface was applied to a population of 57 post-mortem foetal CT scans to locate the HJC as well as to determine associated regression equations using multiple linear regression. In the results, the average distance between the HJC located using acetabulum sphere fitting and femoral head sphere fitting in children was 1.5 ± 0.7 mm. The average prediction error using our developed foetal HJC regression equations was 3.0 ± 1.5 mm, even though the equation for the x coordinate had a poor R2 value (R2 = 0.488). The present study suggests that the use of the acetabulum sphere fitting approach is a valid and accurate method to locate the HJC in children and can then be extrapolated to obtain an estimation of the HJC in foetuses with incomplete bone ossification. Therefore, the present paper can be used as a guideline for foetus-specific neuromusculoskeletal modelling. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
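The record above locates the hip joint centre by fitting a sphere to the acetabular surface. Below is a minimal linear least-squares sphere fit in NumPy, shown only to illustrate the general fitting technique; the point cloud, noise level, and units are assumptions, not the study's data.

```python
import numpy as np

def fit_sphere(points: np.ndarray):
    """Least-squares sphere fit to an (N, 3) array of surface points.

    Uses the linear (algebraic) formulation
        x^2 + y^2 + z^2 = 2ax + 2by + 2cz + (r^2 - a^2 - b^2 - c^2)
    and solves for the centre (a, b, c) and radius r.
    """
    A = np.column_stack([2.0 * points, np.ones(len(points))])
    f = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, f, rcond=None)
    centre, d = sol[:3], sol[3]
    radius = np.sqrt(d + centre @ centre)
    return centre, radius

# Hypothetical noisy points sampled from a spherical surface (units: mm).
rng = np.random.default_rng(0)
true_centre, true_radius = np.array([10.0, -5.0, 30.0]), 22.0
directions = rng.normal(size=(200, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
pts = true_centre + true_radius * directions + rng.normal(scale=0.5, size=(200, 3))
centre, radius = fit_sphere(pts)
print(centre.round(2), round(radius, 2))  # recovers approximately the true values
```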
16. Transformer Connections: Improving Segmentation in Blurred Near‐Infrared Blood Vessel Image in Different Depth.
- Author
-
Wang, Jiazhe, Shimizu, Koichi, and Yoshie, Osamu
- Subjects
CONVOLUTIONAL neural networks, TRANSFORMER models, ARTIFICIAL intelligence, DEEP learning, IMAGE segmentation, RETINAL blood vessels
- Abstract
High‐fidelity segmentation of blood vessels plays a pivotal role in numerous biomedical applications, such as injection assistance, cancer detection, various surgeries, and vein authentication. Near‐infrared (NIR) transillumination imaging is an effective and safe method to visualize the subcutaneous blood vessel network. However, such images are severely blurred because of the light scattering in body tissues. Inspired by the Vision Transformer model, this paper proposes a novel deep learning network known as transformer connection (TRC)‐Unet to capture global blurred and local clear correlations while using multi‐layer attention. Our method mainly consists of two blocks, thereby aiming to remap skip connection information flow and fuse different domain features. Specifically, the TRC extracts global blurred information from multiple layers and suppresses scattering to increase the clarity of vessel features. Transformer feature fusion eliminates the domain gap between the highly semantic feature maps of the convolutional neural network backbone and the adaptive self‐attention maps of TRCs. Benefiting from the long‐range dependencies of transformers, we achieved competitive results in relation to various competing methods on different data sets, including retinal vessel segmentation, simulated blur image segmentation, and real NIR blood vessel image segmentation. Moreover, our method remarkably improved the segmentation results of simulated blur image data sets and a real NIR vessel image data set. The quantitative results of ablation studies and visualizations are also reported to demonstrate the superiority of the TRC‐Unet design. © 2024 The Author(s). IEEJ Transactions on Electrical and Electronic Engineering published by Institute of Electrical Engineers of Japan and Wiley Periodicals LLC. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
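The transformer connections described in the record above rely on self-attention to capture long-range dependencies. The snippet below is a minimal NumPy sketch of single-head scaled dot-product self-attention, the core primitive, and is not a reproduction of the TRC-Unet blocks; the token count and feature dimension are arbitrary assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head scaled dot-product attention (illustrative only)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V, weights

# Hypothetical token embeddings for a row of feature-map positions.
rng = np.random.default_rng(1)
tokens = rng.normal(size=(6, 16))                        # 6 positions, 16-d features
out, attn = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(out.shape, attn.sum(axis=-1))                      # (6, 16), each row sums to 1
```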
17. Convolutional neural network (CNN) and federated learning-based privacy preserving approach for skin disease classification.
- Author
-
Divya, Anand, Niharika, and Sharma, Gaurav
- Subjects
CONVOLUTIONAL neural networks, FEDERATED learning, SKIN disease diagnosis, NOSOLOGY, SKIN diseases
- Abstract
This research presents a study on the classification of human skin diseases using medical imaging, with a focus on data privacy preservation. Skin disease diagnosis is primarily done visually and can be challenging due to varying colors and the complex presentation of diseases. The proposed solution involves an image dataset with seven classes of skin disease, a convolutional neural network (CNN) model, and image augmentation to increase dataset size and model generalization. The suggested CNN model attained an average precision of 86% and an average recall of 81% across all seven classes of skin diseases. To safeguard the privacy of the data, a federated learning method was used, in which the information was split among 500, 1000, and 2000 users. With the proposed scheme, which is based on a CNN for disease classification combined with federated learning, the average accuracy was 82.42%, 87.26%, and 93.25% for the different numbers of clients. The findings show that skin illnesses may be effectively categorized by employing a CNN-based approach coupled with federated learning, without compromising the confidentiality of patient data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
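The record above trains a shared model across hundreds of clients via federated learning. The sketch below shows a generic FedAvg-style aggregation step, weighting each client's parameters by its local sample count; it illustrates the general idea only and is not the authors' exact scheme, and the tiny "models" are placeholders.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average each parameter tensor across clients,
    weighted by the number of local samples (illustrative sketch only)."""
    total = float(sum(client_sizes))
    coeffs = np.array(client_sizes, dtype=float) / total
    aggregated = []
    for layer_params in zip(*client_weights):             # iterate layer by layer
        stacked = np.stack(layer_params)                   # (n_clients, ...) tensor
        aggregated.append(np.tensordot(coeffs, stacked, axes=1))  # weighted mean
    return aggregated

# Two hypothetical clients, each holding a tiny two-layer model.
client_a = [np.ones((3, 3)), np.zeros(3)]
client_b = [np.full((3, 3), 3.0), np.ones(3)]
global_model = federated_average([client_a, client_b], client_sizes=[100, 300])
print(global_model[0][0, 0], global_model[1][0])           # 2.5 and 0.75
```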
18. Diagnostic evaluation of blunt chest trauma by imaging-based application of artificial intelligence.
- Author
-
Zhao, Tingting, Meng, Xianghong, Wang, Zhi, Hu, Yongcheng, Fan, Hongxing, Han, Jun, Zhu, Nana, and Niu, Feige
- Abstract
Artificial intelligence (AI) is becoming increasingly integral in clinical practice, such as during imaging tasks associated with the diagnosis and evaluation of blunt chest trauma (BCT). Due to significant advances in imaging-based deep learning, recent studies have demonstrated the efficacy of AI in the diagnosis of BCT, with a focus on rib fractures, pulmonary contusion, hemopneumothorax and others, demonstrating significant clinical progress. However, the complicated nature of BCT presents challenges in providing a comprehensive diagnosis and prognostic evaluation, and current deep learning research concentrates on specific clinical contexts, limiting its utility in addressing BCT intricacies. Here, we provide a review of the available evidence surrounding the potential utility of AI in BCT, and additionally identify the challenges impeding its development. This review offers insights on how to optimize the role of AI in the diagnostic evaluation of BCT, which can ultimately enhance patient care and outcomes in this critical clinical domain. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Monte Carlo methods for medical imaging research.
- Author
-
Lee, Hoyeon
- Abstract
In radiation-based medical imaging research, computational modeling methods are used to design and validate imaging systems and post-processing algorithms. Monte Carlo (MC) methods are widely used for this computational modeling, as they can model the systems accurately and intuitively by sampling interactions between particles and the imaging subject from known probability distributions. This article reviews the physics behind Monte Carlo methods, their applications in medical imaging, and the MC codes available for medical imaging research. Additionally, potential research areas related to Monte Carlo methods for medical imaging are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
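To illustrate the sampling idea summarized in the record above, here is a toy Monte Carlo estimate of photon transmission through a uniform slab, where free path lengths are drawn from the exponential distribution set by the linear attenuation coefficient. The coefficient and thickness are arbitrary assumptions, and real MC codes model far more physics (scatter, energy dependence, geometry).

```python
import numpy as np

def transmitted_fraction(mu: float, thickness: float, n_photons: int = 100_000,
                         seed: int = 0) -> float:
    """Toy Monte Carlo estimate of the fraction of photons crossing a slab
    without interacting; free paths ~ Exponential(1/mu)."""
    rng = np.random.default_rng(seed)
    free_paths = rng.exponential(scale=1.0 / mu, size=n_photons)
    return float(np.mean(free_paths > thickness))

mu, thickness = 0.2, 5.0                     # hypothetical mu (1/cm) and 5 cm slab
estimate = transmitted_fraction(mu, thickness)
analytic = np.exp(-mu * thickness)           # Beer-Lambert law for comparison
print(f"MC: {estimate:.4f}  analytic: {analytic:.4f}")
```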
20. Transformers-based architectures for stroke segmentation: a review.
- Author
-
Zafari-Ghadim, Yalda, Rashed, Essam A., Mohamed, Amr, and Mabrok, Mohamed
- Abstract
Stroke remains a significant global health concern, necessitating precise and efficient diagnostic tools for timely intervention and improved patient outcomes. The emergence of deep learning methodologies has transformed the landscape of medical image analysis. Recently, Transformers, initially designed for natural language processing, have exhibited remarkable capabilities in various computer vision applications, including medical image analysis. This comprehensive review aims to provide an in-depth exploration of the cutting-edge Transformer-based architectures applied in the context of stroke segmentation. It commences with an exploration of stroke pathology, imaging modalities, and the challenges associated with accurate diagnosis and segmentation. Subsequently, the review delves into the fundamental ideas of Transformers, offering detailed insights into their architectural intricacies and the underlying mechanisms that empower them to effectively capture complex spatial information within medical images. The existing literature is systematically categorized and analyzed, discussing various approaches that leverage Transformers for stroke segmentation. A critical assessment is provided, highlighting the strengths and limitations of these methods, including considerations of performance and computational efficiency. Additionally, this review explores potential avenues for future research and development. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Computed tomography technologies to measure key structural features of polymeric biomedical implants from bench to bedside.
- Author
-
Pawelec, Kendell M., Schoborg, Todd A., and Shapiro, Erik M.
- Abstract
Implanted polymeric devices, designed to encourage tissue regeneration, require porosity. However, characterizing porosity, which affects many functional device properties, is non‐trivial. Computed tomography (CT) is a quick, versatile, and non‐destructive way to gain 3D structural information, yet various CT technologies, such as benchtop, preclinical and clinical systems, all have different capabilities. As system capabilities determine the structural information that can be obtained, seamless monitoring of key device features through all stages of clinical translation must be engineered intentionally. Therefore, in this study we tested feasibility of obtaining structural information in pre‐clinical systems and high‐resolution micro‐CT (μCT) under physiological conditions. To overcome the low CT contrast of polymers in hydrated environments, radiopaque nanoparticle contrast agent was incorporated into porous devices. The size of resolved features in porous structures is highly dependent on the resolution (voxel size) of the scan. As the voxel size of the CT scan increased (lower resolution) from 5 to 50 μm, the measured pore size was overestimated, and percentage porosity was underestimated by nearly 50%. With the homogeneous introduction of nanoparticles, changes to device structure could be quantified in the hydrated state, including at high‐resolution. Biopolymers had significant structural changes post‐hydration, including a mean increase of 130% in pore wall thickness that could potentially impact biological response. By incorporating imaging capabilities into polymeric devices, CT can be a facile way to monitor devices from initial design stages through to clinical translation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Advanced federated ensemble internet of learning approach for cloud based medical healthcare monitoring system.
- Author
-
Khan, Rahim, Taj, Sher, Ma, Xuefei, Noor, Alam, Zhu, Haifeng, Khan, Javed, Khan, Zahid Ullah, and Khan, Sajid Ullah
- Abstract
Medical imaging machines serve as a valuable tool to monitor and diagnose a variety of diseases. However, manual and centralized interpretation is both error-prone and time-consuming, and centralized systems are exposed to malicious attacks. Numerous diagnostic algorithms have been developed to improve precision and prevent poisoning attacks by integrating symptoms, test methods, and imaging data. In today's digital technology world, however, a global cloud-based diagnostic artificial intelligence model is needed that is efficient in diagnosis, prevents poisoning attacks, and can be used for multiple purposes. We propose the Healthcare Federated Ensemble Internet of Learning Cloud Doctor System (FDEIoL) model, which integrates different Internet of Things (IoT) devices to provide precise and accurate interpretation without poisoning-attack problems, thereby facilitating IoT-enabled remote patient monitoring for smart healthcare systems. Furthermore, the FDEIoL system model uses a federated ensemble learning strategy to provide an automatic, up-to-date global prediction model based on input local models from medical specialists. This assures biomedical security by safeguarding patient data and preserving the integrity of diagnostic processes. The FDEIoL system model utilizes local model feature selection to discriminate between malicious and non-malicious local models, and its ensemble strategies use positive and negative samples to optimize performance on the test dataset, enhancing its capability for remote patient monitoring. The FDEIoL system model achieved an exceptional accuracy rate of 99.24% on the Chest X-ray dataset and 99.0% on the MRI dataset of brain tumors compared to centralized models, demonstrating its ability for precision diagnosis in IoT-enabled healthcare systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Analysis of integration of IoMT with blockchain: issues, challenges and solutions.
- Author
-
Mazhar, Tehseen, Shah, Syed Faisal Abbas, Inam, Syed Azeem, Awotunde, Joseph Bamidele, Saeed, Mamoon M., and Hamam, Habib
- Subjects
MACHINE learning, ARTIFICIAL intelligence, DATA privacy, DRUG discovery, MEDICAL personnel
- Abstract
The incorporation of Artificial Intelligence (AI) into the fields of Neurosurgery and Neurology has transformed the landscape of the healthcare industry. The present study describes seven dimensions of AI that have transformed the way of providing care, diagnosing, and treating patients. It has exhibited unparalleled accuracy in analyzing complex medical imaging data and expediting precise diagnoses of neurological conditions. It has also enabled personalized treatment plans by harnessing patient-specific data and genetic information, promising more effective therapies. For instance, AI-powered surgical robots have brought precision and remote capabilities to neurosurgical procedures, reducing human error. In AI, machine learning models predict disease progression, optimizing resource allocation and patient care, whereas wearable devices with AI provide continuous neurological monitoring, and enable early intervention for chronic conditions. It has also accelerated drug discovery by analyzing vast datasets, potentially leading to breakthrough therapies. Chatbots and virtual assistants powered by AI, enhance patient engagement and adherence to treatment plans. It holds promise in further personalization of care, augmented decision-making, earlier intervention, and the development of groundbreaking treatments. The present study mainly focuses on the incorporation of blockchain technology and provides a reasonable understanding of the associated issues and challenges along with its solutions. It will allow AI and healthcare professionals to advance the field and contribute towards the improvement of an individual's well-being when facing neurological challenges. Article Highlights: The study explores and understands the characteristics of blockchain technology for its implementation in the healthcare industry to strengthen data privacy. The study discusses the importance of standardization and compliance in the development and integration of blockchain technology into IoT-based systems to ensure data security and management. The study also explores the challenges and opportunities of blockchain integration in IoT and ways for addressing these challenges. The study addresses the benefits of handling security issues in the healthcare industry and proposes the benefits of blockchain technology integration into IoT systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Innovative Deep Learning Architecture for the Classification of Lung and Colon Cancer From Histopathology Images.
- Author
-
Said, Menatalla M. R., Islam, Md. Sakib Bin, Sumon, Md. Shaheenur Islam, Vranic, Semir, Al Saady, Rafif Mahmood, Alqahtani, Abdulrahman, Chowdhury, Muhammad E. H., Pedersen, Shona, and Chen, Honggang
- Subjects
CONVOLUTIONAL neural networks, COLON cancer, RECEIVER operating characteristic curves, LUNG cancer, ARTIFICIAL intelligence, DEEP learning
- Abstract
The increasing prevalence of colon and lung cancer presents a considerable challenge to healthcare systems worldwide, emphasizing the critical necessity for early and accurate diagnosis to enhance patient outcomes. The precision of diagnosis heavily relies on the expertise of histopathologists, constituting a demanding task. The health and well‐being of patients are jeopardized in the absence of adequately trained histopathologists, potentially leading to misdiagnoses, unnecessary treatments, and tests, resulting in the inefficient utilization of healthcare resources. However, with substantial technological advancements, deep learning (DL) has emerged as a potent tool in clinical settings, particularly in the realm of medical imaging. This study leveraged the LC25000 dataset, encompassing 25,000 images of lung and colon tissue, introducing an innovative approach by employing a self‐organized operational neural network (Self‐ONN) to accurately detect lung and colon cancer in histopathology images. Subsequently, our novel model underwent comparison with five pretrained convolutional neural network (CNN) models: MobileNetV2‐SelfMLP, Resnet18‐SelfMLP, DenseNet201‐SelfMLP, InceptionV3‐SelfMLP, and MobileViTv2_200‐SelfMLP, where each multilayer perceptron (MLP) was replaced with Self‐MLP. The models' performance was meticulously assessed using key metrics such as precision, recall, F1 score, accuracy, and area under the receiver operating characteristic (ROC) curve. The proposed model demonstrated exceptional overall accuracy, precision, sensitivity, F1 score, and specificity, achieving 99.74%, 99.74%, 99.74%, 99.74%, and 99.94%, respectively. This underscores the potential of artificial intelligence (AI) to significantly enhance diagnostic precision within clinical settings, portraying a promising avenue for improving patient care and outcomes. The synopsis of the literature provides a thorough examination of several DL and digital image processing methods used in the identification of cancer, with a primary emphasis on lung and colon cancer. The experiments use the LC25000 dataset, which consists of 25,000 photos, for the purposes of training and testing. Various techniques, such as CNNs, transfer learning, ensemble models, and lightweight DL architectures, have been used to accomplish accurate categorization of cancer tissue. Various investigations regularly show exceptional performance, with accuracy rates ranging from 96.19% to 99.97%. DL models such as EfficientNetV2, DHS‐CapsNet, and CNN‐based architectures such as VGG16 and GoogleNet variations have shown remarkable performance in obtaining high levels of accuracy. In addition, methods such as SSL and lightweight DL models provide encouraging outcomes in effectively managing large datasets. In general, the research emphasizes the efficacy of DL methods in successfully diagnosing cancer from histopathological pictures. It therefore indicates that DL has the potential to greatly improve medical diagnostic techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Unnecessary diagnostic imaging requested by medical students during a first day of residency simulation: an explorative study.
- Author
-
Gärtner, Julia, Bußenius, Lisa, Prediger, Sarah, and Harendza, Sigrid
- Subjects
MEDICAL students, DIAGNOSTIC imaging, MEDICAL education, ELECTRONIC systems, RADIATION exposure, SIMULATED patients
- Abstract
Background: Physicians' choice of appropriate tests in the diagnostic process is crucial for patient safety. The increased use of medical imaging has raised concerns about its potential overuse. How appropriately medical students order diagnostic tests is unknown. We explored their ordering of diagnostic imaging during a simulated first day of residency. Methods: In total, 492 undergraduate medical students participated in the simulation. After history taking with simulated patients, the students used an electronic system for requesting diagnostic tests. The analysis focused on 16 patient cases, each managed by at least 50 students. We calculated the total number of ordered images and the unnecessary radiation exposure in millisievert per patient and performed one-sample t-tests (one-tailed) with an expected mean of zero at a Bonferroni-corrected alpha level of 0.003 for the independent variable of unnecessary radiation exposure. Results: Unnecessary diagnostic imaging was ordered across all patient cases. Ultrasound, especially abdominal ultrasound, X-rays of the thorax, and abdominal CTs were notably overused in 90.9%, 80.0%, and 69.2% of all patient cases, respectively. Unnecessary requests for radiation-based imaging resulted in radiation over-exposure for nearly all patients, with 37.5% of all patients being exposed to a significant radiation overdose on average. Conclusion: Medical students' overuse of diagnostic imaging can be explained by patient-related factors like anxiety and medical factors like missing clinical information, which lead to cognitive biases in patient workup. This suggests the need for interventions to improve students' clinical decision-making and reduce cognitive biases. Investigating student-specific factors associated with overuse of diagnostic imaging would be of additional interest. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
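The analysis in the record above uses one-sample, one-tailed t-tests against an expected mean of zero at a Bonferroni-corrected alpha of 0.003. The sketch below shows that test in SciPy for one hypothetical case; the dose values are placeholders, not the study's data, and the `alternative` keyword requires SciPy 1.6 or newer.

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient unnecessary dose values (mSv) for one simulated case.
unnecessary_dose_msv = np.array([0.0, 0.1, 7.0, 0.0, 2.5, 0.0, 8.0, 1.2,
                                 0.0, 0.0, 3.3, 0.02, 0.0, 10.0, 0.0, 0.7])

# One-sample, one-tailed t-test against an expected mean of zero.
result = stats.ttest_1samp(unnecessary_dose_msv, popmean=0.0, alternative="greater")

alpha = 0.003                                 # Bonferroni-corrected level from the study
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}, "
      f"significant: {result.pvalue < alpha}")
```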
26. Mixture prior distributions and Bayesian models for robust radionuclide image processing.
- Author
-
Muyang Zhang, Aykroyd, Robert G., and Tsoumpas, Charalampos
- Subjects
STATISTICAL models, BIOLOGICAL models, POISSON distribution, DIAGNOSTIC imaging, CLUSTER analysis (Statistics), DATA analysis, PROBABILITY theory, RESEARCH evaluation, DESCRIPTIVE statistics, MICE, SIMULATION methods in education, ANIMAL experimentation, STATISTICS, DIGITAL image processing, MACHINE learning, RADIONUCLIDE imaging, ALGORITHMS, SENSITIVITY & specificity (Statistics)
- Abstract
The diagnosis of medical conditions and subsequent treatment often involves radionuclide imaging techniques. To refine localisation accuracy and improve diagnostic confidence, compared with the use of a single scanning technique, a combination of two (or more) techniques can be used but with a higher risk of misalignment. For this to be reliable and accurate, recorded data undergo processing to suppress noise and enhance resolution. A step in image processing techniques for such inverse problems is the inclusion of smoothing. Standard approaches, however, are usually limited to applying identical models globally. In this study, we propose a novel Laplace and Gaussian mixture prior distribution that incorporates different smoothing strategies with the automatic model-based estimation of mixture component weightings creating a locally adaptive model. A fully Bayesian approach is presented using multi-level hierarchical modelling and Markov chain Monte Carlo (MCMC) estimation methods to sample from the posterior distribution and hence perform estimation. The proposed methods are assessed using simulated γ-eye™ camera images and demonstrate greater noise reduction than existing methods but without compromising resolution. As well as image estimates, the MCMC methods also provide posterior variance estimates and hence uncertainty quantification takes into consideration any potential sources of variability. The use of mixture prior models, part Laplace random field and part Gaussian random field, within a Bayesian modelling approach is not limited to medical imaging applications but provides a more general framework for analysing other spatial inverse problems. Locally adaptive prior distributions provide a more realistic model, which leads to robust results and hence more reliable decision-making, especially in nuclear medicine. They can become a standard part of the toolkit of everyone working in image processing applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
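The record above combines a Laplace and Gaussian mixture prior with MCMC sampling. As a heavily simplified, one-dimensional illustration of those two ingredients (not the authors' spatial random-field model or their hierarchical sampler), the sketch below defines a mixture log-density and draws from it with a random-walk Metropolis step; the mixture weight and scales are arbitrary assumptions.

```python
import numpy as np

def log_mixture_prior(x: float, w: float = 0.5, b: float = 1.0, s: float = 1.0) -> float:
    """Log-density of w * Laplace(0, b) + (1 - w) * Normal(0, s):
    a sharp (edge-preserving) component mixed with a smooth one."""
    laplace = w * np.exp(-abs(x) / b) / (2.0 * b)
    gauss = (1.0 - w) * np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    return float(np.log(laplace + gauss))

def random_walk_metropolis(log_target, n_steps: int = 20_000, step: float = 1.0,
                           seed: int = 0) -> np.ndarray:
    """Minimal random-walk Metropolis sampler for a scalar target density."""
    rng = np.random.default_rng(seed)
    x, lp = 0.0, log_target(0.0)
    samples = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.normal()
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:    # accept/reject
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

draws = random_walk_metropolis(log_mixture_prior)
print(f"mean of draws = {draws.mean():.3f}, sd = {draws.std():.3f}")
```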
27. AFINITI: attention-aware feature integration for nuclei instance segmentation and type identification.
- Author
-
Nasir, Esha Sadia, Rasool, Shahzad, Nawaz, Raheel, and Fraz, Muhammad Moazam
- Subjects
COMPUTER-assisted image analysis (Medicine), DEEP learning, SOURCE code, DIAGNOSTIC imaging, CANCER diagnosis
- Abstract
Accurately identifying and analyzing nuclei is pivotal for both the diagnosis and examination of cancer. However, the complexity of this task arises due to the presence of overlapping and cluttered nuclei with blurred boundaries, variations in nuclei sizes and shapes, and an imbalance in the available datasets. Although current methods utilize region proposal techniques and feature encoding frameworks, they often fail to precisely identify occluded nuclei instances. We propose a model named AFINITI, which is simple and efficient, achieves high accuracy, recognizes the instance boundaries of cluttered and overlapping nuclei, and addresses class imbalance issues. Our approach utilizes nuclei pixel positional information and a novel loss function to yield accurate class information for each nucleus. Our network features a lightweight, attention-aware feature fusion architecture with separate instance probability, shape radial estimator, and classification heads. We use a compound classification loss function to assign a weighted loss to each class according to its occurrence frequency, thereby addressing the class imbalance issues. The AFINITI model outperforms current leading networks across eight major publicly available nuclei segmentation datasets, achieving up to an 8% increase in Dice Similarity Coefficient (DSc) and a 17% increase in Panoptic Quality (PQ) compared to existing techniques, demonstrating its effectiveness and potential for clinical applications. The source code and the weights of the trained model have been released to the public and can be accessed at: https://github.com/Vision-At-SEECS/AF-Net. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Transfer learning-enabled skin disease classification: the case of monkeypox detection.
- Author
-
Thorat, Rohan and Gupta, Aditya
- Subjects
MONKEYPOX, SKIN imaging, NOSOLOGY, PUBLIC health, POLYMERASE chain reaction, DEEP learning
- Abstract
In the midst of the continuing difficulties presented by the COVID-19 pandemic, the possible emergence of illnesses such as monkeypox places an additional and significant load on public health services that are already under pressure. Conventional diagnostic methods for monkeypox, reliant on polymerase chain reaction tests and biochemical assays on lesion swabs suffer from drawbacks such as patient discomfort and resource limitations, particularly in economically distressed areas of Western and Central Africa. This work investigates the application of deep learning techniques, specifically transfer learning using three trained CNN frameworks (ResNet50V2, MobileNetV2, and Xception), to accurately identify monkeypox. Utilizing the "Monkeypox Skin Lesion Dataset," images of patients' skin lesions are augmented and incorporated into the training and validation of the models. Additional layers for classifying monkeypox and non-monkeypox images are introduced. Evaluation metrics, with a focus on accuracy and F1-score, showcase the superior performance of the ResNet50V2-based model (0.9874 accuracy, F1-score of 0.99), followed by Xception (0.9546 accuracy, F1-score of 0.95), and MobileNetV2 (0.9452 accuracy, F1-score of 0.94). Comparative analysis with previous works in the field underscores the improved results achieved in this research. The proposed models offer a promising avenue for early monkeypox detection, contributing to effective preventive measures against its spread. Looking ahead, we aim to deploy the developed MobileNetV2-based model as a web application, leveraging its lightweight architecture and notable accuracy. This initiative is intended to provide people in rural areas with a cost-effective and easily accessible solution for the early detection of monkeypox, contributing to improved healthcare in resource-constrained settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
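The record above applies transfer learning by reusing ImageNet-trained CNN backbones with new classification layers. Below is a minimal Keras sketch of that general pattern with a frozen ResNet50V2 backbone; the head layers, input size, optimizer settings, and dataset objects are assumptions, not the authors' configuration, and the ImageNet weights are downloaded on first use.

```python
import tensorflow as tf

# Frozen ImageNet backbone with a new binary classification head (illustrative only).
backbone = tf.keras.applications.ResNet50V2(include_top=False,
                                            weights="imagenet",
                                            input_shape=(224, 224, 3),
                                            pooling="avg")
backbone.trainable = False                      # keep pre-trained features fixed

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # monkeypox vs. non-monkeypox
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # hypothetical datasets, not shown
```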
29. Development of brain tumor radiogenomic classification using GAN-based augmentation of MRI slices in the newly released gazi brains dataset.
- Author
-
Yurtsever, M.M.Enes, Atay, Yilmaz, Arslan, Bilgehan, and Sagiroglu, Seref
- Subjects
TUMOR classification, BRAIN tumors, DEEP learning, DATA augmentation, TRANSFORMER models
- Abstract
Significant progress has been made recently with the contribution of technological advances in studies on brain cancer. Regarding this, identifying and correctly classifying tumors is a crucial task in the field of medical imaging. The disease-related tumor classification problem, on which deep learning technologies have also become a focus, is very important in the diagnosis and treatment of the disease. The use of deep learning models has shown promising results in recent years. However, the sparsity of ground truth data in medical imaging or inconsistent data sources poses a significant challenge for training these models. The utilization of StyleGANv2-ADA is proposed in this paper for augmenting brain MRI slices to enhance the performance of deep learning models. Specifically, augmentation is applied solely to the training data to prevent any potential leakage. The StyleGanv2-ADA model is trained with the Gazi Brains 2020, BRaTS 2021, and Br35h datasets using the researchers' default settings. The effectiveness of the proposed method is demonstrated on datasets for brain tumor classification, resulting in a notable improvement in the overall accuracy of the model for brain tumor classification on all the Gazi Brains 2020, BraTS 2021, and Br35h datasets. Importantly, the utilization of StyleGANv2-ADA on the Gazi Brains 2020 Dataset represents a novel experiment in the literature. The results show that the augmentation with StyleGAN can help overcome the challenges of working with medical data and the sparsity of ground truth data. Data augmentation employing the StyleGANv2-ADA GAN model yielded the highest overall accuracy for brain tumor classification on the BraTS 2021 and Gazi Brains 2020 datasets, together with the BR35H dataset, achieving 75.18%, 99.36%, and 98.99% on the EfficientNetV2S models, respectively. This study emphasizes the potency of GANs for augmenting medical imaging datasets, particularly in brain tumor classification, showcasing a notable increase in overall accuracy through the integration of synthetic GAN data on the used datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Enhancing brain tumor detection: a novel CNN approach with advanced activation functions for accurate medical imaging analysis.
- Author
-
Kaifi, Reham
- Subjects
COMPUTER-aided diagnosis, MAGNETIC resonance imaging, COMPUTER-assisted image analysis (Medicine), CANCER diagnosis, CONVOLUTIONAL neural networks, BRAIN tumors
- Abstract
Introduction: Brain tumors are characterized by abnormal cell growth within or around the brain, posing severe health risks often associated with high mortality rates. Various imaging techniques, including magnetic resonance imaging (MRI), are commonly employed to visualize the brain and identify malignant growths. Computer-aided diagnosis tools (CAD) utilizing Convolutional Neural Networks (CNNs) have proven effective in feature extraction and predictive analysis across diverse medical imaging modalities. Methods: This study explores a CNN trained and evaluated with nine activation functions, encompassing eight established ones from the literature and a modified version of the soft sign activation function. Results: The latter demonstrates notable efficacy in discriminating between four types of brain tumors in MR images, achieving an accuracy of 97.6%. The sensitivity for glioma is 93.7%; for meningioma, it is 97.4%; for cases with no tumor, it is 98.8%; and for pituitary tumors, it reaches 100%. Discussion: In this manuscript, we propose an advanced CNN architecture that integrates a newly developed activation function. Our extensive experimentation and analysis showcase the model's remarkable ability to precisely distinguish between different types of brain tumors within a substantial and diverse dataset. The findings from our study suggest that this model could serve as an invaluable supplementary tool for healthcare practitioners, including specialized medical professionals and resident physicians, in the accurate diagnosis of brain tumors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
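The record above evaluates a CNN with several activation functions, including a modified soft sign. For reference, the sketch below shows the standard soft sign together with a purely hypothetical scaled variant; the paper's actual modification is not described in the abstract and is not reproduced here.

```python
import numpy as np

def softsign(x: np.ndarray) -> np.ndarray:
    """Standard soft sign activation: x / (1 + |x|), saturating at -1 and 1."""
    return x / (1.0 + np.abs(x))

def scaled_softsign(x: np.ndarray, alpha: float = 2.0) -> np.ndarray:
    """Hypothetical parameterised variant, shown only to illustrate how the
    saturation range can be altered (NOT the paper's modification)."""
    return alpha * x / (1.0 + np.abs(x))

x = np.linspace(-5.0, 5.0, 5)
print(softsign(x))          # values bounded in (-1, 1)
print(scaled_softsign(x))   # values bounded in (-alpha, alpha)
```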
31. 3D Printing Materials Mimicking Human Tissues after Uptake of Iodinated Contrast Agents for Anthropomorphic Radiology Phantoms.
- Author
-
Homolka, Peter, Breyer, Lara, and Semturs, Friedrich
- Subjects
MEDICAL quality control, IMAGING phantoms, TOMOSYNTHESIS, X-ray imaging, CONTRAST media, BREAST
- Abstract
(1) Background: 3D printable materials with accurately defined iodine content enable the development and production of radiological phantoms that simulate human tissues, including lesions after contrast administration in medical imaging with X-rays. These phantoms provide accurate, stable and reproducible models with defined iodine concentrations, and 3D printing allows maximum flexibility and minimal development and production time, allowing the simulation of anatomically correct anthropomorphic replication of lesions and the production of calibration and QA standards in a typical medical research facility. (2) Methods: Standard printing resins were doped with an iodine contrast agent and printed using a consumer 3D printer, both (resins and printer) available from major online marketplaces, to produce printed specimens with iodine contents ranging from 0 to 3.0% by weight, equivalent to 0 to 3.85% elemental iodine per volume, covering the typical levels found in patients. The printed samples were scanned in a micro-CT scanner to measure the properties of the materials in the range of the iodine concentrations used. (3) Results: Both mass density and attenuation show a linear dependence on iodine concentration (R2 = 1.00), allowing highly accurate, stable, and predictable results. (4) Conclusions: Standard 3D printing resins can be doped with liquids, avoiding the problem of sedimentation, resulting in perfectly homogeneous prints with accurate dopant content. Iodine contrast agents are perfectly suited to dope resins with appropriate iodine concentrations to radiologically mimic tissues after iodine uptake. In combination with computer-aided design, this can be used to produce printed objects with precisely defined iodine concentrations in the range of up to a few percent of elemental iodine, with high precision and anthropomorphic shapes. Applications include radiographic phantoms for detectability studies and calibration standards in projective X-ray imaging modalities, such as contrast-enhanced dual energy mammography (abbreviated CEDEM, CEDM, TICEM, or CESM depending on the equipment manufacturer), and 3-dimensional modalities like CT, including spectral and dual energy CT (DECT), and breast tomosynthesis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
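The record above reports a linear dependence of attenuation on iodine concentration (R2 = 1.00), which is the basis of a calibration curve. The sketch below fits such a line with NumPy and computes R2; the concentration and attenuation values are placeholders for illustration, not the study's measurements.

```python
import numpy as np

# Hypothetical calibration points: iodine concentration (% by weight) vs. measured
# attenuation (HU). These numbers are placeholders, not data from the study.
iodine_pct = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
attenuation_hu = np.array([40.0, 290.0, 545.0, 800.0, 1050.0, 1305.0, 1555.0])

slope, intercept = np.polyfit(iodine_pct, attenuation_hu, deg=1)   # linear model
predicted = slope * iodine_pct + intercept
ss_res = np.sum((attenuation_hu - predicted) ** 2)
ss_tot = np.sum((attenuation_hu - attenuation_hu.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"HU ~ {slope:.1f} * iodine% + {intercept:.1f},  R^2 = {r_squared:.4f}")
```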
32. Advances in Medical Image Segmentation: A Comprehensive Review of Traditional, Deep Learning and Hybrid Approaches.
- Author
-
Xu, Yan, Quan, Rixiang, Xu, Weiting, Huang, Yi, Chen, Xiaolong, and Liu, Fengyuan
- Subjects
COMPUTER-assisted image analysis (Medicine), RECURRENT neural networks, CONVOLUTIONAL neural networks, IMAGE processing, DIAGNOSTIC imaging, DEEP learning, IMAGE segmentation
- Abstract
Medical image segmentation plays a critical role in accurate diagnosis and treatment planning, enabling precise analysis across a wide range of clinical tasks. This review begins by offering a comprehensive overview of traditional segmentation techniques, including thresholding, edge-based methods, region-based approaches, clustering, and graph-based segmentation. While these methods are computationally efficient and interpretable, they often face significant challenges when applied to complex, noisy, or variable medical images. The central focus of this review is the transformative impact of deep learning on medical image segmentation. We delve into prominent deep learning architectures such as Convolutional Neural Networks (CNNs), Fully Convolutional Networks (FCNs), U-Net, Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and Autoencoders (AEs). Each architecture is analyzed in terms of its structural foundation and specific application to medical image segmentation, illustrating how these models have enhanced segmentation accuracy across various clinical contexts. Finally, the review examines the integration of deep learning with traditional segmentation methods, addressing the limitations of both approaches. These hybrid strategies offer improved segmentation performance, particularly in challenging scenarios involving weak edges, noise, or inconsistent intensities. By synthesizing recent advancements, this review provides a detailed resource for researchers and practitioners, offering valuable insights into the current landscape and future directions of medical image segmentation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
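Among the traditional techniques surveyed in the record above, intensity thresholding is the simplest baseline. The sketch below implements Otsu's method in plain NumPy as one concrete example of that family; the synthetic bimodal "image" is a placeholder for illustration.

```python
import numpy as np

def otsu_threshold(image: np.ndarray, n_bins: int = 256) -> float:
    """Otsu's method: choose the threshold that maximises between-class variance."""
    hist, edges = np.histogram(image.ravel(), bins=n_bins)
    prob = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0

    w0 = np.cumsum(prob)                       # background class probability
    w1 = 1.0 - w0                              # foreground class probability
    mu_cum = np.cumsum(prob * centers)
    mu_total = mu_cum[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        mu0 = mu_cum / w0
        mu1 = (mu_total - mu_cum) / w1
        between = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
    between = np.nan_to_num(between)
    return float(centers[np.argmax(between)])

# Hypothetical bimodal intensities: dark background plus a brighter structure.
rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(50, 10, 5000), rng.normal(160, 15, 2000)])
t = otsu_threshold(img)
mask = img > t                                 # binary segmentation
print(f"Otsu threshold ~ {t:.1f}")             # lies between the two modes
```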
33. Precision Segmentation of Subretinal Fluids in OCT Using Multiscale Attention-Based U-Net Architecture.
- Author
-
Karn, Prakash Kumar and Abdulla, Waleed H.
- Subjects
MACULAR degeneration, OPTICAL coherence tomography, MACULAR edema, RETINAL diseases, COMPUTER-assisted image analysis (Medicine)
- Abstract
This paper presents a deep-learning architecture for segmenting retinal fluids in patients with Diabetic Macular Oedema (DME) and Age-related Macular Degeneration (AMD). Accurate segmentation of multiple fluid types is critical for diagnosis and treatment planning, but existing techniques often struggle with precision. We propose an encoder–decoder network inspired by U-Net, processing enhanced OCT images and their edge maps. The encoder incorporates Residual and Inception modules with an autoencoder-based multiscale attention mechanism to extract detailed features. Our method shows superior performance across several datasets. On the RETOUCH dataset, the network achieved F1 Scores of 0.82 for intraretinal fluid (IRF), 0.93 for subretinal fluid (SRF), and 0.94 for pigment epithelial detachment (PED). The model also performed well on the OPTIMA and DUKE datasets, demonstrating high precision, recall, and F1 Scores. This architecture significantly enhances segmentation accuracy and edge precision, offering a valuable tool for diagnosing and managing retinal diseases. Its integration of dual-input processing, multiscale attention, and advanced encoder modules highlights its potential to improve clinical outcomes and advance retinal disease treatment. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. The Integration of Radiomics and Artificial Intelligence in Modern Medicine.
- Author
-
Maniaci, Antonino, Lavalle, Salvatore, Gagliano, Caterina, Lentini, Mario, Masiello, Edoardo, Parisi, Federica, Iannella, Giannicola, Cilia, Nicole Dalia, Salerno, Valerio, Cusumano, Giacomo, and La Via, Luigi
- Subjects
MACHINE learning, COMPUTER-assisted image analysis (Medicine), RADIOMICS, ARTIFICIAL intelligence, FEATURE extraction, DEEP learning
- Abstract
With profound effects on patient care, the role of artificial intelligence (AI) in radiomics has become a disruptive force in contemporary medicine. Radiomics, the quantitative extraction and analysis of features from medical images, offers useful imaging biomarkers that can reveal important information about the nature of diseases, how well patients respond to treatment, and patient outcomes. The use of AI techniques in radiomics, such as machine learning and deep learning, has made it possible to create sophisticated computer-aided diagnostic systems, predictive models, and decision support tools. This review examines the many uses of AI in radiomics, encompassing its role in quantitative feature extraction from medical images; machine learning, deep learning, and computer-aided diagnostic (CAD) approaches in radiomics; and the effect of radiomics and AI on improving workflow automation and efficiency and on optimizing clinical trials and patient stratification. This review also covers improvements in predictive modeling through machine learning in radiomics, multimodal integration and enhanced deep learning architectures, and regulatory and clinical adoption considerations for radiomics-based CAD. Particular emphasis is given to the enormous potential for enhancing diagnostic precision, treatment personalization, and overall patient outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Enhancing Brain Tumor Detection Through Custom Convolutional Neural Networks and Interpretability-Driven Analysis.
- Author
-
Dewage, Kavinda Ashan Kulasinghe Wasalamuni, Hasan, Raza, Rehman, Bacha, and Mahmood, Salman
- Subjects
CONVOLUTIONAL neural networks, BRAIN tumors, DIAGNOSTIC imaging, ARTIFICIAL intelligence, MEDICAL personnel
- Abstract
Brain tumor detection is crucial for effective treatment planning and improved patient outcomes. However, existing methods often face challenges, such as limited interpretability and class imbalance in medical-imaging data. This study presents a novel, custom Convolutional Neural Network (CNN) architecture, specifically designed to address these issues by incorporating interpretability techniques and strategies to mitigate class imbalance. We trained and evaluated four CNN models (proposed CNN, ResNetV2, DenseNet201, and VGG16) using a brain tumor MRI dataset, with oversampling techniques and class weighting employed during training. Our proposed CNN achieved an accuracy of 94.51%, outperforming other models in regard to precision, recall, and F1-Score. Furthermore, interpretability was enhanced through gradient-based attribution methods and saliency maps, providing valuable insights into the model's decision-making process and fostering collaboration between AI systems and clinicians. This approach contributes a highly accurate and interpretable framework for brain tumor detection, with the potential to significantly enhance diagnostic accuracy and personalized treatment planning in neuro-oncology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
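The record above mitigates class imbalance with oversampling and class weighting. The sketch below computes weights inversely proportional to class frequency using the common n_samples / (n_classes * count) heuristic; this is a generic illustration, not necessarily the exact scheme used in the study, and the label vector is hypothetical.

```python
import numpy as np

def balanced_class_weights(labels: np.ndarray) -> dict:
    """Class weights inversely proportional to class frequency
    (n_samples / (n_classes * count) heuristic)."""
    classes, counts = np.unique(labels, return_counts=True)
    n_samples, n_classes = labels.size, classes.size
    return {int(c): n_samples / (n_classes * cnt) for c, cnt in zip(classes, counts)}

# Hypothetical imbalanced label vector for a 4-class brain tumor MRI dataset
# (0 = glioma, 1 = meningioma, 2 = no tumor, 3 = pituitary).
labels = np.array([0] * 800 + [1] * 600 + [2] * 300 + [3] * 100)
print(balanced_class_weights(labels))
# Rare classes receive larger weights, so their errors count more in the loss.
```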
36. Progressive Thoracolumbar Tuberculosis in a Young Male: Diagnostic, Therapeutic, and Surgical Insights.
- Author
-
Nedelea, Dana-Georgiana, Vulpe, Diana Elena, Viscopoleanu, George, Radulescu, Alexandru Constantin, Mihailescu, Alexandra Ana, Gradinaru, Sebastian, Orghidan, Mihnea, Scheau, Cristian, Cergan, Romica, and Dragosloveanu, Serban
- Subjects
SPINAL tuberculosis, NEEDLE biopsy, IMAGE reconstruction, TUBERCULOSIS, BACKACHE
- Abstract
Objective: We present the case of a 26-year-old male with severe spinal tuberculosis of the thoracolumbar region. The patient suffered from worsening back pain over five years, initially responding to over-the-counter analgesics. Despite being offered surgery in 2019, the patient refused the intervention and subsequently experienced significant disease progression. Methods: Upon re-presentation in 2022, mild involvement of the T12-L1 vertebrae was recorded by imaging, leading to a percutaneous needle biopsy, which confirmed tuberculosis. Despite undergoing anti-tuberculous therapy for one year, the follow-up in 2024 revealed extensive infection from T10 to S1, with large psoas abscesses and a pseudo-tumoral mass of the right thigh. The patient ultimately underwent a two-stage surgical intervention: anterior resection and reconstruction of T11-L1 with an expandable cage, followed by posterior stabilization from T8-S1. Results: Postoperative recovery was uneventful, with significant pain relief and no neurological deficits. The patient was discharged on a continued anti-tuberculous regimen and remains under close surveillance. Conclusions: This paper details the challenges of diagnosing and managing severe spinal tuberculosis, with emphasis on the importance of timely intervention and multidisciplinary care. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Fully Automated Detection of the Appendix Using U-Net Deep Learning Architecture in CT Scans.
- Author
-
Baştuğ, Betül Tiryaki, Güneri, Gürkan, Yıldırım, Mehmet Süleyman, Çorbacı, Kadir, and Dandıl, Emre
- Subjects
- *
DEEP learning , *COMPUTER-assisted image analysis (Medicine) , *COMPUTED tomography , *IMAGE segmentation , *DATA augmentation , *APPENDICITIS - Abstract
Background: The accurate segmentation of the appendix with well-defined boundaries is critical for diagnosing conditions such as acute appendicitis. The manual identification of the appendix is time-consuming and highly dependent on the expertise of the radiologist. Method: In this study, we propose a fully automated approach to the detection of the appendix in CT scans using a deep learning architecture based on the U-Net with specific training parameters. The proposed U-Net architecture is trained on an annotated original dataset of abdominal CT scans to segment the appendix efficiently and with high performance. In addition, data augmentation techniques are applied to the created dataset to extend the training set. Results: In experimental studies, the proposed U-Net model is implemented using hyperparameter optimization and its performance is evaluated using key metrics to measure diagnostic reliability. The trained U-Net model segmented the appendix in CT slices with a Dice Similarity Coefficient (DSC), Volumetric Overlap Error (VOE), Average Symmetric Surface Distance (ASSD), Hausdorff Distance 95 (HD95), Precision (PRE), and Recall (REC) of 85.94%, 23.29%, 1.24 mm, 5.43 mm, 86.83%, and 86.62%, respectively. Moreover, our model outperforms other methods by leveraging the U-Net's ability to capture spatial context through encoder–decoder structures and skip connections, providing accurate segmentation outputs. Conclusions: The proposed U-Net model showed reliable performance in segmenting the appendix region, with some limitations in cases where the appendix was close to other structures. These results highlight the potential of deep learning to significantly improve clinical outcomes in appendix detection. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
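Entry 37 relies on the U-Net's encoder–decoder structure with skip connections. The sketch below is a deliberately small, generic U-Net in PyTorch, intended only to illustrate how skip connections concatenate encoder features into the decoder; the paper's actual depth, channel widths, and training parameters are not reproduced.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """A two-level U-Net: encoder -> bottleneck -> decoder with skip connections."""
    def __init__(self, in_ch=1, out_ch=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)           # 1x1 conv -> segmentation logits

    def forward(self, x):
        e1 = self.enc1(x)                                # full-resolution features
        e2 = self.enc2(self.pool(e1))                    # 1/2 resolution
        b = self.bottleneck(self.pool(e2))               # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], 1))  # skip connection from enc2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], 1)) # skip connection from enc1
        return self.head(d1)

# Hypothetical usage on a single-channel 256x256 CT slice.
model = TinyUNet()
logits = model(torch.randn(1, 1, 256, 256))
print(logits.shape)   # torch.Size([1, 1, 256, 256])
```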
38. Deep-learning-based Attenuation Correction for 68Ga-DOTATATE Whole-body PET Imaging: A Dual-center Clinical Study.
- Author
-
Lord, Mahsa Sobhi, Islamian, Jalil Pirayesh, Seyyedi, Negisa, Samimi, Rezvan, Farzanehfar, Saeed, Shahrbabk, Mahsa, and Sheikhzadeh, Peyman
- Subjects
- *
X-ray imaging , *STANDARD deviations , *DEEP learning , *DIAGNOSTIC imaging , *SIGNAL-to-noise ratio - Abstract
Objectives: Attenuation correction is a critical step in quantitative positron emission tomography (PET) imaging and presents its own challenges. However, the computed tomography (CT) acquisition used for attenuation correction and anatomical localization increases the patient's radiation dose. This study aimed to develop a deep learning model for attenuation correction of whole-body 68Ga-DOTATATE PET images. Methods: Non-attenuation-corrected and computed tomography-based attenuation-corrected (CTAC) whole-body 68Ga-DOTATATE PET images of 118 patients from two different imaging centers were used. We implemented a residual deep learning model using the NiftyNet framework. The model was trained four times and evaluated six times using the test data from the centers. The quality of the synthesized PET images was compared with the PET-CTAC images using different evaluation metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), mean square error (MSE), and root mean square error (RMSE). Results: Quantitative analysis of the four network training sessions and six evaluations revealed the highest and lowest PSNR values of (52.86±6.6) and (47.96±5.09), respectively. Similarly, the highest and lowest SSIM values were (0.99±0.003) and (0.97±0.01), respectively. Additionally, the highest RMSE and MSE values were (0.0117±0.003) and (0.0015±0.000103), and the lowest were (0.01072±0.002) and (0.000121±5.07e-5), respectively. The study found that using datasets from the same center resulted in the highest PSNR, while using datasets from different centers led to lower PSNR and SSIM values. In addition, scenarios involving datasets from both centers achieved the best SSIM and the lowest MSE and RMSE. Conclusion: The acceptable accuracy of attenuation correction of 68Ga-DOTATATE PET images using a deep learning model could potentially eliminate the need for additional X-ray-based imaging, which imposes a higher radiation dose on the patient. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
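Entry 38 scores synthesized PET images against CT-based attenuation-corrected references with PSNR, SSIM, MSE, and RMSE. The snippet below shows how such metrics can be computed with scikit-image on a pair of 2D arrays; the images here are synthetic placeholders, and the study's own evaluation pipeline is not reproduced.

```python
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

def compare_to_reference(pred: np.ndarray, ref: np.ndarray) -> dict:
    """Compare a synthesized image against its reference using PSNR, SSIM, MSE, RMSE."""
    data_range = float(ref.max() - ref.min())
    mse = mean_squared_error(ref, pred)
    return {
        "PSNR": peak_signal_noise_ratio(ref, pred, data_range=data_range),
        "SSIM": structural_similarity(ref, pred, data_range=data_range),
        "MSE": mse,
        "RMSE": float(np.sqrt(mse)),
    }

# Hypothetical example: a reference slice and a slightly noisy "synthesized" version.
rng = np.random.default_rng(1)
reference = rng.random((128, 128))
synthesized = np.clip(reference + rng.normal(0, 0.02, reference.shape), 0, 1)
print(compare_to_reference(synthesized, reference))
```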
39. Radiographers and other radiology workers' education and training in infection prevention and control: A scoping review.
- Author
-
Freihat, R., Jimenez, Y., Lewis, S., and Kench, P.
- Abstract
Infection prevention and control (IPC) is crucial in healthcare settings, particularly during pandemics like COVID-19. Radiographers play a vital role in maintaining patient safety by following IPC guidelines. However, there is concern that inadequate knowledge and practice of IPC among radiographers may compromise patient safety. Education and training programs can enhance radiographers' understanding of IPC to maintain safety in radiology departments. This scoping review aims to explore the literature on radiographers' knowledge of IPC and the effectiveness of IPC education/training programs provided to radiographers and other healthcare workers (HCWs) in the radiology department, with a specific focus on the periods before, during, and after the COVID-19 pandemic. This scoping review followed the Joanna Briggs Institute's framework. The steps involved were: defining objectives and questions, aligning inclusion criteria with objectives, planning the evidence search and extraction, searching for evidence, selecting relevant evidence, extracting evidence, analysing the evidence, presenting results, and summarising findings and noting implications. Sixty-eight articles were included. Prior to the COVID-19 pandemic, practices among radiology HCWs were suboptimal, but they improved significantly during the pandemic. During the pandemic, radiology departments implemented education programs to address inconsistent knowledge of IPC. Unfortunately, no studies explored IPC practices after the pandemic, leaving uncertainty about sustained improvements or potential regression. The review highlights the limited assessment of IPC knowledge and practice among radiology HCWs, with most studies recommending further education and training programs. This scoping review explored IPC education and training among radiology HCWs, which is an important research topic after the COVID-19 pandemic to help reduce infection transmission in healthcare environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. A Novel Approach to Image Classification for Detecting Abnormalities in Neuroimages based on the Structural Similarity Index Measure.
- Author
-
Lad, Rashmi Y., Mapari, Shrikant, and Sibai, Fadi N.
- Abstract
Medical imaging has improved in quality and enables accurate diagnosis and treatment, including the early detection, diagnosis, and treatment of mental disorders. This study performs image-based classification using the Structural Similarity Index Measure (SSIM) to distinguish normal from abnormal neuroimages. Two experiments were performed on the same dataset of 342 DICOM images divided into normal and abnormal categories. First, the SSIM between images was calculated, and SVM, KNN, Naïve Bayes, and Decision Tree classifiers were applied and compared. Similarly, an artificial neural network using two optimizers, Adam and SGD, was applied to the same dataset. In the experiments, 100% and 97% accuracy was achieved in image-based classification, while SSIM-based classification achieved 100% and 61% for the different classifiers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
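One way to read entry 40's SSIM-based classification is to represent each image by its SSIM scores against a small set of reference images and feed those scores to a conventional classifier. The sketch below illustrates that idea with scikit-image and scikit-learn; it is an illustrative variant under assumed synthetic data, not the authors' exact pipeline.

```python
import numpy as np
from skimage.metrics import structural_similarity
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def ssim_features(images: np.ndarray, references: np.ndarray) -> np.ndarray:
    """Represent each image by its SSIM against a small set of reference images."""
    return np.array([[structural_similarity(img, ref, data_range=1.0)
                      for ref in references] for img in images])

# Hypothetical data: 100 "normal" and 100 "abnormal" 64x64 slices scaled to [0, 1].
rng = np.random.default_rng(2)
normal = rng.random((100, 64, 64)) * 0.5
abnormal = rng.random((100, 64, 64)) * 0.5 + 0.5
X_img = np.concatenate([normal, abnormal])
y = np.array([0] * 100 + [1] * 100)

references = X_img[:5]                      # a few exemplars used as SSIM anchors
X = ssim_features(X_img, references)        # each image -> vector of SSIM scores

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```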
41. Combining Local and Global Feature Extraction for Brain Tumor Classification: A Vision Transformer and iResNet Hybrid Model.
- Author
-
Jaffar, Amar Y.
- Abstract
Early diagnosis of brain tumors is crucial for effective treatment and patient prognosis. Traditional Convolutional Neural Networks (CNNs) have shown promise in medical imaging but have limitations in capturing long-range dependencies and contextual information. Vision Transformers (ViTs) address these limitations by leveraging self-attention mechanisms to capture both local and global features. This study aims to enhance brain tumor classification by integrating an improved ResNet (iResNet) architecture with a ViT, creating a robust hybrid model that combines the local feature extraction capabilities of iResNet with the global feature extraction strengths of ViTs. This integration results in a significant improvement in classification accuracy, achieving an overall accuracy of 99.2%, outperforming established models such as InceptionV3, ResNet, and DenseNet. High precision, recall, and F1 scores were observed across all tumor classes, demonstrating the model's robustness and reliability. The significance of the proposed method lies in its ability to effectively capture both local and global features, leading to superior performance in brain tumor classification. This approach offers a powerful tool for clinical decision-making, improving early detection and treatment planning, ultimately contributing to better patient outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
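Entry 41 combines a CNN branch for local features with a ViT branch for global context. The sketch below shows one generic way to fuse the two in PyTorch/torchvision by concatenating a ResNet-50 feature vector with a ViT-B/16 class-token embedding; it is not the authors' iResNet-based architecture, and the backbones, input size, and class count are assumptions made for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

class HybridClassifier(nn.Module):
    """Concatenate ResNet (local) and ViT (global) features, then classify."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.cnn = models.resnet50(weights=None)      # load pretrained weights in practice
        self.cnn.fc = nn.Identity()                   # -> 2048-d feature vector
        self.vit = models.vit_b_16(weights=None)
        self.vit.heads = nn.Identity()                # -> 768-d class-token embedding
        self.classifier = nn.Linear(2048 + 768, num_classes)

    def forward(self, x):                             # x: (B, 3, 224, 224)
        local_feats = self.cnn(x)                     # local texture/edge features
        global_feats = self.vit(x)                    # long-range contextual features
        return self.classifier(torch.cat([local_feats, global_feats], dim=1))

# Hypothetical usage on a batch of 224x224 MRI slices replicated to 3 channels.
model = HybridClassifier(num_classes=4)
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)   # torch.Size([2, 4])
```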
42. Synthetic Microwave Focusing Techniques for Medical Imaging: Fundamentals, Limitations, and Challenges.
- Author
-
Abbosh, Younis M., Sultan, Kamel, Guo, Lei, and Abbosh, Amin
- Subjects
MICROWAVE imaging ,GREEN'S functions ,TIME reversal ,ELECTROMAGNETIC wave scattering ,DIAGNOSTIC imaging - Abstract
Synthetic microwave focusing methods have been widely adopted in qualitative medical imaging to detect and localize anomalies based on their electromagnetic scattering signatures. This paper discusses the principles, challenges, and limitations of synthetic microwave-focusing techniques in medical applications. It is shown that the various focusing techniques, including time reversal, confocal imaging, and delay-and-sum, are all based on the scalar solution of the electromagnetic scattering problem, assuming the imaged object, i.e., the tissue, is linear, reciprocal, and time-invariant. They all aim to generate a qualitative image, revealing any strong scatterer within the imaged domain. The differences among these techniques lie only in the assumptions made to derive the solution and create an image of the relevant tissue or object. To obtain a fast solution using limited computational resources, these methods assume the tissue is homogeneous and non-dispersive, and thus a simplified far-field Green's function is used. Some focusing methods compensate for dispersive effects and attenuation in lossy tissues. Other approaches replace the simplified Green's function with more representative functions. While these focusing techniques offer benefits like speed and low computational requirements, they face significant ongoing challenges in real-life applications due to their oversimplified linear solutions to the complex problem of non-linear medical microwave imaging. This paper discusses these challenges and potential solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
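Among the focusing methods entry 42 reviews, delay-and-sum is the simplest to state: each pixel's intensity is the coherent sum of the recorded signals evaluated at the round-trip delay from every antenna to that pixel, under a homogeneous, non-dispersive medium assumption. The toy NumPy sketch below illustrates this for monostatic (pulse-echo) signals; the geometry, sampling rate, and propagation speed are arbitrary assumptions, and real systems add calibration, artefact removal, and dispersion compensation.

```python
import numpy as np

def delay_and_sum(signals, antennas, pixels, fs, c):
    """Qualitative delay-and-sum focusing for monostatic (pulse-echo) signals.

    signals:  (n_antennas, n_samples) time traces recorded at each antenna
    antennas: (n_antennas, 2) antenna positions in metres
    pixels:   (n_pixels, 2) image-pixel positions in metres
    fs:       sampling frequency in Hz
    c:        assumed homogeneous propagation speed in m/s
    """
    t = np.arange(signals.shape[1]) / fs
    image = np.zeros(len(pixels))
    for a, trace in zip(antennas, signals):
        # Round-trip delay from this antenna to each pixel and back.
        tau = 2.0 * np.linalg.norm(pixels - a, axis=1) / c
        # Sample the trace at the pixel-dependent delays (linear interpolation)
        # and coherently sum the contributions from all antennas.
        image += np.interp(tau, t, trace, left=0.0, right=0.0)
    return np.abs(image)

# Toy example: 8 antennas on a circle, one point scatterer at (0.01, 0.02) m.
fs, c = 20e9, 2e8                      # 20 GS/s sampling, assumed speed in the medium
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
antennas = 0.1 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
scatterer = np.array([0.01, 0.02])
t = np.arange(2048) / fs
signals = np.array([np.exp(-((t - 2 * np.linalg.norm(a - scatterer) / c) * 2e9) ** 2)
                    for a in antennas])                 # one Gaussian echo per antenna

xs = np.linspace(-0.05, 0.05, 41)
pixels = np.array([[x, y] for y in xs for x in xs])
img = delay_and_sum(signals, antennas, pixels, fs, c).reshape(41, 41)
print("peak at index:", np.unravel_index(img.argmax(), img.shape))
```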
43. Enhancing Brain Tumor Diagnosis with L-Net: A Novel Deep Learning Approach for MRI Image Segmentation and Classification.
- Author
-
Dénes-Fazakas, Lehel, Kovács, Levente, Eigner, György, and Szilágyi, László
- Subjects
CONVOLUTIONAL neural networks ,CANCER diagnosis ,PITUITARY tumors ,IMAGE recognition (Computer vision) ,MAGNETIC resonance imaging ,BRAIN tumors - Abstract
Background: Brain tumors are highly complex, making their detection and classification a significant challenge in modern medical diagnostics. The accurate segmentation and classification of brain tumors from MRI images are crucial for effective treatment planning. This study aims to develop an advanced neural network architecture that addresses these challenges. Methods: We propose L-net, a novel architecture combining U-net for tumor boundary segmentation and a convolutional neural network (CNN) for tumor classification. These two units are coupled in such a way that the CNN classifies the MRI images based on the features extracted by the U-net during tumor segmentation, rather than relying on the original input images. The model is trained on a dataset of 3064 high-resolution MRI images, encompassing gliomas, meningiomas, and pituitary tumors, ensuring robust performance across different tumor types. Results: L-net achieved a classification accuracy of up to 99.6%, surpassing existing models in both segmentation and classification tasks. The model demonstrated effectiveness even with lower image resolutions, making it suitable for diverse clinical settings. Conclusions: The proposed L-net model provides an accurate and unified approach to brain tumor segmentation and classification. Its enhanced performance contributes to more reliable and precise diagnosis, supporting early detection and treatment in clinical applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. A Survey on Explainable Artificial Intelligence (XAI) Techniques for Visualizing Deep Learning Models in Medical Imaging.
- Author
-
Bhati, Deepshikha, Neha, Fnu, and Amiruzzaman, Md
- Subjects
COMPUTER-assisted image analysis (Medicine) ,ARTIFICIAL intelligence ,IMAGE analysis ,MACHINE learning ,DIAGNOSTIC imaging ,DEEP learning - Abstract
The combination of medical imaging and deep learning has significantly improved diagnostic and prognostic capabilities in the healthcare domain. Nevertheless, the inherent complexity of deep learning models poses challenges in understanding their decision-making processes. Interpretability and visualization techniques have emerged as crucial tools to unravel the black-box nature of these models, providing insights into their inner workings and enhancing trust in their predictions. This survey paper comprehensively examines various interpretation and visualization techniques applied to deep learning models in medical imaging. The paper reviews methodologies, discusses their applications, and evaluates their effectiveness in enhancing the interpretability, reliability, and clinical relevance of deep learning models in medical image analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
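As one concrete example of the visualization techniques entry 44 surveys, the sketch below computes a vanilla gradient saliency map in PyTorch: the absolute gradient of a class score with respect to the input pixels highlights the regions that most influence the prediction. The backbone and target class are placeholders; methods such as Grad-CAM or integrated gradients follow the same pattern with additional machinery.

```python
import torch
import torch.nn as nn
from torchvision import models

def vanilla_saliency(model: nn.Module, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Gradient of the target-class score w.r.t. the input pixels (|dScore/dPixel|)."""
    model.eval()
    x = image.clone().unsqueeze(0).requires_grad_(True)   # (1, C, H, W)
    score = model(x)[0, target_class]                     # scalar class score
    score.backward()                                      # populate x.grad
    return x.grad[0].abs().max(dim=0).values              # (H, W) saliency map

# Hypothetical usage with an untrained ResNet-18 as a stand-in for a trained model.
model = models.resnet18(weights=None)
image = torch.randn(3, 224, 224)
saliency = vanilla_saliency(model, image, target_class=0)
print(saliency.shape)   # torch.Size([224, 224])
```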
45. SSP: self-supervised pertaining technique for classification of shoulder implants in x-ray medical images: a broad experimental study.
- Author
-
Alzubaidi, Laith, Fadhel, Mohammed A., Hollman, Freek, Salhi, Asma, Santamaria, Jose, Duan, Ye, Gupta, Ashish, Cutbush, Kenneth, Abbosh, Amin, and Gu, Yuantong
- Subjects
X-ray imaging ,COMPUTER-assisted image analysis (Medicine) ,DIAGNOSTIC imaging ,ARTHROPLASTY ,GLENOHUMERAL joint ,SHOULDER - Abstract
Multiple pathologic conditions can lead to a diseased and symptomatic glenohumeral joint for which total shoulder arthroplasty (TSA) replacement may be indicated. The long-term survival of implants is limited. With the increasing incidence of joint replacement surgery, it can be anticipated that joint replacement revision surgery will become more common. It can be challenging at times to retrieve the manufacturer of the in situ implant. Therefore, systems facilitated by AI techniques such as deep learning (DL) can help correctly identify the implanted prosthesis. Correct identification of implants in revision surgery can help reduce perioperative complications. DL was used in this study to categorise different implants based on X-ray images into four classes (as a first case study of the small dataset): Cofield, Depuy, Tornier, and Zimmer. Imbalanced and small public datasets for shoulder implants can lead to poor performance of DL model training. Most of the methods in the literature have adopted the idea of transfer learning (TL) from ImageNet models. This type of TL has proven ineffective due to the mismatch between features learnt from natural images (ImageNet: colour images) and shoulder implants in X-ray images (greyscale images). To address that, a new TL approach (self-supervised pretraining (SSP)) is proposed to resolve the issue of small datasets. The SSP approach is based on training the DL models (ImageNet models) on a large number of unlabelled greyscale medical images in the domain to update the features. The models are then trained on a small labelled dataset of X-ray images of shoulder implants. SSP shows excellent results across five ImageNet models, including MobileNetV2, DarkNet19, Xception, InceptionResNetV2, and EfficientNet, with precision of 96.69%, 95.45%, 98.76%, 98.35%, and 96.6%, respectively. Furthermore, it has been shown that different TL domains (such as ImageNet) do not significantly affect performance on shoulder implant X-ray images: a lightweight model trained from scratch achieves 96.6% accuracy, which is similar to using standard ImageNet models. The features extracted by the DL models are used to train several ML classifiers that show outstanding performance, obtaining an accuracy of 99.20% with Xception+SVM. Finally, extended experimentation has been carried out to elucidate our approach's real effectiveness in dealing with different medical imaging scenarios. Specifically, five different datasets are trained and tested with and without the proposed SSP, including shoulder X-ray with an accuracy of 99.47% and CT brain stroke with an accuracy of 98.60%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
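Entry 45's SSP approach first updates ImageNet-pretrained features on unlabelled greyscale medical images before fine-tuning on the labelled implant X-rays. The paper's pretext task is not reproduced here; the sketch below only illustrates the downstream fine-tuning stage in PyTorch, with the backbone adapted to single-channel greyscale input. The backbone choice, class count, and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Fine-tuning stage of a transfer-learning pipeline: adapt an ImageNet backbone
# to single-channel greyscale X-rays and retrain on a small labelled set.
num_classes = 4                              # e.g. Cofield, Depuy, Tornier, Zimmer
model = models.resnet18(weights=None)        # load ImageNet weights in practice

# Replace the 3-channel stem with a 1-channel one so greyscale images can be
# fed directly instead of being replicated to RGB.
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One hypothetical training step on a batch of greyscale implant X-rays.
images = torch.randn(8, 1, 224, 224)
labels = torch.randint(0, num_classes, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```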
46. Marine Predators Algorithm with Deep Learning-Based Leukemia Cancer Classification on Medical Images.
- Author
-
Das, Sonali, Rout, Saroja Kumar, Panda, Sujit Kumar, Mohapatra, Pradyumna Kumar, Almazyad, Abdulaziz S., Jasser, Muhammed Basheer, Xiong, Guojiang, and Mohamed, Ali Wagdy
- Subjects
IMAGE recognition (Computer vision) ,LEUCOCYTES ,COMPUTER-assisted image analysis (Medicine) ,MACHINE learning ,COMPUTER vision ,DEEP learning - Abstract
Leukemia is a form of cancer of the blood or bone marrow. A person with leukemia has an expansion of white blood cells (WBCs). It primarily affects children and rarely affects adults. Treatment depends on the type of leukemia and the extent to which the cancer has spread throughout the body. Identifying leukemia in the initial stage is vital to providing timely patient care. Medical image-analysis approaches offer safer, quicker, and less costly solutions while avoiding the difficulties of invasive procedures. Computer vision (CV)-based and image-processing techniques are straightforward to generalize and help eliminate human error. Many researchers have implemented computer-aided diagnostic methods and machine learning (ML) for laboratory image analysis, aiming to overcome the limitations of late leukemia detection and to determine its subgroups. This study establishes a Marine Predators Algorithm with Deep Learning Leukemia Cancer Classification (MPADL-LCC) algorithm for medical images. The proposed MPADL-LCC system uses a bilateral filtering (BF) technique to pre-process medical images. The system uses Faster SqueezeNet, with the Marine Predators Algorithm (MPA) as a hyperparameter optimizer, for feature extraction. Lastly, a denoising autoencoder (DAE) is applied to accurately detect and classify leukemia. The hyperparameter tuning process using MPA helps enhance classification performance. Simulation results are compared with other recent approaches across various metrics, and the MPADL-LCC algorithm achieves the best results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
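Entry 46 pre-processes the microscopy images with bilateral filtering before feature extraction. The snippet below shows that step in isolation with OpenCV; the input is a synthetic placeholder, and the filter parameters (diameter 9, sigma 75/75) are common defaults rather than the paper's settings.

```python
import cv2
import numpy as np

# Hypothetical blood-smear image; in practice this is read with cv2.imread(path).
rng = np.random.default_rng(3)
image = (rng.random((256, 256, 3)) * 255).astype(np.uint8)

# Bilateral filtering smooths noise while preserving cell boundaries, because
# pixels are averaged only when they are close in both space and intensity.
# Arguments: source, neighbourhood diameter, sigmaColor, sigmaSpace.
denoised = cv2.bilateralFilter(image, 9, 75, 75)
print(denoised.shape, denoised.dtype)
```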
47. An intelligent MRI assisted diagnosis and treatment system for osteosarcoma based on super-resolution.
- Author
-
Zhong, Xu, Gou, Fangfang, and Wu, Jia
- Subjects
CONVOLUTIONAL neural networks ,MAGNETIC resonance imaging ,IMAGE segmentation ,ARTIFICIAL intelligence ,IMAGING systems - Abstract
Magnetic resonance imaging (MRI) examinations are a routine part of the cancer treatment process. In developing countries, disease diagnosis is often time-consuming and associated with serious prognostic problems. Moreover, MRI is characterized by high noise and low resolution. This creates difficulties in the automatic segmentation of the lesion region, leading to a decrease in the segmentation performance of the model. This paper proposes a deep convolutional neural network osteosarcoma image segmentation system based on noise reduction and super-resolution reconstruction, which, to the authors' knowledge, is the first introduction of super-resolution methods into the task of osteosarcoma MRI image segmentation, effectively improving the model's generalization performance. We first refined the initial osteosarcoma dataset using a Differential Activation Filter, separating out image data that had little effect on model training, and at the same time performed an initial coarse denoising of the images. Then, an improved information multi-distillation network based on adaptive cropping is proposed to reconstruct the original image and improve its resolution. Finally, a high-resolution network is used to segment the image, and the segmentation boundary is optimized to provide a reference for doctors. Experimental results show that this algorithm has stronger segmentation performance and noise robustness than existing methods. Code: https://github.com/GFF1228/NSRDN. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. Automated stenosis detection in coronary artery disease using yolov9c: Enhanced efficiency and accuracy in real-time applications.
- Author
-
Akgül, Muhammet, Kozan, Hasan İbrahim, Akyürek, Hasan Ali, and Taşdemir, Şakir
- Abstract
Coronary artery disease (CAD) is a prevalent cardiovascular condition and a leading cause of mortality. An accurate and timely diagnosis of CAD is crucial for treatment. This study aims to detect stenosis automatically and in real time during angiographic imaging for CAD diagnosis, using the YOLOv9c model. A dataset comprising 8325 grayscale images was utilized, sourced from 100 patients diagnosed with one-vessel CAD. To enhance sensitivity and accuracy during the training, testing, and validation phases of stenosis detection, fine-tuning and augmentations were applied. The Ultralytics YOLO Python API was employed for these processes. The analysis revealed that the YOLOv9c model achieved remarkably high performance in both processing speed and detection accuracy, with an F1-score of 0.99 and mAP@50 of 0.99. The inference time was reduced to 18 ms, fine-tuning time to 3.5 h, and training time to 11 h. When the same dataset was tested using another prominent detection algorithm, SSD MobileNet V1, the YOLOv9c model outperformed it, achieving a 1.36× higher F1-score and a 1.42× higher mAP@50. These results indicate that the developed YOLOv9c algorithm can provide highly accurate and real-time results for stenosis detection. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
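Entry 48 fine-tunes YOLOv9c with the Ultralytics tooling. A minimal sketch of that workflow is shown below, assuming a hypothetical dataset YAML (stenosis.yaml) and placeholder file names; the checkpoint name, hyperparameters, and the authors' exact augmentation and fine-tuning settings are assumptions and are not reproduced here.

```python
from ultralytics import YOLO

# Hedged sketch of a YOLO training/inference workflow with the Ultralytics API;
# the dataset YAML, weights file, and hyperparameters below are placeholders.
model = YOLO("yolov9c.pt")                    # pretrained YOLOv9c checkpoint

# Fine-tune on an angiography dataset described by a standard Ultralytics
# data YAML (train/val image folders plus a single "stenosis" class).
model.train(data="stenosis.yaml", epochs=100, imgsz=640, batch=16)

# Validate, then run real-time-style inference on a single angiographic frame.
metrics = model.val()
results = model.predict("angiogram_frame.png", conf=0.25)
for r in results:
    print(r.boxes.xyxy, r.boxes.conf)         # predicted boxes and confidences
```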
49. Genetic Architectures of Medical Images Revealed by Registration of Multiple Modalities.
- Author
-
Friedman, Sam Freesun, Moran, Gemma Elyse, Rakic, Marianne, and Phillipakis, Anthony
- Subjects
- *
MAGNETIC resonance imaging , *DIAGNOSTIC imaging , *MEDICAL screening , *IMAGE registration , *DUAL-energy X-ray absorptiometry - Abstract
The advent of biobanks with vast quantities of medical imaging and paired genetic measurements creates huge opportunities for a new generation of genotype–phenotype association studies. However, disentangling biological signals from the many sources of bias and artifacts remains difficult. Using diverse medical images and time-series (ie, magnetic resonance imagings [MRIs], electrocardiograms [ECGs], and dual-energy X-ray absorptiometries [DXAs]), we show how registration, both spatial and temporal, guided by domain knowledge or learned de novo, helps uncover biological information. A multimodal autoencoder comparison framework quantifies and characterizes how registration affects the representations that unsupervised and self-supervised encoders learn. In this study we (1) train autoencoders before and after registration with nine diverse types of medical image, (2) demonstrate how neural network-based methods (VoxelMorph, DeepCycle, and DropFuse) can effectively learn registrations allowing for more flexible and efficient processing than is possible with hand-crafted registration techniques, and (3) conduct exhaustive phenotypic screening, comprised of millions of statistical tests, to quantify how registration affects the generalizability of learned representations. Genome- and phenome-wide association studies (GWAS and PheWAS) uncover significantly more associations with registered modality representations than with equivalently trained and sized representations learned from native coordinate spaces. Specifically, registered PheWAS yielded 61 more disease associations for ECGs, 53 more disease associations for cardiac MRIs, and 10 more disease associations for brain MRIs. Registration also yields significant increases in the coefficient of determination when regressing continuous phenotypes (eg, 0.36 ± 0.01 with ECGs and 0.11 ± 0.02 for DXA scans). Our findings reveal the crucial role registration plays in enhancing the characterization of physiological states across a broad range of medical imaging data types. Importantly, this finding extends to more flexible types of registration, such as the cross-modal and the circular mapping methods presented here. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Skin lesion segmentation using deep learning algorithm with ant colony optimization.
- Author
-
Sarwar, Nadeem, Irshad, Asma, Naith, Qamar H., D.Alsufiani, Kholod, and Almalki, Faris A.
- Subjects
- *
ANT algorithms , *MACHINE learning , *OPTIMIZATION algorithms , *SKIN imaging , *ARTIFICIAL intelligence - Abstract
Background: Segmentation of skin lesions remains essential in histological diagnosis and skin cancer surveillance. Recent advances in deep learning have paved the way for greater improvements in medical imaging. The Hybrid Residual U-Net (ResUNet) model, supplemented with Ant Colony Optimization (ACO), combines these advances with the aim of improving the efficiency and effectiveness of skin lesion diagnosis. Objective: This paper seeks to evaluate the effectiveness of the Hybrid ResUNet model for skin lesion classification and to assess the impact of ACO-based optimization on bridging the gap between computational efficiency and clinical utility. Methods: The study used a deep learning design on a complex dataset that included a variety of skin lesions. The method includes training a Hybrid ResUNet model with standard parameters and fine-tuning it using ACO for hyperparameter optimization. Performance was evaluated using traditional metrics such as accuracy, the Dice coefficient, and the Jaccard index, compared with existing models such as the residual network (ResNet) and U-Net. Results: The proposed Hybrid ResUNet model exhibited excellent classification accuracy, reflected in a noticeable improvement in all evaluated metrics. Its ability to delineate complex lesions was particularly outstanding, improving diagnostic accuracy. Our experimental results demonstrate that the proposed Hybrid ResUNet model outperforms existing state-of-the-art methods, achieving an accuracy of 95.8%, a Dice coefficient of 93.1%, and a Jaccard index of 87.5%. Conclusion: The integration of ACO with ResUNet in the proposed hybrid model significantly improves the classification of skin lesions. This integration goes beyond traditional paradigms and demonstrates a viable strategy for deploying AI-powered tools in clinical settings. Future work: Future investigations will focus on extending the model's capabilities by using multi-modal imaging data, experimenting with alternative optimization algorithms, and evaluating real-world clinical applicability. There is also promising scope for enhancing computational performance and exploring the model's interpretability to support broader clinical adoption. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
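The Dice coefficient and Jaccard index reported in entry 50 are standard overlap metrics for segmentation masks. The short sketch below shows how they are computed for binary masks in NumPy; the masks are synthetic placeholders.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Jaccard (IoU) = |A ∩ B| / |A ∪ B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Hypothetical example: a predicted lesion mask shifted slightly against ground truth.
gt = np.zeros((128, 128), dtype=np.uint8)
gt[40:90, 40:90] = 1
pred = np.zeros_like(gt)
pred[45:95, 45:95] = 1
print("Dice:", round(dice_coefficient(pred, gt), 3),
      "Jaccard:", round(jaccard_index(pred, gt), 3))
```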