4,961 results for "medical image analysis"
Search Results
2. SEMANTIC SEGMENTATION IN MEDICAL IMAGE ANALYSIS WITH CONVOLUTIONAL NEURAL NETWORKS.
- Author
- Jain, Shweta Nishit, Pise, Priya, and Mishra, Akhilesh
- Subjects
- IMAGE analysis, CONVOLUTIONAL neural networks, IMAGE segmentation, DIAGNOSTIC imaging, IMAGE processing, IMAGE recognition (Computer vision)
- Abstract
Medical image analysis plays a pivotal role in modern healthcare, aiding clinicians in accurate diagnosis and treatment planning. However, the complexity and diversity of medical images pose significant challenges for traditional image processing methods. Existing methods often struggle to precisely delineate structures in medical images, leading to suboptimal diagnostic accuracy. The demand for automated and accurate segmentation tools in medical imaging has grown, highlighting the necessity for robust and efficient algorithms capable of handling diverse anatomical variations and pathologies. While CNNs have shown promise in image analysis, their application to medical images requires customization to accommodate unique challenges. The literature lacks comprehensive studies that bridge the gap between general-purpose CNNs and the specific demands of medical image segmentation, especially concerning the diverse and intricate structures present in medical imagery. This study addresses the need for advanced techniques by leveraging Convolutional Neural Networks (CNNs) for semantic segmentation in medical image analysis. Our approach involves the design and implementation of a specialized CNN architecture tailored to the nuances of medical image data. We employ state-of-the-art techniques for data preprocessing, model training, and validation. The model is trained on a diverse dataset encompassing various medical imaging modalities, ensuring its adaptability and generalizability. The proposed CNN-based semantic segmentation model demonstrates superior performance in accurately delineating anatomical structures compared to traditional methods. Evaluation metrics, including the Dice coefficient and sensitivity, indicate the model's efficacy in achieving precise segmentation. The results underscore the potential of CNNs in advancing medical image analysis for improved clinical outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
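The Dice coefficient and sensitivity cited in the abstract above are standard segmentation metrics. A minimal sketch of how they are computed from binary masks (function names and the toy masks are illustrative, not taken from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def sensitivity(pred, target, eps=1e-7):
    """Recall over the foreground class: TP / (TP + FN)."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fn = np.logical_and(~pred, target).sum()
    return (tp + eps) / (tp + fn + eps)

pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(pred, target))  # 2*2 / (3+3) ≈ 0.667
print(sensitivity(pred, target))       # 2 / 3 ≈ 0.667
```

The small epsilon keeps both metrics defined when prediction and ground truth are empty.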
3. Applying Medical Language Models to Medical Image Analysis
- Author
- Guo, Danfeng
- Subjects
- Artificial intelligence, Computer science, Medical imaging, Computer Vision, Image Segmentation, Large Language Models, Natural Language Generation, Visual Question Answering
- Abstract
Medical image analysis powered by deep learning computer vision models has achieved significant advancements in the past decade. Deep learning models have demonstrated remarkable capabilities in a wide range of tasks, including medical image classification, detection, and segmentation. However, the limited availability of annotations has become a persistent challenge. Annotating medical images requires specialized professional knowledge, making it a costly process. This dissertation aims to reduce the reliance on medical image annotations by directly leveraging medical reports, which are usually associated with corresponding medical images and readily available. This thesis delves into the application of vision-language models, including large vision-language models, for enhancing medical image analysis. Existing vision-language models are modified and applied to three critical tasks: disease diagnosis, disease segmentation, and medical report generation. In particular, the main contributions include: (1) proposing two prompting strategies to improve the accuracy of disease diagnosis through visual question answering in large vision-language models; (2) introducing a disease segmentation model that uses medical reports as weak supervision; (3) evaluating medical large vision-language models with respect to hallucination in generated reports across multiple complex diseases, and applying existing techniques to mitigate the diagnostic errors in generated reports.
- Published
- 2024
4. UMS-Rep: Unified modality-specific representation for efficient medical image analysis
- Author
- Ghada Zamzmi, Sivaramakrishnan Rajaraman, and Sameer Antani
- Subjects
- Medical image analysis, Deep learning, Disease classification, Image segmentation, Computer applications to medicine. Medical informatics
- Abstract
Medical image analysis typically includes several tasks such as enhancement, segmentation, and classification. Traditionally, these tasks are implemented with a separate deep learning model per task, which is inefficient: it involves unnecessary training repetitions, demands greater computational resources, and requires a relatively large amount of labeled data. In this paper, we propose a multi-task training approach for medical image analysis, where individual tasks are fine-tuned simultaneously through relevant knowledge transfer using a unified modality-specific feature representation (UMS-Rep). We explore different fine-tuning strategies to demonstrate the impact of the strategy on the performance of target medical image tasks. We experiment with different visual tasks (e.g., image denoising, segmentation, and classification) to highlight the advantages offered by our approach for two imaging modalities, chest X-ray and Doppler echocardiography. Our results demonstrate that the proposed approach reduces the overall demand for computational resources and improves target task generalization and performance. Specifically, the proposed approach improves accuracy (by up to ~9%) and decreases computational time (by up to ~86%) compared to the baseline approach. Further, our results show that the performance of target tasks in medical images is highly influenced by the fine-tuning strategy used.
- Published
- 2021
5. Deep Learning in Medical Image Analysis
- Author
- Tang, Hao
- Subjects
- Computer science, Deep learning, Image segmentation, Medical image analysis, Object detection
- Abstract
Developing algorithms to better interpret images has been a fundamental problem in the field of medical image analysis. Recent advances in machine learning, especially deep convolutional neural networks (DCNNs), have brought great improvements in the speed and accuracy of many medical image analysis tasks, such as image registration, anatomical structure/tissue segmentation, and computer-aided diagnosis. Despite previous progress, these problems remain challenging due to the limited amount of labeled data, large anatomical variance among patients, etc. In this dissertation, we propose various approaches to address the aforementioned challenges in order to achieve better accuracy and higher efficiency while using less labeled data. First, to address the difficulty of accurately detecting pulmonary nodules at an early stage, we propose a novel CAD framework that consists entirely of 3D DCNNs for detecting pulmonary nodules and reducing false positives in chest CT images. Second, to avoid training several deep learning models to solve nodule detection, false-positive reduction, and segmentation separately, which may be suboptimal and resource-intensive, we propose NoduleNet to solve the three tasks jointly in a multi-task fashion. To avoid friction between different tasks and encourage feature diversification, we incorporate two major design choices: 1) decoupled feature maps for nodule detection and false-positive reduction, and 2) a segmentation refinement subnet for increasing the precision of nodule segmentation. Third, to address the limitation in scope and/or scale of previous works on organs-at-risk (OAR) delineation, with only a few OARs delineated and a limited number of samples tested, we propose a new deep learning model that can delineate a comprehensive set of 28 OARs in the head and neck area, trained with 215 CT samples collected and carefully annotated by radiation oncologists with over ten years of experience.
The accuracy of our model was compared to both previous state-of-the-art methods and a radiotherapy practitioner. Moreover, we deployed our deep learning model in actual RT planning of new patient cases, and evaluated the clinical utility of the model. Fourth, to reduce the information loss from cropping/downsampling 3D images due to limited GPU memory, we propose a new framework for combining 3D and 2D models, in which the segmentation is realized through high-resolution 2D convolutions, but guided by spatial contextual information extracted from a low-resolution 3D model. A self-attention mechanism is implemented to control which 3D features should be used to guide 2D segmentation. Last but not least, since DCNNs often require a large amount of data with manual annotation for training and are difficult to generalize to unseen classes, we propose a new few-shot segmentation framework RP-Net to address this issue. RP-Net has two important modules: 1) a context relation encoder (CRE) that uses correlation to capture local relation features between foreground and background regions, and 2) a recurrent mask refinement module that repeatedly uses the CRE and a prototypical network to recapture the change of context relationship and refine the segmentation mask iteratively.
- Published
- 2021
6. Optimizing anomaly detection in 3D MRI scans: The role of ConvLSTM in medical image analysis.
- Author
- Durairaj, Anuradha, Madhan, E.S., Rajkumar, M., and Shameem, Syed
- Subjects
- CONVOLUTIONAL neural networks, IMAGE analysis, MAGNETIC resonance imaging, RECURRENT neural networks, ANOMALY detection (Computer security)
- Abstract
The analysis of Medical Images (MI), particularly the detection and classification of anomalies in 3D MRI (Magnetic Resonance Imaging) scans, plays a critical role in timely intervention and personalized therapy plans. In our paper, a comprehensive methodology for anomaly detection in 3D MRI scans of the brain is proposed that combines advanced Deep Learning (DL) techniques, including a Convolutional Long Short-Term Memory (ConvLSTM) model, with efficient statistical feature extraction from segmented anomaly regions. The data collection phase utilizes three main datasets, namely the Brain Tumor Segmentation Challenge (BRATS), the Federated Tumor Segmentation Challenge (FETS), and the Medical Segmentation Decathlon (MSD). The research begins with preprocessing steps including image resizing, intensity normalization, and alignment of anomaly region segmentation masks. The targeted anomaly regions within the samples are segmented using a U-Net architecture and then passed through a statistical feature extraction procedure. Dimensionality reduction methods such as Principal Component Analysis (PCA) and Recursive Feature Elimination (RFE) are utilized to streamline the feature space. The ConvLSTM is then used to classify the anomalies, using both convolutional and recurrent layers to capture spatiotemporal patterns in MRI data. The model is fine-tuned and iterated for better classification performance using the Adam optimizer. The statistical evaluation results, with an accuracy of 98.9%, showed that the designed ConvLSTM method is well suited for clinical diagnosis and treatment design in detecting anomalies in 3D MRI images. • Advanced Anomaly Detection in 3D MRI. • Deep Learning Integration. • Crucial for ensuring data quality and consistency. • This optimizes feature space while preserving discriminative information. [ABSTRACT FROM AUTHOR]
- Published
- 2024
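The PCA step described in the abstract above projects the extracted statistical features onto a handful of principal components. A rough illustration via SVD on mean-centered data (the feature dimensions and counts here are made up, not the authors' configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))      # 100 segmented regions x 12 statistical features

# PCA via SVD on mean-centered data
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 4                               # keep 4 principal components
X_reduced = Xc @ Vt[:k].T           # projected features, shape (100, 4)

# fraction of variance captured by each component
explained = (S ** 2) / (S ** 2).sum()
print(X_reduced.shape, explained[:k].sum())
```

The reduced matrix would then feed the downstream classifier in place of the raw feature table.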
7. From Fully-Supervised, Single-Task to Scarcely-Supervised, Multi-Task Deep Learning for Medical Image Analysis
- Author
- Imran, Abdullah-Al-Zubaer
- Subjects
- Computer science, Deep generative modeling, Deep learning, Image Segmentation, Medical image analysis, Multi-Task Learning, Semi-supervised learning
- Abstract
Image analysis based on machine learning has gained prominence with the advent of deep learning, particularly in medical imaging. To be effective in addressing challenging image analysis tasks, however, conventional deep neural networks require large corpora of annotated training data, which are unfortunately scarce in the medical domain, thus often rendering fully-supervised learning strategies ineffective. This thesis devises a series of novel deep learning methods for a variety of medical image analysis applications, ranging from fully-supervised, single-task learning to scarcely-supervised, multi-task learning that makes efficient use of annotated training data. Specifically, its main contributions include (1) fully-supervised, single-task learning for the segmentation of pulmonary lobes from chest CT scans and the analysis of scoliosis from spine X-ray images; (2) supervised, single-task, domain-generalized pulmonary segmentation in chest X-ray images and retinal vasculature segmentation in fundoscopic images; (3) largely-unsupervised, multi-task learning via deep generative modeling for the joint synthesis and classification of medical image data; and (4) partly-supervised, multi-task learning for the combined segmentation and classification of chest and spine X-ray images.
- Published
- 2020
8. Automatic Segmentation of Multiple Organs on 3D CT Images by Using Deep Learning Approaches
- Author
- Zhou, Xiangrong, Crusio, Wim E., Series Editor, Lambris, John D., Series Editor, Radeke, Heinfried H., Series Editor, Rezaei, Nima, Series Editor, Lee, Gobert, editor, and Fujita, Hiroshi, editor
- Published
- 2020
9. Fuzzy Inference System for Efficient Lung Cancer Detection
- Author
- Tiwari, Laxmikant, Raja, Rohit, Sharma, Vaibhav, Miri, Rohit, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Gupta, Mousumi, editor, Konar, Debanjan, editor, Bhattacharyya, Siddhartha, editor, and Biswas, Sambhunath, editor
- Published
- 2020
10. Retinal Image Quality Assessment via Specific Structures Segmentation
- Author
- Zhou, Xinqiang, Wu, Yicheng, Xia, Yong, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Fu, Huazhu, editor, Garvin, Mona K., editor, MacGillivray, Tom, editor, Xu, Yanwu, editor, and Zheng, Yalin, editor
- Published
- 2020
11. Automatic Segmentation of Cortex and Nucleus in Anterior Segment OCT Images
- Author
- Yin, Pengshuai, Tan, Mingkui, Min, Huaqing, Xu, Yanwu, Xu, Guanghui, Wu, Qingyao, Tong, Yunfei, Risa, Higashita, Liu, Jiang, Hutchison, David, Series Editor, Kanade, Takeo, Series Editor, Kittler, Josef, Series Editor, Kleinberg, Jon M., Series Editor, Mattern, Friedemann, Series Editor, Mitchell, John C., Series Editor, Naor, Moni, Series Editor, Pandu Rangan, C., Series Editor, Steffen, Bernhard, Series Editor, Terzopoulos, Demetri, Series Editor, Tygar, Doug, Series Editor, Weikum, Gerhard, Series Editor, Stoyanov, Danail, editor, Taylor, Zeike, editor, Ciompi, Francesco, editor, Xu, Yanwu, editor, Martel, Anne, editor, Maier-Hein, Lena, editor, Rajpoot, Nasir, editor, van der Laak, Jeroen, editor, Veta, Mitko, editor, McKenna, Stephen, editor, Snead, David, editor, Trucco, Emanuele, editor, Garvin, Mona K., editor, Chen, Xin Jan, editor, and Bogunovic, Hrvoje, editor
- Published
- 2018
12. A survey of machine learning-based methods for COVID-19 medical image analysis.
- Author
- Sailunaz, Kashfia, Özyer, Tansel, Rokne, Jon, and Alhajj, Reda
- Subjects
- COMPUTER-assisted image analysis (Medicine), IMAGE analysis, DIAGNOSTIC imaging, IMAGE segmentation, SARS-CoV-2, COVID-19, DIAGNOSTIC ultrasonic imaging
- Abstract
The ongoing COVID-19 pandemic caused by the SARS-CoV-2 virus has already resulted in 6.6 million deaths, with more than 637 million people infected, only 30 months after the first occurrences of the disease in December 2019. Hence, rapid and accurate detection and diagnosis of the disease is the first priority all over the world. Researchers have been working on various methods for COVID-19 detection and, as the disease infects the lungs, lung image analysis has become a popular research area for detecting the presence of the disease. Medical images from chest X-rays (CXR), computed tomography (CT) images, and lung ultrasound images have been used by automated image analysis systems in artificial intelligence (AI)- and machine learning (ML)-based approaches. Various existing and novel ML, deep learning (DL), transfer learning (TL), and hybrid models have been applied for detecting and classifying COVID-19, segmenting infected regions, assessing severity, and tracking patient progress from medical images of COVID-19 patients. In this paper, a comprehensive review of recent approaches to COVID-19-based image analysis is provided, surveying the contributions of existing research efforts, the available image datasets, and the performance metrics used in recent works. The challenges and future research scopes to address the progress of the fight against COVID-19 from the AI perspective are also discussed. The main objective of this paper is therefore to summarize the research done on COVID-19 detection and analysis from medical image datasets using ML, DL, and TL models by analyzing their novelty and efficiency, while also pointing to other COVID-19-based reviews and surveys, to deliver a brief overview of the existing COVID-19 research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
13. ASD-Net: a novel U-Net based asymmetric spatial-channel convolution network for precise kidney and kidney tumor image segmentation
- Author
- Ji, Zhanlin, Mu, Juncheng, Liu, Jianuo, Zhang, Haiyang, Dai, Chenxu, Zhang, Xueji, and Ganchev, Ivan
- Published
- 2024
14. A Deep Level Set Method for Image Segmentation
- Author
- Tang, Min, Valipour, Sepehr, Zhang, Zichen, Cobzas, Dana, Jagersand, Martin, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Cardoso, M. Jorge, editor, Arbel, Tal, editor, Carneiro, Gustavo, editor, Syeda-Mahmood, Tanveer, editor, Tavares, João Manuel R.S., editor, Moradi, Mehdi, editor, Bradley, Andrew, editor, Greenspan, Hayit, editor, Papa, João Paulo, editor, Madabhushi, Anant, editor, Nascimento, Jacinto C., editor, Cardoso, Jaime S., editor, Belagiannis, Vasileios, editor, and Lu, Zhi, editor
- Published
- 2017
15. Medical image analysis using improved SAM-Med2D: segmentation and classification perspectives.
- Author
- Sun, Jiakang, Chen, Ke, He, Zhiyi, Ren, Siyuan, He, Xinyang, Liu, Xu, and Peng, Cheng
- Subjects
- IMAGE recognition (Computer vision), IMAGE analysis, IMAGE segmentation, MEDICAL coding, DIAGNOSTIC imaging
- Abstract
Recently emerged SAM-Med2D represents a state-of-the-art advancement in medical image segmentation. Through fine-tuning the Large Visual Model, Segment Anything Model (SAM), on extensive medical datasets, it has achieved impressive results in cross-modal medical image segmentation. However, its reliance on interactive prompts may restrict its applicability under specific conditions. To address this limitation, we introduce SAM-AutoMed, which achieves automatic segmentation of medical images by replacing the original prompt encoder with an improved MobileNet v3 backbone. The performance on multiple datasets surpasses both SAM and SAM-Med2D. Current enhancements on the Large Visual Model SAM lack applications in the field of medical image classification. Therefore, we introduce SAM-MedCls, which combines the encoder of SAM-Med2D with our designed attention modules to construct an end-to-end medical image classification model. It performs well on datasets of various modalities, even achieving state-of-the-art results, indicating its potential to become a universal model for medical image classification. [ABSTRACT FROM AUTHOR]
- Published
- 2024
16. Progressive expansion: Cost-efficient medical image analysis model with reversed once-for-all network training paradigm.
- Author
- Lim, Shin Wei, Chan, Chee Seng, Mohd Faizal, Erma Rahayu, and Ewe, Kok Howg
- Subjects
- COMPUTER-assisted image analysis (Medicine), IMAGE analysis, DIAGNOSTIC imaging, ARTIFICIAL intelligence, IMAGE segmentation, HIPPOCAMPUS (Brain)
- Abstract
Low computational cost artificial intelligence (AI) models are vital in promoting the accessibility of real-time medical services in underdeveloped areas. The recent Once-For-All (OFA) network can directly produce a set of sub-network designs without retraining via the Progressive Shrinking (PS) algorithm; however, this method is inefficient in both training resources and time. In this paper, we propose a new OFA training algorithm, namely Progressive Expansion (ProX), to train the medical image analysis model. It is a reversed paradigm to PS: we train the OFA network from the minimum configuration and gradually expand the training to support larger configurations. Empirical results showed that the proposed paradigm can reduce training time by up to 68%, while still producing sub-networks with similar or better accuracy compared to those trained with OFA-PS on the ROCT (classification), BRATS, and Hippocampus (3D segmentation) public medical datasets. The code implementation for this paper is accessible at: https://github.com/shin-wl/ProX-OFA. [ABSTRACT FROM AUTHOR]
- Published
- 2024
17. Domain Adaptation for Medical Image Analysis: A Survey.
- Author
- Guan, Hao and Liu, Mingxia
- Subjects
- IMAGE analysis, DIAGNOSTIC imaging, MAGNETIC resonance imaging, TASK analysis, PHYSIOLOGICAL adaptation, MACHINE learning
- Abstract
Machine learning techniques used in computer-aided medical image analysis usually suffer from the domain shift problem caused by different distributions between source/reference data and target data. As a promising solution, domain adaptation has attracted considerable attention in recent years. The aim of this paper is to survey the recent advances of domain adaptation methods in medical image analysis. We first present the motivation of introducing domain adaptation techniques to tackle domain heterogeneity issues for medical image analysis. Then we provide a review of recent domain adaptation models in various medical image analysis tasks. We categorize the existing methods into shallow and deep models, and each of them is further divided into supervised, semi-supervised and unsupervised methods. We also provide a brief summary of the benchmark medical image datasets that support current domain adaptation research. This survey will enable researchers to gain a better understanding of the current status, challenges and future directions of this energetic research field. [ABSTRACT FROM AUTHOR]
- Published
- 2022
18. An Intelligent Auxiliary Framework for Bone Malignant Tumor Lesion Segmentation in Medical Image Analysis.
- Author
- Zhan, Xiangbing, Liu, Jun, Long, Huiyun, Zhu, Jun, Tang, Haoyu, Gou, Fangfang, and Wu, Jia
- Subjects
- COMPUTER-assisted image analysis (Medicine), IMAGE analysis, DIAGNOSTIC imaging, IMAGE segmentation, LIMB salvage
- Abstract
Bone malignant tumors are metastatic and aggressive, with poor treatment outcomes and prognosis. Rapid and accurate diagnosis is crucial for limb salvage and increasing the survival rate. There is a lack of research on deep learning to segment bone malignant tumor lesions in medical images with complex backgrounds and blurred boundaries. Therefore, we propose a new intelligent auxiliary framework for the medical image segmentation of bone malignant tumor lesions, which consists of a supervised edge-attention guidance segmentation network (SEAGNET). We design a boundary key points selection module to supervise the learning of edge attention in the model to retain fine-grained edge feature information. We precisely locate malignant tumors by instance segmentation networks while extracting feature maps of tumor lesions in medical images. The rich contextual-dependent information in the feature map is captured by mixed attention to better understand the uncertainty and ambiguity of the boundary, and edge attention learning is used to guide the segmentation network to focus on the fuzzy boundary of the tumor region. We implement extensive experiments on real-world medical data to validate our model. It validates the superiority of our method over the latest segmentation methods, achieving the best performance in terms of the Dice similarity coefficient (0.967), precision (0.968), and accuracy (0.996). The results prove the important contribution of the framework in assisting doctors to improve the accuracy of diagnosis and clinical efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2023
19. A path along deep learning for medical image analysis: With focus on burn wounds and brain tumors
- Author
- Cirillo, Marco Domenico
- Subjects
- Image segmentation, Medical Image Processing, Image classification, Medical image analysis, CNNs, Deep learning, Burn wounds, GANs, Image augmentation, Brain tumors
- Abstract
The number of medical images that clinicians need to review on a daily basis has increased dramatically during the last decades. Since the number of clinicians has not increased as much, it is necessary to develop tools which can help doctors work more efficiently. Deep learning is the latest trend in the medical imaging field, as methods based on deep learning often outperform more traditional analysis methods. However, a general problem for deep learning in medical imaging is obtaining large, annotated datasets for training the deep networks. This thesis presents how deep learning can be used for two medical problems: assessment of burn wounds and brain tumors. The first papers present methods for analyzing 2D burn wound images; to estimate how large a burn wound is (through image segmentation) and to classify how deep a burn wound is (image classification). The last papers present methods for analyzing 3D magnetic resonance imaging (MRI) volumes containing brain tumors; to estimate how large the different parts of the tumor are (image segmentation). Since medical imaging datasets are often rather small, image augmentation is necessary to artificially increase the size of the dataset and, at the same time, the performance of a convolutional neural network. Traditional augmentation techniques simply apply operations such as rotation, scaling and elastic deformations to generate new similar images, but it is often not clear what type of augmentation is best for a certain problem. Generative adversarial networks (GANs), on the other hand, can generate completely new images by learning the high-dimensional data distribution of images and sampling from it (which can be seen as advanced augmentation). GANs can also be trained to generate images of type B from images of type A, which can be used for image segmentation.
The conclusion of this thesis is that deep learning is a powerful technology that doctors can benefit from, to assess injuries and diseases more accurately and more quickly. In the end, this can lead to better healthcare for the patients.
- Published
- 2021
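The "traditional augmentation" the abstract above contrasts with GANs, simple geometric operations that generate new similar images, can be sketched in a few lines (a generic illustration, not the thesis code; real pipelines would add scaling and elastic deformations):

```python
import numpy as np

def augment(image, rng):
    """Randomly rotate a 2D image by a multiple of 90 degrees and flip it."""
    image = np.rot90(image, k=rng.integers(0, 4))   # 0, 90, 180, or 270 degrees
    if rng.random() < 0.5:
        image = np.fliplr(image)                    # random horizontal flip
    return image

rng = np.random.default_rng(42)
img = np.arange(16, dtype=float).reshape(4, 4)      # toy "image"
batch = [augment(img, rng) for _ in range(8)]       # 8 augmented variants
print(len(batch), batch[0].shape)
```

Each variant contains exactly the same pixel values rearranged, which is why such transforms preserve labels for classification and (with the same transform applied to the mask) for segmentation.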
20. Research and challenges of deep learning methods for medical image analysis.
- Author
- 田娟秀, 刘国才, 谷珊珊, 鞠忠建, 刘劲光, and 顾冬冬
- Abstract
Copyright of Acta Automatica Sinica is the property of Chinese Academy of Sciences, Institute of Automation and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2018
21. 3D Dense Separated Convolution Module for Volumetric Medical Image Analysis
- Author
- Lei Qu, Changfeng Wu, and Liang Zou
- Subjects
- convolutional neural networks, biomedical imaging, image segmentation, medical diagnosis
- Abstract
With the thriving of deep learning, 3D convolutional neural networks have become a popular choice in volumetric image analysis due to their impressive 3D context mining ability. However, the 3D convolutional kernels will introduce a significant increase in the amount of trainable parameters. Considering the training data are often limited in biomedical tasks, a trade-off has to be made between model size and its representational power. To address this concern, in this paper, we propose a novel 3D Dense Separated Convolution (3D-DSC) module to replace the original 3D convolutional kernels. The 3D-DSC module is constructed by a series of densely connected 1D filters. The decomposition of 3D kernel into 1D filters reduces the risk of overfitting by removing the redundancy of 3D kernels in a topologically constrained manner, while providing the infrastructure for deepening the network. By further introducing nonlinear layers and dense connections between 1D filters, the network’s representational power can be significantly improved while maintaining a compact architecture. We demonstrate the superiority of 3D-DSC on volumetric medical image classification and segmentation, which are two challenging tasks often encountered in biomedical image computing.
- Published
- 2020
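To make the parameter argument in the 3D-DSC abstract above concrete: a k×k×k kernel has k³ weights, while three 1D filters have only 3k, and for a rank-1 (separable) kernel the sequential 1D convolutions reproduce the full 3D convolution exactly. A numpy sketch of this identity (the actual 3D-DSC module additionally inserts nonlinearities and dense connections between the 1D filters):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 3
f1, f2, f3 = rng.random(k), rng.random(k), rng.random(k)
full = np.einsum('i,j,l->ijl', f1, f2, f3)   # rank-1 3D kernel: k**3 = 27 weights
print(full.size, 3 * k)                      # 27 parameters vs 9

vol = rng.random((6, 6, 6))

def conv1d_along(v, f, axis):
    """'valid' 1D convolution applied along one axis of a volume."""
    return np.apply_along_axis(lambda x: np.convolve(x, f, mode='valid'), axis, v)

# three sequential 1D convolutions (what 3D-DSC factorizes a 3D kernel into)
seq = conv1d_along(conv1d_along(conv1d_along(vol, f1, 0), f2, 1), f3, 2)

# direct 3D convolution with the full separable kernel, for comparison
flipped = full[::-1, ::-1, ::-1]             # convolution flips the kernel
out = np.zeros((4, 4, 4))
for i in range(4):
    for j in range(4):
        for l in range(4):
            out[i, j, l] = (vol[i:i+k, j:j+k, l:l+k] * flipped).sum()

print(np.allclose(seq, out))  # True: identical output with a third of the weights
```

For non-separable kernels the factorization is an approximation, which is exactly the redundancy-removing constraint the paper exploits.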
22. Deep Feature Learning for Medical Image Analysis with Convolutional Autoencoder Neural Network
- Author
- Yin Zhang, Di Wu, Min Chen, Mohsen Guizani, and Xiaobo Shi
- Subjects
- Artificial neural network, Computer science, Deep learning, Feature extraction, Pattern recognition, Image segmentation, Semi-supervised learning, Machine learning, Autoencoder, Unsupervised learning, Artificial intelligence, Feature learning, Information Systems
- Abstract
At present, computed tomography (CT) is widely used to assist disease diagnosis. In particular, computer-aided diagnosis (CAD) based on artificial intelligence (AI) has recently exhibited its importance in intelligent healthcare. However, it is a great challenge to establish an adequate labeled dataset for CT analysis assistance due to privacy and security issues. Therefore, this paper proposes a convolutional autoencoder deep learning framework to support unsupervised learning of image features for lung nodules from unlabeled data, which needs only a small amount of labeled data for efficient feature learning. Comprehensive experiments show that the proposed scheme is superior to other approaches, effectively solving the labor-intensive problem of manual image labeling. Moreover, they verify that the proposed convolutional autoencoder approach can be extended to similarity measurement of lung nodule images. Notably, the features extracted through unsupervised learning are also applicable in other related scenarios.
- Published
- 2021
23. Deep Learning Applications in Medical Image Analysis
- Author
-
Tushar Patel, Rini Smita Thakur, and Ananya Singha
- Subjects
Contextual image classification, Computer science, Deep learning, Computer vision, Artificial intelligence, Image segmentation, Medical diagnosis, Image (mathematics) - Published
- 2021
24. Increasing the Accuracy of Determining the Cardiothoracic Ratio with the Help of an Ensemble of Neural Networks
- Author
-
Vladyslav D. Koniukhov and Serhii V. Ugrimov
- Subjects
machine learning, neural networks, deep learning, image segmentation, medical image analysis, Mechanical engineering and machinery, TJ1-1570 - Abstract
The cardiothoracic ratio is one of the main screening tools for heart health and is usually measured manually by a cardiologist or radiologist. In the era of rapidly developing neural networks, we can help doctors automate and improve this process. Deep learning for image segmentation has proven to be a tool that can significantly accelerate and improve medical automation. In this paper, a comparative analysis of several neural networks for segmenting the lungs and heart on X-ray images was carried out to further improve the automatic calculation of the cardiothoracic ratio. On a sample of 10 test images, manual cardiothoracic ratio measurements and 7 automatic measurement variants were performed. The average measurement accuracy of the better of the two neural networks is 93.80%, while the method that used the ensemble of networks achieved 97.15%; the ensemble thus improved the ratio determination by 3.35%. The obtained results indicate that the ensemble of neural networks improved the automatic measurement and testify to the effectiveness and promise of this method in the medical field.
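The measurement itself can be sketched as the widest horizontal extent of the heart mask divided by that of the thorax mask, with the ensemble formed by averaging several networks' masks (a minimal sketch; the paper's exact measurement protocol and ensembling rule are not specified here):

```python
import numpy as np

def max_width(mask):
    """Widest horizontal extent (in pixels) of a binary mask."""
    cols = np.where(mask.any(axis=0))[0]
    return int(cols.max() - cols.min() + 1) if cols.size else 0

def cardiothoracic_ratio(heart_mask, thorax_mask):
    return max_width(heart_mask) / max_width(thorax_mask)

def ensemble_mask(masks, threshold=0.5):
    """Average several networks' binary predictions, then re-binarize."""
    return np.mean(np.stack(masks).astype(float), axis=0) >= threshold

# Toy 1 x 12 row: thorax spans 10 columns, heart spans 5.
thorax = np.zeros((1, 12), bool); thorax[0, 1:11] = True
heart = np.zeros((1, 12), bool); heart[0, 4:9] = True
print(cardiothoracic_ratio(heart, thorax))  # 0.5
```

Averaging the masks before measuring lets a stray false-positive region from one network be outvoted by the others, which is one plausible source of the accuracy gain reported above.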
- Published
- 2024
- Full Text
- View/download PDF
25. Obtaining the potential number of object models/atlases needed in medical image analysis
- Author
-
Ze Jin, Jayaram K. Udupa, and Drew A. Torigian
- Subjects
Computer science, Population, Image processing, Pattern recognition, Image segmentation, Object (computer science), Image (mathematics), Encoding (memory), Body region, Segmentation, Artificial intelligence - Abstract
Medical image processing and analysis operations, particularly segmentation, can benefit a great deal from prior information encoded to capture variations over a population in form, shape, anatomic layout, and image appearance of objects. Model/atlas-based methods are extant in medical image segmentation. Although multi-atlas/multi-model methods have shown improved accuracy for image segmentation, if the atlases/models do not representatively cover the distinct groups, the methods may not generalize to new populations. In a previous study, we answered the following question at the image level: How many models/atlases are needed for optimally encoding prior information to address the differing body habitus factor in a population? However, the number of models may differ between objects, and at the image level it may not be possible to infer the number of models needed for each object. So, the modified question we seek to answer in this paper is: How many models/atlases are needed for optimally encoding prior information to address the differing body habitus factor for each object in a body region? To answer this question, we modified the method of our previous study for seeking the optimum grouping for a given population of images, but focusing on the individual objects. We present our results on head and neck computed tomography (CT) scans of 298 patients.
- Published
- 2022
26. UMS-Rep: Unified modality-specific representation for efficient medical image analysis
- Author
-
Sameer Antani, Sivaramakrishnan Rajaraman, and Ghada Zamzmi
- Subjects
Generalization, Computer science, Computer applications to medicine. Medical informatics, R858-859.7, Health Informatics, Machine learning, Image (mathematics), Medical image analysis, Segmentation, Representation (mathematics), Image segmentation, Modality (human–computer interaction), Deep learning, Disease classification, Task (computing), Feature (computer vision), Artificial intelligence - Abstract
Medical image analysis typically includes several tasks such as enhancement, segmentation, and classification. Traditionally, these tasks are implemented using separate deep learning models for separate tasks, which is not efficient because it involves unnecessary training repetitions, demands greater computational resources, and requires a relatively large amount of labeled data. In this paper, we propose a multi-task training approach for medical image analysis, where individual tasks are fine-tuned simultaneously through relevant knowledge transfer using a unified modality-specific feature representation (UMS-Rep). We explore different fine-tuning strategies to demonstrate the impact of the strategy on the performance of target medical image tasks. We experiment with different visual tasks (e.g., image denoising, segmentation, and classification) to highlight the advantages offered by our approach for two imaging modalities, chest X-ray and Doppler echocardiography. Our results demonstrate that the proposed approach reduces the overall demand for computational resources and improves target task generalization and performance. Specifically, the proposed approach improves accuracy (by up to ~9%) and decreases computational time (by up to ~86%) compared to the baseline approach. Further, our results show that the performance of target tasks in medical images is highly influenced by the fine-tuning strategy utilized.
- Published
- 2021
27. Weakly supervised large-scale pancreatic cancer detection using multi-instance learning
- Author
-
Shyamapada Mandal, Keerthiveena Balraj, Hariprasad Kodamana, Chetan Arora, Julie M. Clark, David S. Kwon, and Anurag S. Rathore
- Subjects
pancreatic cancer, multi-instance learning, image segmentation, feature extraction, medical image analysis, Neoplasms. Tumors. Oncology. Including cancer and carcinogens, RC254-282 - Abstract
Introduction: Early detection of pancreatic cancer continues to be a challenge due to the difficulty in accurately identifying specific signs or symptoms that might correlate with the onset of pancreatic cancer. Unlike breast, colon, or prostate cancer, where screening tests are often useful in identifying cancerous development, there are no tests to diagnose pancreatic cancers. As a result, most pancreatic cancers are diagnosed at an advanced stage, where treatment options, whether systemic therapy, radiation, or surgical interventions, offer limited efficacy. Methods: A two-stage weakly supervised deep learning-based model has been proposed to identify pancreatic tumors using computed tomography (CT) images from Henry Ford Health (HFH) and the publicly available Memorial Sloan Kettering Cancer Center (MSKCC) data sets. In the first stage, the nnU-Net supervised segmentation model was used to crop an area in the location of the pancreas; it was trained on the MSKCC repository of 281 patient image sets with established pancreatic tumors. In the second stage, a multi-instance learning-based weakly supervised classification model was applied to the cropped pancreas region to segregate pancreatic tumors from normal-appearing pancreas. The model was trained, tested, and validated on images obtained from an HFH repository with 463 cases and 2,882 controls. Results: The proposed two-stage architecture offers an accuracy of 0.907 ± 0.01, sensitivity of 0.905 ± 0.01, specificity of 0.908 ± 0.02, and AUC (ROC) of 0.903 ± 0.01. The two-stage framework can automatically differentiate pancreatic tumor from non-tumor pancreas with improved accuracy on the HFH dataset. Discussion: The proposed two-stage deep learning architecture shows significantly enhanced performance for predicting the presence of a tumor in the pancreas using CT images compared with other studies reported in the literature.
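The second-stage idea, aggregating per-patch predictions into a scan-level label, can be illustrated with the classic max-pooling rule of multi-instance learning (a generic MIL aggregation, not necessarily the authors' exact one):

```python
def bag_probability(instance_probs, how="max"):
    """Aggregate per-patch tumor probabilities into one scan-level score.
    Max-pooling encodes the MIL assumption: a scan is positive if at
    least one of its patches is."""
    if how == "max":
        return max(instance_probs)
    if how == "mean":
        return sum(instance_probs) / len(instance_probs)
    raise ValueError(f"unknown aggregation: {how}")

# Hypothetical per-patch probabilities from one cropped pancreas region.
patches = [0.02, 0.11, 0.94, 0.05]
print(bag_probability(patches))          # 0.94 -> scan flagged as tumor
print(bag_probability(patches, "mean"))  # ~0.28 -> signal diluted by normal tissue
```

The comparison shows why MIL suits this setting: only weak scan-level labels are needed for training, and a single suspicious patch is enough to flag a scan, whereas naive averaging would drown a small tumor in normal tissue.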
- Published
- 2024
- Full Text
- View/download PDF
28. Enhancing Breast Cancer Diagnosis: A CNN-Based Approach for Medical Image Segmentation and Classification
- Author
-
Saifullah, Shoffan, Dreżewski, Rafał, Hartmanis, Juris, Founding Editor, van Leeuwen, Jan, Series Editor, Hutchison, David, Editorial Board Member, Kanade, Takeo, Editorial Board Member, Kittler, Josef, Editorial Board Member, Kleinberg, Jon M., Editorial Board Member, Kobsa, Alfred, Series Editor, Mattern, Friedemann, Editorial Board Member, Mitchell, John C., Editorial Board Member, Naor, Moni, Editorial Board Member, Nierstrasz, Oscar, Series Editor, Pandu Rangan, C., Editorial Board Member, Sudan, Madhu, Series Editor, Terzopoulos, Demetri, Editorial Board Member, Tygar, Doug, Editorial Board Member, Weikum, Gerhard, Series Editor, Vardi, Moshe Y, Series Editor, Goos, Gerhard, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Franco, Leonardo, editor, de Mulatier, Clélia, editor, Paszynski, Maciej, editor, Krzhizhanovskaya, Valeria V., editor, Dongarra, Jack J., editor, and Sloot, Peter M. A., editor
- Published
- 2024
- Full Text
- View/download PDF
29. Medical image analysis and 3-d modeling to quantify changes and functional restoration in denervated muscle undergoing electrical stimulation treatment
- Author
-
Gargiulo, Paolo, Helgason, Thordur, Ingvarsson, Páll, Mayr, Winfried, Kern, Helmut, and Carraro, Ugo
- Published
- 2012
- Full Text
- View/download PDF
30. Cloud Deployment of High-Resolution Medical Image Analysis With TOMAAT
- Author
-
Fausto Milletari, Gerome Vivar, Moustafa Aboulatta, Seyed-Ahmad Ahmadi, and Johann Frei
- Subjects
Diagnostic Imaging, Service (systems architecture), Computer science, Health Informatics, Cloud computing, Client, Deep Learning, Health Information Management, Server, Image Interpretation, Computer-Assisted, Humans, Electrical and Electronic Engineering, Multimedia, Image segmentation, Cloud Computing, Computer Science Applications, Workflow, Software deployment, The Internet, Algorithms - Abstract
Background: Deep learning has been recently applied to a multitude of computer vision and medical image analysis problems. Although recent research efforts have improved the state of the art, most of the methods cannot be easily accessed, compared, or used by other researchers or clinicians. Even if developers publish their code and pre-trained models on the internet, integration into stand-alone applications and existing workflows is often not straightforward, especially for clinical research partners. In this paper, we propose an open-source framework to provide AI-enabled medical image analysis through the network. Methods: TOMAAT provides a cloud environment for general medical image analysis, composed of three basic components: (i) an announcement service, maintaining a public registry of (ii) multiple distributed server nodes offering various medical image analysis solutions, and (iii) client software offering simple interfaces for users. Deployment is realized through HTTP-based communication, along with an API and wrappers for common image manipulations during pre- and post-processing. Results: We demonstrate the utility and versatility of TOMAAT on several hallmark medical image analysis tasks: segmentation, diffeomorphic deformable atlas registration, landmark localization, and workflow integration. Through TOMAAT, the high hardware demands, setup, and model complexity of the demonstrated approaches are transparent to users, who are provided with simple client interfaces. We present example clients in 3D Slicer, in the web browser, on iOS devices, and in a commercially available, certified medical image analysis suite. Conclusion: TOMAAT enables deployment of state-of-the-art image segmentation in the cloud, fostering interaction among deep learning researchers and medical collaborators in the clinic.
Currently, a public announcement service is hosted by the authors, and several ready-to-use services are registered and listed at http://tomaat.cloud.
- Published
- 2019
31. Using an Ensemble of Neural Networks for Determining the Diagnostic Parameters of the Vertebrae
- Author
-
Vladyslav D. Koniukhov
- Subjects
machine learning, neural networks, deep learning, image segmentation, medical image analysis, Mechanical engineering and machinery, TJ1-1570 - Abstract
Artificial intelligence opens up great prospects in many areas of human activity, primarily in medicine. One of the priority directions for using artificial intelligence in this field is the segmentation of medical images for the purpose of automatic diagnosis of common diseases. The application of neural network approaches to the analysis of medical images is becoming an increasingly promising direction in medical diagnostics. In particular, this paper investigates the possibility of using an ensemble of neural networks for diagnosing osteoporosis. To achieve this goal, a study was conducted on the possibility of using machine learning methods to segment and determine the shape and size of certain vertebrae (Th8, Th9, Th10, Th11) of the human spine on X-ray images obtained in real conditions. Each network is configured and tested on different sets of medical images. The two best networks were then selected according to the accuracy and efficiency of the segmentation. One of the main results of the study was the selection of the two best neural networks that provide the most accurate segmentation of vertebrae. Next, the ensemble method was applied, based on averaging the predictions of the selected networks. This approach made it possible to improve the overall accuracy of determining the diagnostic parameters of the spine. The obtained results emphasize the effectiveness of using an ensemble of neural networks in the context of medical segmentation. Ensembles provide more stable and accurate predictions by reducing the impact of random errors of individual networks. Ensemble predictions of these networks lead to a statistically significant improvement in results compared to individual approaches. This is an important step toward creating reliable automated diagnostic systems capable of helping doctors conduct more accurate and timely analyses.
- Published
- 2024
- Full Text
- View/download PDF
32. A Survey of Medical Image Analysis Using Deep Learning Approaches
- Author
-
Muheet Ahmed Butt, Aasia Rehman, and Majid Zaman
- Subjects
Modalities, Modality (human–computer interaction), Computer science, Deep learning, Image segmentation, Machine learning, Field (computer science), Pattern recognition (psychology), Medical imaging, Segmentation, Artificial intelligence - Abstract
With the expanding development of Deep Learning techniques, Medical Image Analysis has become an active field of research. Medical Image Analysis typically refers to the utilization of various kinds of image modalities and techniques to obtain images of the human body, which in turn can be used by medical experts for diagnosis and treatment of patients. This paper provides a survey of various improvements that have been made in Medical Image Analysis using DL techniques related to different pattern recognition tasks. These pattern recognition tasks include Classification, Detection/Localization, Segmentation, and Registration. The paper discusses several recently published research papers related to these tasks, including liver lesion classification and segmentation, lung nodule detection and classification, lung nodule segmentation, brain tumor classification and detection, brain tumor segmentation, breast cancer detection, etc. A comparative description of these papers is also provided in terms of organ, modality, dataset, model used, and limitations/improvements needed. The survey briefly describes several medical imaging modalities used in medical image analysis. It also evaluates various challenges encountered in the medical imaging domain and discusses current trends, encouraging new researchers and medical instrument experts to take full advantage of DL techniques in the future.
- Published
- 2021
33. Autoencoder based self-supervised test-time adaptation for medical image analysis
- Author
-
Aaron Carass, Yufan He, Lianrui Zuo, Jerry L. Prince, and Blake E. Dewey
- Subjects
Computer science, Health Informatics, Machine learning, Retina, Image (mathematics), Domain (software engineering), Image Processing, Computer-Assisted, Radiology, Nuclear Medicine and imaging, Segmentation, Radiological and Ultrasound Technology, Standard test image, Artificial neural network, Deep learning, Image segmentation, Computer Graphics and Computer-Aided Design, Autoencoder, Magnetic Resonance Imaging, Computer Vision and Pattern Recognition, Artificial intelligence, Neural Networks, Computer, Tomography, Optical Coherence - Abstract
Deep neural networks have been successfully applied to medical image analysis tasks like segmentation and synthesis. However, even if a network is trained on a large dataset from the source domain, its performance on unseen test domains is not guaranteed. The performance drop on data obtained differently from the network’s training data is a major problem (known as domain shift) in deploying deep learning in clinical practice. Existing work focuses on retraining the model with data from the test domain, or harmonizing the test domain’s data to the network training data. A common practice is to distribute a carefully trained model to multiple users (e.g., clinical centers), and then each user uses the model to process their own data, which may have a domain shift (e.g., varying imaging parameters and machines). However, the lack of availability of the source training data and the cost of training a new model often prevent the use of known methods to solve user-specific domain shifts. Here, we ask: can we design a model that, once distributed to users, can quickly adapt itself to each new site without expensive retraining or access to the source training data? In this paper, we propose a model that can adapt based on a single test subject during inference. The model consists of three parts, which are all neural networks: a task model (T) which performs the image analysis task, such as segmentation; a set of autoencoders (AEs); and a set of adaptors (As). The task model and autoencoders are trained on the source dataset, which can be computationally expensive. In the deployment stage, the adaptors are trained to transform the test image and its features to minimize the domain shift as measured by the autoencoders’ reconstruction loss. Only the adaptors are optimized during the testing stage, and only on a single test subject, so adaptation is computationally efficient.
The method was validated on both retinal optical coherence tomography (OCT) image segmentation and magnetic resonance imaging (MRI) T1-weighted to T2-weighted image synthesis. Our method, with its short optimization time for the adaptors (10 iterations on a single test subject) and its additional required disk space for the autoencoders (around 15 MB), can achieve significant performance improvement. Our code is publicly available at: https://github.com/YufanHe/self-domain-adapted-network .
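The core mechanism, using source-trained autoencoders' reconstruction loss as a domain-shift signal that a per-subject adaptor minimizes, can be sketched as follows (a toy stand-in: PCA plays the autoencoder, and a grid search over an affine intensity transform plays the adaptor's few gradient iterations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "autoencoder": a PCA basis fitted to source-domain features;
# inputs off the source manifold reconstruct poorly.
source = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 32))
mean = source.mean(axis=0)
_, _, vt = np.linalg.svd(source - mean, full_matrices=False)
basis = vt[:8]  # top components span the source manifold

def recon_error(x):
    z = (x - mean) @ basis.T                            # "encode"
    return float(np.mean((z @ basis + mean - x) ** 2))  # "decode" + MSE

# A shifted test subject (e.g. a different scanner's intensity scale/offset).
test = 3.0 * (rng.normal(size=(1, 8)) @ rng.normal(size=(8, 32))) + 5.0

# "Adaptor": a per-subject affine transform a*x + b, fitted only by
# minimizing the autoencoder's reconstruction loss on this one subject.
grid = [(a, b) for a in np.linspace(0.1, 1.0, 10)
               for b in np.linspace(-6.0, 0.0, 13)]
a, b = min(grid, key=lambda ab: recon_error(ab[0] * test + ab[1]))
print(recon_error(a * test + b) <= recon_error(test))  # True
```

Nothing here requires the source images themselves at test time, only the frozen "autoencoder", which is the property that makes the approach deployable to sites that cannot access the training data.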
- Published
- 2021
34. Deep Learning Approach for Medical Image Analysis.
- Author
-
Adegun, Adekanmi Adeyinka, Viriri, Serestina, and Ogundokun, Roseline Oluwaseun
- Subjects
- *COMPUTER-assisted image analysis (Medicine), *IMAGE analysis, *DIAGNOSTIC imaging, *MAGNETIC resonance imaging, *IMAGE segmentation, *SKIN disease diagnosis, *DEEP learning - Abstract
Localization of the region of interest (ROI) is paramount to the analysis of medical images to assist in the identification and detection of diseases. In this research, we explore the application of a deep learning approach in the analysis of some medical images. Traditional methods have been restricted due to the coarse and granulated appearance of most of these images. Recently, deep learning techniques have produced promising results in the segmentation of medical images for the diagnosis of diseases. This research experiments on medical images using a robust deep learning architecture based on the Fully Convolutional Network- (FCN-) UNET method for the segmentation of three samples of medical images: skin lesion, retinal, and brain Magnetic Resonance Imaging (MRI) images. The proposed method can efficiently identify the ROI on these images to assist in the diagnosis of diseases such as skin cancer, eye defects and diabetes, and brain tumors. The system was evaluated on publicly available databases such as the International Symposium on Biomedical Imaging (ISBI) skin lesion images, retina images, and brain tumor datasets, with over 90% accuracy and Dice coefficient. [ABSTRACT FROM AUTHOR]
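The Dice coefficient used for evaluation here (and in several other entries in this list) is twice the overlap divided by the total size of the two masks; a minimal implementation:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks:
    1.0 means perfect overlap, 0.0 means none."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1, 0, 0]])
target = np.array([[1, 0, 0, 0]])
print(round(dice_coefficient(pred, target), 3))  # 0.667 (= 2*1 / (2+1))
```

The small `eps` keeps the ratio defined when both masks are empty, a common convention in segmentation benchmarks.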
- Published
- 2021
- Full Text
- View/download PDF
35. Medical Image Analysis using Deep Learning: A Review
- Author
-
M. A. B. Ali, Syed Qamrun Nisa, Amelia Ritahani Ismail, and Mohammad Shadab Khan
- Subjects
Computer science, Deep learning, Image segmentation, Machine learning, Convolutional neural network, Object detection, Recurrent neural network, Medical imaging, Segmentation, Artificial intelligence, Encoder
Over the recent past, deep learning has been one of the core research directions that has gained a great deal of attention due to its outstanding performance in the area of medical image analysis. This paper aims to present a review of deep learning concepts related to medical imaging. We examine the use of deep learning for medical image analysis, including segmentation, object detection, and classification. Deep learning techniques including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and autoencoders (AEs) are also discussed in this paper.
- Published
- 2020
36. On the Effective Transfer Learning Strategy for Medical Image Analysis in Deep Learning
- Author
-
Chuan Zhou, Huiru Zeng, Leiting Chen, Rui Guo, Yu Deng, Shuo Xi, and Yang Wen
- Subjects
Computer science, Deep learning, Image segmentation, Machine learning, Convolutional neural network, Field (computer science), Visualization, Task analysis, Segmentation, Artificial intelligence, Transfer of learning
In this study, we focus on exploring different strategies of transfer learning for medical applications. Firstly, we report competitive results indicating that convolutional neural networks (CNNs) that were pre-trained with different annotations could have diverse effects on the performance of medical image analysis, especially for segmentation tasks. Then, we present our further explorations of transferring different components of the CNNs, which revealed the importance of the decoder on medical segmentation. Finally, we demonstrate the advantages and disadvantages of transfer learning methods based on model integration. These observations present novel aspects of transfer learning for visual tasks in the medical field, and we expect that these discoveries will encourage the exploration of more effective transfer learning strategies for CNN-based medical image analysis.
- Published
- 2020
37. A review on optimization techniques for medical image analysis.
- Author
-
Kaur, Palwinder and Singh, Rajesh Kumar
- Subjects
IMAGE analysis, MATHEMATICAL optimization, COMPUTER-aided diagnosis, DIAGNOSTIC imaging, IMAGE segmentation, IMAGE processing - Abstract
Summary: Data mining of medical imaging approaches makes it difficult to determine their value in disease insight, analysis, and diagnosis. Image classification presents a significant difficulty in image analysis and plays a vital part in computer‐aided diagnosis. This task concerns the use of optimization techniques for image processing, pattern recognition, and classification, as well as the validation of image classification results against medical expert reports. The primary intention of this study is to analyze the performance of optimization techniques explored in the area of medical image analysis. For this motive, the optimization techniques employed in the existing literature from 2012 to 2021 are reviewed in this study. The contributions of optimization‐based medical image classification and segmentation, the image modalities and data sets utilized, and the tradeoffs of each technique are also discussed in this review. Finally, this review provides a gap analysis of optimization techniques used in medical image analysis along with possible future research directions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
38. Not-so-supervised: A survey of semi-supervised, multi-instance, and transfer learning in medical image analysis.
- Author
-
Cheplygina, Veronika, de Bruijne, Marleen, and Pluim, Josien P.W.
- Subjects
- *IMAGE analysis, *DIAGNOSTIC imaging, *TRANSFER of training, *MEDICAL imaging systems, *MACHINE learning, *IMAGE segmentation - Abstract
• We discuss different forms of supervision in medical image analysis. • Over 140 papers using semi-supervised, multi-instance or transfer learning are covered. • We discuss connections between these scenarios and further opportunities for research. Machine learning (ML) algorithms have made a tremendous impact in the field of medical imaging. While medical imaging datasets have been growing in size, a challenge for supervised ML algorithms that is frequently mentioned is the lack of annotated data. As a result, various methods that can learn with less or other types of supervision have been proposed. We give an overview of semi-supervised, multiple instance, and transfer learning in medical imaging, in both diagnosis and segmentation tasks. We also discuss connections between these learning scenarios, and opportunities for future research. A dataset with the details of the surveyed papers is available via https://figshare.com/articles/Database_of_surveyed_literature_in_Not-so-supervised_a_survey_of_semi-supervised_multi-instance_and_transfer_learning_in_medical_image_analysis_/7479416. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
39. A Review on Medical Image Analysis with Convolutional Neural Networks
- Author
-
Valentina Emilia Balas and Paarth Bir
- Subjects
Contextual image classification, Computer science, Deep learning, Image segmentation, Machine learning, Convolutional neural network, Field (computer science), Medical imaging, Domain knowledge, Segmentation, Artificial intelligence
Over the last few years, deep learning has rapidly grown from a promising approach into a viable option for analyzing medical images. With increasing use of medical imaging for diagnosis and treatment, the field offers significant potential for research. A major advantage offered by deep learning is the use of large amounts of data to avoid tedious hand-crafting of features, which requires extensive domain knowledge. This review introduces a few popular algorithms using Convolutional Neural Networks (CNNs) being used in the field, along with their applications: classification, detection, segmentation, registration, and image enhancement. The paper further provides some useful resources on some of the most promising anatomical areas of application in medical image analysis with Convolutional Neural Networks: brain, breast, chest, eye, and skin.
- Published
- 2020
40. Improved fuzzy clustering with swarm intelligence for medical image analysis
- Author
-
Fateme Gholami
- Subjects
Set (abstract data type), Fuzzy clustering, Computer science, Pattern recognition, Segmentation, Noise (video), Image segmentation, Sensitivity (control systems), Artificial intelligence, Cluster analysis, Swarm intelligence
One of the challenges in the world today is the existence of a variety of diseases, some of which require the processing of medical images, such as images of brain tumors, to diagnose and evaluate. One of the methods of analyzing and evaluating patients in relation to the brain is magnetic resonance imaging (MRI). Data mining methods such as clustering can be used to analyze magnetic resonance images. Clustering techniques can separate brain tumor regions from brain tissue and use them to diagnose disease. Various clustering methods have been proposed so far, one of which is the fuzzy c-means (FCM) method, which has high accuracy for clustering and segmentation of brain tissues. Fuzzy clustering is less sensitive to the noise in these images, and therefore its segmentation accuracy is somewhat desirable. To improve the performance of FCM clustering in identifying the edges and borders of tumors, it is necessary to select the optimal clustering centers. The optimal selection of cluster centers increases its accuracy in learning and segmentation. Given that the optimal selection of cluster centers is an optimization problem, metaheuristic algorithms can be used for this purpose. In this research, swarm intelligence algorithms have been used to optimally select cluster centers in FCM. The analysis of the proposed method on a set of brain magnetic resonance images shows that the proposed algorithm has a specificity, sensitivity, and accuracy of 96.87%, 88.36%, and 91.32%, respectively, in the diagnosis of brain tumors. Compared with methods such as standard fuzzy clustering, the proposed hybrid method better detects brain tumors.
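For reference, the baseline FCM iteration that the swarm-intelligence variant seeks to improve alternates between the standard membership and center updates (the metaheuristic would replace or seed the center step; this sketch shows only plain FCM):

```python
import numpy as np

def fcm_step(X, centers, m=2.0, eps=1e-9):
    """One plain fuzzy c-means iteration: update the membership matrix u
    (each row sums to 1), then recompute cluster centers as
    membership-weighted means."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
    w = u ** m
    new_centers = (w.T @ X) / w.sum(axis=0)[:, None]
    return u, new_centers

# Two obvious 1-D intensity clusters (e.g. healthy tissue vs. tumor).
X = np.array([[0.0], [0.5], [10.0], [10.5]])
u, centers = fcm_step(X, centers=np.array([[1.0], [9.0]]))
print(np.round(centers.ravel(), 1))  # centers move toward the clusters near 0.3 and 10.2
```

Because this alternating scheme only converges to a local optimum of the FCM objective, the quality of the initial/updated centers matters, which is exactly the step the paper hands to a swarm-intelligence search.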
- Published
- 2020
41. Do Noises Bother Human and Neural Networks In the Same Way? A Medical Image Analysis Perspective
- Author
-
Qianjun Jia, Yu-Jen Chen, Tsung-Yi Ho, Jian Zhuang, Shao-Cheng Wen, Wujie Wen, Zihao Liu, Meiping Huang, Yiyu Shi, and Xiaowei Xu
- Subjects
Computer science, Machine Learning (cs.LG), Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), Noise reduction, Discrete cosine transform, Segmentation, Artificial neural network, Deep learning, Perspective (graphical), Pattern recognition, Image segmentation, Noise (video), Artificial intelligence - Abstract
Deep learning has already demonstrated its power in medical imaging, including denoising, classification, segmentation, and more. These applications automatically analyze medical images ahead of clinical assessment, giving radiologists additional information and improving accuracy. Recently, many medical denoising methods have shown significant artifact reduction and noise removal, both quantitatively and qualitatively. However, those existing methods are built around human vision, i.e., they are designed to minimize the noise that can be perceived by human eyes. In this paper, we introduce an application-guided denoising framework, which instead optimizes denoising for the downstream neural networks that consume the images. In our experiments, we apply the proposed framework to different datasets, models, and use cases. Experimental results show that our proposed framework achieves better results than human-vision-oriented denoising networks.
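The distinction the abstract draws, denoising for human eyes versus denoising for a downstream network, can be made concrete with a toy loss comparison. The column-pooling "network" below is a stand-in of my own invention, not the paper's model:

```python
import numpy as np

def human_vision_loss(denoised, clean):
    # Conventional objective: make every pixel match the clean image.
    return float(np.mean((denoised - clean) ** 2))

def application_guided_loss(denoised, clean, net):
    # The abstract's idea: penalize only the differences the downstream
    # network can actually see, measured in its own output space.
    return float(np.mean((net(denoised) - net(clean)) ** 2))

# Toy downstream "network": column-wise average pooling.
net = lambda img: img.mean(axis=0)
```

Noise that averages out under the pooling is invisible to the task loss even though the pixel loss remains large, which is exactly why a human-vision objective can waste capacity removing artifacts the downstream model never notices.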
- Published
- 2020
42. Deep Semantic Segmentation Feature-Based Radiomics for the Classification Tasks in Medical Image Analysis
- Author
-
Bingsheng Huang, Zhang Hongyuan, Junru Tian, Guo Dan, Yongjin Zhou, Hanwei Chen, Jing Qin, Shi-Ting Feng, Huang Chen, Chenglang Yuan, He Xueping, Luo Zixin, and Yanji Luo
- Subjects
Contextual image classification, Semantic feature, Computer science, Deep learning, Feature extraction, Pattern recognition, Feature selection, Image segmentation, Overfitting, Computer Science Applications, Semantics, Health Information Management, Feature (computer vision), Research Design, Image Processing, Computer-Assisted, Humans, Artificial intelligence, Neural Networks, Computer, Electrical and Electronic Engineering, Algorithms, Biotechnology - Abstract
Recently, an emerging trend in medical image classification is to combine a radiomics framework with a deep learning classification network in an integrated system. Although this combination is effective in some tasks, the deep learning classification network often fails to capture an effective representation of lesion regions and is prone to overfitting, leading to unreliable features and inaccurate results, especially when the lesions are small or the training dataset is small. In addition, these combinations mostly lack an effective feature selection mechanism, making it difficult to obtain an optimal feature subset. In this paper, we introduce a novel and effective deep semantic segmentation feature-based radiomics (DSFR) framework to overcome these challenges. It consists of two modules: a deep semantic feature extraction module and a feature selection module. The extraction module extracts hierarchical semantic features of the lesions from a trained segmentation network. The feature selection module selects the most representative features using a novel feature similarity adaptation algorithm. We evaluate our method extensively on two clinical tasks: pathological grading prediction in pancreatic neuroendocrine neoplasms (pNENs), and prediction of thrombolytic therapy efficacy in deep venous thrombosis (DVT). Experimental results on both tasks demonstrate that the proposed method consistently outperforms state-of-the-art approaches by a large margin.
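The paper's "feature similarity adaptation" algorithm is not spelled out in the abstract. As a generic illustration of similarity-based feature selection, here is a greedy scheme (my own stand-in, not the DSFR algorithm) that keeps features with low mutual correlation:

```python
import numpy as np

def select_features(X, k):
    """Pick k feature columns of X (samples x features) with low
    mutual similarity: start from the highest-variance feature, then
    repeatedly add the one least correlated with those already chosen.
    """
    Xc = X - X.mean(axis=0)
    norms = np.linalg.norm(Xc, axis=0) + 1e-12
    Xn = Xc / norms                                     # unit-norm columns
    chosen = [int(np.argmax(norms))]
    while len(chosen) < k:
        sim = np.abs(Xn.T @ Xn[:, chosen]).max(axis=1)  # max |cos| to chosen
        sim[chosen] = np.inf                            # never re-pick
        chosen.append(int(np.argmin(sim)))
    return chosen
```

Near-duplicate features are never kept together, which addresses the redundancy problem the abstract attributes to radiomics-plus-CNN pipelines that lack a selection step.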
- Published
- 2020
43. AutoML Systems for Medical Imaging
- Author
-
Jidney, Tasmia Tahmida, Biswas, Angona, Abdullah Al, Nasim Md, Hossain, Ismail, Alam, Md Jahangir, Talukder, Sajedul, Hossain, Mofazzal, Ullah, Md Azim, Zheng, Bin, editor, Andrei, Stefan, editor, Sarker, Md Kamruzzaman, editor, and Gupta, Kishor Datta, editor
- Published
- 2023
- Full Text
- View/download PDF
44. 3D Dense Separated Convolution Module for Volumetric Medical Image Analysis
- Author
-
Liang Zou, Lei Qu, and Changfeng Wu
- Subjects
medical diagnosis, Computer science, Overfitting, Convolutional neural network, General Materials Science, Segmentation, Instrumentation, Fluid Flow and Transfer Processes, Contextual image classification, Process Chemistry and Technology, Deep learning, General Engineering, Pattern recognition, Image segmentation, Computer Science Applications, Nonlinear system, Kernel (image processing), Artificial intelligence, biomedical imaging - Abstract
With the thriving of deep learning, 3D convolutional neural networks have become a popular choice in volumetric image analysis due to their impressive 3D context mining ability. However, 3D convolutional kernels introduce a significant increase in the number of trainable parameters. Because training data are often limited in biomedical tasks, a trade-off has to be made between model size and representational power. To address this concern, in this paper we propose a novel 3D Dense Separated Convolution (3D-DSC) module to replace the original 3D convolutional kernels. The 3D-DSC module is constructed from a series of densely connected 1D filters. Decomposing the 3D kernel into 1D filters reduces the risk of overfitting by removing the redundancy of 3D kernels in a topologically constrained manner, while providing the infrastructure for deepening the network. By further introducing nonlinear layers and dense connections between the 1D filters, the network's representational power can be significantly improved while maintaining a compact architecture. We demonstrate the superiority of 3D-DSC on volumetric medical image classification and segmentation, two challenging tasks often encountered in biomedical image computing.
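The parameter saving behind the 1D decomposition is easy to verify: a k x k x k kernel carries k**3 weights, while a chain of three length-k filters carries only 3*k. The sketch below (plain numpy with zero-padded "same" convolution; the paper additionally inserts nonlinearities and dense connections between the 1D filters, omitted here) applies a separable kernel as three 1D passes:

```python
import numpy as np

def sep_conv3d(vol, f):
    """Apply a separable (rank-1) 3D kernel as three 1D convolutions,
    one along each axis of the volume."""
    out = vol
    for axis in range(3):
        out = np.apply_along_axis(
            lambda s: np.convolve(s, f, mode='same'), axis, out)
    return out

k = 3
print(f"full 3D kernel: {k**3} weights; separated 1D filters: {3 * k}")
```

For a rank-1 kernel the three-pass result matches the full 3D convolution exactly, so the 27-versus-9 weight saving at k=3 comes without any loss of expressiveness on separable kernels; non-separable kernels are where the dense connections and nonlinearities of the full module come in.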
- Published
- 2020
- Full Text
- View/download PDF
45. A Survey of the Applications of Vision Transformers in Medical Image Analysis (视觉Transformer在医学图像分析中的应用研究综述).
- Author
-
石磊, 籍庆余, 陈清威, 赵恒毅, and 张俊星
- Subjects
IMAGE recognition (Computer vision), NATURAL language processing, CONVOLUTIONAL neural networks, COMPUTER vision, IMAGE analysis, IMAGE registration, IMAGE segmentation - Abstract
Copyright of Journal of Computer Engineering & Applications is the property of Beijing Journal of Computer Engineering & Applications Journal Co Ltd. and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
46. An Improvement of Spatial Fuzzy C-means Clustering Method for Noisy Medical Image Analysis
- Author
-
Jack-Gérard Postaire, Kaddour Hachemi, Boucif Beddad, Frantisek Jabloncik, and Oussama Messai
- Subjects
Computer science, k-means clustering, Pattern recognition, Image segmentation, Fuzzy logic, Image (mathematics), Computer-aided diagnosis, Segmentation, Noise (video), Artificial intelligence, Cluster analysis - Abstract
Medical image segmentation plays a major role in MRI image processing; it is performed before the analysis and decision-making stages of many medical workflows. Many investigators have developed variants of the Fuzzy C-means method. In this work, a reliable automatic segmentation algorithm based on spatial FCM clustering is developed to minimize the effect of noise and intensity inhomogeneities. The approach combines two properties of spatial FCM, using neighborhood statistical characteristics and pillar k-means. The proposed system has been implemented in Simulink. Experimental results on brain MRI images show improved segmentation accuracy. The present work is compared with several well-known existing methods to demonstrate its effectiveness, contributing to the tools needed in computer-aided diagnosis systems that aim to assist specialists in making diagnostic decisions.
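The spatial idea, letting a pixel's membership be reinforced by its neighbors' memberships, can be sketched as a single post-processing step on the FCM membership maps. The exponents and the 3x3 neighborhood follow the common spatial-FCM formulation, not the authors' exact combination with pillar k-means:

```python
import numpy as np

def spatial_step(u, p=1, q=1):
    """One spatial-regularization step for FCM memberships on an image.

    u has shape (C, H, W): per-pixel membership in each of C clusters.
    h sums memberships over the 3x3 neighborhood, so isolated noisy
    pixels are pulled toward the label of their surroundings; p and q
    are the usual weighting exponents. (np.roll wraps at the borders,
    which is acceptable for a sketch.)
    """
    h = np.zeros_like(u)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            h += np.roll(np.roll(u, dy, axis=1), dx, axis=2)
    new = (u ** p) * (h ** q)
    return new / new.sum(axis=0, keepdims=True)
```

A lone pixel mislabeled by noise sits inside a neighborhood that votes against it, which is the mechanism that makes spatial FCM less noise-sensitive than plain FCM.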
- Published
- 2019
47. MDU-Net: multi-scale densely connected U-Net for biomedical image segmentation
- Author
-
Zhang, Jiawei, Zhang, Yanchun, Jin, Yuzhen, Xu, Jilan, and Xu, Xiaowei
- Published
- 2023
- Full Text
- View/download PDF
48. Medical image analysis methods in MR/CT-imaged acute-subacute ischemic stroke lesion: Segmentation, prediction and insights into dynamic evolution simulation models. A critical appraisal.
- Author
-
Rekik, Islem, Allassonnière, Stéphanie, Carpenter, Trevor K., and Wardlaw, Joanna M.
- Subjects
DIAGNOSTIC imaging, ISCHEMIA, IMAGE segmentation, SIMULATION methods & models, THERAPEUTIC use of tomography, MAGNETIC resonance imaging, NECROSIS - Abstract
Abstract: Over the last 15 years, basic thresholding techniques in combination with standard statistical correlation-based data analysis tools have been widely used to investigate different aspects of the evolution of acute or subacute to late-stage ischemic stroke in both human and animal data. Yet a wave of biology-dependent and imaging-dependent issues remains untackled, pointing towards the key question: "how does an ischemic stroke evolve?" Paving the way for potential answers to this question, both magnetic resonance (MRI) and computed tomography (CT) images have been used to visualize the lesion extent, either with or without spatial distinction between dead and salvageable tissue. Combining diffusion and perfusion imaging modalities may make it possible to predict further tissue recovery or eventual necrosis. Going beyond these basic thresholding techniques, in this critical appraisal we explore different semi-automatic or fully automatic 2D/3D medical image analysis methods and mathematical models applied to human, animal (rat/rodent) and/or synthetic ischemic stroke data to tackle one of the following three problems: (1) segmentation of infarcted and/or salvageable (also called penumbral) tissue, (2) prediction of final ischemic tissue fate (death or recovery), and (3) dynamic simulation of the evolution of the lesion core and/or penumbra. To highlight the key features of the reviewed segmentation and prediction methods, we propose a common categorization pattern. We also emphasize key aspects of the methods, such as the imaging modalities required to build and test each approach, the number of patients/animals or synthetic samples, the use of external user interaction, and the methods of assessment (clinical or imaging-based). Furthermore, we investigate whether key difficulties posed by the evolution of stroke, such as swelling or reperfusion, were detected by each method.
In the absence of any imaging-based macroscopic dynamic model applied to ischemic stroke, we offer insights into relevant microscopic dynamic models simulating the evolution of brain ischemia, in the hope of furthering promising and challenging 4D imaging-based dynamic models. By depicting the major pitfalls and the advanced aspects of the reviewed methods, we present an overall critique of their performance and conclude our discussion with recommendations for future research focusing on one or more of the three addressed problems. [Copyright © Elsevier]
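The "basic thresholding" baseline this appraisal repeatedly critiques amounts to a pair of cutoffs on diffusion and perfusion maps; a schematic version looks like the following (threshold values and parameter names are illustrative placeholders, not clinical guidance):

```python
import numpy as np

def core_and_penumbra(adc, ttp, adc_thresh=550.0, ttp_thresh=6.0):
    """Schematic diffusion-perfusion mismatch segmentation.

    core:     diffusion-restricted tissue (low apparent diffusion
              coefficient, here in units of 1e-6 mm^2/s)
    penumbra: perfusion deficit (long time-to-peak, in seconds)
              outside the core, i.e. the diffusion-perfusion mismatch
    """
    core = adc < adc_thresh
    penumbra = (ttp > ttp_thresh) & ~core
    return core, penumbra
```

The review's point is precisely that such fixed cutoffs cannot express swelling, reperfusion, or any other dynamics of how the core grows into the penumbra, which is what the surveyed segmentation, prediction, and simulation methods try to capture instead.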
- Published
- 2012
- Full Text
- View/download PDF
49. Carbon Footprint of Selecting and Training Deep Learning Models for Medical Image Analysis
- Author
-
Selvan, Raghavendra, Bhagwat, Nikhil, Wolff Anthony, Lasse F., Kanding, Benjamin, Dam, Erik B., Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Wang, Linwei, editor, Dou, Qi, editor, Fletcher, P. Thomas, editor, Speidel, Stefanie, editor, and Li, Shuo, editor
- Published
- 2022
- Full Text
- View/download PDF
50. Segment anything model for medical image analysis: An experimental study.
- Author
-
Mazurowski, Maciej A., Dong, Haoyu, Gu, Hanxue, Yang, Jichen, Konz, Nicholas, and Zhang, Yixin
- Subjects
-
IMAGE analysis, DIAGNOSTIC imaging, IMAGE segmentation, COMPUTED tomography, BRAIN tumors - Abstract
Training segmentation models for medical images continues to be challenging due to the limited availability of data annotations. Segment Anything Model (SAM) is a foundation model trained on over 1 billion annotations, predominantly for natural images, that is intended to segment user-defined objects of interest in an interactive manner. While the model's performance on natural images is impressive, medical image domains pose their own set of challenges. Here, we perform an extensive evaluation of SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies. In our experiments, we generated point and box prompts for SAM using a standard method that simulates interactive segmentation. We report the following findings: (1) SAM's performance based on single prompts varies widely depending on the dataset and the task, from IoU=0.1135 for spine MRI to IoU=0.8650 for hip X-ray. (2) Segmentation performance appears to be better for well-circumscribed objects with less ambiguous prompts, such as the segmentation of organs in computed tomography, and poorer in various other scenarios, such as the segmentation of brain tumors. (3) SAM performs notably better with box prompts than with point prompts. (4) SAM outperforms similar methods RITM, SimpleClick, and FocalClick in almost all single-point prompt settings. (5) When multiple point prompts are provided iteratively, SAM's performance generally improves only slightly, while the other methods improve to a level that surpasses SAM's point-based performance. We also provide several illustrations of SAM's performance on all tested datasets, iterative segmentation, and SAM's behavior given prompt ambiguity. We conclude that SAM shows impressive zero-shot segmentation performance for certain medical imaging datasets, but moderate to poor performance for others. 
SAM has the potential to make a significant impact in automated medical image segmentation, but appropriate care needs to be applied when using it. Code for evaluating SAM is made publicly available at https://github.com/mazurowski-lab/segment-anything-medical-evaluation. • Segment Anything Model (SAM) is a new algorithm for interactive image segmentation. • Performance of SAM varies widely across the 19 evaluated medical imaging datasets. • SAM performs best when box annotations are provided for each component of the object. • SAM outperforms other methods in interactive but non-iterative modes. • SAM is likely to be a valuable tool in medical image segmentation if used correctly. [ABSTRACT FROM AUTHOR]
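The prompt-simulation protocol described in the abstract, a point or box derived from the ground-truth mask and scored by IoU, can be sketched as follows. The exact sampling rule in the paper may differ; this is a plausible stand-in:

```python
import numpy as np

def point_prompt(mask):
    """Simulated interactive click: the foreground pixel nearest the
    mask's centroid, so the point lands well inside the object."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    i = np.argmin((ys - cy) ** 2 + (xs - cx) ** 2)
    return int(ys[i]), int(xs[i])

def box_prompt(mask):
    """Tight bounding box around the ground-truth mask."""
    ys, xs = np.nonzero(mask)
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())

def iou(pred, gt):
    """Intersection over union, the metric reported in the study."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0
```

Feeding such prompts to an interactive segmenter and scoring the returned mask with `iou` against the ground truth reproduces the non-iterative single-prompt setting in which the study found box prompts to work notably better than point prompts.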
- Published
- 2023
- Full Text
- View/download PDF