77 results for "Image texture"
Search Results
2. Topological data analysis in medical imaging: current state of the art.
- Author
- Singh, Yashbir, Farrelly, Colleen M., Hathaway, Quincy A., Leiner, Tim, Jagtap, Jaidip, Carlsson, Gunnar E., and Erickson, Bradley J.
- Subjects
- COMPUTER-assisted image analysis (Medicine), IMAGE analysis, DIAGNOSTIC imaging, DATA analysis, THREE-dimensional imaging
- Abstract
Machine learning, and especially deep learning, is rapidly gaining acceptance and clinical usage in a wide range of image analysis applications and is regarded as providing high performance in detecting anatomical structures and in identifying and classifying patterns of disease in medical images. However, there are many roadblocks to the widespread implementation of machine learning in clinical image analysis, including differences in data capture leading to different measurements, the high dimensionality of imaging and other medical data, and the black-box nature of machine learning, with a lack of insight into relevant features. Techniques such as radiomics have been used in traditional machine learning approaches to model the mathematical relationships between adjacent pixels in an image and provide an explainable framework for clinicians and researchers. Newer paradigms, such as topological data analysis (TDA), have recently been adopted to design and develop innovative image analysis schemes that go beyond the abilities of pixel-to-pixel comparisons. TDA can automatically construct filtrations of topological shapes of image texture through a technique known as persistent homology (PH); these features can then be fed into machine learning models that provide explainable outputs and can distinguish different image classes more efficiently than other currently used methods. The aim of this review is to introduce PH and its variants and to review TDA's recent successes in medical imaging studies. Key points: Topological data analysis (TDA) provides information on the shape of data. In radiology, the shape of 2D and 3D images contains additional information. TDA can be combined with other applications, such as textural analysis. Persistent homology can provide a visual representation of extracted TDA data. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
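The persistent-homology construction summarized in the abstract above can be illustrated with a minimal, dependency-free sketch: 0-dimensional sublevel-set persistence of a grayscale image, where connected components are born as the intensity threshold rises and die when they merge (the elder rule). This is illustrative only, not the authors' pipeline; practical TDA work uses libraries such as GUDHI or Ripser and also tracks higher-dimensional features such as loops.

```python
# Sketch: 0-dimensional persistent homology of an image's sublevel-set
# filtration, via union-find and the elder rule. Each (birth, death)
# pair records a connected component of {pixel <= t} that appears at
# t = birth and merges into an older component at t = death.

def persistence_0d(image):
    """(birth, death) pairs for components of the sublevel sets of a 2D grid."""
    h, w = len(image), len(image[0])
    parent, birth = {}, {}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p

    pairs = []
    # Add pixels in order of increasing intensity (the filtration).
    for (i, j) in sorted(((i, j) for i in range(h) for j in range(w)),
                         key=lambda p: image[p[0]][p[1]]):
        v = image[i][j]
        parent[(i, j)] = (i, j)
        birth[(i, j)] = v
        for (ni, nj) in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if (ni, nj) in parent:            # neighbor already in filtration
                ra, rb = find((i, j)), find((ni, nj))
                if ra != rb:
                    # Elder rule: the younger component dies at level v.
                    old, young = sorted((ra, rb), key=lambda r: birth[r])
                    if birth[young] < v:
                        pairs.append((birth[young], v))
                    parent[young] = old
    # One component survives all thresholds (infinite persistence).
    pairs.append((min(min(row) for row in image), float("inf")))
    return sorted(pairs)

# Two dark blobs (intensity 1) separated by a bright ridge (intensity 5):
img = [[1, 5, 1],
       [1, 5, 1],
       [1, 5, 1]]
print(persistence_0d(img))  # [(1, 5), (1, inf)]
```

The finite pair (1, 5) records that one blob persists from intensity 1 until the ridge at intensity 5 merges the two blobs; such persistence values are the texture descriptors fed to downstream classifiers.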
3. Hahn-PCNN-CNN: an end-to-end multi-modal brain medical image fusion framework useful for clinical diagnosis.
- Author
- Guo, Kai, Li, Xiongfei, Hu, Xiaohan, Liu, Jichen, and Fan, Tiehu
- Subjects
- COMPUTER-assisted image analysis (Medicine), DIAGNOSTIC imaging, BRAIN imaging, DIAGNOSIS, IMAGE fusion, DEEP learning
- Abstract
Background: In medical diagnosis of the brain, the role of multi-modal medical image fusion is becoming more prominent. Approaches range from filtering-based layered fusion to newly emerging deep learning algorithms. The former is fast but produces blurred fused-image texture; the latter fuses better but demands more computing power. Therefore, finding an algorithm balanced in image quality, speed, and computing power remains the focus of research. Methods: We built an end-to-end Hahn-PCNN-CNN. The network is composed of a feature extraction module, a feature fusion module, and an image reconstruction module. We selected 8,000 multi-modal brain medical images downloaded from the Harvard Medical School website to train the feature extraction and image reconstruction layers, enhancing the network's ability to reconstruct brain medical images. In the feature fusion module, we use the moments of the feature map combined with a pulse-coupled neural network to reduce the information loss caused by convolution in the preceding fusion module and to save time. Results: We chose eight sets of registered multi-modal brain medical images across four diseases to verify our model. The anatomical structure images are from MRI, and the functional metabolism images are from SPECT and 18F-FDG. We also selected eight representative fusion models as comparative experiments. For objective quality evaluation, we selected six evaluation metrics in five categories. Conclusions: The fused image obtained by our model retains the effective information in the source images to the greatest extent. In terms of image fusion evaluation metrics, our model is superior to the other comparison algorithms. It also performs well in computational efficiency, and in terms of robustness it is very stable and can be generalized to multi-modal image fusion of other organs. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
4. Preoperative prediction of perineural invasion and KRAS mutation in colon cancer using machine learning.
- Author
- Li, Yu, Eresen, Aydin, Shangguan, Junjie, Yang, Jia, Benson III, Al B., Yaghmai, Vahid, and Zhang, Zhuoli
- Subjects
- COLON cancer, MACHINE learning, FORECASTING, COMPUTER-assisted image analysis (Medicine), SURGICAL excision
- Abstract
Purpose: Preoperative prediction of perineural invasion (PNI) and Kirsten RAS (KRAS) mutation in colon cancer is critical for treatment planning and patient management. We developed machine learning models for diagnosis of PNI and KRAS mutation in colon cancer patients by interpreting preoperative CT. Methods: This retrospective study included 207 patients who received surgical resection in our institution. The underlying tumor characteristics were described by quantitatively analyzing CT image texture. The key radiomics features were determined with similarity analysis followed by the RELIEFF method among 306 CT imaging features. Eight kernel-based support vector machine classifiers were constructed using individual-stage (II, III, or IV) or multi-stage (II + III + IV) patient cohorts for predicting PNI and KRAS mutation. Model performance was evaluated using accuracy, receiver operating characteristic, and decision curve analyses. Results: Multi-stage classifiers obtained AUCs of 0.793 and 0.862 for detecting PNI and KRAS mutation in the test cohort. Moreover, individual-stage classifiers demonstrated significantly improved diagnostic performance at all stages (test-cohort AUCs for PNI and KRAS mutation, respectively: stage II [0.86; 0.99], stage III [0.99; 0.99], and stage IV [1.00; 1.00]). Besides, stage II tumors are better described with coarse texture features, while more detailed features are required to characterize advanced-stage tumors (III and IV) for diagnosis of PNI or KRAS mutation. Conclusion: Machine learning models developed using preoperative CT data can predict PNI and KRAS mutation in colon cancer patients with satisfactory performance. Individual-stage models better characterized the relationship between CT features and PNI or KRAS mutation than multi-stage models and demonstrated good prediction scores. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
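The RELIEFF step mentioned in the abstract above ranks features by how well they separate each sample from its nearest neighbor of the other class while keeping it close to its nearest same-class neighbor. Below is a hedged sketch of the basic two-class Relief variant (not the full RELIEFF, which averages over k neighbors per class and handles multi-class data); the data and numbers are toy examples, not the study's.

```python
# Sketch of Relief feature weighting: for each sample, find its nearest
# "hit" (same class) and nearest "miss" (other class); features that
# differ at the miss but not at the hit gain weight.

def relief_weights(X, y):
    n, m = len(X), len(X[0])
    # Feature ranges for normalization (guard against constant features).
    rng = [max(x[f] for x in X) - min(x[f] for x in X) or 1.0 for f in range(m)]
    dist = lambda a, b: sum(abs(a[f] - b[f]) / rng[f] for f in range(m))
    w = [0.0] * m
    for i in range(n):
        hits = [j for j in range(n) if j != i and y[j] == y[i]]
        misses = [j for j in range(n) if y[j] != y[i]]
        hit = min(hits, key=lambda j: dist(X[i], X[j]))
        miss = min(misses, key=lambda j: dist(X[i], X[j]))
        for f in range(m):
            w[f] += (abs(X[i][f] - X[miss][f])
                     - abs(X[i][f] - X[hit][f])) / (rng[f] * n)
    return w

# Toy data: feature 0 separates the classes, feature 1 is noise.
X = [[0.0, 0.3], [0.1, 0.9], [0.9, 0.2], [1.0, 0.8]]
y = [0, 0, 1, 1]
w = relief_weights(X, y)
print(w)  # feature 0 gets a large positive weight, feature 1 a negative one
assert w[0] > w[1]
```

Features are then ranked by weight and the top-ranked ones kept, which is the role RELIEFF plays in the radiomics pipeline described above.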
5. MDC-RHT: Multi-Modal Medical Image Fusion via Multi-Dimensional Dynamic Convolution and Residual Hybrid Transformer.
- Author
- Wang, Wenqing, He, Ji, Liu, Han, and Yuan, Wei
- Subjects
- IMAGE fusion, COMPUTER-assisted image analysis (Medicine), TRANSFORMER models, DIAGNOSTIC imaging, FEATURE extraction, MULTIMODAL user interfaces, STRUCTURAL models
- Abstract
The fusion of multi-modal medical images has great significance for comprehensive diagnosis and treatment. However, the large differences between the various modalities of medical images make multi-modal medical image fusion a great challenge. This paper proposes a novel multi-scale fusion network based on multi-dimensional dynamic convolution and residual hybrid transformer, which has better capability for feature extraction and context modeling and improves the fusion performance. Specifically, the proposed network exploits multi-dimensional dynamic convolution that introduces four attention mechanisms corresponding to four different dimensions of the convolutional kernel to extract more detailed information. Meanwhile, a residual hybrid transformer is designed, which activates more pixels to participate in the fusion process by channel attention, window attention, and overlapping cross attention, thereby strengthening the long-range dependence between different modes and enhancing the connection of global context information. A loss function, including perceptual loss and structural similarity loss, is designed, where the former enhances the visual reality and perceptual details of the fused image, and the latter enables the model to learn structural textures. The whole network adopts a multi-scale architecture and uses an unsupervised end-to-end method to realize multi-modal image fusion. Finally, our method is tested qualitatively and quantitatively on mainstream datasets. The fusion results indicate that our method achieves high scores in most quantitative indicators and satisfactory performance in visual qualitative analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. A machine learning model based on clinical features and ultrasound radiomics features for pancreatic tumor classification.
- Author
- Shunhan Yao, Dunwei Yao, Yuanxiang Huang, Shanyu Qin, and Qingfeng Chen
- Subjects
- MACHINE learning, PANCREATIC tumors, RADIOMICS, TUMOR classification, ENDOSCOPIC ultrasonography, COMPUTER-assisted image analysis (Medicine)
- Abstract
Objective: This study aimed to construct a machine learning model using clinical variables and ultrasound radiomics features for the prediction of the benign or malignant nature of pancreatic tumors. Methods: A total of 242 pancreatic tumor patients who were hospitalized at the First Affiliated Hospital of Guangxi Medical University between January 2020 and June 2023 were included in this retrospective study. The patients were randomly divided into a training cohort (n=169) and a test cohort (n=73). We collected 28 clinical features from the patients. Concurrently, 306 radiomics features were extracted from the ultrasound images of the patients' tumors. Initially, a clinical model was constructed using the logistic regression algorithm. Subsequently, radiomics models were built using SVM, random forest, XGBoost, and KNN algorithms. Finally, we combined the clinical features with a new feature, RAD prob, calculated by applying the radiomics model, to construct a fusion model, and developed a nomogram based on the fusion model. Results: The performance of the fusion model surpassed that of both the clinical and radiomics models. In the training cohort, the fusion model achieved an AUC of 0.978 (95% CI: 0.96-0.99) during 5-fold cross-validation and an AUC of 0.925 (95% CI: 0.86-0.98) in the test cohort. Calibration curve and decision curve analyses demonstrated that the nomogram constructed from the fusion model has high accuracy and clinical utility. Conclusion: The fusion model combining clinical and ultrasound radiomics features showed excellent performance in predicting the benign or malignant nature of pancreatic tumors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Feasibility of ultrasound radiomics based models for classification of liver fibrosis due to Schistosoma japonicum infection.
- Author
- Guo, Zhaoyu, Zhao, Miaomiao, Liu, Zhenhua, Zheng, Jinxin, Gong, Yanfeng, Huang, Lulu, Xue, Jingbo, Zhou, Xiaonong, and Li, Shizhu
- Subjects
- HEPATIC fibrosis, SCHISTOSOMA japonicum, RADIOMICS, MACHINE learning, STUNTED growth, COMPUTER-assisted image analysis (Medicine)
- Abstract
Background: Schistosomiasis japonica represents a significant public health concern in South Asia. There is an urgent need to optimize existing schistosomiasis diagnostic techniques. This study aims to develop models for the different stages of liver fibrosis caused by Schistosoma infection utilizing ultrasound radiomics and machine learning techniques. Methods: From 2018 to 2022, we retrospectively collected data on 1,531 patients and 5,671 B-mode ultrasound images from the Second People's Hospital of Duchang City, Jiangxi Province, China. The datasets were screened based on inclusion and exclusion criteria suitable for radiomics models. Liver fibrosis due to Schistosoma infection (LFSI) was categorized into four stages: grade 0, grade 1, grade 2, and grade 3. The data were divided into six binary classification problems, such as group 1 (grade 0 vs. grade 1) and group 2 (grade 0 vs. grade 2). Key radiomic features were extracted using Pyradiomics, the Mann-Whitney U test, and the Least Absolute Shrinkage and Selection Operator (LASSO). Machine learning models were constructed using Support Vector Machine (SVM), and the contribution of different features in the model was described by applying Shapley Additive Explanations (SHAP). Results: This study ultimately included 1,388 patients and their corresponding images. A total of 851 radiomics features were extracted for each binary classification problem. Following feature selection, 18 to 76 features were retained from each group. The area under the receiver operating characteristic curve (AUC) for the validation cohorts was 0.834 (95% CI: 0.779–0.885) for LFSI grade 0 vs. LFSI grade 1, 0.771 (95% CI: 0.713–0.835) for LFSI grade 1 vs. LFSI grade 2, and 0.830 (95% CI: 0.762–0.885) for LFSI grade 2 vs. LFSI grade 3. Conclusion: Machine learning models based on ultrasound radiomics are feasible for classifying different stages of liver fibrosis caused by Schistosoma infection.
Author summary: Schistosomiasis is a devastating disease caused by parasitic worms, leading to stunting, reduced learning ability in children, and impaired work capacity in adults. Currently, there is no ideal staging system to assess schistosomiasis-related liver conditions. Advances in machine learning can help us understand and evaluate liver ultrasound images in entirely new dimensions. Regions with high infection rates are predominantly underdeveloped and characterized by a scarcity of medical resources, where B-mode ultrasound equipment serves as one of the primary diagnostic tools. This study aims to develop an intelligent recognition model based on ultrasound radiomics and machine learning to provide a basis for the ultrasound diagnosis of schistosomiasis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
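The Mann-Whitney U filtering step named in the abstract above is a rank-based test of whether a radiomic feature's values differ between two patient groups, with no normality assumption. A toy sketch of the U statistic with midranks for ties follows; a real analysis would use `scipy.stats.mannwhitneyu`, which also provides the p-value, and the feature values here are hypothetical.

```python
# Toy Mann-Whitney U: pool the two samples, rank them (midranks for
# ties), and compare group a's rank sum against its minimum possible
# value. U = 0 or U = len(a)*len(b) means complete separation.

def mann_whitney_u(a, b):
    """U statistic for sample a versus sample b."""
    values = sorted(a + b)
    rank = {}
    i = 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        mid = (i + 1 + j) / 2.0          # average of ranks i+1 .. j
        for k in range(i, j):
            rank[values[k]] = mid
        i = j
    r1 = sum(rank[v] for v in a)         # rank sum of group a
    return r1 - len(a) * (len(a) + 1) / 2.0

# A feature uniformly lower in one group than the other (toy numbers):
u = mann_whitney_u([2.1, 2.4, 1.9], [3.0, 3.5, 2.8])
print(u)  # 0.0
```

Features whose U is far from n1*n2/2 (strong group separation, small p-value) pass this univariate filter and move on to the LASSO step described in the abstract.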
8. A hybrid framework for glaucoma detection through federated machine learning and deep learning models.
- Author
- Aljohani, Abeer and Aburasain, Rua Y.
- Subjects
- DEEP learning, MACHINE learning, GLAUCOMA, CONVOLUTIONAL neural networks, VISION disorders, COMPUTER-assisted image analysis (Medicine)
- Abstract
Background: Glaucoma, the second leading cause of global blindness, demands timely detection due to its asymptomatic progression. This paper introduces an advanced computerized system that integrates Machine Learning (ML), convolutional neural networks (CNNs), and image processing for accurate glaucoma detection from medical imaging data, surpassing prior research efforts. Method: We developed a hybrid glaucoma detection framework using CNNs (ResNet50, VGG-16) and Random Forest. The models analyze pre-processed retinal images independently, and post-processing rules combine their predictions into an overall glaucoma impact assessment. Result: The hybrid framework achieves a significant 95.41% accuracy, with precision and recall at 99.37% and 88.37%, respectively. The F1 score, balancing precision and recall, reaches a commendable 93.52%. These results highlight the robustness and effectiveness of the hybrid framework in accurate glaucoma diagnosis. Conclusion: In summary, our research presents an innovative hybrid framework combining CNNs and traditional ML models for glaucoma detection. Using ResNet50, VGG-16, and Random Forest in an ensemble approach yields remarkable accuracy, precision, recall, and F1 score. These results showcase the methodology's potential to enhance glaucoma diagnosis, emphasizing its promising role in early detection and preventing irreversible vision loss. The integration of ML and DNNs in medical imaging analysis suggests a valuable path for future advancements in ophthalmic healthcare. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. A Critical Analysis of Deep Semi-Supervised Learning Approaches for Enhanced Medical Image Classification.
- Author
- Shakya, Kaushlesh Singh, Alavi, Azadeh, Porteous, Julie, K, Priti, Laddi, Amit, and Jaiswal, Manojkumar
- Subjects
- SUPERVISED learning, IMAGE recognition (Computer vision), DEEP learning, COMPUTER-assisted image analysis (Medicine), MEDICAL coding, DIAGNOSTIC imaging
- Abstract
Deep semi-supervised learning (DSSL) is a machine learning paradigm that blends supervised and unsupervised learning techniques to improve the performance of various models in computer vision tasks. Medical image classification plays a crucial role in disease diagnosis, treatment planning, and patient care. However, obtaining labeled medical image data is often expensive and time-consuming for medical practitioners, leading to limited labeled datasets. DSSL techniques aim to address this challenge, particularly in various medical image tasks, to improve model generalization and performance. DSSL models leverage both the labeled information, which provides explicit supervision, and the unlabeled data, which can provide additional information about the underlying data distribution. This offers a practical solution to the resource-intensive demands of data annotation and enhances the model's ability to generalize across diverse and previously unseen data landscapes. The present study provides a critical review of various DSSL approaches and their effectiveness and challenges in enhancing medical image classification tasks. The study categorized DSSL techniques into six classes: consistency regularization method, deep adversarial method, pseudo-learning method, graph-based method, multi-label method, and hybrid method. Further, a comparative analysis of the performance of the six methods is conducted using existing studies. The referenced studies have employed metrics such as accuracy, sensitivity, specificity, AUC-ROC, and F1 score to evaluate the performance of DSSL methods on different medical image datasets. Additionally, challenges such as dataset heterogeneity, limited labeled data, and model interpretability were discussed and highlighted in the context of DSSL for medical image classification.
The current review provides future directions and considerations to researchers to further address the challenges and take full advantage of these methods in clinical practices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. An X-ray image classification method with fine-grained features for explainable diagnosis of pneumoconiosis.
- Author
- Zhang, Chunmei, He, Jia, and Shang, Lin
- Subjects
- IMAGE recognition (Computer vision), X-ray imaging, DUST diseases, COMPUTER-aided diagnosis, COMPUTER-assisted image analysis (Medicine)
- Abstract
Medical image classification has become popular in computer-aided diagnosis (CAD) of pneumoconiosis. However, most current work focuses on improving the accuracy of classification results and has overlooked the corresponding medical explanations. With the expectation to achieve these two sub-goals simultaneously, we propose an explainable X-ray image classification method with fine-grained features to diagnose pneumoconiosis. The proposed method consists of three consecutive stages. First, we generate a highlighted discriminative region by gradient-weighted class activation mapping (Grad-CAM) for each sample. Thus, we can give a visual explanation for the basis of classification. Then, we utilize selective convolutional descriptor aggregation (SCDA) to extract fine-grained features from the obtained discriminative region. After dimension reduction of obtained fine-grained features, we finally make a classification with these features to discover which samples are diseased. Extensive experiments on actual pneumoconiosis X-ray image datasets have shown the validity and superiority of our method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Texture analysis using short-tau inversion recovery magnetic resonance images to differentiate squamous cell carcinoma of the gingiva from medication-related osteonecrosis of the jaw.
- Author
- Ito, Kotaro, Hirahara, Naohisa, Muraoka, Hirotaka, Sawada, Eri, Tokunaga, Satoshi, and Kaneda, Takashi
- Subjects
- JAW diseases, SQUAMOUS cell carcinoma, GINGIVAL neoplasms, COMPUTER-assisted image analysis (Medicine), QUALITATIVE research, DIPHOSPHONATES, RESEARCH funding, COMPUTED tomography, GINGIVA, MAGNETIC resonance imaging, CANCER patients, RETROSPECTIVE studies, DESCRIPTIVE statistics, QUANTITATIVE research, CASE-control method, COMPUTER-aided diagnosis, COMPARATIVE studies, DATA analysis software, DIGITAL image processing, OSTEONECROSIS, SENSITIVITY & specificity (Statistics)
- Abstract
Objectives: Despite the difficulty in distinguishing between squamous cell carcinoma (SCC) and medication-related osteonecrosis of the jaw (MRONJ) on the basis of medical imaging examinations, the two conditions have completely different treatment methods and prognoses. Therefore, differentiation of SCC from MRONJ on imaging examinations is very important. This study aimed to distinguish SCC from MRONJ by performing texture analysis using magnetic resonance imaging (MRI) short-tau inversion recovery images. Methods: This retrospective case–control study included 14 patients with SCC of the lower gingiva and 35 with MRONJ of the mandible who underwent MRI and computed tomography (CT) for suspected SCC or MRONJ. SCC was identified by histopathological examination of tissues excised during surgery. The radiomics features of SCC and MRONJ were analyzed using the open-access software MaZda version 3.3 (Technical University of Lodz, Institute of Electronics, Poland). CT was used to evaluate the presence or absence of qualitative findings (sclerosis, sequestrum, osteolysis, periosteal reaction, and cellulitis) of SCC and MRONJ. Results: Among the 19 texture features selected using MaZda feature-reduction methods, SCC of the gingiva and MRONJ of the mandible revealed differences in two histogram features, one absolute gradient feature, and 16 gray-level co-occurrence matrix features. In particular, the percentile, angular second moment, entropy, and difference entropy exhibited excellent diagnostic performance. Conclusion: Non-contrast-enhanced MRI texture analysis revealed differences in texture parameters between mandibular SCC and mandibular MRONJ. MRI texture analysis can be a new noninvasive quantitative method for distinguishing between SCC and MRONJ. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
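The texture features named in this abstract (angular second moment, entropy, difference entropy) all derive from the gray-level co-occurrence matrix (GLCM): the normalized counts of gray-level pairs at a fixed pixel offset. Below is a minimal sketch for a single offset on a pre-quantized image; MaZda and similar tools compute these over many offsets, distances, and quantization levels, and the example images are toy data.

```python
# Sketch: GLCM for one offset, plus three of the Haralick-style
# features mentioned in the abstract. Pixel values must already be
# quantized to 0 .. levels-1.
import math

def glcm(image, dx=1, dy=0, levels=4):
    """Normalized co-occurrence matrix P[i][j] for offset (dx, dy)."""
    p = [[0.0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                p[image[y][x]][image[ny][nx]] += 1
                total += 1
    return [[v / total for v in row] for row in p]

def texture_features(p):
    levels = len(p)
    asm = sum(v * v for row in p for v in row)          # angular second moment
    ent = -sum(v * math.log2(v) for row in p for v in row if v > 0)
    # Difference entropy: entropy of the distribution of |i - j|.
    pd = [0.0] * levels
    for i in range(levels):
        for j in range(levels):
            pd[abs(i - j)] += p[i][j]
    dent = -sum(v * math.log2(v) for v in pd if v > 0)
    return {"ASM": asm, "Entropy": ent, "DifferenceEntropy": dent}

uniform = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]   # flat texture
noisy   = [[0, 3, 1], [2, 0, 3], [1, 2, 0]]   # varied texture
f_u = texture_features(glcm(uniform))
f_n = texture_features(glcm(noisy))
print(f_u["ASM"], f_n["ASM"])  # flat texture has the maximal ASM of 1.0
assert f_u["ASM"] > f_n["ASM"] and f_u["Entropy"] < f_n["Entropy"]
```

High ASM and low entropy indicate homogeneous texture; the study's finding is that these statistics distribute differently over SCC and MRONJ regions.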
12. Is Computer-Assisted Tissue Image Analysis the Future in Minimally Invasive Surgery? A Review on the Current Status of Its Applications.
- Author
- Tanos, Vasilios, Neofytou, Marios, Soliman, Ahmed Samy Abdulhady, Tanos, Panayiotis, and Pattichis, Constantinos S.
- Subjects
- COMPUTER-assisted image analysis (Medicine), IMAGE analysis, SKIN cancer, MINIMALLY invasive procedures, TISSUE analysis, BASAL cell carcinoma, ENDOSCOPIC surgery, EARLY diagnosis
- Abstract
Purpose: Computer-assisted tissue image analysis (CATIA) enables an optical biopsy of human tissue during minimally invasive surgery and endoscopy. Thus far, it has been implemented in gastrointestinal, endometrial, and dermatologic examinations that use computational analysis and image texture feature systems. We review and evaluate the impact of in vivo optical biopsies performed by tissue image analysis on the surgeon's diagnostic ability and sampling precision and investigate how operation complications could be minimized. Methods: We performed a literature search in PubMed, IEEE Xplore, Elsevier, and Google Scholar, which yielded 28 relevant articles. Our literature review summarizes the available data on CATIA of human tissues and explores the possibilities of computer-assisted early disease diagnoses, including cancer. Results: Hysteroscopic image texture analysis of the endometrium successfully distinguished benign from malignant conditions up to 91% of the time. In dermatologic studies, the accuracy of distinguishing melanoma from benign nevi fluctuated from 73% to 81%. Skin biopsies of basal cell carcinoma and melanoma exhibited an accuracy of 92.4%, sensitivity of 99.1%, and specificity of 93.3% and distinguished nonmelanoma and normal lesions from benign precancerous lesions with 91.9% and 82.8% accuracy, respectively. Gastrointestinal and endometrial examinations are still at the experimental phase. Conclusions: CATIA is a promising application for distinguishing normal from abnormal tissues during endoscopic procedures and minimally invasive surgeries. However, the efficacy of computer-assisted diagnostics in distinguishing benign from malignant states is still not well documented. Prospective and randomized studies are needed before CATIA is implemented in clinical practice. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
13. DRCM: a disentangled representation network based on coordinate and multimodal attention for medical image fusion.
- Author
- Wanwan Huang, Han Zhang, Yu Cheng, and Xiongwen Quan
- Subjects
- IMAGE fusion, COMPUTER-assisted image analysis (Medicine), DIAGNOSTIC imaging, DEEP learning, ATTENTION
- Abstract
Recent studies on medical image fusion based on deep learning have made remarkable progress, but the common and exclusive features of different modalities, especially their subsequent feature enhancement, are ignored. Since medical images of different modalities have unique information, special learning of exclusive features should be designed to express the unique information of different modalities so as to obtain a medical fusion image with more information and details. Therefore, we propose an attention mechanism-based disentangled representation network for medical image fusion, which designs coordinate attention and multimodal attention to extract and strengthen common and exclusive features. First, the common and exclusive features of each modality were obtained by the cross mutual information and adversarial objective methods, respectively. Then, coordinate attention is focused on the enhancement of the common and exclusive features of different modalities, and the exclusive features are weighted by multimodal attention. Finally, these two kinds of features are fused. The effectiveness of the three innovation modules is verified by ablation experiments. Furthermore, eight comparison methods are selected for qualitative analysis, and four metrics are used for quantitative comparison. The values of the four metrics demonstrate the effectiveness of the DRCM. In particular, the DRCM achieved better results on the SCD, Nabf, and MS-SSIM metrics, which indicates that the DRCM achieved the goal of further improving the visual quality of the fused image, with more information from the source images and less noise. Through the comprehensive comparison and analysis of the experimental results, it was found that the DRCM outperforms the comparison methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. Multiparametric MRI in Era of Artificial Intelligence for Bladder Cancer Therapies.
- Author
- Akin, Oguz, Lema-Dopico, Alfonso, Paudyal, Ramesh, Konar, Amaresha Shridhar, Chenevert, Thomas L., Malyarenko, Dariya, Hadjiiski, Lubomir, Al-Ahmadie, Hikmat, Goh, Alvin C., Bochner, Bernard, Rosenberg, Jonathan, Schwartz, Lawrence H., and Shukla-Dave, Amita
- Subjects
- BLADDER tumors, DIGITAL image processing, CANCER invasiveness, MAGNETIC resonance imaging, ARTIFICIAL intelligence, TUMOR classification, COMPUTER-assisted image analysis (Medicine), TUMOR markers, MUSCLE tumors, ARTIFICIAL neural networks
- Abstract
Simple Summary: Bladder cancer is the sixth most common cancer in the United States. The prognosis is excellent for localized forms, but the survival rates drop significantly when cancer invades the smooth muscle of the bladder. Imaging is essential for the accurate staging, prognosis, and assessment of therapeutic efficacy in bladder cancer and has the potential to guide personalized treatment strategies. Computed tomography has traditionally been the standard modality, but magnetic resonance imaging (MRI) is the emerging technique of choice for its superior soft tissue contrast without exposure to ionizing radiation. Multiparametric (mp)MRI provides physiological data interrogating the biology of the tumor, as well as high-resolution anatomical images. Advanced MRI techniques have enabled new imaging-based clinical endpoints, including novel scoring systems for tumor staging. Artificial intelligence (AI) holds the potential for the automated discovery of clinically relevant patterns in mpMRI images of the bladder. This review focuses on the principles, applications, and performance of mpMRI for bladder imaging. Quantitative imaging biomarkers (QIBs) derived from mpMRI are increasingly used in oncological applications, including tumor staging, prognosis, and assessment of treatment response. To standardize mpMRI acquisition and interpretation, an expert panel developed the Vesical Imaging–Reporting and Data System (VI-RADS). Many studies confirm the standardization and high degree of inter-reader agreement to discriminate muscle invasiveness in bladder cancer, supporting VI-RADS implementation in routine clinical practice. The standard MRI sequences for VI-RADS scoring are anatomical imaging, including T2-weighted (T2w) images, and physiological imaging with diffusion-weighted MRI (DW-MRI) and dynamic contrast-enhanced MRI (DCE-MRI). Physiological QIBs derived from analysis of DW- and DCE-MRI data and radiomic image features extracted from mpMRI images play an important role in bladder cancer. The current development of AI tools for analyzing mpMRI data and their potential impact on bladder imaging are surveyed. AI architectures are often implemented based on convolutional neural networks (CNNs), focusing on narrow/specific tasks. The application of AI can substantially impact bladder imaging clinical workflows; for example, manual tumor segmentation, which demands high time commitment and has inter-reader variability, can be replaced by an autosegmentation tool. The use of mpMRI and AI is projected to drive the field toward the personalized management of bladder cancer patients. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology.
- Author
- Jost, Elena, Kosian, Philipp, Jimenez Cruz, Jorge, Albarqouni, Shadi, Gembruch, Ulrich, Strizek, Brigitte, and Recker, Florian
- Subjects
- ULTRASONIC imaging, ARTIFICIAL intelligence, OBSTETRICS, GYNECOLOGY, COMPUTER-assisted image analysis (Medicine)
- Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred method. It is considered cost effective and easily accessible but is time consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study aims to overview recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. For the methodology, a systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened based on the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme. Articles with full text copies were distributed to the sections of OB/GYN and their research topics. As a result, this review includes 189 articles published from 1994 to 2023. Among these, 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, or neurosonography, as well as the identification of adnexal and breast masses, and assessment of the endometrium and pelvic floor. To conclude, the applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. However, while most studies focus on common application fields such as fetal biometry, this review outlines emerging and still experimental fields to promote further research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. Anatomical Prior-Based Automatic Segmentation for Cardiac Substructures from Computed Tomography Images.
- Author
-
Wang, Xuefang, Li, Xinyi, Du, Ruxu, Zhong, Yong, Lu, Yao, and Song, Ting
- Subjects
COMPUTED tomography ,IMAGE segmentation ,CARDIOGRAPHIC tomography ,CARDIAC imaging ,COMPUTER-assisted image analysis (Medicine) ,PRIOR learning ,DIAGNOSTIC imaging - Abstract
Cardiac substructure segmentation is a prerequisite for cardiac diagnosis and treatment, providing a basis for accurate calculation, modeling, and analysis of the entire cardiac structure. CT (computed tomography) imaging can be used for a noninvasive qualitative and quantitative evaluation of cardiac anatomy and function. Cardiac substructures have diverse grayscales, fuzzy boundaries, irregular shapes, and variable locations. We designed a deep learning-based framework to improve the accuracy of the automatic segmentation of cardiac substructures. This framework integrates cardiac anatomical knowledge; it uses prior knowledge of the location, shape, and scale of cardiac substructures and separately processes structures of different scales. Through two successive segmentation steps with a coarse-to-fine cascaded network, the more easily segmented substructures were coarsely segmented first, and then the more difficult substructures were finely segmented, with the coarse segmentation result used as prior information and combined with the original image as the model input. Anatomical knowledge of the large-scale substructures was embedded into the fine segmentation network to guide the segmentation of the small-scale substructures, achieving efficient and accurate segmentation of ten cardiac substructures. Sixty cardiac CT images with ten substructures manually delineated by experienced radiologists were retrospectively collected, and the model was evaluated using the DSC (Dice similarity coefficient), Recall, Precision, and the Hausdorff distance. Compared with current mainstream segmentation models, our approach demonstrated significantly higher segmentation accuracy across ten substructures of different shapes and sizes, indicating that a segmentation framework fused with prior anatomical knowledge can better segment small targets in multi-target segmentation tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
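The abstract above evaluates segmentation with the DSC (Dice similarity coefficient), Recall, Precision, and Hausdorff distance. As an illustrative sketch (not the authors' implementation), the Dice coefficient between two binary masks can be computed as:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (1 = structure)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement by convention
    return 2.0 * intersection / denom
```

DSC ranges from 0 (no overlap) to 1 (identical masks) and is the standard per-structure score in multi-target cardiac segmentation.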
17. Medical image fusion based on multi-scale decomposition using hybrid deep learning network model.
- Author
-
Munawwar, Syed and Rao, P. V. Gopi Krishna
- Subjects
IMAGE fusion ,DEEP learning ,COMPUTER-assisted image analysis (Medicine) ,DIAGNOSTIC imaging ,DISCRETE wavelet transforms ,ARTIFICIAL intelligence ,DIAGNOSIS - Abstract
In medical applications, effective medical diagnosis is supported by combining medical images of various morphologies using medical image fusion techniques. An accurate diagnosis cannot be made from a single-modality image, and existing approaches lack efficiency owing to poor image quality and inconsistent performance. To solve this problem, this research proposes an effective artificial intelligence model based on multi-modal medical image fusion, built on deep residual neural networks (ResNet-50) and DarkNet-19. The optimised discrete wavelet transform (ODWT) decomposes each source image into high- and low-frequency coefficients. The low-frequency coefficients are then fused using the modified ResNet-50 model, after which DarkNet-19 fuses the high-frequency coefficients, taking the high-frequency coefficients' modified average gradient as its input stimulus. Finally, the inverse ODWT is employed to reconstruct the fused image. The proposed fusion model's effectiveness is assessed using various CT-MRI, CT-PET, and MRI-SPECT imaging datasets. The proposed medical image fusion approach attains a maximum performance for PSNR of 40.03 dB, Fusion Factor of 6.1025, Fusion Symmetry of 0.0869, and Visual Information Fidelity of 0.121. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
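The decompose-fuse-reconstruct pipeline described above can be sketched with a single-level Haar transform and generic fusion rules (average the low band, keep the max-magnitude detail coefficients). This is a minimal stand-in: the paper's ODWT and its ResNet-50/DarkNet-19 fusion rules are replaced here by simple hand-written ones.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar transform: returns (LL, (LH, HL, HH)) sub-bands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row-pair average
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row-pair detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def haar_idwt2(ll, bands):
    """Exact inverse of haar_dwt2."""
    lh, hl, hh = bands
    a = np.zeros((ll.shape[0], ll.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    x = np.zeros((a.shape[0] * 2, a.shape[1]))
    x[0::2, :] = a + d
    x[1::2, :] = a - d
    return x

def fuse(img1, img2):
    """Fuse two same-size images: average low band, max-magnitude detail bands."""
    ll1, hb1 = haar_dwt2(img1)
    ll2, hb2 = haar_dwt2(img2)
    ll = (ll1 + ll2) / 2.0
    bands = tuple(np.where(np.abs(b1) >= np.abs(b2), b1, b2)
                  for b1, b2 in zip(hb1, hb2))
    return haar_idwt2(ll, bands)
```

Fusing an image with itself returns the original, which is a useful sanity check on the transform pair.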
18. Improved Breast Cancer Classification through Combining Transfer Learning and Attention Mechanism.
- Author
-
Ashurov, Asadulla, Chelloug, Samia Allaoua, Tselykh, Alexey, Muthanna, Mohammed Saleh Ali, Muthanna, Ammar, and Al-Gaashani, Mehdhar S. A. M.
- Subjects
IMAGE recognition (Computer vision) ,TUMOR classification ,BREAST cancer ,CANCER diagnosis ,MAMMOGRAMS ,COMPUTER-assisted image analysis (Medicine) ,MAGNETIC resonance mammography - Abstract
Breast cancer, a leading cause of female mortality worldwide, poses a significant health challenge. Recent advancements in deep learning techniques have revolutionized breast cancer pathology by enabling accurate image classification. Various imaging methods, such as mammography, CT, MRI, ultrasound, and biopsies, aid in breast cancer detection, and computer-assisted pathological image classification is of paramount importance for diagnosis. This study introduces a novel approach to breast cancer histopathological image classification. It leverages modified pre-trained CNN models and attention mechanisms to enhance model interpretability and robustness, emphasizing localized features and enabling accurate discrimination of complex cases. Our method involves transfer learning with deep CNN models—Xception, VGG16, ResNet50, MobileNet, and DenseNet121—augmented with the convolutional block attention module (CBAM). The pre-trained models are fine-tuned, and two CBAM modules are incorporated at the end of each pre-trained model. The models are compared to state-of-the-art breast cancer diagnosis approaches and tested for accuracy, precision, recall, and F1 score, with confusion matrices used to evaluate and visualize the results. The test accuracy rates for the attention mechanism (AM) using the Xception model on the "BreakHis" breast cancer dataset are encouraging at 99.2% and 99.5%, and the test accuracy for DenseNet121 with AMs is 99.6%. The proposed approaches also performed better than previous approaches examined in the related studies. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
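The channel branch of CBAM-style attention mentioned above (global average and max pooling passed through a shared two-layer MLP, then a sigmoid gate that reweights channels) can be sketched in numpy. The weights here are random placeholders, not a trained model, and the spatial-attention branch of CBAM is omitted for brevity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention sketch.
    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) are the
    shared MLP weights with reduction ratio r. Returns the reweighted map."""
    avg = feat.mean(axis=(1, 2))                   # (C,) global average pooling
    mx = feat.max(axis=(1, 2))                     # (C,) global max pooling
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)   # shared MLP with ReLU
    gate = sigmoid(mlp(avg) + mlp(mx))             # (C,) channel weights in (0, 1)
    return feat * gate[:, None, None]
```

Because the gate lies in (0, 1), attention can only scale channels down, emphasizing some channels relative to others.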
19. Recent Advancements in Deep Learning Using Whole Slide Imaging for Cancer Prognosis.
- Author
-
Lee, Minhyeok
- Subjects
CANCER prognosis ,DEEP learning ,CANCER treatment ,COMPUTER-assisted image analysis (Medicine) ,ARTIFICIAL intelligence ,PREDICTION models - Abstract
This review furnishes an exhaustive analysis of the latest advancements in deep learning techniques applied to whole slide images (WSIs) in the context of cancer prognosis, focusing specifically on publications from 2019 through 2023. The swiftly maturing field of deep learning, in combination with the burgeoning availability of WSIs, manifests significant potential to revolutionize the predictive modeling of cancer prognosis. In light of the swift evolution and profound complexity of the field, it is essential to systematically review contemporary methodologies and critically appraise their ramifications. This review elucidates the prevailing landscape of this intersection, cataloging major developments, evaluating their strengths and weaknesses, and providing discerning insights into prospective directions. This paper aims to present a comprehensive overview of the field that can serve as a critical resource for researchers and clinicians, ultimately enhancing the quality of cancer care outcomes. The review's findings accentuate the need for ongoing scrutiny of recent studies in this rapidly progressing field to discern patterns, understand breakthroughs, and navigate future research trajectories. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. Lossy Image Compression in a Preclinical Multimodal Imaging Study.
- Author
-
Cunha, Francisco F., Blüml, Valentin, Zopf, Lydia M., Walter, Andreas, Wagner, Michael, Weninger, Wolfgang J., Thomaz, Lucas A., Tavora, Luís M. N., da Silva Cruz, Luis A., and Faria, Sergio M. M.
- Subjects
TUMOR diagnosis ,MAGNETIC resonance imaging ,RESEARCH funding ,COMPUTER-assisted image analysis (Medicine) ,COMPUTED tomography ,RESEARCH bias - Abstract
The growing use of multimodal high-resolution volumetric data in pre-clinical studies leads to challenges in managing and handling these large datasets. Contrary to the clinical context, there are currently no standard guidelines regulating the use of image compression in pre-clinical settings as a potential alleviation of this problem. In this work, the authors study the application of lossy image coding to compress high-resolution volumetric biomedical data. The impact of compression on the metrics and interpretation of volumetric data was quantified for a correlated multimodal imaging study characterizing murine tumor vasculature, using volumetric high-resolution episcopic microscopy (HREM), micro-computed tomography (µCT), and micro-magnetic resonance imaging (µMRI). The effects of compression were assessed by measuring task-specific performances of several biomedical experts who interpreted and labeled multiple data volumes compressed at different degrees. We defined trade-offs between data volume reduction and preservation of visual information, which ensured the preservation of relevant vasculature morphology at maximum compression efficiency across scales. Using the Jaccard Index (JI) and the average Hausdorff Distance (HD) after vasculature segmentation, we demonstrate that, in this study, compression yielding a 256-fold reduction in data size kept the error induced by compression below the inter-observer variability, with minimal impact on the assessment of the tumor vasculature across scales. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
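The two segmentation-agreement metrics named in the abstract, the Jaccard Index and the average Hausdorff Distance, can be sketched for small binary masks as follows (an illustrative brute-force version; the study's own tooling is not specified):

```python
import numpy as np

def jaccard_index(a, b):
    """Jaccard Index (intersection over union) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return np.logical_and(a, b).sum() / union

def average_hausdorff(a, b):
    """Average Hausdorff distance between the foreground pixels of two masks.
    Assumes both masks have at least one foreground pixel."""
    pa = np.argwhere(a)  # (N, 2) foreground coordinates of a
    pb = np.argwhere(b)  # (M, 2) foreground coordinates of b
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)  # (N, M)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

The pairwise-distance matrix makes this O(N·M) in foreground pixels, which is fine for illustration but would need a distance-transform approach for full volumes.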
21. HyperTDP-Net: A Hyper-densely Connected Compression-and-Decomposition Network Based on Trident Dilated Perception for PET and MRI Image Fusion.
- Author
-
Li, Bicao, Du, Yifan, Wang, Bei, Shao, Zhuhong, Huang, Jie, Wei, Miaomiao, and Lu, Jiaxi
- Subjects
IMAGE fusion ,POSITRON emission tomography ,MAGNETIC resonance imaging ,COMPUTER-assisted image analysis (Medicine) ,DIAGNOSTIC imaging ,TECHNOLOGICAL innovations ,MULTISPECTRAL imaging ,MEDICAL equipment - Abstract
Objective. Since medical images generated by medical devices have low spatial resolution and quality, fusion approaches can generate a fused image containing a more comprehensive range of different modal features to help physicians accurately diagnose diseases. Conventional deep learning methods for medical image fusion usually extract only local features without considering global features, which often leads to unclear detail information in the final fused image. Medical image fusion is therefore a challenging task of great relevance. Approach. This paper proposes a novel end-to-end medical image fusion model for PET and MRI images that achieves information interaction between different pathways, termed the hyper-densely connected compression-and-decomposition network based on trident dilated perception (HyperTDP-Net). In particular, in the compression network, a dual residual hyper-densely connected module is constructed to take full advantage of middle-layer information. Moreover, we establish the trident dilated perception module to precisely determine the location information of features and improve the feature representation capability of the network. In addition, we abandon the ordinary mean square error as the content loss function and propose a new content-aware loss consisting of a structural similarity loss and a gradient loss, so that the fused image not only contains rich texture details but also maintains sufficient structural similarity with the source images. Main results. The experimental dataset used in this paper is derived from multimodal medical images published by Harvard Medical School. Extensive experiments illustrate that our model preserves more edge information and texture detail in the fusion result than 12 state-of-the-art fusion models, and ablation study results demonstrate the effectiveness of the three technical innovations. Significance. As medical images continue to be used in clinical diagnosis, our method is expected to be a tool that can effectively improve the accuracy of physician diagnosis and automatic machine detection. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
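The gradient component of a content-aware loss like the one described above penalizes differences between the spatial gradients of the fused and source images, so edges and texture are preserved. A minimal numpy sketch (the structural-similarity term and the paper's exact weighting are omitted; this is not the authors' loss):

```python
import numpy as np

def gradient_loss(fused, source):
    """Mean absolute difference between horizontal and vertical image gradients,
    computed with first-order finite differences."""
    gx = np.abs(np.diff(fused, axis=1) - np.diff(source, axis=1)).mean()
    gy = np.abs(np.diff(fused, axis=0) - np.diff(source, axis=0)).mean()
    return gx + gy
```

Note that a constant intensity offset contributes nothing to this loss, since differencing removes it; only edge structure is compared.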
22. Using 3D deep features from CT scans for cancer prognosis based on a video classification model: A multi‐dataset feasibility study.
- Author
-
Chen, Junhua, Wee, Leonard, Dekker, Andre, and Bermejo, Inigo
- Subjects
DEEP learning ,COMPUTED tomography ,CANCER prognosis ,COMPUTER-assisted image analysis (Medicine) ,THREE-dimensional imaging ,RADIOMICS - Abstract
Background: Cancer prognosis before and after treatment is key for patient management and decision making. Handcrafted imaging biomarkers—radiomics—have shown potential in predicting prognosis. Purpose: Given the recent progress in deep learning, it is timely and relevant to pose the question: could deep learning-based 3D imaging features be used as imaging biomarkers and outperform radiomics? Methods: Effectiveness, reproducibility in test/retest and across modalities, and correlation of deep features with clinical features such as tumor volume and TNM staging were tested in this study, with radiomics introduced as the reference image biomarker. For deep feature extraction, we transformed the CT scans into videos and adopted the pre-trained Inflated 3D ConvNet (I3D) video classification network as the architecture. We used four datasets—LUNG 1 (n = 422), LUNG 4 (n = 106), OPC (n = 605), and H&N 1 (n = 89)—with 1270 samples from different centers and cancer types—lung and head and neck cancer—to test the deep features' predictiveness, and two additional datasets to assess their reproducibility. Results: The top 100 deep features selected by Support Vector Machine–Recursive Feature Elimination (SVM–RFE) achieved concordance indices (CIs) of 0.67 for survival prediction in LUNG 1, 0.87 in LUNG 4, 0.76 in OPC, and 0.87 in H&N 1, while the SVM–RFE-selected top 100 radiomics features achieved CIs of 0.64, 0.77, 0.73, and 0.74, respectively, all statistically significant differences (p < 0.01, Wilcoxon's test). Most selected deep features are not correlated with tumor volume and TNM staging. However, full radiomics features show higher reproducibility than full deep features in a test/retest setting (0.89 vs. 0.62, concordance correlation coefficient). Conclusion: The results show that deep features can outperform radiomics while providing views of tumor prognosis different from tumor volume and TNM staging. However, deep features suffer from lower reproducibility than radiomic features and lack the interpretability of the latter. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
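The concordance index (CI) used throughout the abstract above measures how often a model's predicted risk ordering matches the observed survival ordering. A plain-Python sketch of Harrell's C-index with right censoring (illustrative; tie handling conventions vary between implementations):

```python
from itertools import combinations

def concordance_index(times, events, scores):
    """Harrell's C-index: fraction of comparable pairs whose predicted risk
    ordering matches the observed survival ordering.
    times: survival or censoring times; events: 1 if the event was observed,
    0 if censored; scores: predicted risk (higher = earlier expected event)."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        if times[i] > times[j]:      # order so that i has the shorter time
            i, j = j, i
        if times[i] == times[j] or events[i] == 0:
            continue                 # not comparable: tie, or shorter time censored
        comparable += 1
        if scores[i] > scores[j]:
            concordant += 1.0        # risk ordering agrees with outcome
        elif scores[i] == scores[j]:
            concordant += 0.5        # tied predictions count half
    return concordant / comparable if comparable else 0.0
```

A CI of 0.5 corresponds to random ordering and 1.0 to perfect ranking, which frames the 0.67-0.87 values reported above.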
23. A Novel Distant Domain Transfer Learning Framework for Thyroid Image Classification.
- Author
-
Tang, Fenghe, Ding, Jianrui, Wang, Lingtao, and Ning, Chunping
- Subjects
IMAGE recognition (Computer vision) ,IMAGE analysis ,THYROID gland ,COMPUTER-assisted image analysis (Medicine) ,ULTRASONIC imaging ,TECHNOLOGY transfer - Abstract
Medical ultrasound imaging is currently the preferred method for early diagnosis of thyroid nodules. Radiologists' analysis of ultrasound images is highly dependent on their clinical experience and is susceptible to intra- and inter-observer variability. Although end-to-end deep learning techniques can address these limitations, the difficulty of acquiring annotated medical images makes this very challenging. Transfer learning can alleviate the problem, but a large gap between the source and target domains will lead to negative transfer. In this paper, a novel transfer learning method with a distant domain high-level feature fusion (DHFF) model is proposed. It reduces the distribution distance between the source domain and the target domain while maintaining the characteristics of the respective domains, which avoids excessive feature fusion while enabling the model to learn more valuable transfer knowledge. The DHFF is validated with multiple public source and private target datasets in experiments. The results show that the classification accuracy of DHFF reaches 88.92% with thyroid ultrasound auxiliary source domains, up to 8% higher than existing transfer and distant transfer algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
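"Reducing the distribution distance between the source domain and the target domain" is commonly made concrete with a maximum mean discrepancy (MMD) term between feature batches. A linear-kernel MMD sketch follows; this is a generic stand-in, not the DHFF model's actual objective:

```python
import numpy as np

def linear_mmd(source_feats, target_feats):
    """Squared MMD with a linear kernel: the squared Euclidean distance
    between the mean feature vectors of the two domains.
    source_feats: (Ns, D); target_feats: (Nt, D)."""
    diff = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(diff @ diff)
```

Minimizing such a term during training pulls the two feature distributions together while leaving per-domain structure to other loss terms.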
24. Low-Dose CT Image Reconstruction using Vector Quantized Convolutional Autoencoder with Perceptual Loss.
- Author
-
Ramanathan, Shalini and Ramasundaram, Mohan
- Subjects
COMPUTED tomography ,COMPUTER-assisted image analysis (Medicine) ,IMAGE reconstruction ,VECTOR quantization ,DIAGNOSTIC imaging ,MEDICAL screening - Abstract
Computed Tomography (CT) has become a useful screening procedure to identify disease or injury within various regions of the human body. Health issues caused by CT radiation have attracted the interest of researchers and the academic community. Reducing the radiation dose is the solution, but CT images generated with low-dose radiation suffer from excessive noise due to lower intensity and fewer angle measurements; the reduced image quality in turn affects a doctor's diagnosis. Deep learning methods have become increasingly popular in recent years, and many models have been proposed for low-dose CT image reconstruction, an active area of modern medical imaging research. Deep learning-based medical image reconstruction methods can help reduce noise without compromising image quality. Therefore, this paper introduces a novel CT image reconstruction method based on the vector quantization technique utilized in a convolutional autoencoder network. The quality of the results is evaluated based on the perceptual loss function. Experimental evaluations are conducted on the LoDoPaB-CT benchmark dataset. The results showed that the proposed network obtained better performance metric values (quantitative evaluation) and better noise elimination (visual evaluation). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
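The vector quantization step at the heart of a VQ autoencoder maps each latent vector to its nearest entry in a learned codebook. A minimal numpy sketch of that lookup (the codebook here is a fixed placeholder, not a trained one, and the straight-through gradient trick used in training is omitted):

```python
import numpy as np

def vq_quantize(latents, codebook):
    """Map each latent vector to its nearest codebook entry (Euclidean).
    latents: (N, D) encoder outputs; codebook: (K, D) code vectors.
    Returns (indices, quantized vectors)."""
    # squared distances between every latent and every code: shape (N, K)
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)
    return idx, codebook[idx]
```

The decoder then reconstructs the image from the quantized vectors, so the discrete codebook acts as a learned, noise-suppressing bottleneck.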
25. Medical image classification for Alzheimer's using a deep learning approach.
- Author
-
Bamber, Sukhvinder Singh and Vishvakarma, Tanmya
- Subjects
DEEP learning ,IMAGE recognition (Computer vision) ,ARTIFICIAL neural networks ,COMPUTER-assisted image analysis (Medicine) ,ALZHEIMER'S disease ,CONVOLUTIONAL neural networks - Abstract
Medical image categorization is essential for a variety of medical assessment and education functions. The purpose of medical image classification is to organize medical images into useful categories for illness diagnosis or study, making it one of the most pressing issues in the field of image recognition. Traditional methods, however, have plateaued in their effectiveness, and a substantial amount of time and energy is required to extract and choose classification features with them. Alzheimer's disease is one of the most frequent sources of dementia in elderly patients. Metabolic diseases affect a huge population worldwide, and there is therefore vast scope for applying machine learning to find treatments for these diseases. As a relatively new machine learning technique, deep neural networks have shown great promise for a variety of categorization problems. In this research, a model for diagnosing and tracking the development of Alzheimer's disease that is both accurate and easy to understand has been developed; by following the developed procedure, medical professionals may make decisions with solid justification. Early diagnosis using these machine learning algorithms has the potential to minimize mortality rates associated with Alzheimer's disease. This research work has developed a convolutional neural network using a shallow convolution layer to identify Alzheimer's disease in medical image patches. The total accuracy of the proposed classification is around 98%, which is greater than the accuracy of the most popular existing approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
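The "shallow convolution layer" mentioned above can be illustrated with a single valid-mode 2D convolution (cross-correlation, as in most deep learning frameworks) followed by ReLU. The kernel values here are placeholders; this is not the paper's trained network.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Valid-mode 2D cross-correlation followed by ReLU, i.e. a single
    convolutional layer with no padding and one filter."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            # dot the kernel against the image patch at (i, j)
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return np.maximum(out, 0.0)  # ReLU activation
```

A shallow network stacks only one or a few such layers before the classifier head, which keeps the model small and its learned features easier to inspect.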
26. Artificial intelligence in thyroid ultrasound.
- Author
-
Chun-Li Cao, Qiao-Li Li, Jin Tong, Li-Nan Shi, Wen-Xiao Li, Ya Xu, Jing Cheng, Ting-Ting Du, Jun Li, and Xin-Wu Cui
- Subjects
THYROID cancer ,ARTIFICIAL intelligence ,MACHINE learning ,LYMPHATIC metastasis ,COMPUTER-assisted image analysis (Medicine) ,THYROID gland - Abstract
Artificial intelligence (AI), particularly deep learning (DL) algorithms, has demonstrated remarkable progress in image-recognition tasks, enabling the automatic quantitative assessment of complex medical images with increased accuracy and efficiency. AI is widely used and is becoming increasingly popular in the field of ultrasound. The rising incidence of thyroid cancer and the workload of physicians have driven the need to utilize AI to efficiently process thyroid ultrasound images. Therefore, leveraging AI in thyroid cancer ultrasound screening and diagnosis can not only help radiologists achieve more accurate and efficient imaging diagnosis but also reduce their workload. In this paper, we aim to present a comprehensive overview of the technical knowledge of AI with a focus on traditional machine learning (ML) algorithms and DL algorithms. We also discuss their clinical applications in the ultrasound imaging of thyroid diseases, particularly in differentiating between benign and malignant nodules and predicting cervical lymph node metastasis in thyroid cancer. Finally, we conclude that AI technology holds great promise for improving the accuracy of thyroid disease ultrasound diagnosis and discuss the potential prospects of AI in this field. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
27. Binary classification of non-specific low back pain condition based on the combination of B-mode ultrasound and shear wave elastography at multiple sites.
- Author
-
Xiaocheng Yu, Xiaohua Xu, Qinghua Huang, Guowen Zhu, Faying Xu, Zhenhua Liu, Lin Su, Haiping Zheng, Chen Zhou, Qiuming Chen, Fen Gao, Mengting Lin, Shuai Yang, Mou-Hsun Chiang, and Yongjin Zhou
- Subjects
LUMBAR pain ,SHEAR waves ,ELASTOGRAPHY ,COMPUTER-assisted image analysis (Medicine) ,VISUAL analog scale - Abstract
Introduction: Low back pain (LBP) is a prevalent and complex condition that poses significant medical, social, and economic burdens worldwide. The accurate and timely assessment and diagnosis of LBP, particularly non-specific LBP (NSLBP), are crucial to developing effective interventions and treatments for LBP patients. In this study, we aimed to investigate the potential of combining B-mode ultrasound image features with shear wave elastography (SWE) features to improve the classification of NSLBP patients. Methods: We recruited 52 subjects with NSLBP from the University of Hong Kong-Shenzhen Hospital and collected B-mode ultrasound images and SWE data from multiple sites. The Visual Analogue Scale (VAS) was used as the ground truth to classify NSLBP patients. We extracted and selected features from the data and employed a support vector machine (SVM) model to classify NSLBP patients. The performance of the SVM model was evaluated using five-fold cross-validation, and the accuracy, precision, and sensitivity were calculated. Results: We obtained an optimal feature set of 48 features, among which the SWE elasticity feature contributed most to the classification task. The SVM model achieved an accuracy, precision, and sensitivity of 0.85, 0.89, and 0.86, respectively, higher than previously reported values for MRI. Discussion: Our results showed that combining B-mode ultrasound image features with SWE features and employing an SVM model can improve the automatic classification of NSLBP patients. Our findings also suggest that the SWE elasticity feature is a crucial factor in classifying NSLBP patients, and the proposed method can identify the important site and position of the muscle in the NSLBP classification task. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
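The three evaluation metrics reported above (accuracy, precision, and sensitivity) follow directly from confusion-matrix counts. A plain-Python sketch for binary labels (illustrative only; the study's five-fold cross-validation loop is not reproduced here):

```python
def binary_classification_metrics(y_true, y_pred):
    """Accuracy, precision, and sensitivity (recall) for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0     # of predicted positives, how many are real
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # of real positives, how many are found
    return accuracy, precision, sensitivity
```

In k-fold cross-validation these metrics are computed on each held-out fold and averaged, which is how figures like 0.85/0.89/0.86 are typically reported.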
28. Radiomics Based Diagnosis with Medical Imaging: A Comprehensive Study.
- Author
-
Saini, Sumindar Kaur, Thakur, Niharika, and Juneja, Mamta
- Subjects
COMPUTER-assisted image analysis (Medicine) ,RADIOMICS ,COMPUTER-aided diagnosis ,DIAGNOSTIC imaging ,DIAGNOSIS - Abstract
Radiomics is a domain of biomedical and bioengineering research that analyzes large-scale radiological images associated with biology. Radiomics depends on feature extraction and is one of the extensions of computer-aided diagnosis (CAD). The number of features used in diagnosis is comparatively larger in radiomics, making it one of the best approaches for image analysis. The extracted features can be used across multiple modalities, making the models more feasible and increasing their overall utility. Thus, this paper analyzes and showcases recent work on radiomics, which is emerging rapidly in the field of medical applications. The paper focuses mainly on oncology and presents a detailed literature review of radiomics. Further, the gaps and challenges that radiomics faces while emerging as one of the most used approaches for future development are also presented in this study. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. Simultaneous Integrated Boost (SIB) vs. Sequential Boost in Head and Neck Cancer (HNC) Radiotherapy: A Radiomics-Based Decision Proof of Concept.
- Author
-
Mireștean, Camil Ciprian, Iancu, Roxana Irina, and Iancu, Dragoș Petru Teodor
- Subjects
CONE beam computed tomography ,COMPUTER-assisted image analysis (Medicine) ,RADIOMICS ,RADIOTHERAPY ,PROOF of concept - Abstract
Artificial intelligence (AI), and in particular radiomics, has opened new horizons by extracting data from medical imaging that could be used not only to improve diagnostic accuracy but also to be included in predictive models contributing to treatment stratification of cancer. Head and neck cancers (HNC) are associated with higher recurrence rates, especially in advanced stages of disease. Approximately 50% of cases will evolve with loco-regional recurrence, even if they benefit from the current standard treatment of definitive chemo-radiotherapy. Radiotherapy, the cornerstone treatment in locally advanced HNC, can be delivered either by the simultaneous integrated boost (SIB) technique or by the sequential boost technique, the decision often being a subjective one. The principles of radiobiology could be the basis of an optimal decision between the two methods of radiation dose delivery, but the heterogeneity of HNC radio-sensitivity makes this approach difficult. Radiomics has demonstrated the ability to non-invasively predict radio-sensitivity and the risk of relapse in HNC. Tumor heterogeneity evaluated with radiomics, and the inclusion of coarseness, entropy, and other first-order features extracted from the gross tumor volume (GTV) in multivariate models, could identify pre-treatment cases that will benefit from one of the approaches (SIB or sequential boost radio-chemotherapy) considered the current standard of care for locally advanced HNC. Computed tomography (CT) simulation and daily cone beam CT (CBCT) could be chosen as the imaging source for radiomic analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
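The first-order entropy feature mentioned above summarizes intensity heterogeneity inside the GTV as the Shannon entropy of a binned intensity histogram. A minimal sketch (bin count is a free parameter here; standards such as IBSI fix such details for reproducible radiomics):

```python
import numpy as np

def intensity_entropy(roi_values, bins=32):
    """Shannon entropy (in bits) of the intensity histogram inside an ROI.
    Higher values indicate more heterogeneous intensities."""
    hist, _ = np.histogram(roi_values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins; 0 * log(0) is defined as 0
    return float(-(p * np.log2(p)).sum())
```

A perfectly homogeneous region scores 0, so entropy differences between tumors are one quantitative handle on the heterogeneity the abstract discusses.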
30. The progress of radiomics in thyroid nodules.
- Author
-
XiaoFan Gao, Xuan Ran, and Wei Ding
- Subjects
RADIOMICS ,THYROID nodules ,THYROID cancer ,COMPUTER-assisted image analysis (Medicine) ,ARTIFICIAL intelligence - Abstract
Due to the development of Artificial Intelligence (AI) and Machine Learning (ML) and the improvement of medical imaging equipment, radiomics has become a popular research area in recent years. Radiomics can obtain various quantitative features from medical images, highlighting invisible image traits and significantly enhancing the ability of medical imaging identification and prediction. The literature indicates that radiomics has high potential in identifying and predicting thyroid nodules. In this article, we explain the development, definition, and workflow of radiomics. We then summarize the applications of various imaging techniques in identifying benign and malignant thyroid nodules, predicting the invasiveness and metastasis of thyroid lymph nodes, forecasting the prognosis of thyroid malignancies, and some new advances at the molecular level and in deep learning. The shortcomings of this technique are also summarized, and future development prospects are provided. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
31. Multi-centre deep learning for placenta segmentation in obstetric ultrasound with multi-observer and cross-country generalization.
- Author
-
Andreasen, Lisbeth Anita, Feragen, Aasa, Christensen, Anders Nymark, Thybo, Jonathan Kistrup, Svendsen, Morten Bo S., Zepf, Kilian, Lekadir, Karim, and Tolsgaard, Martin Grønnebæk
- Subjects
DEEP learning ,FETAL ultrasonic imaging ,PLACENTA ,PLACENTA praevia ,CONVOLUTIONAL neural networks ,ULTRASONIC imaging ,COMPUTER-assisted image analysis (Medicine) - Abstract
The placenta is crucial to fetal well-being, and it plays a significant role in the pathogenesis of hypertensive pregnancy disorders. Moreover, a timely diagnosis of placenta previa may save lives. Ultrasound is the primary imaging modality in pregnancy, but high-quality imaging depends on access to equipment and staff, which is not possible in all settings. Convolutional neural networks may help standardize the acquisition of images for fetal diagnostics. Our aim was to develop a deep learning-based model for classification and segmentation of the placenta in ultrasound images. We trained a model based on manual annotations of 7,500 ultrasound images to identify and segment the placenta. The model's performance was compared to annotations made by 25 clinicians (experts, trainees, midwives). The overall image classification accuracy was 81%, and the average intersection over union (IoU) score reached 0.78. The model's classification accuracy was lower than the experts' and trainees', but it outperformed all clinicians at delineating the placenta (IoU = 0.75 vs. 0.69, 0.66, and 0.59). The model was cross-validated on 100 second-trimester images from Barcelona, yielding an accuracy of 76% and an IoU of 0.68. In conclusion, we developed a model for automatic classification and segmentation of the placenta with consistent performance across different patient populations. It may be used for automated detection of placenta previa and enable future deep learning research in placental dysfunction. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
32. Role of Machine Learning in Precision Oncology: Applications in Gastrointestinal Cancers.
- Author
-
Tabari, Azadeh, Chan, Shin Mei, Omar, Omar Mustafa Fathy, Iqbal, Shams I., Gee, Michael S., and Daye, Dania
- Subjects
DIGITAL image processing ,MACHINE learning ,MAGNETIC resonance imaging ,GASTROINTESTINAL tumors ,TUMOR classification ,COMPUTER-assisted image analysis (Medicine) ,ONCOLOGY - Abstract
Simple Summary: Worldwide, gastrointestinal (GI) malignancies account for about 25% of the global cancer incidence. For some malignancies, screening programs, such as routine colon cancer screenings, have largely aided in the early diagnosis of those at risk. However, even after diagnosis, many GI malignancies lack robust biomarkers to serve as definitive staging and prognostic tools to aid in clinical decision-making. Radiomics uses high-throughput data to extract various features from medical images with the potential to aid personalized precision medicine. Machine learning is a technique for analyzing and predicting by learning from sample data, finding patterns in it, and applying them to new data. We reviewed the fundamental concepts of radiomics such as imaging data acquisition, lesion segmentation, feature design, and interpretation specific to GI cancer studies and assessed the clinical applications of radiomics and machine learning in diagnosis, staging, evaluation of tumor prognosis, and treatment response. Gastrointestinal (GI) cancers, consisting of a wide spectrum of pathologies, have become a prominent health issue globally. Despite medical imaging playing a crucial role in the clinical workflow of cancers, standard evaluation of different imaging modalities may provide limited information. Accurate tumor detection, characterization, and monitoring remain a challenge. Progress in quantitative imaging analysis techniques resulted in "radiomics", a promising methodological tool that helps personalize diagnosis and optimize treatment. Radiomics, a sub-field of computer vision analysis, is a burgeoning area of interest, especially in this era of precision medicine. In the field of oncology, radiomics has been described as a tool to aid in the diagnosis, classification, and categorization of malignancies and to predict outcomes using various endpoints.
In addition, machine learning has been increasingly applied in this field, particularly in image-based diagnosis. This review assesses the current landscape of radiomics and methodological processes in GI cancers (including gastric, colorectal, liver, pancreatic, neuroendocrine, GI stromal, and rectal cancers). We explain in a stepwise fashion the process from data acquisition and curation to segmentation and feature extraction. Furthermore, the applications of radiomics for diagnosis, staging, and assessment of tumor prognosis and treatment response according to different GI cancer types are explored. Finally, we discuss the existing challenges and limitations of radiomics in abdominal cancers and explore future opportunities. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
33. Generative Deep Belief Model for Improved Medical Image Segmentation.
- Author
-
Balaji, Prasanalakshmi
- Subjects
DEEP learning ,COMPUTER-assisted image analysis (Medicine) ,IMAGE segmentation ,DIAGNOSTIC imaging ,ARTIFICIAL neural networks ,GOLDEN eagle - Abstract
Medical image assessment is based on segmentation at its fundamental stage. Deep neural networks have become more popular for segmentation work in recent years. However, the quality of labels has an impact on the training performance of these algorithms, particularly in the medical image domain, where both the interpretation cost and inter-observer variation are considerable. For this reason, a novel optimized deep learning approach is proposed for medical image segmentation. Optimization plays an important role in terms of resources used, accuracy, and the time taken. Noise in the raw medical images is processed using the Quasi-Continuous Wavelet Transform (QCWT). Feature extraction and selection are then performed on the pre-processed image. The features are optimally selected by the Golden Eagle Optimization (GEO) method. Specifically, the processed image is segmented accurately using the proposed Generative Heap Belief Network (GHBN) technique. The research is implemented in MATLAB. According to the results of the experiments, the proposed framework is superior to current techniques in terms of segmentation performance, achieving a validation accuracy of 99%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
34. A Review of Radiomics and Artificial Intelligence and Their Application in Veterinary Diagnostic Imaging.
- Author
-
Bouhali, Othmane, Bensmail, Halima, Sheharyar, Ali, David, Florent, and Johnson, Jessica P.
- Subjects
RADIOMICS ,DIAGNOSTIC imaging ,ARTIFICIAL intelligence ,COMPUTER-assisted image analysis (Medicine) ,HOSPITAL administration ,SPACE surveillance - Abstract
Simple Summary: The goal of this paper is to provide an overview of current radiomic and AI applications in veterinary diagnostic imaging. We discuss the essential elements of AI for veterinary practitioners, with the aim of helping them make informed decisions in applying AI technologies to their practices, and we note that veterinarians will play an integral role in ensuring the appropriate use and suitable curation of data. The expertise of veterinary professionals will be vital to ensuring suitable data and, subsequently, AI that meets the needs of the profession. Great advances have been made in human health care in the application of radiomics and artificial intelligence (AI) in a variety of areas, ranging from hospital management and virtual assistants to remote patient monitoring and medical diagnostics and imaging. To improve accuracy and reproducibility, there has been a recent move to integrate radiomics and AI as tools to assist clinical decision making and to incorporate them into routine clinical workflows and diagnosis. Although lagging behind human medicine, the use of radiomics and AI in veterinary diagnostic imaging is becoming more frequent with an increasing number of reported applications. The goal of this paper is to provide an overview of current radiomic and AI applications in veterinary diagnostic imaging. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. Computer-Aided Diagnosis by Tissue Image Analysis as an Optical Biopsy in Hysteroscopy.
- Author
-
Tanos, Vasilios, Neofytou, Marios, Tanos, Panayiotis, Pattichis, Constantinos S., and Pattichis, Marios S.
- Subjects
IMAGE analysis ,TISSUE analysis ,OPTICAL images ,HYSTEROSCOPY ,COMPUTER-assisted image analysis (Medicine) ,ENDOMETRIUM ,TEXTURE analysis (Image processing) - Abstract
This review of our experience in computer-assisted tissue image analysis (CATIA) research shows that significant information can be extracted and used to diagnose and distinguish normal from abnormal endometrium. CATIA enabled the evaluation and differentiation between the benign and malignant endometrium during diagnostic hysteroscopy. The efficacy of texture analysis in the endometrium image during hysteroscopy was examined in 40 women, where 209 normal and 209 abnormal regions of interest (ROIs) were extracted. There was a significant difference between normal and abnormal endometrium for the statistical features (SF) mean, variance, median, energy, and entropy; for the spatial grey-level difference matrix (SGLDM) features contrast, correlation, variance, homogeneity, and entropy; and for the gray-level difference statistics (GLDS) features homogeneity, contrast, energy, entropy, and mean. We further evaluated 52 hysteroscopic images of 258 normal and 258 abnormal endometrium ROIs, and tissue diagnosis was verified by histopathology after biopsy. The YCrCb color system with SF, SGLDM and GLDS color texture features based on support vector machine (SVM) modeling correctly classified 81% of the cases with a sensitivity and a specificity of 78% and 81%, respectively, for normal and hyperplastic endometrium. New technical and computational advances may improve optical biopsy accuracy and assist in the precision of lesion excision during hysteroscopy. The exchange of knowledge, collaboration, identification of tasks and CATIA method selection strategy will further improve computer-aided diagnosis implementation in the daily practice of hysteroscopy. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
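The first-order statistical (SF) features named in the abstract above (mean, variance, median, energy, entropy) are computed from the grey-level histogram of a region of interest. A hedged sketch of how such features are typically derived, not the authors' implementation (the toy ROI is illustrative):

```python
import numpy as np

def first_order_features(roi: np.ndarray, levels: int = 256) -> dict:
    """First-order statistical texture features from a grey-level ROI."""
    hist = np.bincount(roi.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                        # grey-level probabilities
    nz = p[p > 0]                                # avoid log(0) in entropy
    grey = np.arange(levels)
    mean = (grey * p).sum()
    return {
        "mean": mean,
        "variance": ((grey - mean) ** 2 * p).sum(),
        "median": float(np.median(roi)),
        "energy": (p ** 2).sum(),                # uniformity of the histogram
        "entropy": -(nz * np.log2(nz)).sum(),    # randomness of grey levels
    }

# Toy 2x3 ROI standing in for an endometrium patch
roi = np.array([[10, 10, 200], [10, 200, 200]], dtype=np.uint8)
feats = first_order_features(roi)
```

Bright-dark contrasted tissue raises variance and entropy; flat uniform tissue raises energy. Feeding such per-ROI vectors into an SVM, as the study does, is then a standard classification step.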
36. Ultrasound Radiomics for the Detection of Early-Stage Liver Fibrosis.
- Author
-
Al-Hasani, Maryam, Sultan, Laith R., Sagreiya, Hersh, Cary, Theodore W., Karmacharya, Mrigendra B., and Sehgal, Chandra M.
- Subjects
HEPATIC fibrosis ,RADIOMICS ,COMPUTER-assisted image analysis (Medicine) ,ULTRASONIC imaging ,MACHINE learning - Abstract
Objective: The study evaluates quantitative ultrasound (QUS) texture features with machine learning (ML) to enhance the sensitivity of B-mode ultrasound (US) for the detection of fibrosis at an early stage and distinguish it from advanced fibrosis. Different ML methods were evaluated to determine the best diagnostic model. Methods: 233 B-mode images of liver lobes with early and advanced-stage fibrosis induced in a rat model were analyzed. Sixteen features describing liver texture were measured from regions of interest (ROIs) drawn on B-mode images. The texture features included first-order statistics, run-length (RL), and gray-level co-occurrence matrix (GLCM) measures. The features discriminating between early and advanced fibrosis were used to build diagnostic models with logistic regression (LR), naïve Bayes (nB), and multilayer perceptron (MLP) classifiers. The diagnostic performances of the models were compared by ROC analysis using different train-test sampling approaches, including leave-one-out, 10-fold cross-validation, and varying percentage splits. METAVIR scoring was used for histological fibrosis staging of the liver. Results: 15 features showed a significant difference between the advanced and early liver fibrosis groups, p < 0.05. Among the individual features, first-order statistics features led to the best classification with a sensitivity of 82.1–90.5% and a specificity of 87.1–89.8%. For the features combined, the diagnostic performances of nB and MLP were high, with the area under the ROC curve (AUC) approaching 0.95–0.96. LR also yielded high diagnostic performance (AUC = 0.91–0.92) but was lower than nB and MLP. The diagnostic variability between test-train trials, measured by the coefficient-of-variation (CV), was higher for LR (3–5%) than nB and MLP (1–2%). Conclusion: Quantitative ultrasound with machine learning differentiated early and advanced fibrosis.
Ultrasound B-mode images contain a high level of information to enable accurate diagnosis with relatively straightforward machine learning methods like naïve Bayes and logistic regression. Implementing simple ML approaches with QUS features in clinical settings could reduce the user-dependent limitation of ultrasound in detecting early-stage liver fibrosis. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
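The gray-level co-occurrence matrix (GLCM) features used in the study above count how often pairs of grey levels co-occur at a fixed pixel offset; scalar features such as contrast and homogeneity are then derived from the normalized matrix. A minimal single-offset sketch (the toy image, offset, and feature subset are illustrative assumptions, not the study's code):

```python
import numpy as np

def glcm(img: np.ndarray, levels: int, dx: int = 1, dy: int = 0) -> np.ndarray:
    """Grey-level co-occurrence matrix for one pixel offset (dx, dy)."""
    M = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M / M.sum()  # normalise counts to joint probabilities

def glcm_features(P: np.ndarray) -> dict:
    """A few classic Haralick-style features from a normalized GLCM."""
    i, j = np.indices(P.shape)
    nz = P[P > 0]
    return {
        "contrast": ((i - j) ** 2 * P).sum(),            # local grey-level variation
        "homogeneity": (P / (1.0 + np.abs(i - j))).sum(),  # closeness to the diagonal
        "entropy": -(nz * np.log2(nz)).sum(),
    }

# Toy 3-level image standing in for a liver ROI
img = np.array([[0, 0, 1], [0, 1, 1], [2, 2, 2]])
feats = glcm_features(glcm(img, levels=3))
```

Smooth liver texture concentrates mass on the GLCM diagonal (high homogeneity, low contrast); coarser fibrotic texture spreads it off-diagonal, which is what the classifiers in the study exploit.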
37. A medical image segmentation method based on multi-dimensional statistical features.
- Author
-
Yang Xu, Xianyu He, Guofeng Xu, Guanqiu Qi, Kun Yu, Li Yin, Pan Yang, Yuehui Yin, and Hao Chen
- Subjects
COMPUTER-assisted image analysis (Medicine) ,IMAGE segmentation ,DIAGNOSTIC imaging ,CONVOLUTIONAL neural networks ,BRAIN tumors ,FEATURE extraction - Abstract
Medical image segmentation has important auxiliary significance for clinical diagnosis and treatment. Most existing medical image segmentation solutions adopt convolutional neural networks (CNNs). Although these existing solutions can achieve good image segmentation performance, CNNs focus on local information and ignore global image information. Since the Transformer can encode the whole image, it has good global modeling ability and is effective for the extraction of global information. Therefore, this paper proposes a hybrid feature extraction network, into which CNNs and Transformer are integrated to utilize their advantages in feature extraction. To enhance low-dimensional texture features, this paper also proposes a multi-dimensional statistical feature extraction module to fully fuse the features extracted by CNNs and Transformer and enhance the segmentation performance of medical images. The experimental results confirm that the proposed method achieves better results in brain tumor segmentation and ventricle segmentation than state-of-the-art solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
38. Automatic COVID-19 detection mechanisms and approaches from medical images: a systematic review.
- Author
-
Rahmani, Amir Masoud, Azhir, Elham, Naserbakht, Morteza, Mohammadi, Mokhtar, Aldalwie, Adil Hussein Mohammed, Majeed, Mohammed Kamal, Taher Karim, Sarkhel H., and Hosseinzadeh, Mehdi
- Subjects
COVID-19 ,COMPUTER-assisted image analysis (Medicine) ,COMPUTER-aided diagnosis ,DEEP learning ,SUPERVISED learning ,DIAGNOSTIC imaging ,MAGNETIC resonance imaging ,COVID-19 pandemic - Abstract
Since early 2020, Coronavirus Disease 2019 (COVID-19) has spread widely around the world. COVID-19 infects the lungs, leading to breathing difficulties. Early detection of COVID-19 is important for the prevention and treatment of the pandemic. Numerous sources of medical images (e.g., Chest X-Rays (CXR), Computed Tomography (CT), and Magnetic Resonance Imaging (MRI)) are regarded as desirable techniques for diagnosing COVID-19 cases. Medical images of coronavirus patients show that the lungs are filled with sticky mucus that impedes breathing. Today, Artificial Intelligence (AI)-based algorithms have driven a significant shift in computer-aided diagnosis due to their effective feature extraction capabilities. In this survey, a complete and systematic review of the application of Machine Learning (ML) methods for the detection of COVID-19 is presented, focused on works that used medical images. We aimed to evaluate various ML-based techniques in detecting COVID-19 using medical imaging. A total of 26 papers were extracted from ACM, ScienceDirect, Springerlink, Tech Science Press, and IEEE Xplore. Five ML categories are considered in reviewing these mechanisms: supervised learning-based, deep learning-based, active learning-based, transfer learning-based, and evolutionary learning-based. A number of articles are investigated in each group. Also, some directions for further research are discussed to improve the detection of COVID-19 using ML techniques in the future. In most articles, deep learning is used as the ML method. Also, most of the researchers used CXR images to diagnose COVID-19. Most articles reported model accuracy as the performance measure. The accuracy of the studied models ranged from 0.84 to 0.99. The studies demonstrate the current status and potential of AI techniques in the fight against COVID-19. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
39. Segmentation of Medical Image Using Novel Dilated Ghost Deep Learning Model.
- Author
-
Zambrano-Vizuete, Marcelo, Botto-Tobar, Miguel, Huerta-Suárez, Carmen, Paredes-Parada, Wladimir, Patiño Pérez, Darwin, Ahanger, Tariq Ahamed, and Gonzalez, Neilys
- Subjects
COMPUTER-assisted image analysis (Medicine) ,GHOST stories ,DIAGNOSTIC imaging ,DEEP learning ,CONVOLUTIONAL neural networks ,IMAGE segmentation ,COMPUTER vision - Abstract
Image segmentation and computer vision are becoming more important in computer-aided diagnosis. Algorithmic extraction of image borders, colours, and textures is resource-intensive and requires technical knowledge, and robust medical image segmentation and recognition software is still lacking. The proposed model has 13 layers and uses dilated convolution and max-pooling to extract small features. The Ghost module removes duplicated features, simplifying the process and reducing complexity. The Convolutional Neural Network (CNN) generates a feature map and improves the accuracy of region or bounding-box proposals, from which the initial region of a segmented medical image can be obtained. The initial findings are then refined by morphological operations that thicken and categorise the image's pixels. The proposed model gives better results than traditional models, with an accuracy of 96.05%, a precision of 98.2%, and a recall of 95.78%. Experiments demonstrate that the recommended segmentation strategy is effective. This study rethinks medical image segmentation methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
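Dilated (atrous) convolution, which the 13-layer model above uses to capture small features with a wider context, samples the input at spaced positions so the receptive field grows without adding weights. A minimal 1-D sketch of the operation (illustrative only, not the proposed network; the kernel and signal are toy values):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation: int = 1):
    """1-D dilated convolution with 'valid' padding.

    The kernel's receptive field spans (len(kernel) - 1) * dilation + 1
    input samples, so dilation widens context at no parameter cost.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1
    out = []
    for i in range(len(x) - span + 1):
        out.append(sum(kernel[j] * x[i + j * dilation] for j in range(k)))
    return np.array(out)

x = np.arange(8, dtype=float)
dense = dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=1)  # spans 3 samples
wide = dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=2)   # spans 5 samples
```

With the same three weights, dilation 2 aggregates information from a 5-sample window instead of 3, which is the effect the model exploits in 2-D.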
40. Artificial Intelligence-Based Feature Analysis of Ultrasound Images of Liver Fibrosis.
- Author
-
Xie, Youcheng, Chen, Shun, Jia, Dong, Li, Bin, Zheng, Ying, and Yu, Xiaohui
- Subjects
HEPATIC fibrosis ,IMAGE analysis ,ULTRASONIC imaging ,IMAGE quality analysis ,IMAGE recognition (Computer vision) ,COMPUTER-assisted image analysis (Medicine) ,CONVOLUTIONAL neural networks - Abstract
Liver fibrosis is a common liver disease that seriously endangers human health. Liver biopsy is the gold standard for diagnosing liver fibrosis, but its clinical use is limited due to its invasive nature. Ultrasound image examination is a widely used liver fibrosis examination method. Clinicians can diagnose the severity of liver fibrosis according to their own experience by observing the roughness of the texture of the ultrasound image, but this method is highly subjective. Since artificial intelligence technology is widely used in medical image analysis, this paper uses convolutional neural network analysis to extract the characteristics of ultrasound images of liver fibrosis and then classify the degree of liver fibrosis. Using a neural network for image classification avoids the subjectivity of manual classification and improves the accuracy of judging the degree of liver fibrosis, thereby supporting the prevention and treatment of liver fibrosis. The following work is done in this paper: (1) the research background, significance, and status at home and abroad, and the impact of the development of medical imaging on the diagnosis of liver fibrosis, are introduced; (2) the related technologies of deep learning and deep convolutional networks are introduced, and indicators for assessing the degree of liver fibrosis are constructed using features extracted from ultrasound images; (3) model evaluation experiments are conducted on the collected liver fibrosis dataset, where four classic CNN models are selected to compare and analyze the recognition rate. The experiments show that the GoogLeNet model achieves the best classification and recognition performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
41. Fusion of B‐mode and shear wave elastography ultrasound features for automated detection of axillary lymph node metastasis in breast carcinoma.
- Author
-
Pham, The‐Hanh, Faust, Oliver, Koh, Joel En Wei, Ciaccio, Edward J., Barua, Prabal D., Omar, Norlia, Ng, Wei Lin, Ab Mumin, Nazimah, Rahmat, Kartini, and Acharya, U. Rajendra
- Subjects
AXILLA ,LYMPHATIC metastasis ,BREAST ,SHEAR waves ,DIAGNOSTIC ultrasonic imaging ,HILBERT-Huang transform ,COMPUTER-assisted image analysis (Medicine) ,ENDORECTAL ultrasonography - Abstract
In this study, we evaluate and compare the diagnostic performance of ultrasound for non‐invasive axillary lymph node (ALN) metastasis detection. The study was based on fusing shear wave elastography (SWE) and B‐mode ultrasonography (USG) images. These images were subjected to pre‐processing and feature extraction, based on bi‐dimensional empirical mode decomposition and higher order spectra methods. The resulting nonlinear features were ranked according to their p‐value, which was established with Student's t‐test. The ranked features were used to train and test six classification algorithms with 10‐fold cross‐validation. Initially, we considered B‐mode USG images in isolation. A probabilistic neural network (PNN) classifier was able to discriminate positive from negative cases with an accuracy of 74.77% using 15 features. Subsequently, only SWE images were used and as before, the PNN classifier delivered the best result with an accuracy of 87.85% based on 47 features. Finally, we combined SWE and B‐mode USG images. Again, the PNN classifier delivered the best result with an accuracy of 89.72% based on 71 features. These three tests indicate that SWE images contain more diagnostically relevant information when compared with B‐mode USG. Furthermore, there is scope in fusing SWE and B‐mode USG to improve non‐invasive ALN metastasis detection. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
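The study above ranks extracted features by their Student's t-test p-value before training classifiers. The ranking step can be sketched as follows; for simplicity this sketch ranks by the absolute t statistic (Welch's form), which yields the same ordering as ascending p-value when the groups have comparable degrees of freedom. The synthetic data and group sizes are illustrative assumptions:

```python
import numpy as np

def t_statistic(a: np.ndarray, b: np.ndarray) -> float:
    """Two-sample t statistic (Welch's form, unequal variances allowed)."""
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return (a.mean() - b.mean()) / se

def rank_features(pos: np.ndarray, neg: np.ndarray) -> list:
    """Rank feature columns by discriminative power, largest |t| first."""
    n_features = pos.shape[1]
    scores = [abs(t_statistic(pos[:, k], neg[:, k])) for k in range(n_features)]
    return sorted(range(n_features), key=lambda k: -scores[k])

# Synthetic feature matrices: column 1 clearly separates the classes
rng = np.random.default_rng(0)
pos = rng.normal([0.0, 5.0, 0.0], 1.0, size=(30, 3))  # metastasis-positive cases
neg = rng.normal([0.0, 0.0, 0.1], 1.0, size=(30, 3))  # negative cases
order = rank_features(pos, neg)
```

The top-ranked columns (here, index 1) would then be fed to the classifiers, as the paper does with its 15 to 71 selected features.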
42. Artificial Intelligence in Quantitative Ultrasound Imaging: A Survey.
- Author
-
Zhou, Boran, Yang, Xiaofeng, Curran, Walter J., and Liu, Tian
- Subjects
ULTRASONIC imaging ,ARTIFICIAL intelligence ,MAGNETIC resonance imaging ,COMPUTER-assisted image analysis (Medicine) ,COMPUTED tomography - Abstract
Quantitative ultrasound (QUS) imaging is a safe, reliable, inexpensive, and real‐time technique to extract physically descriptive parameters for assessing pathologies. Compared with other major imaging modalities such as computed tomography and magnetic resonance imaging, QUS suffers from several major drawbacks: poor image quality and inter‐ and intra‐observer variability. Therefore, there is a great need to develop automated methods to improve the image quality of QUS. In recent years, there has been increasing interest in artificial intelligence (AI) applications in medical imaging, and a large number of research studies in AI in QUS have been conducted. The purpose of this review is to describe and categorize recent research into AI applications in QUS. We first introduce the AI workflow and then discuss the various AI applications in QUS. Finally, challenges and future potential AI applications in QUS are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
43. Evaluating the Impact of High Intensity Interval Training on Axial Psoriatic Arthritis Based on MR Images.
- Author
-
Chronaiou, Ioanna, Giskeødegård, Guro Fanneløb, Neubert, Ales, Hoffmann-Skjøstad, Tamara Viola, Thomsen, Ruth Stoklund, Hoff, Mari, Bathen, Tone Frost, and Sitter, Beathe
- Subjects
MAGNETIC resonance imaging ,PSORIATIC arthritis ,INTERVAL training ,COMPUTER-assisted image analysis (Medicine) ,JOINT pain - Abstract
High intensity interval training (HIIT) has been shown to benefit patients with psoriatic arthritis (PsA). However, magnetic resonance (MR) imaging has uncovered bone marrow edema (BME) in healthy volunteers after vigorous exercise. The purpose of this study was to investigate MR images of the spine of PsA patients for changes in BME after HIIT. PsA patients went through 11 weeks of HIIT (N = 19, 4 men, median age 52 years) or no change in physical exercise habits (N = 20, 8 men, median age 45 years). We acquired scores for joint affection and pain and short tau inversion recovery (STIR) and T1-weighted MR images of the spine at baseline and after 11 weeks. MR images were evaluated for BME by a trained radiologist, by SpondyloArthritis Research Consortium of Canada (SPARCC) scoring, and by extraction of textural features. No significant changes of BME were detected in MR images of the spine after HIIT. This was consistent for MR image evaluation by a radiologist, by SPARCC, and by texture analysis. Values of textural features were significantly different in BME compared to healthy bone marrow. In conclusion, BME in spine was not changed after HIIT, supporting that HIIT is safe for PsA patients. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
44. Machine learning-based automatic detection of novel coronavirus (COVID-19) disease.
- Author
-
Bhargava, Anuja, Bansal, Atul, and Goyal, Vishal
- Subjects
COVID-19 ,SARS-CoV-2 ,DEEP learning ,ARTIFICIAL neural networks ,MACHINE learning ,DISCRETE wavelet transforms ,COMPUTER-assisted image analysis (Medicine) ,IMAGE processing - Abstract
Abstract: The World Health Organization declared coronavirus disease (COVID-19) a pandemic and a universal health dilemma. Any scientific tool that enables rapid detection of coronavirus with a high recognition rate could be extremely valuable to doctors. In this environment, innovative technologies such as deep learning, machine learning, and image processing, applied to medical images such as chest radiography (CXR) and computed tomography (CT), offer promising solutions against COVID-19. Currently, the reverse transcription-polymerase chain reaction (RT-PCR) test is used to detect the coronavirus, but because its turnaround time is long and its false-negative rate is high, substitute solutions are desired. Thus, an automated machine learning-based algorithm is proposed for the detection and grading of COVID-19 across nine different datasets. This research applies image processing and machine learning to rapid and definitive coronavirus detection using CXR and CT medical imaging, enabling detection, diagnosis, and treatment of COVID-19 as early as possible. Firstly, images are preprocessed by normalization to enhance image quality and remove noise. Secondly, images are segmented by fuzzy c-means clustering. Then various features, namely statistical, textural, histogram-of-gradients, and discrete wavelet transform features, are extracted (92 in total) and selected from the feature vector by principal component analysis. Lastly, k-NN, SRC, ANN, and SVM classifiers are used to decide among normal, pneumonia, and COVID-19-positive patients. The performance of the system has been validated by the k-fold (k = 5) cross-validation technique. The proposed algorithm achieves 91.70% (k-Nearest Neighbor), 94.40% (Sparse Representation Classifier), 96.16% (Artificial Neural Network), and 99.14% (Support Vector Machine) for COVID-19 detection.
The results show that feature combination and selection improve performance, with a processing time of 14.34 s, using machine learning and image processing techniques. Among the k-NN, SRC, ANN, and SVM classifiers, SVM shows the most efficient results, which are promising and comparable with the literature. The proposed approach achieves an improved recognition rate compared to the literature. Therefore, the proposed algorithm shows immense potential to support radiologists in their findings, and it may prove useful for early virus diagnosis and for discriminating COVID-19 pneumonia from other types of pneumonia. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
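The pipeline above reduces its 92 extracted features with principal component analysis (PCA) before classification. A minimal sketch of PCA via eigendecomposition of the covariance matrix, paired with a toy nearest-centroid classifier as a stand-in for the paper's k-NN/SRC/ANN/SVM models (the synthetic two-class data, dimensions, and classifier choice are all illustrative assumptions, not the authors' code):

```python
import numpy as np

def pca_fit(X: np.ndarray, n_components: int):
    """Principal component analysis via eigendecomposition of the covariance."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)                      # ascending eigenvalues
    W = vecs[:, np.argsort(vals)[::-1][:n_components]]    # top components
    return mu, W

def nearest_centroid(train_X, train_y, test_X):
    """Toy stand-in classifier: assign each sample to the closest class mean."""
    centroids = {c: train_X[train_y == c].mean(axis=0) for c in np.unique(train_y)}
    return np.array([min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
                     for x in test_X])

# Synthetic 10-dimensional feature vectors for two classes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (40, 10)), rng.normal(2, 1, (40, 10))])
y = np.array([0] * 40 + [1] * 40)

mu, W = pca_fit(X, n_components=3)
Z = (X - mu) @ W                                  # reduced feature vectors
pred = nearest_centroid(Z[::2], y[::2], Z[1::2])  # interleaved train/test split
accuracy = (pred == y[1::2]).mean()
```

Projecting onto the leading components keeps the directions of greatest variance, which here carry the class separation; the paper then applies k-fold cross-validation rather than the single split shown.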
45. Automated System for Identifying COVID-19 Infections in Computed Tomography Images Using Deep Learning Models.
- Author
-
Abdulkareem, Karrar Hameed, Mostafa, Salama A., Al-Qudsy, Zainab N., Mohammed, Mazin Abed, Al-Waisy, Alaa S., Kadry, Seifedine, Lee, Jinseok, and Nam, Yunyoung
- Subjects
COVID-19 ,DEEP learning ,COMPUTED tomography ,NUCLEIC acid amplification techniques ,COVID-19 pandemic ,COMPUTER-assisted image analysis (Medicine) ,CONVOLUTIONAL neural networks - Abstract
Coronavirus disease 2019 (COVID-19) is a novel disease that affects healthcare on a global scale and cannot be ignored because of its high fatality rate. Computed tomography (CT) images are presently being employed to assist doctors in detecting COVID-19 in its early stages. In several scenarios, a combination of epidemiological criteria (contact during the incubation period), the existence of clinical symptoms, laboratory tests (nucleic acid amplification tests), and clinical imaging-based tests are used to diagnose COVID-19. This method can miss patients and cause more complications. Deep learning is one of the techniques that has been proven to be prominent and reliable in several diagnostic domains involving medical imaging. This study utilizes a convolutional neural network (CNN), stacked autoencoder, and deep neural network to develop a COVID-19 diagnostic system. In this system, the classification stage undergoes some modification before the three techniques are applied to CT images to distinguish normal from COVID-19 cases. A large-scale and challenging CT image dataset was used to train the employed deep learning models and to report their final performance. Experimental outcomes show that the highest accuracy rate was achieved using the CNN model with an accuracy of 88.30%, a sensitivity of 87.65%, and a specificity of 87.97%. Furthermore, the proposed system outperforms existing state-of-the-art models in detecting the COVID-19 virus using CT images. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
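The accuracy, sensitivity, and specificity figures reported above all derive from the binary confusion matrix. A minimal sketch of those definitions (the toy labels are illustrative, not the study's data):

```python
def diagnostic_metrics(y_true, y_pred, positive=1):
    """Accuracy, sensitivity, and specificity from binary labels."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)
    tn = sum(t != positive and p != positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)
    return {
        "accuracy": (tp + tn) / len(pairs),
        "sensitivity": tp / (tp + fn),  # true-positive rate: COVID-19 cases found
        "specificity": tn / (tn + fp),  # true-negative rate: normal cases cleared
    }

# 1 = COVID-19, 0 = normal (toy labels)
m = diagnostic_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```

High sensitivity matters most for screening (few missed infections), while high specificity limits false alarms; the CNN in the study balances the two at roughly 88% each.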
46. Artificial Intelligence in Cardiovascular Atherosclerosis Imaging.
- Author
-
Zhang, Jia, Han, Ruijuan, Shao, Guo, Lv, Bin, and Sun, Kai
- Subjects
ARTIFICIAL intelligence ,COMPUTER engineering ,ATHEROSCLEROTIC plaque ,MYOCARDIAL perfusion imaging ,COMPUTER-assisted image analysis (Medicine) ,ATHEROSCLEROSIS - Abstract
At present, artificial intelligence (AI) has already been applied in cardiovascular imaging (e.g., image segmentation, automated measurements, and eventually, automated diagnosis) and it has been propelled to the forefront of cardiovascular medical imaging research. In this review, we present the current status of artificial intelligence applied to image analysis of coronary atherosclerotic plaques, covering multiple areas from plaque component analysis (e.g., identification of plaque properties, identification of vulnerable plaque, and detection of myocardial function) to risk prediction. Additionally, we discuss the current evidence, strengths, limitations, and future directions for AI in cardiac imaging of atherosclerotic plaques, as well as lessons that can be learned from other areas. The continuous development of computer science and technology may further promote the development of this field. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
47. Application of Artificial Intelligence in Cardiovascular Imaging.
- Author
-
Ma, Panjiang, Li, Qiang, and Li, Jianbin
- Subjects
ARTIFICIAL intelligence ,DEEP learning ,COMPUTER-assisted image analysis (Medicine) ,COMPUTER engineering ,CONVOLUTIONAL neural networks ,DIAGNOSTIC imaging - Abstract
During the last two decades, as computer technology has matured and business scenarios have diversified, the scale of application of computer systems in various industries has continued to expand, resulting in a huge increase in industry data. The medical industry has accumulated huge amounts of unstructured data, so exploring how to use medical image data more effectively for efficient diagnosis has an important practical impact. For a long time, China has been striving to promote the process of medical informatization, and the combination of big data, artificial intelligence, and other advanced technologies in the medical field has become a hot industry and a new development trend. This paper focuses on cardiovascular diseases and uses relevant deep learning methods to realize automatic analysis and diagnosis of medical images and to verify the feasibility of AI-assisted medical treatment. We aim to achieve a complete diagnosis from cardiovascular medical imaging and to localize the vulnerable lesion area. (1) We tested classical object-detection models based on convolutional neural networks, explored region segmentation algorithms, and showed their application scenarios in the field of medical imaging. (2) According to the data and task characteristics, we built a network model containing classification nodes and regression nodes. After multitask joint training, the performance of diagnosis and detection was also enhanced. In this paper, a weighted loss function mechanism is used to mitigate class imbalance in medical image analysis, enhancing the model's performance. (3) In actual medical practice, many medical images carry label information for high-level categories but lack label information for low-level lesions. The proposed system demonstrates the possibility of lesion localization under weakly supervised conditions using cardiovascular imaging data to resolve these issues.
Experimental results have verified that the proposed deep learning-enabled model has the capacity to resolve the aforementioned issues with minimum possible changes in the underlined infrastructure. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
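The weighted loss mechanism described in the abstract above addresses class imbalance by scaling each sample's loss by a class-dependent weight. The paper does not give its exact formulation, so the following is a minimal NumPy sketch of one common choice (inverse-frequency weighting applied to cross-entropy); the function names and the toy batch are illustrative, not from the paper.

```python
import numpy as np

def class_weights(labels, n_classes):
    """Inverse-frequency weights: rarer classes receive larger weights."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return counts.sum() / (n_classes * counts)

def weighted_cross_entropy(probs, labels, weights):
    """Mean cross-entropy with each sample scaled by its class weight."""
    eps = 1e-12  # avoid log(0)
    sample_losses = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return np.mean(weights[labels] * sample_losses)

# Imbalanced toy batch: four 'normal' samples (class 0), one 'lesion' (class 1).
labels = np.array([0, 0, 0, 0, 1])
w = class_weights(labels, n_classes=2)   # minority class weighted 4x higher
probs = np.array([[0.9, 0.1]] * 4 + [[0.4, 0.6]])
loss = weighted_cross_entropy(probs, labels, w)
```

Because the minority-class weight is larger, an error on the rare lesion class moves the loss more than the same error on the majority class, which pushes training away from always predicting the dominant class.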
48. A review of intelligent medical imaging diagnosis for the COVID-19 infection.
- Author
-
Saurabh, Nikitha, Shetty, Jyothi, Phillips-Wren, Gloria, Mora, Manuel, Wang, Fen, and Gomez, Jorge Marx
- Subjects
COMPUTER-assisted image analysis (Medicine) ,DIAGNOSIS ,DIAGNOSTIC imaging ,COVID-19 testing ,COVID-19 - Abstract
Due to the unavailability of specific vaccines or drugs to treat COVID-19 infection, the world has witnessed a rise in the human mortality rate. Currently, the real-time RT-PCR technique is widely accepted for detecting the presence of the virus, but it is time consuming and has a high rate of false-positive/negative results. This has opened research avenues to identify substitute strategies for diagnosing the infection. Related work in this direction has shown promising results when RT-PCR diagnosis is complemented with chest imaging. Finally, integrating intelligence and automating diagnostic systems can improve the speed and efficiency of the diagnosis process, which is extremely important in the present scenario. This paper reviews the use of CT scan, chest X-ray, and lung ultrasound images for COVID-19 diagnosis; discusses the automation of chest image analysis using machine learning and deep learning models; and elucidates the achievements, challenges, and future directions in this domain. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
49. CacheTrack-YOLO: Real-Time Detection and Tracking for Thyroid Nodules and Surrounding Tissues in Ultrasound Videos.
- Author
-
Wu, Xiangqiong, Tan, Guanghua, Zhu, Ningbo, Chen, Zhilun, Yang, Yan, Wen, Huaxuan, and Li, Kenli
- Subjects
THYROID nodules ,ULTRASONIC imaging ,COMPUTER-aided diagnosis ,COMPUTER-assisted image analysis (Medicine) ,VIDEO surveillance ,VIDEOS ,THYROID gland ,DIAGNOSTIC ultrasonic imaging - Abstract
Accurately detecting and tracking thyroid nodules in a video is a crucial step in thyroid screening for identifying benign and malignant nodules in computer-aided diagnosis (CAD) systems. Most existing methods perform well only on static frames selected manually from ultrasound videos, and manual frame acquisition is labor-intensive. To make the thyroid screening process more natural with fewer manual operations, we develop a well-designed framework suitable for practical thyroid nodule detection in ultrasound videos. In particular, to make full use of the characteristics of thyroid videos, we propose a novel post-processing approach, called CacheTrack, which exploits the contextual relations among video frames to propagate detection results into adjacent frames and refine them. Additionally, our method not only detects and counts thyroid nodules but also tracks and monitors surrounding tissues, which greatly reduces manual work and achieves computer-aided diagnosis. Experimental results show that our method achieves a better balance between accuracy and speed. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
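The abstract above describes propagating detections across adjacent video frames using inter-frame context. The paper's CacheTrack post-processing is more elaborate, but the core idea of linking a detection in one frame to the nearest detection in the next can be sketched with a simple IoU-based match; the function names, box coordinates, and threshold below are illustrative assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def propagate(prev_boxes, cur_boxes, thresh=0.3):
    """Link each current-frame box to the previous-frame box with highest IoU."""
    links = {}
    for i, cb in enumerate(cur_boxes):
        best_j, best = None, thresh
        for j, pb in enumerate(prev_boxes):
            v = iou(cb, pb)
            if v > best:
                best_j, best = j, v
        links[i] = best_j  # None means no match: a new track starts here
    return links

prev = [(10, 10, 50, 50)]                       # nodule detected in frame t
cur = [(12, 11, 52, 49), (200, 200, 240, 240)]  # detections in frame t+1
links = propagate(prev, cur)
# links[0] == 0 (same nodule, slight drift); links[1] is None (new region)
```

Because consecutive ultrasound frames change little, a high-IoU match is strong evidence the two boxes are the same nodule, letting detections (and track identities) carry forward without per-frame manual review.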
50. Applying a Random Projection Algorithm to Optimize Machine Learning Model for Breast Lesion Classification.
- Author
-
Heidari, Morteza, Lakshmivarahan, Sivaramakrishnan, Mirniaharikandehei, Seyedehnafiseh, Danala, Gopichandh, Maryada, Sai Kiran R., Liu, Hong, and Zheng, Bin
- Subjects
COMPUTER-aided diagnosis ,MACHINE learning ,BREAST ,SUPPORT vector machines ,COMPUTER-assisted image analysis (Medicine) ,ALGORITHMS - Abstract
Objective: Since computer-aided diagnosis (CAD) schemes for medical images usually compute a large number of image features, which creates the challenge of identifying a small, optimal feature vector for building robust machine learning models, the objective of this study is to investigate the feasibility of applying a random projection algorithm (RPA) to build an optimal feature vector from the large CAD-generated feature pool and improve machine learning model performance. Methods: We assembled a retrospective dataset of 1,487 mammogram cases, of which 644 have confirmed malignant mass lesions and 843 have benign lesions. A CAD scheme is first applied to segment mass regions and initially compute 181 features. Then, support vector machine (SVM) models embedded with several feature dimensionality reduction methods are built to predict the likelihood of lesions being malignant. All SVM models are trained and tested using a leave-one-case-out cross-validation method. The SVM generates a likelihood score for each segmented mass region depicted on a one-view mammogram. By fusing the two scores of the same mass depicted on two-view mammograms, a case-based likelihood score is also evaluated. Results: Compared with principal component analysis, nonnegative matrix factorization, and chi-squared methods, SVM embedded with RPA yielded a significantly higher case-based lesion classification performance, with an area under the ROC curve of 0.84 ± 0.01 (p<0.02). Conclusion: The study demonstrates that RPA is a promising method for generating optimal feature vectors and improving SVM performance. Significance: This study presents a new method to develop CAD schemes with significantly higher and more robust performance. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
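The random projection step in the study above maps a high-dimensional feature vector to a much lower-dimensional one via a random matrix, approximately preserving pairwise distances (the Johnson-Lindenstrauss property). Below is a minimal NumPy sketch of a Gaussian random projection matching the abstract's dimensions (181 features reduced for 1,487 cases); the random stand-in data, target dimension of 30, and seed are assumptions, not values from the paper, whose full RPA/SVM pipeline also includes training and cross-validation not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projection(X, k):
    """Project d-dim feature vectors down to k dims with a Gaussian random
    matrix scaled by 1/sqrt(k), so pairwise Euclidean distances are
    approximately preserved (Johnson-Lindenstrauss lemma)."""
    d = X.shape[1]
    R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))
    return X @ R

# Stand-in for the CAD feature pool: 1,487 cases x 181 features.
X = rng.normal(size=(1487, 181))
X_low = random_projection(X, k=30)

# Distances between cases survive the projection only approximately.
d_orig = np.linalg.norm(X[0] - X[1])
d_proj = np.linalg.norm(X_low[0] - X_low[1])
```

Unlike PCA or NMF, the projection matrix is data-independent, which makes the method cheap and immune to overfitting the feature-selection step; the reduced vectors would then feed the SVM classifier described in the abstract.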