54 results for "Leonardo, Rundo"
Search Results
2. Comparative performance of fully-automated and semi-automated artificial intelligence methods for the detection of clinically significant prostate cancer on MRI: a systematic review
- Author
-
Nikita Sushentsev, Nadia Moreira Da Silva, Michael Yeung, Tristan Barrett, Evis Sala, Michael Roberts, and Leonardo Rundo
- Subjects
Prostate cancer, MRI, Artificial intelligence, Deep learning, Machine learning, Medical physics. Medical radiology. Nuclear medicine
- Abstract
Objectives: We systematically reviewed the current literature evaluating the ability of fully-automated deep learning (DL) and semi-automated traditional machine learning (TML) MRI-based artificial intelligence (AI) methods to differentiate clinically significant prostate cancer (csPCa) from indolent PCa (iPCa) and benign conditions. Methods: We performed a computerised bibliographic search of studies indexed in MEDLINE/PubMed, arXiv, medRxiv, and bioRxiv between 1 January 2016 and 31 July 2021. Two reviewers performed the title/abstract and full-text screening. The remaining papers were screened by four reviewers using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) for DL studies and Radiomics Quality Score (RQS) for TML studies. Papers that fulfilled the pre-defined screening requirements underwent full CLAIM/RQS evaluation alongside the risk of bias assessment using QUADAS-2, both conducted by the same four reviewers. Standard measures of discrimination were extracted for the developed predictive models. Results: 17/28 papers (five DL and twelve TML) passed the quality screening and were subject to a full CLAIM/RQS/QUADAS-2 assessment, which revealed a substantial study heterogeneity that precluded us from performing quantitative analysis as part of this review. The mean RQS of TML papers was 11/36, and a total of five papers had a high risk of bias. AUCs of DL and TML papers with low risk of bias ranged between 0.80–0.89 and 0.75–0.88, respectively. Conclusion: We observed comparable performance of the two classes of AI methods and identified a number of common methodological limitations and biases that future studies will need to address to ensure the generalisability of the developed models.
- Published
- 2022
- Full Text
- View/download PDF
3. Integrating Artificial Intelligence Tools in the Clinical Research Setting: The Ovarian Cancer Use Case
- Author
-
Lorena Escudero Sanchez, Thomas Buddenkotte, Mohammad Al Sa’d, Cathal McCague, James Darcy, Leonardo Rundo, Alex Samoshkin, Martin J. Graves, Victoria Hollamby, Paul Browne, Mireia Crispin-Ortuzar, Ramona Woitek, Evis Sala, Carola-Bibiane Schönlieb, Simon J. Doran, and Ozan Öktem
- Subjects
artificial intelligence, cancer research, imaging, clinical integration, radiomics, Medicine (General)
- Abstract
Artificial intelligence (AI) methods applied to healthcare problems have shown enormous potential to alleviate the burden of health services worldwide and to improve the accuracy and reproducibility of predictions. In particular, developments in computer vision are creating a paradigm shift in the analysis of radiological images, where AI tools are already capable of automatically detecting and precisely delineating tumours. However, such tools are generally developed in technical departments that remain siloed from the settings where their real benefit would be achieved. Significant effort is still needed to make these advancements available, first in academic clinical research and ultimately in the clinical setting. In this paper, we demonstrate a prototype pipeline, based entirely on open-source and cost-free software, that bridges this gap by simplifying the integration of tools and models developed within the AI community into the clinical research setting. The pipeline provides an accessible platform with visualisation applications that allow end-users such as radiologists to view and interact with the outputs of these AI tools.
- Published
- 2023
- Full Text
- View/download PDF
4. Predicting Survival in Patients with Brain Tumors: Current State-of-the-Art of AI Methods Applied to MRI
- Author
-
Christian di Noia, James T. Grist, Frank Riemer, Maria Lyasheva, Miriana Fabozzi, Mauro Castelli, Raffaele Lodi, Caterina Tonon, Leonardo Rundo, and Fulvio Zaccagna
- Subjects
brain tumors, artificial intelligence, machine learning, survival prediction, magnetic resonance imaging, Medicine (General)
- Abstract
Given growing clinical needs, in recent years Artificial Intelligence (AI) techniques have increasingly been used to define the best approaches for survival assessment and prediction in patients with brain tumors. Advances in computational resources, and the collection of (mainly) public databases, have promoted this rapid development. This narrative review of the current state-of-the-art aimed to survey current applications of AI in predicting survival in patients with brain tumors, with a focus on Magnetic Resonance Imaging (MRI). An extensive search was performed on PubMed and Google Scholar using a Boolean research query based on MeSH terms and restricting the search to the period between 2012 and 2022. Fifty studies were selected, mainly based on Machine Learning (ML), Deep Learning (DL), radiomics-based methods, and methods that exploit traditional imaging techniques for survival assessment. We focused on two distinct tasks related to survival assessment: the first is the classification of subjects into survival classes (short- and long-term, or short-, mid- and long-term) to stratify patients into distinct groups; the second is the quantification, in days or months, of the individual survival interval. Our survey showed excellent state-of-the-art methods for the first task, with accuracy up to ∼98%. The second task appears to be the most challenging, but state-of-the-art techniques showed promising results, albeit with limitations, with a C-Index up to ∼0.91. In conclusion, the available computational methods perform differently depending on the specific task, and the choice of the best one to use is not univocal and depends on many aspects. Unequivocally, the use of features derived from quantitative imaging has been shown to be advantageous for AI applications, including survival prediction. This evidence from the literature motivates further research in the field of AI-powered methods for survival prediction in patients with brain tumors, in particular using the wealth of information provided by quantitative MRI techniques.
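As a point of reference for the C-Index reported in this review, a minimal pairwise implementation of Harrell's concordance index is sketched below; the data are invented placeholders and this is not code from any of the reviewed studies.

```python
import numpy as np

def concordance_index(times, scores, events):
    """Harrell's C-index: fraction of comparable patient pairs whose
    predicted risk scores are ordered consistently with observed survival."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable if patient i had the event before patient j's time
            if events[i] and times[i] < times[j]:
                comparable += 1
                if scores[i] > scores[j]:      # higher risk, shorter survival: concordant
                    concordant += 1
                elif scores[i] == scores[j]:   # ties count as half
                    concordant += 0.5
    return concordant / comparable

# hypothetical data: survival in months, predicted risk score, event indicator
times = np.array([12, 30, 7, 24, 18])
scores = np.array([0.8, 0.2, 0.9, 0.4, 0.5])
events = np.array([1, 0, 1, 1, 0])
print(concordance_index(times, scores, events))
```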
- Published
- 2022
- Full Text
- View/download PDF
5. Time series radiomics for the prediction of prostate cancer progression in patients on active surveillance
- Author
-
Nikita Sushentsev, Leonardo Rundo, Luis Abrego, Zonglun Li, Tatiana Nazarenko, Anne Y. Warren, Vincent J. Gnanapragasam, Evis Sala, Alexey Zaikin, Tristan Barrett, and Oleg Blyuss
- Subjects
Male, Artificial intelligence, Magnetic resonance imaging, Time Factors, Prostate, Humans, Prostatic Neoplasms, Radiology, Nuclear Medicine and imaging, General Medicine, Prostate-Specific Antigen, Watchful Waiting, Retrospective Studies
- Abstract
Serial MRI is an essential assessment tool in prostate cancer (PCa) patients enrolled on active surveillance (AS). However, it has only moderate sensitivity for predicting histopathological tumour progression at follow-up, which is in part due to the subjective nature of its clinical reporting and variation among centres and readers. In this study, we used a long short-term memory (LSTM) recurrent neural network (RNN) to develop a time series radiomics (TSR) predictive model that analysed longitudinal changes in tumour-derived radiomic features across 297 scans from 76 AS patients, 28 with histopathological PCa progression and 48 with stable disease. Using leave-one-out cross-validation (LOOCV), we found that an LSTM-based model combining TSR and serial PSA density (AUC 0.86 [95% CI: 0.78–0.94]) significantly outperformed a model combining conventional delta-radiomics and delta-PSA density (0.75 [0.64–0.87]; p = 0.048) and achieved comparable performance to expert-performed serial MRI analysis using the Prostate Cancer Radiologic Estimation of Change in Sequential Evaluation (PRECISE) scoring system (0.84 [0.76–0.93]; p = 0.710). The proposed TSR framework, therefore, offers a feasible quantitative tool for standardising serial MRI assessment in PCa AS. It also presents a novel methodological approach to serial image analysis that can be used to support clinical decision-making in multiple scenarios, from continuous disease monitoring to treatment response evaluation. Key Points: • LSTM RNN can be used to predict the outcome of PCa AS using time series changes in tumour-derived radiomic features and PSA density. • Using all available TSR features and serial PSA density yields a significantly better predictive performance compared to using just two time points within the delta-radiomics framework. • The concept of TSR can be applied to other clinical scenarios involving serial imaging, setting out a new field in AI-driven radiology research.
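A rough sketch of the kind of sequence model described above is given below, using PyTorch; the feature dimension, hidden size, and classification head are illustrative assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class TimeSeriesRadiomicsClassifier(nn.Module):
    """Minimal LSTM classifier over a sequence of per-scan feature vectors.
    Feature dimension and hidden size are illustrative choices only."""
    def __init__(self, n_features=32, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # progression vs. stable disease

    def forward(self, x):              # x: (batch, n_scans, n_features)
        _, (h_n, _) = self.lstm(x)     # h_n: (1, batch, hidden_size)
        return torch.sigmoid(self.head(h_n[-1]))  # probability of progression

# hypothetical batch: 4 patients, 3 serial scans each, 32 features per scan
model = TimeSeriesRadiomicsClassifier()
probs = model(torch.randn(4, 3, 32))
print(probs.shape)  # torch.Size([4, 1])
```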
- Published
- 2023
6. 3D DCE-MRI Radiomic Analysis for Malignant Lesion Prediction in Breast Cancer Patients
- Author
-
Tommaso Vincenzo Bartolotta, Ramona Woitek, Alessia Angela Maria Orlando, Giorgio Ivan Russo, Leonardo Rundo, Mariangela Dimarco, Carmelo Militello, and Ildebrando D'Angelo
- Subjects
Breast cancer, Dynamic contrast-enhanced magnetic resonance imaging, Radiomics, Machine learning, Unsupervised feature selection, Feature selection, Support vector machines, Receiver operating characteristic, Breast Neoplasms, Retrospective Studies
- Abstract
Rationale and Objectives: To develop and validate a radiomic model, with radiomic features extracted from breast Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) from a 1.5T scanner, for predicting the malignancy of enhancing masses. Images were acquired using an 8-channel breast coil in the axial plane. The rationale behind this study is to show the feasibility of a radiomics-powered model that could be integrated into clinical practice by exploiting only standard-of-care DCE-MRI, with the goal of reducing the required image pre-processing (i.e., normalization and quantitative imaging map generation). Materials and Methods: 107 radiomic features were extracted from a manually annotated dataset of 111 patients, which was split into discovery and test sets. A feature calibration and pre-processing step was performed to retain only robust, non-redundant features. An in-depth discovery analysis was performed to define a predictive model: for this purpose, a Support Vector Machine (SVM) was trained in a nested 5-fold cross-validation scheme, exploiting several unsupervised feature selection methods. The predictive model performance was evaluated in terms of Area Under the Receiver Operating Characteristic curve (AUROC), specificity, sensitivity, PPV and NPV. The test was performed on unseen held-out data. Results: The model combining Unsupervised Discriminative Feature Selection (UDFS) and SVMs on average achieved the best performance on the blinded test set: AUROC = 0.725 ± 0.091, sensitivity = 0.709 ± 0.176, specificity = 0.741 ± 0.114, PPV = 0.72 ± 0.093, and NPV = 0.75 ± 0.114. Conclusion: In this study, we built a radiomic predictive model based on breast DCE-MRI, using only the strongest enhancement phase, with promising results in terms of accuracy and specificity in differentiating malignant from benign breast lesions.
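The nested cross-validation scheme described above can be sketched with scikit-learn as follows; the random data (with dimensions matching the abstract: 111 patients, 107 features), hyperparameter grid, and scoring choice are placeholders, not the authors' configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV, cross_val_score, StratifiedKFold

# placeholder radiomic feature matrix (patients x features) and benign/malignant labels
rng = np.random.default_rng(0)
X, y = rng.normal(size=(111, 107)), rng.integers(0, 2, size=111)

# inner loop tunes the SVM, outer loop estimates generalisation (nested cross-validation)
inner = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]},
    cv=StratifiedKFold(5, shuffle=True, random_state=0),
    scoring="roc_auc",
)
outer_auc = cross_val_score(inner, X, y,
                            cv=StratifiedKFold(5, shuffle=True, random_state=1),
                            scoring="roc_auc")
print(outer_auc.mean(), outer_auc.std())
```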
- Published
- 2022
- Full Text
- View/download PDF
7. A quantum-inspired classifier for clonogenic assay evaluations
- Author
-
Giuseppe Sergioli, Carmelo Militello, Leonardo Rundo, Luigi Minafra, Filippo Torrisi, Giorgio Russo, Keng Loon Chow, and Roberto Giuntini
- Subjects
Quantum machine learning, Quantum information, Machine learning, Binary classification, Cellular imaging, Artificial intelligence
- Abstract
Recent advances in Quantum Machine Learning (QML) have provided benefits to several computational processes, drastically reducing the time complexity. Another approach of combining quantum information theory with machine learning—without involving quantum computers—is known as Quantum-inspired Machine Learning (QiML), which exploits the expressive power of the quantum language to increase the accuracy of the process (rather than reducing the time complexity). In this work, we propose a large-scale experiment based on the application of a binary classifier inspired by quantum information theory to the biomedical imaging context in clonogenic assay evaluation to identify the most discriminative feature, allowing us to enhance cell colony segmentation. This innovative approach offers a two-fold result: (1) among the extracted and analyzed image features, homogeneity is shown to be a relevant feature in detecting challenging cell colonies; and (2) the proposed quantum-inspired classifier is a novel and outstanding methodology, compared to conventional machine learning classifiers, for the evaluation of clonogenic assays.
- Published
- 2021
- Full Text
- View/download PDF
8. Reproducibility of CT-based radiomic features against image resampling and perturbations for tumour and healthy kidney in renal cancer patients
- Author
-
Margherita Mottola, Stephan Ursprung, Leonardo Rundo, Lorena Escudero Sanchez, Tobias Klatte, Iosif Mendichovszky, Grant D. Stewart, Evis Sala, and Alessandro Bevilacqua
- Subjects
Radiomics, Reproducibility, Feature robustness, Image interpolation, Reliability, Renal cell carcinoma, Kidney Neoplasms, Computed tomography, Biomarkers, Algorithms
- Abstract
Computed Tomography (CT) is widely used in oncology for morphological evaluation and diagnosis, commonly through visual assessments, often exploiting semi-automatic tools as well. Well-established automatic methods for quantitative imaging offer the opportunity to enrich the radiologist's interpretation with a large number of radiomic features, which need to be highly reproducible to be used reliably in clinical practice. This study investigates feature reproducibility against noise, varying resolutions and segmentations (achieved by perturbing the regions of interest), in a CT dataset with heterogeneous voxel size comprising 98 renal cell carcinomas (RCCs) and 93 contralateral normal kidneys (CK). In particular, first order (FO) and second order texture features based on both 2D and 3D grey level co-occurrence matrices (GLCMs) were considered. Moreover, this study carries out a comparative analysis of three of the most commonly used interpolation methods, which need to be selected before any resampling procedure. Results showed that Lanczos interpolation is the most effective at preserving original information in resampling, and that the median slice resolution coupled with the native slice spacing yields the best reproducibility, with 94.6% and 87.7% of features reproducible in RCC and CK, respectively. GLCMs show their maximum reproducibility when used at short distances.
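Isotropic resampling with different interpolators, of the kind compared in the study above, can be sketched with SimpleITK; the synthetic volume, target spacing, and the particular interpolators shown are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
import SimpleITK as sitk

def resample_isotropic(image, new_spacing=(1.0, 1.0, 1.0),
                       interpolator=sitk.sitkLanczosWindowedSinc):
    """Resample a CT volume to isotropic voxels with a chosen interpolator."""
    old_spacing, old_size = image.GetSpacing(), image.GetSize()
    new_size = [int(round(osz * osp / nsp))
                for osz, osp, nsp in zip(old_size, old_spacing, new_spacing)]
    return sitk.Resample(image, new_size, sitk.Transform(), interpolator,
                         image.GetOrigin(), new_spacing, image.GetDirection(),
                         0.0, image.GetPixelID())

# synthetic anisotropic "CT" volume standing in for a real scan
ct = sitk.GetImageFromArray(np.random.rand(40, 256, 256).astype(np.float32))
ct.SetSpacing((0.8, 0.8, 3.0))  # (x, y, z) spacing in mm

for interp in (sitk.sitkNearestNeighbor, sitk.sitkLinear,
               sitk.sitkLanczosWindowedSinc):
    iso = resample_isotropic(ct, interpolator=interp)
    # radiomic features would be re-extracted from `iso` and compared against
    # the native-resolution features to assess their reproducibility
    print(interp, iso.GetSize())
```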
- Published
- 2021
9. Impact of GAN-based lesion-focused medical image super-resolution on the robustness of radiomic features
- Author
-
Erick Costa de Farias, Christian di Noia, Changhee Han, Evis Sala, Mauro Castelli, and Leonardo Rundo
- Subjects
Lung cancer, Computed tomography (CT), Radiomic features, Generative Adversarial Network (GAN), Feature extraction, Principal component analysis, Robustness, Medical imaging, Cancer imaging, Machine Learning, Biomarkers
- Abstract
Robust machine learning models based on radiomic features might allow for accurate diagnosis, prognosis, and medical decision-making. Unfortunately, the lack of standardized radiomic feature extraction has hampered their clinical use. Since radiomic features tend to be affected by low voxel statistics in regions of interest, increasing the sample size would improve their robustness in clinical studies. Therefore, we propose a Generative Adversarial Network (GAN)-based lesion-focused framework for Computed Tomography (CT) image Super-Resolution (SR); for the lesion (i.e., cancer) patch-focused training, we incorporate Spatial Pyramid Pooling (SPP) into GAN-Constrained by the Identical, Residual, and Cycle Learning Ensemble (GAN-CIRCLE). At 2× SR, the proposed model achieved better perceptual quality with less blurring than the other considered state-of-the-art SR methods, while producing comparable results at 4× SR. We also evaluated the robustness of our model's radiomic features in terms of quantization on a different lung cancer CT dataset using Principal Component Analysis (PCA). Intriguingly, the most important radiomic features in our PCA-based analysis were the most robust features extracted on the GAN-super-resolved images. These achievements pave the way for the application of GAN-based image Super-Resolution techniques in radiomics studies for robust biomarker discovery.
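The PCA-based robustness check mentioned above can be sketched as follows; the feature matrices are simulated stand-ins for radiomic features extracted from native and super-resolved images, and the correlation-based comparison is an illustrative choice, not the authors' exact protocol.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# simulated radiomic feature matrices for the same lesions, extracted from the
# original and the super-resolved CT patches (same feature order in both)
rng = np.random.default_rng(1)
feats_native = rng.normal(size=(40, 90))
feats_superres = feats_native + rng.normal(scale=0.1, size=(40, 90))

pca = PCA(n_components=5).fit(StandardScaler().fit_transform(feats_native))
loadings = np.abs(pca.components_)               # feature weights per principal component
top_features = loadings[0].argsort()[::-1][:10]  # features driving the first component

# compare each top-ranked feature across native vs. super-resolved images
for idx in top_features:
    corr = np.corrcoef(feats_native[:, idx], feats_superres[:, idx])[0, 1]
    print(f"feature {idx}: Pearson r = {corr:.3f}")
```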
- Published
- 2021
10. Semi-automated and interactive segmentation of contrast-enhancing masses on breast DCE-MRI using spatial fuzzy clustering
- Author
-
Carmelo Militello, Leonardo Rundo, Mariangela Dimarco, Alessia Angela Maria Orlando, Vincenzo Conti, Ramona Woitek, Ildebrando D'Angelo, Tommaso Vincenzo Bartolotta, and Giorgio Ivan Russo
- Subjects
Breast cancer, Multiparametric Magnetic Resonance Imaging, Semi-automated segmentation, Unsupervised fuzzy clustering, Spatial information, Computer-assisted lesion detection, Image segmentation, Thresholding, Convolutional neural network
- Abstract
Multiparametric Magnetic Resonance Imaging (MRI) is the most sensitive imaging modality for breast cancer detection and is increasingly playing a key role in lesion characterization. In this context, accurate and reliable quantification of the shape and extent of breast cancer is crucial in clinical research environments. Since conventional lesion delineation procedures are still mostly manual, automated segmentation approaches can improve this time-consuming and operator-dependent task by annotating the regions of interest in a reproducible manner. In this work, a semi-automated and interactive approach based on the spatial Fuzzy C-Means (sFCM) algorithm is proposed, used to segment masses on dynamic contrast-enhanced (DCE) MRI of the breast. Our method was compared against existing approaches based on classic image processing, namely (i) Otsu's method for thresholding-based segmentation, and (ii) the traditional FCM algorithm. A further comparison was performed against state-of-the-art Convolutional Neural Networks (CNNs) for medical image segmentation, namely SegNet and U-Net, in a 5-fold cross-validation scheme. The results showed the validity of the proposed approach, by significantly outperforming the competing methods in terms of the Dice similarity coefficient (84.47 ± 4.75). Moreover, a Pearson's coefficient of ρ = 0.993 showed a high correlation between segmented volume and the gold standard provided by clinicians. Overall, the proposed method was confirmed to outperform the competing literature methods. The proposed computer-assisted approach could be deployed into clinical research environments by providing a reliable tool for volumetric and radiomics analyses.
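For orientation, a plain (non-spatial) Fuzzy C-Means on pixel intensities is sketched below; the spatial variant used in the paper additionally re-weights memberships using neighbouring pixels, and all parameter values and the random test image here are illustrative.

```python
import numpy as np

def fuzzy_cmeans(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means on a 1-D intensity vector (no spatial term)."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                                   # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)              # fuzzily weighted cluster centres
        dist = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / (dist ** (2 / (m - 1)))
        u /= u.sum(axis=0)                               # normalised membership update
    return centers, u

# stand-in for a DCE-MRI subtraction image: separate enhancing lesion from background
image = np.random.default_rng(2).random((64, 64))
centers, u = fuzzy_cmeans(image.ravel())
lesion_mask = (u.argmax(axis=0) == centers.argmax()).reshape(image.shape)
print(lesion_mask.sum(), "pixels assigned to the brighter cluster")
```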
- Published
- 2022
- Full Text
- View/download PDF
11. Focus U-Net: A novel dual attention-gated CNN for polyp segmentation during colonoscopy
- Author
-
Carola-Bibiane Schönlieb, Leonardo Rundo, Michael Yeung, and Evis Sala
- Subjects
Colorectal cancer, Colonoscopy, Polyp segmentation, Computer-aided diagnosis, Focus U-Net, Attention mechanisms, Loss function, Deep learning, Image segmentation, Early Detection of Cancer
- Abstract
Background Colonoscopy remains the gold-standard screening for colorectal cancer. However, significant miss rates for polyps have been reported, particularly when there are multiple small adenomas. This presents an opportunity to leverage computer-aided systems to support clinicians and reduce the number of polyps missed. Method In this work we introduce the Focus U-Net, a novel dual attention-gated deep neural network, which combines efficient spatial and channel-based attention into a single Focus Gate module to encourage selective learning of polyp features. The Focus U-Net incorporates several further architectural modifications, including the addition of short-range skip connections and deep supervision. Furthermore, we introduce the Hybrid Focal loss, a new compound loss function based on the Focal loss and Focal Tversky loss, designed to handle class-imbalanced image segmentation. For our experiments, we selected five public datasets containing images of polyps obtained during optical colonoscopy: CVC-ClinicDB, Kvasir-SEG, CVC-ColonDB, ETIS-Larib PolypDB and EndoScene test set. We first perform a series of ablation studies and then evaluate the Focus U-Net on the CVC-ClinicDB and Kvasir-SEG datasets separately, and on a combined dataset of all five public datasets. To evaluate model performance, we use the Dice similarity coefficient (DSC) and Intersection over Union (IoU) metrics. Results Our model achieves state-of-the-art results for both CVC-ClinicDB and Kvasir-SEG, with a mean DSC of 0.941 and 0.910, respectively. When evaluated on a combination of five public polyp datasets, our model similarly achieves state-of-the-art results with a mean DSC of 0.878 and mean IoU of 0.809, a 14% and 15% improvement over the previous state-of-the-art results of 0.768 and 0.702, respectively. Conclusions This study shows the potential for deep learning to provide fast and accurate polyp segmentation results for use during colonoscopy. The Focus U-Net may be adapted for future use in newer non-invasive colorectal cancer screening and more broadly to other biomedical image segmentation tasks similarly involving class imbalance and requiring efficiency. Highlights: • Automatic polyp segmentation can support clinicians to reduce polyp miss rates. • Focus U-Net combines efficient spatial and channel attention into a Focus Gate. • Focus Gate uses tunable focal parameter to control degree of background suppression. • Deep supervision and new Hybrid Focal loss function further improve performance. • Focus U-Net outperforms state-of-the-art results across five public polyp datasets.
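One plausible reading of the loss design described above is sketched below in PyTorch; the Focal Tversky term follows the standard formulation, while the way it is combined with a Focal (binary cross-entropy) term and all parameter values are assumptions, not the authors' published Hybrid Focal loss.

```python
import torch

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for binary segmentation.
    pred: predicted probabilities, target: binary mask, both shaped (batch, H, W)."""
    pred, target = pred.reshape(pred.size(0), -1), target.reshape(target.size(0), -1)
    tp = (pred * target).sum(dim=1)
    fn = ((1 - pred) * target).sum(dim=1)
    fp = (pred * (1 - target)).sum(dim=1)
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return ((1 - tversky) ** gamma).mean()

def hybrid_focal_loss(pred, target, weight=0.5, gamma_f=2.0):
    """Illustrative weighted sum of a Focal (binary cross-entropy) term and the
    Focal Tversky term; the weighting and parameters are assumptions."""
    bce = torch.nn.functional.binary_cross_entropy(pred, target, reduction="none")
    focal = ((1 - torch.exp(-bce)) ** gamma_f * bce).mean()  # (1 - p_t)^gamma * CE
    return weight * focal + (1 - weight) * focal_tversky_loss(pred, target)

pred = torch.rand(2, 128, 128)                     # stand-in sigmoid outputs
target = (torch.rand(2, 128, 128) > 0.9).float()   # stand-in sparse polyp masks
print(hybrid_focal_loss(pred, target))
```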
- Published
- 2021
- Full Text
- View/download PDF
12. 3D deformable registration of longitudinal abdominopelvic CT images using unsupervised deep learning
- Author
-
Felix Lucka, K. Joost Batenburg, Carola-Bibiane Schönlieb, Evis Sala, Leonardo Rundo, Emma Beddowes, Maureen van Eijnatten, Ramona Woitek, Carlos Caldas, and Ferdia A. Gallagher
- Subjects
Deformable registration, Image registration, Abdominopelvic imaging, Computed tomography, Deep Learning, Convolutional neural networks, Incremental training, Displacement vector fields
- Abstract
Background and Objectives: Deep learning is being increasingly used for deformable image registration and unsupervised approaches, in particular, have shown great potential. However, the registration of abdominopelvic Computed Tomography (CT) images remains challenging due to the larger displacements compared to those in brain or prostate Magnetic Resonance Imaging datasets that are typically considered as benchmarks. In this study, we investigate the use of the commonly used unsupervised deep learning framework VoxelMorph for the registration of a longitudinal abdominopelvic CT dataset acquired in patients with bone metastases from breast cancer. Methods: As a pre-processing step, the abdominopelvic CT images were refined by automatically removing the CT table and all other extra-corporeal components. To improve the learning capabilities of the VoxelMorph framework when only a limited amount of training data is available, a novel incremental training strategy is proposed based on simulated deformations of consecutive CT images in the longitudinal dataset. This devised training strategy was compared against training on simulated deformations of a single CT volume. A widely used software toolbox for deformable image registration called NiftyReg was used as a benchmark. The evaluations were performed by calculating the Dice Similarity Coefficient (DSC) between manual vertebrae segmentations and the Structural Similarity Index (SSIM). Results: The CT table removal procedure allowed both VoxelMorph and NiftyReg to achieve significantly better registration performance. In a 4-fold cross-validation scheme, the incremental training strategy resulted in better registration performance compared to training on a single volume, with a mean DSC of 0.929±0.037 and 0.883±0.033, and a mean SSIM of 0.984±0.009 and 0.969±0.007, respectively. Although our deformable image registration method did not outperform NiftyReg in terms of DSC (0.988±0.003) or SSIM (0.995±0.002), the registrations were approximately 300 times faster. Conclusions: This study showed the feasibility of deep learning based deformable registration of longitudinal abdominopelvic CT images via a novel incremental training strategy based on simulated deformations.
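The evaluation metrics used in this study (Dice Similarity Coefficient on vertebra masks and SSIM on the registered volumes) can be sketched as follows; the arrays are random placeholders and scikit-image's structural_similarity is used here only as a convenient SSIM implementation, not necessarily the one used by the authors.

```python
import numpy as np
from skimage.metrics import structural_similarity

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# placeholders: reference/warped vertebra masks and CT volumes after registration
rng = np.random.default_rng(3)
mask_ref = rng.random((32, 64, 64)) > 0.7
mask_warped = mask_ref.copy()
ct_ref = rng.normal(size=(32, 64, 64))
ct_warped = ct_ref + rng.normal(scale=0.05, size=ct_ref.shape)

print("DSC :", dice(mask_ref, mask_warped))
print("SSIM:", structural_similarity(ct_ref, ct_warped,
                                      data_range=ct_ref.max() - ct_ref.min()))
```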
- Published
- 2021
13. Fingerprint classification based on deep learning approaches: Experimental findings and comparisons
- Author
-
Carmelo Militello, Leonardo Rundo, Salvatore Vitabile, and Vincenzo Conti
- Subjects
Fingerprint classification, Fingerprint features, Deep learning, Convolutional neural networks, Biometrics, Image processing
- Abstract
Biometric classification plays a key role in fingerprint characterization, especially in the identification process. In fact, reducing the number of comparisons in biometric recognition systems is essential when dealing with large-scale databases. The classification of fingerprints aims to achieve this target by splitting fingerprints into different categories. The general approach of fingerprint classification requires pre-processing techniques that are usually computationally expensive. Deep Learning is emerging as the leading field that has been successfully applied to many areas, such as image processing. This work shows the performance of pre-trained Convolutional Neural Networks (CNNs), tested on two fingerprint databases (namely, PolyU and NIST) and compared against other results presented in the literature, in order to establish which of the examined architectures (AlexNet, GoogLeNet, and ResNet) achieves the best performance in terms of precision and model efficiency. We present the first study that extensively compares the most used CNN architectures by classifying the fingerprints into four, five, and eight classes. From the experimental results, the best performance was obtained in the classification of the PolyU database by all the tested CNN architectures, due to the higher quality of its samples. To confirm the reliability of our study and the results obtained, a statistical analysis based on the McNemar test was performed.
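A typical transfer-learning setup of the kind compared in this study can be sketched as follows; the ResNet-18 backbone, class count, optimiser, and input shapes are illustrative assumptions, not the exact configurations benchmarked by the authors.

```python
import torch
import torch.nn as nn
from torchvision import models

# pre-trained backbone with its final fully-connected layer replaced so that it
# outputs the fingerprint classes (class count is a placeholder)
num_classes = 5
model = models.resnet18(weights="DEFAULT")   # older torchvision: pretrained=True
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# stand-in mini-batch of greyscale fingerprints replicated to 3 channels
images = torch.rand(8, 1, 224, 224).repeat(1, 3, 1, 1)
labels = torch.randint(0, num_classes, (8,))

logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print(loss.item())
```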
- Published
- 2021
- Full Text
- View/download PDF
14. A multimodal retina-iris biometric system using the Levenshtein distance for spatial feature comparison
- Author
-
Vincenzo Conti, Leonardo Rundo, Carmelo Militello, Valerio Mario Salerno, Salvatore Vitabile, and Sabato Marco Siniscalchi
- Subjects
Multimodal retina-iris biometric system, Biometric authentication system, Biometric recognition system, Retina and iris features, Spatial domain biometric features, Levenshtein distance
- Abstract
The recent developments of information technologies, and the consequent need for access to distributed services and resources, require robust and reliable authentication systems. Biometric systems can guarantee high levels of security, and multimodal techniques, which combine two or more biometric traits, enforce more stringent constraints during the access phases. This work proposes a novel multimodal biometric system based on the combination of iris and retina in the spatial domain. The proposed solution follows the alignment and recognition approach commonly adopted in computational linguistics and bioinformatics; in particular, features are extracted separately for iris and retina, and the fusion is obtained by relying upon the comparison score computed via the Levenshtein distance. We evaluated our approach by testing several combinations of publicly available biometric databases, namely one for retina images and three for iris images. To provide comprehensive results, detection error trade-off-based metrics, as well as statistical analyses for assessing the authentication performance, were considered. The best achieved False Acceptance Rate and False Rejection Rate indices were and 3.33%, respectively, for the multimodal retina-iris biometric approach, which overall outperformed the unimodal systems. These results highlight the potential of the proposed approach as a multimodal authentication framework using multiple static biometric traits.
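The Levenshtein-distance matching step can be sketched as a standard dynamic-programming edit distance; the feature strings and the normalised similarity score below are hypothetical, not the paper's actual template encoding.

```python
def levenshtein(a, b):
    """Edit distance between two feature strings (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution (0 if match)
        prev = curr
    return prev[-1]

# hypothetical feature code strings for an enrolled template and a probe sample
enrolled, probe = "ACBBDACD", "ACBDDACE"
distance = levenshtein(enrolled, probe)
score = 1 - distance / max(len(enrolled), len(probe))   # simple normalised similarity
print(distance, round(score, 3))
```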
- Published
- 2021
- Full Text
- View/download PDF
15. Advanced computational methods for oncological image analysis
- Author
-
Leonardo Rundo, Carmelo Militello, Vincenzo Conti, Fulvio Zaccagna, and Changhee Han
- Subjects
Computational methods, Oncological imaging, Deep learning, Machine learning, Radiogenomics, Medical informatics, Editorial
- Abstract
The Special Issue “Advanced Computational Methods for Oncological Image Analysis”, published for the Journal of Imaging, covered original research papers about state-of-the-art and novel algorithms and methodologies, as well as applications of computational methods for oncological image analysis, ranging from radiogenomics to deep learning [...]
- Published
- 2021
16. AI applications to medical images: From machine learning to deep learning
- Author
-
Leonardo Rundo, Christian Salvatore, Marina Codari, Isabella Castiglioni, Matteo Interlenghi, Andrea Cozzi, Giovanni Di Leo, Francesca Gallivanone, Francesco Sardanelli, and Natascha Claudia D'Amico
- Subjects
Artificial intelligence, Machine learning, Deep learning, Radiomics, Medical imaging, Diagnostic Imaging, Clinical decision support system, Convolutional neural network, Data curation, Interpretability, Automatic image annotation
- Abstract
Purpose: Artificial intelligence (AI) models are playing an increasing role in biomedical research and healthcare services. This review focuses on challenging points that need to be clarified in order to develop AI applications as clinical decision support systems in the real-world context. Methods: A narrative review was performed, including a critical assessment of articles published between 1989 and 2021, which guided the sections devoted to these challenges. Results: We first illustrate the architectural characteristics of machine learning (ML)/radiomics and deep learning (DL) approaches. For ML/radiomics, the phases of feature selection and of training, validation, and testing are described. DL models are presented as multi-layered artificial/convolutional neural networks, allowing us to directly process images. The data curation section includes technical steps such as image labelling, image annotation (with segmentation as a crucial step in radiomics), data harmonization (enabling compensation for differences in imaging protocols that typically generate noise in non-AI imaging studies) and federated learning. Thereafter, we dedicate specific sections to: sample size calculation, considering multiple testing in AI approaches; procedures for data augmentation to work with limited and unbalanced datasets; and the interpretability of AI models (the so-called black box issue). Pros and cons of choosing ML versus DL to implement AI applications to medical imaging are finally presented in a synoptic way. Conclusions: Biomedicine and healthcare systems are among the most important fields for AI applications, and medical imaging is probably the most suitable and promising domain. Clarification of specific challenging points facilitates the development of such systems and their translation to clinical practice.
- Published
- 2021
- Full Text
- View/download PDF
17. MADGAN: unsupervised medical anomaly detection GAN using multiple adjacent brain MRI slice reconstruction
- Author
-
Tomoyuki Noguchi, Saori Koshino, Evis Sala, Changhee Han, Hideki Nakayama, Kohei Murao, Yuki Shimahara, Leonardo Rundo, Zoltán Ádám Milacski, and Shin'ichi Satoh
- Subjects
Unsupervised anomaly detection, Generative adversarial networks, Brain MRI reconstruction, Self-attention, Various disease diagnosis, Alzheimer Disease, Cognitive impairment, Magnetic Resonance Imaging
- Abstract
Unsupervised learning can discover various unseen abnormalities, relying on large-scale unannotated medical images of healthy subjects. Towards this, unsupervised methods reconstruct a 2D/3D single medical image to detect outliers either in the learned feature space or from high reconstruction loss. However, without considering continuity between multiple adjacent slices, they cannot directly discriminate diseases composed of the accumulation of subtle anatomical anomalies, such as Alzheimer's Disease (AD). Moreover, no study has shown how unsupervised anomaly detection is associated with either disease stages, various (i.e., more than two types of) diseases, or multi-sequence Magnetic Resonance Imaging (MRI) scans. Therefore, we propose unsupervised Medical Anomaly Detection Generative Adversarial Network (MADGAN), a novel two-step method using GAN-based multiple adjacent brain MRI slice reconstruction to detect brain anomalies at different stages on multi-sequence structural MRI: (Reconstruction) Wasserstein loss with Gradient Penalty + 100 L1 loss-trained on 3 healthy brain axial MRI slices to reconstruct the next 3 ones-reconstructs unseen healthy/abnormal scans; (Diagnosis) Average L2 loss per scan discriminates them, comparing the ground truth/reconstructed slices. For training, we use two different datasets composed of 1,133 healthy T1-weighted (T1) and 135 healthy contrast-enhanced T1 (T1c) brain MRI scans for detecting AD and brain metastases/various diseases, respectively. Our Self-Attention MADGAN can detect AD on T1 scans at a very early stage, Mild Cognitive Impairment (MCI), with Area Under the Curve (AUC) 0.727, and AD at a late stage with AUC 0.894, while detecting brain metastases on T1c scans with AUC 0.921.
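The diagnosis step described above reduces to an average per-scan reconstruction error; a minimal sketch follows, with random arrays standing in for real and GAN-reconstructed MRI slices.

```python
import numpy as np

def anomaly_score(real_slices, reconstructed_slices):
    """Average per-scan L2 reconstruction loss: higher values suggest anomalies
    that a generator trained only on healthy anatomy fails to reconstruct."""
    real = np.asarray(real_slices, dtype=float)
    recon = np.asarray(reconstructed_slices, dtype=float)
    return np.mean((real - recon) ** 2)

# stand-ins: a healthy-looking scan reconstructs well, an abnormal one does not
rng = np.random.default_rng(4)
healthy = rng.normal(size=(30, 128, 128))
abnormal = healthy + rng.normal(scale=0.5, size=healthy.shape)
reconstruction = healthy + rng.normal(scale=0.05, size=healthy.shape)

print("healthy  score:", anomaly_score(healthy, reconstruction))
print("abnormal score:", anomaly_score(abnormal, reconstruction))
```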
- Published
- 2021
18. A Survey on Nature-Inspired Medical Image Analysis: A Step Further in Biomedical Data Integration
- Author
-
Leonardo Rundo, Carmelo Militello, Salvatore Vitabile, Giorgio Ivan Russo, Evis Sala, and Maria Carla Gilardi
- Subjects
Nature-inspired computing, Artificial intelligence, Biomedical data integration, Medical image analysis
- Abstract
Natural phenomena and mechanisms have always intrigued humans, inspiring the design of effective solutions for real-world problems. Indeed, fascinating processes occur in nature, giving rise to an ever-increasing scientific interest. In everyday life, the amount of heterogeneous biomedical data is increasing more and more thanks to advances in image acquisition modalities and high-throughput technologies. The automated analysis of these large-scale datasets creates new compelling challenges for data-driven and model-based computational methods. The application of intelligent algorithms, which mimic natural phenomena, is emerging as an effective paradigm for tackling complex problems, by considering the unique challenges and opportunities pertaining to biomedical images. Therefore, the principal contribution of computer science research in the life sciences concerns the proper combination of diverse and heterogeneous datasets, i.e., medical imaging modalities (considering also radiomics approaches), Electronic Health Record engines, multi-omics studies, and real-time monitoring, to provide comprehensive clinical knowledge. In this paper, the state-of-the-art of nature-inspired medical image analysis methods is surveyed, aiming at establishing a common platform for beneficial exchanges among computer scientists and clinicians. In particular, this review focuses on the main nature-inspired computational techniques applied to medical image analysis tasks, namely: physical processes, bio-inspired mathematical models, Evolutionary Computation, Swarm Intelligence, and neural computation. These frameworks, tightly coupled with Clinical Decision Support Systems, can be suitably applied to every phase of the clinical workflow. We show that the proper combination of quantitative imaging and healthcare informatics enables an in-depth understanding of molecular processes that can guide towards personalised patient care.
- Published
- 2020
- Full Text
- View/download PDF
19. Enhancing classification performance of convolutional neural networks for prostate cancer detection on magnetic resonance images: A study with the semantic learning machine
- Author
-
Ivo Gonçalves, Paulo Lapa, Leonardo Rundo, and Mauro Castelli
- Subjects
Prostate cancer detection, Multiparametric Magnetic Resonance Imaging, Semantic Learning Machine, Neuroevolution, Convolutional Neural Networks, Deep learning, Classification, Backpropagation, Multispectral image
- Abstract
Prostate cancer (PCa) is the most common oncological disease in Western men. Even though a significant effort has been carried out by the scientific community, accurate and reliable automated PCa detection methods are still a compelling issue. In this clinical scenario, high-resolution multiparametric Magnetic Resonance Imaging (MRI) is becoming the most used modality, also enabling quantitative studies. Recently, deep learning techniques have achieved outstanding results in prostate MRI analysis tasks, in particular with regard to image classification. This paper studies the feasibility of using the Semantic Learning Machine (SLM) neuroevolution algorithm to replace the fully-connected architecture commonly used in the last layers of Convolutional Neural Networks (CNNs). The experimental phase considered the PROSTATEx dataset composed of multispectral MRI sequences. The achieved results show that, on the same non-contrast-enhanced MRI series, SLM outperforms with statistical significance a state-of-the-art CNN trained with backpropagation. The SLM performance is achieved without pre-training the underlying CNN with backpropagation. Furthermore, on average the SLM training time is approximately 14 times faster than the backpropagation-based approach.
- Published
- 2020
- Full Text
- View/download PDF
20. 105 Machine learning and carotid artery CT radiomics identify significant differences between culprit and non-culprit lesions in patients with stroke and transient ischaemic attack
- Author
-
Jason M. Tarkin, Carola-Bibiane Schönlieb, Nick Evans, Leonardo Rundo, Mohammed M. Chowdhury, Elizabeth P.V. Le, Evis Sala, Jonathan R. Weir-McCall, James H.F. Rudd, Christopher Wall, Elizabeth A. Warburton, Yuan Huang, Holly Pavey, and Fulvio Zaccagna
- Subjects
Carotid arteries, Carotid artery disease, Stroke, Angiography, Machine learning, Mann–Whitney U test, Asymptomatic, Culprit
- Abstract
Introduction: Carotid atherosclerosis is the main cause of ischaemic stroke. Texture analysis is a radiomic approach used to quantify image heterogeneity, which can predict tumour aggression in oncology. We investigated whether this method could be applied to carotid artery disease to differentiate symptomatic from asymptomatic patients and culprit from non-culprit plaques, and then whether machine learning (ML) could correctly classify plaques based on these features. Methods: CT angiography (CTA) images from symptomatic patients with carotid artery-related cerebrovascular accidents (CVAs) and from asymptomatic (ASX) patients were studied. Regions of interest (ROIs) were drawn on 14 consecutive carotid artery CTA slices with 3 mm slice thickness. PyRadiomics was used for isotropic image (1x1x1) resampling and normalisation prior to texture feature extraction from 6 different classes (Table 1). Asymptomatic carotids were compared to culprit carotids (CC) and non-culprit (NC) carotids using the Mann-Whitney U test or Wilcoxon signed-rank tests as appropriate, with a p-value Results: The dataset comprised 82 carotid arteries from 41 symptomatic patients (41 culprit; 41 non-culprit) and 50 carotid arteries from 25 asymptomatic patients. CC and NC carotids showed significant differences in both first- and second-order features (IH Median: CC 618 (61); NC 646 (97), p Conclusions: Textural analysis combined with machine learning on carotid CT scans reveals highly significant differences between symptomatic and asymptomatic patients, and between culprit and non-culprit carotid arteries within symptomatic patients. This approach could help identify patients at high risk of stroke for aggressive medical therapy and surveillance. Conflict of Interest: None
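The texture-analysis pipeline summarised above (second-order GLCM features compared between groups with the Mann-Whitney U test) can be sketched as follows; the ROIs are random placeholders, and the feature set and GLCM settings are illustrative, not the study's PyRadiomics configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image < 0.19: greycomatrix/greycoprops
from scipy.stats import mannwhitneyu

def glcm_features(roi_8bit):
    """Second-order texture features from a grey-level co-occurrence matrix."""
    glcm = graycomatrix(roi_8bit, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop)[0, 0]
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# placeholder 8-bit CTA plaque ROIs for culprit and non-culprit carotids
rng = np.random.default_rng(5)
culprit = [glcm_features(rng.integers(0, 256, (20, 20), dtype=np.uint8)) for _ in range(20)]
non_culprit = [glcm_features(rng.integers(0, 256, (20, 20), dtype=np.uint8)) for _ in range(20)]

for prop in culprit[0]:
    stat, p = mannwhitneyu([f[prop] for f in culprit], [f[prop] for f in non_culprit])
    print(f"{prop}: U = {stat:.1f}, p = {p:.3f}")
```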
- Published
- 2020
- Full Text
- View/download PDF
21. Recent advances of HCI in decision-making tasks for optimized clinical workflows and precision medicine
- Author
-
Leonardo Rundo, Roberto Pirrone, Salvatore Vitabile, Evis Sala, and Orazio Gambino
- Subjects
Human-Computer Interaction, Clinical workflows, Decision-making tasks, Physician-centered design, Precision medicine, Clinical decision support systems, Usability, Cognition, Visualization, Domain knowledge
- Abstract
The ever-increasing amount of biomedical data is enabling new large-scale studies, even though ad hoc computational solutions are required. The most recent Machine Learning (ML) and Artificial Intelligence (AI) techniques have been achieving outstanding performance and an important impact in clinical research, aiming at precision medicine, as well as improving healthcare workflows. However, the inherent heterogeneity and uncertainty in the healthcare information sources pose new compelling challenges for clinicians in their decision-making tasks. Only the proper combination of AI and human intelligence capabilities, by explicitly taking into account effective and safe interaction paradigms, can permit the delivery of care that outperforms what either can do separately. Therefore, Human-Computer Interaction (HCI) plays a crucial role in the design of software oriented to decision-making in medicine. In this work, we systematically review and discuss several research fields strictly linked to HCI and clinical decision-making, by subdividing the articles into six themes, namely: Interfaces, Visualization, Electronic Health Records, Devices, Usability, and Clinical Decision Support Systems. However, these articles typically present overlaps among the themes, revealing that HCI inter-connects multiple topics. With the goal of focusing on HCI and design aspects, the articles under consideration were grouped into four clusters. The advances in AI can effectively support the physicians' cognitive processes, which certainly play a central role in decision-making tasks because the human mental behavior cannot be completely emulated and captured; the human mind might solve a complex problem even without a statistically significant amount of data by relying upon domain knowledge. For this reason, technology must focus on interactive solutions for supporting the physicians effectively in their daily activities, by exploiting their unique knowledge and evidence-based reasoning, as well as improving the various aspects highlighted in this review.
- Published
- 2020
22. MF2C3: Multi-feature fuzzy clustering to enhance cell colony detection in automated clonogenic assay evaluation
- Author
-
Vincenzo Conti, Carmelo Militello, Marco Calvaruso, Francesco Paolo Cammarata, Giorgio Ivan Russo, Leonardo Rundo, Luigi Minafra, Rundo, Leonardo [0000-0003-3341-5483], Apollo - University of Cambridge Repository, Militello, Carmelo [0000-0003-2249-9538], Minafra, Luigi [0000-0003-3112-4519], Cammarata, Francesco Paolo [0000-0002-0554-6649], Calvaruso, Marco [0000-0003-2387-8273], Conti, Vincenzo [0000-0002-8718-111X], and Russo, Giorgio [0000-0003-1493-1087]
- Subjects
Fuzzy clustering ,Physics and Astronomy (miscellaneous) ,Computer science ,General Mathematics ,02 engineering and technology ,clonogenic assay ,automatic cell colony detection ,spatial fuzzy c-means clustering ,multi-feature clustering ,entropy ,standard deviation ,Fuzzy logic ,Standard deviation ,03 medical and health sciences ,0202 electrical engineering, electronic engineering, information engineering ,Computer Science (miscellaneous) ,Entropy (information theory) ,Cluster analysis ,Clonogenic assay ,030304 developmental biology ,0303 health sciences ,Pixel ,business.industry ,lcsh:Mathematics ,Pattern recognition ,lcsh:QA1-939 ,Thresholding ,Automatic cell colony detection ,Entropy ,Multi-feature clustering ,Spatial fuzzy c-means clustering ,Chemistry (miscellaneous) ,020201 artificial intelligence & image processing ,Artificial intelligence ,business - Abstract
© 2020 by the authors. A clonogenic assay is a biological technique for calculating the Surviving Fraction (SF) that quantifies the anti-proliferative effect of treatments on cell cultures: this evaluation is often performed via manual counting of cell colony-forming units. Unfortunately, this procedure is error-prone and strongly affected by operator dependence. Besides, conventional assessment does not deal with the colony size, which is generally correlated with the delivered radiation dose or administered cytotoxic agent. Relying upon the direct proportional relationship between the Area Covered by Colony (ACC) and the colony count and size, along with the growth rate, we propose MF2C3, a novel computational method leveraging spatial Fuzzy C-Means clustering on multiple local features (i.e., entropy and standard deviation extracted from the input color images acquired by a general-purpose flat-bed scanner) for ACC-based SF quantification, by considering only the covering percentage. To evaluate the accuracy of the proposed fully automatic approach, we compared the SFs obtained by MF2C3 against the conventional counting procedure on four different cell lines. The achieved results revealed a high correlation with the ground-truth measurements based on colony counting, by outperforming our previously validated method using local thresholding on L*u*v* color well images. In conclusion, the proposed multi-feature approach, which inherently leverages the concept of symmetry in the pixel local distributions, might be reliably used in biological studies.
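The core idea, clustering pixels on local entropy and standard deviation maps, can be sketched with standard Python tools; the snippet below uses plain fuzzy c-means from scikit-fuzzy rather than the spatial FCM variant proposed in the paper, and the input path, window sizes, and colony-class heuristic are placeholders.

```python
# Sketch: cluster pixels of an RGB well scan using two local features
# (entropy and standard deviation) with standard fuzzy c-means.
import numpy as np
import skfuzzy as fuzz
from scipy.ndimage import generic_filter
from skimage import io, img_as_ubyte
from skimage.color import rgb2gray
from skimage.filters.rank import entropy
from skimage.morphology import disk

img = img_as_ubyte(rgb2gray(io.imread("well_scan.png")))  # placeholder RGB scan

feat_entropy = entropy(img, disk(5))                        # local entropy map
feat_std = generic_filter(img.astype(float), np.std, size=11)  # local std map

# Stack features as (n_features, n_samples), as expected by skfuzzy.cmeans.
data = np.vstack([feat_entropy.ravel(), feat_std.ravel()])

cntr, u, _, _, _, _, fpc = fuzz.cluster.cmeans(
    data, c=2, m=2.0, error=1e-4, maxiter=200, seed=42)
labels = np.argmax(u, axis=0).reshape(img.shape)

# Area Covered by Colony (ACC): fraction of pixels in the colony class,
# here heuristically taken as the cluster with the higher entropy centre.
colony_class = int(np.argmax(cntr[:, 0]))
acc = float(np.mean(labels == colony_class))
print(f"ACC (covering percentage): {acc:.3f}")
```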
- Published
- 2020
- Full Text
- View/download PDF
23. Computational Intelligence for Life Sciences
- Author
-
Stefano Ruberto, Simone Spolaor, Leonardo Rundo, Paolo Cazzaniga, Andrea Tangherloni, Leonardo Vanneschi, Daniela Besozzi, Mauro Castelli, Luca Manzoni, Marco S. Nobile, Rundo, Leonardo [0000-0003-3341-5483], Apollo - University of Cambridge Repository, Besozzi, D, Manzoni, L, Nobile, M, Spolaor, S, Castelli, M, Vanneschi, L, Cazzaniga, P, Ruberto, S, Rundo, L, Tangherloni, A, Besozzi, Daniela, Manzoni, Luca, Nobile, Marco S., Spolaor, Simone, Castelli, Mauro, Vanneschi, Leonardo, Cazzaniga, Paolo, Ruberto, Stefano, Rundo, Leonardo, Tangherloni, Andrea, Information Systems IE&IS, NOVA Information Management School (NOVA IMS), NOVA IMS Research and Development Center (MagIC), and Information Management Research Center (MagIC) - NOVA Information Management School
- Subjects
Computational Intelligence ,Evolutionary Computation ,Genetic Algorithm ,Genetic Programming ,Haplotype Assembly ,Parameter Estimation ,Particle Swarm Optimization ,Protein Folding ,Swarm Intelligence ,Optimization problem ,Computer science ,Computational intelligence ,Genetic programming ,0102 computer and information sciences ,01 natural sciences ,Swarm intelligence ,Evolutionary computation ,Theoretical Computer Science ,SDG 3 - Good Health and Well-being ,Genetic algorithm ,Algebra and Number Theory ,Settore INF/01 - Informatica ,business.industry ,Particle swarm optimization ,Computational Theory and Mathematics ,010201 computation theory & mathematics ,Artificial intelligence ,Computational problem ,business ,Information Systems - Abstract
Besozzi, D., Manzoni, L., Nobile, M. S., Spolaor, S., Castelli, M., Vanneschi, L., ... Tangherloni, A. (2020). Computational Intelligence for Life Sciences. Fundamenta Informaticae, 171(1-4), 57-80. https://doi.org/10.3233/FI-2020-1872 Computational Intelligence (CI) is a computer science discipline encompassing the theory, design, development and application of biologically and linguistically derived computational paradigms. Traditionally, the main elements of CI are Evolutionary Computation, Swarm Intelligence, Fuzzy Logic, and Neural Networks. CI aims at proposing new algorithms able to solve complex computational problems by taking inspiration from natural phenomena. In an intriguing turn of events, these nature-inspired methods have been widely adopted to investigate a plethora of problems related to nature itself. In this paper we present a variety of CI methods applied to three problems in life sciences, highlighting their effectiveness: we describe how protein folding can be addressed by exploiting Genetic Programming, the inference of haplotypes can be tackled using Genetic Algorithms, and the estimation of biochemical kinetic parameters can be performed by means of Swarm Intelligence. We show that CI methods can generate very high quality solutions, providing a sound methodology to solve complex optimization problems in life sciences.
- Published
- 2020
24. GAN-based multiple adjacent brain MRI slice reconstruction for unsupervised alzheimer’s disease diagnosis
- Author
-
Kazuki Umemoto, Kohei Murao, Zoltán Ádám Milacski, Shin'ichi Satoh, Changhee Han, Leonardo Rundo, Hideki Nakayama, and Evis Sala
- Subjects
FOS: Computer and information sciences ,Aging ,Generative adversarial networks ,Computer science ,Feature vector ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,02 engineering and technology ,Disease ,Neurodegenerative ,4603 Computer Vision and Multimedia Computation ,Alzheimer's Disease ,03 medical and health sciences ,0302 clinical medicine ,46 Information and Computing Sciences ,4611 Machine Learning ,Acquired Cognitive Impairment ,0202 electrical engineering, electronic engineering, information engineering ,Brain mri ,FOS: Electrical engineering, electronic engineering, information engineering ,Alzheimer’s disease diagnosis ,Ground truth ,business.industry ,FOS: Clinical medicine ,Image (category theory) ,Image and Video Processing (eess.IV) ,Neurosciences ,Alzheimer's Disease including Alzheimer's Disease Related Dementias (AD/ADRD) ,Brain MRI reconstruction ,Unsupervised anomaly detection ,Pattern recognition ,Electrical Engineering and Systems Science - Image and Video Processing ,Brain Disorders ,4.1 Discovery and preclinical testing of markers and technologies ,Neurological ,Outlier ,Biomedical Imaging ,Unsupervised learning ,Dementia ,020201 artificial intelligence & image processing ,Anomaly detection ,Artificial intelligence ,4 Detection, screening and diagnosis ,business ,030217 neurology & neurosurgery - Abstract
Unsupervised learning can discover various unseen diseases, relying on large-scale unannotated medical images of healthy subjects. Towards this, unsupervised methods reconstruct a single medical image to detect outliers either in the learned feature space or from high reconstruction loss. However, without considering continuity between multiple adjacent slices, they cannot directly discriminate diseases composed of the accumulation of subtle anatomical anomalies, such as Alzheimer's Disease (AD). Moreover, no study has shown how unsupervised anomaly detection is associated with disease stages. Therefore, we propose a two-step method using Generative Adversarial Network-based multiple adjacent brain MRI slice reconstruction to detect AD at various stages: (Reconstruction) Wasserstein loss with Gradient Penalty + L1 loss---trained on 3 healthy slices to reconstruct the next 3 ones---reconstructs unseen healthy/AD cases; (Diagnosis) Average/Maximum loss (e.g., L2 loss) per scan discriminates them, comparing the reconstructed/ground truth images. The results show that we can reliably detect AD at a very early stage with Area Under the Curve (AUC) 0.780 while also detecting AD at a late stage much more accurately with AUC 0.917; since our method is fully unsupervised, it should also discover and alert on any anomalies, including rare diseases. Comment: 10 pages, 4 figures, Accepted to Lecture Notes in Bioinformatics (LNBI) as a volume in the Springer series
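The diagnosis step alone can be sketched as follows, assuming a generator has already been trained to reconstruct the next three slices from three input slices; `reconstruct` is a hypothetical stand-in for that trained model, not part of the paper's code.

```python
# Sketch of the diagnosis step only: score a scan by the average and the
# maximum per-window L2 reconstruction error.
import numpy as np

def scan_anomaly_scores(slices, reconstruct):
    """slices: array of shape (n_slices, H, W), intensity-normalised."""
    l2_losses = []
    for i in range(0, len(slices) - 5):       # sliding window: 3 inputs -> 3 targets
        inputs = slices[i:i + 3]
        targets = slices[i + 3:i + 6]
        preds = reconstruct(inputs)           # hypothetical trained generator
        l2_losses.append(np.mean((preds - targets) ** 2))
    l2_losses = np.array(l2_losses)
    return l2_losses.mean(), l2_losses.max()

# A scan whose average/maximum loss exceeds a threshold chosen on healthy
# validation scans would be flagged as a potential AD case.
```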
- Published
- 2020
25. A Hybrid End-to-End Approach Integrating Conditional Random Fields into CNNs for Prostate Cancer Detection on MRI
- Author
-
Leonardo Rundo, Ivo Gonçalves, Evis Sala, Mauro Castelli, Paulo Lapa, NOVA Information Management School (NOVA IMS), Information Management Research Center (MagIC) - NOVA Information Management School, NOVA IMS Research and Development Center (MagIC), Castelli, Mauro [0000-0002-8793-1451], Apollo - University of Cambridge Repository, Castelli, M [0000-0002-8793-1451], and Gonçalves, I [0000-0002-5336-7768]
- Subjects
Conditional random field ,Computer science ,02 engineering and technology ,Convolutional neural network ,lcsh:Technology ,030218 nuclear medicine & medical imaging ,lcsh:Chemistry ,0302 clinical medicine ,convolutional neural networks ,0202 electrical engineering, electronic engineering, information engineering ,magnetic resonance imaging ,General Materials Science ,recurrent neural networks ,Graphical model ,Structured prediction ,CRFS ,Instrumentation ,lcsh:QH301-705.5 ,Engineering(all) ,Fluid Flow and Transfer Processes ,General Engineering ,lcsh:QC1-999 ,Computer Science Applications ,020201 artificial intelligence & image processing ,Convolutional neural networks ,udc:004:78 ,Feature extraction ,prostate cancer detection ,Conditional random fields ,03 medical and health sciences ,Magnetic resonance imaging ,conditional random fields ,Materials Science(all) ,SDG 3 - Good Health and Well-being ,Prostate cancer detection ,Recurrent neural networks ,business.industry ,lcsh:T ,Process Chemistry and Technology ,Probabilistic logic ,Pattern recognition ,Recurrent neural network ,lcsh:Biology (General) ,lcsh:QD1-999 ,lcsh:TA1-2040 ,Artificial intelligence ,business ,lcsh:Engineering (General). Civil engineering (General) ,lcsh:Physics - Abstract
Lapa, P., Castelli, M., Gonçalves, I., Sala, E., & Rundo, L. (2020). A hybrid end-to-end approach integrating conditional random fields into CNNs for prostate cancer detection on MRI. Applied Sciences (Switzerland), 10(1), [338]. [Special Issue: Deep Learning and Neuro-Evolution Methods in Biomedicine and Bioinformatics]. doi: https://doi.org/10.3390/app10010338 Prostate Cancer (PCa) is the most common oncological disease in Western men. Even though a growing effort has been carried out by the scientific community in recent years, accurate and reliable automated PCa detection methods on multiparametric Magnetic Resonance Imaging (mpMRI) are still a compelling issue. In this work, a Deep Neural Network architecture is developed for the task of classifying clinically significant PCa on non-contrast-enhanced MR images. In particular, we propose the use of Conditional Random Fields as a Recurrent Neural Network (CRF-RNN) to enhance the classification performance of XmasNet, a Convolutional Neural Network (CNN) architecture specifically tailored to the PROSTATEx17 Challenge. The devised approach builds a hybrid end-to-end trainable network, CRF-XmasNet, composed of an initial CNN component performing feature extraction and a CRF-based probabilistic graphical model component for structured prediction, without the need for two separate training procedures. Experimental results show the suitability of this method in terms of classification accuracy and training time, even though the high variability of the observed results must be reduced before transferring the resulting architecture to a clinical environment. Interestingly, the use of CRFs as a separate postprocessing method achieves significantly lower performance with respect to the proposed hybrid end-to-end approach. The proposed hybrid end-to-end CRF-RNN approach yields excellent peak performance for all the CNN architectures taken into account, but it shows a high variability, thus requiring future investigation on the integration of CRFs into a CNN.
- Published
- 2020
26. CNN-Based Prostate Zonal Segmentation on T2-Weighted MR Images: A Cross-Dataset Study
- Author
-
Yudai Nagano, Claudio Ferretti, Carmelo Militello, Leonardo Rundo, Ryuichiro Hataya, Salvatore Vitabile, Giancarlo Mauri, Marco S. Nobile, Changhee Han, Maria Carla Gilardi, Jin Zhang, Andrea Tangherloni, Hideki Nakayama, Rundo L., Han C., Zhang J., Hataya R., Nagano Y., Militello C., Ferretti C., Nobile M.S., Tangherloni A., Gilardi M.C., Vitabile S., Nakayama H., Mauri G., Esposito, A., Faundez-Zanuy, M., Morabito, F.C., Pasero, E., Rundo, L, Han, C, Zhang, J, Hataya, R, Nagano, Y, Militello, C, Ferretti, C, Nobile, M, Tangherloni, A, Gilardi, M, Vitabile, S, Nakayama, H, and Mauri, G
- Subjects
Urologic Diseases ,Computer science ,Context (language use) ,32 Biomedical and Clinical Sciences ,Convolutional neural network ,Deep convolutional neural networks, Prostate zonal segmentation, Cross-dataset generalization ,Prostate cancer ,46 Information and Computing Sciences ,Prostate ,Deep convolutional neural networks ,medicine ,Anatomical MRI ,Segmentation ,Prostate zonal segmentation ,Cross-dataset generalization ,3202 Clinical Sciences ,Cancer ,Settore ING-INF/05 - Sistemi Di Elaborazione Delle Informazioni ,Settore INF/01 - Informatica ,medicine.diagnostic_test ,business.industry ,Deep learning ,INF/01 - INFORMATICA ,Magnetic resonance imaging ,Pattern recognition ,medicine.disease ,3211 Oncology and Carcinogenesis ,medicine.anatomical_structure ,Biomedical Imaging ,Artificial intelligence ,Deep convolutional neural network ,business ,T2 weighted - Abstract
Prostate cancer is the most common cancer among US men. However, prostate imaging is still challenging despite the advances in multi-parametric magnetic resonance imaging (MRI), which provides both morphologic and functional information pertaining to the pathological regions. Along with whole prostate gland segmentation, distinguishing between the central gland (CG) and peripheral zone (PZ) can guide toward differential diagnosis, since the frequency and severity of tumors differ in these regions; however, their boundary is often weak and fuzzy. This work presents a preliminary study on deep learning to automatically delineate the CG and PZ, aiming at evaluating the generalization ability of convolutional neural networks (CNNs) on two multi-centric MRI prostate datasets. In particular, we compared three CNN-based architectures: SegNet, U-Net, and pix2pix. In such a context, the segmentation performances achieved with/without pre-training were compared in 4-fold cross-validation. In general, U-Net outperforms the other methods, especially when training and testing are performed on multiple datasets.
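The evaluation scheme (k-fold cross-validation with single-dataset and merged-dataset training, tested on every dataset) can be sketched with scikit-learn bookkeeping; `train_segmenter` and `dice_score` below are hypothetical placeholders for the actual CNN training and evaluation code.

```python
# Sketch of a cross-dataset 4-fold cross-validation scheme.
import numpy as np
from sklearn.model_selection import KFold

def cross_dataset_eval(datasets, train_segmenter, dice_score, n_splits=4):
    """datasets: dict name -> (images, masks) as NumPy arrays."""
    results = {}
    train_settings = list(datasets) + ["all"]        # each dataset, plus the union
    for setting in train_settings:
        if setting == "all":
            X = np.concatenate([d[0] for d in datasets.values()])
            y = np.concatenate([d[1] for d in datasets.values()])
        else:
            X, y = datasets[setting]
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
        for fold, (tr, _) in enumerate(kf.split(X)):
            model = train_segmenter(X[tr], y[tr])    # placeholder training call
            for test_name, (Xt, yt) in datasets.items():
                score = np.mean([dice_score(model(x), m) for x, m in zip(Xt, yt)])
                results[(setting, fold, test_name)] = score
    return results
```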
- Published
- 2020
27. ACDC: Automated Cell Detection and Counting for time-lapse fluorescence microscopy
- Author
-
Daniela Besozzi, Leonardo Rundo, Marco S. Nobile, Carlos F. Lopez, Darren R. Tyson, Vito Quaranta, Carmelo Militello, Riccardo Betta, Giancarlo Mauri, Andrea Tangherloni, Alexander L. R. Lubbock, Simone Spolaor, Paolo Cazzaniga, Rundo, L, Tangherloni, A, Tyson, D, Betta, R, Militello, C, Spolaor, S, Nobile, M, Besozzi, D, Lubbock, A, Quaranta, V, Mauri, G, Lopez, C, Cazzaniga, P, Information Systems IE&IS, Rundo, Leonardo [0000-0003-3341-5483], Tangherloni, Andrea [0000-0002-5856-4453], Tyson, Darren R [0000-0002-3272-4308], Militello, Carmelo [0000-0003-3177-9398], Spolaor, Simone [0000-0002-3383-367X], Nobile, Marco S [0000-0002-7692-7203], Besozzi, Daniela [0000-0001-5532-3059], Lubbock, Alexander L R [0000-0002-6950-8908], Quaranta, Vito [0000-0001-7491-8672], Mauri, Giancarlo [0000-0003-3520-4022], Lopez, Carlos F [0000-0003-3668-7468], Cazzaniga, Paolo [0000-0001-7780-0434], Apollo - University of Cambridge Repository, Tyson, Darren R. [0000-0002-3272-4308], Militello, Carmelo [0000-0003-2249-9538], Nobile, Marco S. [0000-0002-7692-7203], Lubbock, Alexander L. R. [0000-0002-6950-8908], and Lopez, Carlos F. [0000-0003-3668-7468]
- Subjects
Computer science ,02 engineering and technology ,lcsh:Technology ,fluorescence microscopy ,Fluorescence imaging ,lcsh:Chemistry ,0302 clinical medicine ,Computer cluster ,Microscopy ,0202 electrical engineering, electronic engineering, information engineering ,Fluorescence microscope ,General Materials Science ,Instrumentation ,lcsh:QH301-705.5 ,Cell counting ,Fluid Flow and Transfer Processes ,0303 health sciences ,Settore INF/01 - Informatica ,General Engineering ,Time-lapse microscopy ,INF/01 - INFORMATICA ,Fluorescence ,lcsh:QC1-999 ,Computer Science Applications ,Nuclei segmentation ,020201 artificial intelligence & image processing ,Bilateral filter ,Similarity (geometry) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Automated Cell Detection and Counting ,Article ,ING-INF/05 - SISTEMI DI ELABORAZIONE DELLE INFORMAZIONI ,03 medical and health sciences ,medicine ,Leverage (statistics) ,Bioimage informatic ,Bioimage informatics ,030304 developmental biology ,lcsh:T ,business.industry ,Process Chemistry and Technology ,ACDC ,Pattern recognition ,medicine.disease ,Visualization ,lcsh:Biology (General) ,lcsh:QD1-999 ,lcsh:TA1-2040 ,Artificial intelligence ,lcsh:Engineering (General). Civil engineering (General) ,business ,Bioimage Informatics ,030217 neurology & neurosurgery ,lcsh:Physics - Abstract
Advances in microscopy imaging technologies have enabled the visualization of live-cell dynamic processes using time-lapse microscopy imaging. However, modern methods exhibit several limitations related to the training phases and to time constraints, hindering their application in the laboratory practice. In this work, we present a novel method, named Automated Cell Detection and Counting (ACDC), designed for activity detection of fluorescent labeled cell nuclei in time-lapse microscopy. ACDC overcomes the limitations of the literature methods, by first applying bilateral filtering on the original image to smooth the input cell images while preserving edge sharpness, and then by exploiting the watershed transform and morphological filtering. Moreover, ACDC represents a feasible solution for the laboratory practice, as it can leverage multi-core architectures in computer clusters to efficiently handle large-scale imaging datasets. Indeed, our Parent-Workers implementation of ACDC allows us to obtain up to a 3.7× speed-up compared to the sequential counterpart. ACDC was tested on two distinct cell imaging datasets to assess its accuracy and effectiveness on images with different characteristics. We achieved an accurate cell-count and nuclei segmentation without relying on large-scale annotated datasets, a result confirmed by the average Dice Similarity Coefficients of 76.84 and 88.64 and the Pearson coefficients of 0.99 and 0.96, calculated against the manual cell counting, on the two tested datasets.
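A pipeline in the same spirit (bilateral filtering, thresholding, distance transform, and watershed) can be sketched with scikit-image and SciPy; this is not the authors' implementation, and the input path and parameters are placeholders.

```python
# Sketch of a bilateral-filter + watershed nuclei counting pipeline.
import numpy as np
from scipy import ndimage as ndi
from skimage import io
from skimage.restoration import denoise_bilateral
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

img = io.imread("fluorescence_frame.tif").astype(float)   # placeholder 2D frame
img /= img.max()

smoothed = denoise_bilateral(img, sigma_color=0.05, sigma_spatial=3)
binary = smoothed > threshold_otsu(smoothed)
binary = ndi.binary_opening(binary, iterations=2)          # morphological filtering

distance = ndi.distance_transform_edt(binary)
peaks = peak_local_max(distance, min_distance=5, labels=binary)
markers = np.zeros_like(distance, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)     # one marker per nucleus

nuclei = watershed(-distance, markers, mask=binary)
print("Detected nuclei:", nuclei.max())                    # labels 1..N
```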
- Published
- 2020
- Full Text
- View/download PDF
28. Infinite Brain MR Images: PGGAN-Based Data Augmentation for Tumor Detection
- Author
-
Giancarlo Mauri, Leonardo Rundo, Changhee Han, Ryosuke Araki, Hideaki Hayashi, Yujiro Furukawa, Hideki Nakayama, Esposito, A., Faundez-Zanuy, M., Morabito, F.C., Pasero, E., Han, C, Rundo, L, Araki, R, Furukawa, Y, Mauri, G, Nakayama, H, and Hayashi, H
- Subjects
Tumor detection ,Generative adversarial networks ,medicine.diagnostic_test ,Data augmentation ,business.industry ,Computer science ,INF/01 - INFORMATICA ,Magnetic resonance imaging ,Pattern recognition ,Training methods ,Real image ,Convolutional neural network ,ING-INF/05 - SISTEMI DI ELABORAZIONE DELLE INFORMAZIONI ,Brain MRI ,Medical imaging ,medicine ,Artificial intelligence ,Synthetic medical image generation ,Performance improvement ,Mr images ,business ,Generative adversarial network - Abstract
Due to the lack of available annotated medical images, accurate computer-assisted diagnosis requires intensive data augmentation (DA) techniques, such as geometric/intensity transformations of original images; however, those transformed images intrinsically have a similar distribution to the original ones, leading to limited performance improvement. To fill the data lack in the real image distribution, we synthesize brain contrast-enhanced magnetic resonance (MR) images—realistic but completely different from the original ones—using generative adversarial networks (GANs). This study exploits progressive growing of GANs (PGGANs), a multistage generative training method, to generate original-sized 256 × 256 MR images for convolutional neural network-based brain tumor detection, which is challenging via conventional GANs; difficulties arise due to unstable GAN training with high resolution and a variety of tumors in size, location, shape, and contrast. Our preliminary results show that this novel PGGAN-based DA method can achieve a promising performance improvement, when combined with classical DA, in tumor detection and also in other medical imaging tasks.
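In practice, the augmentation step amounts to appending the GAN-synthesized images to the real training set and applying classic DA on top; a minimal PyTorch sketch, assuming the real and synthetic slices are stored as image folders with per-class subdirectories (paths are placeholders):

```python
# Sketch: combine classic DA (flips/rotations) with GAN-synthesized slices by
# concatenating a real and a synthetic dataset.
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

classic_da = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
])

real_ds = datasets.ImageFolder("data/real_mri", transform=classic_da)
synth_ds = datasets.ImageFolder("data/pggan_synthetic", transform=classic_da)

train_loader = DataLoader(ConcatDataset([real_ds, synth_ds]),
                          batch_size=32, shuffle=True)
# The tumor-detection CNN is then trained on `train_loader` as usual.
```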
- Published
- 2020
29. A CUDA-powered method for the feature extraction and unsupervised analysis of medical images
- Author
-
Paolo Cazzaniga, Matteo Mistri, Simone Galimberti, Giancarlo Mauri, Leonardo Rundo, Andrea Tangherloni, Evis Sala, Marco S. Nobile, Ramona Woitek, Information Systems IE&IS, Rundo, L [0000-0003-3341-5483], Tangherloni, A [0000-0002-5856-4453], Cazzaniga, P [0000-0001-7780-0434], Woitek, R [0000-0002-9146-9159], Sala, E [0000-0002-5518-9360], Mauri, G [0000-0003-3520-4022], Nobile, MS [0000-0002-7692-7203], Apollo - University of Cambridge Repository, Rundo, L, Tangherloni, A, Cazzaniga, P, Mistri, M, Galimberti, S, Woitek, R, Sala, E, Mauri, G, and Nobile, M
- Subjects
Self-organizing map ,Haralick features ,Self-organizing maps ,GPU computing ,Medical imaging ,Radiomics ,Unsupervised learning ,Computer science ,Feature extraction ,Graphics processing unit ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Context (language use) ,02 engineering and technology ,SDG 3 – Goede gezondheid en welzijn ,ING-INF/05 - SISTEMI DI ELABORAZIONE DELLE INFORMAZIONI ,030218 nuclear medicine & medical imaging ,Theoretical Computer Science ,03 medical and health sciences ,CUDA ,0302 clinical medicine ,Image texture ,SDG 3 - Good Health and Well-being ,Color depth ,0202 electrical engineering, electronic engineering, information engineering ,Pixel ,Settore INF/01 - Informatica ,business.industry ,INF/01 - INFORMATICA ,Pattern recognition ,Hardware and Architecture ,020201 artificial intelligence & image processing ,Artificial intelligence ,Radiomic ,General-purpose computing on graphics processing units ,business ,Haralick feature ,Software ,Information Systems - Abstract
Funder: Università degli Studi di Milano - Bicocca. Image texture extraction and analysis are fundamental steps in computer vision. In particular, considering the biomedical field, quantitative imaging methods are increasingly gaining importance because they convey scientifically and clinically relevant information for prediction, prognosis, and treatment response assessment. In this context, radiomic approaches are fostering large-scale studies that can have a significant impact in the clinical practice. In this work, we present a novel method, called CHASM (Cuda, HAralick & SoM), which is accelerated on the graphics processing unit (GPU) for quantitative imaging analyses based on Haralick features and on the self-organizing map (SOM). The Haralick features extraction step relies upon the gray-level co-occurrence matrix, which is computationally burdensome on medical images characterized by a high bit depth. The downstream analyses exploit the SOM with the goal of identifying the underlying clusters of pixels in an unsupervised manner. CHASM is conceived to leverage the parallel computation capabilities of modern GPUs. Analyzing ovarian cancer computed tomography images, CHASM achieved up to ~19.5× and ~37× speed-up factors for the Haralick feature extraction and for the SOM execution, respectively, compared to the corresponding C++ coded sequential versions. Such computational results point out the potential of GPUs in the clinical research.
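A CPU-only sketch of the same two steps, GLCM-based Haralick descriptors per image window followed by a SOM over the descriptors, can be written with scikit-image and MiniSom; it assumes an 8-bit 2D input image and illustrative window/step sizes, and it does not reproduce the CUDA acceleration.

```python
# CPU sketch of the CHASM idea: per-window GLCM descriptors, then a SOM.
# Assumes scikit-image >= 0.19 (older versions spell it greycomatrix).
import numpy as np
from skimage import io
from skimage.feature import graycomatrix, graycoprops
from skimage.util import view_as_windows
from minisom import MiniSom

img = io.imread("ct_slice.png")                 # placeholder 8-bit 2D slice
win, step = 17, 8                               # window and stride (illustrative)
patches = view_as_windows(img, (win, win), step=step)

feats = []
for patch in patches.reshape(-1, win, win):
    glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    feats.append([graycoprops(glcm, p)[0, 0]
                  for p in ("contrast", "homogeneity", "energy", "correlation")])
feats = np.asarray(feats)
feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-8)   # standardise features

som = MiniSom(4, 4, feats.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(feats, num_iteration=1000)
clusters = np.array([som.winner(f) for f in feats])        # BMU per window
```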
- Published
- 2021
- Full Text
- View/download PDF
30. HaraliCU: GPU-powered Haralick feature extraction on medical images exploiting the full dynamics of gray-scale levels
- Author
-
Leonardo Rundo, Paolo Cazzaniga, Marco S. Nobile, Andrea Tangherloni, Giancarlo Mauri, Evis Sala, Ramona Woitek, Simone Galimberti, Malyshkin, V, Rundo, L, Tangherloni, A, Galimberti, S, Cazzaniga, P, Woitek, R, Sala, E, Nobile, M, and Mauri, G
- Subjects
050101 languages & linguistics ,Full gray-scale range ,Computer science ,Feature extraction ,Context (language use) ,CUDA ,Bioengineering ,02 engineering and technology ,4603 Computer Vision and Multimedia Computation ,Grayscale ,Field (computer science) ,Image texture ,46 Information and Computing Sciences ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,Graphics ,Haralick features ,GPU computing ,Medical imaging ,Radiomics ,Settore INF/01 - Informatica ,business.industry ,05 social sciences ,INF/01 - INFORMATICA ,Pattern recognition ,020201 artificial intelligence & image processing ,Artificial intelligence ,General-purpose computing on graphics processing units ,Radiomic ,business ,Haralick feature - Abstract
Image texture extraction and analysis are fundamental steps in Computer Vision. In particular, considering the biomedical field, quantitative imaging methods are increasingly gaining importance since they convey scientifically and clinically relevant information for prediction, prognosis, and treatment response assessment. In this context, radiomic approaches are fostering large-scale studies that can have a significant impact in the clinical practice. In this work, we focus on Haralick features, the most common and clinically relevant descriptors. These features are based on the Gray-Level Co-occurrence Matrix (GLCM), whose computation is considerably intensive on images characterized by a high bit-depth (e.g., 16 bits), as in the case of medical images that convey detailed visual information. We propose here HaraliCU, an efficient strategy for the computation of the GLCM and the extraction of an exhaustive set of the Haralick features. HaraliCU was conceived to exploit the parallel computation capabilities of modern Graphics Processing Units (GPUs), allowing us to achieve up to ~20× speed-up with respect to the corresponding C++ coded sequential version. Our GPU-powered solution highlights the promising capabilities of GPUs in the clinical research.
- Published
- 2019
- Full Text
- View/download PDF
31. Semantic learning machine improves the CNN-Based detection of prostate cancer in non-contrast-enhanced MRI
- Author
-
Ivo Gonçalves, Leonardo Rundo, Paulo Lapa, and Mauro Castelli
- Subjects
Computer science ,Multispectral image ,0102 computer and information sciences ,02 engineering and technology ,Non-contrast-enhanced MRI ,01 natural sciences ,Convolutional neural network ,Prostate cancer ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Multiparametric Magnetic Resonance Imaging ,Modality (human–computer interaction) ,Neuroevolution ,Convolutional Neural Networks ,Semantic Learning Machine ,business.industry ,Deep learning ,Pattern recognition ,medicine.disease ,Backpropagation ,010201 computation theory & mathematics ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Diffusion MRI - Abstract
Considering that Prostate Cancer (PCa) is the most frequently diagnosed tumor in Western men, considerable attention has been devoted to computer-assisted PCa detection approaches. However, this task still represents an open research question. In the clinical practice, multiparametric Magnetic Resonance Imaging (MRI) is becoming the most used modality, aiming at defining biomarkers for PCa. In recent years, deep learning techniques have boosted the performance in prostate MR image analysis and classification. This work explores the use of the Semantic Learning Machine (SLM) neuroevolution algorithm to replace the backpropagation algorithm commonly used in the last fully-connected layers of Convolutional Neural Networks (CNNs). We analyzed the non-contrast-enhanced multispectral MRI sequences included in the PROSTATEx dataset, namely: T2-weighted, Proton Density weighted, Diffusion Weighted Imaging. The experimental results show that the SLM significantly outperforms XmasNet, a state-of-the-art CNN. In particular, with respect to XmasNet, the SLM achieves higher classification accuracy (without pre-training the underlying CNN or relying on backpropagation) as well as a speed-up of one order of magnitude.
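The general idea of evolving the final layers instead of backpropagating through them can be illustrated with a toy example; the snippet below evolves only a linear read-out over pre-computed CNN features with a simple (1+1) evolution strategy, which is not the SLM algorithm itself but conveys the derivative-free training of the classification head.

```python
# Toy illustration: evolve the final read-out layer over fixed CNN features
# with a (1+1)-ES instead of backpropagation (NOT the Semantic Learning Machine).
import numpy as np

def evolve_readout(features, labels, iters=1000, sigma=0.05, seed=0):
    """features: (n_samples, n_features); labels: integer class array."""
    rng = np.random.default_rng(seed)
    n_classes = int(labels.max()) + 1
    W = rng.normal(0.0, 0.01, size=(features.shape[1], n_classes))

    def accuracy(weights):
        return float(((features @ weights).argmax(1) == labels).mean())

    best = accuracy(W)
    for _ in range(iters):
        candidate = W + rng.normal(0.0, sigma, size=W.shape)  # mutate weights
        score = accuracy(candidate)
        if score >= best:                                     # keep improvements
            W, best = candidate, score
    return W, best
```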
- Published
- 2019
- Full Text
- View/download PDF
32. Learning More with Less: Conditional PGGAN-based Data Augmentation for Brain Metastases Detection Using Highly-Rough Annotation on MR Images
- Author
-
Changhee Han, Hideki Nakayama, Tomoyuki Noguchi, Leonardo Rundo, Kohei Murao, Fumiya Uchiyama, Shin'ichi Satoh, and Yusuke Kawata
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,02 engineering and technology ,Convolutional neural network ,Machine Learning (cs.LG) ,Medical Image Augmentation ,03 medical and health sciences ,Annotation ,0302 clinical medicine ,Minimum bounding box ,0202 electrical engineering, electronic engineering, information engineering ,Medical imaging ,Conditional PGGANs ,Generative Adversarial Networks ,business.industry ,Pattern recognition ,Real image ,Brain Tumor Detection ,020201 artificial intelligence & image processing ,Artificial intelligence ,MRI ,business ,030217 neurology & neurosurgery - Abstract
Accurate Computer-Assisted Diagnosis, associated with proper data wrangling, can alleviate the risk of overlooking the diagnosis in a clinical environment. Towards this, as a Data Augmentation (DA) technique, Generative Adversarial Networks (GANs) can synthesize additional training data to handle the small/fragmented medical imaging datasets collected from various scanners; those images are realistic but completely different from the original ones, filling the data lack in the real image distribution. However, we cannot easily use them to locate disease areas, considering expert physicians' expensive annotation cost. Therefore, this paper proposes Conditional Progressive Growing of GANs (CPGGANs), incorporating highly-rough bounding box conditions incrementally into PGGANs to place brain metastases at desired positions/sizes on 256 × 256 Magnetic Resonance (MR) images, for Convolutional Neural Network-based tumor detection; this first GAN-based medical DA using automatic bounding box annotation improves the training robustness. The results show that CPGGAN-based DA can boost sensitivity in diagnosis by 10% with clinically acceptable additional False Positives. Surprisingly, further tumor realism, achieved with additional normal brain MR images for CPGGAN training, does not contribute to detection performance, while even three physicians cannot accurately distinguish them from the real ones in Visual Turing Test., 9 pages, 7 figures, accepted to CIKM 2019 (acceptance rate: 19%)
- Published
- 2019
33. A novel framework for MR image segmentation and quantification by using MedGA
- Author
-
Leonardo Rundo, Andrea Tangherloni, Paolo Cazzaniga, Marco S. Nobile, Giorgio Russo, Maria Carla Gilardi, Salvatore Vitabile, Giancarlo Mauri, Daniela Besozzi, Carmelo Militello, Rundo L., Tangherloni A., Cazzaniga P., Nobile M.S., Russo G., Gilardi M.C., Vitabile S., Mauri G., Besozzi D., Militello C., Rundo, L, Tangherloni, A, Cazzaniga, P, Nobile, M, Russo, G, Gilardi, M, Vitabile, S, Mauri, G, Besozzi, D, Militello, C, Rundo, Leonardo [0000-0003-3341-5483], and Apollo - University of Cambridge Repository
- Subjects
ING-INF/06 - BIOINGEGNERIA ELETTRONICA E INFORMATICA ,Adaptive thresholding ,Bimodal intensity distribution ,Evolutionary computation ,Image pre-processing ,Magnetic Resonance imaging ,Quantitative medical imaging ,Computer science ,Image Processing ,Decision Making ,Neurosurgery ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Health Informatics ,Context (language use) ,Algorithms ,Brain Neoplasms ,Computer Simulation ,Female ,Humans ,Image Processing, Computer-Assisted ,Leiomyoma ,Radiosurgery ,Software ,Magnetic Resonance Imaging ,ING-INF/05 - SISTEMI DI ELABORAZIONE DELLE INFORMAZIONI ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Computer-Assisted ,0302 clinical medicine ,Histogram ,medicine ,Segmentation ,Histogram equalization ,medicine.diagnostic_test ,Settore INF/01 - Informatica ,business.industry ,INF/01 - INFORMATICA ,Magnetic resonance imaging ,Pattern recognition ,Image segmentation ,Thresholding ,Computer Science Applications ,Transformation (function) ,Artificial intelligence ,business ,030217 neurology & neurosurgery - Abstract
BACKGROUND AND OBJECTIVES: Image segmentation represents one of the most challenging issues in medical image analysis to distinguish among different adjacent tissues in a body part. In this context, appropriate image pre-processing tools can improve the result accuracy achieved by computer-assisted segmentation methods. Taking into consideration images with a bimodal intensity distribution, image binarization can be used to classify the input pictorial data into two classes, given a threshold intensity value. Unfortunately, adaptive thresholding techniques for two-class segmentation work properly only for images characterized by bimodal histograms. We aim at overcoming these limitations and automatically determining a suitable optimal threshold for bimodal Magnetic Resonance (MR) images, by designing an intelligent image analysis framework tailored to effectively assist the physicians during their decision-making tasks. METHODS: In this work, we present a novel evolutionary framework for image enhancement, automatic global thresholding, and segmentation, which is here applied to different clinical scenarios involving bimodal MR image analysis: (i) uterine fibroid segmentation in MR guided Focused Ultrasound Surgery, and (ii) brain metastatic cancer segmentation in neuro-radiosurgery therapy. Our framework exploits MedGA as a pre-processing stage. MedGA is an image enhancement method based on Genetic Algorithms that improves the threshold selection, obtained by the efficient Iterative Optimal Threshold Selection algorithm, between the underlying sub-distributions in a nearly bimodal histogram. RESULTS: The results achieved by the proposed evolutionary framework were quantitatively evaluated, showing that the use of MedGA as a pre-processing stage outperforms the conventional image enhancement methods (i.e., histogram equalization, bi-histogram equalization, Gamma transformation, and sigmoid transformation), in terms of both MR image enhancement and segmentation evaluation metrics. CONCLUSIONS: Thanks to this framework, MR image segmentation accuracy is considerably increased, allowing for measurement repeatability in clinical workflows. The proposed computational solution could be well-suited for other clinical contexts requiring MR image analysis and segmentation, aiming at providing useful insights for differential diagnosis and prognosis.
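The thresholding stage can be sketched in a few lines: the Iterative Optimal Threshold Selection step (a Ridler–Calvard-style iteration, equivalent in spirit to scikit-image's threshold_isodata) is shown below, while the MedGA enhancement itself is not reproduced here.

```python
# Sketch of Iterative Optimal Threshold Selection for a nearly bimodal image.
import numpy as np

def iterative_optimal_threshold(image, tol=0.5, max_iter=100):
    """Iteratively place the threshold halfway between the two class means."""
    image = np.asarray(image, dtype=float)
    t = image.mean()                              # initial guess: global mean
    for _ in range(max_iter):
        low_mean = image[image <= t].mean()
        high_mean = image[image > t].mean()
        new_t = 0.5 * (low_mean + high_mean)
        if abs(new_t - t) < tol:
            break
        t = new_t
    return t

# Example usage on an MR slice held in a NumPy array `mr_slice`:
# mask = mr_slice > iterative_optimal_threshold(mr_slice)
```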
- Published
- 2019
- Full Text
- View/download PDF
34. USE-Net: Incorporating Squeeze-and-Excitation blocks into U-Net for prostate zonal segmentation of multi-institutional MRI datasets
- Author
-
Claudio Ferretti, Carmelo Militello, Yudai Nagano, Changhee Han, Hideki Nakayama, Paolo Cazzaniga, Giancarlo Mauri, Salvatore Vitabile, Jin Zhang, Maria Carla Gilardi, Ryuichiro Hataya, Marco S. Nobile, Andrea Tangherloni, Daniela Besozzi, Leonardo Rundo, Rundo L., Han C., Nagano Y., Zhang J., Hataya R., Militello C., Tangherloni A., Nobile M.S., Ferretti C., Besozzi D., Gilardi M.C., Vitabile S., Mauri G., Nakayama H., Cazzaniga P., Rundo, L, Han, C, Nagano, Y, Zhang, J, Hataya, R, Militello, C, Tangherloni, A, Nobile, M, Ferretti, C, Besozzi, D, Gilardi, M, Vitabile, S, Mauri, G, Nakayama, H, Cazzaniga, P, Rundo, L [0000-0003-3341-5483], Militello, C [0000-0003-2249-9538], Nobile, MS [0000-0002-7692-7203], Vitabile, S [0000-0002-2673-8551], Mauri, G [0000-0003-3520-4022], Nakayama, H [0000-0001-8726-2780], Cazzaniga, P [0000-0001-7780-0434], and Apollo - University of Cambridge Repository
- Subjects
FOS: Computer and information sciences ,0209 industrial biotechnology ,Computer Science - Machine Learning ,Generalization ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Cognitive Neuroscience ,Computer Science - Computer Vision and Pattern Recognition ,Convolutional neural network ,02 engineering and technology ,Machine Learning (cs.LG) ,Image (mathematics) ,Prostate cancer ,020901 industrial engineering & automation ,Artificial Intelligence ,Prostate ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Medical imaging ,Anatomical MRI ,Segmentation ,Block (data storage) ,medicine.diagnostic_test ,Settore INF/01 - Informatica ,business.industry ,Convolutional neural networks ,Cross-dataset generalization ,Prostate zonal segmentation ,USE-Net ,INF/01 - INFORMATICA ,Magnetic resonance imaging ,Pattern recognition ,medicine.disease ,Computer Science Applications ,medicine.anatomical_structure ,Feature (computer vision) ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Encoder - Abstract
Prostate cancer is the most common malignant tumor in men but prostate Magnetic Resonance Imaging (MRI) analysis remains challenging. Besides whole prostate gland segmentation, the capability to differentiate between the blurry boundary of the Central Gland (CG) and Peripheral Zone (PZ) can lead to differential diagnosis, since tumor frequency and severity differ in these regions. To tackle the prostate zonal segmentation task, we propose a novel Convolutional Neural Network (CNN), called USE-Net, which incorporates Squeeze-and-Excitation (SE) blocks into U-Net. Especially, the SE blocks are added after every Encoder (Enc USE-Net) or Encoder-Decoder block (Enc-Dec USE-Net). This study evaluates the generalization ability of CNN-based architectures on three T2-weighted MRI datasets, each one consisting of a different number of patients and heterogeneous image characteristics, collected by different institutions. The following mixed scheme is used for training/testing: (i) training on either each individual dataset or multiple prostate MRI datasets and (ii) testing on all three datasets with all possible training/testing combinations. USE-Net is compared against three state-of-the-art CNN-based architectures (i.e., U-Net, pix2pix, and Mixed-Scale Dense Network), along with a semi-automatic continuous max-flow model. The results show that training on the union of the datasets generally outperforms training on each dataset separately, allowing for both intra-/cross-dataset generalization. Enc USE-Net shows good overall generalization under any training condition, while Enc-Dec USE-Net remarkably outperforms the other methods when trained on all datasets. These findings reveal that the SE blocks' adaptive feature recalibration provides excellent cross-dataset generalization when testing is performed on samples of the datasets used during training., 44 pages, 6 figures, Accepted to Neurocomputing, Co-first authors: Leonardo Rundo and Changhee Han
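A minimal PyTorch version of the Squeeze-and-Excitation block of the kind inserted after the encoder (or encoder-decoder) blocks is sketched below; the channel size and reduction ratio are illustrative, not the paper's exact configuration.

```python
# Minimal Squeeze-and-Excitation block for 2D feature maps.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global context
        self.fc = nn.Sequential(                      # excitation: channel gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)
        w = self.fc(w).view(b, c, 1, 1)
        return x * w                                  # recalibrate feature maps

# Example: recalibrate a 64-channel feature map from a U-Net encoder block.
out = SEBlock(64)(torch.randn(2, 64, 128, 128))
```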
- Published
- 2019
- Full Text
- View/download PDF
35. Synthesizing Diverse Lung Nodules Wherever Massively: 3D Multi-Conditional GAN-based CT Image Augmentation for Object Detection
- Author
-
Changhee Han, Kazuki Umemoto, Akimichi Ichinose, Hideki Nakayama, Yoshiro Kitamura, Leonardo Rundo, Yujiro Furukawa, Akira Kudo, and Yuanzhong Li
- Subjects
FOS: Computer and information sciences ,3D Lung Nodule Detection ,Discriminator ,Multi Conditional GAN ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,Context (language use) ,010501 environmental sciences ,01 natural sciences ,Convolutional neural network ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Computed Tomography ,46 Information and Computing Sciences ,Data Augmentation ,Minimum bounding box ,4611 Machine Learning ,Medical imaging ,FOS: Electrical engineering, electronic engineering, information engineering ,Sensitivity (control systems) ,Lung ,0105 earth and related environmental sciences ,Cancer ,business.industry ,Lung Cancer ,Image and Video Processing (eess.IV) ,Pattern recognition ,Electrical Engineering and Systems Science - Image and Video Processing ,Object detection ,Biomedical Imaging ,Noise (video) ,Artificial intelligence ,business - Abstract
Accurate Computer-Assisted Diagnosis, relying on large-scale annotated pathological images, can alleviate the risk of overlooking the diagnosis. Unfortunately, in medical imaging, most available datasets are small/fragmented. To tackle this, as a Data Augmentation (DA) method, 3D conditional Generative Adversarial Networks (GANs) can synthesize desired realistic/diverse 3D images as additional training data. However, no 3D conditional GAN-based DA approach exists for general bounding box-based 3D object detection, while it can locate disease areas with physicians' minimum annotation cost, unlike rigorous 3D segmentation. Moreover, since lesions vary in position/size/attenuation, further GAN-based DA performance requires multiple conditions. Therefore, we propose 3D Multi-Conditional GAN (MCGAN) to generate realistic/diverse 32 × 32 × 32 nodules placed naturally on lung Computed Tomography images to boost sensitivity in 3D object detection. Our MCGAN adopts two discriminators for conditioning: the context discriminator learns to classify real vs synthetic nodule/surrounding pairs with noise box-centered surroundings; the nodule discriminator attempts to classify real vs synthetic nodules with size/attenuation conditions. The results show that 3D Convolutional Neural Network-based detection can achieve higher sensitivity under any nodule size/attenuation at fixed False Positive rates and overcome the medical data paucity with the MCGAN-generated realistic nodules---even expert physicians fail to distinguish them from the real ones in Visual Turing Test., Comment: 9 pages, 6 figures, accepted to 3DV 2019
- Published
- 2019
- Full Text
- View/download PDF
36. MedGA: A novel evolutionary method for image enhancement in medical imaging systems
- Author
-
Leonardo Rundo, Andrea Tangherloni, Marco S. Nobile, Carmelo Militello, Daniela Besozzi, Giancarlo Mauri, Paolo Cazzaniga, Rundo, L, Tangherloni, A, Nobile, M, Militello, C, Besozzi, D, Mauri, G, Cazzaniga, P, Rundo, L [0000-0003-3341-5483], Militello, C [0000-0003-2249-9538], Mauri, G [0000-0003-3520-4022], Cazzaniga, P [0000-0001-7780-0434], and Apollo - University of Cambridge Repository
- Subjects
Computer science ,Image quality ,Equalization (audio) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Medical imaging systems ,Image processing ,Signal ,ING-INF/05 - SISTEMI DI ELABORAZIONE DELLE INFORMAZIONI ,Engineering (all) ,Magnetic resonance imaging ,Artificial Intelligence ,Histogram ,Medical imaging system ,Medical imaging ,medicine ,Computer vision ,Uterine fibroid ,Histogram equalization ,Genetic Algorithm ,medicine.diagnostic_test ,Settore INF/01 - Informatica ,business.industry ,Bimodal image histogram ,Genetic Algorithms ,Image enhancement ,Uterine fibroids ,General Engineering ,Computer Science Applications1707 Computer Vision and Pattern Recognition ,INF/01 - INFORMATICA ,Computer Science Applications ,Intensity histogram ,Artificial intelligence ,business - Abstract
Medical imaging systems often require the application of image enhancement techniques to help physicians in anomaly/abnormality detection and diagnosis, as well as to improve the quality of images that undergo automated image processing. In this work we introduce MedGA, a novel image enhancement method based on Genetic Algorithms that is able to improve the appearance and the visual quality of images characterized by a bimodal gray level intensity histogram, by strengthening their two underlying sub-distributions. MedGA can be exploited as a pre-processing step for the enhancement of images with a nearly bimodal histogram distribution, to improve the results achieved by downstream image processing techniques. As a case study, we use MedGA as a clinical expert system for contrast-enhanced Magnetic Resonance image analysis, considering Magnetic Resonance guided Focused Ultrasound Surgery for uterine fibroids. The performances of MedGA are quantitatively evaluated by means of various image enhancement metrics, and compared against the conventional state-of-the-art image enhancement techniques, namely, histogram equalization, bi-histogram equalization, encoding and decoding Gamma transformations, and sigmoid transformations. We show that MedGA considerably outperforms the other approaches in terms of signal and perceived image quality, while preserving the input mean brightness. MedGA may have a significant impact in real healthcare environments, representing an intelligent solution for Clinical Decision Support Systems in radiology practice for image enhancement, to visually assist physicians during their interactive decision-making tasks, as well as for the improvement of downstream automated processing pipelines in clinically useful measurements.
- Published
- 2019
- Full Text
- View/download PDF
37. Combining noise-to-image and image-to-image GANs: Brain MR image augmentation for tumor detection
- Author
-
Giancarlo Mauri, Yudai Nagano, Hideki Nakayama, Ryosuke Araki, Leonardo Rundo, Hideaki Hayashi, Changhee Han, Yujiro Furukawa, Han, C, Rundo, L, Araki, R, Nagano, Y, Furukawa, Y, Mauri, G, Nakayama, H, Hayashi, H, Han, C [0000-0002-4429-3859], Hayashi, H [0000-0002-4800-1761], and Apollo - University of Cambridge Repository
- Subjects
FOS: Computer and information sciences ,Generative adversarial networks ,Boosting (machine learning) ,brain MRI ,General Computer Science ,Data augmentation ,Computer Science - Artificial Intelligence ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,02 engineering and technology ,synthetic image generation ,Convolutional neural network ,ING-INF/05 - SISTEMI DI ELABORAZIONE DELLE INFORMAZIONI ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,0202 electrical engineering, electronic engineering, information engineering ,Medical imaging ,FOS: Electrical engineering, electronic engineering, information engineering ,Training ,General Materials Science ,Brain magnetic resonance imaging ,tumor detection ,Tumors ,Training set ,Medical diagnostic imaging ,business.industry ,Image and Video Processing (eess.IV) ,General Engineering ,INF/01 - INFORMATICA ,Pattern recognition ,Gallium nitride ,Image synthesis ,Electrical Engineering and Systems Science - Image and Video Processing ,Real image ,GAN ,Tumor detection ,Artificial Intelligence (cs.AI) ,020201 artificial intelligence & image processing ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,Artificial intelligence ,Mr images ,GANs ,business ,lcsh:TK1-9971 - Abstract
Convolutional Neural Networks (CNNs) achieve excellent computer-assisted diagnosis with sufficient annotated training data. However, most medical imaging datasets are small and fragmented. In this context, Generative Adversarial Networks (GANs) can synthesize realistic/diverse additional training images to fill the data lack in the real image distribution; researchers have improved classification by augmenting data with noise-to-image (e.g., random noise samples to diverse pathological images) or image-to-image GANs (e.g., a benign image to a malignant one). Yet, no research has reported results combining noise-to-image and image-to-image GANs for further performance boost. Therefore, to maximize the DA effect with the GAN combinations, we propose a two-step GAN-based DA that generates and refines brain Magnetic Resonance (MR) images with/without tumors separately: (i) Progressive Growing of GANs (PGGANs), a multi-stage noise-to-image GAN for high-resolution MR image generation, first generates realistic/diverse 256 × 256 images; (ii) Multimodal UNsupervised Image-to-image Translation (MUNIT), which combines GANs/Variational AutoEncoders, or SimGAN, which uses a DA-focused GAN loss, further refines the texture/shape of the PGGAN-generated images similarly to the real ones. We thoroughly investigate CNN-based tumor classification results, also considering the influence of pre-training on ImageNet and discarding weird-looking GAN-generated images. The results show that, when combined with classic DA, our two-step GAN-based DA can significantly outperform the classic DA alone, in tumor detection (i.e., boosting sensitivity from 93.67% to 97.48%) and also in other medical imaging tasks., Comment: 12 pages, 7 figures, accepted to IEEE ACCESS
- Published
- 2019
38. Tissue-specific and interpretable sub-segmentation of whole tumour burden on CT images by unsupervised fuzzy clustering
- Author
-
Mireia Crispin-Ortuzar, Paula Martin-Gonzalez, Evis Sala, James D. Brenton, Stephan Ursprung, Ramona Woitek, Leonardo Rundo, Lucian Beer, Florian Markowetz, Rundo, Leonardo [0000-0003-3341-5483], Beer, Lucian [0000-0003-4388-7580], Ursprung, Stephan [0000-0003-2476-178X], Markowetz, Florian [0000-0002-2784-5308], Brenton, James [0000-0002-5738-6683], Sala, Evis [0000-0002-5518-9360], Woitek, Ramona [0000-0002-9146-9159], and Apollo - University of Cambridge Repository
- Subjects
0301 basic medicine ,Fuzzy clustering ,Tumour heterogeneity ,Unsupervised fuzzy clustering ,Computer science ,Image Processing ,Health Informatics ,Fuzzy logic ,Article ,Otsu's method ,03 medical and health sciences ,symbols.namesake ,Computer-Assisted ,0302 clinical medicine ,Fuzzy Logic ,Ovarian cancer ,Image Processing, Computer-Assisted ,Cluster Analysis ,Segmentation ,Cluster analysis ,Tomography ,Computed tomography ,Spatial analysis ,Radiomics ,Renal cell carcinoma ,Tissue-specific segmentation ,Tomography, X-Ray Computed ,Tumor Burden ,Algorithms ,business.industry ,Pattern recognition ,X-Ray Computed ,Computer Science Applications ,Determining the number of clusters in a data set ,030104 developmental biology ,symbols ,Artificial intelligence ,business ,030217 neurology & neurosurgery - Abstract
Background: Cancer typically exhibits genotypic and phenotypic heterogeneity, which can have prognostic significance and influence therapy response. Computed Tomography (CT)-based radiomic approaches calculate quantitative features of tumour heterogeneity at a mesoscopic level, regardless of macroscopic areas of hypo-dense (i.e., cystic/necrotic), hyper-dense (i.e., calcified), or intermediately dense (i.e., soft tissue) portions. Method: With the goal of achieving the automated sub-segmentation of these three tissue types, we present here a two-stage computational framework based on unsupervised Fuzzy C-Means Clustering (FCM) techniques. No existing approach has specifically addressed this task so far. Our tissue-specific image sub-segmentation was tested on ovarian cancer (pelvic/ovarian and omental disease) and renal cell carcinoma CT datasets using both overlap-based and distance-based metrics for evaluation. Results: On all tested sub-segmentation tasks, our two-stage segmentation approach outperformed conventional segmentation techniques: fixed multi-thresholding, the Otsu method, and automatic cluster number selection heuristics for the K-means clustering algorithm. In addition, experiments showed that the integration of the spatial information into the FCM algorithm generally achieves more accurate segmentation results, whilst the kernelised FCM versions are not beneficial. The best spatial FCM configuration achieved average Dice similarity coefficient values starting from 81.94±4.76 and 83.43±3.81 for hyper-dense and hypo-dense components, respectively, for the investigated sub-segmentation tasks. Conclusions: The proposed intelligent framework could be readily integrated into clinical research environments and provides robust tools for future radiomic biomarker validation. Highlights: • Intelligent two-stage method for CT tissue-specific image segmentation of whole tumours. • Unsupervised fuzzy clustering framework for interpretable sub-segmentation. • Sub-segmentation results achieved on ovarian and renal cancer are accurate and reliable. • The proposed computational framework considerably outperforms conventional approaches. • Insights gained about intra-tumoural heterogeneity evaluation for radiomics.
- Published
- 2020
- Full Text
- View/download PDF
39. GTVcut for neuro-radiosurgery treatment planning: an MRI brain cancer seeded image segmentation method based on a cellular automata model
- Author
-
Leonardo Rundo, Carmelo Militello, Giorgio Russo, Salvatore Vitabile, Maria Carla Gilardi, Giancarlo Mauri, Rundo, L, Militello, C, Russo, G, Vitabile, S, Gilardi, M, Mauri, G, Rundo, Leonardo, Militello, Carmelo, Russo, Giorgio, Vitabile, Salvatore, Gilardi, Maria Carla, and Mauri, Giancarlo
- Subjects
Cellular automata ,Brain cancers ,ING-INF/06 - BIOINGEGNERIA ELETTRONICA E INFORMATICA ,Computer-assisted segmentation ,Gamma Knife neuro-radiosurgery ,MR imaging ,Computer science ,medicine.medical_treatment ,02 engineering and technology ,Brain cancer ,Radiosurgery ,ING-INF/05 - SISTEMI DI ELABORAZIONE DELLE INFORMAZIONI ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Segmentation ,Radiation treatment planning ,Modality (human–computer interaction) ,medicine.diagnostic_test ,business.industry ,Computer Science Application ,INF/01 - INFORMATICA ,Magnetic resonance imaging ,Pattern recognition ,Computer Science Applications1707 Computer Vision and Pattern Recognition ,Image segmentation ,Cellular automaton ,Computer Science Applications ,Radiation therapy ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business - Abstract
Despite the development of advanced segmentation techniques, achieving accurate and reproducible gross tumor volume (GTV) segmentation results is still an important challenge in neuro-radiosurgery. Nowadays, magnetic resonance imaging (MRI) is the most prominent modality in radiation therapy for soft-tissue anatomical districts. Gamma Knife stereotactic neuro-radiosurgery is a minimally invasive technology for dealing with tumors that are inaccessible to, or insufficiently treated by, traditional surgery or radiotherapy. During the treatment planning phase, the GTV is generally contoured by experienced neurosurgeons and radiation oncologists using fully manual segmentation procedures on MR images. Unfortunately, this operative methodology is time-consuming and operator-dependent. Delineation result repeatability, in terms of both intra- and inter-operator reliability, can be achieved only by using computer-assisted approaches. In this paper a novel semi-automatic seeded image segmentation method, based on a cellular automata model, for MRI brain cancer detection and delineation is proposed. This approach, called GTVcut, employs an adaptive seed selection strategy and helps to segment the GTV, by identifying the target volume to be treated using the Gamma Knife device. The accuracy of GTVcut was evaluated on a dataset composed of 32 brain cancers, using both spatial overlap-based and distance-based metrics. The achieved experimental results are highly reproducible, showing the effectiveness and the clinical feasibility of the proposed approach.
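A simplified grow-cut-style cellular automaton conveys the seeded propagation idea; the sketch below is not the GTVcut implementation (it uses wrap-around neighbours via np.roll and a plain intensity-similarity attack force) and expects a 2D slice plus a seed label map.

```python
# Simplified grow-cut style cellular automaton for seeded segmentation.
# seeds: label map with 0 = unlabelled, 1 = tumour, 2 = background.
import numpy as np

def growcut(image, seeds, n_iter=200):
    image = image.astype(float) / (image.max() + 1e-8)
    labels = seeds.copy()
    strength = (seeds > 0).astype(float)          # seed cells start fully "strong"
    for _ in range(n_iter):
        changed = False
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb_lab = np.roll(labels, (dy, dx), axis=(0, 1))
            nb_str = np.roll(strength, (dy, dx), axis=(0, 1))
            nb_img = np.roll(image, (dy, dx), axis=(0, 1))
            g = 1.0 - np.abs(image - nb_img)      # similarity-based attack force
            attack = g * nb_str
            win = (attack > strength) & (nb_lab > 0)
            if win.any():                         # neighbour conquers the cell
                labels[win] = nb_lab[win]
                strength[win] = attack[win]
                changed = True
        if not changed:
            break
    return labels
```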
- Published
- 2018
- Full Text
- View/download PDF
40. Computational Intelligence for Parameter Estimation of Biochemical Systems
- Author
-
Marco S. Nobile, Andrea Tangherloni, Leonardo Rundo, Simone Spolaor, Daniela Besozzi, Giancarlo Mauri, and Paolo Cazzaniga
- Subjects
Control and Optimization ,Optimization problem ,Computer science ,0206 medical engineering ,MathematicsofComputing_NUMERICALANALYSIS ,Computational intelligence ,02 engineering and technology ,Machine learning ,computer.software_genre ,Swarm intelligence ,Fuzzy logic ,Evolutionary computation ,ING-INF/05 - SISTEMI DI ELABORAZIONE DELLE INFORMAZIONI ,Artificial Intelligence ,Computational Intelligence, Systems Biology, Parameter Estimation, Evolutionary Computation, Swarm Intelligence ,Genetic algorithm ,0202 electrical engineering, electronic engineering, information engineering ,CMA-ES ,Settore INF/01 - Informatica ,business.industry ,Particle swarm optimization ,INF/01 - INFORMATICA ,Estimation of distribution algorithm ,Differential evolution ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Evolution strategy ,computer ,020602 bioinformatics - Abstract
In the field of Systems Biology, simulating the dynamics of biochemical models is one of the most effective methodologies for understanding how cellular processes function in normal or altered conditions. However, the lack of kinetic rates, which are necessary to perform accurate simulations, strongly limits the scope of these analyses. Parameter Estimation (PE), which consists in identifying a proper model parameterization, is a non-linear, non-convex and multi-modal optimization problem, typically tackled by means of Computational Intelligence techniques such as Evolutionary Computation and Swarm Intelligence. In this work, we perform a thorough investigation of the most widespread methods for PE, namely Artificial Bee Colony (ABC), Covariance Matrix Adaptation Evolution Strategy (CMA-ES), Differential Evolution (DE), Estimation of Distribution Algorithm (EDA), Genetic Algorithms (GAs), Particle Swarm Optimization (PSO), and Fuzzy Self-Tuning PSO (FST-PSO), comparing their performance on a set of synthetic (yet realistic) biochemical models of increasing size and complexity. Our results show that a variant of the settings-free FST-PSO algorithm consistently outperforms all other methods; ABC and GAs represent the best-performing alternatives, while methods based on multivariate normal distributions (e.g., CMA-ES, EDA) struggle to keep pace with the other approaches.
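As a rough illustration of how PE is set up with Swarm Intelligence, the sketch below runs a plain global-best PSO that minimises the squared distance between simulated and target time series; `simulate` is a placeholder for any deterministic simulator of the model, and the fixed textbook coefficients used here differ from the settings-free FST-PSO variant highlighted in the paper.

# Illustrative PSO-based Parameter Estimation sketch (not the paper's exact setup).
import numpy as np

def fitness(params, simulate, target):
    # squared distance between simulated dynamics and target time series
    return np.sum((simulate(params) - target) ** 2)

def pso_pe(simulate, target, dim, bounds, n_particles=40, n_iter=200,
           w=0.729, c1=1.494, c2=1.494, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # candidate parameterisations
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([fitness(p, simulate, target) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                       # keep parameters within bounds
        f = np.array([fitness(p, simulate, target) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()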
- Published
- 2018
41. GAN-based synthetic brain MR image generation
- Author
-
Changhee Han, Hideaki Hayashi, Leonardo Rundo, Ryosuke Araki, Wataru Shimoda, Shinichi Muramatsu, Yujiro Furukawa, Giancarlo Mauri, and Hideki Nakayama
- Subjects
Radiology, Nuclear Medicine and Imaging ,Generative Adversarial Networks ,Synthetic Medical Image Generation ,business.industry ,Computer science ,Biomedical Engineering ,Pattern recognition ,02 engineering and technology ,Data Augmentation ,Visual Turing Test ,Generative Adversarial Network ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Brain MRI ,0202 electrical engineering, electronic engineering, information engineering ,Medical imaging ,020201 artificial intelligence & image processing ,Brain magnetic resonance imaging ,Artificial intelligence ,Mr images ,business ,Focus (optics) ,Physician Training - Abstract
In medical imaging, it remains a challenging and valuable goal how to generate realistic medical images completely different from the original ones; the obtained synthetic images would improve diagnostic reliability, allowing for data augmentation in computer-assisted diagnosis as well as physician training. In this paper, we focus on generating synthetic multi-sequence brain Magnetic Resonance (MR) images using Generative Adversarial Networks (GANs). This involves difficulties mainly due to low contrast MR images, strong consistency in brain anatomy, and intra-sequence variability. Our novel realistic medical image generation approach shows that GANs can generate 128 χ 128 brain MR images avoiding artifacts. In our preliminary validation, even an expert physician was unable to accurately distinguish the synthetic images from the real samples in the Visual Turing Test.
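To give a concrete idea of the models involved, below is a compact DCGAN-style generator for 128 × 128 single-channel images written in PyTorch. It is a generic sketch under our own layer choices, not the architecture used by the authors.

# Generic DCGAN-style generator for 128 x 128 single-channel MR-like images
# (illustrative only, not the authors' network).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=128, base=64):
        super().__init__()
        def up(in_c, out_c):
            # transposed convolution that doubles the spatial resolution
            return nn.Sequential(
                nn.ConvTranspose2d(in_c, out_c, 4, stride=2, padding=1, bias=False),
                nn.BatchNorm2d(out_c),
                nn.ReLU(inplace=True),
            )
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, base * 8, 4, 1, 0, bias=False),  # 1x1 -> 4x4
            nn.BatchNorm2d(base * 8),
            nn.ReLU(inplace=True),
            up(base * 8, base * 4),   # 4 -> 8
            up(base * 4, base * 2),   # 8 -> 16
            up(base * 2, base),       # 16 -> 32
            up(base, base // 2),      # 32 -> 64
            nn.ConvTranspose2d(base // 2, 1, 4, 2, 1),                 # 64 -> 128
            nn.Tanh(),                # intensities in [-1, 1]
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

# Usage: Generator()(torch.randn(16, 128)) returns a tensor of shape (16, 1, 128, 128).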
- Published
- 2018
42. Fully automatic multispectral MR image segmentation of prostate gland based on the fuzzy C-means clustering algorithm
- Author
-
Leonardo Rundo, Carmelo Militello, Giorgio Russo, Davide D’Urso, Lucia Maria Valastro, Antonio Garufi, Giancarlo Mauri, Salvatore Vitabile, and Maria Carla Gilardi
- Subjects
Computer science ,Multispectral image ,Fully automatic segmentation ,Multispectral MR imaging ,Prostate cancer ,Prostate gland ,Unsupervised fuzzy C-means clustering ,Fuzzy logic ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Prostate ,medicine ,Segmentation ,Computer vision ,Cluster analysis ,medicine.diagnostic_test ,business.industry ,INF/01 - INFORMATICA ,Magnetic resonance imaging ,fully automatic segmentation ,Image segmentation ,medicine.disease ,prostate cancer ,multispectral MR imaging ,unsupervised Fuzzy C-Means clustering ,medicine.anatomical_structure ,Artificial intelligence ,business ,prostate gland ,030217 neurology & neurosurgery - Abstract
Prostate imaging is a critical issue in clinical practice, especially for the diagnosis, therapy, and staging of prostate cancer. Magnetic Resonance Imaging (MRI) can provide both morphologic and complementary functional information about the tumor region. Manual detection and segmentation of the prostate gland and carcinoma on multispectral MRI data are not easily practicable in the clinical routine because of the long time required by experienced radiologists to analyze several types of imaging data. In this paper, a fully automatic image segmentation method, exploiting an unsupervised Fuzzy C-Means (FCM) clustering technique for multispectral T1-weighted and T2-weighted MRI data processing, is proposed. This approach enables prostate segmentation and automatic gland volume calculation. Segmentation trials were performed on a dataset composed of 7 patients affected by prostate cancer, using both area-based and distance-based metrics for evaluation. The achieved experimental results are encouraging, showing good segmentation accuracy.
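For reference, the textbook unsupervised FCM formulation on which pipelines of this kind rely (standard form, not specific to this paper) minimises

J_m = \sum_{i=1}^{N} \sum_{j=1}^{C} u_{ij}^{m} \, \lVert \mathbf{x}_i - \mathbf{c}_j \rVert^{2}

by alternating the membership and centroid updates

u_{ij} = \left[ \sum_{k=1}^{C} \left( \frac{\lVert \mathbf{x}_i - \mathbf{c}_j \rVert}{\lVert \mathbf{x}_i - \mathbf{c}_k \rVert} \right)^{\frac{2}{m-1}} \right]^{-1},
\qquad
\mathbf{c}_j = \frac{\sum_{i=1}^{N} u_{ij}^{m} \, \mathbf{x}_i}{\sum_{i=1}^{N} u_{ij}^{m}},

where, in the multispectral setting described above, \mathbf{x}_i stacks the T1-weighted and T2-weighted intensities of voxel i, C is the number of clusters, and m > 1 is the fuzzification exponent.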
- Published
- 2017
43. Reboot strategies in particle swarm optimization and their impact on parameter estimation of biochemical systems
- Author
-
Simone Spolaor, Andrea Tangherloni, Leonardo Rundo, Marco S. Nobile, and Paolo Cazzaniga
- Subjects
Mathematical optimization ,Reboot strategies ,Computer science ,Parameter Estimation ,0206 medical engineering ,Health Informatics ,GPU Computing ,Particle Swarm Optimization ,Systems Biology ,02 engineering and technology ,Computational Mathematics ,Modeling and Simulation ,Agricultural and Biological Sciences (miscellaneous) ,Artificial Intelligence ,Computer Science Applications ,1707 Computer Vision and Pattern Recognition ,Local optimum ,0202 electrical engineering, electronic engineering, information engineering ,Settore INF/01 - Informatica ,Estimation theory ,Swarm behaviour ,020201 artificial intelligence & image processing ,General-purpose computing on graphics processing units ,020602 bioinformatics ,Reboot
Computational methods adopted in the field of Systems Biology require complete knowledge of the reaction kinetic constants to simulate the dynamics and understand the emergent behavior of biochemical systems. However, kinetic parameters of biochemical reactions are often difficult or impossible to measure, so they are generally inferred from experimental data in a process known as Parameter Estimation (PE). We consider here a PE methodology that exploits Particle Swarm Optimization (PSO) to estimate an appropriate kinetic parameterization, by comparing experimental time-series target data with the in silico dynamics simulated using the parameterization encoded by each particle. In this work we present three different reboot strategies for PSO, whose aim is to reinitialize particle positions so as to prevent particles from getting trapped in local optima, and we compare the performance of PSO coupled with these reboot strategies against standard PSO in the PE of two biochemical systems. Since PE requires a huge number of simulations at each iteration, we exploit a GPU-powered deterministic simulator, cupSODA, which performs all simulations and fitness evaluations in parallel. Finally, we show that the performance of our implementation scales sublinearly with respect to the swarm size, even on outdated GPUs.
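A generic stagnation-based reboot can be sketched in a few lines (an illustrative variant under our own naming, not necessarily one of the three strategies proposed in the paper): particles whose personal best has not improved for a given number of iterations are re-initialised uniformly in the search space. The helper below would be called once per PSO iteration.

# Generic stagnation-based reboot for PSO (illustrative variant).
import numpy as np

def reboot_stagnant(x, v, pbest_f, prev_pbest_f, stall, bounds, patience=20, rng=None):
    rng = rng or np.random.default_rng()
    lo, hi = bounds
    stall = np.where(pbest_f < prev_pbest_f, 0, stall + 1)   # reset counter on improvement
    to_reboot = stall >= patience
    n = int(to_reboot.sum())
    if n:
        x[to_reboot] = rng.uniform(lo, hi, (n, x.shape[1]))   # new random positions
        v[to_reboot] = 0.0                                    # restart with zero velocity
        stall[to_reboot] = 0
    return x, v, stall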
- Published
- 2017
44. A fully automatic approach for multimodal PET and MR image segmentation in gamma knife treatment planning
- Author
-
Leonardo Rundo, Alessandro Stefano, Carmelo Militello, Giorgio Russo, Maria Gabriella Sabini, Corrado D’Arrigo, Francesco Marletta, Massimo Ippolito, Giancarlo Mauri, Salvatore Vitabile, and Maria Carla Gilardi
- Subjects
Radiotherapy Planning ,Brain tumor ,Health Informatics ,02 engineering and technology ,Fuzzy C-means clustering ,Radiosurgery ,Brain tumors ,Multimodal Imaging ,ING-INF/05 - SISTEMI DI ELABORAZIONE DELLE INFORMAZIONI ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Computer-Assisted ,0302 clinical medicine ,Random walker algorithm ,0202 electrical engineering, electronic engineering, information engineering ,Humans ,Medicine ,Segmentation ,Computer vision ,Radiation treatment planning ,Cluster analysis ,Image resolution ,PET/MR imaging ,Modality (human–computer interaction) ,Brain Neoplasms ,business.industry ,Radiotherapy Planning, Computer-Assisted ,INF/01 - INFORMATICA ,Multimodal therapy ,medicine.disease ,Random Walker algorithm ,Magnetic Resonance Imaging ,Computer Science Applications ,Gamma knife treatment ,Positron-Emission Tomography ,020201 artificial intelligence & image processing ,Multimodal image segmentation ,Gamma knife treatments ,Artificial intelligence ,business ,Software - Abstract
Background and objectives: Clinical practice in Gamma Knife treatments is currently based on MRI anatomical information alone. However, the joint use of MRI and PET images can be valuable for considering both the anatomical and the metabolic information about the lesion to be treated. In this paper we present a co-segmentation method that integrates the Biological Target Volume (BTV), segmented on [11C]-Methionine PET (MET-PET) images, with the Gross Target Volume (GTV), segmented on the corresponding co-registered MR images. The resulting volume provides enhanced brain tumor information to be used in stereotactic neuro-radiosurgery treatment planning. The GTV often does not entirely match the BTV, which conveys metabolic information about brain lesions; PET imaging can therefore provide complementary information useful for treatment planning, and the BTV can be used to modify the GTV, enhancing Clinical Target Volume (CTV) delineation.
Methods: A novel fully automatic multimodal PET/MRI segmentation method for Leksell Gamma Knife treatments is proposed. This approach, based on the Random Walker (RW) and Fuzzy C-Means clustering (FCM) algorithms, improves and combines two computer-assisted and operator-independent single-modality methods, previously developed and validated, to segment the BTV and the GTV from PET and MR images, respectively. In addition, the GTV is used to combine the superior contrast of PET images with the higher spatial resolution of MRI, obtaining a new BTV, called BTVMRI. A total of 19 brain metastatic tumors, treated with stereotactic neuro-radiosurgery, were retrospectively analyzed. A framework for the evaluation of multimodal PET/MRI segmentation is also presented, considering volume-based, overlap-based and spatial distance-based metrics, together with statistics to measure the correlation among the different segmentation processes. Since a gold-standard CTV cannot be defined from both MRI and PET images without treatment response assessment, the feasibility and the clinical value of BTV integration in Gamma Knife treatment planning were assessed qualitatively by three experienced clinicians using a five-point Likert scale.
Results: The experimental results showed that the GTV and BTV segmentations are statistically correlated (Spearman's rank correlation coefficient: 0.898) but have a low degree of similarity (average Dice Similarity Coefficient: 61.87 ± 14.64). Volume measurements and evaluation metric values therefore demonstrated that MRI and PET convey different but complementary imaging information, and that GTV and BTV can be combined to enhance treatment planning. In more than 50% of the cases the CTV was strongly or moderately conditioned by metabolic imaging; in particular, BTVMRI enhanced the CTV more accurately than BTV in 25% of the cases.
Conclusions: The proposed fully automatic multimodal PET/MRI segmentation method is a valid, operator-independent methodology that helps clinicians define a CTV including both metabolic and morphologic information. BTVMRI and GTV should be considered together for a comprehensive treatment planning.
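The overlap and correlation analysis between GTV and BTV can be reproduced from binary masks with a few lines; the sketch below (our own naming, assuming SciPy is available) computes the per-lesion Dice coefficients and the Spearman correlation between GTV and BTV volumes.

# Sketch of the overlap and correlation analysis between GTV (MRI) and BTV (PET)
# binary masks; volumes are in voxels (multiply by voxel size for mm^3).
import numpy as np
from scipy.stats import spearmanr

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def compare_lesions(gtv_masks, btv_masks):
    dscs = [dice(g, b) for g, b in zip(gtv_masks, btv_masks)]
    gtv_vols = [int(g.sum()) for g in gtv_masks]
    btv_vols = [int(b.sum()) for b in btv_masks]
    rho, _ = spearmanr(gtv_vols, btv_vols)        # rank correlation between lesion volumes
    return np.mean(dscs), np.std(dscs), rho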
- Published
- 2017
- Full Text
- View/download PDF
45. Proactive Particles in Swarm Optimization: A settings-free algorithm for real-parameter single objective optimization problems
- Author
-
Andrea Tangherloni, Leonardo Rundo, and Marco S. Nobile
- Subjects
Mathematical optimization ,Proactive Particles in Swarm Optimization ,Optimization problem ,Fitness landscape ,MathematicsofComputing_NUMERICALANALYSIS ,0211 other engineering and technologies ,02 engineering and technology ,Swarm intelligence ,Fuzzy logic ,Fuzzy Logic ,0202 electrical engineering, electronic engineering, information engineering ,CEC 2017 competition ,Multi-swarm optimization ,Mathematics ,021103 operations research ,Settore INF/01 - Informatica ,business.industry ,Particle swarm optimization ,Swarm behaviour ,INF/01 - INFORMATICA ,Particle Swarm Optimization ,Real-parameter single objective optimization ,Settings-free algorithms ,Benchmark (computing) ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Algorithm - Abstract
Particle Swarm Optimization (PSO) is an effective Swarm Intelligence technique for the optimization of non-linear, complex, high-dimensional problems. Since PSO's performance strongly depends on the choice of its settings, in this work we consider a self-tuning version of PSO, called Proactive Particles in Swarm Optimization (PPSO). PPSO leverages Fuzzy Logic to dynamically determine the best settings for the inertia weight, the cognitive factor and the social factor. The PPSO algorithm differs significantly from other versions of PSO relying on Fuzzy Logic, because specific settings are assigned to each particle according to its own history, instead of being assigned globally to the whole swarm. In this way, PPSO's particles gain a limited form of autonomous and proactive intelligence, in contrast to the purely reactive agents of standard PSO. Our results show that PPSO achieves overall good optimization performance on the benchmark functions of the CEC 2017 test suite, with the exception of those based on the Schwefel function, whose fitness landscape seems to mislead the fuzzy reasoning. Moreover, on many benchmark functions PPSO converges faster than PSO in the case of high-dimensional problems.
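The per-particle self-tuning idea can be conveyed by a toy rule that maps each particle's recent fitness improvement to its own inertia weight; this is a strong simplification of PPSO's fuzzy rules, kept here only to illustrate per-particle (rather than swarm-wide) setting adaptation.

# Toy per-particle inertia adaptation (a strong simplification of PPSO's fuzzy rules).
import numpy as np

def adapt_inertia(prev_f, curr_f, w_min=0.3, w_max=0.9):
    # normalised improvement of each particle over the last iteration, clipped to [0, 1]
    improvement = np.clip((prev_f - curr_f) / (np.abs(prev_f) + 1e-12), 0.0, 1.0)
    # particles that improved a lot keep exploring (high inertia);
    # stagnating particles exploit their neighbourhood (low inertia)
    return w_min + (w_max - w_min) * improvement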
- Published
- 2017
46. Automated prostate gland segmentation based on an unsupervised fuzzy C-means clustering technique using multispectral T1w and T2w MR imaging
- Author
-
Leonardo Rundo, Carmelo Militello, Giorgio Russo, Antonio Garufi, Salvatore Vitabile, Maria Carla Gilardi, and Giancarlo Mauri
- Subjects
Computer science ,Automated segmentation ,Fuzzy C-Means clustering ,Multispectral MR imaging ,Prostate cancer ,Prostate gland ,Unsupervised machine learning ,Multispectral image ,02 engineering and technology ,automated segmentation ,multispectral MR imaging ,prostate gland ,prostate cancer ,unsupervised Machine Learning ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Prostate ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Computer vision ,Segmentation ,Cluster analysis ,Settore ING-INF/05 - Sistemi Di Elaborazione Delle Informazioni ,medicine.diagnostic_test ,business.industry ,INF/01 - INFORMATICA ,Magnetic resonance imaging ,medicine.disease ,medicine.anatomical_structure ,Unsupervised learning ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Information Systems ,Multispectral segmentation - Abstract
Prostate imaging analysis is a difficult task in the diagnosis, therapy, and staging of prostate cancer. In clinical practice, Magnetic Resonance Imaging (MRI) is increasingly used thanks to its morphologic and functional capabilities. However, manual detection and delineation of the prostate gland on multispectral MRI data is currently a time-consuming and operator-dependent procedure, and efficient computer-assisted segmentation approaches that fully address these issues are not yet available, although they have the potential to do so. In this paper, a novel automatic prostate MR image segmentation method based on the Fuzzy C-Means (FCM) clustering algorithm, which processes multispectral T1-weighted (T1w) and T2-weighted (T2w) MRI anatomical data, is proposed. This approach, which relies on an unsupervised Machine Learning technique, segments the prostate gland effectively. A total of 21 patients with suspected prostate cancer were enrolled in this study. Volume-based, spatial overlap-based and spatial distance-based metrics were used to quantitatively evaluate the accuracy of the obtained segmentation results with respect to the gold-standard boundaries delineated manually by an expert radiologist. The proposed multispectral segmentation method was compared with the same processing pipeline applied to either T2w or T1w MR images alone. The multispectral approach considerably outperforms the monoparametric ones, achieving an average Dice Similarity Coefficient of 90.77 ± 1.75, against 81.90 ± 6.49 and 82.55 ± 4.93 obtained by processing T2w and T1w imaging alone, respectively. Combining the structural information of T2w and T1w MR images significantly enhances prostate gland segmentation by exploiting the uniform gray appearance of the prostate on T1w MRI.
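The multispectral aspect amounts to describing each voxel by a (T1w, T2w) feature vector rather than a single intensity before clustering. A sketch of this feature construction is given below, assuming the scikit-fuzzy package for the FCM step; the function names, normalisation and cluster count are illustrative, not the authors' exact settings.

# Sketch of multispectral voxel features clustered with FCM (scikit-fuzzy assumed available).
import numpy as np
import skfuzzy as fuzz

def multispectral_fcm(t1w, t2w, roi_mask, n_clusters=2, m=2.0):
    feats = np.stack([t1w[roi_mask], t2w[roi_mask]]).astype(float)   # shape (2, N): one row per sequence
    feats = (feats - feats.mean(axis=1, keepdims=True)) / feats.std(axis=1, keepdims=True)
    cntr, u, *_ = fuzz.cmeans(feats, c=n_clusters, m=m, error=1e-5, maxiter=200)
    hard = np.argmax(u, axis=0)                  # defuzzification: highest membership wins
    labels = np.zeros(t1w.shape, dtype=np.uint8)
    labels[roi_mask] = hard + 1
    # which cluster corresponds to the prostate still has to be decided, e.g. from the centroids
    return labels, cntr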
- Published
- 2017
47. Area-based cell colony surviving fraction evaluation: A novel fully automatic approach using general-purpose acquisition hardware
- Author
-
Carmelo Militello, Leonardo Rundo, Vincenzo Conti, Luigi Minafra, Francesco Paolo Cammarata, Giancarlo Mauri, Maria Carla Gilardi, and Nunziatina Porcino
- Subjects
0301 basic medicine ,ING-INF/06 - BIOINGEGNERIA ELETTRONICA E INFORMATICA ,Clonogenic assays ,Computer science ,Cell Survival ,Cell Culture Techniques ,Health Informatics ,Local adaptive thresholding ,Cell Count ,Breast Neoplasms ,02 engineering and technology ,ING-INF/05 - SISTEMI DI ELABORAZIONE DELLE INFORMAZIONI ,Hough transform ,law.invention ,Circle Hough transform ,03 medical and health sciences ,law ,Area covered by colony ,0202 electrical engineering, electronic engineering, information engineering ,Humans ,Fraction (mathematics) ,Clonogenic assay ,Delivered radiation dose ,Colony-forming unit ,business.industry ,INF/01 - INFORMATICA ,Pattern recognition ,Thresholding ,Computer Science Applications ,Fully automatic surviving fraction evaluation ,General-purpose acquisition hardware ,Female ,MCF-7 Cells ,Neoplastic Stem Cells ,030104 developmental biology ,General purpose ,Fully automatic ,020201 artificial intelligence & image processing ,Artificial intelligence ,business - Abstract
Background: The current methodology for measuring the Surviving Fraction (SF) in clonogenic assays, a technique used to study the anti-proliferative effect of treatments on cell cultures, involves the manual counting of cell colony forming units. This procedure is operator-dependent and error-prone. Moreover, identifying the exact number of colonies is often not feasible because the high growth rate causes adjacent colonies to merge. Conventional assessment also ignores colony size, which is generally correlated with the delivered radiation dose or the administered cytotoxic agent. Method: Considering that the Area Covered by Colony (ACC) is proportional to the colony number and size as well as to the growth rate, we propose a novel fully automatic approach that exploits the Circle Hough Transform, to automatically detect the wells in the plate, and local adaptive thresholding, to calculate the percentage of ACC used for the SF quantification. This measurement relies on the covering percentage alone and does not consider the colony number, preventing inconsistencies due to intra- and inter-operator variability. Results: To evaluate the accuracy of the proposed approach, we compared the SFs obtained by our automatic ACC-based method against the conventional counting procedure. The achieved results (r = 0.9791 and r = 0.9682 on MCF7 and MCF10A cells, respectively) are highly correlated with the measurements obtained using the traditional approach based on colony number alone. Conclusions: The proposed computer-assisted methodology could be integrated into laboratory practice as an expert system for SF evaluation in clonogenic assays.
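The two building blocks named in the abstract map naturally onto standard scikit-image calls; the sketch below detects the wells with the Circle Hough Transform and estimates the ACC with local adaptive thresholding. The radius range, block size, and the assumption that stained colonies are darker than the well background are placeholders to adapt to the actual plates, not values from the paper.

# Sketch of well detection (Circle Hough Transform) and ACC estimation
# (local adaptive thresholding) with scikit-image; parameters are placeholders.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.filters import threshold_local
from skimage.draw import disk

def acc_per_well(plate_rgb, n_wells=6, radii=np.arange(180, 260, 5)):
    gray = rgb2gray(plate_rgb)
    edges = canny(gray, sigma=2)
    hspaces = hough_circle(edges, radii)                       # circular well detection
    _, cxs, cys, rads = hough_circle_peaks(hspaces, radii, total_num_peaks=n_wells)
    local_thr = threshold_local(gray, block_size=51)           # local adaptive threshold map
    colonies_all = gray < local_thr                            # stained colonies assumed darker
    coverages = []
    for cx, cy, r in zip(cxs, cys, rads):
        rr, cc = disk((cy, cx), r * 0.95, shape=gray.shape)    # stay inside the well rim
        well = np.zeros(gray.shape, dtype=bool)
        well[rr, cc] = True
        coverages.append((colonies_all & well).sum() / well.sum())   # ACC per well
    return coverages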
- Published
- 2017
- Full Text
- View/download PDF
48. A semi-automatic approach for epicardial adipose tissue segmentation and quantification on cardiac CT scans
- Author
-
Carmelo Militello, Leonardo Rundo, Patrizia Toia, Vincenzo Conti, Giorgio Russo, Clarissa Filorizzo, Erica Maffei, Filippo Cademartiri, Ludovico La Grutta, Massimo Midiri, and Salvatore Vitabile
- Subjects
Adult ,Male ,0301 basic medicine ,Computer science ,Adipose tissue ,Health Informatics ,Calcium score scans ,Cardiac adipose tissue quantification ,Coronary computed tomography angiography scans ,Epicardial fat volume ,Fat density quartiles ,Semi-automatic segmentation ,Correlation ,03 medical and health sciences ,Computer-Assisted ,Deep Learning ,0302 clinical medicine ,Fat density quartile ,Region of interest ,Image Interpretation, Computer-Assisted ,Humans ,Segmentation ,Adipose Tissue ,Algorithms ,Female ,Middle Aged ,Pericardium ,Tomography, X-Ray Computed ,Image Interpretation ,Tomography ,business.industry ,Calcium score scan ,Pattern recognition ,Repeatability ,Coronary computed tomography angiography scan ,X-Ray Computed ,Computer Science Applications ,030104 developmental biology ,Quartile ,Epicardial adipose tissue ,Semi automatic ,Artificial intelligence ,Settore MED/36 - Diagnostica Per Immagini E Radioterapia ,business ,030217 neurology & neurosurgery - Abstract
Many studies have shown that epicardial fat is associated with a higher risk of heart disease. Accurate epicardial adipose tissue quantification is still an open research issue. Since manual approaches are generally user-dependent and time-consuming, computer-assisted tools can considerably improve result repeatability and reduce the time required for an accurate segmentation. Unfortunately, fully automatic strategies might not always identify the Region of Interest (ROI) correctly, and they could require user interaction to handle unexpected events. This paper proposes a semi-automatic method for Epicardial Fat Volume (EFV) segmentation and quantification. Unlike supervised Machine Learning approaches, the method does not require any initial training or modeling phase to set up the system. As a further key novelty, the method also yields a subdivision of the adipose tissue density into quartiles. Quartile-based analysis conveys information about the distribution of fat densities, enabling an in-depth study of a possible correlation between fat amount, fat distribution, and heart disease. Experimental tests were performed on 50 Calcium Score (CaSc) series and 95 Coronary Computed Tomography Angiography (CorCTA) series. Area-based and distance-based metrics were used to evaluate the segmentation accuracy, obtaining a Dice Similarity Coefficient (DSC) of 93.74% and a Mean Absolute Distance (MAD) of 2.18 for CaSc, as well as a DSC of 92.48% and a MAD of 2.87 for CorCTA. Moreover, the Pearson and Spearman coefficients were computed to quantify the correlation between the ground-truth EFV and the corresponding automated measurement, obtaining 0.9591 and 0.9490 for CaSc, and 0.9513 and 0.9319 for CorCTA, respectively. In conclusion, the proposed EFV quantification and analysis method represents a clinically usable tool that assists the cardiologist in gaining insights into a specific clinical scenario, leading towards personalized diagnosis and therapy.
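Once the pericardial ROI has been segmented, EFV quantification and the quartile subdivision reduce to HU-range masking and percentile binning, as sketched below; the [-190, -30] HU adipose window is a commonly used range in the literature, not necessarily the one adopted in the paper, and the function names are ours.

# Sketch of EFV quantification and density-quartile subdivision given a pericardial ROI.
import numpy as np

def efv_and_quartiles(ct_hu, pericardial_mask, voxel_volume_mm3, hu_range=(-190, -30)):
    fat = pericardial_mask & (ct_hu >= hu_range[0]) & (ct_hu <= hu_range[1])
    efv_ml = fat.sum() * voxel_volume_mm3 / 1000.0               # mm^3 -> millilitres
    hu_values = ct_hu[fat]
    q1, q2, q3 = np.percentile(hu_values, [25, 50, 75])          # density quartile cut-offs
    quartile_labels = np.digitize(hu_values, [q1, q2, q3]) + 1   # 1..4 per fat voxel
    quartile_volumes_ml = [
        (quartile_labels == k).sum() * voxel_volume_mm3 / 1000.0 for k in (1, 2, 3, 4)
    ]
    return efv_ml, quartile_volumes_ml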
- Published
- 2019
- Full Text
- View/download PDF
49. Semi-automatic Brain Lesion Segmentation in Gamma Knife Treatments Using an Unsupervised Fuzzy C-Means Clustering Technique
- Author
-
Leonardo Rundo, Carmelo Militello, Salvatore Vitabile, Giorgio Russo, Pietro Pisciotta, Francesco Marletta, Massimo Ippolito, Corrado D’Arrigo, Massimo Midiri, and Maria Carla Gilardi
- Subjects
Computer science ,Gamma knife ,Brain lesions, Gamma knife treatments, MR imaging, Semi-automatic segmentation, Unsupervised FCM clustering ,Fuzzy logic ,Brain lesions ,Gamma knife treatments ,MR imaging ,Semi-automatic segmentation ,Unsupervised FCM clustering ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Computer vision ,Segmentation ,Radiation treatment planning ,Cluster analysis ,Settore ING-INF/05 - Sistemi Di Elaborazione Delle Informazioni ,business.industry ,Mr imaging ,Brain lesion ,Gamma knife treatment ,Semi automatic ,Artificial intelligence ,business ,030217 neurology & neurosurgery - Abstract
MR imaging is increasingly used in radiation treatment planning as well as for staging and assessing tumor response. The Leksell Gamma Knife® is a device for stereotactic neuro-radiosurgery used to treat lesions that are inaccessible to, or insufficiently treated by, traditional surgery or radiotherapy. The target to be treated with radiation beams is currently contoured through slice-by-slice manual segmentation on MR images. This procedure is time-consuming and operator-dependent. Repeatability of the segmentation results can be ensured only by using automatic or semi-automatic methods that support clinicians during the planning phase. In this paper, a semi-automatic segmentation method based on an unsupervised Fuzzy C-Means clustering technique is proposed. The presented approach allows for target segmentation and the calculation of its volume. Segmentation tests on 5 MRI series were performed, using both area-based and distance-based metrics, and the following average values were obtained: DS = 95.10, JC = 90.82, TPF = 95.86, FNF = 2.18, MAD = 0.302, MAXD = 1.260, H = 1.636.
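The area-based metrics reported above (DS, JC, TPF, FNF) are straightforward to compute from a segmented mask and the gold standard, as in the sketch below; the distance-based metrics (MAD, MAXD, H) additionally require the contour point sets.

# Sketch of the area-based metrics above (Dice DS, Jaccard JC, true positive
# fraction TPF, false negative fraction FNF), expressed as percentages.
import numpy as np

def overlap_metrics(seg, ref):
    seg, ref = seg.astype(bool), ref.astype(bool)
    tp = np.logical_and(seg, ref).sum()
    fn = np.logical_and(~seg, ref).sum()
    ds  = 100.0 * 2 * tp / (seg.sum() + ref.sum())
    jc  = 100.0 * tp / np.logical_or(seg, ref).sum()
    tpf = 100.0 * tp / ref.sum()
    fnf = 100.0 * fn / ref.sum()
    return {"DS": ds, "JC": jc, "TPF": tpf, "FNF": fnf}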
- Published
- 2016
- Full Text
- View/download PDF
50. Neuro-radiosurgery treatments: MRI brain tumor seeded image segmentation based on a cellular automata model
- Author
-
Leonardo Rundo, Carmelo Militello, Giorgio Russo, Pietro Pisciotta, Lucia Maria Valastro, Maria Gabriella Sabini, Salvatore Vitabile, Maria Carla Gilardi, and Giancarlo Mauri
- Subjects
medicine.medical_specialty ,Computer science ,medicine.medical_treatment ,02 engineering and technology ,Cellular Automata ,Brain tumors ,Cellular automata ,Gamma knife treatments ,MR imaging ,Semi-automatic segmentation ,Radiosurgery ,030218 nuclear medicine & medical imaging ,Theoretical Computer Science ,03 medical and health sciences ,0302 clinical medicine ,Gamma Knife treatments ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Segmentation ,Mri brain ,Modality (human–computer interaction) ,medicine.diagnostic_test ,business.industry ,INF/01 - INFORMATICA ,Magnetic resonance imaging ,Image segmentation ,Cellular automaton ,Radiation therapy ,Brain tumor ,020201 artificial intelligence & image processing ,Gamma Knife treatment ,Artificial intelligence ,Radiology ,business - Abstract
Gross Tumor Volume (GTV) segmentation on medical images is an open issue in neuro-radiosurgery. Magnetic Resonance Imaging (MRI) is the most prominent modality in radiation therapy for soft-tissue anatomical districts. Gamma Knife stereotactic neuro-radiosurgery is a minimally invasive technique used to treat tumors that are inaccessible to, or insufficiently treated by, traditional surgery or radiotherapy. During the planning phase, the GTV is usually contoured by radiation oncologists using a manual segmentation procedure on MR images. This methodology is certainly time-consuming and operator-dependent. Delineation repeatability, in terms of both intra- and inter-operator reliability, can only be obtained by using computer-assisted approaches. In this paper, a novel semi-automatic segmentation method based on Cellular Automata is proposed. The developed approach allows for GTV segmentation and computes the lesion volume to be treated. The method was evaluated on 10 brain cancers, using both area-based and distance-based metrics.
- Published
- 2016