9 results for "Pereañez M"
Search Results
2. Semi-Automatic Graphical Tool for Measuring Coronary Artery Spatially Weighted Calcium Score from Gated Cardiac Computed Tomography Images.
- Author
Patel HJ, Kaufman AE, Pereañez M, Soultanidis G, Ramachandran S, Naidu S, Mani V, Fayad ZA, and Robson PM
- Subjects
- Humans, Calcium, Coronary Vessels diagnostic imaging, Tomography, X-Ray Computed methods, Reproducibility of Results, Coronary Angiography methods, Coronary Artery Disease diagnostic imaging, Calcinosis
- Abstract
The current standard for measuring coronary artery calcification to determine the extent of atherosclerosis is the Agatston score, calculated from computed tomography (CT). However, the Agatston score disregards pixel values below 130 Hounsfield Units (HU) and calcium regions smaller than 1 mm². Due to this thresholding, the score is not sensitive to small, weakly attenuating regions of calcium deposition and may not detect nascent micro-calcification. A recently proposed metric, the spatially weighted calcium score (SWCS), also uses CT but applies no HU threshold and does not require elevated signals in contiguous pixels. The SWCS is therefore sensitive to weakly attenuating, smaller calcium deposits and may improve the measurement of coronary heart disease risk. Currently, the SWCS is underutilized owing to its added computational complexity. To promote translation of the SWCS into clinical research and to enable reliable, repeatable computation of the score, the aim of this study was to develop a semi-automatic graphical tool that calculates both the SWCS and the Agatston score. The program requires gated cardiac CT scans with a calcium hydroxyapatite phantom in the field of view. The phantom allows a weighting function to be derived, which adjusts each pixel's weight and mitigates signal variations and variability between scans. With all three anatomical views visible simultaneously, the user traces the course of the four main coronary arteries by placing points or regions of interest. Features such as scroll-to-zoom, double-click to delete, and brightness/contrast adjustment, along with written guidance at every step, make the program user-friendly. Once tracing of the arteries is complete, the program generates reports that include the scores and snapshots of any visible calcium. The SWCS may reveal the presence of subclinical disease, enabling early intervention and lifestyle changes.
- Published
- 2023
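The contrast between the two scores described in this abstract can be sketched in a few lines of Python. This is a minimal illustration, not the tool's actual code: linear_weight is a hypothetical stand-in for the phantom-calibrated weighting function, while the Agatston density factors follow the standard published bins.

```python
import numpy as np

def agatston_weight(max_hu: float) -> int:
    """Standard Agatston density factor for a lesion's peak HU."""
    if max_hu >= 400: return 4
    if max_hu >= 300: return 3
    if max_hu >= 200: return 2
    if max_hu >= 130: return 1
    return 0

def agatston_score(roi_hu: np.ndarray, pixel_area_mm2: float) -> float:
    """Thresholded score: ignores pixels < 130 HU and lesions < 1 mm^2."""
    mask = roi_hu >= 130
    area = mask.sum() * pixel_area_mm2
    if area < 1.0:                    # small lesions are discarded entirely
        return 0.0
    return area * agatston_weight(roi_hu[mask].max())

def swcs(roi_hu: np.ndarray, weight_fn) -> float:
    """Threshold-free score: every pixel contributes via a weighting
    function; the published method calibrates this function from a
    calcium hydroxyapatite phantom in the field of view."""
    return float(weight_fn(roi_hu).sum())

# Hypothetical stand-in for the phantom-calibrated weighting function.
linear_weight = lambda hu: np.clip(hu, 0, None) / 100.0

roi = np.array([[90., 120., 135.], [110., 128., 125.]])
print(agatston_score(roi, pixel_area_mm2=0.25))  # 0.0 -- lesion too small/faint
print(swcs(roi, linear_weight))                  # > 0 -- faint calcium still counts
```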
3. Quantitative CMR population imaging on 20,000 subjects of the UK Biobank imaging study: LV/RV quantification pipeline and its evaluation.
- Author
Attar R, Pereañez M, Gooya A, Albà X, Zhang L, de Vila MH, Lee AM, Aung N, Lukaschuk E, Sanghvi MM, Fung K, Paiva JM, Piechnik SK, Neubauer S, Petersen SE, and Frangi AF
- Subjects
- Biological Specimen Banks, Female, Humans, Imaging, Three-Dimensional, Male, Pattern Recognition, Automated, United Kingdom, Heart Ventricles diagnostic imaging, Image Interpretation, Computer-Assisted methods, Magnetic Resonance Imaging, Cine methods, Models, Statistical, Neural Networks, Computer
- Abstract
Population imaging studies generate data for developing and implementing personalised health strategies to prevent, or more effectively treat, disease. Large prospective epidemiological studies acquire imaging for pre-symptomatic populations. These studies enable the early discovery of alterations due to impending disease and the early identification of individuals at risk. Such studies pose new challenges requiring automatic image analysis. To date, few large-scale population-level cardiac imaging studies have been conducted. One such study stands out for its sheer size, careful implementation, and availability of top-quality expert annotation: the UK Biobank (UKB). The resulting massive imaging datasets (targeting ca. 100,000 subjects) have put published approaches for cardiac image quantification to the test. In this paper, we present and evaluate a cardiac magnetic resonance (CMR) image analysis pipeline that properly scales up and can provide a fully automatic analysis of the UKB CMR study. Without manual user interaction, our pipeline performs end-to-end image analytics from multi-view cine CMR images all the way to anatomical and functional bi-ventricular quantification, while maintaining relevant quality controls on the CMR input images and the resulting image segmentations. To the best of our knowledge, this is the first published attempt to fully automate the extraction of global and regional reference ranges of all key functional cardiovascular indexes, from both left and right cardiac ventricles, for a population of 20,000 subjects imaged at 50 time frames per subject, for a total of one million CMR volumes. In addition, our pipeline provides 3D anatomical bi-ventricular models of the heart. These models enable the extraction of detailed information on the morphodynamics of the two ventricles for subsequent association with genetic, omics, lifestyle, exposure, and other information provided in population imaging studies. We validated our proposed CMR analytics pipeline against manual expert readings on a reference cohort of 4620 subjects with contour delineations and corresponding clinical indexes. Our results show broad and significant agreement between the manually obtained reference indexes and those automatically computed via our framework. 80.67% of subjects were processed with a mean contour distance of less than 1 pixel, and 17.50% with a mean contour distance between 1 and 2 pixels. Finally, we compare our pipeline with a recently published approach reporting on UKB data and based on deep learning. Our comparison shows similar performance in terms of segmentation accuracy with respect to human experts. (Crown Copyright © 2019. Published by Elsevier B.V. All rights reserved.)
- Published
- 2019
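The mean contour distance reported in the evaluation above is a standard agreement metric between automatic and expert delineations. The following Python is a generic sketch of a symmetric variant on toy contours, not the pipeline's own evaluation code.

```python
import numpy as np

def mean_contour_distance(auto_pts: np.ndarray, manual_pts: np.ndarray) -> float:
    """Symmetric mean point-to-contour distance in pixels: for each point
    on one contour, take the distance to the nearest point on the other,
    then average both directions (the paper reports the share of subjects
    under 1 and under 2 pixels)."""
    d = np.linalg.norm(auto_pts[:, None, :] - manual_pts[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy contours: the automatic contour is a slightly shifted copy of the manual one.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
manual = np.stack([20 * np.cos(theta) + 64, 20 * np.sin(theta) + 64], axis=1)
auto = manual + np.array([0.4, -0.3])
print(f"mean contour distance: {mean_contour_distance(auto, manual):.2f} px")
```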
4. Automatic initialization and quality control of large-scale cardiac MRI segmentations.
- Author
Albà X, Lekadir K, Pereañez M, Medrano-Gracia P, Young AA, and Frangi AF
- Subjects
- Automation, Humans, Magnetic Resonance Imaging instrumentation, Quality Control, Magnetic Resonance Imaging methods
- Abstract
Continuous advances in imaging technologies enable ever more comprehensive phenotyping of human anatomy and physiology. The concomitant reduction of imaging costs has resulted in widespread use of imaging in large clinical trials and population imaging studies. Magnetic Resonance Imaging (MRI), in particular, offers one-stop-shop multidimensional biomarkers of cardiovascular physiology and pathology. A wide range of analysis methods offer sophisticated cardiac image assessment and quantification for clinical and research studies. However, most methods have only been evaluated on relatively small databases often not accessible for open and fair benchmarking. Consequently, published performance indices are not directly comparable across studies, and their translation and scalability to large clinical trials or population imaging cohorts are uncertain. Most existing techniques still rely on considerable manual intervention for the initialization and quality control of the segmentation process, which becomes prohibitive when dealing with thousands of images. The contributions of this paper are three-fold. First, we propose a fully automatic method for initializing cardiac MRI segmentation, using image features and random forest regression to predict an initial position of the heart and key anatomical landmarks in an MRI volume. In processing a full imaging database, the technique predicts the optimal corrective displacements and positions in relation to the initial rough intersections of the long- and short-axis images. Second, we introduce for the first time a quality control measure capable of identifying incorrect cardiac segmentations with no visual assessment. The method uses statistical, pattern, and fractal descriptors in a random forest classifier to detect failures to be corrected or removed from subsequent statistical analysis. Finally, we validate these new techniques within a full pipeline for cardiac segmentation applicable to large-scale cardiac MRI databases. The results, obtained on over 1200 cases from the Cardiac Atlas Project, show the promise of fully automatic initialization and quality control for population studies. (Copyright © 2017 Elsevier B.V. All rights reserved.)
- Published
- 2018
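The quality-control idea above, flagging failed segmentations from descriptors alone with a random forest and no visual assessment, can be sketched generically in Python with scikit-learn. The descriptors below are hypothetical toy features on synthetic masks; the paper uses statistical, pattern, and fractal descriptors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def descriptors(mask: np.ndarray) -> np.ndarray:
    """Toy descriptors of a binary segmentation: volume fraction,
    a boundary-roughness proxy, and axial extent."""
    vol = mask.mean()
    rough = np.abs(np.diff(mask.astype(float), axis=0)).mean()
    extent = mask.any(axis=(1, 2)).mean()
    return np.array([vol, rough, extent])

def good_seg():
    """Plausible segmentation: a compact blob with a little noise."""
    m = np.zeros((16, 16, 16))
    m[4:12, 4:12, 4:12] = 1
    m[rng.random(m.shape) < 0.02] = 1
    return m

def failed_seg():
    """Failed segmentation: scattered noise with no coherent object."""
    return (rng.random((16, 16, 16)) > 0.5).astype(float)

X = np.array([descriptors(good_seg()) for _ in range(20)] +
             [descriptors(failed_seg()) for _ in range(20)])
y = np.array([1] * 20 + [0] * 20)   # 1 = accept, 0 = flag as failure

qc = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(qc.predict(np.array([descriptors(good_seg()), descriptors(failed_seg())])))
```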
5. An Algorithm for the Segmentation of Highly Abnormal Hearts Using a Generic Statistical Shape Model.
- Author
Albà X, Pereañez M, Hoogendoorn C, Swift AJ, Wild JM, Frangi AF, and Lekadir K
- Subjects
- Algorithms, Cardiomyopathy, Hypertrophic diagnostic imaging, Cardiomyopathy, Hypertrophic pathology, Humans, Hypertension, Pulmonary diagnostic imaging, Hypertension, Pulmonary pathology, Reproducibility of Results, Cardiac Imaging Techniques methods, Heart diagnostic imaging, Models, Cardiovascular, Models, Statistical, Myocardium pathology
- Abstract
Statistical shape models (SSMs) have been widely employed in cardiac image segmentation. However, in conditions that induce severe shape abnormality and remodeling, such as in the case of pulmonary hypertension (PH) or hypertrophic cardiomyopathy (HCM), a single SSM is rarely capable of capturing the anatomical variability in the extremes of the distribution. This work presents a new algorithm for the segmentation of severely abnormal hearts. The algorithm is highly flexible, as it does not require a priori knowledge of the involved pathology or any specific parameter tuning to be applied to the cardiac image under analysis. The fundamental idea is to approximate the gross effect of the abnormality with a virtual remodeling transformation between the patient-specific geometry and the average shape of the reference model (e.g., average normal morphology). To define this mapping, a set of landmark points are automatically identified during boundary point search, by estimating the reliability of the candidate points. With the obtained transformation, the feature points extracted from the patient image volume are then projected onto the space of the reference SSM, where the model is used to effectively constrain and guide the segmentation process. The extracted shape in the reference space is finally propagated back to the original image of the abnormal heart to obtain the final segmentation. Detailed validation with patients diagnosed with PH and HCM shows the robustness and flexibility of the technique for the segmentation of highly abnormal hearts of different pathologies.
- Published
- 2016
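The step in which the reference SSM "constrains and guides" the segmentation is, in standard active-shape-model practice, a projection of the candidate shape onto the model's eigenspace with the shape parameters clamped to a plausible range. The Python sketch below shows that generic constraint step only; the virtual remodeling transform itself is not shown, and the ±3σ limit is the conventional choice, not a value from the paper.

```python
import numpy as np

def fit_ssm(x: np.ndarray, mean: np.ndarray, P: np.ndarray,
            eigvals: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Project a candidate shape onto the SSM and constrain it.

    x, mean : flattened landmark vectors (3N,), assumed already mapped
              into the model's reference space.
    P       : (3N, m) matrix of orthonormal eigenvectors.
    eigvals : (m,) per-mode variances.
    Shape parameters are clamped to +/- k standard deviations, the usual
    plausibility constraint that guides the segmentation.
    """
    b = P.T @ (x - mean)              # shape parameters
    lim = k * np.sqrt(eigvals)
    b = np.clip(b, -lim, lim)         # constrain to plausible shapes
    return mean + P @ b               # reconstructed, constrained shape

# Toy model: 10 landmarks (30-D), 5 orthonormal modes via QR.
rng = np.random.default_rng(1)
mean = rng.normal(size=30)
P, _ = np.linalg.qr(rng.normal(size=(30, 5)))
eigvals = np.array([5.0, 3.0, 2.0, 1.0, 0.5])
x = mean + P @ np.array([10.0, 0.0, 0.0, 0.0, 0.0])   # exaggerated shape
x_fit = fit_ssm(x, mean, P, eigvals)
print(np.allclose(P.T @ (x_fit - mean),
                  [3 * np.sqrt(5.0), 0, 0, 0, 0]))    # True: mode 1 was clamped
```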
6. Accurate Segmentation of Vertebral Bodies and Processes Using Statistical Shape Decomposition and Conditional Models.
- Author
Pereañez M, Lekadir K, Castro-Mateos I, Pozo JM, Lazáry Á, and Frangi AF
- Subjects
- Adult, Algorithms, Female, Humans, Male, Middle Aged, Models, Statistical, Imaging, Three-Dimensional methods, Lumbar Vertebrae diagnostic imaging, Tomography, X-Ray Computed methods
- Abstract
Detailed segmentation of the vertebrae is an important prerequisite in various applications of image-based spine assessment, surgery, and biomechanical modeling. In particular, accurate segmentation of the processes is required for image-guided interventions, for example for optimal placement of bone grafts between the transverse processes. Furthermore, the geometry of the processes is now required in musculoskeletal models due to their interaction with the muscles and ligaments. In this paper, we present a new method for detailed segmentation of both the vertebral bodies and processes based on statistical shape decomposition and conditional models. The proposed technique is specifically developed to handle the complex geometry of the processes and the large variability between individuals. The key technical novelty in this work is the introduction of a part-based statistical decomposition of the vertebrae, such that the complexity of the subparts is effectively reduced and model specificity is increased. Subsequently, in order to maintain the statistical and anatomical coherence of the ensemble, conditional models are used to capture the statistical inter-relationships between the different subparts. For shape reconstruction and segmentation, a robust model fitting procedure is used to exclude improbable inter-part relationships in the estimation of the shape parameters. Segmentation results based on a dataset of 30 healthy CT scans and a dataset of 10 pathological scans show point-to-surface error improvements of 20% and 17%, respectively, and demonstrate the potential of the proposed technique for detailed vertebral modeling.
- Published
- 2015
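The conditional models that keep the subparts coherent are, in the usual formulation, conditional Gaussians over shape parameters: given the parameters of one subpart, the expected parameters of another follow from the joint covariance. A minimal sketch of that standard conditional-mean prediction follows; the toy covariance is synthetic, and the paper's robust fitting procedure is not shown.

```python
import numpy as np

def conditional_mean(x1, mu1, mu2, S11, S21):
    """Conditional Gaussian prediction of one subpart given another:
    mu_{2|1} = mu2 + S21 @ inv(S11) @ (x1 - mu1).
    This is the standard conditional model used to keep statistically
    decomposed subparts (e.g., vertebral body vs. processes) coherent."""
    return mu2 + S21 @ np.linalg.solve(S11, x1 - mu1)

# Toy joint distribution over two 2-D "subpart" parameter vectors.
rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
S = A @ A.T + 4 * np.eye(4)          # joint covariance (symmetric positive definite)
mu = np.zeros(4)
S11, S21 = S[:2, :2], S[2:, :2]

x1 = np.array([1.0, -0.5])           # observed "body" parameters
print(conditional_mean(x1, mu[:2], mu[2:], S11, S21))  # predicted "process" parameters
```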
7. Statistical Interspace Models (SIMs): Application to Robust 3D Spine Segmentation.
- Author
Castro-Mateos I, Pozo JM, Pereañez M, Lekadir K, Lazary A, and Frangi AF
- Subjects
- Adult, Algorithms, Databases, Factual, Female, Humans, Male, Middle Aged, Tomography, X-Ray Computed methods, Imaging, Three-Dimensional methods, Models, Statistical, Spine diagnostic imaging
- Abstract
Statistical shape models (SSMs) are used to introduce shape priors in the segmentation of medical images. However, such models require large training datasets in the case of multi-object structures, since they must capture not only the individual shape variations but also the relative positions and orientations among objects. A solution to overcome this limitation is to model each individual shape independently. However, this approach does not take into account the relative positions, orientations, and shapes of the parts of an articulated object, which may result in unrealistic geometries, such as object overlaps. In this article, we propose a new statistical model, the Statistical Interspace Model (SIM), which provides information about the interaction of all the individual structures by modeling the interspace between them. The SIM is described using relative position vectors between pairs of points that belong to different objects and face each other. These vectors are divided into magnitude and direction, with each group modeled as an independent manifold. The SIM was included in a segmentation framework that contains an SSM per individual object. This framework was tested using three distinct datasets of CT images of the spine. Results show that the SIM completely eliminated the inter-process overlap while improving the segmentation accuracy.
- Published
- 2015
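The interspace description above (relative vectors between facing points of adjacent objects, split into magnitude and direction) can be sketched as follows. Pairing points by index is a simplification of the facing-point correspondence used in the paper.

```python
import numpy as np

def interspace_features(pts_a: np.ndarray, pts_b: np.ndarray):
    """Relative position vectors between paired facing points of two
    objects, split into magnitude and unit direction: the two groups
    the SIM models as independent manifolds."""
    v = pts_b - pts_a                    # (N, 3) interspace vectors
    mag = np.linalg.norm(v, axis=1)      # magnitudes
    direction = v / mag[:, None]         # unit directions
    return mag, direction

# Toy facing surfaces of two adjacent vertebrae, 5 points each.
rng = np.random.default_rng(3)
upper = rng.normal(size=(5, 3))
lower = upper + np.array([0.0, 0.0, 2.0]) + 0.1 * rng.normal(size=(5, 3))
mag, direction = interspace_features(upper, lower)
print(mag.round(2))        # near 2.0: a consistent gap between the objects
print((mag > 0).all())     # an overlap would show up as collapsing magnitudes
```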
8. A framework for the merging of pre-existing and correspondenceless 3D statistical shape models.
- Author
Pereañez M, Lekadir K, Butakoff C, Hoogendoorn C, and Frangi AF
- Subjects
- Algorithms, Humans, Pattern Recognition, Automated methods, Reproducibility of Results, Sensitivity and Specificity, Caudate Nucleus anatomy & histology, Heart Ventricles anatomy & histology, Imaging, Three-Dimensional, Lumbar Vertebrae anatomy & histology, Magnetic Resonance Imaging, Models, Statistical, Tomography, X-Ray Computed
- Abstract
The construction of statistical shape models (SSMs) that are rich, i.e., that represent well the natural and complex variability of anatomical structures, is an important research topic in medical imaging. To this end, existing works have addressed the limited availability of training data by decomposing the shape variability hierarchically or by combining statistical and synthetic models built using artificially created modes of variation. In this paper, we present instead a method that merges multiple statistical models of 3D shapes into a single integrated model, thus effectively encoding extra variability that is anatomically meaningful, without the need for the original or new real datasets. The proposed framework has great flexibility due to its ability to merge multiple statistical models with unknown point correspondences. The approach makes it possible to re-use and complement pre-existing SSMs when the original raw data cannot be exchanged for ethical, legal, or practical reasons. The method comprises two main stages: (1) statistical model normalization and (2) statistical model integration. The normalization algorithm uses surface-based registration to bring the input models into a common shape parameterization with point correspondence established across eigenspaces. This allows the model fusion algorithm to be applied in a coherent manner across models, with the aim of obtaining a single unified statistical model of shape with improved generalization ability. The framework is validated with statistical models of the left and right cardiac ventricles, the L1 vertebra, and the caudate nucleus, constructed at distinct research centers based on different imaging modalities (CT and MRI) and point correspondences. The results demonstrate that the model integration is statistically and anatomically meaningful, with potential value for merging pre-existing multi-modality statistical models of 3D shapes. (Copyright © 2014 Elsevier B.V. All rights reserved.)
- Published
- 2014
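One generic way to integrate two already-normalized models given only their means and eigenspaces (no raw data) is moment matching followed by re-diagonalization. The Python sketch below illustrates that idea on synthetic models; it is a standard construction, not the paper's exact fusion algorithm, and the training-set sizes n1 and n2 are assumed known.

```python
import numpy as np

def merge_ssms(mu1, P1, s1, mu2, P2, s2, n1, n2):
    """Merge two normalized SSMs from their means, eigenvector matrices
    P (3N, m), and per-mode standard deviations s (m,). The pooled
    covariance of the two training populations (sizes n1, n2) is
    re-diagonalized via SVD of a stacked square-root factor."""
    w1, w2 = n1 / (n1 + n2), n2 / (n1 + n2)
    mu = w1 * mu1 + w2 * mu2
    d1, d2 = mu1 - mu, mu2 - mu
    # Columns of H satisfy H @ H.T == pooled covariance.
    H = np.hstack([np.sqrt(w1) * P1 * s1, np.sqrt(w2) * P2 * s2,
                   np.sqrt(w1) * d1[:, None], np.sqrt(w2) * d2[:, None]])
    U, sv, _ = np.linalg.svd(H, full_matrices=False)
    keep = sv > 1e-10
    return mu, U[:, keep], sv[keep]      # merged mean, modes, std devs

rng = np.random.default_rng(4)
D = 30
mu1, mu2 = rng.normal(size=D), rng.normal(size=D)
P1, _ = np.linalg.qr(rng.normal(size=(D, 4)))
P2, _ = np.linalg.qr(rng.normal(size=(D, 3)))
mu, P, s = merge_ssms(mu1, P1, np.array([3., 2., 1., .5]),
                      mu2, P2, np.array([2., 1., .5]), n1=100, n2=80)
print(P.shape)   # merged model keeps the modes surviving the rank cut
```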
9. Fusing correspondenceless 3D point distribution models.
- Author
Pereañez M, Lekadir K, Butakoff C, Hoogendoorn C, and Frangi A
- Subjects
- Computer Simulation, Humans, Magnetic Resonance Imaging methods, Models, Cardiovascular, Models, Statistical, Statistical Distributions, Tomography, X-Ray Computed methods, Anatomic Landmarks diagnostic imaging, Anatomic Landmarks pathology, Imaging, Three-Dimensional methods, Multimodal Imaging methods, Pattern Recognition, Automated methods, Subtraction Technique, Ventricular Dysfunction diagnosis
- Abstract
This paper presents a framework for the fusion of multiple point distribution models (PDMs) with unknown point correspondences. With this work, models built from distinct patient groups and imaging modalities can be merged, with the aim of obtaining a PDM that encodes a wider range of anatomical variability. To achieve this, two technical challenges are addressed. First, the model fusion must be carried out directly on the corresponding means and eigenvectors, as the original data are not always available and cannot be freely exchanged across centers for various legal and practical reasons. Second, the PDMs need to be normalized before fusion, as the point correspondence is unknown. The proposed framework is validated by integrating statistical models of the left and right ventricles of the heart constructed from different imaging modalities (MRI and CT) and with different landmark representations of the data. The results show that the integration is statistically and anatomically meaningful and that the quality of the resulting model is significantly improved.
- Published
- 2013
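The normalization step named in this abstract, bringing one PDM into another's point parameterization before fusion, amounts to consistently reindexing the mean and every eigenvector once a correspondence has been established. A minimal Python sketch follows, with a hypothetical permutation standing in for the result of registering the two mean shapes.

```python
import numpy as np

def reparameterize_pdm(mean, P, corr):
    """Bring a PDM into another model's point parameterization.
    corr[i] = index of this model's landmark matching target landmark i
    (e.g., from surface registration of the two mean shapes). The same
    reindexing is applied to the mean and to every eigenvector, so the
    normalized model can then be fused with the target eigenspace."""
    idx = (3 * np.asarray(corr)[:, None] + np.arange(3)).ravel()  # xyz triplets
    return mean[idx], P[idx, :]

# Toy model with 4 landmarks (12-D) and 2 modes, plus a known permutation.
rng = np.random.default_rng(5)
mean = rng.normal(size=12)
P, _ = np.linalg.qr(rng.normal(size=(12, 2)))
corr = [2, 0, 3, 1]   # hypothetical registration result
mean_n, P_n = reparameterize_pdm(mean, P, corr)
print(mean_n.shape, P_n.shape)               # (12,) (12, 2): same model, new point order
print(np.allclose(P_n.T @ P_n, np.eye(2)))   # permutation preserves orthonormality
```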