134 results for "Segars WP"
Search Results
2. CardSegNet: An adaptive hybrid CNN-vision transformer model for heart region segmentation in cardiac MRI.
- Author
- Aghapanah H, Rasti R, Kermani S, Tabesh F, Banaem HY, Aliakbar HP, Sanei H, and Segars WP
- Subjects
- Humans, Heart diagnostic imaging, Image Processing, Computer-Assisted methods, Deep Learning, Heart Ventricles diagnostic imaging, Algorithms, Magnetic Resonance Imaging methods, Neural Networks, Computer
- Abstract
Cardiovascular MRI (CMRI) is a non-invasive imaging technique adopted for assessing the structure and function of the blood circulatory system. Precise image segmentation is required to measure cardiac parameters and diagnose abnormalities through CMRI data. Because of anatomical heterogeneity and image variations, cardiac image segmentation is a challenging task. Quantification of cardiac parameters requires high-performance segmentation of the left ventricle (LV), right ventricle (RV), and LV myocardium from the background. The most straightforward solution is to segment the regions manually, a time-consuming and error-prone procedure. In this context, many semi- or fully automatic solutions have been proposed recently, among which deep learning-based methods have shown high performance in segmenting regions in CMRI data. In this study, a self-adaptive multi-attention (SMA) module is introduced to adaptively leverage multiple attention mechanisms for better segmentation. The SMA integrates, in a hybrid and end-to-end manner, convolutional position and channel attention mechanisms with a patch-tokenization-based vision transformer (ViT) attention mechanism. The CNN- and ViT-based attentions mine short- and long-range dependencies for more precise segmentation. The SMA module is applied in an encoder-decoder structure with a ResNet50 backbone, named CardSegNet. Furthermore, a deep supervision method with multiple loss functions is introduced to the CardSegNet optimizer to reduce overfitting and enhance the model's performance. 
The proposed model was validated on the ACDC2017 (n=100), M&Ms (n=321), and a local dataset (n=22) using 10-fold cross-validation, with promising segmentation results demonstrating that it outperforms its counterparts., Competing Interests: Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Copyright © 2024 Elsevier Ltd. All rights reserved.)
- Published
- 2024
- Full Text
- View/download PDF
3. Estimation of threshold thickness of residual normal tissue in lung dysfunction detectable by dynamic chest radiography: A virtual imaging trial.
- Author
- Yamaguchi S, Tanaka R, Matsumoto I, Ohkura N, Segars WP, Abadi E, and Samei E
- Abstract
Background: Dynamic chest radiography (DCR) is a recently developed functional x-ray imaging technique that detects pulmonary ventilation impairment as a decrease in changes in lung density during respiration. However, the diagnostic performance of DCR is uncertain owing to an insufficient number of clinical cases. One solution is virtual imaging trials (VITs), an emerging alternative method for efficiently evaluating medical imaging technology via computer simulation techniques., Purpose: This study aimed to estimate the typical threshold thickness of residual normal tissue below which the presence of emphysema may be detected by DCR, via VITs using virtual patients with different physiques and a user-defined ground truth., Methods: Twenty extended cardiac-torso (XCAT) phantoms that exhibited changes in lung density during respiration were generated to simulate virtual patients. To simulate a locally collapsed lung, an air sphere was inserted into each lung region in the phantom. The XCAT phantom was virtually projected using an x-ray simulator. The respiratory changes in pixel value (ΔPV) were measured on the projected air spheres (simulated lesions) to calculate the percentage of decrease (ΔPV%) relative to ΔPVexp-ins in the absence of an air sphere. The relationship between the amount of residual normal tissue and ΔPV% was fitted to a cubic approximation curve (hereafter, performance curve), and the threshold at which the ΔPV% began to decrease (normal-tissuethre) was determined. The goodness of fit for each performance curve was evaluated according to the coefficient of determination (R²) and the 95% confidence interval derived from the standard errors between the measured and theoretical values corresponding to each performance curve. The ΔPV% was also visualized with color scaling to validate the results of the VITs in both virtual and clinical patients., Results: For each lung region in all body sizes, the ΔPV% decreased as the amount of residual normal tissue decreased and could be defined as a function of the amount of residual normal tissue in front of and behind the simulated lesions, with high R² values. Meanwhile, the difference between the measured and theoretical values corresponding to each performance curve was only partially included in the 95% confidence interval. The normal-tissuethre values were 146.0, 179.5, and 170.9 mm for the upper, middle, and lower lungs, respectively. These thresholds were demonstrated in virtual patients and in one real patient in whom the residual normal tissue was less than normal-tissuethre: any reduction in the residual normal tissue was reflected as a reduced ΔPV and depicted as reduced color intensity., Conclusions: The performance of DCR-based pulmonary impairment assessment depends on the amount of residual normal tissue in front of and behind the lesion rather than on the lesion size. The performance curve can be defined as a function of the amount of residual normal tissue in each lung region, with a specific threshold of remaining normal tissue below which lesions become detectable as a decrease in ΔPV. 
The results of VITs are expected to accelerate future clinical trials for DCR-based pulmonary function assessment., (© 2024 The Author(s). Medical Physics published by Wiley Periodicals LLC on behalf of American Association of Physicists in Medicine.)
- Published
- 2024
- Full Text
- View/download PDF
4. VLST: Virtual Lung Screening Trial for Lung Cancer Detection Using Virtual Imaging Trial.
- Author
- Tushar FI, Vancoillie L, McCabe C, Kavuri A, Dahal L, Harrawood B, Fryling M, Zarei M, Sotoudeh-Paima S, Ho FC, Ghosh D, Luo S, Segars WP, Abadi E, Lafata KJ, Samei E, and Lo JY
- Abstract
Importance: The efficacy of lung cancer screening can be significantly impacted by the imaging modality used. This Virtual Lung Screening Trial (VLST) addresses the critical need for precision in lung cancer diagnostics and the potential for reducing unnecessary radiation exposure in clinical settings., Objectives: To establish a virtual imaging trial (VIT) platform that accurately simulates real-world lung screening trials (LSTs) to assess the diagnostic accuracy of CT and CXR modalities., Design, Setting, and Participants: Utilizing computational models and machine learning algorithms, we created a diverse virtual patient population. The cohort, designed to mirror real-world demographics, was assessed using virtual imaging techniques that reflect historical imaging technologies., Main Outcomes and Measures: The primary outcome was the difference in the area under the curve (AUC) between the CT and CXR modalities across lesion types and sizes., Results: The study analyzed 298 CT and 313 CXR simulated images from 313 virtual patients, with a lesion-level AUC of 0.81 (95% CI: 0.78-0.84) for CT and 0.55 (95% CI: 0.53-0.56) for CXR. At the patient level, CT demonstrated an AUC of 0.85 (95% CI: 0.80-0.89), compared to 0.53 (95% CI: 0.47-0.60) for CXR. Subgroup analyses indicated CT's superior performance in detecting homogeneous lesions (lesion-level AUC of 0.97) and heterogeneous lesions (lesion-level AUC of 0.71), as well as in identifying larger nodules (AUC of 0.98 for nodules > 8 mm)., Conclusion and Relevance: The VIT platform validated the superior diagnostic accuracy of CT over CXR, especially for smaller nodules, underscoring its potential to replicate real clinical imaging trials. These findings advocate for the integration of virtual trials in the evaluation and improvement of imaging-based diagnostic tools.
- Published
- 2024
5. A systematic assessment and optimization of photon-counting CT for lung density quantifications.
- Author
- Sotoudeh-Paima S, Segars WP, Ghosh D, Luo S, Samei E, and Abadi E
- Subjects
- Humans, Cross-Sectional Studies, Retrospective Studies, Phantoms, Imaging, Tomography, X-Ray Computed methods, Lung diagnostic imaging, Lung Diseases, Pulmonary Emphysema, Emphysema
- Abstract
Background: Photon-counting computed tomography (PCCT) has recently emerged into clinical use; however, its optimum imaging protocols and added benefits remain unknown in terms of providing more accurate lung density quantification compared to energy-integrating computed tomography (EICT) scanners., Purpose: To systematically assess the performance of a clinical PCCT scanner for lung density quantifications and compare it against EICT., Methods: This cross-sectional study involved a retrospective analysis of subjects scanned (August-December 2021) using a clinical PCCT system. The influence of altering reconstruction parameters (reconstruction kernel, pixel size, slice thickness) was studied. A virtual CT dataset of anthropomorphic virtual subjects was created to demonstrate the correspondence of the findings to the clinical dataset and to perform systematic imaging experiments not possible with human subjects. The virtual subjects were imaged using a validated, scanner-specific CT simulator of a PCCT scanner and two EICT scanners (denoted EICT A and B). The images were evaluated using the mean absolute error (MAE) of lung and emphysema density against their corresponding ground truth., Results: Clinical and virtual PCCT datasets showed similar trends, with sharper kernels and smaller voxel sizes increasing the percentage of low-attenuation areas below -950 HU (LAA-950) by up to 15.7 ± 6.9% and 11.8 ± 5.5%, respectively. Under the conditions studied, higher doses, thinner slices, smaller pixel sizes, iterative reconstructions, and quantitative kernels with medium sharpness resulted in lower lung MAE values. While using these settings for PCCT, changes in the dose level (13 to 1.3 mGy), slice thickness (0.4 to 1.5 mm), pixel size (0.49 to 0.98 mm), reconstruction technique (70 keV-VMI to wFBP), and kernel (Qr48 to Qr60) increased lung MAE by 15.3 ± 2.0, 1.4 ± 0.6, 2.2 ± 0.3, 4.2 ± 0.8, and 9.1 ± 1.6 HU, respectively. 
At the optimum settings identified per scanner, PCCT images exhibited lower lung and emphysema MAE than those of the EICT scanners (by 2.6 ± 1.0 and 9.6 ± 3.4 HU compared to EICT A, and by 4.8 ± 0.8 and 7.4 ± 2.3 HU compared to EICT B). The accuracy of lung density measurements was correlated with the subjects' mean lung density (p < 0.05), as measured by PCCT at the optimum setting under the conditions studied., Conclusion: Photon-counting CT demonstrated superior performance in density quantifications, with the influence of imaging parameters in line with that of energy-integrating CT scanners. The technology offers improvement in lung quantifications, thus demonstrating potential toward more objective assessment of respiratory conditions., (© 2024 American Association of Physicists in Medicine.)
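The density metrics evaluated in this study (LAA-950 and the lung MAE against ground truth) reduce to simple voxelwise operations. A minimal sketch with NumPy, using hypothetical HU values rather than data from the study:

```python
import numpy as np

def laa_950(lung_hu):
    """Percentage of lung voxels below -950 HU (LAA-950)."""
    lung_hu = np.asarray(lung_hu)
    return 100.0 * np.mean(lung_hu < -950)

def lung_mae(measured_hu, ground_truth_hu):
    """Mean absolute error of lung density against a known ground truth."""
    measured_hu = np.asarray(measured_hu)
    ground_truth_hu = np.asarray(ground_truth_hu)
    return np.mean(np.abs(measured_hu - ground_truth_hu))

# Hypothetical lung voxel values (HU) from a reconstruction and its ground truth
recon = np.array([-980.0, -920.0, -955.0, -870.0, -960.0])
truth = np.array([-975.0, -930.0, -950.0, -880.0, -965.0])
print(laa_950(recon))        # → 60.0 (3 of 5 voxels below -950 HU)
print(lung_mae(recon, truth))  # → 7.0 HU
```

In a virtual imaging trial, `truth` would come from the phantom's known tissue densities, which is exactly what clinical data cannot provide.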
- Published
- 2024
- Full Text
- View/download PDF
6. Development of physiologically-informed computational coronary artery plaques for use in virtual imaging trials.
- Author
- Sauer TJ, Buckler AJ, Abadi E, Daubert M, Douglas PS, Samei E, and Segars WP
- Subjects
- Humans, Heart, Phantoms, Imaging, Computer Simulation, Coronary Vessels diagnostic imaging, Tomography, X-Ray Computed methods
- Abstract
Background: As a leading cause of death worldwide, cardiovascular disease is of great clinical importance. Among cardiovascular diseases, coronary artery disease (CAD) is a key contributor, attributed as the cause of 10% of all deaths annually. The prevalence of CAD is commensurate with the rise in new medical imaging technologies intended to aid in its diagnosis and treatment. The clinical trials required to validate and optimize these technologies need a large cohort of carefully controlled patients, take considerable time to complete, and can be prohibitively expensive. A safer, faster, less expensive alternative is virtual imaging trials (VITs), which combine virtual patients or phantoms with accurate computer models of imaging devices., Purpose: In this work, we develop realistic, physiologically informed models of coronary plaques for application in cardiac imaging VITs., Methods: Histology images of plaques at micron-level resolution were used to train a deep convolutional generative adversarial network (DC-GAN) to create a library of anatomically variable plaque models with clinical anatomical realism. The stability of each plaque was evaluated by finite element analysis (FEA), in which plaque components and vessels were meshed as volumes, modeled as specialized tissues, and subjected to the range of normal coronary blood pressures. To demonstrate the utility of the plaque models, we combined them with the whole-body XCAT computational phantom to perform initial simulations comparing standard energy-integrating detector (EID) CT with photon-counting detector (PCD) CT., Results: Our results show the network is capable of generating realistic, anatomically variable plaques. 
Our simulation results provide an initial demonstration of the utility of the generated plaque models as targets to compare different imaging devices., Conclusions: Vast, realistic, and variable CAD pathologies can be generated to incorporate into computational phantoms for VITs. There they can serve as a known truth from which to optimize and evaluate cardiac imaging technologies quantitatively., (© 2024 American Association of Physicists in Medicine.)
- Published
- 2024
- Full Text
- View/download PDF
7. Development and Application of a Virtual Imaging Trial Framework for Longitudinal Quantification of Emphysema in CT.
- Author
- Sotoudeh-Paima S, Ho FC, Nejad MG, Kavuri A, O'Sullivan-Murphy B, Lynch DA, Segars WP, Samei E, and Abadi E
- Abstract
Pulmonary emphysema is a progressive lung disease that requires accurate evaluation for optimal management. This task, possible using quantitative CT, is particularly challenging as scanner and patient attributes change over time, negatively impacting the CT-derived quantitative measures. Efforts to minimize such variations have been limited by the absence of ground truth in clinical data, thus necessitating reliance on clinical surrogates, which may not have one-to-one correspondence with CT-based findings. This study aimed to develop the first suite of human models with emphysema at multiple time points, enabling longitudinal assessment of disease progression with access to ground truth. A total of 14 virtual subjects were modeled across three time points. Each human model was virtually imaged using a validated imaging simulator (DukeSim), modeling an energy-integrating CT scanner. The models were scanned at two dose levels and reconstructed with two reconstruction kernels, slice thicknesses, and pixel sizes. The developed longitudinal models were further utilized to demonstrate utility in algorithm testing and development. Two previously developed image processing algorithms (CT-HARMONICA, EmphysemaSeg) were evaluated. The results demonstrated the efficacy of both algorithms in improving the accuracy and precision of longitudinal quantifications, from 6.1 ± 6.3% to 1.1 ± 1.1% and 1.6 ± 2.2% across years 0-5. Further investigation into EmphysemaSeg identified that baseline emphysema severity, defined as >5% emphysema at year 0, contributed to its reduced performance. This finding highlights the value of virtual imaging trials in enhancing the explainability of algorithms. Overall, the developed longitudinal human models enabled ground-truth-based assessment of image processing algorithms for lung quantifications.
- Published
- 2024
- Full Text
- View/download PDF
8. Quantitative accuracy of lung function measurement using parametric response mapping: A virtual imaging study.
- Author
- Kavuri A, Ho FC, Ghojogh-Nejad M, Sotoudeh-Paima S, Samei E, Segars WP, and Abadi E
- Abstract
Parametric response mapping (PRM) is a voxel-based quantitative CT imaging biomarker that measures the severity of chronic obstructive pulmonary disease (COPD) by analyzing both inspiratory and expiratory CT scans. Although PRM-derived measurements have been shown to predict disease severity and phenotype, their quantitative accuracy is impacted by the variability of scanner settings and patient conditions. The aim of this study was to evaluate the variability of PRM-based measurements due to changes in scanner types and configurations. We developed 10 human chest models with emphysema and air trapping at end-inspiration and end-expiration states. These models were virtually imaged using a scanner-specific CT simulator (DukeSim) to create CT images at different acquisition settings for energy-integrating and photon-counting CT systems. The CT images were used to estimate PRM maps. The quantified measurements were compared with ground truth values to evaluate the deviations in the measurements. Results showed that PRM measurements varied with scanner type and configuration. The emphysema volume was overestimated by 3 ± 9.5% (mean ± standard deviation) of the lung volume, and the functional small airway disease (fSAD) volume was underestimated by 7.5 ± 19% of the lung volume. PRM measurements were more accurate and precise when images were acquired with photon-counting CT, a higher dose, a smoother kernel, and a larger pixel size. This study demonstrates the development and utility of virtual imaging tools for the systematic assessment of quantitative biomarker accuracy.
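PRM classifies each co-registered voxel by joint HU thresholds at inspiration and expiration. A minimal sketch assuming the commonly cited cutoffs (-950 HU inspiratory, -856 HU expiratory); the thresholds and handling of edge cases in this study may differ:

```python
import numpy as np

def prm_classify(insp_hu, exp_hu, t_insp=-950, t_exp=-856):
    """Classify co-registered voxels: 0 = normal, 1 = fSAD, 2 = emphysema.

    fSAD: normal inspiratory density but low expiratory density (air trapping).
    Emphysema: low density on both phases. Voxels matching neither rule stay 0.
    """
    insp_hu = np.asarray(insp_hu)
    exp_hu = np.asarray(exp_hu)
    labels = np.zeros(insp_hu.shape, dtype=int)
    labels[(insp_hu >= t_insp) & (exp_hu < t_exp)] = 1   # fSAD
    labels[(insp_hu < t_insp) & (exp_hu < t_exp)] = 2    # emphysema
    return labels

# Hypothetical (inspiration, expiration) voxel pairs in HU
insp = np.array([-900.0, -960.0, -920.0, -970.0])
exp_ = np.array([-800.0, -900.0, -880.0, -700.0])
print(prm_classify(insp, exp_))  # → [0 2 1 0]
```

Volume fractions such as the emphysema and fSAD percentages reported above are then just `100 * np.mean(labels == k)` over the lung mask.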
- Published
- 2024
- Full Text
- View/download PDF
9. Surface-based anthropomorphic bone structures for use in high-resolution simulated medical imaging.
- Author
- Sauer TJ, McCabe C, Abadi E, Samei E, and Segars WP
- Subjects
- Adult, Humans, Male, Computer Simulation, Phantoms, Imaging, Bone and Bones diagnostic imaging, Tomography, X-Ray Computed methods, Algorithms
- Abstract
Objective. Virtual imaging trials enable efficient assessment and optimization of medical imaging devices and techniques via simulation rather than physical studies. These studies require realistic, detailed ground-truth models or phantoms of the relevant anatomy or physiology. Anatomical structures within computational phantoms are typically based on medical imaging data; however, for small and intricate structures (e.g. trabecular bone), it is not reasonable to use existing clinical data, as the spatial resolution of the scans is insufficient. In this study, we develop a mathematical method to generate arbitrary-resolution bone structures within virtual patient models (XCAT phantoms) to model the appearance of CT-imaged trabecular bone. Approach. Given surface definitions of a bone, an algorithm was implemented to generate stochastic bicontinuous microstructures forming a network that defines the trabecular bone structure, with geometric and topological properties indicative of the bone. For an example adult male XCAT phantom (50th percentile in height and weight), the method was used to generate the trabecular structure of 46 chest bones. The produced models were validated in comparison with published properties of bones. The utility of the method was demonstrated with pilot CT and photon-counting CT simulations performed using the accurate DukeSim CT simulator on the XCAT phantom containing the detailed bone models. Main results. The method successfully generated the inner trabecular structure for the different bones of the chest, with quantitative measures similar to published values. The pilot simulations showed the ability of photon-counting CT to better resolve the trabecular detail, emphasizing the necessity for high-resolution bone models. Significance. 
As demonstrated, the developed tools have great potential to provide ground-truth simulations to assess the ability of existing and emerging CT imaging technology to provide quantitative information about bone structures., (© 2023 Institute of Physics and Engineering in Medicine.)
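One standard way to generate stochastic bicontinuous microstructures, which may or may not match the algorithm used here, is to superpose randomly oriented plane waves and threshold the resulting Gaussian random field (Cahn's construction for spinodal-like morphologies). A hedged sketch:

```python
import numpy as np

def bicontinuous_field(shape, n_waves=64, wavenumber=0.4, seed=0):
    """Sum of randomly oriented cosine waves with a fixed wave-vector magnitude.

    Thresholding the field at its median splits the volume into two phases of
    equal volume fraction, yielding a roughly bicontinuous two-phase structure.
    """
    rng = np.random.default_rng(seed)
    # Random unit directions on the sphere and random phases
    dirs = rng.normal(size=(n_waves, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    phases = rng.uniform(0, 2 * np.pi, n_waves)
    # Voxel coordinate grid, shape (*shape, 3)
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"),
                    axis=-1).astype(float)
    field = np.zeros(shape)
    for k, phi in zip(dirs, phases):
        field += np.cos(wavenumber * grid @ k + phi)
    return field

field = bicontinuous_field((32, 32, 32))
solid = field > np.median(field)   # binary "trabecular" phase
print(solid.mean())                # volume fraction ≈ 0.5 by construction
```

Changing the threshold level tunes the bone volume fraction, and the wave-vector magnitude sets the characteristic trabecular spacing; the parameter names here are illustrative, not from the paper.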
- Published
- 2023
- Full Text
- View/download PDF
10. Coronary stenosis quantification in cardiac computed tomography angiography: multi-factorial optimization of image quality and radiation dose.
- Author
- Zarei M, Abadi E, Segars WP, and Samei E
- Abstract
Background: The accuracy and variability of quantification in computed tomography angiography (CTA) are affected by the interplay of imaging parameters and patient attributes. The assessment of these combined effects has been an open engineering challenge., Purpose: In this study, we developed a framework that optimizes imaging parameters for accurate and consistent coronary stenosis quantification in cardiac CTA while accounting for patient-specific variables., Methods: The framework utilizes a task-specific image quality index, the estimability index (e'), approximated by a surrogate estimability polynomial function (EPF) capable of finding the optimal protocol that (1) maximizes image quality with an upper bound on desired radiation dose or (2) minimizes the dose level with a lower bound on acceptable image quality. The optimization process was formulated with the decision variables subject to a set of constraints. The methodology was verified using CTA data from a prior clinical trial (a prospective multi-center imaging study for the evaluation of chest pain) by assessing the concordance of its predictions with the trial results. Further, the framework was used to derive an optimum protocol for each case based on the patient attributes, gauging how much improvement would have been possible had the optimized protocol been deployed., Results: The framework produced results consistent with imaging physics principles, with approximated EPFs of 97% accuracy. The feature importance evaluation demonstrated a close match with earlier studies. 
The verification study found e' scores closely predicting the cardiologist scores to within 95% in terms of the area under the receiver operating characteristic curve, and predicting the potential for either an average fourfold increase in e' within a targeted dose or an average 57% reduction in radiation dose without reducing image quality., Conclusions: The protocol optimization framework provides a means to assess and optimize CTA in terms of either image quality or radiation dose objectives, with its results predicting prior clinical trial findings., (© 2023 Society of Photo-Optical Instrumentation Engineers (SPIE).)
- Published
- 2023
- Full Text
- View/download PDF
11. Simulating Cardiac Fluid Dynamics in the Human Heart.
- Author
- Davey M, Puelz C, Rossi S, Smith MA, Wells DR, Sturgeon G, Segars WP, Vavalle JP, Peskin CS, and Griffith BE
- Abstract
Cardiac fluid dynamics fundamentally involves interactions between complex blood flows and the structural deformations of the muscular heart walls and the thin, flexible valve leaflets. There has been longstanding scientific, engineering, and medical interest in creating mathematical models of the heart that capture, explain, and predict these fluid-structure interactions. However, existing computational models that account for interactions among the blood, the actively contracting myocardium, and the cardiac valves are limited in their abilities to predict valve performance, resolve fine-scale flow features, or use realistic descriptions of tissue biomechanics. Here we introduce and benchmark a comprehensive mathematical model of cardiac fluid dynamics in the human heart. A unique feature of our model is that it incorporates biomechanically detailed descriptions of all major cardiac structures that are calibrated using tensile tests of human tissue specimens to reflect the heart's microstructure. Further, it is the first fluid-structure interaction model of the heart that provides anatomically and physiologically detailed representations of all four cardiac valves. We demonstrate that this integrative model generates physiologic dynamics, including realistic pressure-volume loops that automatically capture isovolumetric contraction and relaxation, and predicts fine-scale flow features. None of these outputs are prescribed; instead, they emerge from interactions within our comprehensive description of cardiac physiology. Such models can serve as tools for predicting the impacts of medical devices or clinical interventions. They also can serve as platforms for mechanistic studies of cardiac pathophysiology and dysfunction, including congenital defects, cardiomyopathies, and heart failure, that are difficult or impossible to perform in patients.
- Published
- 2023
12. Modeling the effect of patient size on cerebral perfusion during veno-arterial extracorporeal membrane oxygenation.
- Author
- Feiger B, Jensen CW, Bryner BS, Segars WP, and Randles A
- Abstract
Introduction: A well-known complication of veno-arterial extracorporeal membrane oxygenation (VA ECMO) is differential hypoxia, in which poorly oxygenated blood ejected from the left ventricle mixes with and displaces well-oxygenated blood from the circuit, thereby causing cerebral hypoxia and ischemia. We sought to characterize the impact of patient size and anatomy on cerebral perfusion under a range of different VA ECMO flow conditions., Methods: We used one-dimensional (1D) flow simulations to investigate mixing zone location and cerebral perfusion across 10 different levels of VA ECMO support in eight semi-idealized patient geometries, for a total of 80 scenarios. Measured outcomes included mixing zone location and cerebral blood flow (CBF)., Results: Depending on patient anatomy, VA ECMO support ranging from 67% to 97% of a patient's ideal cardiac output was needed to perfuse the brain. In some cases, VA ECMO flows exceeding 90% of the patient's ideal cardiac output were needed for adequate cerebral perfusion., Conclusions: Individual patient anatomy markedly affects mixing zone location and cerebral perfusion in VA ECMO. Future fluid simulations of VA ECMO physiology should incorporate varied patient sizes and geometries in order to best provide insights toward reducing neurologic injury and improving outcomes in this patient population., Competing Interests: Declaration of conflicting interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
- Published
- 2023
- Full Text
- View/download PDF
13. Emphysema Quantifications With CT Scan: Assessing the Effects of Acquisition Protocols and Imaging Parameters Using Virtual Imaging Trials.
- Author
- Abadi E, Jadick G, Lynch DA, Segars WP, and Samei E
- Subjects
- Humans, Reproducibility of Results, Tomography, X-Ray Computed methods, Lung diagnostic imaging, Radiation Dosage, Pulmonary Emphysema diagnostic imaging, Emphysema
- Abstract
Background: CT scan has notable potential to quantify the severity and progression of emphysema in patients. Such quantification should ideally reflect the true attributes and pathologic conditions of subjects, not scanner parameters. To achieve such an objective, the effects of the scanner conditions need to be understood so the influence can be mitigated., Research Question: How do CT scan imaging parameters affect the accuracy of emphysema-based quantifications and biomarkers?, Study Design and Methods: Twenty anthropomorphic digital phantoms were developed with diverse anatomic attributes and emphysema abnormalities informed by a real COPD cohort. The phantoms were input to a validated CT scan simulator (DukeSim), modeling a commercial scanner (Siemens Flash). Virtual images were acquired under various clinical conditions of dose levels, tube current modulations (TCM), and reconstruction techniques and kernels. The images were analyzed to evaluate the effects of imaging parameters on the accuracy of density-based quantifications (percent of lung voxels with HU < -950 [LAA-950] and 15th percentile of lung histogram HU [Perc15]) across varied subjects. Paired t tests were performed to explore statistical differences between any two imaging conditions., Results: The most accurate imaging condition corresponded to the highest acquired dose (100 mAs) and iterative reconstruction (SAFIRE) with the smooth kernel of I31, where the measurement errors (difference between measurement and ground truth) were 35 ± 3 Hounsfield Units (HU), -4% ± 5%, and 26 ± 10 HU (average ± SD), for the mean lung HU, LAA-950, and Perc15, respectively. Without TCM and at the I31 kernel, increase of dose (20 to 100 mAs) improved the lung mean absolute error (MAE) by 4.2 ± 2.3 HU (average ± SD). TCM did not contribute to a systematic improvement of lung MAE., Interpretation: The results highlight that although CT scan quantification is possible, its reliability is impacted by the choice of imaging parameters. The developed virtual imaging trial platform in this study enables comprehensive evaluation of CT scan methods in reliable quantifications, an effort that cannot be readily made with patient images or simplistic physical phantoms., (Copyright © 2022 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.)
- Published
- 2023
- Full Text
- View/download PDF
14. Development and Application of a Virtual Imaging Trial framework for Airway Quantifications via CT.
- Author
- Ho FC, Sotoudeh-Paima S, Segars WP, Samei E, and Abadi E
- Abstract
Chronic obstructive pulmonary disease (COPD) is one of the top three causes of death worldwide, characterized by emphysema and bronchitis. Airway measurements reflect the severity of bronchitis and other airway-related diseases. Airway structures can be objectively evaluated with quantitative computed tomography (CT). The accuracy of such quantifications is limited by the spatial resolution and image noise characteristics of the imaging system and can potentially be improved with the emerging photon-counting CT (PCCT) technology. This study evaluated the quantitative performance of PCCT against energy-integrating CT (EICT) systems for airway measurements and further identified optimum CT imaging parameters for such quantifications. The study was performed using a novel virtual imaging framework by developing the first library of virtual patients with bronchitis. These virtual patients were developed based on CT images of confirmed COPD patients with varied bronchitis severity. The human models were virtually imaged at 6.3 and 12.6 mGy dose levels using a scanner-specific simulator (DukeSim) modeling clinical PCCT and EICT scanners (NAEOTOM Alpha and FLASH, Siemens). The projections were reconstructed with two algorithms and kernels at different matrix sizes and slice thicknesses. The CT images were used to quantify clinically relevant airway measurements ("Pi10" and "WA%") and compared against their ground truth values. Compared to EICT, PCCT improved the accuracy of Pi10 and WA% measurements by 63.1% and 68.2%, respectively. For both technologies, sharper kernels and larger matrix sizes led to more reliable bronchitis quantifications. This study highlights the potential advantages of PCCT over EICT in characterizing bronchitis utilizing a virtual imaging platform.
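The two airway metrics have standard definitions in the COPD imaging literature: WA% is wall area as a percentage of the total airway cross-section, and Pi10 is the square root of wall area predicted, by linear regression across measured airways, at an internal perimeter of 10 mm. The study's exact implementation may differ; a minimal sketch with hypothetical per-airway measurements:

```python
import numpy as np

def wa_percent(wall_area, lumen_area):
    """Wall area as a percentage of total airway cross-sectional area."""
    return 100.0 * wall_area / (wall_area + lumen_area)

def pi10(perimeters_mm, wall_areas_mm2):
    """Regress sqrt(wall area) on internal perimeter; evaluate at Pi = 10 mm."""
    slope, intercept = np.polyfit(perimeters_mm, np.sqrt(wall_areas_mm2), 1)
    return slope * 10.0 + intercept

# Hypothetical airways: internal perimeter (mm) and wall area (mm^2),
# chosen so sqrt(wall area) = 2, 3, 5, 7 is exactly linear in perimeter
pi = np.array([6.0, 8.0, 12.0, 16.0])
wa = np.array([4.0, 9.0, 25.0, 49.0])
print(pi10(pi, wa))            # ≈ 4.0 for this exactly linear toy data
print(wa_percent(10.0, 30.0))  # → 25.0
```

Summarizing many airways through the regression is what makes Pi10 comparable across patients whose measured airways have different sizes.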
- Published
- 2023
- Full Text
- View/download PDF
15. A systematic assessment of photon-counting CT for bone mineral density and microarchitecture quantifications.
- Author
-
McCabe C, Sauer TJ, Zarei M, Segars WP, Samei E, and Abadi E
- Abstract
Photon-counting CT (PCCT) is an emerging imaging technology with potential improvements in quantification and rendition of micro-structures due to its smaller detector sizes. The aim of this study was to assess the performance of a new PCCT scanner (NAEOTOM Alpha, Siemens) in quantifying clinically relevant bone imaging biomarkers for characterization of common bone diseases. We evaluated the ability of PCCT to quantify bone microarchitecture compared to conventional energy-integrating CT. The quantifications were done through virtual imaging trials, using a 50th-percentile BMI male virtual patient with a detailed model of trabecular bone with varied bone densities in the lumbar spine. The virtual patient was imaged using a validated CT simulator (DukeSim) at CTDIvol of 20 and 40 mGy for three scan modes: ultra-high-resolution PCCT (UHR-PCCT), high-resolution PCCT (HR-PCCT), and a conventional energy-integrating CT (EICT) (FORCE, Siemens). Further, each scan mode was reconstructed with varying parameters to evaluate their effect on quantification. Bone mineral density (BMD), trabecular bone volume to total volume (BV/TV), and radiomics texture features were calculated in each vertebra. UHR-PCCT images yielded the most accurate BMD measurements relative to the ground truth (error: 3.3% ± 1.5%), compared to HR-PCCT (error: 5.3% ± 2.0%) and EICT (error: 7.1% ± 2.0%). In BV/TV quantifications, UHR-PCCT (error: 29.7% ± 11.8%) outperformed HR-PCCT (error: 80.6% ± 31.4%) and EICT (error: 67.3% ± 64.3%). UHR-PCCT and HR-PCCT texture features were sensitive to anatomical changes using the sharpest kernel. Conversely, the texture radiomics showed no clear trend reflecting the progression of the disease in EICT. This study demonstrated the potential utility of PCCT technology in improved bone quantifications, leading to more accurate characterization of bone diseases.
- Published
- 2023
16. S-values for radium-223 and absorbed dose estimates for 223RaCl2 using three computational phantoms.
- Author
-
Silva CCO, da Silva AX, Braz D, Lima LFC, Segars WP, and de Sá LV
- Subjects
- Bismuth therapeutic use, Body Weight, Female, Humans, Male, Monte Carlo Method, Phantoms, Imaging, Radioisotopes therapeutic use, Thallium, Bone Neoplasms secondary, Polonium, Radium therapeutic use, Radon
- Abstract
Radium-223 dichloride (223RaCl2), approved by the FDA (Food and Drug Administration) in 2013 and in Brazil by ANVISA (Agência Nacional de Vigilância Sanitária) in 2016, offers a new therapeutic option for bone metastases from castration-resistant prostate cancer (CRPC). The advantages of radionuclide therapy for bone metastases include the simultaneous treatment of multiple lesions. The activity prescription is based on the patient's body weight, disregarding the absorbed dose limit of 2 Gy in the organ at risk: the bone marrow. This study focuses on internal dosimetry for 223RaCl2 therapy, aiming to apply biokinetic models described in the literature to estimate absorbed doses in the organs of interest, especially the bone marrow. For this purpose, the present paper compares and validates the GATE Monte Carlo simulation with the Radioactive Decay Module (RDM) and calculates a set of S-values for Radium-223 using male and female XCAT computational models. Moreover, a comparison of S-values for Radium-223 across three male computational models with different anatomies is also evaluated: Male (standard), Pat1 (lower body weight), and Pat2 (highest body weight). A comprehensive set of S-values was calculated for the Male model (30 source regions and 47 target regions) and for the Female model (30 source regions and 42 target regions) for Radium-223 and its decay chain: Radon-219, Polonium-215, Lead-211, Bismuth-211, Polonium-211, and Thallium-207. The new set of S-values will facilitate absorbed dose calculations for Radium-223 therapy. In addition, absorbed doses for 223RaCl2 therapy were estimated for three different biodistributions described in the literature within the three male computational models.
For all biodistributions, the Pat2 phantom showed the greatest absorbed dose within the red marrow when compared with the Male and Pat1 models., Competing Interests: Declaration of competing interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Catherine C O Silva reports financial support was provided by CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico., (Copyright © 2022 Elsevier Ltd. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
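The S-value formalism underlying the entry above (the MIRD schema) computes the absorbed dose to each target region as a sum over source regions of the time-integrated activity multiplied by the S-value, D(r_T) = Σ_S Ã(r_S)·S(r_T ← r_S). A sketch with hypothetical S-values and activities (placeholders, not values from the paper):

```python
import numpy as np

# MIRD schema: D(target) = sum over sources of A_tilde(source) * S(target <- source)
# The S-values and activities below are hypothetical placeholders.
s_values = np.array([            # rows: targets, cols: sources, Gy / (MBq*s)
    [2.0e-7, 1.0e-9],            # red marrow <- (trabecular bone, liver)
    [5.0e-10, 3.0e-7],           # liver      <- (trabecular bone, liver)
])
a_tilde = np.array([4.0e4, 1.5e4])   # time-integrated activity per source (MBq*s)

doses = s_values @ a_tilde           # absorbed dose per target region (Gy)
print(doses)
```

Per-patient S-value sets like those tabulated in the paper replace this toy matrix with one row per target region and one column per source region, for each nuclide in the decay chain.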
17. Development of scalable lymphatic system in the 4D XCAT phantom: Application to quantitative evaluation of lymphoma PET segmentations.
- Author
-
Fedrigo R, Segars WP, Martineau P, Gowdy C, Bloise I, Uribe CF, and Rahmim A
- Subjects
- United States, Humans, Lymphatic System, Positron-Emission Tomography, Lymphoma
- Abstract
Background: Digital anthropomorphic phantoms, such as the 4D extended cardiac-torso (XCAT) phantom, are actively used to develop, optimize, and evaluate a variety of imaging applications, allowing for realistic patient modeling and knowledge of ground truth. The XCAT phantom defines the activity and attenuation for a simulated patient, which includes a complete set of organs, muscle, bone, and soft tissue, while also accounting for cardiac and respiratory motion. However, the XCAT phantom does not currently include the lymphatic system, critical for evaluating medical imaging tasks such as sentinel node detection, node density measurement, and radiation dosimetry., Purpose: In this study, we aimed to develop a scalable lymphatic system in the XCAT phantom, to facilitate improved research of the lymphatic system in medical imaging. Using this scalable lymphatic system, we modeled the lymph node conglomerate pathology that is characteristically observed in primary mediastinal B-cell lymphoma (PMBCL). As an extended application, we evaluated positron emission tomography (PET) image quantification of metabolic tumor volume (MTV) and total lesion glycolysis (TLG) of these simulated lymphomas, though the phantoms may be applied to other imaging modalities and study design paradigms (e.g., image quality, detection)., Methods: A template model for the lymphatic system was developed based on anatomical data from the Visible Human Project of the National Library of Medicine. The segmented nodes and vessels were fit with non-uniform rational basis spline surfaces, and multichannel large deformation diffeomorphic metric mapping was used to propagate the template to different XCAT anatomies. To model conglomerates observed in PMBCL, lymph nodes were enlarged, converged within the mediastinum, and tracer concentration was increased. 
We used the phantoms as inputs to a PET simulation tool, which generated images using ordered subsets expectation maximization reconstruction with 2-8 mm Gaussian filters. Fixed thresholding (FT) and gradient segmentation were used to determine MTV and TLG. Percent bias (%Bias) and coefficient of variation (COV) were computed as measures of accuracy and precision, respectively, for each MTV and TLG measurement., Results: Using the methodology described above, we introduced a scalable lymphatic system in the XCAT phantom, which allows for the radioactivity and attenuation ground truth to be generated in 116 ± 2.5 s using a 2.3 GHz processor. Within the Rhinoceros interface, lymph node anatomy and function were modified to create a cohort of 10 phantoms with lymph node conglomerates. Using the lymphoma phantoms to evaluate PET quantification of MTV, mean %Bias values were -9.3%, -41.3%, and 20.9%, while COV values were 4.08%, 7.6%, and 3.4% using 25% FT, 40% FT, and gradient segmentations, respectively. Comparatively for TLG, mean %Bias values were -27.4%, -45.8%, and -16.0%, while COV values were 1.9%, 5.7%, and 1.4%, for the 25% FT, 40% FT, and gradient segmentations, respectively., Conclusions: In this work, we upgraded the XCAT phantom to include a lymphatic system, comprised of a network of 276 scalable lymph nodes and corresponding vessels. As an application, we created a cohort of phantoms with lymph node conglomerates to evaluate lymphoma quantification in PET imaging, which highlights an important application of this work., (© 2022 American Association of Physicists in Medicine.)
- Published
- 2022
- Full Text
- View/download PDF
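The accuracy and precision metrics in the entry above are straightforward to reproduce: MTV from fixed thresholding keeps voxels above a fraction of the maximum uptake, %Bias is the mean deviation from ground truth, and COV is the standard deviation relative to the mean. A sketch with illustrative measurements (not the study's data):

```python
import numpy as np

def mtv_fixed_threshold(pet_volume, voxel_ml, frac=0.25):
    """Metabolic tumor volume via fixed thresholding at a fraction of the max uptake."""
    thresh = frac * pet_volume.max()
    return (pet_volume >= thresh).sum() * voxel_ml

def percent_bias(measurements, truth):
    """Mean deviation of repeated measurements from ground truth, in percent."""
    return 100.0 * (np.mean(measurements) - truth) / truth

def cov(measurements):
    """Coefficient of variation: sample SD relative to the mean, in percent."""
    return 100.0 * np.std(measurements, ddof=1) / np.mean(measurements)

# Illustrative repeated MTV measurements (mL) against a known ground truth
mtv = np.array([45.1, 44.0, 46.3, 43.5])
print(round(percent_bias(mtv, truth=49.3), 1))
print(round(cov(mtv), 1))
```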
18. TransMorph: Transformer for unsupervised medical image registration.
- Author
-
Chen J, Frey EC, He Y, Segars WP, Li Y, and Du Y
- Subjects
- Humans, Bayes Theorem, Magnetic Resonance Imaging, Phantoms, Imaging, Image Processing, Computer-Assisted methods, Neural Networks, Computer, Imaging, Three-Dimensional
- Abstract
In the last decade, convolutional neural networks (ConvNets) have been a major focus of research in medical image analysis. However, the performances of ConvNets may be limited by a lack of explicit consideration of the long-range spatial relationships in an image. Recently, Vision Transformer architectures have been proposed to address the shortcomings of ConvNets and have produced state-of-the-art performances in many medical imaging applications. Transformers may be a strong candidate for image registration because their substantially larger receptive field enables a more precise comprehension of the spatial correspondence between moving and fixed images. Here, we present TransMorph, a hybrid Transformer-ConvNet model for volumetric medical image registration. This paper also presents diffeomorphic and Bayesian variants of TransMorph: the diffeomorphic variants ensure topology-preserving deformations, and the Bayesian variant produces a well-calibrated registration uncertainty estimate. We extensively validated the proposed models using 3D medical images from three applications: inter-patient and atlas-to-patient brain MRI registration and phantom-to-CT registration. The proposed models are evaluated in comparison to a variety of existing registration methods and Transformer architectures. Qualitative and quantitative results demonstrate that the proposed Transformer-based model leads to a substantial performance improvement over the baseline methods, confirming the effectiveness of Transformers for medical image registration., Competing Interests: Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Copyright © 2022 Elsevier B.V. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
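At the core of learned registration models such as the one above is a spatial transformer that resamples the moving image at the identity grid plus a predicted displacement field. A minimal 2D numpy sketch of that resampling step (bilinear interpolation; the network that predicts the field is omitted):

```python
import numpy as np

def warp_image(image, flow):
    """Warp a 2D image with a dense displacement field using bilinear
    interpolation: the resampling step of a registration spatial transformer."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Sample the moving image at (identity grid + predicted displacement)
    sy = np.clip(ys + flow[0], 0, h - 1)
    sx = np.clip(xs + flow[1], 0, w - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = sy - y0, sx - x0
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy

# A pure translation by one pixel along x: output (1, 1) samples input (1, 2)
img = np.zeros((4, 4)); img[1, 2] = 1.0
flow = np.stack([np.zeros((4, 4)), np.ones((4, 4))])  # dx = +1 everywhere
print(warp_image(img, flow)[1, 1])  # 1.0
```

Diffeomorphic variants typically predict a stationary velocity field and integrate it (e.g., by scaling and squaring) before this resampling step, which is what preserves topology.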
19. XCIST-an open access x-ray/CT simulation toolkit.
- Author
-
Wu M, FitzGerald P, Zhang J, Segars WP, Yu H, Xu Y, and De Man B
- Subjects
- Algorithms, Computer Simulation, Image Processing, Computer-Assisted methods, Phantoms, Imaging, X-Rays, Access to Information, Tomography, X-Ray Computed methods
- Abstract
Objective. X-ray-based imaging modalities including mammography and computed tomography (CT) are widely used in cancer screening, diagnosis, staging, treatment planning, and therapy response monitoring. Over the past few decades, improvements to these modalities have resulted in substantially improved efficacy and efficiency, and substantially reduced radiation dose and cost. However, such improvements have evolved more slowly than would be ideal because lengthy preclinical and clinical evaluation is required. In many cases, new ideas cannot be evaluated due to the high cost of fabricating and testing prototypes. Wider availability of computer simulation tools could accelerate development of new imaging technologies. This paper introduces the development of a new open-access simulation environment for x-ray-based imaging. The main motivation of this work is to publicly distribute a fast but accurate ray-tracing x-ray and CT simulation tool along with realistic phantoms and 3D reconstruction capability, building on decades of developments in industry and academia. Approach. The x-ray-based Cancer Imaging Simulation Toolkit (XCIST) is developed in the context of cancer imaging, but can more broadly be applied. XCIST is physics-based, written in Python and C/C++, and currently consists of three major subsets: digital phantoms, the simulator itself (CatSim), and image reconstruction algorithms; planned future features include a fast dose-estimation tool and rigorous validation. To enable broad usage and to model and evaluate new technologies, XCIST is easily extendable by other researchers. To demonstrate XCIST's ability to produce realistic images and to show the benefits of using XCIST for insight into the impact of separate physics effects on image quality, we present exemplary simulations by varying contributing factors such as noise and sampling. Main results. The capabilities and flexibility of XCIST are demonstrated, showing easy applicability to specific simulation problems. Geometric and x-ray attenuation accuracy are shown, as well as XCIST's ability to model multiple scanner and protocol parameters, and to attribute fundamental image quality characteristics to specific parameters. Significance. This work represents an important first step toward the goal of creating an open-access platform for simulating existing and emerging x-ray-based imaging systems. While numerous simulation tools exist, we believe the combined XCIST toolset provides a unique advantage in terms of modeling capabilities versus ease of use and compute time. We publicly share this toolset to provide an environment for scientists to accelerate and improve the relevance of their research in x-ray and CT., (© 2022 Institute of Physics and Engineering in Medicine.)
- Published
- 2022
- Full Text
- View/download PDF
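At its core, a ray-tracing x-ray simulator such as CatSim computes line integrals of attenuation along source-detector rays and applies the Beer-Lambert law, I = I0·exp(-∫µ dl). A simplified numpy sketch that approximates the integral by uniform sampling (real simulators use exact voxel-traversal methods such as Siddon's algorithm):

```python
import numpy as np

def line_integral(mu, p0, p1, n_samples=200):
    """Approximate the attenuation line integral along a ray through a 2D
    voxel grid (unit voxel size) by uniform sampling along the ray."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    ts = (np.arange(n_samples) + 0.5) / n_samples
    pts = p0 + ts[:, None] * (p1 - p0)            # sample points on the ray
    ij = np.clip(pts.astype(int), 0, np.array(mu.shape) - 1)
    step = np.linalg.norm(p1 - p0) / n_samples    # path length per sample
    return mu[ij[:, 0], ij[:, 1]].sum() * step

# Uniform 10x10 phantom with mu = 0.02 per unit length
mu = np.full((10, 10), 0.02)
integral = line_integral(mu, (0.0, 5.0), (10.0, 5.0))  # horizontal ray, length 10
i0 = 1000.0
print(i0 * np.exp(-integral))  # Beer-Lambert transmitted intensity
```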
20. Model-based pulse pileup and charge sharing compensation for photon counting detectors: A simulation study.
- Author
-
Taguchi K, Polster C, Segars WP, Aygun N, and Stierstorfer K
- Subjects
- Computer Simulation, Phantoms, Imaging, Radiography, Photons, Tomography, X-Ray Computed methods
- Abstract
Purpose: We aim to develop a model-based algorithm that compensates for the effects of both pulse pileup (PP) and charge sharing (CS) and to evaluate its performance using computer simulations., Methods: The proposed PCP algorithm for PP and CS compensation uses cascaded models for CS and PP we previously developed, maximizes the Poisson log-likelihood, and uses an efficient three-step exhaustive search. For comparison, we also developed an LCP algorithm that combines models for a loss of counts (LCs) and CS. Two types of computer simulations, slab- and computed tomography (CT)-based, were performed to assess the performance of both PCP and LCP with 200 and 800 mA, a (300 µm)² × 1.6-mm cadmium telluride detector, and a dead-time of 23 ns. The slab-based assessment used a pair of adipose and iodine slabs with different thicknesses, attenuated X-rays, and assessed the bias and noise of the outputs from one detector pixel; the CT-based assessment simulated a chest/cardiac scan and a head-and-neck scan using a 3D phantom and noisy cone-beam projections., Results: With the slab simulation, the PCP had little or no bias when the expected counts were sufficiently large, even though the probability of count loss (PCL) due to dead-time loss or PP was as high as 0.8. In contrast, the LCP had significant biases (>±2 cm of adipose) when the PCL was higher than 0.15. Biases were present with both PCP and LCP when the expected counts were less than 10-120 per datum, which was attributed to the fact that the maximum likelihood did not approach the asymptote. The noise of PCP was within 8% of the Cramér-Rao lower bounds for most cases when no significant bias was present. The two CT studies essentially agreed with the slab simulation study. PCP had little or no bias in the estimated basis line integrals, reconstructed basis density maps, and synthesized monoenergetic CT images. However, the LCP had significant biases in basis line integrals when X-ray beams passed through the lungs and near the body and neck contours, where the PCLs were above 0.15. As a consequence, basis density maps and monoenergetic CT images obtained by LCP had biases throughout the imaged space., Conclusion: We have developed the PCP algorithm that uses the PP-CS model. When the expected counts are more than 10-120 per datum, the PCP algorithm is statistically efficient and successfully compensates for the spectral distortion due to both PP and CS, providing little or no bias in basis line integrals, basis density maps, and monoenergetic CT images regardless of count rates.
In contrast, the LCP algorithm, which models an LC due to pileup, produces severe biases when incident count-rates are high and the PCL is 0.15 or higher., (© 2022 The Authors. Medical Physics published by Wiley Periodicals LLC on behalf of American Association of Physicists in Medicine.)
- Published
- 2022
- Full Text
- View/download PDF
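The compensation described above estimates basis-material line integrals by maximizing a Poisson log-likelihood; the paper uses an efficient three-step exhaustive search over a forward model that includes the PP-CS spectral distortion. A toy single-step grid search with a hypothetical two-bin forward model (coefficients are illustrative and do not model PP or CS):

```python
import numpy as np

def poisson_loglik(counts, expected):
    """Poisson log-likelihood up to a constant: sum(k*ln(lambda) - lambda)."""
    return np.sum(counts * np.log(expected) - expected)

def forward_model(t_adipose, t_iodine):
    """Toy 2-bin forward model: expected counts after attenuation by two
    basis materials (coefficients are illustrative, not a detector model)."""
    mu = np.array([[0.20, 4.0],    # bin 1: (adipose, iodine) coefficients
                   [0.15, 2.0]])   # bin 2
    i0 = np.array([5e4, 4e4])      # incident counts per bin
    return i0 * np.exp(-(mu[:, 0] * t_adipose + mu[:, 1] * t_iodine))

def estimate(counts, grid_a, grid_i):
    """Exhaustive grid search for the maximum-likelihood basis thicknesses."""
    best, best_ll = None, -np.inf
    for ta in grid_a:
        for ti in grid_i:
            ll = poisson_loglik(counts, forward_model(ta, ti))
            if ll > best_ll:
                best, best_ll = (ta, ti), ll
    return best

truth = (10.0, 0.1)                       # cm adipose, cm iodine (illustrative)
counts = np.round(forward_model(*truth))  # noiseless "measurement"
grid = np.linspace(0, 20, 81), np.linspace(0, 0.5, 51)
print(estimate(counts, *grid))            # recovers approximately (10.0, 0.1)
```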
21. Dose coefficients for organ dosimetry in tomosynthesis imaging of adults and pediatrics across diverse protocols.
- Author
-
Sharma S, Kapadia A, Ria F, Segars WP, and Samei E
- Subjects
- Adult, Child, Humans, Monte Carlo Method, Phantoms, Imaging, Prospective Studies, Radiation Dosage, Retrospective Studies, Pediatrics, Radiometry methods
- Abstract
Purpose: The gold-standard method for estimation of patient-specific organ doses in digital tomosynthesis (DT) requires protocol-specific Monte Carlo (MC) simulations of radiation transport in anatomically accurate computational phantoms. Although accurate, MC simulations are computationally expensive, leading to a turnaround time on the order of core-hours for simulating a single exam. This limits their clinical utility. The purpose of this study is to overcome this limitation by utilizing patient- and protocol-specific MC simulations to develop a comprehensive database of air-kerma-normalized organ dose coefficients for a virtual population of adult and pediatric patient models over an expanded set of exam protocols in DT for retrospective and prospective estimation of radiation dose in clinical tomosynthesis., Materials and Methods: A clinically representative virtual population of 14 patient models was used, with pediatric models (M and F) at ages 1, 5, 10, and 15 and adult patient models (M and F) with body mass indices (BMIs) at the 10th, 50th, and 90th percentiles of the US population. A graphics processing unit (GPU)-based MC simulation framework was used to simulate organ doses in the patient models, incorporating the scanner-specific configuration of a clinical DT system (VolumeRad, GE Healthcare, Waukesha, WI, USA) and an expanded set of exam protocols, including 21 distinct acquisition techniques for imaging a variety of anatomical regions (head and neck, thorax, spine, abdomen, and knee). Organ dose coefficients (hn) were estimated by normalizing organ dose estimates to the air kerma at 70 cm (X70cm) from the source in the scout view. The corresponding coefficients for projection radiography were approximated using organ doses estimated for the scout view. The organ dose coefficients were further used to compute air-kerma-normalized patient-specific effective dose coefficients (Kn) for all combinations of patients and protocols, and a comparative analysis examining the variation of radiation burden across sex, age, and exam protocols in DT, and with projection radiography, was performed., Results: The database of organ dose coefficients (hn) containing 294 distinct combinations of patients and exam protocols was developed and made publicly available. The values of Kn were observed to produce estimates of effective dose in agreement with prior studies and consistent with magnitudes expected for pediatric and adult patients across the different exam protocols, with head and neck regions exhibiting relatively lower and thorax and C-spine (apsc, apcs) regions relatively higher magnitudes. The ratios (r = Kn/Kn,rad) quantifying the differences in air-kerma-normalized patient-specific effective doses between DT and projection radiography were centered around 1.0 for all exam protocols, with the exception of protocols covering the knee region (pawk, patk)., Conclusions: This study developed a database of organ dose coefficients for a virtual population of 14 adult and pediatric XCAT patient models over a set of 21 exam protocols in DT. Using empirical measurements of air kerma in the clinic, these organ dose coefficients enable practical retrospective and prospective patient-specific radiation dosimetry.
The computation of air-kerma-normalized patient-specific effective doses further enables the comparison of radiation burden to the patient populations between protocols and between imaging modalities (e.g., DT and projection radiography), as presented in this study., (© 2022 American Association of Physicists in Medicine.)
- Published
- 2022
- Full Text
- View/download PDF
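A coefficient database like the one above is used by multiplying a measured air kerma by the organ dose coefficient, D_organ = hn × X70cm; a tissue-weighted sum over organ doses then gives an effective-dose-style summary. A sketch with placeholder coefficients and weights (not values from the published database or from ICRP):

```python
# Organ dose from a coefficient database: D_organ = h_n * measured air kerma.
# Coefficients and tissue weights below are illustrative placeholders.
h_n = {"lungs": 1.10, "breast": 0.85, "liver": 0.60}   # mGy per mGy air kerma
w_t = {"lungs": 0.12, "breast": 0.12, "liver": 0.04}   # tissue weighting factors

air_kerma = 0.40                                        # measured X70cm (mGy)
organ_doses = {o: c * air_kerma for o, c in h_n.items()}

# Effective-dose-style weighted sum over the modeled organs
effective = sum(w_t[o] * d for o, d in organ_doses.items())
print(round(organ_doses["lungs"], 4), round(effective, 4))
```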
22. Quantitative analysis of changes in lung density by dynamic chest radiography in association with CT values: a virtual imaging study and initial clinical corroboration.
- Author
-
Sugiura T, Tanaka R, Samei E, Segars WP, Abadi E, Kasahara K, Ohkura N, Tamura M, and Matsumoto I
- Subjects
- Humans, Lung diagnostic imaging, Radiography, Tomography, X-Ray Computed methods, Lung Diseases diagnostic imaging, Pulmonary Disease, Chronic Obstructive diagnostic imaging
- Abstract
Dynamic chest radiography (DCR) identifies pulmonary impairments as decreased changes in radiographic lung density during respiration (Δpixel values), but not as scaled/standardized computed tomography (CT) values. Quantitative analysis correlated with CT values is beneficial for a better understanding of Δpixel values in DCR-based assessment of pulmonary function. The present study aimed to correlate Δpixel values from DCR with changes in CT values during respiration (ΔCT values) through a computer-based phantom study. A total of 20 four-dimensional computational phantoms undergoing forced breathing were created to simulate both CT and projection images of the same virtual patients. The Δpixel and ΔCT values of the lung fields were correlated on a regression line, and the slope was statistically evaluated to determine whether there were significant differences among body types, sexes, and breathing methods. The resulting conversion expression was also assessed in the DCR images of 37 patients. The resulting Δpixel values for 30/37 (81%) real patients, 6/7 (86%) normal controls, and 24/30 (80%) chronic obstructive pulmonary disorder patients were within the range of ΔCT values ± standard deviation (SD) reported in a previous study. In addition, no significant differences were detected for each condition of thoracic breathing, suggesting that the same regression-line slope measured across the entire lung can be used for the conversion of Δpixel values, providing a quantitative analysis that can be correlated with ΔCT values. The developed conversion expression may be helpful for improving the understanding of respiratory changes using radiographic lung densities from DCR-based assessments of pulmonary function., (© 2021. Japanese Society of Radiological Technology and Japan Society of Medical Physics.)
- Published
- 2022
- Full Text
- View/download PDF
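The conversion expression above maps DCR Δpixel values onto the ΔCT scale through a fitted regression line. A sketch of the fit-and-convert step with illustrative paired values (not the study's measurements):

```python
import numpy as np

# Converting DCR density changes to CT-value changes via a fitted regression
# line: delta_CT = slope * delta_pixel + intercept.
# The paired values below are illustrative, not the study's data.
delta_pixel = np.array([120.0, 180.0, 240.0, 300.0])  # DCR change during respiration
delta_ct = np.array([55.0, 88.0, 115.0, 150.0])       # corresponding CT change (HU)

slope, intercept = np.polyfit(delta_pixel, delta_ct, 1)

def pixel_to_ct(dp):
    """Map a measured delta-pixel value onto the delta-CT scale."""
    return slope * dp + intercept

print(round(pixel_to_ct(200.0), 1))
```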
23. Photon-counting CT versus conventional CT for COPD quantifications: intra-scanner optimization and inter-scanner assessments using virtual imaging trials.
- Author
-
Sotoudeh-Paima S, Segars WP, Samei E, and Abadi E
- Abstract
Chronic obstructive pulmonary disease (COPD) is a chronic inflammatory lung disease and a major cause of death and disability worldwide. Quantitative CT is a powerful tool for better understanding the heterogeneity and severity of this disease, and it is being used increasingly in COPD research as CT technology advances. One recent advancement has been the development of photon-counting detectors, offering higher spatial resolution, higher image contrast, and lower noise levels in the images. However, the quantification performance of this new technology compared to conventional scanners remains unknown. Additionally, different protocol settings (e.g., dose levels, slice thicknesses, reconstruction kernels and algorithms) affect quantifications differently. This study investigates the potential advantages of photon-counting CT (PCCT) against conventional energy-integrating detector (EID) CT and explores the effects of protocol settings on lung density quantifications in COPD patients. This study was made possible using a virtual imaging platform, taking advantage of anthropomorphic phantoms with COPD (COPD-XCAT) and a scanner-specific CT simulator (DukeSim). With the physical and geometrical properties of three commercial Siemens scanners modeled (Flash and Force for EID, NAEOTOM Alpha for PCCT), we simulated CT images of ten COPD-XCAT phantoms at 0.63 and 3.17 mGy dose levels, reconstructed at three levels of kernel sharpness. The simulated CT images were quantified in terms of the "Lung mean absolute error (MAE)," "LAA-950," "Perc15," and "Lung mass" imaging biomarkers and compared against the ground truth values of the phantoms.
The intra-scanner assessment demonstrated the superior qualitative and quantitative performance of the PCCT scanner over the conventional scanners (21.01% and 22.74% mean lung MAE improvement, and 53.97% and 68.13% mean LAA-950 error improvement compared to Flash and Force). The results also showed that higher mAs, thinner slices, smoother kernels, and iterative reconstruction could lead to more accurate and precise quantification scores. This study underscored the qualitative and quantitative benefits of PCCT against conventional EID scanners as well as the importance of optimal protocol choice within scanners for more accurate quantifications.
- Published
- 2022
- Full Text
- View/download PDF
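Two of the density-based biomarkers above have simple histogram definitions: LAA-950 is the percentage of lung voxels below -950 HU (an emphysema index), and Perc15 is the 15th percentile of the lung HU distribution. A minimal sketch with illustrative HU samples (a real analysis applies these to the segmented lung mask):

```python
import numpy as np

def laa_950(lung_hu):
    """LAA-950: percentage of lung voxels below -950 HU (emphysema index)."""
    return 100.0 * np.mean(lung_hu < -950)

def perc15(lung_hu):
    """Perc15: the 15th percentile of the lung HU histogram."""
    return np.percentile(lung_hu, 15)

# Illustrative lung HU samples
hu = np.array([-980, -960, -940, -920, -900, -880, -860, -840, -820, -800])
print(laa_950(hu))   # 20.0
print(perc15(hu))
```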
24. A GPU-accelerated framework for individualized estimation of organ doses in digital tomosynthesis.
- Author
-
Sharma S, Kapadia A, Brown J, Segars WP, Bolch W, and Samei E
- Subjects
- Adult, Humans, Monte Carlo Method, Phantoms, Imaging, Radiation Dosage, Radiometry, Tomography, X-Ray Computed
- Abstract
Purpose: Estimation of organ doses in digital tomosynthesis (DT) is challenging due to the lack of existing tools that accurately and flexibly model protocol- and view-specific collimations and motion trajectories of the source and detector for a variety of exam protocols, and the computational inefficiencies of conducting MC simulations. The purpose of this study was to overcome these limitations by developing and benchmarking a GPU-accelerated MC simulation framework compatible with patient-specific computational phantoms for individualized estimation of organ doses in DT., Materials and Methods: The framework for individualized estimation of dose in DT was developed as a two-step workflow: (1) a custom MATLAB code that accepts a patient-specific computational phantom and exam description (organ markers for defining the extremities of the anatomical region of interest, tube voltage, source-to-image distance, angular sweep range, number of projection views, and the pivot point to image distance - PPID) to compute the field of views (FOVs) for a clinical DT system, and (2) a MC tool (developed using MC-GPU) modeling the configuration of a clinical DT system to estimate organ doses based on the computed FOVs. Using this framework, we estimated organ doses for 28 radiosensitive organs in an adult reference patient model (M; 30 years) imaged using a commercial DT system (VolumeRad, GE Healthcare, Waukesha, WI). The estimates were benchmarked against values from a comparable organ dose estimation framework (reference dataset developed by the Advanced Laboratory for Radiation Dosimetry Studies at University of Florida) for a posterior-anterior chest exam. The resulting differences were quantified as percent relative errors and analyzed to identify any potential sources of bias and uncertainties. 
The timing performance (run duration in seconds) of the framework was also quantified for the same simulation to gauge the feasibility of the workflow for time-constrained clinical applications., Results: The organ dose estimates from the developed framework showed a close agreement with the reference dataset, with percent relative errors ranging from -6.9% to 5.0% and a mean absolute percent difference of 1.7% over all radiosensitive organs, with the exception of testes and eye lens, for which the percent relative errors were higher at -18.9% and -27.6%, respectively, due to their relative positioning outside the primary irradiation field, leading to fewer photons depositing energy and consequently higher errors in estimated organ doses. The run duration for the same simulation was 916.3 s, representing a substantial improvement in performance over existing nonparallelized MC tools., Conclusions: This study successfully developed and benchmarked a GPU-accelerated framework compatible with patient-specific anthropomorphic computational phantoms for accurate individualized estimation of organ doses in DT. By enabling patient-specific estimation of organ doses, this framework can aid clinicians and researchers by providing them with tools essential for tracking the radiation burden to patients for dose monitoring purposes and identifying the trends and relationships in organ doses for a patient population to optimize existing and develop new exam protocols., (© 2021 American Association of Physicists in Medicine.)
- Published
- 2022
- Full Text
- View/download PDF
25. Development and Clinical Applications of a Virtual Imaging Framework for Optimizing Photon-counting CT.
- Author
-
Abadi E, McCabe C, Harrawood B, Sotoudeh-Paima S, Segars WP, and Samei E
- Abstract
The purpose of this study was to develop a virtual imaging framework that simulates a new photon-counting CT (PCCT) system (NAEOTOM Alpha, Siemens). The PCCT simulator was built upon the DukeSim platform, which generates projection images of computational phantoms given the geometry and physics of the scanner and the imaging parameters. DukeSim was adapted to account for the geometry of the PCCT prototype. To model the photon-counting detection process, we utilized a Monte Carlo-based detector model with the known properties of the detectors. We validated the simulation platform against experimental measurements. The images were acquired at four dose levels (CTDIvol of 1.5, 3.0, 6.0, and 12.0 mGy) and reconstructed with three kernels (Br36, Br40, Br48). The experimental acquisitions were replicated using our developed simulation platform. The real and simulated images were quantitatively compared in terms of image quality metrics (HU values, noise magnitude, noise power spectrum, and modulation transfer function). The clinical utility of our framework was demonstrated by conducting two clinical applications (COPD quantification and lung nodule radiomics). Phantoms with relevant pathologies were imaged with DukeSim modeling the PCCT system. Different imaging parameters (e.g., dose, reconstruction techniques, pixel size, and slice thickness) were altered to investigate their effects on task-based quantifications. We successfully implemented the acquisition and physics attributes of the PCCT prototype into the DukeSim platform. The discrepancy between the real and simulated data was on average about 2 HU in noise magnitude, 0.002 mm⁻¹ in noise power spectrum peak frequency, and 0.005 mm⁻¹ in the frequency at 50% MTF. Analysis suggested that lung lesion radiomics were more accurate with reduced pixel size and slice thickness. For COPD quantifications, higher doses, thinner slices, and softer kernels yielded more accurate quantification of density-based biomarkers. Our developed virtual imaging platform enables systematic comparison of new PCCT technologies as well as optimization of imaging parameters for specific clinical tasks.
- Published
- 2022
- Full Text
- View/download PDF
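Two of the validation metrics in the entry above (noise power spectrum peak frequency and the frequency at 50% MTF) are standard Fourier measures. As a minimal, library-free sketch (function names are ours, not DukeSim's), the 50%-MTF frequency can be estimated from a sampled edge spread function by differencing to a line spread function, taking the DFT magnitude, and interpolating the 0.5 crossing:

```python
import math

def mtf_from_esf(esf, dx):
    """Estimate the MTF from a sampled edge spread function (ESF).

    esf: samples across an edge; dx: sample spacing (mm).
    Returns (freqs, mtf) with the MTF normalized to 1 at zero frequency.
    """
    # Line spread function: finite difference of the ESF.
    lsf = [esf[i + 1] - esf[i] for i in range(len(esf) - 1)]
    n = len(lsf)
    freqs, mtf = [], []
    for k in range(n // 2 + 1):
        re = sum(lsf[j] * math.cos(2 * math.pi * k * j / n) for j in range(n))
        im = sum(lsf[j] * math.sin(2 * math.pi * k * j / n) for j in range(n))
        freqs.append(k / (n * dx))  # cycles per mm
        mtf.append(math.hypot(re, im))
    m0 = mtf[0]
    return freqs, [m / m0 for m in mtf]

def freq_at_half_mtf(freqs, mtf):
    """Linearly interpolate the frequency where the MTF crosses 0.5."""
    for i in range(1, len(mtf)):
        if mtf[i] <= 0.5:
            f0, f1, m0, m1 = freqs[i - 1], freqs[i], mtf[i - 1], mtf[i]
            return f0 + (0.5 - m0) * (f1 - f0) / (m1 - m0)
    return freqs[-1]
```

For a Gaussian edge blur of σ = 0.5 mm this recovers the analytic 50% point near 0.375 cycles/mm.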
26. Corrections to "iPhantom: A Framework for Automated Creation of Individualized Computational Phantoms and its Application to CT Organ Dosimetry".
- Author
-
Fu W, Sharma S, Abadi E, Iliopoulos AS, Wang Q, Lo JY, Sun X, Segars WP, and Samei E
- Abstract
In [1], the dose estimation accuracy using the alternative baseline method under modulated tube current was not correctly calculated due to an unintentional simulation error.
- Published
- 2022
- Full Text
- View/download PDF
27. Deep learning classification of COVID-19 in chest radiographs: performance and influence of supplemental training.
- Author
-
Fricks RB, Ria F, Chalian H, Khoshpouri P, Abadi E, Bianchi L, Segars WP, and Samei E
- Abstract
Purpose: Accurate classification of COVID-19 in chest radiographs is invaluable to hard-hit pandemic hot spots. Transfer learning techniques for images using well-known convolutional neural networks show promise in addressing this problem. These methods can significantly benefit from supplemental training on similar conditions, considering that there currently exists no widely available chest x-ray dataset on COVID-19. We evaluate whether targeted pretraining for similar tasks in radiography labeling improves classification performance in a sample radiograph dataset containing COVID-19 cases. Approach: We train a DenseNet121 to classify chest radiographs through six training schemes. Each training scheme is designed to incorporate cases from established datasets for general findings in chest radiography (CXR) and pneumonia, with a control scheme with no pretraining. The resulting six permutations are then trained and evaluated on a dataset of 1060 radiographs collected from 475 patients after March 2020, containing 801 images of laboratory-confirmed COVID-19 cases. Results: Sequential training phases yielded substantial improvement in classification accuracy compared to a baseline of standard transfer learning with ImageNet parameters. The test set area under the receiver operating characteristic curve for COVID-19 classification improved from 0.757 in the control to 0.857 for the optimal training scheme in the available images. Conclusions: We achieve COVID-19 classification accuracies comparable to previous benchmarks of pneumonia classification. Deliberate sequential training, rather than pooling datasets, is critical in training effective COVID-19 classifiers within the limitations of early datasets. These findings bring clinical-grade classification through CXR within reach for more regions impacted by COVID-19., (© 2021 The Authors.)
- Published
- 2021
- Full Text
- View/download PDF
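The improvement from 0.757 to 0.857 reported above is the area under the ROC curve. As a reminder of what that number measures, a stdlib-only implementation via the Mann-Whitney statistic (not the code used in the study):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a positive case scores above a negative one
    (ties counted as 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.5 corresponds to chance-level discrimination; 1.0 to perfect separation of COVID-19 from non-COVID-19 radiographs.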
28. Correction to: Comparison of 12 surrogates to characterize CT radiation risk across a clinical population.
- Author
-
Ria F, Fu W, Hoye J, Segars WP, Kapadia AJ, and Samei E
- Published
- 2021
- Full Text
- View/download PDF
29. A scanner-specific framework for simulating CT images with tube current modulation.
- Author
-
Jadick G, Abadi E, Harrawood B, Sharma S, Segars WP, and Samei E
- Subjects
- Computer Simulation, Humans, Phantoms, Imaging, Radiation Dosage, X-Rays, Tomography, X-Ray Computed
- Abstract
Although tube current modulation (TCM) is routinely implemented in modern computed tomography (CT) scans, no existing CT simulator is capable of generating realistic images with TCM. The goal of this study was to develop such a framework to (1) facilitate patient-specific optimization of TCM parameters and (2) enable future virtual imaging trials (VITs) with more clinically realistic image quality and x-ray flux distributions. The framework was created by developing a TCM module and integrating it with an existing CT simulator (DukeSim). The developed module utilizes scanner-calibrated TCM parameters and two localizer radiographs to compute the mAs for each simulated CT projection. This simulation pipeline was validated in two parts. First, DukeSim was validated in the context of a commercial scanner with TCM (SOMATOM Force, Siemens Healthineers) by imaging a physical CT phantom (Mercury, Sun Nuclear) and its computational analogue. Second, the TCM module was validated by imaging a computational anthropomorphic phantom (ATOM, CIRS) using DukeSim with real and module-generated TCM profiles. The validation demonstrated DukeSim's realism in terms of noise magnitude, noise texture, spatial resolution, and image contrast (with average differences of 0.38%, 6.31%, 0.43%, and -9 HU, respectively). It also demonstrated the TCM module's realism in terms of projection-level mAs and resulting noise magnitude (2.86% and -2.60%, respectively). Finally, the framework was applied to a pilot VIT simulating images of three computational anthropomorphic phantoms (XCAT, with body mass indices (BMIs) of 24.3, 28.2, and 33.0) under five different TCM settings. The optimal TCM for each phantom was characterized based on various criteria, such as minimizing mAs or maximizing image quality. 'Very Weak' TCM minimized noise for the 24.3 BMI phantom, while 'Very Strong' TCM minimized noise for the 33.0 BMI phantom. 
This illustrates the utility of the developed framework for future optimization studies of TCM parameters and, more broadly, large-scale VITs with scanner-specific TCM., (© 2021 Institute of Physics and Engineering in Medicine.)
- Published
- 2021
- Full Text
- View/download PDF
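The TCM module above computes a per-projection mAs from scanner-calibrated parameters and two localizer radiographs. The vendor's exact modulation law is proprietary; a common attenuation-exponent model can be sketched as follows (the parameter names and the water attenuation constant are illustrative assumptions, not the paper's calibration):

```python
import math

def tcm_mas(localizer_attenuation, reference_mas, strength):
    """Per-projection mAs from localizer attenuation using a generic
    attenuation-exponent TCM model (vendors use proprietary variants).

    localizer_attenuation: water-equivalent path length (cm) per view.
    strength: 0 disables modulation; 1 fully compensates attenuation.
    """
    mean_a = sum(localizer_attenuation) / len(localizer_attenuation)
    mu_water = 0.19  # approx. linear attenuation of water at CT energies, 1/cm
    return [reference_mas * math.exp(strength * mu_water * (a - mean_a))
            for a in localizer_attenuation]
```

Stronger settings boost mAs through attenuating (lateral) views and lower it through thin (AP) views, holding projection noise more uniform, consistent with the 'Very Weak' to 'Very Strong' settings compared in the pilot VIT.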
30. Comparison of 12 surrogates to characterize CT radiation risk across a clinical population.
- Author
-
Ria F, Fu W, Hoye J, Segars WP, Kapadia AJ, and Samei E
- Subjects
- Adult, Benchmarking, Humans, Monte Carlo Method, Radiation Dosage, Young Adult, Thorax, Tomography, X-Ray Computed
- Abstract
Objectives: Quantifying radiation burden is essential for justification, optimization, and personalization of CT procedures and can be characterized by a variety of risk surrogates, each reflecting radiological risk differently. This study compared how twelve such metrics characterize risk across patient populations., Methods: This study included 1394 CT examinations (abdominopelvic and chest). Organ doses were calculated using Monte Carlo methods. The following risk surrogates were considered: volume computed tomography dose index (CTDIvol), dose-length product (DLP), size-specific dose estimate (SSDE), DLP-based effective dose (EDk), dose to a defining organ (ODD), effective dose and risk index based on organ doses (EDOD, RI), and risk index for a 20-year-old patient (RIrp). The last three metrics were also calculated for a reference ICRP-110 model (ODD,0, ED0, and RI0). Lastly, motivated by the ICRP, an adjusted effective dose was calculated as [Formula: see text]. A linear regression was applied to assess each metric's dependency on RI. The results were characterized in terms of risk sensitivity index (RSI) and risk differentiability index (RDI)., Results: The analysis reported significant differences between the metrics, with EDr showing the best concordance with RI in terms of RSI and RDI. Across all metrics and protocols, RSI ranged between 0.37 (SSDE) and 1.29 (RI0); RDI ranged between 0.39 (EDk) and 0.01 (EDr) cancers × 10³ patients⁻¹ × 100 mGy⁻¹., Conclusion: Different risk surrogates lead to different population risk characterizations. EDr exhibited a close characterization of population risk, also showing the best differentiability. Care should be exercised in drawing risk predictions from unrepresentative risk metrics applied to a population., Key Points: • Radiation risk characterization in CT populations is strongly affected by the surrogate used to describe it. • Different risk surrogates can lead to different characterization of population risk. • Healthcare professionals should exercise care in ascribing an implicit risk to factors that do not closely reflect risk., (© 2021. European Society of Radiology.)
- Published
- 2021
- Full Text
- View/download PDF
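The abstract above does not spell out the RSI/RDI formulas. As a rough stand-in for the analysis pipeline, each surrogate can be mean-normalized and regressed against the risk index, with the slope serving as a sensitivity measure and the residual spread as a differentiability measure (our simplification, not the paper's exact definitions):

```python
def fit_line(x, y):
    """Ordinary least-squares fit y ≈ a*x + b; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

def sensitivity_and_spread(metric, risk_index):
    """Slope of the mean-normalized metric vs. mean-normalized risk
    index (a sensitivity stand-in) and RMS of the residuals (a
    differentiability stand-in)."""
    mm = sum(metric) / len(metric)
    mr = sum(risk_index) / len(risk_index)
    x = [r / mr for r in risk_index]
    y = [m / mm for m in metric]
    a, b = fit_line(x, y)
    resid = [yi - (a * xi + b) for xi, yi in zip(x, y)]
    rms = (sum(e * e for e in resid) / len(resid)) ** 0.5
    return a, rms
```

A surrogate exactly proportional to RI yields a unit slope and zero residual spread; a surrogate insensitive to RI yields a slope near zero.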
31. iPhantom: A Framework for Automated Creation of Individualized Computational Phantoms and Its Application to CT Organ Dosimetry.
- Author
-
Fu W, Sharma S, Abadi E, Iliopoulos AS, Wang Q, Lo JY, Sun X, Segars WP, and Samei E
- Subjects
- Humans, Phantoms, Imaging, Tomography, X-Ray Computed
- Abstract
Objective: This study aims to develop and validate a novel framework, iPhantom, for automated creation of patient-specific phantoms or "digital-twins (DT)" using patient medical images. The framework is applied to assess radiation dose to radiosensitive organs in CT imaging of individual patients., Method: Given a volume of patient CT images, iPhantom segments selected anchor organs and structures (e.g., liver, bones, pancreas) using a learning-based model developed for multi-organ CT segmentation. Organs which are challenging to segment (e.g., intestines) are incorporated from a matched phantom template, using a diffeomorphic registration model developed for multi-organ phantom-voxels. The resulting digital-twin phantoms are used to assess organ doses during routine CT exams., Result: iPhantom was validated both on a set of XCAT digital phantoms (n = 50) and on an independent clinical dataset (n = 10), with similar accuracy. iPhantom precisely predicted all organ locations, yielding Dice Similarity Coefficients (DSC) of 0.6-1 for anchor organs and DSC of 0.3-0.9 for all other organs. iPhantom showed <10% errors in estimated radiation dose for the majority of organs, which was notably superior to the state-of-the-art baseline method (20-35% dose errors)., Conclusion: iPhantom enables automated and accurate creation of patient-specific phantoms and, for the first time, provides sufficient and automated patient-specific dose estimates for CT dosimetry., Significance: The new framework brings the creation and application of CHPs (computational human phantoms) to the level of individualized CHPs through automation, achieving wide and precise organ localization and paving the way for clinical monitoring, personalized optimization, and large-scale research.
- Published
- 2021
- Full Text
- View/download PDF
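The DSC values quoted above are the standard overlap measure 2|A∩B|/(|A|+|B|). A minimal implementation on voxel-index sets:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as
    sets of voxel indices: 2|A∩B| / (|A| + |B|)."""
    if not mask_a and not mask_b:
        return 1.0
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))
```

A DSC of 1 means perfect overlap with the reference segmentation; the anchor organs above reached 0.6-1.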
32. Generation of annotated multimodal ground truth datasets for abdominal medical image registration.
- Author
-
Bauer DF, Russ T, Waldkirch BI, Tönnes C, Segars WP, Schad LR, Zöllner FG, and Golla AK
- Subjects
- Humans, Algorithms, Computer Simulation, Cone-Beam Computed Tomography methods, Image Processing, Computer-Assisted methods, Magnetic Resonance Imaging methods, Phantoms, Imaging
- Abstract
Purpose: Sparsity of annotated data is a major limitation in medical image processing tasks such as registration. Registered multimodal image data are essential for the diagnosis of medical conditions and the success of interventional medical procedures. To overcome the shortage of data, we present a method that allows the generation of annotated multimodal 4D datasets., Methods: We use a CycleGAN network architecture to generate multimodal synthetic data from the 4D extended cardiac-torso (XCAT) phantom and real patient data. Organ masks are provided by the XCAT phantom; therefore, the generated dataset can serve as ground truth for image segmentation and registration. Realistic simulation of respiration and heartbeat is possible within the XCAT framework. To underline the usability as a registration ground truth, a proof of principle registration is performed., Results: Compared to real patient data, the synthetic data showed good agreement regarding the image voxel intensity distribution and the noise characteristics. The generated T1-weighted magnetic resonance imaging, computed tomography (CT), and cone beam CT images are inherently co-registered. Thus, the synthetic dataset allowed us to optimize registration parameters of a multimodal non-rigid registration, utilizing liver organ masks for evaluation., Conclusion: Our proposed framework provides not only annotated but also multimodal synthetic data which can serve as a ground truth for various tasks in medical imaging processing. We demonstrated the applicability of synthetic data for the development of multimodal medical image registration algorithms., (© 2021. The Author(s).)
- Published
- 2021
- Full Text
- View/download PDF
33. A dynamic simulation framework for CT perfusion in stroke assessment built from first principles.
- Author
-
Divel SE, Christensen S, Segars WP, Lansberg MG, and Pelc NJ
- Subjects
- Cerebrovascular Circulation, Humans, Perfusion, Reproducibility of Results, Stroke diagnostic imaging, Tomography, X-Ray Computed
- Abstract
Purpose: Physicians utilize cerebral perfusion maps (e.g., cerebral blood flow, cerebral blood volume, transit time) to prescribe the plan of care for stroke patients. Variability in scanning techniques and post-processing software can result in differences between these perfusion maps. To determine which techniques are acceptable for clinical care, it is important to validate the accuracy and reproducibility of the perfusion maps. Validation using clinical data is challenging due to the lack of a gold standard to assess cerebral perfusion and the impracticality of scanning patients multiple times with different scanning techniques. In contrast, simulated data from a realistic digital phantom of the cerebral perfusion in acute stroke patients would enable studies to optimize and validate the scanning and post-processing techniques., Methods: We describe a complete framework to simulate CT perfusion studies for stroke assessment. We begin by expanding the XCAT brain phantom to enable spatially varying contrast agent dynamics and incorporate a realistic model of the dynamics in the cerebral vasculature derived from first principles. A dynamic CT simulator utilizes the time-concentration curves to define the contrast agent concentration in the object at each time point and generates CT perfusion images compatible with commercially available post-processing software. We also generate ground truth perfusion maps to which the maps generated by post-processing software can be compared., Results: We demonstrate a dynamic CT perfusion study of a simulated patient with an ischemic stroke and the resulting perfusion maps generated by post-processing software. We include a visual comparison between the computer-generated perfusion maps and the ground truth perfusion maps. 
The framework is highly tunable; users can modify the perfusion properties (e.g., occlusion location, CBF, CBV, and MTT), scanner specifications (e.g., focal spot size and detector configuration), scanning protocol (e.g., kVp and mAs), and reconstruction parameters (e.g., slice thickness and reconstruction filter)., Conclusions: This framework provides realistic test data with the underlying ground truth that enables a robust assessment of CT perfusion techniques and post-processing methods for stroke assessment., (© 2021 American Association of Physicists in Medicine.)
- Published
- 2021
- Full Text
- View/download PDF
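The ground-truth perfusion maps above follow from indicator-dilution relations. A minimal sketch under the simplifying assumption of an idealized residue function: CBV from the area ratio of tissue to arterial curves, MTT as the tissue curve's area over its peak, and CBF from the central volume principle (clinical post-processing software instead deconvolves the arterial input function):

```python
def trapz(y, dt):
    """Trapezoidal integral of a uniformly sampled curve y, spacing dt."""
    return dt * (sum(y) - 0.5 * (y[0] + y[-1]))

def perfusion_indices(tissue_tec, arterial_tec, dt, density=1.05):
    """CBV (ml/100 g), MTT (s), and CBF (ml/100 g/min) from
    time-enhancement curves, under an idealized residue function."""
    cbv = 100.0 / density * trapz(tissue_tec, dt) / trapz(arterial_tec, dt)
    mtt = trapz(tissue_tec, dt) / max(tissue_tec)
    cbf = cbv / mtt * 60.0  # per second -> per minute
    return cbv, mtt, cbf
```

The tissue density of 1.05 g/ml is an assumed constant; comparing such simplified maps against the framework's ground truth is exactly the kind of validation the entry describes.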
34. Novel Methodology for Measuring Regional Myocardial Efficiency.
- Author
-
Gullberg GT, Shrestha UM, Veress AI, Segars WP, Liu J, Ordovas K, and Seo Y
- Subjects
- Coronary Circulation, Heart diagnostic imaging, Heart Ventricles diagnostic imaging, Humans, Magnetic Resonance Imaging, Cine, Myocardium, Oxygen Consumption
- Abstract
Our approach differs from the usual global measure of cardiac efficiency by using PET/MRI to measure efficiency of small pieces of cardiac tissue whose limiting size is equal to the spatial resolution of the PET scanner. We initiated a dynamic cardiac PET study immediately prior to the injection of 15.1 mCi of ¹¹C-acetate, acquiring data for 25 minutes while simultaneously acquiring MRI cine data. 1) A 3D finite element (FE) biomechanical model of the imaged heart was constructed by utilizing nonrigid deformable image registration to alter the Dassault Systèmes FE Living Heart Model (LHM) to fit the geometry in the cardiac MRI cine data. The patient-specific FE cardiac model with estimates of stress, strain, and work was transformed into PET/MRI format. 2) A 1-tissue compartment model was used to calculate wash-in (K1), and the linear portion of the decay in the PET ¹¹C-acetate time activity curve (TAC) was used to calculate the wash-out rate constant k2(mono). K1 was used to calculate blood flow, and k2(mono) was used to calculate myocardial volume oxygen consumption (MVO2). 3) Estimates of stress and strain were used to calculate Myocardial Equivalent Minute Work (MEMW), and Cardiac Efficiency = MEMW/MVO2 was then calculated for 17 tissue segments of the left ventricle. The global MBF was 0.96 ± 0.15 ml/min/gm, and MVO2 ranged from 8 to 17 ml/100gm/min. Six central slices of the MRI cine data provided a range of MEMW of 0.1 to 0.4 joules/gm/min and a range of Cardiac Efficiency of 6 to 18%.
- Published
- 2021
- Full Text
- View/download PDF
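In the entry above, step 2's wash-out constant k2(mono) comes from the linear (log-domain) portion of the TAC decay, and step 3 divides work by oxygen consumption. A sketch of both, assuming a log-linear least-squares fit and a caloric equivalent of about 20 J per ml O2 to make the ratio dimensionless (our assumed constant, not stated in the abstract):

```python
import math

def washout_k2_mono(times, activity):
    """Mono-exponential washout rate from the decaying tail of a
    time-activity curve: least-squares fit of log(A) = log(A0) - k2*t."""
    y = [math.log(a) for a in activity]
    n = len(times)
    mx, my = sum(times) / n, sum(y) / n
    sxx = sum((t - mx) ** 2 for t in times)
    sxy = sum((t - mx) * (yi - my) for t, yi in zip(times, y))
    return -sxy / sxx  # negative slope of the log-linear tail

def cardiac_efficiency(memw_j_per_g_min, mvo2_ml_per_100g_min,
                       j_per_ml_o2=20.0):
    """Efficiency = MEMW / MVO2, with MVO2 converted to an energy rate
    via an assumed ~20 J/ml caloric equivalent of oxygen."""
    energy_rate = mvo2_ml_per_100g_min / 100.0 * j_per_ml_o2  # J/g/min
    return memw_j_per_g_min / energy_rate
```

With the abstract's MEMW of 0.1-0.4 J/gm/min and MVO2 near 10 ml/100gm/min, this yields efficiencies roughly in the reported 6-18% range.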
35. A generative adversarial network (GAN)-based technique for synthesizing realistic respiratory motion in the extended cardiac-torso (XCAT) phantoms.
- Author
-
Chang Y, Jiang Z, Segars WP, Zhang Z, Lafata K, Cai J, Yin FF, and Ren L
- Subjects
- Humans, Motion, Phantoms, Imaging, Torso, Four-Dimensional Computed Tomography, Respiration
- Abstract
Objective. Synthesize realistic and controllable respiratory motions in the extended cardiac-torso (XCAT) phantoms by developing a generative adversarial network (GAN)-based deep learning technique. Methods. A motion generation model was developed using bicycle-GAN with a novel 4D generator. Given the end-of-inhale (EOI) phase images and a Gaussian perturbation as input, the model generates inter-phase deformable vector fields (DVFs), which were composed and applied to the input to generate 4D images. The model was trained and validated using 71 4D-CT images from lung cancer patients and then applied to the XCAT EOI images to generate 4D-XCAT with realistic respiratory motions. A separate respiratory motion amplitude control model was built using decision tree regression to predict the input perturbation needed for a specific motion amplitude; this model was developed using 300 4D-XCAT generated from 6 XCAT phantom sizes with 50 different perturbations for each size. In both patient and phantom studies, Dice coefficients for lungs and lung volume variation during respiration were compared between the simulated images and reference images. The generated DVF was evaluated by deformation energy. DVFs and ventilation maps of the simulated 4D-CT were compared with the reference 4D-CTs using cross correlation and Spearman's correlation. Comparison of DVFs and ventilation maps among the original 4D-XCAT, the generated 4D-XCAT, and reference patient 4D-CTs was made to show the improvement of motion realism by the model. The amplitude control error was calculated. Results. Comparing the simulated and reference 4D-CTs, the maximum deviation of lung volume during respiration was 5.8%, and the Dice coefficient reached at least 0.95 for lungs. The generated DVFs presented comparable deformation energy levels. The cross correlation of DVFs achieved 0.89 ± 0.10/0.86 ± 0.12/0.95 ± 0.04 along the x/y/z directions in the testing group. The cross correlation of the derived ventilation maps achieved 0.80 ± 0.05/0.67 ± 0.09/0.68 ± 0.13, and the Spearman's correlation achieved 0.70 ± 0.05/0.60 ± 0.09/0.53 ± 0.01, respectively, in the training/validation/testing groups. The generated 4D-XCAT phantoms presented similar deformation energy as patient data while maintaining the lung volumes of the original XCAT phantom (Dice = 0.95, maximum lung volume variation = 4%). The amplitude control model kept the motion amplitude control error below 0.5 mm. Conclusions. The results demonstrated the feasibility of synthesizing realistic, controllable respiratory motion in the XCAT phantom using the proposed method. This crucial development enhances the value of XCAT phantoms for various 4D imaging and therapy studies., (© 2021 Institute of Physics and Engineering in Medicine.)
- Published
- 2021
- Full Text
- View/download PDF
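The amplitude control model above inverts the perturbation-to-amplitude mapping; the study trains a decision-tree regressor on 300 generated 4D-XCATs, but the core idea reduces to an inverse lookup over calibration pairs (a deliberately simplified sketch, not the paper's regressor):

```python
def perturbation_for_amplitude(calibration, target_amp):
    """Inverse lookup on calibration pairs (perturbation, measured
    amplitude): return the perturbation whose measured amplitude is
    closest to the target amplitude."""
    return min(calibration, key=lambda pa: abs(pa[1] - target_amp))[0]
```

With a dense calibration (50 perturbations per phantom size, as in the entry), such a lookup can keep the amplitude error small; the paper reports control error below 0.5 mm.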
36. Assessment of pleural invasion and adhesion of lung tumors with dynamic chest radiography: A virtual clinical imaging study.
- Author
-
Tanaka R, Samei E, Segars WP, Abadi E, Matsumoto I, Tamura M, Takata M, and Yamashiro T
- Subjects
- Adult, Four-Dimensional Computed Tomography, Humans, Male, Phantoms, Imaging, Respiration, Lung Neoplasms diagnostic imaging, Pleura
- Abstract
Purpose: Accurate preoperative assessment of tumor invasion/adhesion is crucial for planning appropriate operative procedures. Recent advances in digital radiography allow a motion analysis of lung tumors with dynamic chest radiography (DCR) with total exposure dose comparable to that of conventional chest radiography. The aim of this study was to investigate the feasibility of preoperative evaluation of pleural invasion/adhesion of lung tumors with DCR through a virtual clinical imaging study, using a four-dimensional (4D) extended cardiac-torso (XCAT) computational phantom., Methods: An XCAT phantom of an adult man (50th percentile in height and weight) with simulated respiratory and cardiac motions was generated to use as a virtual patient. To simulate lung tumors with and without pleural invasion, a 30-mm diameter tumor sphere was inserted into each lobe of the phantom. The virtual patient during respiration was virtually projected using an x-ray simulator in posteroanterior (PA) and oblique directions, and sequential bone suppression (BS) images were created. The measurement points (tumor, rib, and diaphragm) were automatically tracked on simulated images by a template matching technique. We calculated five quantitative metrics related to the movement distance and directions of the targeted tumor and evaluated whether DCR could distinguish between tumors with and without pleural invasion/adhesion., Results: Precise tracking of the targeted tumor was achieved on the simulated BS images without undue influence of rib shadows. There was a significant difference in all five quantitative metrics between the lung tumors with and without pleural invasion both on the oblique and PA projection views (P < 0.05). Quantitative metrics related to the movement distance were effective for tumors in the middle and lower lobes, while those related to the movement directions were effective for tumors close to the frontal chest wall on the oblique projection view. The oblique views were useful for the evaluation of the space between the chest wall and a moving tumor., Conclusion: DCR could help distinguish between tumors with and without pleural invasion/adhesion based on the two-dimensional movement distance and direction using oblique and PA projection views. With anticipated improvements in image processing to evaluate the respiratory displacement of lung tumors in the upper lobe or behind the heart, DCR holds promise for clinical assessment of tumor invasion/adhesion in the parietal pleura., (© 2021 American Association of Physicists in Medicine.)
- Published
- 2021
- Full Text
- View/download PDF
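The entry above tracks tumor and landmark positions by template matching. A minimal exhaustive normalized cross-correlation tracker over nested-list images (illustrative, not the study's implementation):

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-sized 2D patches."""
    n = len(a) * len(a[0])
    ma = sum(map(sum, a)) / n
    mb = sum(map(sum, b)) / n
    num = sa = sb = 0.0
    for ra, rb in zip(a, b):
        for va, vb in zip(ra, rb):
            num += (va - ma) * (vb - mb)
            sa += (va - ma) ** 2
            sb += (vb - mb) ** 2
    den = (sa * sb) ** 0.5
    return num / den if den else 0.0

def track(template, frame):
    """Top-left position of the best template match in a frame,
    found by exhaustive NCC search."""
    th, tw = len(template), len(template[0])
    best, best_pos = -2.0, (0, 0)
    for y in range(len(frame) - th + 1):
        for x in range(len(frame[0]) - tw + 1):
            patch = [row[x:x + tw] for row in frame[y:y + th]]
            score = ncc(template, patch)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos
```

Tracking the matched position frame-to-frame yields the movement distances and directions from which the study's five metrics are computed.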
37. A GPU-accelerated framework for rapid estimation of scanner-specific scatter in CT for virtual imaging trials.
- Author
-
Sharma S, Abadi E, Kapadia A, Segars WP, and Samei E
- Subjects
- Computer Simulation, Humans, Monte Carlo Method, Phantoms, Imaging, Scattering, Radiation, Cone-Beam Computed Tomography
- Abstract
Virtual imaging trials (VITs), defined as the process of conducting clinical imaging trials using computer simulations, offer a time- and cost-effective alternative to traditional imaging trials for CT. The clinical potential of VITs hinges on the realism of simulations modeling the image acquisition process, where the accurate scanner-specific simulation of scatter in a time-feasible manner poses a particular challenge. To meet this need, this study proposes, develops, and validates a rapid scatter estimation framework, based on GPU-accelerated Monte Carlo (MC) simulations and denoising methods, for estimating scatter in single-source, dual-source, and photon-counting CT. A CT simulator was developed to incorporate parametric models for an anti-scatter grid and a curved energy-integrating detector with an energy-dependent response. The scatter estimates from the simulator were validated using physical measurements acquired on a clinical CT system using the standard single-blocker method. The MC simulator was further extended to incorporate a pre-validated model for a photon-counting detector (PCD) and an additional source-detector pair to model cross scatter in dual-source configurations. To estimate scatter with desirable levels of statistical noise using a manageable computational load, two denoising methods using (1) a convolutional neural network and (2) an optimized Gaussian filter were further deployed. The viability of this framework for clinical VITs was assessed by integrating it with a scanner-specific ray-tracer program to simulate images of an image quality phantom (Mercury) and an anthropomorphic phantom (XCAT). The simulated scatter-to-primary ratios agreed with physical measurements within 4.4% ± 10.8% across all projection angles and kVs. Differences of ∼121 HU between images simulated with and without scatter signify the importance of scatter modeling for clinically realistic images. The denoising methods preserved the magnitudes and trends observed in the reference scatter distributions, with averaged rRMSE values of 0.91 and 0.97 for the two methods, respectively. The execution time of ∼30 s for simulating scatter in a single projection with a desirable level of statistical noise indicates a major improvement in performance, making our tool an eligible candidate for conducting extensive VITs spanning multiple patients and scan protocols., (© 2021 Institute of Physics and Engineering in Medicine.)
- Published
- 2021
- Full Text
- View/download PDF
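Of the two denoising methods in the entry above, the optimized Gaussian filter is the simpler to sketch: smoothing a noisy Monte Carlo scatter estimate is legitimate because scatter is a low-frequency signal. A 1D detector-row version (the kernel width here is illustrative; the study optimizes it):

```python
import math

def gaussian_kernel(sigma, radius=None):
    """Discrete Gaussian kernel, normalized to unit sum."""
    r = radius if radius is not None else max(1, int(3 * sigma))
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-r, r + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(profile, sigma):
    """Convolve a noisy detector-row scatter profile with a Gaussian,
    replicating edge samples at the borders."""
    kern = gaussian_kernel(sigma)
    r = len(kern) // 2
    out = []
    for i in range(len(profile)):
        acc = 0.0
        for j, w in enumerate(kern):
            idx = min(max(i + j - r, 0), len(profile) - 1)
            acc += w * profile[idx]
        out.append(acc)
    return out
```

Smoothing lets far fewer Monte Carlo photon histories reach a desirable statistical noise level, which is what makes the ∼30 s per-projection scatter estimate feasible.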
38. Patient-Informed Organ Dose Estimation in Clinical CT: Implementation and Effective Dose Assessment in 1048 Clinical Patients.
- Author
-
Fu W, Ria F, Segars WP, Choudhury KR, Wilson JM, Kapadia AJ, and Samei E
- Subjects
- Adolescent, Adult, Aged, Aged, 80 and over, Anatomic Landmarks diagnostic imaging, Body Size, Bone and Bones diagnostic imaging, Child, Female, Gestational Age, Humans, Liver diagnostic imaging, Lung diagnostic imaging, Male, Middle Aged, Pelvis diagnostic imaging, Phantoms, Imaging, Pregnancy, Reference Standards, Retrospective Studies, Workflow, Young Adult, Radiation Dosage, Radiation Monitoring methods, Tomography, X-Ray Computed
- Abstract
OBJECTIVE. The purpose of this study is to comprehensively implement a patient-informed organ dose monitoring framework for clinical CT and compare the effective dose (ED) according to the patient-informed organ dose with ED according to the dose-length product (DLP) in 1048 patients. MATERIALS AND METHODS. Organ doses for a given examination are computed by matching the topogram to a computational phantom from a library of anthropomorphic phantoms and scaling the fixed tube current dose coefficients by the examination volume CT dose index (CTDIvol) and the tube-current modulation using a previously validated convolution-based technique. In this study, the library was expanded to 58 adult, 56 pediatric, five pregnant, and 12 International Commission on Radiological Protection (ICRP) reference models, and the technique was extended to include multiple protocols, a bias correction, and uncertainty estimates. The method was implemented in a clinical monitoring system to estimate organ dose and organ dose-based ED for 647 abdomen-pelvis and 401 chest examinations, which were compared with DLP-based ED using a t test. RESULTS. For the majority of the organs, the maximum errors in organ dose estimation were 18% and 8%, averaged across all protocols, without and with bias correction, respectively. For the patient examinations, DLP-based ED differed from organ dose-based ED by as much as 190.9% and 234.7% for chest and abdomen-pelvis scans, respectively (mean, 9.0% and 24.3%). The differences were statistically significant (p < .001) and exhibited overestimation for larger-sized patients and underestimation for smaller-sized patients. CONCLUSION. A patient-informed organ dose estimation framework was comprehensively implemented, applicable to clinical imaging of adult, pediatric, and pregnant patients. Compared with organ dose-based ED, DLP-based ED may overestimate effective dose for larger-sized patients and underestimate it for smaller-sized patients.
- Published
- 2021
- Full Text
- View/download PDF
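The organ-dose-based ED contrasted with DLP-based ED in the entry above is the ICRP tissue-weighted sum over organ equivalent doses. A sketch with a subset of the ICRP 103 weighting factors (the subset and the omission of the TCM scaling step are our simplifications):

```python
# ICRP 103 tissue weighting factors (a subset; remainder organs are
# omitted for brevity, so the weights here do not sum to 1).
W_T = {"lung": 0.12, "stomach": 0.12, "colon": 0.12, "breast": 0.12,
       "red_marrow": 0.12, "gonads": 0.08, "bladder": 0.04,
       "liver": 0.04, "esophagus": 0.04, "thyroid": 0.04}

def organ_dose(dose_coeff, exam_ctdi_vol):
    """Scale a fixed-tube-current organ dose coefficient (mGy per mGy
    of CTDIvol) by the examination CTDIvol, as in the phantom-matching
    approach described above (TCM scaling omitted here)."""
    return dose_coeff * exam_ctdi_vol

def effective_dose(organ_doses_mgy):
    """ED = sum of w_T * H_T over the available organs (mSv; for CT's
    photon radiation, equivalent dose equals absorbed dose)."""
    return sum(W_T[o] * d for o, d in organ_doses_mgy.items() if o in W_T)
```

Because the weighting is applied to actual organ doses rather than a scan-length surrogate, this quantity tracks patient size, which is why DLP-based ED diverges from it at the extremes of the population.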
39. Task-dependent estimability index to assess the quality of cardiac computed tomography angiography for quantifying coronary stenosis.
- Author
-
Samei E, Richards T, Segars WP, Daubert MA, Ivanov A, Rubin GD, Douglas PS, and Hoffmann U
- Abstract
Purpose: Quantifying stenosis in cardiac computed tomography angiography (CTA) images remains a difficult task, as image noise and cardiac motion can degrade image quality and distort underlying anatomic information. The purpose of this study was to develop a computational framework to objectively assess the precision of quantifying coronary stenosis in cardiac CTA. Approach: The framework used models of coronary vessels and plaques, asymmetric motion point spread functions, CT image blur (task-based modulation transfer functions) and noise (noise power spectra), and an automated maximum-likelihood estimator implemented as a matched-template squared-difference operator. These factors were integrated into an estimability index (e') as a task-based measure of image quality in cardiac CTA. The e' index was applied to assess how well it could predict the quality of 132 clinical cases selected from the Prospective Multicenter Imaging Study for Evaluation of Chest Pain trial. The cases were divided into two cohorts, high quality and low quality, based on clinical scores and the concordance of clinical evaluations of cases by experienced cardiac imagers. The framework was also used to ascertain protocol factors for the CTA Biomarker initiative of the Quantitative Imaging Biomarker Alliance (QIBA). Results: The e' index categorized the patient datasets with an area under the curve of 0.985, an accuracy of 0.977, and an optimal e' threshold of 25.58 corresponding to a stenosis estimation precision (standard deviation) of 3.91%. Data resampling and training-test validation methods demonstrated stable classifier thresholds and receiver operating curve performance. The framework was successfully applicable to the QIBA objective.
Conclusions: A computational framework to objectively quantify stenosis estimation task performance was successfully implemented and was reflective of clinical results in the context of a prominent clinical trial with diverse sites, readers, scanners, acquisition protocols, and patients. It also demonstrated the potential for prospective optimization of imaging protocols toward targeted precision and measurement consistency in cardiac CT images., (© 2021 The Authors.)
- Published
- 2021
- Full Text
- View/download PDF
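The maximum-likelihood estimator at the core of e' compares a measured vessel profile against a bank of stenosis templates and picks the squared-difference minimizer, which is the ML choice under additive Gaussian noise. A toy version (profiles and stenosis levels are made up):

```python
def estimate_stenosis(measured_profile, templates):
    """Matched-template squared-difference estimator: return the
    stenosis level whose template profile minimizes the summed squared
    difference to the measured vessel profile (the maximum-likelihood
    estimate under additive white Gaussian noise)."""
    best_level, best_cost = None, float("inf")
    for level, tpl in templates.items():
        cost = sum((m - t) ** 2 for m, t in zip(measured_profile, tpl))
        if cost < best_cost:
            best_level, best_cost = level, cost
    return best_level
```

The spread of such estimates over repeated noisy realizations is the stenosis estimation precision that e' summarizes.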
40. Three-dimensional regions-of-interest-based intra-operative four-dimensional soft tissue perfusion imaging using a standard x-ray system with no gantry rotation: A simulation study for a proof of concept.
- Author
-
Taguchi K, Sauer TJ, Segars WP, Frey EC, Xu J, Liapi E, Stayman JW, Hong K, Hui FK, Unberath M, and Du Y
- Subjects
- Computer Simulation, Cross-Sectional Studies, Perfusion, Phantoms, Imaging, Rotation, X-Rays, Perfusion Imaging, Tomography, X-Ray Computed
- Abstract
Purpose: Many interventional procedures aim at changing soft tissue perfusion or blood flow. One problem at present is that soft tissue perfusion and its changes cannot be assessed in an interventional suite because cone-beam computed tomography is too slow (it takes 4-10 s per volume scan). In order to address the problem, we propose a novel method called IPEN for Intra-operative four-dimensional soft tissue PErfusion using a standard x-ray system with No gantry rotation., Methods: IPEN uses two input datasets: (a) the contours and locations of three-dimensional regions-of-interest (ROIs) such as arteries and sub-sections of cancerous lesions, and (b) a series of x-ray projection data obtained from an intra-arterial contrast injection to contrast enhancement to wash-out. IPEN then estimates a time-enhancement curve (TEC) for each ROI directly from projections without reconstructing cross-sectional images by maximizing the agreement between synthesized and measured projections with a temporal roughness penalty. When path lengths through ROIs are known for each x-ray beam, the ROI-specific enhancement can be accurately estimated from projections. Computer simulations are performed to assess the performance of the IPEN algorithm. Intra-arterial contrast-enhanced liver scans over 25 s were simulated using XCAT phantom version 2.0 with heterogeneous tissue textures and cancerous lesions. 
The following four sub-studies were performed: (a) the accuracy of the estimated TECs with overlapped lesions was evaluated at various noise (dose) levels with either homogeneous or heterogeneous lesion enhancement patterns; (b) the accuracy of IPEN with inaccurate ROI contours was assessed; (c) we investigated how overlapping ROIs and noise in projections affected the accuracy of the IPEN algorithm; and (d) the accuracy of the perfusion indices was assessed., Results: The TECs estimated by IPEN were sufficiently accurate at a reference dose level, with a root-mean-square deviation (RMSD) of 0.0027 ± 0.0001 cm-1 or 13 ± 1 Hounsfield units (mean ± standard deviation) for homogeneous lesion enhancement and 0.0032 ± 0.0005 cm-1 for heterogeneous enhancement (N = 20 each). The accuracy degraded with decreasing dose: the RMSD with homogeneous enhancement was 0.0220 ± 0.0003 cm-1 at 20% of the reference dose level. Performing 3 × 3 pixel averaging on the projection data improved the RMSD to 0.0051 ± 0.0002 cm-1 at 20% dose. When the ROI contours were inaccurate, smaller ROI contours resulted in positive biases in the TECs, whereas larger ROI contours produced negative biases. The bias remained small, within ±0.0070 cm-1, when the Sørensen-Dice coefficients (SDCs) were larger than 0.81. The RMSD of the TEC estimation was strongly associated with the conditioning of the problem, which can be empirically quantified using the condition number of the matrix A_z that maps a vector of ROI enhancement values z to projection data and a weighted variance of the projection data: the linear correlation coefficient (R) was 0.794 (P < 0.001). The perfusion index values computed from the estimated TECs agreed well with the true values (R ≥ 0.985, P < 0.0001)., Conclusion: The IPEN algorithm can estimate ROI-specific TECs with high accuracy, especially when 3 × 3 pixel averaging is applied, even when lesion enhancement is heterogeneous or ROI contours are inaccurate but the SDC is at least 0.81., (© 2020 American Association of Physicists in Medicine.)
- Published
- 2020
- Full Text
- View/download PDF
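The core estimation step described in the IPEN abstract above, recovering ROI enhancement values from projections when the ray path lengths through each ROI are known, can be sketched as a regularized linear solve. This is only an illustration of the idea: the function name, the frame-by-frame closed-form update, and the penalty weight are assumptions, not the paper's implementation.

```python
import numpy as np

def estimate_tecs(A, projections, lam=0.1):
    """Estimate per-ROI time-enhancement curves from projection data.

    A           : (n_rays, n_rois) path length of each ray through each ROI
    projections : (n_frames, n_rays) measured line integrals per time frame
    lam         : weight of the temporal roughness penalty

    Solves, frame by frame,
        z_t = argmin ||A z - p_t||^2 + lam * ||z - z_{t-1}||^2,
    which has the closed form (A^T A + lam I)^-1 (A^T p_t + lam z_{t-1}).
    """
    n_rois = A.shape[1]
    H = A.T @ A + lam * np.eye(n_rois)
    z_prev = np.zeros(n_rois)
    tecs = []
    for p_t in projections:
        z_t = np.linalg.solve(H, A.T @ p_t + lam * z_prev)
        tecs.append(z_t)
        z_prev = z_t
    return np.array(tecs)  # (n_frames, n_rois)
```

Penalizing first differences across all frames jointly, as a full temporal roughness penalty would, leads to the same normal-equation structure but with a block-tridiagonal system.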
41. Impact of Using Uniform Attenuation Coefficients for Heterogeneously Dense Breasts in a Dedicated Breast PET/X-ray Scanner.
- Author
-
MacDonald LR, Lo JY, Sturgeon GM, Zeng C, Harrison RL, Kinahan PE, and Segars WP
- Abstract
We investigated PET image quantification when using a uniform attenuation coefficient (μ) for attenuation correction (AC) of anthropomorphic density phantoms derived from high-resolution breast CT scans. A breast PET system was modeled with perfect data corrections except for AC. Using a uniform μ for AC resulted in quantitative errors roughly proportional to the difference between the μ used in AC (μAC) and the local μ, yielding approximately ±5% bias, corresponding to the variation of μ for 511 keV photons in breast tissue. Global bias was lowest when the uniform μAC was equal to the phantom mean μ (μmean). Local bias in 10-mm spheres increased as the sphere μ deviated from μmean, but remained only 2-3% when μsphere was 6.5% higher than μmean. Bias varied linearly with, and was roughly proportional to, the local μ mismatch. Minimizing local bias, e.g., in a small sphere, required the use of a uniform μ value between the local μ and μmean. Thus, biases from using uniform-μ AC are low when the local μsphere is close to μmean. As μsphere increasingly differs from the phantom μmean, bias increases, and the optimal uniform μ is less predictable, having a value between μsphere and the phantom μmean.
- Published
- 2020
- Full Text
- View/download PDF
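The roughly linear relationship between attenuation-coefficient mismatch and quantification bias reported above follows from the exponential attenuation law, and can be illustrated with a toy first-order calculation. The μ values and path length below are illustrative assumptions, not the paper's data.

```python
import numpy as np

def ac_bias(mu_true, mu_ac, path_cm):
    """Relative quantification bias from attenuation-correcting with a
    uniform coefficient mu_ac when the true local coefficient is mu_true.

    Measured counts are attenuated by exp(-mu_true * L); correcting with
    exp(+mu_ac * L) leaves a residual factor exp((mu_ac - mu_true) * L),
    so the relative bias is that factor minus 1.
    """
    return np.exp((mu_ac - mu_true) * path_cm) - 1.0

# Illustrative numbers: water-like tissue at 511 keV (mu near 0.096 /cm),
# with a sphere whose local mu is 6.5% higher, over a 1 cm path.
mu_mean = 0.096
bias = ac_bias(mu_true=1.065 * mu_mean, mu_ac=mu_mean, path_cm=1.0)
```

For small mismatches the bias is approximately (mu_ac - mu_true) * L, which is why the reported errors scale with the local μ difference.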
42. Virtual clinical trial for quantifying the effects of beam collimation and pitch on image quality in computed tomography.
- Author
-
Abadi E, Segars WP, Harrawood B, Sharma S, Kapadia A, and Samei E
- Abstract
Purpose: To utilize a virtual clinical trial (VCT) construct to investigate the effects of beam collimation and pitch on image quality (IQ) in computed tomography (CT) under different respiratory and cardiac motion rates. Approach: A computational human model [extended cardiac-torso (XCAT) phantom] with added lung lesions was used to simulate seven different rates of cardiac and respiratory motion. A validated CT simulator (DukeSim) was used in this study. A supplemental validation was done to ensure the accuracy of DukeSim across different pitches and beam collimations. Each XCAT phantom was imaged using the CT simulator at multiple pitches (0.5 to 1.5) and beam collimations (19.2 to 57.6 mm) at a constant dose level. The images were compared against the ground truth using three task-generic IQ metrics in the lungs. Additionally, the bias and variability in radiomics (morphological) feature measurements were quantified for the task-specific quantification of lung lesions across the studied imaging conditions. Results: All task-generic metrics degraded by 1.6% to 13.3% with increasing pitch. When imaged with motion, increasing pitch reduced motion artifacts. The IQ degraded only slightly (1.3%) with changes in the studied beam collimations. Patient motion had negative effects (within 7%) on the IQ. Among all features across all imaging conditions studied, compactness2 and elongation showed the largest (-26.5%, 7.8%) and smallest (-0.8%, 2.7%) relative bias and variability, respectively. The radiomics results were robust across the motion profiles studied. Conclusions: While high pitch and large beam collimations can negatively affect the quality of CT images, they are desirable for fast imaging. Further, our results showed no major adverse effects on the morphological quantification of lung lesions with increasing pitch or beam collimation.
VCTs, such as the one demonstrated in this study, represent a viable methodology for experiments in CT., (© 2020 Society of Photo-Optical Instrumentation Engineers (SPIE).)
- Published
- 2020
- Full Text
- View/download PDF
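The "relative bias and variability" figures quoted for the radiomics features above can be computed, under one common pair of definitions (mean relative error and coefficient of variation; the paper's exact definitions may differ), as:

```python
import numpy as np

def relative_bias_and_variability(measured, truth):
    """Relative bias and variability of a radiomics feature measurement.

    measured : feature values measured across imaging conditions
    truth    : ground-truth feature value from the phantom

    Bias is taken as the mean relative error; variability as the
    coefficient of variation of the measurements. Both definitions are
    assumptions for illustration, not taken from the paper.
    """
    measured = np.asarray(measured, dtype=float)
    rel_err = (measured - truth) / truth
    bias = rel_err.mean()
    variability = measured.std(ddof=1) / measured.mean()
    return bias, variability
```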
43. Virtual clinical trials in medical imaging: a review.
- Author
-
Abadi E, Segars WP, Tsui BMW, Kinahan PE, Bottenus N, Frangi AF, Maidment A, Lo J, and Samei E
- Abstract
The accelerating complexity and variety of medical imaging devices and methods have outpaced the ability to evaluate and optimize their design and clinical use. This is a significant and increasing challenge for both scientific investigations and clinical applications. Evaluations would ideally be done using clinical imaging trials. These experiments, however, are often not practical due to ethical limitations, expense, time requirements, or lack of ground truth. Virtual clinical trials (VCTs) (also known as in silico imaging trials or virtual imaging trials) offer an alternative means to efficiently evaluate medical imaging technologies virtually. They do so by simulating the patients, imaging systems, and interpreters. The field of VCTs has been constantly advanced over the past decades in multiple areas. We summarize the major developments and current status of the field of VCTs in medical imaging. We review the core components of a VCT: computational phantoms, simulators of different imaging modalities, and interpretation models. We also highlight some of the applications of VCTs across various imaging modalities., (© 2020 Society of Photo-Optical Instrumentation Engineers (SPIE).)
- Published
- 2020
- Full Text
- View/download PDF
44. Development of realistic multi-contrast textured XCAT (MT-XCAT) phantoms using a dual-discriminator conditional-generative adversarial network (D-CGAN).
- Author
-
Chang Y, Lafata K, Segars WP, Yin FF, and Ren L
- Subjects
- Contrast Media, Humans, Pilot Projects, Four-Dimensional Computed Tomography instrumentation, Machine Learning, Phantoms, Imaging
- Abstract
We developed a machine learning-based method to generate multi-contrast anatomical textures in the 4D extended cardiac-torso (XCAT) phantom for more realistic imaging simulations. As a pilot study, we synthesized CT and CBCT textures in the chest region. For training purposes, major organs and gross tumor volumes (GTVs) in the chest region were segmented from real patient images and assigned different HU values to generate organ maps, which resemble the XCAT images. A dual-discriminator conditional-generative adversarial network (D-CGAN) was developed to synthesize anatomical textures in the corresponding organ maps. The D-CGAN was uniquely designed with two discriminators, one trained for the body and the other for the tumor. Various XCAT phantoms were input to the D-CGAN to generate textured XCAT phantoms. The D-CGAN model was trained separately using 62 CT and 63 CBCT images from lung SBRT patients to generate multi-contrast textured XCAT (MT-XCAT) phantoms. The MT-XCAT phantoms were evaluated by comparing their intensity histograms and radiomic features with those from real patient images using the Wilcoxon rank-sum test. Visual examination demonstrated that the MT-XCAT phantoms presented similar general contrast and anatomical textures as CT and CBCT images. The mean HU of the MT-XCAT-CT and MT-XCAT-CBCT were [Formula: see text] and [Formula: see text], compared with that of real CT ([Formula: see text]) and CBCT ([Formula: see text]). The majority of radiomic features from the MT-XCAT phantoms followed the same distributions as those of the real images according to the Wilcoxon rank-sum test, except for a limited number of second-order features. The study demonstrated the feasibility of generating realistic MT-XCAT phantoms using the D-CGAN. The MT-XCAT phantoms can be further expanded to include other modalities (MRI, PET, ultrasound, etc) under the same scheme.
This crucial development greatly enhances the value of the phantom for various clinical applications, including testing and optimizing novel imaging techniques, validation of radiomics analysis methods, and virtual clinical trials.
- Published
- 2020
- Full Text
- View/download PDF
45. A real-time Monte Carlo tool for individualized dose estimations in clinical CT.
- Author
-
Sharma S, Kapadia A, Fu W, Abadi E, Segars WP, and Samei E
- Subjects
- Adult, Child, Humans, Male, Phantoms, Imaging, Radiation Exposure, Time Factors, Monte Carlo Method, Radiation Dosage, Radiometry methods, Tomography, X-Ray Computed
- Abstract
The increasing awareness of the adverse effects associated with radiation exposure in computed tomography (CT) has necessitated the quantification of the dose delivered to patients for better risk assessment in the clinic. The current methods for dose quantification used in the clinic are approximations, lacking realistic models of the irradiation conditions used in the scan and of the anatomy of the patient being imaged, which limits their relevance for a particular patient. The established gold-standard technique for individualized dose quantification uses Monte Carlo (MC) simulations in anatomically realistic computational phantoms to provide patient-specific estimates of organ dose. Although accurate, MC simulations are computationally expensive, which limits their utility for time-constrained applications in the clinic. To overcome these shortcomings, a real-time GPU-based MC tool based on the FDA's MC-GPU framework was developed for patient- and scanner-specific dosimetry in the clinic. The tool was validated against (1) AAPM's TG-195 reference datasets and (2) physical measurements of dose acquired using TLD chips in adult and pediatric anthropomorphic phantoms. To demonstrate its utility for providing individualized dose estimates, it was integrated with an automatic segmentation method for generating patient-specific models, which were then used to estimate patient- and scanner-specific organ doses for a select population of 50 adult patients using a clinically relevant CT protocol. The organ dose estimates were compared to corresponding dose estimates from a previously validated MC method based on PENELOPE. The dose estimates from our MC tool agreed within 5% for all organs (except the thyroid) tabulated by TG-195 and within 10% for all TLD locations in the adult and pediatric phantoms, across all validation cases.
Compared against PENELOPE, the organ dose estimates agreed within 3% on average for all organs in the patient population study. The average run duration for each patient was estimated at 23.79 s, representing a significant speedup (~700×) over existing non-parallelized MC methods. The accuracy of the dose estimates, combined with the significant improvement in execution time, suggests that the proposed MC tool is a feasible solution for real-time individualized dosimetry in the clinic.
- Published
- 2019
- Full Text
- View/download PDF
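The Monte Carlo principle behind tools like the one above, sampling photon interaction sites from the attenuation statistics and tallying deposited energy per voxel, can be shown with a deliberately minimal 1D sketch. Everything here (no scatter, full local energy deposition, a uniform slab) is a toy assumption for illustration, not the MC-GPU physics.

```python
import numpy as np

def mc_slab_dose(mu, voxel_cm, n_voxels, n_photons, energy, rng):
    """Toy Monte Carlo energy deposition in a 1D slab of uniform material.

    Photons enter at depth 0; free path lengths are sampled from the
    exponential distribution with attenuation coefficient mu (1/cm), and
    each interacting photon deposits its full energy locally (a gross
    simplification with no scatter, for illustration only).
    """
    depths = rng.exponential(1.0 / mu, size=n_photons)   # interaction depths (cm)
    voxel_idx = (depths / voxel_cm).astype(int)          # voxel of each interaction
    dose = np.zeros(n_voxels)
    inside = voxel_idx < n_voxels                        # photons that exit deposit nothing
    np.add.at(dose, voxel_idx[inside], energy)           # tally (handles repeats)
    return dose
```

The fraction of energy deposited in the first voxel converges to 1 - exp(-mu * voxel_cm), which is a convenient analytic check; GPU implementations parallelize exactly this embarrassingly parallel photon loop.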
46. Organ doses from CT localizer radiographs: Development, validation, and application of a Monte Carlo estimation technique.
- Author
-
Hoye J, Sharma S, Zhang Y, Fu W, Ria F, Kapadia A, Segars WP, Wilson J, and Samei E
- Subjects
- Adult, Early Detection of Cancer, Humans, Lung Neoplasms diagnostic imaging, Phantoms, Imaging, Monte Carlo Method, Radiation Dosage, Tomography, X-Ray Computed
- Abstract
Purpose: The purpose of this study was to simulate and validate organ doses from different computed tomography (CT) localizer radiograph geometries using Monte Carlo methods for a population of patients., Methods: A Monte Carlo method was developed to estimate organ doses from CT localizer radiographs using PENELOPE. The method was validated by comparing dosimetry estimates with measurements using an anthropomorphic phantom embedded with thermoluminescent dosimeters (TLDs) scanned on a commercial CT system (Siemens SOMATOM Flash). The Monte Carlo simulation platform was then applied to conduct a population study with 57 adult computational phantoms (XCAT). In the population study, clinically relevant chest localizer protocols were simulated with the x-ray tube in anterior-posterior (AP), right lateral, and posterior-anterior (PA) positions. Mean organ doses and associated standard deviations (in mGy) were then estimated for all simulations. The obtained organ doses were studied as a function of patient chest diameter. Organ doses for the breast and lung were compared across the different views and expressed as percentages of the organ doses from rotational CT scans., Results: The validation study showed agreement between the Monte Carlo and physical TLD measurements, with a maximum percent difference of 15.5% and a mean difference of 3.5% across all organs. The XCAT population study showed that the breast dose from AP localizers was the highest, with a mean value of 0.24 mGy across patients, while the lung dose was relatively consistent across the different localizer geometries. The organ dose estimates were found to vary across the patient population, partially explained by changes in patient chest diameter. The average effective dose was 0.18 mGy for the AP, 0.09 mGy for the lateral, and 0.08 mGy for the PA localizer., Conclusion: A platform to estimate organ doses in CT localizer scans using Monte Carlo methods was implemented and validated based on comparison with physical dose measurements.
The simulation platform was applied to a virtual patient population, where the localizer organ doses were found to range within 0.4%-8.6% of corresponding organ doses for a typical CT scan, 0.2%-3.3% of organ doses for a CT pulmonary angiography scan, and 1.1%-20.8% of organ doses for a low-dose lung cancer screening scan., (© 2019 American Association of Physicists in Medicine.)
- Published
- 2019
- Full Text
- View/download PDF
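The effective-dose figures above are, by definition, tissue-weighted sums of organ doses, E = sum over tissues T of w_T * H_T. A minimal sketch follows; the weights shown are the ICRP 103 values for those tissues, but the organ-dose numbers are made up for illustration and a real calculation would sum over all ICRP tissues, not just the ones supplied here.

```python
def effective_dose(organ_dose_mgy, tissue_weight):
    """Effective dose as the tissue-weighted sum of organ doses,
    E = sum_T w_T * H_T.

    For photon irradiation the radiation weighting factor is 1, so organ
    absorbed dose in mGy stands in for equivalent dose in mSv.
    """
    return sum(tissue_weight[t] * d for t, d in organ_dose_mgy.items())

# ICRP 103 tissue weighting factors for a few chest organs.
icrp103_w = {"lung": 0.12, "breast": 0.12, "thyroid": 0.04}

# Hypothetical organ doses (mGy) for an AP localizer; not the paper's values.
e_dose = effective_dose({"lung": 0.10, "breast": 0.24}, icrp103_w)
```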
47. Development of a scanner-specific simulation framework for photon-counting computed tomography.
- Author
-
Abadi E, Harrawood B, Rajagopal JR, Sharma S, Kapadia A, Segars WP, Stierstorfer K, Sedlmair M, Jones E, and Samei E
- Abstract
The aim of this study was to develop and validate a simulation platform that generates photon-counting CT images of voxelized phantoms with detailed modeling of manufacturer-specific components, including the geometry and physics of the x-ray source, source filtrations, anti-scatter grids, and photon-counting detectors. The simulator generates projection images accounting for both primary and scattered photons using a computational phantom, a scanner configuration, and imaging settings. Beam hardening artifacts are corrected using a spectrum- and threshold-dependent water correction algorithm. Physical and computational versions of a clinical phantom (ACR) were used for validation purposes. The physical phantom was imaged using a research prototype photon-counting CT (Siemens Healthcare) in standard (macro) mode, at four dose levels and with two energy thresholds. The computational phantom was imaged with the developed simulator using the same parameters and settings as the actual acquisition. Images from both the real and simulated acquisitions were reconstructed using a reconstruction software package (FreeCT). Primary image quality metrics such as noise magnitude, noise ratio, noise correlation coefficients, noise power spectrum, CT number, in-plane modulation transfer function, and slice sensitivity profiles were extracted from both the real and simulated data and compared. The simulator was further evaluated for imaging contrast materials (bismuth, iodine, and gadolinium) at three concentration levels and six energy thresholds. Qualitatively, the simulated images showed a similar appearance to the real ones. Quantitatively, the relative errors in the image quality measurements were all less than 4% across all measurements. The developed simulator will enable the systematic optimization and evaluation of the emerging photon-counting computed tomography technology.
- Published
- 2019
- Full Text
- View/download PDF
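Among the image-quality metrics listed above, the noise power spectrum is typically computed directly from an ensemble of noise-only ROIs. A standard estimate is sketched below; the function name and normalization convention are one common choice, not necessarily the one used in the paper.

```python
import numpy as np

def nps_2d(noise_rois, pixel_mm):
    """Ensemble estimate of the 2D noise power spectrum,
    NPS = <|FFT2(roi)|^2> * dx * dy / (Nx * Ny),
    from a stack of noise-only ROIs (each ROI is mean-subtracted first).
    """
    rois = np.asarray(noise_rois, dtype=float)
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)
    n, ny, nx = rois.shape
    ft = np.fft.fft2(rois)                 # FFT over the last two axes
    return (np.abs(ft) ** 2).mean(axis=0) * (pixel_mm ** 2) / (nx * ny)
```

By Parseval's theorem, summing the NPS over all frequency bins times the bin area 1/(Nx * dx * Ny * dy) recovers the pixel variance, which makes a handy sanity check.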
48. DukeSim: A Realistic, Rapid, and Scanner-Specific Simulation Framework in Computed Tomography.
- Author
-
Abadi E, Harrawood B, Sharma S, Kapadia A, Segars WP, and Samei E
- Subjects
- Algorithms, Humans, Monte Carlo Method, Phantoms, Imaging, Computer Simulation, Image Processing, Computer-Assisted methods, Software, Tomography, X-Ray Computed methods
- Abstract
The purpose of this study was to develop a CT simulation platform that is: 1) compatible with voxel-based computational phantoms; 2) capable of modeling the geometry and physics of commercial CT scanners; and 3) computationally efficient. Such a simulation platform is designed to enable the virtual evaluation and optimization of CT protocols and parameters for achieving a targeted image quality while reducing radiation dose. Given a voxelized computational phantom and a parameter file describing the desired scanner and protocol, the developed platform DukeSim calculates projection images using a combination of ray-tracing and Monte Carlo techniques. DukeSim includes detailed models for the detector quantum efficiency, quantum and electronic noise, detector crosstalk, subsampling of the detector and focal spot areas, focal spot wobbling, and the bowtie filter. DukeSim was accelerated using GPU computing. The platform was validated using physical and computational versions of a phantom (Mercury phantom). Clinical and simulated CT scans of the phantom were acquired at multiple dose levels using a commercial CT scanner (Somatom Definition Flash; Siemens Healthcare). The real and simulated images were compared in terms of image contrast, noise magnitude, noise texture, and spatial resolution. The relative error between the clinical and simulated images was less than 1.4%, 0.5%, 2.6%, and 3%, for image contrast, noise magnitude, noise texture, and spatial resolution, respectively, demonstrating the high realism of DukeSim. The runtime, dependent on the imaging task and the hardware, was approximately 2-3 minutes per rotation in our study using a computer with 4 GPUs. DukeSim, when combined with realistic human phantoms, provides the necessary toolset with which to perform large-scale and realistic virtual clinical trials in a patient and scanner-specific manner.
- Published
- 2019
- Full Text
- View/download PDF
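The ray-tracing half of a simulator like the one above reduces to computing line integrals of the attenuation volume along source-to-detector rays. A minimal sampled approximation is sketched here; real simulators use exact Siddon/Jacobs-style voxel traversal and far more physics, and the names and sampling step below are assumptions.

```python
import numpy as np

def ray_line_integral(vol, spacing, src, dst, step=0.25):
    """Approximate line integral of a voxelized attenuation volume along a
    ray from src to dst (world coordinates, mm) by uniform sampling.

    Accuracy improves as `step` (mm) shrinks; samples falling outside the
    volume contribute nothing.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    length = np.linalg.norm(dst - src)
    n = max(int(length / step), 1)
    t = (np.arange(n) + 0.5) / n
    pts = src + t[:, None] * (dst - src)           # sample points along the ray
    idx = np.floor(pts / spacing).astype(int)      # voxel containing each sample
    ok = np.all((idx >= 0) & (idx < vol.shape), axis=1)
    values = vol[idx[ok, 0], idx[ok, 1], idx[ok, 2]]
    return values.sum() * (length / n)             # sum of mu * segment length
```

For a uniform volume the result reduces to mu times the chord length through the volume, which gives a quick correctness check before pointing the function at an XCAT-style phantom.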
49. Multiresolution spatiotemporal mechanical model of the heart as a prior to constrain the solution for 4D models of the heart.
- Author
-
Gullberg GT, Veress AI, Shrestha UM, Liu J, Ordovas K, Segars WP, and Seo Y
- Abstract
In several nuclear cardiac imaging applications (SPECT and PET), images are formed by reconstructing tomographic data using an iterative reconstruction algorithm with corrections for the physical factors involved in the imaging detection process and for cardiac and respiratory motion. The physical factors are modeled as coefficients in the matrix of a system of linear equations and include attenuation, scatter, and spatially varying geometric response. The solution to the tomographic problem involves inverting this system matrix. This requires the design of an iterative reconstruction algorithm with a statistical model that best fits the data acquisition. The most appropriate model is based on a Poisson distribution. Using Bayes' theorem, an iterative reconstruction algorithm is designed to determine the maximum a posteriori estimate of the reconstructed image, with constraints that maximize the Bayesian likelihood function for the Poisson statistical model. The a priori distribution is formulated as the joint entropy (JE), measuring the similarity between the gated cardiac PET image and the cardiac MRI cine image modeled as a finite element (FE) mechanical model. The developed algorithm shows the potential of using an FE mechanical model of the heart, derived from a cardiac MRI cine scan, to constrain solutions of gated cardiac PET images.
- Published
- 2019
- Full Text
- View/download PDF
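The Bayesian formulation in the abstract above leads, in practice, to iterations of the MLEM family; Green's one-step-late scheme is a common way to fold a prior gradient into the Poisson EM update. A sketch follows: with beta = 0 it reduces to plain MLEM, and the joint-entropy prior of the paper would enter through prior_grad, which is left generic here as an assumption.

```python
import numpy as np

def map_em(A, counts, prior_grad, n_iter=50, beta=0.0):
    """One-step-late MAP-EM for Poisson tomographic data.

    A          : (n_bins, n_voxels) system matrix (geometry, attenuation, ...)
    counts     : (n_bins,) measured Poisson counts
    prior_grad : callable x -> gradient of the prior energy U at x
    beta       : prior strength (beta = 0 gives plain MLEM)

    Green's one-step-late update:
        x <- x * A^T (y / (A x)) / (A^T 1 + beta * grad U(x))
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                     # A^T 1, the sensitivity image
    for _ in range(n_iter):
        proj = A @ x
        ratio = np.divide(counts, proj, out=np.zeros_like(proj), where=proj > 0)
        x = x * (A.T @ ratio) / (sens + beta * prior_grad(x))
    return x
```

The update preserves positivity and, for consistent noiseless data with beta = 0, converges to the exact solution, which makes small synthetic systems a convenient test bed.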
50. Fetal XCMR: a numerical phantom for fetal cardiovascular magnetic resonance imaging.
- Author
-
Roy CW, Marini D, Segars WP, Seed M, and Macgowan CK
- Subjects
- Anatomic Landmarks, Female, Humans, Numerical Analysis, Computer-Assisted, Predictive Value of Tests, Pregnancy, Reproducibility of Results, Computer Simulation, Fetal Heart diagnostic imaging, Magnetic Resonance Imaging instrumentation, Models, Cardiovascular, Phantoms, Imaging, Prenatal Diagnosis instrumentation
- Abstract
Background: Validating new techniques for fetal cardiovascular magnetic resonance (CMR) is challenging due to random fetal movement that precludes repeat measurements. Consequently, fetal CMR development has been largely performed using physical phantoms or postnatal volunteers. In this work, we present an open-source simulation designed to aid in the development and validation of new approaches for fetal CMR. Our approach, fetal extended Cardiac-Torso cardiovascular magnetic resonance imaging (Fetal XCMR), builds on established methods for simulating CMR acquisitions but is tailored toward the dynamic physiology of the fetal heart and body. We present comparisons between the Fetal XCMR phantom and data acquired in utero in terms of image quality, anatomy, tissue signals, and contrast., Methods: Existing extended Cardiac-Torso models are modified to create the maternal and fetal anatomy, combined according to simulated motion, mapped to CMR contrast, and converted to CMR data. To provide a comparison between the proposed simulation and experimental fetal CMR images acquired in utero, images from a typical scan of a pregnant woman are included, and simulated acquisitions were generated using matching CMR parameters, motion, and noise levels. Three reconstruction methods (static, real-time, and CINE) and two motion estimation methods (translational motion, fetal heart rate) were applied to data acquired in transverse, sagittal, coronal, and short-axis planes of the fetal heart to compare with in utero acquisitions and demonstrate the feasibility of the proposed simulation framework., Results: Overall, CMR contrast, morphology, and the relative proportions of the maternal and fetal anatomy are well represented by the Fetal XCMR images when comparing the simulation to static images acquired in utero. Additionally, the visualization of maternal respiratory and fetal cardiac motion is comparable between Fetal XCMR and in utero real-time images.
Finally, high quality CINE image reconstructions provide excellent delineation of fetal cardiac anatomy and temporal dynamics for both data types., Conclusion: The fetal CMR phantom provides a new method for evaluating fetal CMR acquisition and reconstruction methods by simulating the underlying anatomy and physiology. As the field of fetal CMR continues to grow, new methods will become available and require careful validation. The fetal CMR phantom is therefore a powerful and convenient tool in the continued development of fetal cardiac imaging.
- Published
- 2019
- Full Text
- View/download PDF