150 results for "Hsieh, Scott S."
Search Results
2. Targeted Training Reduces Search Errors but Not Classification Errors for Hepatic Metastasis Detection at Contrast-Enhanced CT
- Author
Hsieh, Scott S., Inoue, Akitoshi, Yalon, Mariana, Cook, David A., Gong, Hao, Sudhir Pillai, Parvathy, Johnson, Matthew P., Fidler, Jeff L., Leng, Shuai, Yu, Lifeng, Carter, Rickey E., Holmes, David R., III, McCollough, Cynthia H., and Fletcher, Joel G.
- Published
- 2024
- Full Text
- View/download PDF
3. Aberration correction in diagnostic ultrasound: A review of the prior field and current directions
- Author
Ali, Rehman, Brevett, Thurston, Zhuang, Louise, Bendjador, Hanna, Podkowa, Anthony S., Hsieh, Scott S., Simson, Walter, Sanabria, Sergio J., Herickhoff, Carl D., and Dahl, Jeremy J.
- Published
- 2023
- Full Text
- View/download PDF
4. Existence, uniqueness, and efficiency of numerically unbiased attenuation pathlength estimators for photon counting detectors at low count rates.
- Author
Hsieh, Scott S. and Rajbhandary, Paurakh L.
- Subjects
  PHOTON detectors, PHOTON counting, VECTOR spaces, COMPUTED tomography, CONVEX functions
- Abstract
Background: The first step in computed tomography (CT) reconstruction is to estimate attenuation pathlength. Usually, this is done with a logarithm transformation, which is the direct solution to the Beer–Lambert law. At low signals, however, the logarithm estimator is biased. Bias arises both from the curvature of the logarithm and from the possibility of detecting zero counts, so a data substitution strategy may be employed to avoid the singularity of the logarithm. Recent progress has been made by Li et al. [IEEE Trans Med Img 42:6, 2023] to modify the logarithm estimator to eliminate curvature bias, but the optimal strategy for mitigating bias from the singularity remains unknown. Purpose: The purpose of this study was to use numerical techniques to construct unbiased attenuation pathlength estimators that are alternatives to the logarithm estimator, and to study the uniqueness and optimality of possible solutions, assuming a photon counting detector. Methods: Formally, an attenuation pathlength estimator is a mapping from integer detector counts to real pathlength values. We constrain our focus to only the small signal inputs that are problematic for the logarithm estimator, which we define as inputs of <100 counts, and we consider estimators that use only a single input and that are not informed by adjacent measurements (e.g., adaptive smoothing). The set of all possible pathlength estimators can then be represented as points in a 100-dimensional vector space. Within this vector space, we use optimization to select the estimator that (1) minimizes mean squared error and (2) is unbiased. We define "unbiased" as satisfying the numerical condition that the maximum bias be less than 0.001 across a continuum of 1000 object thicknesses that span the desired operating range. Because the objective function is convex and the constraints are affine, optimization is tractable and guaranteed to converge to the global minimum. We further examine the nullspace of the constraint matrix to understand the uniqueness of possible solutions, and we compare the results to the Cramér–Rao bound of the variance. Results: We first show that an unbiased attenuation pathlength estimator does not exist if very low mean detector signals (equivalently, very thick objects) are permitted. It is necessary to select a minimum mean detector signal for which unbiased behavior is desired. If we select two counts, the optimal estimator is similar to Li's estimator. If we select one count, the optimal estimator becomes non-monotonic. The oscillations cause the unbiased estimator to be noise amplifying. The nullspace of the constraint matrix is high-dimensional, so that unbiased solutions are not unique. The Cramér–Rao bound of the variance matches well with the expected $I^{-0.5}$ scaling law and cannot be attained. Conclusions: If arbitrarily thick objects are permitted, an unbiased attenuation pathlength estimator does not exist. If the maximum thickness is restricted, an unbiased estimator exists but is not unique. An optimal estimator can be selected that minimizes variance, but a bias–variance tradeoff exists where a larger domain of unbiased behavior requires increased variance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
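The low-count bias that motivates this paper can be reproduced in a few lines. This is a minimal sketch under an assumed Poisson model with a common zero-count substitution of 0.5, not the paper's optimized estimator; the incident flux and pathlength values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustration (not the paper's optimizer): bias of the logarithm
# pathlength estimator at low photon counts, with the common
# data substitution of replacing zero counts by 0.5.
I0 = 20.0               # mean counts with no object (low-signal regime)
true_pathlength = 2.0   # in attenuation units (mu * L)
mean_counts = I0 * np.exp(-true_pathlength)

n = rng.poisson(mean_counts, size=1_000_000)
n_sub = np.where(n == 0, 0.5, n)      # avoid the log(0) singularity
estimates = -np.log(n_sub / I0)       # Beer-Lambert logarithm estimator

bias = estimates.mean() - true_pathlength
print(f"mean estimate = {estimates.mean():.3f}, bias = {bias:+.3f}")
```

At this signal level the logarithm estimator overestimates pathlength by roughly 0.2, which is the kind of bias the constructed estimators are designed to remove.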
5. Limitations of dual-energy CT in the detection of monosodium urate deposition in dense liquid tophi and calcified tophi
- Author
Ahn, Se Jin, Zhang, Dawen, Levine, Benjamin D., Dalbeth, Nicola, Pool, Bregina, Ranganath, Veena K., Benhaim, Prosper, Nelson, Scott D., Hsieh, Scott S., and FitzGerald, John D.
- Published
- 2021
- Full Text
- View/download PDF
6. 3D printed phantom with 12 000 submillimeter lesions to improve efficiency in CT detectability assessment.
- Author
Shunhavanich, Picha, Mei, Kai, Shapira, Nadav, Stayman, Joseph Webster, McCollough, Cynthia H., Gang, Grace, Leng, Shuai, Geagan, Michael, Yu, Lifeng, Noël, Peter B., and Hsieh, Scott S.
- Subjects
  RECEIVER operating characteristic curves, PRINTMAKING, THREE-dimensional printing, STANDARD deviations
- Abstract
Background: The detectability performance of a CT scanner is difficult to precisely quantify when nonlinearities are present in reconstruction. An efficient detectability assessment method that is sensitive to small effects of dose and scanner settings is desirable. We previously proposed a method using a search challenge instrument: a phantom is embedded with hundreds of lesions at random locations, and a model observer is used to detect lesions. Preliminary tests in simulation and a prototype showed promising results. Purpose: In this work, we fabricated a full-size search challenge phantom with design updates, including changes to lesion size, contrast, and number, and studied our implementation by comparing lesion detectability from a nonprewhitening (NPW) model observer between different reconstructions at different exposure levels, and by estimating the instrument's sensitivity to detect changes in dose. Methods: Designed to fit into QRM anthropomorphic phantoms, our search challenge phantom is a cylindrical insert 10 cm wide and 4 cm thick, embedded with 12 000 lesions (nominal width of 0.6 mm, height of 0.8 mm, and contrast of −350 HU), and was fabricated using PixelPrint, a 3D printing technique. The insert was scanned alone at a high dose to assess printing accuracy. To evaluate lesion detectability, the insert was placed in a QRM thorax phantom and scanned from 50 to 625 mAs in increments of 25 mAs, once per exposure level, and the average of all exposure levels was used as the high-dose reference. Scans were reconstructed with three different settings: filtered backprojection (FBP) with Br40 and Br59 kernels, and Sinogram Affirmed Iterative Reconstruction (SAFIRE) with strength level 5 and the Br59 kernel. An NPW model observer was used to search for lesions, and the detection performance of the different settings was compared using the area under the exponential transform of the free-response ROC curve (AUC). Using propagation of uncertainty, the sensitivity to changes in dose was estimated as the percent change in exposure corresponding to one standard deviation of AUC, measured from 5 repeat scans at 100, 200, 300, and 400 mAs. Results: The printed insert lesions had an average position error of 0.20 mm compared to the printing reference. As the exposure level increased from 50 mAs to 625 mAs, the lesion detectability AUCs increased from 0.38 to 0.92, 0.42 to 0.98, and 0.41 to 0.97 for FBP Br40, FBP Br59, and SAFIRE Br59, respectively, with a lower rate of increase at higher exposure levels. FBP Br59 performed best, with AUC 0.01 higher than SAFIRE Br59 on average and 0.07 higher than FBP Br40 (all P < 0.001). The standard deviation of AUC was less than 0.006, and the sensitivity to detect changes in mAs was within 2% for FBP Br59. Conclusions: Our 3D-printed search challenge phantom with 12 000 submillimeter lesions, together with an NPW model observer, provides an efficient CT detectability assessment method that is sensitive to subtle effects in reconstruction and to small changes in dose. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
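The nonprewhitening (NPW) model observer used in this study can be sketched as a matched filter applied without noise prewhitening. The disk template, contrast, and white-noise background below are illustrative assumptions, not the study's phantom or scan data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal NPW observer sketch: the known signal template is applied
# directly as a matched filter, t = sum(s * g), with no prewhitening.
size, sigma_noise, amplitude = 16, 1.0, 0.3
y, x = np.mgrid[:size, :size] - size // 2
signal = amplitude * (x**2 + y**2 <= 3**2)   # disk lesion template

def npw_statistic(img):
    return float(np.sum(signal * img))

n_trials = 2000
t_absent = np.array([npw_statistic(rng.normal(0, sigma_noise, (size, size)))
                     for _ in range(n_trials)])
t_present = np.array([npw_statistic(signal + rng.normal(0, sigma_noise, (size, size)))
                      for _ in range(n_trials)])

# Empirical AUC: probability that a lesion-present statistic exceeds a
# lesion-absent one (the Wilcoxon interpretation of ROC area).
auc = np.mean(t_present[:, None] > t_absent[None, :])
print(f"empirical AUC = {auc:.3f}")
```

In the paper the observer searches a volume and the figure of merit is the exponential transform of the free-response ROC area; this sketch shows only the underlying detection statistic and a conventional two-class AUC.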
7. Spectral information content of Compton scattering events in silicon photon counting detectors.
- Author
Hsieh, Scott S. and Taguchi, Katsuyuki
- Subjects
  PHOTON detectors, COMPTON effect, THRESHOLD energy, PHOTOELECTRIC effect, SILICON, WATER filters, COMPTON scattering, PHOTON counting
- Abstract
Background: Silicon (Si) is a possible sensor material for photon counting detectors (PCDs). A major drawback of Si is that roughly two‐thirds of x‐ray interactions in the diagnostic energy range are Compton scattering. Because Compton scattering is an energy‐insensitive process, it is commonly assumed that Compton events retain little spectral information. Purpose: To quantify how much information can be recovered from Compton scattering events in models of Si PCDs. Methods: We built a simplified model of Si interactions including two interaction mechanisms: photoelectric effect and Compton scattering. We considered three different binning options that represent strategies for handling Compton events: in Compton censoring, all events under 38 keV (the maximum energy possible from Compton scattering for a 120 keV incident photon) were discarded; in Compton counting, all events between 1 and 38 keV were placed into a single bin; in Compton binning, all events were placed into energy bins of uniform width. These were compared to the ideal detector, which always recorded the correct energy (i.e., 100% photoelectric effect). Every photon was assumed to interact once and only once with Si, and the energy bin width was 5 keV. In the primary analysis, the Si detector was irradiated with a 120 kV spectrum filtered by 30 cm of water, with 99.5% of the arriving spectrum above 38 keV so that there was good separation between photoelectric effect and Compton scattering, and the figures of merit were the Cramér–Rao lower bound (CRLB) of the variance of iodine and water basis material decomposition images, as well as the CRLB of virtual monoenergetic images (i.e., linear combinations of material images) that maximize iodine CNR or water CNR. We also constructed a local linear estimator that attains the CRLB. 
In secondary analyses, we applied other sources of spectral distortion: (1) a nonzero minimum energy threshold; (2) coarser, 10 keV energy bins; and (3) a model of charge sharing. Results: With our chosen spectrum, 67% of the interactions were Compton scattering. Consistent with this, the material decomposition variance for the Compton censoring model, averaged over both basis materials, was 258% greater than the ideal detector. If Compton events carried no spectral information, the Compton counting model would show similar variance. Instead, its basis material variance was 103% greater than the ideal detector, implying that Compton counts indeed carry significant spectral information. The Compton binning model had a basis material variance 60% greater than the ideal detector. The Compton binning model was not affected by a 5 keV minimum energy threshold, but the variance increased from 60% to 107% when charge sharing was included and to 78% with coarser energy bins. For optimized CNR images, the average variance was 149%, 12%, and 10% higher than the ideal detector for the Compton censoring, counting, and binning models, reinforcing the hypothesis that Compton counts are useful for detection tasks and that precise energy assignments are not necessary. Conclusions: Substantial spectral information remains after Compton scattering events in silicon PCDs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
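The 38 keV cutoff used for Compton censoring follows directly from Compton kinematics for a 120 keV incident photon; a quick check:

```python
# Worked check of the 38 keV threshold quoted in the abstract: the
# maximum energy a Compton interaction can deposit equals the incident
# energy minus the minimum scattered-photon energy (180-degree backscatter).
M_E_C2 = 511.0  # electron rest energy, keV

def max_compton_deposit(e_kev):
    e_scattered_min = e_kev / (1.0 + 2.0 * e_kev / M_E_C2)
    return e_kev - e_scattered_min

print(f"{max_compton_deposit(120.0):.1f} keV")  # ~38.3 keV
```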
8. Possible improvements in effective fill factor using X‐ray fluorescent interpixel reflectors.
- Author
Hsieh, Scott S.
- Subjects
  PHOTON counting, MONTE Carlo method, PHOTON detectors, X-rays, SPECTRAL sensitivity, SCINTILLATORS, ELECTRON energy loss spectroscopy
- Abstract
Background: The spatial resolution of energy‐integrating diagnostic CT scanners is limited by interpixel reflectors on the detector, which optically isolate pixels but create dead space. Because the width of the reflector cannot easily be decreased, fill factor diminishes as resolution increases. Purpose: We propose loading (or mixing) a high‐Z element into the reflectors, causing the reflectors to be X‐ray fluorescent. Re‐emitted characteristic X‐rays could be detected in adjacent pixels, increasing the effective fill factor and compensating for fill factor loss with higher‐resolution detectors. The purpose of this work is to understand the physical principles of this approach and to analyze its effectiveness using Monte Carlo simulations. Methods: Detector pixels were modeled using the GEANT4 Monte Carlo package. The width of the reflector was kept constant at 0.1 mm throughout, and we considered pixel pitches between 0.5 and 1 mm. The pixelated scintillator material was gadolinium oxysulfide, 3 mm thick. The baseline reflector material was chosen to be acrylic, and varying concentrations of a high‐Z element were loaded into the material. We assumed that the optical characteristics of pixels were ideal (no absorption within pixels, perfect reflection at boundaries). The detector was irradiated uniformly with 10,000 X‐ray photons to estimate its spectral response. The figure of merit was the variance of the detector signal at zero frequency normalized to that of an ideal single‐bin photon‐counting detector with 100% fill factor. Sensitivity analyses were conducted to understand the effect of varying the high‐Z element concentration and the spectrum. Results: Initial simulations suggested that a k‐edge near 50 keV would be ideal. Gd was therefore selected as the high‐Z material. 
The relative variances for a conventional energy integrating detector without Gd at 1 mm pixel pitch (81% fill factor) and 0.5 mm pixel pitch (64% fill factor) were 1.38 and 1.74, compared to 1.00 for an ideal photon counting detector, implying a 26% variance penalty for 0.5 mm pitch. When 1 g/cm3 Gd was loaded into the interpixel reflector, the relative variance improved to 1.27 and 1.43, respectively, implying that the variance penalty for including Gd together with 0.5 mm pitch is only 4%. Performance was nearly maximized at 1.0 g/cm3 of Gd, but a concentration of 0.5 g/cm3 of Gd showed most of the benefit. Improvements depend weakly on kV, with lower kV associated with higher improvements. An external anti‐scatter grid was not modeled in our simulations and would reduce the expected benefit, depending greatly on the pitch and dimensionality of the anti‐scatter grid. Conclusions: The losses in fill factor associated with smaller pixel pitch can be reduced if Gd or a similar element could be loaded into the interpixel reflector. These improvements in noise efficiency are yet to be verified experimentally. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
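The fill factors quoted in the abstract (81% at 1 mm pitch, 64% at 0.5 mm pitch, with a fixed 0.1 mm reflector) can be checked from the square-pixel geometry:

```python
# With a fixed reflector width, the active-area fraction of a square
# pixel is ((pitch - reflector) / pitch) ** 2, so fill factor falls
# rapidly as pitch shrinks.
REFLECTOR_MM = 0.1

def fill_factor(pitch_mm):
    return ((pitch_mm - REFLECTOR_MM) / pitch_mm) ** 2

for pitch in (1.0, 0.5):
    print(f"{pitch} mm pitch -> {fill_factor(pitch):.0%} fill factor")
```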
9. Direct energy binning for photon counting detectors: Simulation study.
- Author
Taguchi, Katsuyuki and Hsieh, Scott S.
- Subjects
  PHOTON counting, PHOTON detectors, PHOTON beams, CONDITIONAL expectations, LINE integrals, DATA binning, SPECTRAL imaging, POISSON regression
- Abstract
Background: Photon counting detectors (PCDs) for x-ray computed tomography (CT) face spectral distortion from pulse pileup and charge sharing. The photon counting scheme used by many PCDs is threshold–subtract (TS) with pulse height analysis (PHA), where each counter counts up-crossing events when pulses exceed an energy threshold. PCD data are not Poisson-distributed due to charge sharing and pulse pileup, but the counting statistics have not yet been studied. Purpose: The objectives of this study were (1) to propose a modified photon counting scheme, direct energy binning (DB), that is expected to be robust against pulse pileup; (2) to assess the performance of DB compared to TS; and (3) to evaluate its counting statistics. Methods: With the DB scheme, counter k starts a timer upon an up-crossing event of energy threshold k, and adds a count only if the next higher energy threshold (k+1) was not crossed within a short time window (hence, the pulse peak belongs to energy bin k). We used Monte Carlo (MC) simulation and assessed count-rate curves and count-rate-dependent spectral imaging task performance for conventional CT imaging as well as water thickness estimation, water–bone material decomposition, and K-edge imaging with tungsten as the K-edge material. We also assessed count-rate-dependent measurement statistics such as expectation, variance, and covariance of total counts as well as energy bin outputs. The agreement with counting statistics models was also evaluated. Results: The DB scheme improved the count-rate curve, that is, mean measured counts as a function of input count rate, and peaked with 59% higher count-rate capability than the TS scheme (3.5 × 10⁸ counts per second (cps)/mm² versus 2.3 × 10⁸ cps/mm²). The Cramér–Rao lower bounds (CRLB) of the variance of basis line integral estimation for DB were better than those for TS by 2% for conventional CT imaging, 30% for water–bone material decomposition, and 32% for K-edge imaging at 1000 mA (at 7.3 × 10⁷ cps/sub-pixel after charge sharing). When count rates were lower, PCD data statistics were dominated by charge sharing: the variance of total counts and lower energy bins was larger than the mean counts; the covariance of bin data was positive and non-zero. When count rates were higher, PCD data statistics were dominated by pulse pileup: the variance of data was lower than the mean; the covariance of bin data was negative. The transition between the two regimes occurred smoothly, and pulse pileup dominated the statistics at ≥400 mA (when the count rate after charge sharing was 2.9 × 10⁷ cps/sub-pixel and the probability of count loss for DB was 37%). Both DB and TS had good agreement with Yu–Fessler's models of total counts; however, DB had better agreement with Wang's variance and covariance models for energy bin data than TS did. Conclusions: The proposed DB scheme had several advantages over TS. At low to moderate flux, DB could improve the resilience of PCDs to pulse pileup. Counting statistics deviated from the Poisson distribution due to charge sharing at lower count rates and pulse pileup at higher count rates. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
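The count-rate curve discussed in the abstract (mean measured counts versus input rate) has the familiar dead-time shape. The sketch below uses a textbook paralyzable-detector model with an assumed dead time, not the paper's TS or DB Monte Carlo:

```python
import numpy as np

# Paralyzable dead-time model: measured rate m = n * exp(-n * tau),
# which rises, peaks at n = 1 / tau, and then "paralyzes" at high flux.
tau = 10e-9                      # dead time in seconds (assumed value)
n = np.logspace(6, 10, 200)      # input count rate, cps
m = n * np.exp(-n * tau)

peak_input = n[np.argmax(m)]
print(f"output peaks near {peak_input:.2e} cps (1/tau = {1/tau:.2e} cps)")
```

Schemes such as DB aim to push the usable part of this curve toward higher input rates; the actual curves in the paper come from detailed pulse-train simulation rather than this closed-form model.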
10. Real-time tomosynthesis for radiation therapy guidance
- Author
Hsieh, Scott S. and Ng, Lydia W.
- Published
- 2017
- Full Text
- View/download PDF
11. In Vivo Prediction of Kidney Stone Fragility Using Radiomics-Based Regression Models.
- Author
Sudhir Pillai, Parvathy, Hsieh, Scott S., Vercnocke, Andrew J., Potretzke, Aaron M., Koo, Kevin, McCollough, Cynthia H., and Ferrero, Andrea
- Subjects
  KIDNEY stones, PERCUTANEOUS nephrolithotomy, LASER lithotripsy, REGRESSION analysis, STANDARD deviations, URINARY calculi
- Abstract
Introduction: The surgical technique for urinary stone removal is partly influenced by stone fragility, as prognosticated by the clinician. This feasibility study aims to develop a linear regression model from CT-based radiomic markers to predict kidney stone comminution time in vivo with two ultrasonic lithotrites. Materials and Methods: Patients identified by urologists at our institution as eligible candidates for percutaneous nephrolithotomy were prospectively enrolled. The active engagement time of the lithotrite in breaking the stone during surgery denoted the comminution time of each stone. The comminution rate was computed as the stone volume disintegrated per minute. Stones were grouped into three fragility classes (fragile, moderate, hard) based on the inverse of the comminution rate relative to the mean. Multivariable linear regression models were trained with radiomic features extracted from clinical CT images to predict comminution times in vivo. The model with the least root mean squared error (RMSE) on comminution times and the fewest misclassifications of fragility was selected. Results: Twenty-eight patients with 31 stones in total were included in this study. Stones in the cohort averaged 1557 (±2472) mm³ in volume and 5.3 (±7.4) minutes in comminution time. Ten stones had nonmoderate fragility. Linear regression on stone volume alone predicted comminution time with an RMSE of 6.8 minutes and missed all 10 stones with nonmoderate fragility. A fragility model that included stone volume, internal morphology, shape-based radiomics, and device type improved the RMSE to below 3.3 minutes and correctly classified 20/21 moderate and 6/10 nonmoderate stones. Conclusions: CT metrics-based fragility models may provide information to surgeons regarding kidney stone fragility and facilitate the selection of stone removal procedures. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
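The study's baseline comparison, volume-only linear regression evaluated by RMSE, can be sketched as follows. The cohort here is synthetic (hypothetical volumes, times, and noise levels), for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the cohort: 31 stones with made-up volumes and
# comminution times; fit time ~ intercept + slope * volume and report RMSE.
volume = rng.uniform(100, 8000, size=31)           # mm^3, hypothetical
time_min = 0.002 * volume + rng.normal(0, 2, 31)   # minutes, hypothetical

X = np.column_stack([np.ones_like(volume), volume])
coef, *_ = np.linalg.lstsq(X, time_min, rcond=None)
rmse = np.sqrt(np.mean((X @ coef - time_min) ** 2))
print(f"volume-only RMSE = {rmse:.2f} min")
```

The paper's fragility models extend this design matrix with internal-morphology and shape-based radiomic features plus device type, which is what drove the RMSE from 6.8 minutes down to below 3.3 minutes.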
12. Estimating the accuracy of dual energy chest radiography for coronary calcium detection with lateral or anteroposterior orientations.
- Author
Hsieh, Scott S. and Budoff, Matthew J.
- Subjects
  CORONARY artery calcification, RADIOGRAPHY, DUAL-energy X-ray absorptiometry, CALCIUM, CHEST X rays, MEDICAL digital radiography
- Abstract
Purpose: Coronary artery calcium (CAC) scoring with CT has been studied as a risk stratification tool for cardiovascular disease. However, concerns remain regarding the radiation dose, economic expense, and incidental findings associated with this exam. Dual energy chest X-ray (DE CXR) has been proposed as an alternative, but validation of this technique remains limited. The purpose of this work was twofold: first, to estimate the sensitivity and specificity of DE CXR using simulation of patient datasets in a CAC screening cohort; second, to assess whether sensitivity and specificity could be improved using a lateral instead of an anteroposterior (AP) orientation. Methods: We started from a cohort of 73 CAC scoring CT exams after exclusions for metal wires, data truncation, or age outside 40–75 years. The fractions of CT CAC scores in the validation set of 0, 1–99, 100–299, and 300+ were 36%, 25%, 14%, and 26%, respectively. CT datasets were decomposed on a voxel-by-voxel basis into mixtures of water and calcium according to CT number. DE CXR images were simulated using polyenergetic forward projection with scatter estimated from Monte Carlo. We assumed a technique of 60 and 120 kVp for the dual energy acquisition. The tube current was scaled such that the estimated radiation dose from DE CXR was 10 times less than CAC scoring CT. Patient motion was not simulated. Two readers read the validation set in a blinded, randomized fashion and estimated the amount of CAC in each DE CXR image using a semiquantitative 4-point scale. Although patients present on a spectrum of CAC severity, in the primary analysis, sensitivity and specificity were calculated by dichotomizing patients into two categories of CT CAC (Agatston) scores of either 0–99 or 100+. Results: From the lateral orientation, average sensitivity between the two readers was 69% (range, 69–69%), specificity was 85% (range, 84–86%), and area under the curve (AUC) was 0.81 (range, 0.80–0.81).
From the AP orientation, average sensitivity was 35% (range, 31–38%), average specificity was 70% (range, 66–73%), and AUC was 0.54 (range, 0.53–0.55). Reader DE CXR scores agreed within 1 point of the 4‐point scale on 97% of ratings from the lateral orientation and 80% from the AP orientation. From the lateral orientation, AUC increased when considering higher CT CAC score thresholds as disease positive; for thresholds of 1+, 300+, and 1000+, average AUC was 0.72, 0.81, and 0.92, respectively. From the AP orientation, AUC was 0.57, 0.55, and 0.61, respectively. Conclusions: DE CXR for CAC scoring may have higher diagnostic accuracy when acquired from the lateral orientation. The sensitivity and specificity of lateral DE CXR, when combined with its modest cost and radiation dose, suggest a possible role for this technique in screening coronary calcium in lower risk individuals. These estimates of diagnostic accuracy are derived from simulation of patient datasets and have not been corroborated with experimental or clinical images. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
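The primary analysis dichotomizes Agatston scores at 100+ and thresholds the readers' 4-point scale. A minimal sketch with made-up scores (not the study's reader data):

```python
import numpy as np

# Hypothetical CAC scores and reader ratings, for illustration only.
cac_scores   = np.array([0, 0, 40, 80, 150, 320, 500, 1200])  # Agatston
reader_score = np.array([1, 1, 3,  1,  2,   3,   4,   4])     # 4-point scale

disease  = cac_scores >= 100      # dichotomize: 100+ is disease-positive
positive = reader_score >= 3      # call positive at scale >= 3

sensitivity = np.mean(positive[disease])
specificity = np.mean(~positive[~disease])
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

Sweeping the reader-score threshold from 1 to 4 traces out the ROC curve whose area is the AUC reported in the abstract.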
13. A minimum SNR criterion for computed tomography object detection in the projection domain.
- Author
Hsieh, Scott S., Leng, Shuai, Yu, Lifeng, Huber, Nathan R., and McCollough, Cynthia H.
- Subjects
  COMPUTED tomography, MONTE Carlo method, GAUSSIAN beams, FALSE positive error, SIGNAL-to-noise ratio, STANDARD deviations
- Abstract
Background: A common rule of thumb for object detection is the Rose criterion, which states that a signal must be five standard deviations above background to be detectable to a human observer. The validity of the Rose criterion in CT imaging is limited due to the presence of correlated noise. Recent reconstruction and denoising methodologies are also able to restore apparent image quality in very noisy conditions, and the ultimate limits of these methodologies are not yet known. Purpose: To establish a lower bound on the minimum achievable signal-to-noise ratio (SNR) for object detection, below which detection performance is poor regardless of reconstruction or denoising methodology. Methods: We consider a numerical observer that operates on projection data and has perfect knowledge of the background and the objects to be detected, and determine the minimum projection SNR that is necessary to achieve predetermined lesion-level sensitivity and case-level specificity targets. We define a set of discrete signal objects $\mathcal{O}$ that encompasses any lesion of interest and could include lesions of different sizes, shapes, and locations. The task is to determine which object of $\mathcal{O}$ is present, or to state the null hypothesis that no object is present. We constrain each object in $\mathcal{O}$ to have equivalent projection SNR and use Monte Carlo methods to calculate the required projection SNR. Because our calculations are performed in projection space, they impose an upper limit on the performance possible from reconstructed images. We chose $\mathcal{O}$ to be a collection of elliptical or circular low contrast metastases and simulated detection of these objects in a parallel beam system with Gaussian statistics. Unless otherwise stated, we assume a target of 80% lesion-level sensitivity and 80% case-level specificity and a search field of view that is 6 cm by 6 cm by 10 slices. Results: When $\mathcal{O}$ contains only a single object, our problem is equivalent to two-alternative forced choice (2AFC) and the required projection SNR is 1.7. When $\mathcal{O}$ consists of circular 6-mm lesions at different locations in space, the required projection SNR is 5.1. When $\mathcal{O}$ is extended to include ellipses and circles of different sizes, the required projection SNR increases to 5.3. The required SNR increases if the sensitivity target, specificity target, or search field of view increases. Conclusions: Even with perfect knowledge of the background and target objects, the ideal observer still requires an SNR of approximately 5. This is a lower bound on the SNR that would be required in real conditions, where the background and target objects are not known perfectly. Algorithms that denoise lesions with less than 5 projection SNR, regardless of the denoising methodology, are expected to show vanishing effects or false positive lesions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
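The 1.7 figure for the single-object (2AFC-equivalent) case is consistent with equal-variance Gaussian statistics, where meeting 80% sensitivity and 80% specificity simultaneously requires a detectability index of z(0.80) + z(0.80):

```python
from statistics import NormalDist

# Worked check: with equal-variance Gaussian test statistics, a threshold
# achieving 80% sensitivity and 80% specificity requires the class means
# to be separated by z(0.80) + z(0.80) noise standard deviations.
z = NormalDist().inv_cdf(0.80)
required_snr = 2 * z
print(f"required SNR ~ {required_snr:.2f}")  # ~1.68, matching the quoted 1.7
```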
14. A simulated comparison of lung tumor target verification using stereoscopic tomosynthesis or radiography.
- Author
Hsieh, Scott S., Ng, Lydia W., Cao, Minsong, and Lee, Percy
- Subjects
  FIDUCIAL markers (Imaging systems), LUNGS, LUNG tumors, TOMOSYNTHESIS, MONTE Carlo method, RADIOGRAPHY
- Abstract
Purpose: Mobile lung tumors are increasingly being treated with ablative radiotherapy, for which precise motion management is essential. In‐room stereoscopic radiography systems are able to guide ablative radiotherapy for stationary cranial lesions but not optimally for lung tumors unless fiducial markers are inserted. We propose augmenting stereoscopic radiographic systems with multiple small x‐ray sources to provide the capability of imaging with stereoscopic, single frame tomosynthesis. Methods: In single frame tomosynthesis, nine x‐ray sources are placed in a 3 × 3 configuration and energized simultaneously. The beams from these sources are collimated so that they converge on the tumor and then diverge to illuminate nine non‐overlapping sectors on the detector. These nine sector images are averaged together and filtered to create the tomosynthesis effect. Single frame tomosynthesis is intended to be an alternative imaging mode for existing stereoscopic systems with a field of view that is three times smaller and a temporal resolution equal to the frame rate of the detector. We simulated stereoscopic tomosynthesis and radiography using Monte Carlo techniques on 60 patients with early‐stage lung cancer from the NSCLC‐Radiomics dataset. Two board‐certified radiation oncologists reviewed these simulated images and rated them on a 4‐point scale (1: tumor not visible; 2: tumor visible but inadequate for motion management; 3: tumor visible and adequate for motion management; 4: tumor visibility excellent). Each tumor was independently presented four times (two viewing angles from radiography and two viewing angles from tomosynthesis) in a blinded fashion over two reading sessions. Results: The fraction of tumors that were rated as adequate or excellent for motion management (scores 3 or 4) from at least one viewing angle was 53% using radiography and 90% using tomosynthesis. 
From both viewing angles, the corresponding fractions were 7% for radiography and 48% for tomosynthesis. Readers agreed exactly on 62% of images and within 1 point on 98% of images. The acquisition technique was estimated to be 75 mAs at 120 kVp per treatment fraction assuming one verification image per breath, approximately one order of magnitude less than a standard dose cone beam CT. Conclusions: Stereoscopic tomosynthesis may provide a noninvasive, low dose, intrafraction motion verification technique for lung tumors treated by ablative radiotherapy. The system architecture is compatible with real‐time video capture at 30 frames per second. Simulations suggest that most, but not all, lung tumors can be adequately visualized from at least one viewing angle. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
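The averaging step that creates the tomosynthesis effect can be illustrated with a toy 1-D shift-and-add model; the geometry (source count, parallax per source) is assumed for illustration, and the filtering step is omitted:

```python
import numpy as np

# Toy shift-and-add sketch (our illustration, not the paper's method):
# averaging projections from laterally offset sources reinforces
# structures in the focal plane and smears everything else.
n_src, width = 9, 101
focal_obj, off_obj = 50, 50          # pixel positions of two point objects
projections = []
for s in range(-(n_src // 2), n_src // 2 + 1):
    proj = np.zeros(width)
    proj[focal_obj] += 1.0            # focal-plane point: no parallax
    proj[off_obj + 3 * s] += 1.0      # off-plane point: shifts with source
    projections.append(proj)

tomo = np.mean(projections, axis=0)
print(f"focal-plane peak {tomo[focal_obj]:.2f} vs off-plane ghost {tomo[off_obj + 3]:.2f}")
```

The focal-plane point keeps nearly its full amplitude while each out-of-plane ghost is diluted by the number of sources, which is the depth-selectivity that single frame tomosynthesis exploits.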
15. An interactive eye‐tracking system for measuring radiologists' visual fixations in volumetric CT images: Implementation and initial eye‐tracking accuracy validation.
- Author
Gong, Hao, Hsieh, Scott S., Holmes, David R., Cook, David A., Inoue, Akitoshi, Bartlett, David J., Baffour, Francis, Takahashi, Hiroaki, Leng, Shuai, Yu, Lifeng, McCollough, Cynthia H., and Fletcher, Joel G.
- Subjects
  EYE tracking, CONE beam computed tomography, COMPUTED tomography, THREE-dimensional imaging, CROSS-sectional imaging, RADIOLOGISTS
- Abstract
Purpose: Eye-tracking approaches have been used to understand the visual search process in radiology. However, previous eye-tracking work in computed tomography (CT) has been limited largely to single cross-sectional images or video playback of the reconstructed volume, which do not accurately reflect radiologists' visual search activities and their interactivity with three-dimensional image data at a computer workstation (e.g., scroll, pan, and zoom) for visual evaluation of diagnostic imaging targets. We have developed a platform that integrates eye-tracking hardware with in-house-developed reader workstation software to allow monitoring of the visual search process and reader–image interactions in clinically relevant reader tasks. The purpose of this work is to validate the spatial accuracy of eye-tracking data using this platform for different eye-tracking data acquisition modes. Methods: An eye-tracker was integrated with a previously developed workstation designed for reader performance studies. The integrated system captured real-time eye movement and workstation events at a 1000 Hz sampling frequency. The eye-tracker was operated either in head-stabilized mode or in free-movement mode. In head-stabilized mode, the reader positioned their head on a manufacturer-provided chinrest. In free-movement mode, a biofeedback tool emitted an audio cue when the head position was outside the data collection range (general biofeedback) or outside a narrower range of positions near the calibration position (strict biofeedback). Four radiologists and one resident were invited to participate in three studies to determine eye-tracking spatial accuracy under three constraint conditions: head-stabilized mode (i.e., with use of a chinrest), free movement with general biofeedback, and free movement with strict biofeedback.
Study 1 evaluated the impact of head stabilization versus general or strict biofeedback using a cross‐hair target prior to the integration of the eye‐tracker with the image viewing workstation. In Study 2, after integration of the eye‐tracker and reader workstation, readers were asked to fixate on targets that were randomly distributed within a volumetric digital phantom. In Study 3, readers used the integrated system to scroll through volumetric patient CT angiographic images while fixating on the centerline of designated blood vessels (from the left coronary artery to dorsalis pedis artery). Spatial accuracy was quantified as the offset between the center of the intended target and the detected fixation using units of image pixels and the degree of visual angle. Results: The three head position constraint conditions yielded comparable accuracy in the studies using digital phantoms. For Study 1 involving the digital crosshairs, the median ± the standard deviation of offset values among readers were 15.2 ± 7.0 image pixels with the chinrest, 14.2 ± 3.6 image pixels with strict biofeedback, and 19.1 ± 6.5 image pixels with general biofeedback. For Study 2 using the random dot phantom, the median ± standard deviation offset values were 16.7 ± 28.8 pixels with use of a chinrest, 16.5 ± 24.6 pixels using strict biofeedback, and 18.0 ± 22.4 pixels using general biofeedback, which translated to a visual angle of about 0.8° for all three conditions. We found no obvious association between eye‐tracking accuracy and target size or view time. In Study 3 viewing patient images, use of the chinrest and strict biofeedback demonstrated comparable accuracy, while the use of general biofeedback demonstrated a slightly worse accuracy. The median ± standard deviation of offset values were 14.8 ± 11.4 pixels with use of a chinrest, 21.0 ± 16.2 pixels using strict biofeedback, and 29.7 ± 20.9 image pixels using general biofeedback. 
These corresponded to visual angles ranging from 0.7° to 1.3°. Conclusions: An integrated eye‐tracker system to assess reader eye movement and interactive viewing in relation to imaging targets demonstrated reasonable spatial accuracy for assessment of visual fixation. The head‐free movement condition with audio biofeedback performed similarly to head‐stabilized mode. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
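The abstract above reports fixation accuracy both in image pixels and in degrees of visual angle. The conversion is simple trigonometry; a minimal sketch follows, where the display pixel pitch and viewing distance are illustrative assumptions (neither value is given in the abstract):

```python
import math

def offset_to_visual_angle_deg(offset_px, pixel_pitch_mm, viewing_distance_mm):
    """Convert a fixation offset measured in display pixels to degrees of
    visual angle for a reader at a known distance from the display."""
    offset_mm = offset_px * pixel_pitch_mm
    return math.degrees(math.atan2(offset_mm, viewing_distance_mm))

# Example with assumed geometry (0.3 mm pixels, 60 cm viewing distance):
# an offset of ~17 pixels corresponds to roughly half a degree.
angle = offset_to_visual_angle_deg(17, 0.3, 600.0)
```

The reported ~0.8° at ~17 pixels implies a different (larger) effective pixel pitch or shorter viewing distance than assumed here; the formula itself is the standard one.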
16. Deep learning enabled ultra‐fast‐pitch acquisition in clinical X‐ray computed tomography.
- Author
-
Gong, Hao, Ren, Liqiang, Hsieh, Scott S., McCollough, Cynthia H., and Yu, Lifeng
- Subjects
COMPUTED tomography ,CONVOLUTIONAL neural networks ,INSPECTION & review ,ALGORITHMS ,DEEP learning ,X-rays - Abstract
Objective: In X‐ray computed tomography (CT), many important clinical applications may benefit from a fast acquisition speed. The helical scan is the most widely used acquisition mode in clinical CT, where a fast helical pitch can improve the acquisition speed. However, on a typical single‐source helical CT (SSCT) system, the helical pitch p typically cannot exceed 1.5; otherwise, reconstruction artifacts will result from data insufficiency. The purpose of this work is to develop a deep convolutional neural network (CNN) to correct for artifacts caused by an ultra‐fast pitch, which can enable faster acquisition speed than what is currently achievable. Methods: A customized CNN (denoted as ultra‐fast‐pitch network (UFP‐net)) was developed to restore the underlying anatomical structure from the artifact‐corrupted post‐reconstruction data acquired from SSCT with ultra‐fast pitch (i.e., p ≥ 2). UFP‐net employed residual learning to capture the features of image artifacts. UFP‐net further deployed in‐house‐customized functional blocks with spatial‐domain local operators and frequency‐domain non‐local operators, to explore multi‐scale feature representation. Images of contrast‐enhanced patient exams (n = 83) with routine pitch setting (i.e., p < 1) were retrospectively collected, which were used as training and testing datasets. This patient cohort involved CT exams over different scan ranges of anatomy (chest, abdomen, and pelvis) and CT systems (Siemens Definition, Definition Flash, Definition AS+, Siemens Healthcare, Inc.), and the corresponding base CT scanning protocols used consistent settings of major scan parameters (e.g., collimation and pitch). Forward projection of the original images was calculated to synthesize helical CT scans with one regular pitch setting (p = 1) and two ultra‐fast‐pitch settings (p = 2 and 3). All patient images were reconstructed using the standard filtered‐back‐projection (FBP) algorithm.
A customized multi‐stage training scheme was developed to incrementally optimize the parameters of UFP‐net, using ultra‐fast‐pitch images as network inputs and regular pitch images as labels. Visual inspection was conducted to evaluate image quality. Structural similarity index (SSIM) and relative root‐mean‐square error (rRMSE) were used as quantitative quality metrics. Results: The UFP‐net dramatically improved image quality over standard FBP at both ultra‐fast‐pitch settings. At p = 2, UFP‐net yielded higher mean SSIM (> 0.98) with lower mean rRMSE (< 2.9%), compared to FBP (mean SSIM < 0.93; mean rRMSE > 9.1%). Quantitative metrics at p = 3: UFP‐net—mean SSIM [0.86, 0.94] and mean rRMSE [5.0%, 8.2%]; FBP—mean SSIM [0.36, 0.61] and mean rRMSE [36.0%, 58.6%]. Conclusion: The proposed UFP‐net has the potential to enable ultra‐fast data acquisition in clinical CT without sacrificing image quality. This method has demonstrated reasonable generalizability over different body parts when the corresponding CT exams involved consistent base scan parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
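The UFP‐net abstract above quantifies restoration quality with SSIM and relative root‐mean‐square error (rRMSE). A sketch of one common rRMSE definition, normalized by the reference image's RMS value (the paper's exact normalization may differ):

```python
import numpy as np

def rrmse(test_img, ref_img):
    """Relative root-mean-square error between a test image and a reference.
    Normalizing by the reference RMS is an assumption; definitions vary."""
    test_img = np.asarray(test_img, float)
    ref_img = np.asarray(ref_img, float)
    return float(np.sqrt(np.mean((test_img - ref_img) ** 2))
                 / np.sqrt(np.mean(ref_img ** 2)))
```

With this convention, an image identical to the reference scores 0, and an image whose error energy equals the reference energy scores 1.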
17. Rapid measurement of the low contrast detectability of CT scanners.
- Author
-
Omigbodun, Akinyinka, Vaishnav, J. Y., and Hsieh, Scott S.
- Subjects
COMPUTED tomography ,IMAGING phantoms ,SCANNING systems ,MATCHED filters ,RECEIVER operating characteristic curves ,ELECTRONIC data processing - Abstract
Purpose: Low contrast detectability (LCD) is a metric of fundamental importance in computed tomography (CT) imaging. In spite of this, its measurement is challenging in the context of nonlinear data processing. We introduce a new framework for objectively characterizing LCD with a single scan of a special‐purpose phantom and automated analysis software. The output of the analysis software is a "machine LCD" metric which is more representative of LCD than contrast‐noise ratio (CNR). It is not intended to replace human observer or model observer studies. Methods: Following preliminary simulations, we fabricated a phantom containing hundreds of low‐contrast beads. These beads are acrylic spheres (1.6 mm, net contrast ~10 HU) suspended and randomly dispersed in a background matrix of nylon pellets and isoattenuating saline. The task was to search for and localize the beads. A modified matched filter was used to automatically scan the reconstruction and select candidate bead localizations of varying confidence. These were compared to bead locations as determined from a high‐dose reference scan to produce free‐response ROC curves. We compared iterative reconstruction (IR) and filtered backprojection (FBP) at multiple dose levels between 40 and 240 mAs. The scans at 60, 120, and 180 mAs were performed three times each to estimate uncertainty. Results: Experimental scans demonstrated the feasibility of our technique. Our metric for machine LCD was the area under the exponential transform of the FROC curve (AUC). AUC increased monotonically from 0.21 at 40 mAs to 0.84 at 240 mAs. The sample standard deviation of AUC was approximately 0.02. This measurement uncertainty in AUC corresponded to a change in tube current of 4% to 8%. Surprisingly, we found that AUCs for IR were slightly worse than AUCs for FBP. While the phantom was sufficient for these experiments, it contained small air bubbles and alternative fabrication methods will be necessary for widespread utilization.
Conclusions: It is feasible to measure machine LCD using a search task on a phantom with hundreds of beads and to obtain tight error bars using only a single scan. Our method could facilitate routine quality assurance or possibly enable comparisons between different protocols and scanners. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
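The "machine LCD" metric above is the area under an exponential transform of the FROC curve. A sketch, assuming the transform x = 1 − exp(−NFA), which maps false positives per scan from [0, ∞) onto [0, 1); the paper's exact transform may differ:

```python
import numpy as np

def machine_lcd_auc(false_positives_per_scan, sensitivity):
    """Area under the exponentially transformed FROC curve.
    Operating points must be ordered by increasing false-positive rate."""
    x = 1.0 - np.exp(-np.asarray(false_positives_per_scan, float))
    y = np.asarray(sensitivity, float)
    x = np.concatenate([[0.0], x])   # close the curve at the origin
    y = np.concatenate([[0.0], y])
    # trapezoid-rule integration over the transformed axis
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))
```

The transform bounds the AUC in [0, 1], so results at different dose levels (e.g., the 0.21 to 0.84 range reported above) are directly comparable.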
18. Improving Paralysis Compensation in Photon Counting Detectors.
- Author
-
Hsieh, Scott S. and Iniewski, Kris
- Subjects
- *
MONTE Carlo method , *PHOTON detectors , *PHOTON counting , *APPLICATION-specific integrated circuits , *PARALYSIS , *SPECTRAL sensitivity - Abstract
Photon counting detectors (PCDs) are classically described as being either paralyzable or nonparalyzable. When the PCD is paralyzed, it is no longer sensitive to the detection of additional flux. A recent strategy in PCD design has been to compensate for detector paralysis by embedding specialized paralysis compensation electronics into the application-specific integrated circuit (ASIC). One such compensation mechanism is the pileup trigger, which places an additional energy bin at very high energy that is triggered only during pileup. Another compensation mechanism is the retrigger architecture, which converts a paralyzable PCD into a nonparalyzable PCD. We propose a third mechanism that modifies the retrigger architecture using dedicated secondary counters. We studied the incremental benefit of these three paralysis compensation mechanisms in simulation. We modeled the spectral response using Monte Carlo simulations and then estimated the variance in basis material decomposition of a single pixel using the Cramér-Rao lower bound (CRLB). In the absence of paralysis compensation, noise in basis material images shows sharp increases at moderate flux (near the characteristic count rate) due to contrast inversion and again at high flux. The pileup trigger reduces noise at high flux but does not eliminate contrast inversion. The retrigger architecture eliminates contrast inversion but does not reduce noise at high flux. Our proposed retrigger architecture with dedicated secondary counters reduces noise at both moderate and high flux. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
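The paralyzable and nonparalyzable responses contrasted in the abstract above have standard closed forms: for true rate n and dead time τ, a nonparalyzable detector records n/(1 + nτ) counts per second, while a paralyzable detector records n·exp(−nτ). A minimal sketch of these textbook models:

```python
import math

def nonparalyzable_rate(true_rate, dead_time):
    """Recorded count rate for an ideal nonparalyzable detector
    (the behavior a retrigger architecture aims to emulate)."""
    return true_rate / (1.0 + true_rate * dead_time)

def paralyzable_rate(true_rate, dead_time):
    """Recorded count rate for an ideal paralyzable detector; it peaks at
    true_rate = 1/dead_time and then falls as the detector saturates."""
    return true_rate * math.exp(-true_rate * dead_time)
```

Both models converge to the true rate at low flux; they diverge sharply near and above the characteristic count rate 1/τ, which is where the abstract reports the noise penalties appearing.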
19. Systematic feasibility analysis of performing elastography using reduced dose CT lung image pairs.
- Author
-
Hasse, Katelyn, Hsieh, Scott S., O'Connell, Dylan, Stiehl, Bradley, Min, Yugang, Neylon, John, Low, Daniel A., and Santhanam, Anand P.
- Subjects
- *
VECTOR fields , *OPTICAL flow , *IMAGE registration , *LUNGS , *ALGORITHMS - Abstract
Purpose: Elastography using computed tomography (CT) is a promising methodology that can provide patient‐specific regional distributions of lung biomechanical properties. The purpose of this paper is to investigate the feasibility of performing elastography using simulated lower dose CT scans. Methods: A cohort of eight patient CT image pairs were acquired with a tube current–time product of 40 mAs for estimating baseline lung elastography results. Synthetic low mAs CT scans were generated from the baseline scans to simulate the additional noise that would be present in acquisitions at 30, 25, and 20 mAs, respectively. For the simulated low mAs scans, exhalation and inhalation datasets were registered using an in‐house optical flow deformable image registration algorithm. The registered deformation vector fields (DVFs) were taken to be ground truth for the elastography process. A model‐based elasticity estimation was performed for each of the reduced mAs datasets, in which the goal was to optimize the elasticity distribution that best represented their respective DVFs. The estimated elasticity and the DVF distributions of the reduced mAs scans were then compared with the baseline elasticity results for quantitative accuracy purposes. Results: The DVFs for the low mAs and baseline scans differed from each other by an average of 1.41 mm, which can be attributed to the noise added by the simulated reduction in mAs. However, the elastography results using the DVFs from the reduced mAs scans were similar to the baseline results, with an average elasticity difference of 0.65, 0.71, and 0.76 kPa, respectively. This illustrates that elastography can provide equivalent results using low‐dose CT scans. Conclusions: Elastography can be performed equivalently using CT image pairs acquired with as low as 20 mAs. This expands the potential applications of CT‐based elastography. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
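The synthetic low‐mAs scans above rest on the quantum‐limited scaling of CT image noise with tube current–time product (variance ∝ 1/mAs). A sketch of how much zero‐mean noise one would add to a 40 mAs image to emulate a lower dose; this ignores electronic noise and spatial noise correlations, so the paper's actual noise‐insertion method may be more elaborate:

```python
import math

def added_noise_sigma(sigma_baseline, mas_baseline, mas_target):
    """Std of independent zero-mean noise to add to a baseline-mAs image so
    the total noise matches a lower-mAs acquisition (variance ~ 1/mAs)."""
    if mas_target > mas_baseline:
        raise ValueError("target mAs must not exceed baseline mAs")
    target_var = sigma_baseline ** 2 * (mas_baseline / mas_target)
    return math.sqrt(target_var - sigma_baseline ** 2)
```

For example, emulating 20 mAs from a 40 mAs scan doubles the variance, so the added noise has the same standard deviation as the baseline noise.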
20. Coincidence Counters for Charge Sharing Compensation in Spectroscopic Photon Counting Detectors.
- Author
-
Hsieh, Scott S.
- Subjects
- *
MONTE Carlo method , *PHOTON counting , *PHOTON detectors , *COINCIDENCE circuits , *WATER efficiency , *IODINE isotopes , *WAGES - Abstract
The performance of X-ray photon counting detectors (PCDs), especially on spectral tasks, is compromised by charge sharing. Existing mechanisms to compensate for charge sharing, such as charge summing circuitry or larger pixel sizes, increase and aggravate pileup effects. We propose a new mechanism, the coincidence counting bin (CCB), which does not increase pileup and which has implementation similarities to existing energy bins. The CCB is triggered by coincident events in adjacent pixels and provides an estimate of the double counts arising from charge sharing. Unlike charge summing, the CCB does not directly restore corrupted events. Nonetheless, knowledge of the number of coincident counts can be used by the estimator to reduce noise. We simulated a PCD with and without the CCB using Monte Carlo simulations, modeling PCD pixels as instantaneous charge collectors and X-ray energy deposition as producing a Gaussian charge cloud with 75 micron FWHM, independent of energy. With typical operating conditions and at low flux (120 kVp, incident count rate 1% of characteristic count rate, 30 cm object thickness, five energy bins, pixel pitch of 300 microns), the CCB improved dose efficiency of iodine and water basis material decomposition by 70% and 50%, respectively. An improvement of 20% was also seen in an iodine CNR task. These improvements are attenuated as incident flux increases and show moderate dependence on filtration and pixel size. At high flux, the CCB does not provide useful information and is discarded by the estimator. The CCB may be an effective and practical mechanism for charge sharing compensation in PCDs. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
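The simulation model in the abstract above (Gaussian charge cloud of 75 μm FWHM on a 300 μm pitch) makes it straightforward to estimate how often a single photon produces counts in two pixels, which is what the coincidence counting bin measures. A 1‑D Monte Carlo sketch; the neighbor triggering threshold (as a fraction of the deposited charge) is an illustrative assumption:

```python
import math
import random

def shared_event_fraction(pitch_um=300.0, fwhm_um=75.0,
                          neighbor_threshold=0.2, n_events=50000, seed=1):
    """Fraction of events depositing enough charge in a neighboring pixel to
    trigger it (1-D edge model; 2-D corner sharing would add slightly more)."""
    sigma = fwhm_um / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # FWHM -> sigma
    rng = random.Random(seed)
    shared = 0
    for _ in range(n_events):
        x = rng.uniform(0.0, pitch_um)       # interaction position in the pixel
        edge = min(x, pitch_um - x)          # distance to the nearest border
        # fraction of the Gaussian cloud that spills past the nearest border
        spilled = 0.5 * math.erfc(edge / (sigma * math.sqrt(2.0)))
        if spilled > neighbor_threshold:
            shared += 1
    return shared / n_events
```

Shrinking the pitch raises the shared fraction, which is the abstract's motivation for compensating charge sharing rather than simply enlarging pixels.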
21. Accelerating iterative coordinate descent using a stored system matrix.
- Author
-
Hsieh, Scott S., Hoffman, John M., and Noo, Frederic
- Subjects
- *
COORDINATES , *GRAPHICS processing units , *SPARSE matrices , *MODERN architecture - Abstract
Purpose: The computational burden associated with model‐based iterative reconstruction (MBIR) is still a practical limitation. Iterative coordinate descent (ICD) is an optimization approach for MBIR that has sometimes been thought to be incompatible with modern computing architectures, especially graphics processing units (GPUs). The purpose of this work is to extend the previously released open‐source FreeCT_ICD with GPU acceleration and to demonstrate computational performance with ICD that is comparable with simultaneous update approaches. Methods: FreeCT_ICD uses a stored system matrix (SSM), which precalculates the forward projector in the form of a sparse matrix and then reconstructs on a rotating coordinate grid to exploit helical symmetry. In our GPU ICD implementation, we shuffle the sinogram memory ordering such that data accesses in the sinogram coalesce into fewer transactions. We also update NS voxels in the xy‐plane simultaneously to improve occupancy. Conventional ICD updates voxels sequentially (NS = 1). Using NS > 1 eliminates existing convergence guarantees. Convergence behavior in a clinical dataset was therefore studied empirically. Results: On a pediatric dataset with sinogram size of 736 × 16 × 13860 reconstructed to a matrix size of 512 × 512 × 128, our code requires about 20 s per iteration on a single GPU compared to 2300 s per iteration for a 6‐core CPU using FreeCT_ICD. After 400 iterations, the proposed and reference codes converge within 2 HU RMS difference (RMSD). Using a wFBP initialization, convergence within 10 HU RMSD is achieved within 4 min. Convergence is similar with NS values between 1 and 256, and NS = 16 was sufficient to achieve maximum performance. Divergence was not observed until NS > 1024. Conclusions: With appropriate modifications, ICD may be able to achieve computational performance competitive with simultaneous update algorithms currently used for MBIR. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
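The coordinate update at the heart of ICD touches only one stored system‐matrix column at a time, which is what makes the stored‐system‐matrix layout above attractive. A sketch of one exact coordinate step for a weighted least‐squares cost, with a simple quadratic penalty standing in for the actual regularizer (FreeCT_ICD's prior differs; the column is dense here for clarity):

```python
import numpy as np

def icd_voxel_update(x_j, a_j, residual, weights, beta=0.0):
    """One exact coordinate-descent step for voxel j, minimizing
    0.5*||y - A x||_W^2 + 0.5*beta*x_j^2 over x_j alone.
    a_j      : column j of the stored system matrix
    residual : current y - A x, kept in sync across updates
    weights  : statistical weights on the projection data"""
    numer = np.dot(a_j, weights * residual) - beta * x_j
    denom = np.dot(a_j, weights * a_j) + beta
    step = numer / denom
    return x_j + step, residual - a_j * step
```

Keeping the residual in sync is the key bookkeeping trick: each voxel update costs only the nonzeros of its column, not a full forward projection.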
22. The effects of physics‐based data augmentation on the generalizability of deep neural networks: Demonstration on nodule false‐positive reduction.
- Author
-
Omigbodun, Akinyinka O., Noo, Frederic, McNitt‐Gray, Michael, Hsu, William, and Hsieh, Scott S.
- Subjects
ARTIFICIAL neural networks ,RANDOM noise theory ,PULMONARY nodules ,IMAGE databases ,ACQUISITION of data - Abstract
Purpose: An important challenge for deep learning models is generalizing to new datasets that may be acquired with acquisition protocols different from the training set. It is not always feasible to expand training data to the range encountered in clinical practice. We introduce a new technique, physics‐based data augmentation (PBDA), that can emulate new computed tomography (CT) data acquisition protocols. We demonstrate two forms of PBDA, emulating increases in slice thickness and reductions of dose, on the specific problem of false‐positive reduction in the automatic detection of lung nodules. Methods: We worked with CT images from the lung image database consortium (LIDC) collection. We employed a hybrid ensemble convolutional neural network (CNN), which consists of multiple CNN modules (VGG, DenseNet, ResNet), for a classification task of determining whether an image patch was a suspicious nodule or a false positive. To emulate a reduction in tube current, we injected noise by simulating forward projection, noise addition, and backprojection corresponding to 1.5 mAs (a "chest x‐ray" dose). To simulate thick slice CT scans from thin slice CT scans, we grouped and averaged spatially contiguous thin slices. The neural network was trained with 10% of the LIDC dataset that was selected to have either the highest tube current or the thinnest slices. The network was tested on the remaining data. We compared PBDA to a baseline with standard geometric augmentations (such as shifts and rotations) and Gaussian noise addition. Results: PBDA improved the performance of the networks when generalizing to the test dataset in a limited number of cases. We found that the best performance was obtained by applying augmentation at very low doses (1.5 mAs), about an order of magnitude less than most screening protocols. In the baseline augmentation, a comparable level of Gaussian noise was injected.
For dose reduction PBDA, the average sensitivity of 0.931 for the hybrid ensemble network was not statistically different from the average sensitivity of 0.935 without PBDA. Similarly for slice thickness PBDA, the average sensitivity of 0.900 when augmenting with doubled simulated slice thicknesses was not statistically different from the average sensitivity of 0.895 without PBDA. While we observed improvements in some cases detailed in this paper, the overall picture suggests that PBDA may not be an effective data enrichment tool. Conclusions: PBDA is a newly proposed strategy for mitigating the performance loss of neural networks related to the variation of acquisition protocol between the training dataset and the data that is encountered in deployment or testing. We found that PBDA does not provide robust improvements with the four neural networks (three modules and the ensemble) tested and for the specific task of false‐positive reduction in nodule detection. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
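The tube‐current form of PBDA above injects noise in the projection domain. A simplified sketch using the delta‐method approximation var(−ln(N/I₀)) ≈ 1/E[N] for Poisson counts; electronic noise, the bowtie filter, and scanner calibration are ignored, so the paper's forward‐project/noise/backproject pipeline is more complete than this:

```python
import numpy as np

def emulate_lower_dose(line_integrals, counts_high, counts_low, seed=0):
    """Add Gaussian noise to line integrals acquired at counts_high photons
    per ray so their statistics match a counts_low acquisition."""
    p = np.asarray(line_integrals, float)
    mean_high = counts_high * np.exp(-p)     # expected counts at high dose
    mean_low = counts_low * np.exp(-p)       # expected counts at target dose
    extra_var = 1.0 / mean_low - 1.0 / mean_high   # delta-method variance gap
    rng = np.random.default_rng(seed)
    return p + rng.normal(0.0, np.sqrt(extra_var))
```

After noise injection, the modified sinogram would be reconstructed (e.g., by FBP) to yield the simulated low‐dose image patch used for augmentation.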
23. Accelerating Coordinate Descent in Iterative Reconstruction.
- Author
-
Hsieh, Scott S., Hoffman, John M., and Noo, Frederic
- Published
- 2019
- Full Text
- View/download PDF
24. Quantitative lung nodule detectability and dose reduction in low-dose chest tomosynthesis.
- Author
-
Sunghoon Choi, Seungyeon Choi, Hsieh, Scott S., Donghoon Lee, Junyoung Son, Haenghwa Lee, Chang-Woo Seo, and Hee-Joung Kim
- Published
- 2018
- Full Text
- View/download PDF
25. Fast low-dose compressed-sensing (CS) image reconstruction in four-dimensional digital tomosynthesis using on-board imager (OBI).
- Author
-
Sunghoon Choi, Hsieh, Scott S., Chang-Woo Seo, and Hee-Joung Kim
- Published
- 2018
- Full Text
- View/download PDF
26. Can image-domain filtering of FBP CT reconstructions match low-contrast performance of iterative reconstructions?
- Author
-
Divel, Sarah E., Hsieh, Scott S., Jia Wang, and Pelc, Norbert J.
- Published
- 2018
- Full Text
- View/download PDF
27. Focal spot rotation for improving CT resolution homogeneity.
- Author
-
Hsieh, Scott S.
- Published
- 2018
- Full Text
- View/download PDF
28. Implementation of a Piecewise-linear Dynamic Attenuator.
- Author
-
Shunhavanich, Picha, Bennett, N. Robert, Hsieh, Scott S., and Pelc, Norbert J.
- Published
- 2018
- Full Text
- View/download PDF
29. Fluid‐filled dynamic bowtie filter: Description and comparison with other modulators.
- Author
-
Shunhavanich, Picha, Hsieh, Scott S., and Pelc, Norbert J.
- Subjects
- *
IMAGE reconstruction , *RADIATION doses , *COMPUTER simulation , *PHOTON counting , *LIGHT modulators - Abstract
Purpose: A dynamic bowtie filter can modulate flux along both fan and view angles for reduced patient dose, scatter, and required photon flux, which is especially important for photon counting detectors (PCDs). Among the proposed dynamic bowtie designs, the piecewise‐linear attenuator (Hsieh and Pelc, Med Phys. 2013;40:031910) offers more flexibility than conventional filters, but relies on analog positioning of a limited number of wedges. In this work, we study our previously proposed dynamic attenuator design, the fluid‐filled dynamic bowtie filter (FDBF) that has digital control. Specifically, we use computer simulations to study fluence modulation, reconstructed image noise, and radiation dose and to compare it to other attenuators. The FDBF is an array of small channels, each of which can be rapidly filled with dense fluid or emptied, giving each channel a binary effect on the flux. The cumulative attenuation from each channel along the x‐ray path contributes to the FDBF total attenuation. Methods: An algorithm is proposed for selecting which FDBF channels should be filled. Two optimization metrics are considered: minimizing the maximum‐count‐rate for PCDs and minimizing peak‐variance for energy‐integrating detectors (EIDs) at fixed radiation dose (for optimizing dose efficiency). Using simulated chest, abdomen, and shoulder data, the performance is compared with a conventional bowtie and a piecewise‐linear attenuator. For minimizing peak‐variance, a perfect‐attenuator (hypothetical filter capable of adjusting the fluence of each ray individually) and flat‐variance attenuator are also included in the comparison. Two possible fluids, solutions of zinc bromide and gadolinium chloride, were tested.
Results: To obtain the same SNR as routine clinical protocols, the proposed FDBF reduces the maximum‐count‐rate (across projection data, averaged over the test objects) of PCDs to 1.2 Mcps/mm², which is 55.8 and 3.3 times lower than the max‐count‐rate of the conventional bowtie and the piecewise‐linear bowtie, respectively. (Averaged across objects for FDBF, the max‐count‐rate without object and FDBF is 2063.5 Mcps/mm², and the max‐count‐rate with object without FDBF is 749.8 Mcps/mm².) Moreover, for the peak‐variance analysis, the FDBF can reduce entrance‐energy‐fluence (sum of energy incident on objects, used as a surrogate for dose) to 34% of the entrance‐energy‐fluence from the conventional filter on average while achieving the same peak noise level. Its entrance‐energy‐fluence reduction performance is only 7% worse than the perfect‐attenuator on average and is 13% better than the piecewise‐linear filter for chest and shoulder. Furthermore, the noise‐map in reconstructed image domain from the FDBF is more uniform than the piecewise‐linear filter, with 3 times less variation across the object. For the dose reduction task, the zinc bromide solution performed slightly poorer than stainless steel but was better than the gadolinium chloride solution. Conclusions: The FDBF allows finer control over flux distribution compared to piecewise‐linear and conventional bowtie filters. It can reduce the required maximum‐count‐rate for PCDs to a level achievable by current detector designs and offers a high dose reduction factor. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
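Because each FDBF channel is binary, controlling the flux along a ray reduces to choosing how many channels in that ray's path to fill. A greedy per‐ray sketch of the count‐rate‐capping criterion; the fluid attenuation coefficient and channel depth are illustrative assumptions, not values from the paper:

```python
import math

def channels_to_fill(expected_rate, max_rate, mu_fluid_per_cm,
                     channel_depth_cm, n_channels):
    """Smallest number of filled channels that keeps the detected count rate
    at or below max_rate, given the rate expected with an empty filter."""
    if expected_rate <= max_rate:
        return 0
    needed_attenuation = math.log(expected_rate / max_rate)  # Beer-Lambert
    per_channel = mu_fluid_per_cm * channel_depth_cm
    return min(n_channels, math.ceil(needed_attenuation / per_channel))
```

For example, capping a ray at one tenth of its unfiltered rate with 0.5 attenuation units per channel requires ceil(ln(10)/0.5) = 5 filled channels; the ceil() makes the binary control slightly conservative, which is the price of digital rather than analog modulation.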
30. A training program to reduce reader search errors for liver metastasis detection in CT.
- Author
-
Hsieh, Scott S., Inoue, Akitoshi, Yalon, Mariana, Cook, David A., Fidler, Jeff L., Gong, Hao, Sudhir Pillai, Parvathy, Vercnocke, Andrew J., Johnson, Matthew P., Leng, Shuai, Yu, Lifeng, Holmes, David R., Carter, Rickey E., McCollough, Cynthia H., and Fletcher, Joel G.
- Published
- 2023
- Full Text
- View/download PDF
31. Classification of high-risk coronary plaques using radiomic analysis of multi-energy photon-counting-detector computed tomography (PCD-CT) images.
- Author
-
Dunning, Chelsea A. S., Rajiah, Prabhakar Shantha, Hsieh, Scott S., Esquivel, Andrea, Yalon, Mariana, Weber, Nikkole M., Gong, Hao, Fletcher, Joel G., McCollough, Cynthia H., and Leng, Shuai
- Published
- 2023
- Full Text
- View/download PDF
32. Improvements in dose efficiency with high resolution scan modes in photon counting CT.
- Author
-
Shunhavanich, Picha, Rajendran, Kishore, Fan, Mingdong, McCollough, Cynthia H., Leng, Shuai, Yu, Lifeng, and Hsieh, Scott S.
- Published
- 2023
- Full Text
- View/download PDF
33. Patient-specific uncertainty and bias quantification of non-transparent convolutional neural network model through knowledge distillation and Bayesian deep learning.
- Author
-
Gong, Hao, Yu, Lifeng, Leng, Shuai, Hsieh, Scott S., Fletcher, Joel G., and McCollough, Cynthia H.
- Published
- 2023
- Full Text
- View/download PDF
34. A dense search challenge phantom fabricated with pixel-based 3D printing for precise detectability assessment.
- Author
-
Hsieh, Scott S., Mei, Kai, Shapira, Nadav, Shunhavanich, Picha, Stayman, J. Webster, McCollough, Cynthia H., Gang, Grace, Leng, Shuai, Geagen, Michael, Yu, Lifeng, and Noël, Peter B.
- Published
- 2023
- Full Text
- View/download PDF
35. Real-time single frame tomosynthesis: prototype and radiotherapy applications.
- Author
-
Hsieh, Scott S., Ng, Lydia W., Cao, Minsong, and Lee, Percy
- Published
- 2023
- Full Text
- View/download PDF
36. Charge sharing correction for photon counting detectors with coincidence counters.
- Author
-
Taguchi, Katsuyuki and Hsieh, Scott S.
- Published
- 2023
- Full Text
- View/download PDF
37. Digital count summing vs analog charge summing for photon counting detectors: A performance simulation study.
- Author
-
Hsieh, Scott S. and Sjolin, Martin
- Subjects
- *
PHOTON counting , *PHOTON detectors , *SIMULATION methods & models , *CADMIUM telluride detectors , *ELECTRIC charge - Abstract
Purpose: Charge sharing is a significant problem for CdTe‐based photon counting detectors (PCDs) and can cause high‐energy photons to be misclassified as one or more low‐energy events. Charge sharing is especially problematic in PCDs for CT because the high flux necessitates small pixels, which increase the magnitude of charge sharing. Analog charge summing (ACS) is a powerful solution to reduce spectral distortion arising from charge sharing but may be difficult to implement. We investigate correction of the signal after digitization by the comparator ("digital count summing"), which is only able to correct a subset of charge sharing events but may have implementation advantages. We compare and quantify the relative performance of digital and ACS in simulations. Methods: Transport of photons in CdTe was modeled using Monte Carlo simulations. Energy deposited in the CdTe substrate was converted to electrical charges of a predetermined shape, and all charges within the detector pixel are assumed to be perfectly collected. In ACS, the maximum charge received over any 2 × 2 block of pixels was grouped together prior to digitization. In digital count summing (DCS), the charge was digitized in each pixel, and subsequently, adjacent pixels that detected events grouped their charge to record a single, higher energy event. All simulations were performed at the limit of low flux (no pileup). The default tube voltage was 120 kVp, object thickness was 20 cm of water, pixel pitch was 250 μm, and charge cloud modeled as a Gaussian with σ = 40 μm. Variation of these parameters was examined in a sensitivity analysis. Results: Detectors that used no correction, DCS, and ACS misclassified 51%, 39%, and 15% of incident photons, respectively. For iodine basis material imaging, DCS exhibited 100% greater dose efficiency compared to uncorrected, and ACS exhibited an additional 111% greater dose efficiency compared to digital count summing.
For a nonspectral task, the dose efficiency improvement as estimated by improvement of zero‐frequency detective quantum efficiency, DQE(0) was 10% for DCS compared to uncorrected and 10% for ACS compared to DCS. A sensitivity analysis showed that DCS generally achieved half the benefit of ACS over a range of conditions, although the benefit was markedly less if the charge cloud was instead modeled as a small sphere. Conclusions: Summing of counts after digitization may be a simpler alternative to summing of charge prior to digitization due to the relative complexity of analog circuit design. Over most conditions studied, it provides roughly half the benefit of ACS and may offer certain implementation advantages. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
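The digital count summing step above can be sketched for its simplest case: a row of pixels read out in the same coincidence window, where adjacent triggered pixels merge into one event at the summed energy, assigned to the pixel with the larger deposit. The 25 keV threshold and 1‑D neighborhood are illustrative simplifications; the paper considers full 2‑D pixel grids:

```python
def digital_count_sum(energies, threshold=25.0):
    """Merge coincident events in adjacent pixels (1-D sketch of DCS).
    energies: per-pixel deposited energy (keV) in one coincidence window.
    Returns a list of (pixel_index, recorded_energy) events."""
    triggered = [e >= threshold for e in energies]
    events = []
    i = 0
    while i < len(energies):
        if triggered[i]:
            if i + 1 < len(energies) and triggered[i + 1]:
                # coincident neighbors: record one event at the summed energy
                winner = i if energies[i] >= energies[i + 1] else i + 1
                events.append((winner, energies[i] + energies[i + 1]))
                i += 2
                continue
            events.append((i, energies[i]))
        i += 1
    return events
```

Note the limitation the abstract points out: a shared deposit that falls below the comparator threshold in the neighbor is invisible after digitization, so DCS recovers only a subset of what ACS can.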
38. Effect of Spectral Degradation and Spatio-Energy Correlation in X-Ray PCD for Imaging.
- Author
-
Rajbhandary, Paurakh L., Hsieh, Scott S., and Pelc, Norbert J.
- Subjects
- *
X-rays , *MONTE Carlo method , *MONOENERGETIC radiation , *APERTURE-coupled microstrip antennas , *PHOTON counting - Abstract
Charge sharing, scatter, and fluorescence events in a photon counting detector can result in counting of a single incident photon in multiple neighboring pixels, each at a fraction of the true energy. This causes energy distortion and correlation of data across energy bins in neighboring pixels (spatio-energy correlation), with the severity depending on the detector pixel size and detector material. If a "macro-pixel" is formed by combining the counts from multiple adjacent small pixels, it will exhibit correlations across its energy bins. Understanding these effects can be crucial for detector design and for model-based imaging applications. This paper investigates the impact of these effects in basis material and effective monoenergetic estimates using the Cramér-Rao Lower Bound. To do so, we derive a correlation model for the multi-counting events. CdTe detectors with grids of pixels with side lengths of 250 μm, 500 μm, and 1 mm were compared, with binning of 4 × 4, 2 × 2, and 1 × 1 pixels, respectively, to keep the net 1 mm² aperture constant. The same flux was applied to each. The mean and covariance matrix of measured photon counts were derived analytically using spatio-energy response functions precomputed from Monte Carlo simulations. Our results show that a 1 mm² macro-pixel with 250 × 250 μm² sub-pixels shows 35% higher standard deviation than a single 1 mm² pixel for material-specific imaging, while the penalty for effective monoenergetic imaging is <10% compared with a single 1 mm² pixel. Potential benefits of sub-pixels (higher spatial resolution and lower pulse pile-up effects) are important but were not investigated here. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
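The Cramér-Rao Lower Bound calculation described in this abstract can be sketched under a Gaussian approximation of the count statistics: the Fisher information is built from the sensitivity of the mean bin counts to the basis material thicknesses and the bin-count covariance, and its inverse bounds the estimator covariance. The sensitivity matrix `A` and the covariance values below are illustrative stand-ins, not values from the paper.

```python
import numpy as np

# Sensitivity matrix: d(mean bin counts)/d(basis thickness), 2 bins x 2 bases.
# Values are illustrative, not taken from the paper.
A = np.array([[-50.0, -20.0],
              [-10.0, -40.0]])

def crlb(A, Sigma):
    """CRLB covariance of basis estimates: inverse of the Fisher information."""
    fisher = A.T @ np.linalg.inv(Sigma) @ A
    return np.linalg.inv(fisher)

# Independent energy bins vs. bins correlated by multi-counting events.
Sigma_ind = np.diag([1000.0, 800.0])
c = 0.3 * np.sqrt(1000.0 * 800.0)         # illustrative inter-bin covariance
Sigma_cor = np.array([[1000.0, c], [c, 800.0]])

var_ind = np.diag(crlb(A, Sigma_ind))     # minimum variances, independent bins
var_cor = np.diag(crlb(A, Sigma_cor))     # minimum variances, correlated bins
```

Comparing `var_ind` and `var_cor` for response functions derived from a detector model is the kind of comparison the paper carries out at much higher fidelity.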
39. Technical Note: FreeCT_ICD: An open‐source implementation of a model‐based iterative reconstruction method using coordinate descent optimization for CT imaging investigations.
- Author
-
Hoffman, John M., Noo, Frédéric, Young, Stefano, Hsieh, Scott S., and McNitt‐Gray, Michael
- Subjects
COMPUTED tomography ,IMAGE reconstruction ,COMPUTER-aided design ,ATTENUATION coefficients ,DIAGNOSTIC imaging - Abstract
Purpose: To facilitate investigations into the impacts of acquisition and reconstruction parameters on quantitative imaging, radiomics and CAD using CT imaging, we previously released an open‐source implementation of a conventional weighted filtered backprojection reconstruction called FreeCT_wFBP. Our purpose was to extend that work by providing an open‐source implementation of a model‐based iterative reconstruction method using coordinate descent optimization, called FreeCT_ICD. Methods: Model‐based iterative reconstruction offers the potential for substantial radiation dose reduction, but can impose substantial computational and storage requirements. FreeCT_ICD is an open‐source implementation of a model‐based iterative reconstruction method that provides a reasonable tradeoff between these requirements. This was accomplished by adapting a previously proposed method that allows the system matrix to be stored with a reasonable memory requirement. The method amounts to describing the attenuation coefficient using rotating slices that follow the helical geometry. In the initially proposed version, the rotating slices are themselves described using blobs. We have replaced this description with a model that relies on trilinear interpolation together with the principles of Joseph's method. This model offers an improvement in memory requirement while still allowing highly accurate reconstruction for conventional CT geometries. The system matrix is stored column‐wise and combined with an iterative coordinate descent (ICD) optimization. The result is FreeCT_ICD, which is a reconstruction program developed on the Linux platform using C++ libraries and released under the open‐source GNU GPL v2.0 license. The software is capable of reconstructing raw projection data of helical CT scans. 
In this work, the software has been described and evaluated by reconstructing datasets exported from a clinical scanner which consisted of an ACR accreditation phantom dataset and a clinical pediatric thoracic scan. Results: For the ACR phantom, image quality was comparable to clinical reconstructions as well as reconstructions using open‐source FreeCT_wFBP software. The pediatric thoracic scan also yielded acceptable results. In addition, we did not observe any deleterious impact in image quality associated with the utilization of rotating slices. These evaluations also demonstrated reasonable tradeoffs in storage requirements and computational demands. Conclusion: FreeCT_ICD is an open‐source implementation of a model‐based iterative reconstruction method that extends the capabilities of previously released open‐source reconstruction software and provides the ability to perform vendor‐independent reconstructions of clinically acquired raw projection data. This implementation represents a reasonable tradeoff between storage and computational requirements and has demonstrated acceptable image quality in both simulated and clinical image datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
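The iterative coordinate descent (ICD) optimization named in this abstract updates one voxel at a time against the current residual. A minimal sketch for an unpenalized weighted least squares problem is below; the small random `A` stands in for FreeCT_ICD's column-stored helical system matrix and is purely illustrative.

```python
import numpy as np

# Toy linear system: rows are rays, columns are voxels (illustrative only).
rng = np.random.default_rng(0)
A = rng.random((20, 5))
x_true = rng.random(5)
y = A @ x_true                 # noiseless projections for this toy demo
w = np.ones(20)                # per-ray statistical weights

x = np.zeros(5)
for sweep in range(2000):
    for j in range(A.shape[1]):          # update one voxel (coordinate) at a time
        a_j = A[:, j]                    # column j of the system matrix
        r = y - A @ x                    # current residual
        # Exact 1D minimizer of the weighted least squares cost along x_j:
        x[j] += (a_j * w) @ r / ((a_j * w) @ a_j)
```

Column-wise access to `A` is exactly what each inner update needs, which is why the abstract emphasizes column-wise storage of the system matrix.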
40. Spectral resolution and high‐flux capability tradeoffs in CdTe detectors for clinical CT.
- Author
-
Hsieh, Scott S., Rajbhandary, Paurakh L., and Pelc, Norbert J.
- Subjects
- *
COMPUTED tomography , *PHOTON counting , *PHOTON detectors , *MEDICAL radiography , *MONTE Carlo method - Abstract
Purpose: Photon‐counting detectors using CdTe or CZT substrates are promising candidates for future CT systems but suffer from a number of nonidealities, including charge sharing and pulse pileup. By increasing the pixel size of the detector, the system can improve charge sharing characteristics at the expense of increasing pileup. The purpose of this work is to describe these considerations in the optimization of the detector pixel pitch. Methods: The transport of x rays through the CdTe substrate was simulated in a Monte Carlo fashion using GEANT4. Deposited energy was converted into charges distributed as a Gaussian function with size dependent on interaction depth to capture spreading from diffusion and Coulomb repulsion. The charges were then collected in a pixelated fashion. Pulse pileup was incorporated separately with Monte Carlo simulation. The Cramér–Rao lower bound (CRLB) of the measurement variance was numerically estimated for the basis material projections. Noise in these estimates was propagated into CT images. We simulated pixel pitches of 250, 350, and 450 microns and compared the results to a photon counting detector with pileup but otherwise ideal energy response and an ideal dual‐energy system (80/140 kVp with tin filtration). The modeled CdTe thickness was 2 mm, the incident spectrum was 140 kVp and 500 mA, and the effective dead time was 67 ns. Charge summing circuitry was not modeled. We restricted our simulations to objects of uniform thickness and did not consider the potential advantage of smaller pixels at high spatial frequencies. Results: At very high x‐ray flux, pulse pileup dominates and small pixel sizes perform best. At low flux or for thick objects, charge sharing dominates and large pixel sizes perform best. At low flux and depending on the beam hardness, the CRLB of the variance in basis material projection tasks can be 32%–55% higher with a 250 micron pixel pitch compared to a 450 micron pixel pitch. 
However, both are about four times worse in variance than the ideal photon counting detector. The optimal pixel size depends on a number of factors such as x‐ray technique and object size. At high technique (140 kVp/500 mA), the ratio of variance for a 450 micron pixel compared to a 250 micron pixel size is 2126%, 200%, 97%, and 78% when imaging 10, 15, 20, and 25 cm of water, respectively. If 300 mg/cm2 of iodine is also added to the object, the variance ratio is 117%, 91%, 74%, and 72%, respectively. Nonspectral tasks, such as equivalent monoenergetic imaging, are less sensitive to spectral distortion. Conclusions: The detector pixel size is an important design consideration in CdTe detectors. Smaller pixels allow for improved capabilities at high flux but increase charge sharing, which in turn compromises spectral performance. The optimal pixel size will depend on the specific task and on the charge shaping time. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
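The pileup side of the pixel-pitch tradeoff in this abstract can be illustrated with a simple nonparalyzable dead-time model (a common textbook approximation; the paper itself uses Monte Carlo pileup simulation). Smaller pixels each see a smaller share of the flux, so each pixel loses a smaller fraction of its counts. The flux value below is illustrative; only the 67 ns dead time comes from the abstract.

```python
# Nonparalyzable dead-time model: recorded fraction = 1 / (1 + n * tau),
# where n is the true count rate seen by one pixel.
TAU = 67e-9                 # effective dead time in seconds (from the abstract)

def recorded_fraction(true_rate_cps, tau=TAU):
    """Fraction of incident photons actually counted by a nonparalyzable detector."""
    return 1.0 / (1.0 + true_rate_cps * tau)

flux = 2e8                  # incident photons / s / mm^2 (illustrative high flux)
eff = {}
for pitch_um in (250, 450):
    area_mm2 = (pitch_um * 1e-3) ** 2      # pixel area in mm^2
    eff[pitch_um] = recorded_fraction(flux * area_mm2)
```

Here the 250 µm pixels retain a larger fraction of their counts than the 450 µm pixels at the same flux, matching the abstract's observation that small pixels win at very high flux.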
41. Segmented targeted least squares estimator for material decomposition in multibin photon-counting detectors.
- Author
-
Rajbhandary, Paurakh L., Hsieh, Scott S., and Pelc, Norbert J.
- Published
- 2017
- Full Text
- View/download PDF
42. Improving pulse detection in multibin photon-counting detectors.
- Author
-
Hsieh, Scott S. and Pelc, Norbert J.
- Published
- 2016
- Full Text
- View/download PDF
43. A limit on dose reduction possible with CT reconstruction algorithms without prior knowledge of the scan subject.
- Author
-
Hsieh, Scott S., Chesler, David A., Fleischmann, Dominik, and Pelc, Norbert J.
- Subjects
- *
PRIOR learning , *ALGORITHMS , *MEDICAL technology , *MEDICAL model , *MEDICAL research - Abstract
Purpose: To find an upper bound on the maximum dose reduction possible for any reconstruction algorithm, analytic or iterative, that results from the inclusion of the data statistics. The authors do not analyze noise reduction possible from prior knowledge or assumptions about the object. Methods: The authors examined the task of estimating the density of a circular lesion in a cross section. Raw data were simulated by forward projection of existing images and numerical phantoms. To assess an upper bound on the achievable dose reduction by any algorithm, the authors assume that both the background and the shape of the lesion are completely known. Under these conditions, the best possible estimate of the density can be determined by solving a weighted least squares problem directly in the raw data domain. Any possible reconstruction algorithm that does not use prior knowledge or make assumptions about the object, including filtered backprojection (FBP) or iterative reconstruction methods with this constraint, must be no better than this least squares solution. The authors simulated 10 000 sets of noisy data and compared the variance in density from the least squares solution with those from FBP. Density was estimated from FBP images using either averaging within a ROI, or streak-adaptive averaging with better noise performance. Results: The bound on the possible dose reduction depends on the degree to which the observer can read through the possibly streaky noise. For the described low contrast detection task with the signal shape and background known exactly, the average dose reduction possible compared to FBP with streak-adaptive averaging was 42% and it was 64% if only the ROI average is used with FBP. The exact amount of dose reduction also depends on the background anatomy, with statistically inhomogeneous backgrounds showing greater benefits. Conclusions: The dose reductions from new, statistical reconstruction methods can be bounded. 
Larger dose reductions in the density estimation task studied here are only possible with the introduction of prior knowledge, which can introduce bias. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
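The core idea of this abstract has a compact form: if both the background projections and the lesion's forward projection are known exactly, the lesion density has a closed-form weighted least squares estimate in the raw data domain, and its variance lower-bounds any unbiased image-domain estimate. The sketch below checks the estimator against its theoretical variance; all numbers are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rays = 200
bkg = 10.0 * rng.random(n_rays)     # known background projections (illustrative)
s = rng.random(n_rays)              # forward projection of the unit-density lesion
noise_var = 1.0 + bkg               # toy per-ray noise variance
alpha_true = 0.5                    # true lesion density

w = 1.0 / noise_var                 # inverse-variance weights
estimates = []
for _ in range(2000):
    y = bkg + alpha_true * s + rng.normal(0.0, np.sqrt(noise_var))
    # Closed-form WLS solution for the single unknown density alpha:
    alpha_hat = np.sum(w * s * (y - bkg)) / np.sum(w * s * s)
    estimates.append(alpha_hat)

best_var = 1.0 / np.sum(s * s / noise_var)   # minimum achievable variance
emp_var = np.var(estimates)
```

The empirical variance of the trials matches `best_var`; any FBP-based density estimate under the same assumptions can only do worse, which is how the paper bounds the achievable dose reduction.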
44. Individualized and generalized models for predicting observer performance on liver metastasis detection using CT.
- Author
-
Pillai, Parvathy Sudhir, Holmes III, David R., Carter, Rickey, Inoue, Akitoshi, Cook, David A., Karwoski, Ron, Fidler, Jeff L., Fletcher, Joel G., Leng, Shuai, Yu, Lifeng, McCollough, Cynthia H., and Hsieh, Scott S.
- Published
- 2022
- Full Text
- View/download PDF
45. A Dynamic Attenuator Improves Spectral Imaging With Energy-Discriminating, Photon Counting Detectors.
- Author
-
Hsieh, Scott S. and Pelc, Norbert J.
- Subjects
- *
X-ray imaging , *SPECTRAL imaging , *PHOTON counting , *PHOTONICS , *COMPUTED tomography , *ELECTRONIC modulation - Abstract
Energy-discriminating, photon counting (EDPC) detectors have high potential in spectral imaging applications but exhibit degraded performance when the incident count rate approaches or exceeds the characteristic count rate of the detector. In order to reduce the requirements on the detector, we explore the strategy of modulating the X-ray flux field using a recently proposed dynamic, piecewise-linear attenuator. A previous paper studied this modulation for photon counting detectors but did not explore the impact on spectral applications. In this work, we modeled detection with a bipolar triangular pulse shape (Taguchi, 2011) and estimated the Cramér-Rao lower bound (CRLB) of the variance of material selective and equivalent monoenergetic images, assuming deterministic errors at high flux could be corrected. We compared different materials for the dynamic attenuator and found that rare earth elements, such as erbium, outperformed previously proposed materials such as iron in spectral imaging. The redistribution of flux reduces the variance or dose, consistent with previous studies on benefits with conventional detectors. Numerical simulations based on DICOM datasets were used to assess the impact of the dynamic attenuator for detectors with several different characteristic count rates. The dynamic attenuator reduced the peak incident count rate by a factor of 4 in the thorax and 44 in the pelvis, and a 10 Mcps/mm² EDPC detector with a dynamic attenuator provided generally superior image quality to a 100 Mcps/mm² detector with a reference bowtie filter for the same dose. The improvement is more pronounced in the material images. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
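The variance benefit of flux redistribution mentioned in this abstract can be illustrated with a classic toy result, separate from the attenuator design itself: with a fixed photon budget and Poisson statistics, the summed variance of log-domain line-integral estimates is minimized by sending more photons along more attenuating paths, with the optimal allocation proportional to exp(l/2). The pathlengths and budget below are illustrative.

```python
import numpy as np

l = np.array([1.0, 3.0, 6.0])     # attenuation pathlengths for three rays (illustrative)
budget = 3e5                       # total incident photons shared across the rays

def total_variance(N):
    # First-order Poisson variance of the log estimate for each ray: exp(l) / N
    return np.sum(np.exp(l) / N)

# Uniform allocation vs. the exp(l/2) allocation from minimizing the sum above
# subject to a fixed total photon budget (Lagrange multiplier argument).
N_uniform = np.full(l.size, budget / l.size)
weights = np.exp(l / 2.0)
N_shaped = budget * weights / weights.sum()
```

The shaped allocation always achieves strictly lower total variance when the pathlengths differ, which is the same mechanism the dynamic attenuator exploits to trade peak count rate against dose.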
46. An inverse geometry CT system with stationary source arrays.
- Author
-
Hsieh, Scott S., Heanue, Joseph A., Funk, Tobias, Hinshaw, Waldo S., and Pelc, Norbert J.
- Published
- 2011
- Full Text
- View/download PDF
47. A 25-reader performance study for hepatic metastasis detection: lessons from unsupervised learning.
- Author
-
Hsieh, Scott S., Inoue, Akitoshi, Sudhir Pillai, Parvathy, Gong, Hao, Holmes, David R., Cook, David A., Leng, Shuai, Yu, Lifeng, Carter, Rickey E., Fletcher, Joel G., and McCollough, Cynthia H.
- Published
- 2021
- Full Text
- View/download PDF
48. Estimating the minimum SNR necessary for object detection in the projection domain.
- Author
-
Hsieh, Scott S., Yu, Lifeng, Huber, Nathan R., Leng, Shuai, and McCollough, Cynthia H.
- Published
- 2021
- Full Text
- View/download PDF
49. Implementation and initial experience with an interactive eye-tracking system for measuring radiologists' visual search in diagnostic tasks using volumetric CT images.
- Author
-
Gong, Hao, Hsieh, Scott S., Holmes, David, Cook, David, Inoue, Akitoshi, Bartlett, David, Baffour, Francis, Takahashi, Hiroaki, Leng, Shuai, Yu, Lifeng, Fletcher, Joel, and McCollough, Cynthia
- Published
- 2021
- Full Text
- View/download PDF
50. Design of a digital, motion-free mechanism for fluence field modulation.
- Author
-
Hsieh, Scott S., Leng, Shuai, Yu, Lifeng, McCollough, Cynthia H., and Wang, Adam S.
- Published
- 2021
- Full Text
- View/download PDF