8 results on '"Hsieh, Scott S."'
Search Results
2. Targeted Training Reduces Search Errors but Not Classification Errors for Hepatic Metastasis Detection at Contrast-Enhanced CT
- Author
Hsieh, Scott S., Inoue, Akitoshi, Yalon, Mariana, Cook, David A., Gong, Hao, Sudhir Pillai, Parvathy, Johnson, Matthew P., Fidler, Jeff L., Leng, Shuai, Yu, Lifeng, Carter, Rickey E., Holmes, David R., III, McCollough, Cynthia H., and Fletcher, Joel G.
- Published
- 2024
3. Existence, uniqueness, and efficiency of numerically unbiased attenuation pathlength estimators for photon counting detectors at low count rates.
- Author
Hsieh, Scott S. and Rajbhandary, Paurakh L.
- Subjects
PHOTON detectors, PHOTON counting, VECTOR spaces, COMPUTED tomography, CONVEX functions
- Abstract
Background: The first step in computed tomography (CT) reconstruction is to estimate attenuation pathlength. Usually, this is done with a logarithm transformation, which is the direct solution to the Beer-Lambert Law. At low signals, however, the logarithm estimator is biased. Bias arises both from the curvature of the logarithm and from the possibility of detecting zero counts, so a data substitution strategy may be employed to avoid the singularity of the logarithm. Recent progress has been made by Li et al. [IEEE Trans Med Img 42:6, 2023] to modify the logarithm estimator to eliminate curvature bias, but the optimal strategy for mitigating bias from the singularity remains unknown. Purpose: The purpose of this study was to use numerical techniques to construct unbiased attenuation pathlength estimators that are alternatives to the logarithm estimator, and to study the uniqueness and optimality of possible solutions, assuming a photon counting detector. Methods: Formally, an attenuation pathlength estimator is a mapping from integer detector counts to real pathlength values. We constrain our focus to only the small signal inputs that are problematic for the logarithm estimator, which we define as inputs of <100 counts, and we consider estimators that use only a single input and that are not informed by adjacent measurements (e.g., adaptive smoothing). The set of all possible pathlength estimators can then be represented as points in a 100-dimensional vector space. Within this vector space, we use optimization to select the estimator that (1) minimizes mean squared error and (2) is unbiased. We define "unbiased" as satisfying the numerical condition that the maximum bias be less than 0.001 across a continuum of 1000 object thicknesses that span the desired operating range. Because the objective function is convex and the constraints are affine, optimization is tractable and guaranteed to converge to the global minimum. We further examine the nullspace of the constraint matrix to understand the uniqueness of possible solutions, and we compare the results to the Cramér-Rao bound of the variance. Results: We first show that an unbiased attenuation pathlength estimator does not exist if very low mean detector signals (equivalently, very thick objects) are permitted. It is necessary to select a minimum mean detector signal for which unbiased behavior is desired. If we select two counts, the optimal estimator is similar to Li's estimator. If we select one count, the optimal estimator becomes non-monotonic. The oscillations cause the unbiased estimator to be noise amplifying. The nullspace of the constraint matrix is high-dimensional, so that unbiased solutions are not unique. The Cramér-Rao bound of the variance matches well with the expected $I^{-0.5}$ scaling law and cannot be attained. Conclusion: If arbitrarily thick objects are permitted, an unbiased attenuation pathlength estimator does not exist. If the maximum thickness is restricted, an unbiased estimator exists but is not unique. An optimal estimator can be selected that minimizes variance, but a bias-variance tradeoff exists where a larger domain of unbiased behavior requires increased variance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
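The constrained optimization described in this abstract (minimize mean squared error over a 100-dimensional estimator, subject to affine bias constraints on a grid of thicknesses) can be posed directly as a convex program. The sketch below is a hypothetical illustration, not the authors' code: the incident count level I0, the thickness grid endpoints, and the choice of cvxpy as solver interface are all assumptions, with only the 0.001 bias tolerance and the 100-count domain taken from the abstract.

```python
# Hypothetical sketch (not the authors' code): build a small-signal attenuation
# pathlength estimator x[k] for detector counts k = 0..99 by minimizing MSE subject
# to near-zero bias over a grid of object thicknesses. Assumes Beer-Lambert mean
# counts lambda(t) = I0 * exp(-t) and Poisson counting statistics.
import numpy as np
from scipy.stats import poisson
import cvxpy as cp

I0 = 50.0                                              # assumed incident counts (low-signal regime)
K = 100                                                # estimator domain: detector counts 0..99
t_grid = np.linspace(0.5, np.log(I0 / 2.0), 1000)      # thicknesses; minimum mean signal ~2 counts
lam = I0 * np.exp(-t_grid)                             # mean counts at each thickness

# Poisson probabilities P[t, k]; the tail beyond 99 counts is negligible for lam <= ~30
P = poisson.pmf(np.arange(K)[None, :], lam[:, None])

# MSE is separable in the estimator values: sum_k (w_k x_k^2 - 2 b_k x_k) + const
w = P.sum(axis=0)
b = (P * t_grid[:, None]).sum(axis=0)

x = cp.Variable(K)                                     # pathlength assigned to each count value
mse = cp.sum(cp.multiply(w, cp.square(x)) - 2 * cp.multiply(b, x))
bias = P @ x - t_grid                                  # E[x(counts)] - t on the thickness grid
problem = cp.Problem(cp.Minimize(mse), [cp.abs(bias) <= 1e-3])
problem.solve()

estimator = x.value                                    # compare against the logarithm estimator -ln(k / I0)
```

Because the squared-error objective couples each count value only to itself, the quadratic is diagonal, so the problem stays small even with 1000 bias constraints.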
4. 3D printed phantom with 12 000 submillimeter lesions to improve efficiency in CT detectability assessment.
- Author
Shunhavanich, Picha, Mei, Kai, Shapira, Nadav, Stayman, Joseph Webster, McCollough, Cynthia H., Gang, Grace, Leng, Shuai, Geagan, Michael, Yu, Lifeng, Noël, Peter B., and Hsieh, Scott S.
- Subjects
RECEIVER operating characteristic curves, PRINTMAKING, THREE-dimensional printing, STANDARD deviations
- Abstract
Background: The detectability performance of a CT scanner is difficult to precisely quantify when nonlinearities are present in reconstruction. An efficient detectability assessment method that is sensitive to small effects of dose and scanner settings is desirable. We previously proposed a method using a search challenge instrument: a phantom is embedded with hundreds of lesions at random locations, and a model observer is used to detect lesions. Preliminary tests in simulation and a prototype showed promising results. Purpose: In this work, we fabricated a full-size search challenge phantom with design updates, including changes to lesion size, contrast, and number, and studied our implementation by comparing the lesion detectability from a nonprewhitening (NPW) model observer between different reconstructions at different exposure levels, and by estimating the instrument's sensitivity to detect changes in dose. Methods: Designed to fit into QRM anthropomorphic phantoms, our search challenge phantom is a cylindrical insert 10 cm wide and 4 cm thick, embedded with 12 000 lesions (nominal width of 0.6 mm, height of 0.8 mm, and contrast of −350 HU), and was fabricated using PixelPrint, a 3D printing technique. The insert was scanned alone at a high dose to assess printing accuracy. To evaluate lesion detectability, the insert was placed in a QRM thorax phantom and scanned from 50 to 625 mAs in increments of 25 mAs, once per exposure level, and the average of all exposure levels was used as the high-dose reference. Scans were reconstructed with three different settings: filtered backprojection (FBP) with Br40 and Br59 kernels, and Sinogram Affirmed Iterative Reconstruction (SAFIRE) with strength level 5 and the Br59 kernel. An NPW model observer was used to search for lesions, and the detection performance of the different settings was compared using the area under the exponential transform of the free-response ROC curve (AUC). Using propagation of uncertainty, the sensitivity to changes in dose was estimated by the percent change in exposure due to one standard deviation of AUC, measured from 5 repeat scans at 100, 200, 300, and 400 mAs. Results: The printed insert lesions had an average position error of 0.20 mm compared to the printing reference. As the exposure level increased from 50 mAs to 625 mAs, the lesion detectability AUCs increased from 0.38 to 0.92, 0.42 to 0.98, and 0.41 to 0.97 for FBP Br40, FBP Br59, and SAFIRE Br59, respectively, with a lower rate of increase at higher exposure levels. FBP Br59 performed best, with an AUC 0.01 higher than SAFIRE Br59 on average and 0.07 higher than FBP Br40 (all P < 0.001). The standard deviation of AUC was less than 0.006, and the sensitivity to detect changes in mAs was within 2% for FBP Br59. Conclusions: Our 3D-printed search challenge phantom with 12 000 submillimeter lesions, together with an NPW model observer, provides an efficient CT detectability assessment method that is sensitive to subtle effects in reconstruction and to small changes in dose. [ABSTRACT FROM AUTHOR]
- Published
- 2024
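For readers unfamiliar with the nonprewhitening (NPW) model observer used in this study, its test statistic is simply the cross-correlation of the image with the expected lesion template. The following is a minimal, hypothetical sketch of such a search observer on synthetic data; the lesion size, noise level, and peak-picking threshold are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch (not the authors' implementation): NPW model observer applied
# as a search task. The score at each location is the cross-correlation of the image
# with a zero-mean lesion template; local maxima above a threshold are candidate lesions.
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import maximum_filter

def npw_scores(image, template):
    """NPW test-statistic map: cross-correlation of the image with a zero-mean template."""
    t = template - template.mean()
    return fftconvolve(image, t[::-1, ::-1], mode="same")

# Example: a single -350 HU disk lesion planted in Gaussian noise (all numbers illustrative)
rng = np.random.default_rng(0)
img = rng.normal(0.0, 10.0, (256, 256))                   # noise-only background, HU-like units
yy, xx = np.mgrid[-3:4, -3:4]
disk = np.where(xx**2 + yy**2 <= 9, -350.0, 0.0)          # 7x7 disk template
img[100:107, 120:127] += disk                             # plant one lesion centered at (103, 123)

s = npw_scores(img, disk)
thr = s.mean() + 5.0 * s.std()                            # crude threshold; a FROC analysis sweeps this
peaks = (s == maximum_filter(s, size=disk.shape)) & (s > thr)
print(list(zip(*np.nonzero(peaks))))                      # should report a location near (103, 123)
```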
5. Spectral information content of Compton scattering events in silicon photon counting detectors.
- Author
Hsieh, Scott S. and Taguchi, Katsuyuki
- Subjects
PHOTON detectors, COMPTON effect, THRESHOLD energy, PHOTOELECTRIC effect, SILICON, WATER filters, COMPTON scattering, PHOTON counting
- Abstract
Background: Silicon (Si) is a possible sensor material for photon counting detectors (PCDs). A major drawback of Si is that roughly two‐thirds of x‐ray interactions in the diagnostic energy range are Compton scattering. Because Compton scattering is an energy‐insensitive process, it is commonly assumed that Compton events retain little spectral information. Purpose: To quantify how much information can be recovered from Compton scattering events in models of Si PCDs. Methods: We built a simplified model of Si interactions including two interaction mechanisms: photoelectric effect and Compton scattering. We considered three different binning options that represent strategies for handling Compton events: in Compton censoring, all events under 38 keV (the maximum energy possible from Compton scattering for a 120 keV incident photon) were discarded; in Compton counting, all events between 1 and 38 keV were placed into a single bin; in Compton binning, all events were placed into energy bins of uniform width. These were compared to the ideal detector, which always recorded the correct energy (i.e., 100% photoelectric effect). Every photon was assumed to interact once and only once with Si, and the energy bin width was 5 keV. In the primary analysis, the Si detector was irradiated with a 120 kV spectrum filtered by 30 cm of water, with 99.5% of the arriving spectrum above 38 keV so that there was good separation between photoelectric effect and Compton scattering, and the figures of merit were the Cramér–Rao lower bound (CRLB) of the variance of iodine and water basis material decomposition images, as well as the CRLB of virtual monoenergetic images (i.e., linear combinations of material images) that maximize iodine CNR or water CNR. We also constructed a local linear estimator that attains the CRLB. In secondary analyses, we applied other sources of spectral distortion: (1) a nonzero minimum energy threshold; (2) coarser, 10 keV energy bins; and (3) a model of charge sharing. Results: With our chosen spectrum, 67% of the interactions were Compton scattering. Consistent with this, the material decomposition variance for the Compton censoring model, averaged over both basis materials, was 258% greater than the ideal detector. If Compton events carried no spectral information, the Compton counting model would show similar variance. Instead, its basis material variance was 103% greater than the ideal detector, implying that Compton counts indeed carry significant spectral information. The Compton binning model had a basis material variance 60% greater than the ideal detector. The Compton binning model was not affected by a 5 keV minimum energy threshold, but the variance increased from 60% to 107% when charge sharing was included and to 78% with coarser energy bins. For optimized CNR images, the average variance was 149%, 12%, and 10% higher than the ideal detector for the Compton censoring, counting, and binning models, reinforcing the hypothesis that Compton counts are useful for detection tasks and that precise energy assignments are not necessary. Conclusions: Substantial spectral information remains after Compton scattering events in silicon PCDs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
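The figure of merit in this abstract, the Cramér–Rao lower bound (CRLB) on basis-material variance, follows from the Fisher information of independent Poisson energy-bin counts. The sketch below shows that standard calculation in generic form; the two-bin toy detector and its effective attenuation coefficients are made-up illustrative numbers, not values or models from the paper.

```python
# Hypothetical sketch (not the paper's code): CRLB on basis-material variance for a
# photon counting detector with independent Poisson energy bins.
# F_jk = sum_i (1 / lam_i) * d(lam_i)/d(a_j) * d(lam_i)/d(a_k), CRLB = inverse(F).
import numpy as np

def crlb(expected_counts, a, eps=1e-4):
    """CRLB covariance of the basis coefficients a, given a forward model of mean bin counts."""
    lam = expected_counts(a)
    J = np.empty((lam.size, a.size))                   # Jacobian d lam_i / d a_j (central differences)
    for j in range(a.size):
        d = np.zeros_like(a)
        d[j] = eps
        J[:, j] = (expected_counts(a + d) - expected_counts(a - d)) / (2 * eps)
    F = (J / lam[:, None]).T @ J                       # Fisher information for Poisson bins
    return np.linalg.inv(F)

# Toy two-bin detector with made-up effective attenuation coefficients (illustrative only)
MU = np.array([[0.20, 4.0],                            # bin 1: [mu_water, mu_iodine] in 1/cm
               [0.17, 1.5]])                           # bin 2
N0 = np.array([5e4, 5e4])                              # incident photons contributing to each bin

def expected_counts(a):
    return N0 * np.exp(-MU @ a)

cov = crlb(expected_counts, np.array([20.0, 0.05]))    # 20 cm water, 0.05 cm iodine equivalent
print(np.sqrt(np.diag(cov)))                           # basis-material standard deviations
```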
6. Possible improvements in effective fill factor using X‐ray fluorescent interpixel reflectors.
- Author
Hsieh, Scott S.
- Subjects
PHOTON counting, MONTE Carlo method, PHOTON detectors, X-rays, SPECTRAL sensitivity, SCINTILLATORS, ELECTRON energy loss spectroscopy
- Abstract
Background: The spatial resolution of energy‐integrating diagnostic CT scanners is limited by interpixel reflectors on the detector, which optically isolate pixels but create dead space. Because the width of the reflector cannot easily be decreased, fill factor diminishes as resolution increases. Purpose: We propose loading (or mixing) a high‐Z element into the reflectors, causing the reflectors to be X‐ray fluorescent. Re‐emitted characteristic X‐rays could be detected in adjacent pixels, increasing the effective fill factor and compensating for fill factor loss with higher‐resolution detectors. The purpose of this work is to understand the physical principles of this approach and to analyze its effectiveness using Monte Carlo simulations. Methods: Detector pixels were modeled using the GEANT4 Monte Carlo package. The width of the reflector was kept constant at 0.1 mm throughout, and we considered pixel pitches between 0.5 and 1 mm. The pixelated scintillator material was gadolinium oxysulfide, 3 mm thick. The baseline reflector material was chosen to be acrylic, and varying concentrations of a high‐Z element were loaded into the material. We assumed that the optical characteristics of pixels were ideal (no absorption within pixels, perfect reflection at boundaries). The detector was irradiated uniformly with 10,000 X‐ray photons to estimate its spectral response. The figure of merit was the variance of the detector signal at zero frequency normalized to that of an ideal single‐bin photon‐counting detector with 100% fill factor. Sensitivity analyses were conducted to understand the effect of varying the high‐Z element concentration and the spectrum. Results: Initial simulations suggested that a k‐edge near 50 keV would be ideal. Gd was therefore selected as the high‐Z material. The relative variances for a conventional energy integrating detector without Gd at 1 mm pixel pitch (81% fill factor) and 0.5 mm pixel pitch (64% fill factor) were 1.38 and 1.74, compared to 1.00 for an ideal photon counting detector, implying a 26% variance penalty for 0.5 mm pitch. When 1 g/cm3 Gd was loaded into the interpixel reflector, the relative variance improved to 1.27 and 1.43, respectively, implying that the variance penalty for including Gd together with 0.5 mm pitch is only 4%. Performance was nearly maximized at 1.0 g/cm3 of Gd, but a concentration of 0.5 g/cm3 of Gd showed most of the benefit. Improvements depend weakly on kV, with lower kV associated with higher improvements. An external anti‐scatter grid was not modeled in our simulations and would reduce the expected benefit, depending greatly on the pitch and dimensionality of the anti‐scatter grid. Conclusions: The losses in fill factor associated with smaller pixel pitch can be reduced if Gd or a similar element could be loaded into the interpixel reflector. These improvements in noise efficiency are yet to be verified experimentally. [ABSTRACT FROM AUTHOR]
- Published
- 2024
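One common way to express a zero-frequency noise figure of merit like the one in this abstract is the ratio E[E²]/E[E]² of the per-incident-photon deposited energy, which equals 1 for an ideal single-bin photon counter with 100% fill factor. The sketch below is a hypothetical illustration under that assumption; the 81% fill factor, monoenergetic 60 keV photons, and ~43 keV reclaimed fluorescence energy are illustrative stand-ins, not the paper's GEANT4 results, and the exact normalization used in the paper may differ.

```python
# Hypothetical sketch (not the paper's GEANT4 pipeline): relative zero-frequency variance
# of an energy-integrating signal versus an ideal photon counter, computed from a list of
# per-incident-photon deposited energies (keV; 0 for photons lost in dead space).
# For compound-Poisson statistics this relative variance is E[E^2] / E[E]^2.
import numpy as np

def relative_variance(deposited_keV):
    e = np.asarray(deposited_keV, dtype=float)
    return np.mean(e**2) / np.mean(e)**2

# Illustrative toy spectral response: 81% of incident 60 keV photons deposit their full
# energy; 19% land on the reflector and are lost (no fluorescence reclaim).
rng = np.random.default_rng(1)
hit = rng.random(10_000) < 0.81
no_gd = np.where(hit, 60.0, 0.0)
print(relative_variance(no_gd))            # ~1/0.81 ~ 1.23 for this toy case

# If Gd fluorescence lets neighboring pixels recover ~43 keV from half of the lost photons:
reclaimed = np.where(rng.random(10_000) < 0.5, 43.0, 0.0)
with_gd = np.where(hit, 60.0, reclaimed)
print(relative_variance(with_gd))          # smaller value -> improved effective fill factor
```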
7. Direct energy binning for photon counting detectors: Simulation study.
- Author
Taguchi, Katsuyuki and Hsieh, Scott S.
- Subjects
PHOTON counting, PHOTON detectors, PHOTON beams, CONDITIONAL expectations, LINE integrals, DATA binning, SPECTRAL imaging, POISSON regression
- Abstract
Background: Photon counting detectors (PCDs) for x-ray computed tomography (CT) face spectral distortion from pulse pileup and charge sharing. The photon counting scheme used by many PCDs is threshold–subtract (TS) with pulse height analysis (PHA), where each counter counts up-crossing events when pulses exceed an energy threshold. PCD data are not Poisson-distributed due to charge sharing and pulse pileup, but the counting statistics have not yet been studied. Purpose: The objectives of this study were (1) to propose a modified photon counting scheme, direct energy binning (DB), that is expected to be robust against pulse pileup; (2) to assess the performance of DB compared to TS; and (3) to evaluate its counting statistics. Methods: With the DB scheme, counter k starts a timer upon an up-crossing event of energy threshold k, and adds a count only if the next higher energy threshold (k+1) was not crossed within a short time window (hence, the pulse peak belongs to energy bin k). We used Monte Carlo (MC) simulation and assessed count-rate curves and count-rate-dependent spectral imaging task performance for conventional CT imaging as well as water thickness estimation, water–bone material decomposition, and K-edge imaging with tungsten as the K-edge material. We also assessed count-rate-dependent measurement statistics such as the expectation, variance, and covariance of total counts as well as of the energy bin outputs. The agreement with counting statistics models was also evaluated. Results: The DB scheme improved the count-rate curve, that is, mean measured counts as a function of input count rate, and peaked with 59% higher count-rate capability than the TS scheme (3.5 × 10⁸ counts per second (cps)/mm² versus 2.3 × 10⁸ cps/mm²). The Cramér–Rao lower bounds (CRLB) of the variance of basis line integral estimation for DB were better than those for TS by 2% for conventional CT imaging, 30% for water–bone material decomposition, and 32% for K-edge imaging at 1000 mA (at 7.3 × 10⁷ cps/sub-pixel after charge sharing). When count rates were lower, PCD data statistics were dominated by charge sharing: the variance of total counts and of the lower energy bins was larger than the mean counts, and the covariance of bin data was positive and non-zero. When count rates were higher, PCD data statistics were dominated by pulse pileup: the variance of the data was lower than the mean, and the covariance of bin data was negative. The transition between the two regimes occurred smoothly, and pulse pileup dominated the statistics at ≥400 mA (when the count rate after charge sharing was 2.9 × 10⁷ cps/sub-pixel and the probability of count loss for DB was 37%). Both DB and TS had good agreement with the Yu–Fessler models of total counts; however, DB had better agreement with Wang's variance and covariance models for energy bin data than TS did. Conclusions: The proposed DB scheme had several advantages over TS. At low to moderate flux, DB could improve the resilience of PCDs to pulse pileup. Counting statistics deviated from the Poisson distribution due to charge sharing at lower count rates and due to pulse pileup at higher count rates. [ABSTRACT FROM AUTHOR]
- Published
- 2024
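As a rough illustration of the direct energy binning rule described in this abstract, the sketch below applies a loose, event-level caricature of DB to a Poisson photon stream with idealized rectangular pulses that sum when they overlap. The thresholds, pulse width, and toy incident spectrum are assumptions; the paper's Monte Carlo model of pulse shapes, charge sharing, and timer logic is far more detailed.

```python
# Hypothetical sketch (not the authors' MC code): a simplified event-driven caricature of
# direct energy binning (DB). Overlapping pulses are grouped into one piled-up event, and a
# single count is recorded in the bin containing the summed pulse peak.
import numpy as np

TAU = 10e-9                                    # assumed pulse width, seconds
THRESH = np.array([20.0, 50.0, 80.0, np.inf])  # keV thresholds; last entry is a sentinel

def simulate_db(rate_cps, duration, rng):
    """Return counts per DB energy bin for a Poisson photon stream."""
    n = rng.poisson(rate_cps * duration)
    times = np.sort(rng.random(n) * duration)
    energies = rng.exponential(40.0, n) + 20.0              # toy incident spectrum, keV
    counts = np.zeros(len(THRESH) - 1, dtype=int)
    i = 0
    while i < n:
        j = i + 1
        peak = energies[i]
        while j < n and times[j] - times[j - 1] < TAU:       # chain photons whose pulses overlap
            peak += energies[j]                              # overlapping pulses sum (pileup)
            j += 1
        k = np.searchsorted(THRESH, peak, side="right") - 1
        if 0 <= k < len(counts):
            counts[k] += 1                                   # DB: one count, in the bin of the peak
        i = j
    return counts

rng = np.random.default_rng(2)
for rate in (1e6, 1e7, 1e8):
    print(rate, simulate_db(rate, 1e-3, rng))                # count loss and spectral shift grow with pileup
```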
8. Imaging performance of a LaBr3:Ce scintillation detector for photon counting x‐ray computed tomography: Simulation study.
- Author
Taguchi, Katsuyuki, Schaart, Dennis R., Goorden, Marlies C., and Hsieh, Scott S.
- Abstract
Background: Photon counting detectors (PCDs) for x-ray computed tomography (CT) are the future of CT imaging. At present, semiconductor-based PCDs such as cadmium telluride (CdTe), cadmium zinc telluride, and silicon have been either used or investigated for clinical PCD CT. Unfortunately, all of them have the same major challenges, namely high cost and limited spectral signal-to-noise ratio (SNR). Recent studies showed that some high-quality scintillators, such as lanthanum bromide doped with cerium (LaBr3:Ce), are less expensive and almost as fast as CdTe. Purpose: The objective of this study is to assess the performance of a LaBr3:Ce PCD for clinical x-ray CT. Methods: We performed Monte Carlo simulations and compared the performance of 3 mm thick LaBr3:Ce and 2 mm thick CdTe for PCD CT with x-rays at 120 kVp and 20–1000 mA. The two PCDs were operated with either a threshold–subtract (TS) counting scheme or a direct energy binning (DB) counting scheme. The performance was assessed in terms of the accuracy of registered spectra, counting capability, and count-rate-dependent spectral imaging-task performance, for conventional CT imaging, water–bone material decomposition, and K-edge imaging with tungsten as the K-edge material. The performance for these imaging tasks was quantified by the nCRLB, that is, the Cramér–Rao lower bound on the variance of basis line-integral estimation, normalized by the corresponding value of CdTe at 20 mA. Results: The spectrum recorded by CdTe was distorted significantly due to charge sharing, whereas the spectra recorded by LaBr3:Ce better matched the incident spectrum. The dead time, estimated by fitting a paralyzable detector model to the count-rate curves, was 20.7, 15.0, 37.2, and 13.0 ns for CdTe with TS, CdTe with DB, LaBr3:Ce with TS, and LaBr3:Ce with DB, respectively. Conventional CT imaging showed an adverse effect of reduced geometrical efficiency due to optical reflectors in the LaBr3:Ce PCD. The nCRLBs (a lower value indicates a better SNR) for CdTe with TS, CdTe with DB, LaBr3:Ce with TS, LaBr3:Ce with DB, and the ideal PCD were 1.00 ± 0.01, 1.00 ± 0.01, 1.18 ± 0.02, 1.18 ± 0.02, and 0.79 ± 0.01, respectively, at 20 mA. The nCRLBs for water–bone material decomposition, in the same order, were 1.00 ± 0.02, 1.00 ± 0.02, 0.85 ± 0.02, 0.85 ± 0.02, and 0.24 ± 0.02, respectively, at 20 mA; and 0.98 ± 0.02, 0.98 ± 0.02, 1.09 ± 0.02, 0.83 ± 0.02, and 0.24 ± 0.02, respectively, at 1000 mA. Finally, the nCRLBs for K-edge imaging, the most demanding task among the five, were 1.00 ± 0.02, 1.00 ± 0.02, 0.55 ± 0.02, 0.55 ± 0.02, and 0.13 ± 0.02, respectively, at 20 mA; and 2.45 ± 0.02, 2.29 ± 0.02, 3.12 ± 0.02, 2.11 ± 0.02, and 0.13 ± 0.02, respectively, at 1000 mA. Conclusion: The Monte Carlo simulations showed that, compared to CdTe with either TS or DB, LaBr3:Ce with DB provided more accurate spectra, comparable or better counting capability, and superior spectral imaging-task performance, that is, for water–bone material decomposition and K-edge imaging. CdTe performed better than LaBr3:Ce for the conventional CT imaging task due to its higher geometrical efficiency. A LaBr3:Ce PCD with the DB scheme may be an excellent alternative to a CdTe PCD. [ABSTRACT FROM AUTHOR]
- Published
- 2024
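The dead times quoted in this abstract were obtained by fitting a paralyzable detector model, m = n · exp(−n·τ), to the count-rate curves. A minimal sketch of such a fit is shown below on synthetic data; the 15 ns "true" dead time, the incident-rate range, and the noise level are illustrative assumptions, not values from the paper's simulations.

```python
# Hypothetical sketch (not the paper's analysis code): estimate detector dead time by
# fitting the paralyzable-detector model m = n * exp(-n * tau) to a count-rate curve.
# The "measured" data are synthetic, generated with a true dead time of 15 ns plus noise.
import numpy as np
from scipy.optimize import curve_fit

def paralyzable(n, tau):
    """Recorded count rate for incident rate n (cps) and dead time tau (s)."""
    return n * np.exp(-n * tau)

rng = np.random.default_rng(3)
incident = np.linspace(1e6, 5e8, 40)                        # incident count rates, cps
measured = paralyzable(incident, 15e-9) * rng.normal(1.0, 0.01, incident.size)

(tau_hat,), cov = curve_fit(paralyzable, incident, measured, p0=[10e-9])
print(f"estimated dead time: {tau_hat * 1e9:.1f} ns")       # should be close to 15 ns
```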