1,474 results
Search Results
2. Anniversary Paper: Development of x-ray computed tomography: The role of Medical Physics and AAPM from the 1970s to present
- Author
- Patrick J. La Riviere, Jeffrey H. Siewerdsen, Willi A. Kalender, and Xiaochuan Pan
- Subjects
- Cone beam computed tomography, X-ray computed tomography, Medical imaging, Medicine, Medical physics, Computed tomography, General Medicine, Tomography, Tomosynthesis
- Abstract
The AAPM, through its members, meetings, and its flagship journal Medical Physics, has played an important role in the development and growth of x-ray tomography in the last 50 years. From a spate of early articles in the 1970s characterizing the first commercial computed tomography (CT) scanners through the "slice wars" of the 1990s and 2000s, the history of CT and related techniques such as tomosynthesis can readily be traced through the pages of Medical Physics and the annals of the AAPM and RSNA/AAPM Annual Meetings. In this article, the authors intend to give a brief review of the role of Medical Physics and the AAPM in CT and tomosynthesis imaging over the last few decades.
- Published
- 2008
3. Anniversary Paper: Role of medical physicists and the AAPM in improving geometric aspects of treatment accuracy and precision
- Author
- Paul J. Keall, Ellen Yorke, and Frank Verhaegen
- Subjects
- Medical physicist, External beam radiation, Medical imaging, Medicine, Medical physics, Computed tomography, General Medicine, Ultrasonography, Image guidance
- Abstract
The last 50 years have seen great advances in the accuracy of external beam radiation therapy. Geometrical uncertainties have been reduced from a centimeter or more in presimulation, skin-mark guided days to 1-2 mm in today's image-guided radiation therapy treatments. Medical physicists, with the support and guidance of the American Association of Physicists in Medicine (AAPM), have been, and continue to be, at the forefront of research, development and clinical implementation in this area. This article reviews some of the major contributions of physicists to the improvement of treatment accuracy and precision, and speculates as to what the future may bring.
- Published
- 2008
4. 3D‐printed iodine‐ink CT phantom for radiomics feature extraction ‐ advantages and challenges.
- Author
- Bach, Michael, Aberle, Christoph, Depeursinge, Adrien, Jimenez-del-Toro, Oscar, Schaer, Roger, Flouris, Kyriakos, Konukoglu, Ender, Müller, Henning, Stieltjes, Bram, and Obmann, Markus M.
- Subjects
- IMAGING phantoms, FEATURE extraction, RADIOMICS, COMPUTED tomography, NO-tillage, DISTRIBUTION (Probability theory)
- Abstract
Background: To test and validate novel CT techniques, such as texture analysis in radiomics, repeat measurements are required. Current anthropomorphic phantoms lack fine texture and true anatomic representation. 3D-printing of iodinated ink on paper is a promising phantom manufacturing technique. Previously acquired or artificially created CT data can be used to generate realistic phantoms. Purpose: To present the design process of an anthropomorphic 3D-printed iodine ink phantom, highlighting the different advantages and pitfalls in its use, and to analyze the phantom's X-ray attenuation properties and the influence of the printing process on the imaging characteristics by comparing it to the original input dataset. Methods: Two patient CT scans and artificially generated test patterns were combined in a single dataset for phantom printing and cropped to a size of 26 × 19 × 30 cm³. This DICOM dataset was printed on paper using iodinated ink. The phantom was CT-scanned and compared to the original image dataset used for printing the phantom. The water-equivalent diameter of the phantom was compared to that of a patient cohort (N = 104). Iodine concentrations in the phantom were measured using dual-energy CT. 86 radiomics features were extracted from 10 repeat phantom scans and from the input dataset. Features were compared individually using histogram analysis and overall using a principal component analysis (PCA). The frequency content was compared using the normalized spectrum modulus. Results: Low density structures are depicted incorrectly, while soft tissue structures show excellent visual accordance with the input dataset. Maximum deviations of around 30 HU between the original dataset and phantom HU values were observed. The phantom has X-ray attenuation properties comparable to a lightweight adult patient (∼54 kg, BMI 19 kg/m²). Iodine concentrations in the phantom varied between 0 and 50 mg/ml. PCA of radiomics features shows that the different tissue types separate into similar areas of the PCA representation in the phantom scans and in the input dataset. Individual feature analysis revealed a systematic shift of first order radiomics features compared to the original dataset, while some higher order radiomics features did not show such a shift. The normalized frequency modulus |f(ω)| of the phantom data agrees well with the original data. However, relative to the maximum of the spectrum modulus, all frequencies occur more frequently in the phantom than in the original data set, especially the mid-frequencies (e.g., for ω = 0.3942 mm⁻¹, |f(ω)|original = 0.09 * |fmax|original and |f(ω)|phantom = 0.12 * |fmax|phantom). Conclusions: 3D-iodine-ink-printing technology can be used to print anthropomorphic phantoms with a water-equivalent diameter of a lightweight adult patient. Challenges include small residual air enclosures and the fidelity of HU values. For soft tissue, there is good agreement between the HU values of the phantom and the input data set. Radiomics texture features of the phantom scans are similar to those of the input data set, but systematic shifts of first order radiomics features, due to differences in HU values, need to be considered. The paper substrate influences the spatial frequency distribution of the phantom scans. This phantom type is of very limited use for dual-energy CT analyses. [ABSTRACT FROM AUTHOR]
- Published
- 2023
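Note: the comparison above hinges on the water-equivalent diameter. As a point of reference, a minimal sketch of the standard AAPM Report 220 calculation is shown below; the function and argument names are illustrative and not taken from the paper.

```python
import numpy as np

def water_equivalent_diameter(ct_slice_hu, pixel_spacing_mm, roi_mask):
    """Water-equivalent diameter (Dw) of one axial CT slice, per AAPM Report 220.

    ct_slice_hu      : 2D array of CT numbers in HU
    pixel_spacing_mm : (row, col) pixel spacing in mm
    roi_mask         : boolean mask of the patient/phantom cross-section
    """
    pixel_area_mm2 = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    roi_area_mm2 = roi_mask.sum() * pixel_area_mm2
    mean_hu = ct_slice_hu[roi_mask].mean()
    # Water-equivalent area: scale the ROI area by its mean attenuation relative to water
    aw_mm2 = (mean_hu / 1000.0 + 1.0) * roi_area_mm2
    return 2.0 * np.sqrt(aw_mm2 / np.pi)  # Dw in mm
```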
5. Anniversary Paper: Development of x-ray computed tomography: The role of Medical Physics and AAPM from the 1970s to present.
- Author
- Xiaochuan Pan, Siewerdsen, Jeffrey, La Riviere, Patrick J., and Kalender, Willi A.
- Subjects
- X-rays, TOMOGRAPHY, MEDICAL radiography, SCANNING systems
- Abstract
The AAPM, through its members, meetings, and its flagship journal Medical Physics, has played an important role in the development and growth of x-ray tomography in the last 50 years. From a spate of early articles in the 1970s characterizing the first commercial computed tomography (CT) scanners through the “slice wars” of the 1990s and 2000s, the history of CT and related techniques such as tomosynthesis can readily be traced through the pages of Medical Physics and the annals of the AAPM and RSNA/AAPM Annual Meetings. In this article, the authors intend to give a brief review of the role of Medical Physics and the AAPM in CT and tomosynthesis imaging over the last few decades. [ABSTRACT FROM AUTHOR]
- Published
- 2008
6. Highly cited papers in Medical Physics
- Author
- David J. Eaton
- Subjects
- Computer science, Monte Carlo method, Brachytherapy, Computed tomography, General Medicine, Iterative reconstruction, Scintigraphy, Biomagnetism, Radiation therapy, Dosimetry, Medical physics
- Published
- 2014
7. Announcement: Medical Physics publishes first Focus Series of papers
- Author
- William R. Hendee
- Subjects
- Image formation, Focus (computing), Series (mathematics), Computer science, Medical physics, Computed tomography, General Medicine, Tomography, Iterative reconstruction
- Published
- 2011
8. STEDNet: Swin transformer‐based encoder–decoder network for noise reduction in low‐dose CT.
- Author
- Zhu, Linlin, Han, Yu, Xi, Xiaoqi, Fu, Huijuan, Tan, Siyu, Liu, Mengnan, Yang, Shuangzhan, Liu, Chang, Li, Lei, and Yan, Bin
- Subjects
- TRANSFORMER models, IMAGE denoising, COMPUTED tomography, RADIATION exposure, NOISE control, SPATIAL resolution
- Abstract
Background: Low‐dose computed tomography (LDCT) can reduce the dose of X‐ray radiation, making it increasingly significant for routine clinical diagnosis and treatment planning. However, the noise introduced by low‐dose X‐ray exposure degrades the quality of CT images, affecting the accuracy of clinical diagnosis. Purpose: The noises, artifacts, and high‐frequency components are similarly distributed in LDCT images. Transformer can capture global context information in an attentional manner to create distant dependencies on targets and extract more powerful features. In this paper, we reduce the impact of image errors on the ability to retain detailed information and improve the noise suppression performance by fully mining the distribution characteristics of image information. Methods: This paper proposed an LDCT noise and artifact suppressing network based on Swin Transformer. The network includes a noise extraction sub‐network and a noise removal sub‐network. The noise extraction and removal capability are improved using a coarse extraction network of high‐frequency features based on full convolution. The noise removal sub‐network improves the network's ability to extract relevant image features by using a Swin Transformer with a shift window as an encoder–decoder and skip connections for global feature fusion. Also, the perceptual field is extended by extracting multi‐scale features of the images to recover the spatial resolution of the feature maps. The network uses a loss constraint with a combination of L1 and MS‐SSIM to improve and ensure the stability and denoising effect of the network. Results: The denoising ability and clinical applicability of the methods were tested using clinical datasets. Compared with DnCNN, RED‐CNN, CBDNet and TSCN, the STEDNet method shows a better denoising effect on RMSE and PSNR. The STEDNet method effectively removes image noise and preserves the image structure to the maximum extent, making the reconstructed image closest to the NDCT image. The subjective and objective analysis of several sets of experiments shows that the method in this paper can effectively maintain the structure, edges, and textures of the denoised images while having good noise suppression performance. In the real data evaluation, the RMSE of this method is reduced by 18.82%, 15.15%, 2.25%, and 1.10% on average compared with DnCNN, RED‐CNN, CBDNet, and TSCNN, respectively. The average improvement of PSNR is 9.53%, 7.33%, 2.65%, and 3.69%, respectively. Conclusions: This paper proposed a LDCT image denoising algorithm based on end‐to‐end training. The method in this paper can effectively improve the diagnostic performance of CT images by constraining the details of the images and restoring the LDCT image structure. The problem of increased noise and artifacts in CT images can be solved while maintaining the integrity of CT image tissue structure and pathological information. Compared with other algorithms, this method has better denoising effects both quantitatively and qualitatively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
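Note: the abstract describes a training loss that combines L1 with MS-SSIM. A minimal sketch of such a combined loss follows, assuming the MS-SSIM implementation is supplied by the caller; the weighting value is a common literature default, not necessarily the one used by STEDNet.

```python
import torch.nn.functional as F

def l1_msssim_loss(pred, target, ms_ssim_fn, alpha=0.84):
    """Weighted sum of L1 and (1 - MS-SSIM) for image restoration training.

    ms_ssim_fn : caller-supplied callable returning MS-SSIM in [0, 1] for a batch
                 (e.g., from a third-party implementation); not assumed here.
    alpha      : weight of the structural term; 0.84 is a common literature default,
                 not necessarily the value used by STEDNet.
    """
    l1 = F.l1_loss(pred, target)
    structural = 1.0 - ms_ssim_fn(pred, target)
    return alpha * structural + (1.0 - alpha) * l1
```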
9. Detector shifting and deep learning based ring artifact correction method for low‐dose CT.
- Author
- Liu, Yuedong, Wei, Cunfeng, and Xu, Qiong
- Subjects
- DEEP learning, DETECTORS, NOISE control, IMAGE recognition (Computer vision), COMPUTED tomography, RING networks, PHOTON counting, X-rays
- Abstract
Background: In x-ray computed tomography (CT), gain inconsistency between detector units leads to ring artifacts in the reconstructed images, which seriously degrade the image structure and hinder image recognition. In addition, low-dose CT is required to reduce radiation dose and scanning time, especially in photon-counting CT, so it is important to reduce noise and suppress ring artifacts in low-dose CT images simultaneously. Purpose: Deep learning is an effective method to suppress ring artifacts, but residual artifacts remain in corrected images, and noise in low-dose CT images degrades the network's ability to recognize ring-artifact features. In this paper, a method is proposed to achieve noise reduction and ring artifact removal simultaneously. Methods: We propose a ring artifact correction method for low-dose CT based on detector shifting and deep learning. First, at the CT scanning stage, the detector is shifted horizontally by a random offset at each projection as a front-processing step, which transforms the ring artifacts into dispersed noise in the front-processed images. Second, deep learning is used to reduce both the dispersed noise and the statistical noise. Results: Both simulation and real-data experiments were conducted to evaluate the proposed method. The results show that the proposed method removes ring artifacts from low-dose CT images more effectively than competing methods. Specifically, the RMSE and SSIM values of the two sets of simulated and experimental data are significantly better than those of the raw images. Conclusions: The proposed method combines detector shifting and deep learning to remove ring artifacts and statistical noise simultaneously, and achieves better performance than existing approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2023
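Note: the key idea of the front-processing step is that a random horizontal detector shift at each view turns a fixed per-detector gain error (which would otherwise reconstruct as rings) into dispersed, noise-like errors. A flat-field toy illustration of that effect, with invented gain and shift magnitudes, is sketched below.

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, n_dets = 720, 512
gain = 1.0 + 0.02 * rng.standard_normal(n_dets)   # fixed per-detector gain error (2%)

# Conventional scan: every view sees the same gain pattern, so the error forms
# constant sinogram columns, which backproject into rings.
sino_fixed = np.tile(gain, (n_views, 1))

# Detector-shifting scan: the array is moved by a random offset at each view, so a
# given detector's gain error lands in a different sinogram column per view.
shifts = rng.integers(-8, 9, size=n_views)        # illustrative shift range (bins)
sino_shifted = np.stack([np.roll(gain, int(s)) for s in shifts])

# Column-wise systematic deviation: large and structured for the fixed detector,
# strongly reduced (dispersed into noise) for the shifted detector.
print(np.abs(sino_fixed.mean(axis=0) - 1).max(),
      np.abs(sino_shifted.mean(axis=0) - 1).max())
```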
10. Segmentation and volume quantification of epicardial adipose tissue in computed tomography images.
- Author
- Li, Yifan, Song, Shuni, Sun, Yu, Bao, Nan, Yang, Benqiang, and Xu, Lisheng
- Subjects
- DEEP learning, COMPUTED tomography, ADIPOSE tissues, THRESHOLDING algorithms, PEARSON correlation (Statistics)
- Abstract
Background: Many cardiovascular diseases are closely related to the composition of epicardial adipose tissue (EAT). Accurate segmentation of EAT can provide a reliable reference for doctors to diagnose the disease. The distribution and composition of EAT often show significant individual differences, and traditional segmentation methods are not effective. In recent years, deep learning methods have gradually been introduced into the EAT segmentation task. Purpose: The existing EAT segmentation methods based on deep learning require a large amount of computation, and their segmentation accuracy needs to be improved. Therefore, the purpose of this paper is to develop a lightweight EAT segmentation network that can obtain higher segmentation accuracy with less computation and further alleviate the problem of false-positive segmentation. Methods: First, the acquired computed tomography images were preprocessed. That is, the threshold range of EAT was determined to be −190 to −30 HU according to prior knowledge, and non-adipose pixels were excluded by threshold segmentation to reduce the difficulty of training. Second, the thresholded images were input into the lightweight RDU-Net network for training, validation, and testing. RDU-Net uses a residual multi-scale dilated convolution block to extract a wider range of information without changing the current resolution. At the same time, residual connections are adopted to avoid the gradient problems caused by an overly deep network, which also makes learning easier. To optimize the training process, this paper proposes PNDiceLoss, which takes both positive and negative pixels as learning targets, fully considers the class imbalance problem, and appropriately highlights the status of positive pixels. Results: In this paper, 50 CCTA images were randomly selected from the hospital, and the commonly used Dice similarity coefficient (DSC), Jaccard similarity, accuracy (ACC), specificity (SP), precision (PC), and Pearson correlation coefficient were used as evaluation metrics. Bland–Altman analysis shows that the extracted EAT volume is consistent with the actual volume. Compared with existing methods, the segmentation results show that the proposed method achieves better performance on these metrics, with a DSC of 0.9262. The number of false-positive pixels has been reduced by more than half. The Pearson correlation coefficient reached 0.992, and the linear regression coefficient reached 0.977 when measuring the extracted EAT volume. To verify the effectiveness of the proposed method, experiments were also carried out on the cardiac fat database of VisualLab. On this database, the proposed method likewise achieved good results, with a DSC of 0.927 using only 878 slices. Conclusions: A new method to segment and quantify EAT is proposed. Comprehensive experiments show that, compared with several classical segmentation algorithms, the proposed method has the advantages of shorter run time, lower memory requirements, and higher segmentation accuracy. The code is available at https://github.com/lvanlee/EAT_Seg/tree/main/EAT_seg. [ABSTRACT FROM AUTHOR]
- Published
- 2022
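Note: the preprocessing step quoted above keeps only voxels inside the adipose HU window of −190 to −30 HU. A minimal sketch of that thresholding follows; the background fill value and function names are assumptions, not taken from the paper.

```python
import numpy as np

def adipose_mask(ct_hu, lower=-190.0, upper=-30.0):
    """Boolean mask of voxels inside the adipose HU window quoted in the abstract."""
    return (ct_hu >= lower) & (ct_hu <= upper)

def preprocess_for_eat_segmentation(ct_hu, background_hu=-1000.0):
    """Replace non-adipose voxels with an air-like value so the network only has to
    separate epicardial fat from other fat (the fill value is an assumption)."""
    out = np.full(ct_hu.shape, background_hu, dtype=np.float32)
    keep = adipose_mask(ct_hu)
    out[keep] = ct_hu[keep]
    return out
```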
11. Spatial and temporal motion characterization for x‐ray CT.
- Author
- Hsieh, Jiang
- Subjects
- ORGANS (Anatomy), IMAGE reconstruction, INSPECTION & review, CONE beam computed tomography, BLOOD vessels, ACQUISITION of data, X-rays, COMPUTED tomography
- Abstract
Background: Motion induced image artifacts have been the focus of many investigations for x‐ray computed tomography (CT). Methodologies of combating patient motion include the use of gating devices to optimize the data acquisition, reduction in patient scan time via faster gantry rotation and large detector coverage, and the development of advanced reconstruction and post‐processing algorithms to minimize motion artifacts. Purpose: Previously proposed approaches are generally "global" in nature in that motion is characterized for the entire image. It is well known, however, that the presence of motion artifact in a CT image is highly nonuniform. When there is a lack of automated and quantitative local measure indicating the presence and the severity of motion artifacts in a local region, the quality of the reconstructed images depends heavily on the CT operator's rigor and experience. Even when an operator is informed of the presence of motion, little information is provided about the nature of the motion artifact to understand its relevance to the clinical task at hand. In this paper, we propose an image‐space spatial‐ and temporal‐consistency metric (CM) to detect and characterize the local motion. Method: In a non‐rigid human organ, such as the lung, there are many small and rigid objects (target objects), such as blood vessels and nodules, distributed throughout the organ. If motion can be characterized for these target objects, we obtain a complete motion map for the organ. To accomplish this, a preliminary image reconstruction is carried out to identify the target objects and establish region‐of‐interests for consistency‐metric calculation. The CM is then obtained based on the backprojected intensity difference between the object region and its circular background. For a stationary object, the accumulation of this quantity over views is linear. When a target object moves, nonlinear behavior exhibits and a quantitative measure of linearity indicates the severity of motion. Results: Extensive computer simulation was utilized to confirm the validity of the theory. These tests stress the sensitivity of the proposed CM to the target object size, object shape, in‐plane motion, cross‐plane motion, cone‐beam effect, and complex background. Results confirm that the proposed approach is robust under different testing conditions. The proposed CM is further validated using a cardiac scan of a swine, and the proposed CM correlates well with the visual inspection of the artifact in the reconstructed images. Conclusions: In this paper, we have demonstrated the efficacy of the proposed CM for motion detection. Unlike previously proposed approaches where the consistency condition is derived for the entire image or the entire imaging volume, the proposed metric is well localized so that different zones in a patient anatomy can be individually characterized. In addition, the proposed CM provides a quantitative measure on a view‐by‐view basis so that the severity of motion is consistently estimated over time. Such information can be used to optimize the image reconstruction process and minimize the motion artifact. [ABSTRACT FROM AUTHOR]
- Published
- 2024
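Note: the abstract states that the backprojected object-minus-background signal accumulates linearly over views for a stationary object, and that deviation from linearity flags motion. One simple way to turn that statement into a score, an R²-style linearity fit that is not necessarily the paper's exact measure, is sketched below.

```python
import numpy as np

def motion_consistency_score(per_view_diff):
    """Illustrative linearity score for the cumulative object-vs-background signal.

    per_view_diff : 1D array of backprojected intensity differences between a target
                    object region and its circular background, one value per view.
    Returns ~1 for a stationary object (linear accumulation) and smaller values as
    motion makes the accumulation nonlinear.
    """
    views = np.arange(per_view_diff.size)
    cumulative = np.cumsum(per_view_diff)
    slope, intercept = np.polyfit(views, cumulative, 1)
    residuals = cumulative - (slope * views + intercept)
    ss_res = float(np.sum(residuals ** 2))
    ss_tot = float(np.sum((cumulative - cumulative.mean()) ** 2))
    return 1.0 - ss_res / ss_tot if ss_tot > 0 else 1.0
```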
12. Multi‐scale consistent self‐training network for semi‐supervised orbital tumor segmentation.
- Author
- Wang, Keyi, Jin, Kai, Cheng, Zhiming, Liu, Xindi, Wang, Changjun, Guan, Xiaojun, Xu, Xiaojun, Ye, Juan, Wang, Wenyu, and Wang, Shuai
- Subjects
- SUPERVISED learning, COMPUTED tomography, EYE diseases, TUMOR diagnosis, GAUSSIAN mixture models, EYE-socket tumors
- Abstract
Purpose: Segmentation of orbital tumors in CT images is of great significance for orbital tumor diagnosis, which is one of the most prevalent diseases of the eye. However, the large variety of tumor sizes and shapes makes the segmentation task very challenging, especially when the available annotation data is limited. Methods: To this end, in this paper, we propose a multi‐scale consistent self‐training network (MSCINet) for semi‐supervised orbital tumor segmentation. Specifically, we exploit the semantic‐invariance features by enforcing the consistency between the predictions of different scales of the same image to make the model more robust to size variation. Moreover, we incorporate a new self‐training strategy, which adopts iterative training with an uncertainty filtering mechanism to filter the pseudo‐labels generated by the model, to eliminate the accumulation of pseudo‐label error predictions and increase the generalization of the model. Results: For evaluation, we have built two datasets, the orbital tumor binary segmentation dataset (Orbtum‐B) and the orbital multi‐organ segmentation dataset (Orbtum‐M). Experimental results on these two datasets show that our proposed method can both achieve state‐of‐the‐art performance. In our datasets, there are a total of 55 patients containing 602 2D images. Conclusion: In this paper, we develop a new semi‐supervised segmentation method for orbital tumors, which is designed for the characteristics of orbital tumors and exhibits excellent performance compared to previous semi‐supervised algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
13. Highly cited papers in Medical Physics.
- Author
- Eaton, David J.
- Subjects
- MEDICAL physics, PERIODICAL publishing, RADIOISOTOPE brachytherapy, PERIODICAL articles, NUCLEAR magnetic resonance spectroscopy
- Published
- 2014
14. Spectral CT image reconstruction using a constrained optimization approach—An algorithm for AAPM 2022 spectral CT grand challenge and beyond.
- Author
- Hu, Xiaoyu and Jia, Xun
- Subjects
- IMAGE reconstruction, CONSTRAINED optimization, SPECTRAL imaging, COMPUTED tomography, STANDARD deviations, ALGORITHMS
- Abstract
Background: CT reconstruction is of essential importance in medical imaging. In 2022, the American Association of Physicists in Medicine (AAPM) sponsored a Grand Challenge to investigate the challenging inverse problem of spectral CT reconstruction, with the aim of achieving the most accurate reconstruction results. The authors of this paper participated in the challenge and won as a runner-up team. Purpose: This paper reports details of our PROSPECT algorithm (Prior-based Restricted-variable Optimization for SPEctral CT) and follow-up studies regarding the algorithm's accuracy and enhancement of its convergence speed. Methods: We formulated the reconstruction task as an optimization problem. PROSPECT employed a one-step backward iterative scheme to solve this optimization problem by allowing estimation of and correction for the difference between the actual polychromatic projection model and the monochromatic model used in the optimization problem. PROSPECT incorporated various forms of prior information derived by analyzing training data provided by the Grand Challenge to reduce the number of unknown variables. We investigated the impact of projection data precision on the resulting solution accuracy and improved the convergence speed of the PROSPECT algorithm by incorporating a beam-hardening correction (BHC) step in the iterative process. We also studied the algorithm's performance under noisy projection data. Results: Prior knowledge allowed a reduction of the number of unknown variables by 85.9%. The PROSPECT algorithm achieved an average root-mean-square error (RMSE) of 3.3 × 10⁻⁶ in the test data set provided by the Grand Challenge. Performing the reconstruction with the same algorithm but using double-precision projection data reduced the RMSE to 1.2 × 10⁻¹¹. Including the BHC step in the PROSPECT algorithm accelerated the iteration process with a 40% reduction in computation time. Conclusions: The PROSPECT algorithm achieved a high degree of accuracy and computational efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
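Note: PROSPECT's one-step correction rests on the gap between the polychromatic projection model and the monochromatic model used inside the optimization. A generic sketch of the two forward models for a single ray is shown below; the spectrum and attenuation inputs are placeholders, not the challenge's actual system specification.

```python
import numpy as np

def polychromatic_transmission(path_lengths, spectrum, mass_atten):
    """Idealized polychromatic forward model for a single ray (noiseless).

    path_lengths : (n_materials,) basis-material line integrals along the ray
    spectrum     : (n_energies,) normalized fluence S(E), summing to 1
    mass_atten   : (n_energies, n_materials) attenuation coefficients mu_m(E)
    The measured projection value is -log of this transmission.
    """
    per_energy_attenuation = mass_atten @ path_lengths
    return float(np.sum(spectrum * np.exp(-per_energy_attenuation)))

def monochromatic_transmission(path_lengths, mass_atten_at_e0):
    """Single-energy approximation of the same ray, as used inside the optimization;
    the gap between the two models is what a beam-hardening correction estimates."""
    return float(np.exp(-(mass_atten_at_e0 @ path_lengths)))
```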
15. Few‐shot segmentation framework for lung nodules via an optimized active contour model.
- Author
- Yang, Lin, Shao, Dan, Huang, Zhenxing, Geng, Mengxiao, Zhang, Na, Chen, Long, Wang, Xi, Liang, Dong, Pang, Zhi-Feng, and Hu, Zhanli
- Subjects
- ARTIFICIAL neural networks, PULMONARY nodules, NONSMOOTH optimization, DEEP learning, ACTIVE learning
- Abstract
Background: Accurate segmentation of lung nodules is crucial for the early diagnosis and treatment of lung cancer in clinical practice. However, the similarity between lung nodules and surrounding tissues has made their segmentation a longstanding challenge. Purpose: Existing deep learning and active contour models each have their limitations. This paper aims to integrate the strengths of both approaches while mitigating their respective shortcomings. Methods: In this paper, we propose a few‐shot segmentation framework that combines a deep neural network with an active contour model. We introduce heat kernel convolutions and high‐order total variation into the active contour model and solve the challenging nonsmooth optimization problem using the alternating direction method of multipliers. Additionally, we use the presegmentation results obtained from training a deep neural network on a small sample set as the initial contours for our optimized active contour model, addressing the difficulty of manually setting the initial contours. Results: We compared our proposed method with state‐of‐the‐art methods for segmentation effectiveness using clinical computed tomography (CT) images acquired from two different hospitals and the publicly available LIDC dataset. The results demonstrate that our proposed method achieved outstanding segmentation performance according to both visual and quantitative indicators. Conclusion: Our approach utilizes the output of few‐shot network training as prior information, avoiding the need to select the initial contour in the active contour model. Additionally, it provides mathematical interpretability to the deep learning, reducing its dependency on the quantity of training samples. [ABSTRACT FROM AUTHOR]
- Published
- 2024
16. Low‐dose CT denoising with a high‐level feature refinement and dynamic convolution network.
- Author
- Yang, Sihan, Pu, Qiang, Lei, Chunting, Zhang, Qiao, Jeon, Seunggil, and Yang, Xiaomin
- Subjects
- COMPUTED tomography, DEEP learning, SIGNAL-to-noise ratio, DIAGNOSIS, TOMOGRAPHY
- Abstract
Background: Given the potential health risks of the radiation generated by computed tomography (CT), concerns have been raised about reducing the radiation dose. However, low-dose CT (LDCT) images contain complex noise and artifacts, bringing uncertainty to medical diagnosis. Purpose: Existing deep learning (DL)-based denoising methods find it difficult to fully exploit hierarchical features of different levels, limiting the denoising effect. Moreover, the standard convolution kernel shares its parameters and cannot be adjusted dynamically as the input changes. This paper proposes an LDCT denoising network using high-level feature refinement and multiscale dynamic convolution to mitigate these problems. Methods: The dual network structure proposed in this paper consists of the feature refinement network (FRN) and the dynamic perception network (DPN). The FRN extracts features of different levels through residual dense connections. The high-level hierarchical information is transmitted to the DPN to improve the low-level representations. In the DPN, the two networks' features are fused by local channel attention (LCA) to assign weights to different regions and better handle the delicate tissues in CT images. Then, a dynamic dilated convolution (DDC) with multibranch and multiscale receptive fields is proposed to enhance the expression and processing ability of the denoising network. The network uses a loss constraint combining L1 and MS-SSIM to improve and ensure the stability and denoising effect of the network. The model was trained and tested on the dataset "NIH-AAPM-Mayo Clinic Low-Dose CT Grand Challenge," consisting of 10 anonymous patients with normal-dose abdominal CT and LDCT at 25% dose. In addition, external validation was performed on the dataset "Low Dose CT Image and Projection Data," which included 300 chest CT images at 10% dose and 300 head CT images at 25% dose. Results: The proposed method was compared with seven mainstream LDCT denoising algorithms. On the Mayo dataset, it achieved a peak signal-to-noise ratio (PSNR) of 46.3526 dB (95% CI: 46.0121–46.6931 dB) and a structural similarity (SSIM) of 0.9844 (95% CI: 0.9834–0.9854). Compared with LDCT, the average increase was 3.4159 dB and 0.0239, respectively. The results are relatively optimal and statistically significant compared with other methods. In the external verification, our algorithm copes well with ultra-low-dose chest CT images at 10% dose, obtaining a PSNR of 28.6130 dB (95% CI: 28.1680–29.0580 dB) and an SSIM of 0.7201 (95% CI: 0.7101–0.7301). Compared with LDCT, PSNR and SSIM are increased by 3.6536 dB and 0.2132, respectively. In addition, the quality of LDCT can also be improved in head CT denoising. Conclusions: This paper proposes a DL-based LDCT denoising algorithm that utilizes high-level features and multiscale dynamic convolution to optimize the network's denoising effect. The method can realize speedy denoising and performs well in noise suppression and detail preservation, which can be helpful for diagnosis based on LDCT. [ABSTRACT FROM AUTHOR]
- Published
- 2023
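Note: the quantitative comparison above is reported in PSNR (dB). As a reminder of how that figure is computed from a reference/test image pair, a short helper follows; the choice of data range is an assumption that must be kept identical across compared methods.

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB.

    data_range defaults to the dynamic range of the reference; for CT in HU a fixed
    window is often used instead, and that choice changes the absolute dB values.
    """
    ref = reference.astype(np.float64)
    mse = np.mean((ref - test.astype(np.float64)) ** 2)
    if data_range is None:
        data_range = ref.max() - ref.min()
    return 10.0 * np.log10(data_range ** 2 / mse)
```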
17. Likelihood‐based bilateral filters for pre‐estimated basis sinograms using photon‐counting CT.
- Author
- Lee, Okkyun
- Subjects
- COMPUTED tomography, SPATIAL resolution, INFERENTIAL statistics
- Abstract
Background: Noise amplification in material decomposition is an issue for exploiting photon‐counting computed tomography (PCCT). Regularization techniques and neighborhood filters have been widely used, but degraded spatial resolution and bias are concerns. Purpose: This paper proposes likelihood‐based bilateral filters that can be applied to pre‐estimated basis sinograms to reduce the noise while minimally affecting spatial resolution and accuracy. Methods: The proposed method needs system models (e.g., incident spectrum, detector response) to calculate the likelihood. First, it performs maximum likelihood (ML)‐based estimation in the projection domain to obtain basis sinograms. The estimated basis sinograms suffer from severe noise but are asymptotically unbiased without degrading spatial resolution. Then it calculates the neighborhood likelihoods for a given measurement at the center pixel using the neighborhood estimates and designs the weights based on the distance of likelihoods. It is also analyzed in terms of statistical inference, and then two variations of the filter are introduced: one that requires the significance level instead of the empirical hyperparameter. The other is a measurement‐based filter, which can be applied when accurate estimates are given without the system models. The proposed methods were validated by analyzing the local property of noise and spatial resolution and the global trend of noise and bias using numerical thorax and abdominal phantoms for a two‐material decomposition (water and bone). They were compared to the conventional neighborhood filters and the model‐based iterative reconstruction with an edge‐preserving penalty applied in the basis images. Results: The proposed method showed comparable or superior performance for the local and global properties to conventional methods in many cases. The thorax phantom: The full width at half maximum (FWHM) decreased by −2%–31% (−2 indicates that it increased by 2% compared to the best performance from conventional methods), and the global bias was reduced by 2%–19% compared to other methods for similar noise levels (local: 51% of the ML, global: 49%) in the water basis image. The FWHM decreased by 8%–31%, and the global bias was reduced by 9%–44% for similar noise levels (local: 44% of the ML, global: 36%) in the CT image at 65 keV. The abdominal phantom: The FWHM decreased by 10%–32%, and the global bias was reduced by 3%–35% compared to other methods for similar noise levels (local: 66% of the ML, global: 67%) in the water basis image. The FWHM decreased by up to −11%–47%, and the global bias was reduced by 13%–35% for similar noise levels (local: 71% of the ML, global: 70%) in the CT image at 60 keV. Conclusions: This paper introduced the likelihood‐based bilateral filters as a post‐processing method applied to the ML‐based estimates of basis sinograms. The proposed filters effectively reduced the noise in the basis images and the synthesized monochromatic CT images. It showed the potential of using likelihood‐based filters in the projection domain as a substitute for conventional regularization or filtering methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
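Note: the filter design replaces the usual intensity-difference range kernel of a bilateral filter with a distance between likelihoods computed from neighborhood estimates. A schematic sketch of such a weight computation, under assumed Gaussian kernels (the paper's exact weight design may differ), is given below.

```python
import numpy as np

def likelihood_bilateral_weights(loglik_center, loglik_neighbors, offsets,
                                 sigma_spatial, sigma_likelihood):
    """Schematic bilateral weights whose range term compares likelihoods.

    loglik_center    : log-likelihood of the center measurement under the center estimate
    loglik_neighbors : (n,) log-likelihoods of the same measurement under neighbor estimates
    offsets          : (n, 2) spatial offsets of the neighbors from the center pixel
    """
    spatial = np.exp(-np.sum(offsets ** 2, axis=1) / (2.0 * sigma_spatial ** 2))
    likelihood_distance = loglik_center - loglik_neighbors
    range_term = np.exp(-(likelihood_distance ** 2) / (2.0 * sigma_likelihood ** 2))
    weights = spatial * range_term
    return weights / weights.sum()
```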
18. Automated lung tumor delineation on positron emission tomography/computed tomography via a hybrid regional network.
- Author
- Lei, Yang, Wang, Tonghe, Jeong, Jiwoong J., Janopaul-Naylor, James, Kesarwala, Aparna H., Roper, Justin, Tian, Sibo, Bradley, Jeffrey D., Liu, Tian, Higgins, Kristin, and Yang, Xiaofeng
- Subjects
- POSITRON emission tomography, COMPUTED tomography, LUNG tumors, NON-small-cell lung carcinoma, PEARSON correlation (Statistics), LUNGS
- Abstract
Background: Multimodality positron emission tomography/computed tomography (PET/CT) imaging combines the anatomical information of CT with the functional information of PET. In the diagnosis and treatment of many cancers, such as non‐small cell lung cancer (NSCLC), PET/CT imaging allows more accurate delineation of tumor or involved lymph nodes for radiation planning. Purpose: In this paper, we propose a hybrid regional network method of automatically segmenting lung tumors from PET/CT images. Methods: The hybrid regional network architecture synthesizes the functional and anatomical information from the two image modalities, whereas the mask regional convolutional neural network (R‐CNN) and scoring fine‐tune the regional location and quality of the output segmentation. This model consists of five major subnetworks, that is, a dual feature representation network (DFRN), a regional proposal network (RPN), a specific tumor‐wise R‐CNN, a mask‐Net, and a score head. Given a PET/CT image as inputs, the DFRN extracts feature maps from the PET and CT images. Then, the RPN and R‐CNN work together to localize lung tumors and reduce the image size and feature map size by removing irrelevant regions. The mask‐Net is used to segment tumor within a volume‐of‐interest (VOI) with a score head evaluating the segmentation performed by the mask‐Net. Finally, the segmented tumor within the VOI was mapped back to the volumetric coordinate system based on the location information derived via the RPN and R‐CNN. We trained, validated, and tested the proposed neural network using 100 PET/CT images of patients with NSCLC. A fivefold cross‐validation study was performed. The segmentation was evaluated with two indicators: (1) multiple metrics, including the Dice similarity coefficient, Jacard, 95th percentile Hausdorff distance, mean surface distance (MSD), residual mean square distance, and center‐of‐mass distance; (2) Bland–Altman analysis and volumetric Pearson correlation analysis. Results: In fivefold cross‐validation, this method achieved Dice and MSD of 0.84 ± 0.15 and 1.38 ± 2.2 mm, respectively. A new PET/CT can be segmented in 1 s by this model. External validation on The Cancer Imaging Archive dataset (63 PET/CT images) indicates that the proposed model has superior performance compared to other methods. Conclusion: The proposed method shows great promise to automatically delineate NSCLC tumors on PET/CT images, thereby allowing for a more streamlined clinical workflow that is faster and reduces physician effort. [ABSTRACT FROM AUTHOR]
- Published
- 2023
19. Recognition of honeycomb lung in CT images based on improved MobileNet model.
- Author
- Gang, Li, Haixuan, Zhang, Linning, E, Ling, Zhang, Yu, Li, and Juming, Zhao
- Subjects
- COMPUTED tomography, DEEP learning, PROBLEM solving, LUNGS
- Abstract
Purpose: The purpose of this research is to improve the efficiency and accuracy of recognizing honeycomb lung in CT images. Methods: Deep learning methods can achieve automatic recognition of honeycomb lung in CT images; however, they tend to be time consuming and less accurate due to the large number of structural parameters. In this paper, a novel recognition method based on the MobileNetV1 network, a multiscale feature fusion method (MSFF), and dilated convolution is explored for honeycomb lung CT image classification. Firstly, dilated convolutions with different dilation rates are used to extract features and obtain receptive fields of different sizes, and the multiscale feature fusion block then fuses the features of different scales to address feature loss and incomplete feature extraction. After that, linear activation functions (Sigmoid) are used instead of nonlinear activation functions (ReLU) in the improved depthwise separable convolution blocks to retain the feature information of each channel. Finally, the number of improved depthwise separable blocks is reduced to lower the computation and resource consumption of the model. Results: The experimental results show that the improved MobileNet model has the best performance and strong potential on the honeycomb lung image dataset, which includes 6318 images. Compared with 4 traditional models (SVM, RF, decision tree, and KNN) and 11 deep learning models (LeNet-5, AlexNet, VGG-16, GoogleNet, ResNet18, DenseNet121, SENet18, InceptionV3, InceptionV4, Xception, and MobileNetV1), our model achieved an accuracy of 99.52%, a sensitivity of 99.35%, and a specificity of 99.89%. Conclusion: The improved MobileNet model is designed for the automatic recognition and classification of honeycomb lung in CT images. Comparative experiments against other machine learning and deep learning models show that the proposed improved MobileNet method achieves the best recognition accuracy with fewer model parameters and less computation time. [ABSTRACT FROM AUTHOR]
- Published
- 2021
20. Streak artefact removal in x‐ray dark‐field computed tomography using a convolutional neural network.
- Author
- Kumschier, Tom, Thalhammer, Johannes, Schmid, Clemens, Haeusele, Jakob, Koehler, Thomas, Pfeiffer, Franz, Lasser, Tobias, and Schaff, Florian
- Subjects
- CONVOLUTIONAL neural networks, COMPUTED tomography, SMALL-angle X-ray scattering, OPTICAL gratings, RADIOLOGY, SMALL-angle scattering, X-rays
- Abstract
Background: Computed tomography (CT) relies on the attenuation of x-rays and is, hence, of limited use for weakly attenuating organs of the body, such as the lung. X-ray dark-field (DF) imaging is a recently developed technology that utilizes x-ray optical gratings to enable small-angle scattering as an alternative contrast mechanism. The DF signal provides structural information about the micromorphology of an object, complementary to the conventional attenuation signal. A first human-scale x-ray DF CT has been developed by our group. Despite specialized processing algorithms, reconstructed images remain affected by streaking artifacts, which often hinder image interpretation. In recent years, convolutional neural networks have gained popularity in the field of CT reconstruction, amongst others for streak artefact removal. Purpose: Reducing streak artifacts is essential for the optimization of image quality in DF CT, and artefact-free images are a prerequisite for potential future clinical application. The purpose of this paper is to demonstrate the feasibility of CNN post-processing for artefact reduction in x-ray DF CT and how multi-rotation scans can serve as a pathway for training data. Methods: We employed a supervised deep-learning approach using a three-dimensional dual-frame UNet in order to remove streak artifacts. Required training data were obtained from the experimental x-ray DF CT prototype at our institute. Two different operating modes were used to generate input and corresponding ground truth data sets. Clinically relevant scans at dose-compatible radiation levels were used as input data, and extended scans with substantially fewer artifacts were used as ground truth data. The latter is neither dose- nor time-compatible and, therefore, unfeasible for clinical imaging of patients. Results: The trained CNN was able to greatly reduce streak artifacts in DF CT images. The network was tested against images with entirely different, previously unseen image characteristics. In all cases, CNN processing substantially increased the image quality, which was quantitatively confirmed by increased image quality metrics. Fine details are preserved during processing, despite the output images appearing smoother than the ground truth images. Conclusions: Our results showcase the potential of a neural network to reduce streak artifacts in x-ray DF CT. The image quality is successfully enhanced in dose-compatible x-ray DF CT, which plays an essential role for the adoption of x-ray DF CT into modern clinical radiology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
21. A nonlinear scaling‐based normalized metal artifact reduction to reduce low‐frequency artifacts in energy‐integrating and photon‐counting CT.
- Author
- Anhaus, Julian A., Killermann, Philipp, Mahnken, Andreas H., and Hofmann, Christian
- Subjects
- COMPUTED tomography, IMAGE reconstruction, TRACE metals, TOTAL hip replacement, IMAGE intensifiers, METALS
- Abstract
Background: Metal within the scan plane can cause severe artifacts when reconstructing X‐ray computed tomography (CT) scans. Both in clinical use and recent research, normalized metal artifact reduction (NMAR) has established as the reference method for correcting metal artifacts, but NMAR introduces inconsistencies within the sinogram, which can cause additional low‐frequency artifacts after image reconstruction. Purpose: This paper introduces an extension to NMAR by applying a nonlinear scaling function (NLS‐NMAR) to reduce low‐frequency artifacts, which get introduced by the reconstruction of interpolation‐edge‐related sinogram inconsistencies in the normalized sinogram domain. Methods: After linear interpolation of the metal trace, an NLS function is applied in the prior‐normalized sinogram domain to reduce the impact of the interpolation edges during filtered backprojection. After sinogram denormalization and image reconstruction, the low frequencies of the NLS image are combined with different high frequencies to restore anatomic details. An anthropomorphic dental phantom with removable metal inserts was utilized on two different CT systems to quantitatively assess the artifact reduction performance in terms of HU deviations and the root‐mean‐square‐error within relevant regions of interest. Clinical dental examples were assessed to qualitatively demonstrate the problem of the interpolation‐related blooming as well as to demonstrate the performance of the NLS function to reduce respective artifacts. To quantitatively prove HU consistency, HU values were assessed in central ROIs in the clinical cases. In addition, single clinical cases of a hip replacement and pedicle screws in the spine are shown to demonstrate the method's results in other body regions. Results: The NLS‐NMAR can minimize the effect of interpolation‐related sinogram inconsistencies and thus reduce resulting hyperdense blooming artifacts. In the phantom results, the reconstructions with the NLS‐NMAR‐corrected low frequencies demonstrate the lowest error. In the qualitative assessment of the clinical data, the NLS‐NMAR shows a tremendous enhancement in image quality, also performing best within all assessed images series. Conclusion: The NLS‐NMAR provides a small yet effective extension to conventional NMAR by reducing low‐frequency hyperdense metal trace‐interpolation‐related artifacts in computed tomography. [ABSTRACT FROM AUTHOR]
- Published
- 2023
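Note: for orientation, the NMAR pipeline that NLS-NMAR extends normalizes the sinogram by a prior sinogram, interpolates across the metal trace, and denormalizes before reconstruction; the proposed method inserts a nonlinear scaling step in the normalized domain. A sketch with the scaling function left as a placeholder (its actual form is defined in the paper and not reproduced here):

```python
import numpy as np

def nmar_sinogram(sino, prior_sino, metal_trace, scale_fn=None, eps=1e-6):
    """Sketch of normalized metal artifact reduction (NMAR) with an optional
    nonlinear scaling step in the normalized domain.

    sino        : measured sinogram (views x detector bins)
    prior_sino  : forward projection of a prior image, same shape
    metal_trace : boolean mask of bins intersecting metal
    scale_fn    : placeholder for the nonlinear scaling function of NLS-NMAR
    """
    norm = sino / np.maximum(prior_sino, eps)        # normalization flattens the sinogram
    cols = np.arange(norm.shape[1])
    for v in range(norm.shape[0]):                   # linear interpolation across the trace
        bad = metal_trace[v]
        if bad.any() and (~bad).any():
            norm[v, bad] = np.interp(cols[bad], cols[~bad], norm[v, ~bad])
    if scale_fn is not None:
        norm = scale_fn(norm, metal_trace)           # damp interpolation-edge inconsistencies
    return norm * prior_sino                         # denormalize before reconstruction
```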
22. Pulmonary arteries segmentation from CT images using PA‐Net with attention module and contour loss.
- Author
- Yuan, Chengyan, Song, Shuni, Yang, Jinzhong, Sun, Yu, Yang, Benqiang, and Xu, Lisheng
- Subjects
- COMPUTED tomography, PULMONARY artery diseases, IMAGE segmentation, PULMONARY embolism, COMPUTER-assisted image analysis (Medicine)
- Abstract
Background: Pulmonary embolism is a kind of cardiovascular disease that threatens human life and health. Since pulmonary embolism exists in the pulmonary artery, improving the segmentation accuracy of pulmonary artery is the key to the diagnosis of pulmonary embolism. Traditional medical image segmentation methods have limited effectiveness in pulmonary artery segmentation. In recent years, deep learning methods have been gradually adopted to solve complex problems in the field of medical image segmentation. Purpose: Due to the irregular shape of the pulmonary artery and the adjacent‐complex tissues, the accuracy of the existing pulmonary artery segmentation methods based on deep learning needs to be improved. Therefore, the purpose of this paper is to develop a segmentation network, which can obtain higher segmentation accuracy and further improve the diagnosis effect. Methods: In this study, the pulmonary artery segmentation performance from the network model and loss function is improved, proposing a pulmonary artery segmentation network (PA‐Net) to segment the pulmonary artery region from 2D CT images. Reverse Attention and edge attention are used to enhance the expression ability of the boundary. In addition, to better use feature information, the channel attention module is introduced in the decoder to highlight the important channel features and suppress the unimportant channels. Due to blurred boundaries, pixels near the boundaries of the pulmonary artery may be difficult to segment. Therefore, a new contour loss function based on the active contour model is proposed in this study to segment the target region by assigning dynamic weights to false positive and false negative regions and accurately predict the boundary structure. Results: The experimental results show that the segmentation accuracy of this proposed method is significantly improved in comparison with state‐of‐the‐art segmentation methods, and the Dice coefficient is 0.938 ± 0.035, which is also confirmed from the 3D reconstruction results. Conclusions: Our proposed method can accurately segment pulmonary artery structure. This new development will provide the possibility for further rapid diagnosis of pulmonary artery diseases such as pulmonary embolism. Code is available at https://github.com/Yuanyan19/PA‐Net. [ABSTRACT FROM AUTHOR]
- Published
- 2023
23. SynthRAD2023 Grand Challenge dataset: Generating synthetic CT for radiotherapy.
- Author
- Thummerer, Adrian, van der Bijl, Erik, Galapon, Arthur, Verhoeff, Joost J. C., Langendijk, Johannes A., Both, Stefan, van den Berg, Cornelis A. T., and Maspero, Matteo
- Subjects
- MAGNETIC resonance imaging, COMPUTED tomography, DIAGNOSTIC imaging, ACADEMIC medical centers, RADIOTHERAPY, CONE beam computed tomography
- Abstract
Purpose: Medical imaging has become increasingly important in diagnosing and treating oncological patients, particularly in radiotherapy. Recent advances in synthetic computed tomography (sCT) generation have increased interest in public challenges to provide data and evaluation metrics for comparing different approaches openly. This paper describes a dataset of brain and pelvis computed tomography (CT) images with rigidly registered cone‐beam CT (CBCT) and magnetic resonance imaging (MRI) images to facilitate the development and evaluation of sCT generation for radiotherapy planning. Acquisition and Validation Methods: The dataset consists of CT, CBCT, and MRI of 540 brains and 540 pelvic radiotherapy patients from three Dutch university medical centers. Subjects' ages ranged from 3 to 93 years, with a mean age of 60. Various scanner models and acquisition settings were used across patients from the three data‐providing centers. Details are available in a comma separated value files provided with the datasets. Data Format and Usage Notes: The data is available on Zenodo (https://doi.org/10.5281/zenodo.7260704, https://doi.org/10.5281/zenodo.7868168) under the SynthRAD2023 collection. The images for each subject are available in nifti format. Potential Applications: This dataset will enable the evaluation and development of image synthesis algorithms for radiotherapy purposes on a realistic multi‐center dataset with varying acquisition protocols. Synthetic CT generation has numerous applications in radiation therapy, including diagnosis, treatment planning, treatment monitoring, and surgical planning. [ABSTRACT FROM AUTHOR]
- Published
- 2023
24. A database of 40 patient‐based computational models for benchmarking organ dose estimates in CT.
- Author
- Samei, Ehsan, Ria, Francesco, Tian, Xiaoyu, and Segars, Paul W.
- Subjects
- RADIATION dosimetry, COMPUTED tomography, CHEST examination, MONTE Carlo method, DATABASES
- Abstract
Purpose: Patient radiation burden in computed tomography (CT) can best be characterized through risk estimates derived from organ doses. Organ doses can be estimated by Monte Carlo simulations of the CT procedures on computational phantoms assumed to emulate the patients. However, the results are subject to uncertainties related to how accurately the patient and CT procedure are modeled, and different methods can lead to different results. This paper, based on decades of organ dosimetry research, offers a database of CT scans, scan specifics, and organ doses computed using a validated Monte Carlo simulation of each patient and acquisition. The database is intended to serve as a benchmark dataset against which different organ dose estimation methods can be compared. Acquisition and validation methods: Organ doses were estimated for 40 adult patients (22 male, 18 female) who underwent chest and abdominopelvic CT examinations. Patient-based computational models were created for each patient, including 26 organs for female and 25 organs for male cases. A Monte Carlo code, previously validated experimentally, was applied to calculate organ doses under constant and two modulated tube current conditions. Data format and usage notes: The generated database reports organ dose values for chest and abdominopelvic examinations per patient and imaging condition. Patient information and images and scan specifications (energy spectrum, bowtie filter specification, and tube current profiles) are provided. The database is available at publicly accessible digital repositories. Potential applications: Consistency in patient risk estimation, and the associated justification and optimization, requires accuracy and consistency in organ dose estimation. The database provided in this paper is a helpful tool for benchmarking different organ dose estimation methodologies to facilitate comparisons, assess uncertainties, and improve risk assessment of CT scans based on organ dose. [ABSTRACT FROM AUTHOR]
- Published
- 2020
25. Technical Note: Development of a 3D printed subresolution sandwich phantom for validation of brain SPECT analysis.
- Author
- Negus, Ian S., Holmes, Robin B., Jordan, Kirsty C., Nash, David A., Thorne, Gareth C., and Saunders, Margaret
- Subjects
- IMAGING phantoms, THREE-dimensional printing, MOLECULAR diagnosis, IMAGE reconstruction, BRAIN imaging
- Abstract
Purpose: To make an adaptable, head shaped radionuclide phantom to simulate molecular imaging of the brain using clinical acquisition and reconstruction protocols. This will allow the characterization and correction of scanner characteristics, and improve the accuracy of clinical image analysis, including the application of databases of normal subjects. Methods: A fused deposition modeling 3D printer was used to create a head shaped phantom made up of transaxial slabs, derived from a simulated MRI dataset. The attenuation of the printed polylactide (PLA), measured by means of the Hounsfield unit on CT scanning, was set to match that of the brain by adjusting the proportion of plastic filament and air (fill ratio). Transmission measurements were made to verify the attenuation of the printed slabs. The radionuclide distribution within the phantom was created by adding 99mTc pertechnetate to the ink cartridge of a paper printer and printing images of gray and white matter anatomy, segmented from the same MRI data. The complete subresolution sandwich phantom was assembled from alternate 3D printed slabs and radioactive paper sheets, and then imaged on a dual headed gamma camera to simulate an HMPAO SPECT scan. Results: Reconstructions of phantom scans successfully used automated ellipse fitting to apply attenuation correction. This removed the variability inherent in manual application of attenuation correction and registration inherent in existing cylindrical phantom designs. The resulting images were assessed visually and by count profiles and found to be similar to those from an existing elliptical PMMA phantom. Conclusions: The authors have demonstrated the ability to create physically realistic HMPAO SPECT simulations using a novel head-shaped 3D printed subresolution sandwich method phantom. The phantom can be used to validate all neurological SPECT imaging applications. A simple modification of the phantom design to use thinner slabs would make it suitable for use in PET. [ABSTRACT FROM AUTHOR]
- Published
- 2016
26. Adapting low‐dose CT denoisers for texture preservation using zero‐shot local noise‐level matching.
- Author
- Ko, Youngjun, Song, Seongjong, Baek, Jongduk, and Shim, Hyunjung
- Subjects
- IMAGE denoising, COMPUTED tomography, SUPERCONDUCTING quantum interference devices, DEEP learning, RADIOLOGISTS
- Abstract
Background: On enhancing the image quality of low‐dose computed tomography (LDCT), various denoising methods have achieved meaningful improvements. However, they commonly produce over‐smoothed results; the denoised images tend to be more blurred than the normal‐dose targets (NDCTs). Furthermore, many recent denoising methods employ deep learning(DL)‐based models, which require a vast amount of CT images (or image pairs). Purpose: Our goal is to address the problem of over‐smoothed results and design an algorithm that works regardless of the need for a large amount of training dataset to achieve plausible denoising results. Over‐smoothed images negatively affect the diagnosis and treatment since radiologists had developed clinical experiences with NDCT. Besides, a large‐scale training dataset is often not available in clinical situations. To overcome these limitations, we propose locally‐adaptive noise‐level matching (LANCH), emphasizing the output should retain the same noise‐level and characteristics to that of the NDCT without additional training. Methods: We represent the NDCT image as the pixel‐wisely weighted sum of an over‐smoothed output from off‐the‐shelf denoiser (OSD) and the difference between the LDCT image and the OSD output. Herein, LANCH determines a 2D ratio map (i.e., pixel‐wise weight matrix) by locally matching the noise‐level of output and NDCT, where the LDCT‐to‐NDCT device flux (mAs) ratio reveals the NDCT noise‐level. Thereby, LANCH can preserve important details in LDCT, and enhance the sharpness of the noise‐free regions. Note that LANCH can enhance any LDCT denoisers without additional training data (i.e., zero‐shot). Results: The proposed method is applicable to any OSD denoisers, reporting significant texture plausibility development over the baseline denoisers in quantitative and qualitative manners. It is surprising that the denoising accuracy achieved by our method with zero‐shot denoiser was comparable or superior to that of the best training‐based denoisers; our result showed 1% and 33% gains in terms of SSIM and DISTS, respectively. Reader study with experienced radiologists shows significant image quality improvements, a gain of + 1.18 on a five‐point mean opinion score scale. Conclusions: In this paper, we propose a technique to enhance any low‐dose CT denoiser by leveraging the fundamental physical relationship between the x‐ray flux and noise variance. Our method is capable of operating in a zero‐shot condition, which means that only a single low‐dose CT image is required for the enhancement process. We demonstrate that our approach is comparable or even superior to supervised DL‐based denoisers that are trained using numerous CT images. Extensive experiments illustrate that our method consistently improves the performance of all tested LDCT denoisers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Deep learning‐based harmonization of trabecular bone microstructures between high‐ and low‐resolution CT imaging.
- Author
-
Guha, Indranil, Nadeem, Syed Ahmed, Zhang, Xiaoliu, DiCamillo, Paul A., Levy, Steven M., Wang, Ge, and Saha, Punam K.
- Subjects
- *
CANCELLOUS bone , *COMPUTED tomography , *GENERATIVE adversarial networks , *BONE density , *TRAINING of volunteers , *VOLUNTEER recruitment , *DEEP learning , *OPTICAL scanners - Abstract
Background: Osteoporosis is a bone disease related to increased bone loss and fracture risk. The variability in bone strength is partially explained by bone mineral density (BMD), and the remainder is contributed by bone microstructure. Recently, clinical CT has emerged as a viable option for in vivo bone microstructural imaging. Wide variations in spatial resolution and other imaging features among different CT scanners add inconsistency to derived bone microstructural metrics, urging the need for harmonization of image data from different scanners. Purpose: This paper presents a new deep learning (DL) method for the harmonization of bone microstructural images derived from low- and high-resolution CT scanners and evaluates the method's performance at the levels of image data as well as derived microstructural metrics. Methods: We generalized a three-dimensional (3D) version of GAN-CIRCLE that applies two generative adversarial networks (GANs) constrained by the identical, residual, and cycle learning ensemble (CIRCLE). Two GAN modules simultaneously learn to map low-resolution CT (LRCT) to high-resolution CT (HRCT) and vice versa. Twenty volunteers were recruited. LRCT and HRCT scans of the distal tibia of their left legs were acquired. Five hundred pairs of LRCT and HRCT image blocks of 64 × 64 × 64 voxels were sampled for each of the twelve volunteers and used for training in supervised as well as unsupervised setups. LRCT and HRCT images of the remaining eight volunteers were used for evaluation. LRCT blocks were sampled at 32-voxel intervals in each coordinate direction and predicted HRCT blocks were stitched to generate a predicted HRCT image. Results: Mean ± standard deviation of structural similarity (SSIM) values between predicted and true HRCT using both 3DGAN-CIRCLE-based supervised (0.84 ± 0.03) and unsupervised (0.83 ± 0.04) methods were significantly (p < 0.001) higher than the mean SSIM value between LRCT and true HRCT (0.75 ± 0.03). All Tb measures derived from predicted HRCT by the supervised 3DGAN-CIRCLE showed higher agreement (CCC ∈ [0.956, 0.991]) with the reference values from true HRCT as compared to LRCT-derived values (CCC ∈ [0.732, 0.989]). For all Tb measures, except Tb plate-width (CCC = 0.866), the unsupervised 3DGAN-CIRCLE showed high agreement (CCC ∈ [0.920, 0.964]) with the true HRCT-derived reference measures. Moreover, Bland-Altman plots showed that supervised 3DGAN-CIRCLE predicted HRCT reduces bias and variability in residual values of different Tb measures as compared to LRCT and unsupervised 3DGAN-CIRCLE predicted HRCT. The supervised 3DGAN-CIRCLE method produced significantly improved performance (p < 0.001) for all Tb measures as compared to the two DL-based supervised methods available in the literature. Conclusions: 3DGAN-CIRCLE, trained in either unsupervised or supervised fashion, generates HRCT images with high structural similarity to the reference true HRCT images. The supervised 3DGAN-CIRCLE improves agreements of computed Tb microstructural measures with their reference values and outperforms the unsupervised 3DGAN-CIRCLE. 3DGAN-CIRCLE offers a viable DL solution to retrospectively improve image resolution, which may aid in data harmonization in multi-site longitudinal studies where scanner mismatch is unavoidable. [ABSTRACT FROM AUTHOR]
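The block-wise inference step (64 × 64 × 64 blocks sampled at 32-voxel intervals and stitched into a full HRCT prediction) can be sketched as below. Averaging the overlapping predictions and the name `predict_block` (a stand-in for the trained 3D generator) are assumptions for illustration; the abstract does not state how overlaps are merged.

```python
import numpy as np


def stitch_block_predictions(lrct, predict_block, block=64, stride=32):
    """Tile an LRCT volume into overlapping blocks, run a block-wise model,
    and average the overlapping predictions into one HRCT-like volume."""
    out = np.zeros_like(lrct, dtype=np.float32)
    weight = np.zeros_like(lrct, dtype=np.float32)
    nz, ny, nx = lrct.shape
    for z in range(0, nz - block + 1, stride):
        for y in range(0, ny - block + 1, stride):
            for x in range(0, nx - block + 1, stride):
                patch = lrct[z:z + block, y:y + block, x:x + block]
                out[z:z + block, y:y + block, x:x + block] += predict_block(patch)
                weight[z:z + block, y:y + block, x:x + block] += 1.0
    return out / np.maximum(weight, 1.0)
```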
- Published
- 2024
- Full Text
- View/download PDF
28. Early CT physics research at Massachusetts General Hospital.
- Author
-
Pelc, Norbert J. and Chesler, David A.
- Subjects
PHYSICS research ,COMPUTED tomography ,HOSPITALS ,IMAGE reconstruction - Abstract
Although CT imaging was introduced at Massachusetts General Hospital (MGH) quite early, with its first CT scanner installed in 1973, CT research at MGH started years earlier. The goal of this paper is to describe some of this innovative work and related accomplishments. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. A quality‐checked and physics‐constrained deep learning method to estimate material basis images from single‐kV contrast‐enhanced chest CT scans.
- Author
-
Li, Yinsheng, Tie, Xin, Li, Ke, Zhang, Ran, Qi, Zhihua, Budde, Adam, Grist, Thomas M., and Chen, Guang‐Hong
- Subjects
DEEP learning ,COMPUTED tomography ,STANDARD deviations ,CONSTRAINTS (Physics) - Abstract
Background: Single-kV CT imaging is one of the primary imaging methods in radiology practices. However, it does not provide material basis images for some subtle lesion characterization tasks in clinical diagnosis. Purpose: To develop a quality-checked and physics-constrained deep learning (DL) method to estimate material basis images from single-kV CT data without resorting to dual-energy CT acquisition schemes. Methods: Single-kV CT images are decomposed into two material basis images using a deep neural network. The role of this network is to generate a feature space with 64 template features with the same matrix dimensions as the input single-kV CT image. These 64 template image features are then combined to generate the desired material basis images with different sets of combination coefficients, one for each material basis image. Dual-energy CT image acquisitions with two separate kVs were curated to generate paired training data between a single-kV CT image and the corresponding two material basis images. To ensure the obtained material basis images are consistent with the encoded spectral information in the actual projection data, two physics constraints, that is, (1) the effective energy of each measured projection datum, which characterizes the beam hardening in data acquisition, and (2) physical factors of the scanner such as detector and tube characteristics, are incorporated into the end-to-end training. The entire architecture is referred to as Deep-En-Chroma in this paper. In the application stage, the generated material basis images are sent to a deep quality check (Deep-QC) network to assess the quality of the estimated images and to report the pixel-wise estimation errors to users. The models were developed using 5592 training and validation pairs generated from 48 clinical cases. An additional 1526 CT images from another 13 patients were used to evaluate the quantitative accuracy of the water and iodine basis images estimated by Deep-En-Chroma. Results: For the iodine basis images estimated by Deep-En-Chroma, the mean difference with respect to dual-energy CT is −0.25 mg/mL, and the agreement limits are [−0.75 mg/mL, +0.24 mg/mL]. For the water basis images estimated by Deep-En-Chroma, the mean difference with respect to dual-energy CT is 0.0 g/mL, and the agreement limits are [−0.01 g/mL, 0.01 g/mL]. Across the test cohort, the median [25th, 75th percentile] root mean square errors between the Deep-En-Chroma and dual-energy material images are 14 [12, 16] mg/mL for the water images and 0.73 [0.64, 0.80] mg/mL for the iodine images. When significant errors are present in the estimated material basis images, Deep-QC can capture these errors and provide pixel-wise error maps to inform users whether the DL results are trustworthy. Conclusions: The Deep-En-Chroma network provides a new pathway to estimating clinically relevant material basis images from single-kV CT data, and the Deep-QC module informs end-users of the accuracy of the DL material basis images in practice. [ABSTRACT FROM AUTHOR]
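The combination step described in the Methods (64 template features combined with a different coefficient set per material) reduces to a linear mixing of feature maps, as sketched below. How the coefficients themselves are produced by the network is not specified in the abstract, so this only illustrates the final combination.

```python
import numpy as np


def combine_template_features(features, coeffs):
    """Combine template feature maps into material basis images.

    features: (64, H, W) template features produced by the network.
    coeffs:   (n_materials, 64) combination coefficients, one row per basis image.
    Returns:  (n_materials, H, W) material basis images (e.g., water and iodine).
    """
    return np.tensordot(coeffs, features, axes=([1], [0]))
```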
- Published
- 2023
- Full Text
- View/download PDF
30. Automatic segmentation of the tumor in nonsmall‐cell lung cancer by combining coarse and fine segmentation.
- Author
-
Zhang, Fuli, Wang, Qiusheng, Fan, Enyu, Lu, Na, Chen, Diandian, Jiang, Huayong, and Wang, Yadi
- Subjects
LUNG cancer ,DEEP learning ,LUNG tumors ,CONVOLUTIONAL neural networks ,CHEST (Anatomy) ,COMPUTED tomography - Abstract
Objectives: Radiotherapy plays an important role in the treatment of nonsmall-cell lung cancer (NSCLC). Accurate delineation of the tumor is the key to successful radiotherapy. Compared with commonly used manual delineation, which is time-consuming and laborious, automatic segmentation methods based on deep learning can greatly improve treatment efficiency. Methods: In this paper, we introduce an automatic segmentation method for NSCLC that combines coarse and fine segmentation. The coarse segmentation network is the first level, identifying the rough region of the tumor. In this network, according to the tissue structure distribution of the thoracic cavity where the tumor is located, we designed a competition method between tumors and organs at risk (OARs), which can increase the proportion of the identified tumor covering the ground truth and reduce false identification. The fine segmentation network is the second level, carrying out precise segmentation on the results of the coarse level. These two networks are independent of each other during training. When they are used together, morphological processing with small-scale erosion and large-scale dilation is applied to the coarse segmentation results, and the outcomes are sent to the fine segmentation network as input, so as to exploit the complementary advantages of the two networks. Results: In the experiment, CT images of 200 patients with NSCLC are used to train the network, and CT images of 60 patients are used for testing. Finally, our method produced a Dice similarity coefficient of 0.78 ± 0.10. Conclusions: The experimental results show that the proposed method can accurately segment NSCLC tumors and can also provide support for clinical diagnosis and treatment. [ABSTRACT FROM AUTHOR]
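The morphological hand-off between the two networks (a small erosion followed by a larger dilation of the coarse mask before it is fed to the fine network) might look like the following sketch; the iteration counts are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation


def refine_coarse_mask(coarse_mask, erode_iter=1, dilate_iter=5):
    """Post-process the coarse tumor mask: a small-scale erosion removes
    spurious fragments, and a larger-scale dilation ensures the retained
    region still covers the tumor before fine segmentation."""
    mask = binary_erosion(coarse_mask, iterations=erode_iter)
    mask = binary_dilation(mask, iterations=dilate_iter)
    return mask.astype(np.uint8)
```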
- Published
- 2023
- Full Text
- View/download PDF
31. 3D gray density coding feature for benign‐malignant pulmonary nodule classification on chest CT.
- Author
-
Zheng, BingBing, Yang, Dawei, Zhu, Yu, Liu, Yatong, Hu, Jie, and Bai, Chunxue
- Subjects
COMPUTED tomography ,PULMONARY nodules ,GRAY codes ,FEATURE extraction ,COMPUTER-aided diagnosis ,RANDOM forest algorithms - Abstract
Purpose: Early detection is important for reducing lung cancer-related deaths. Computer-aided detection systems (CADs) can help radiologists make an early diagnosis. In this paper, we propose a novel 3D gray density coding feature (3D GDC) and fuse it with extracted geometric features. The fused feature and a random forest are used for benign-malignant pulmonary nodule classification on chest CT. Methods: First, a dictionary model is created to acquire a codebook. It is used to obtain feature descriptors and consists of a 3D block database (BD) and distance-matrix clustering centers. The 3D BD is balanced and randomly selected from benign and malignant pulmonary nodules in the training data. The clustering centers are obtained by clustering the distance matrix, which contains the distance between every pair of blocks in the 3D BD. Then, a feature descriptor is obtained by coding the pulmonary nodule with the codebook, and the 3D GDC feature is the histogram of the feature descriptor. Second, geometric features are extracted to form the fused feature. Finally, a random forest performs benign-malignant pulmonary nodule classification using the fusion of the 3D gray density coding feature and the geometric features. Results: We verify the effectiveness of our method on the public LIDC-IDRI dataset and the private ZSHD dataset. For the LIDC-IDRI dataset, compared with other state-of-the-art methods, we achieve more satisfactory results with 93.17 ± 1.94% accuracy and 97.53 ± 1.62% AUC. The private ZSHD dataset contains a total of 238 lung nodules from 203 patients; the accuracy and AUC achieved by our method are 90.0% and 93.15%. Conclusions: The results show that our method can provide doctors with more accurate benign-malignant pulmonary nodule classification for auxiliary diagnosis, and it is more interpretable than 3D CNN methods, providing doctors with more auxiliary information. [ABSTRACT FROM AUTHOR]
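A bag-of-blocks reading of the 3D GDC pipeline is sketched below: gray-value blocks are extracted from a nodule, coded against a learned codebook, and summarized as a histogram that is fused with geometric features for a random forest. Note that the paper clusters a pairwise distance matrix between blocks; k-means on raw block vectors, as well as the block size and cluster count, are simplifications used here only for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier


def extract_blocks(volume, block=5, stride=2):
    """Collect small 3D gray-value blocks from a nodule volume as flat vectors."""
    nz, ny, nx = volume.shape
    blocks = [volume[z:z + block, y:y + block, x:x + block].ravel()
              for z in range(0, nz - block + 1, stride)
              for y in range(0, ny - block + 1, stride)
              for x in range(0, nx - block + 1, stride)]
    return np.array(blocks)


def gdc_feature(volume, codebook):
    """Code each block against the codebook and return the normalized
    histogram of code assignments (a stand-in for the 3D GDC feature)."""
    codes = codebook.predict(extract_blocks(volume))
    hist = np.bincount(codes, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)


# Training sketch (train_volumes, geom_feats, labels are placeholders for real data):
# codebook = KMeans(n_clusters=64).fit(np.vstack([extract_blocks(v) for v in train_volumes]))
# X = [np.concatenate([gdc_feature(v, codebook), g]) for v, g in zip(train_volumes, geom_feats)]
# clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```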
- Published
- 2021
- Full Text
- View/download PDF
32. Explainability and controllability of patient‐specific deep learning with attention‐based augmentation for markerless image‐guided radiotherapy.
- Author
-
Terunuma, Toshiyuki, Sakae, Takeji, Hu, Yachao, Takei, Hideyuki, Moriya, Shunsuke, Okumura, Toshiyuki, and Sakurai, Hideyuki
- Subjects
IMAGE-guided radiation therapy ,DEEP learning ,COMPUTED tomography ,DATA augmentation ,CONVOLUTIONAL neural networks ,LUNG tumors - Abstract
Background: We reported the concept of patient-specific deep learning (DL) for real-time markerless tumor segmentation in image-guided radiotherapy (IGRT). The method aims to control the attention of convolutional neural networks (CNNs) through artificial differences in co-occurrence probability (CoOCP) in the training datasets, that is, focusing CNN attention on soft tissues while ignoring bones. However, the effectiveness of this attention-based data augmentation has not been confirmed by explainable techniques. Furthermore, the feasibility of tumor segmentation in clinical kilovolt (kV) X-ray fluoroscopic (XF) images has not been confirmed against reasonable ground truths. Purpose: The first aim of this paper was to present evidence that the proposed method provides an explanation and control of DL behavior. The second purpose was to validate real-time lung tumor segmentation in clinical kV XF images for IGRT. Methods: This retrospective study included 10 patients with lung cancer. Patient-specific and XF angle-specific image pairs comprising digitally reconstructed radiographs (DRRs) and projected-clinical-target-volume (pCTV) images were calculated from four-dimensional computed tomography data and treatment planning information. The training datasets were primarily augmented by random overlay (RO) and noise injection (NI): RO aims to differentiate the positional CoOCP of soft tissues and bones, and NI aims to create a difference in the frequency of occurrence of local and global image features. The CNNs for each patient-and-angle were automatically optimized in the DL training stage to transform the training DRRs into pCTV images. In the inference stage, the trained CNNs transformed the test XF images into pCTV images, thus identifying target positions and shapes. Results: The visual analysis of DL attention heatmaps for a test image demonstrated that our method focused CNN attention on soft tissue and global image features rather than bones and local features. The processing time for each patient-and-angle-specific dataset in the training stage was ∼30 min, whereas that in the inference stage was 8 ms/frame. The estimated three-dimensional 95th percentile tracking error, Jaccard index, and Hausdorff distance for the 10 patients were 1.3–3.9 mm, 0.85–0.94, and 0.6–4.9 mm, respectively. Conclusions: The proposed attention-based data augmentation with both RO and NI made the CNN behavior more explainable and more controllable. The results obtained demonstrated the feasibility of real-time markerless lung tumor segmentation in kV XF images for IGRT. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
33. Online geometry calibration for retrofit computed tomography from a mouse rotation system and a small‐animal imager.
- Author
-
Zhou, Huanyi, Reeves, Stanley, Chou, Cheng‐Ying, Brannen, Andrew, and Panizzi, Peter
- Subjects
COMPUTED tomography ,CONE beam computed tomography ,X-ray imaging ,CALIBRATION ,IMAGING systems ,IMAGE reconstruction algorithms ,GEOMETRIC tomography ,MICE - Abstract
Background: Computed tomography (CT) generates a three-dimensional rendering that can be used to interrogate a given region or desired structure from any orientation. However, in preclinical research, its deployment remains limited due to relatively high upfront costs. Existing integrated imaging systems that provide merged planar X-ray imaging also dwarf CT's popularity in small laboratories due to their added versatility. Purpose: In this paper, we sought to generate CT-like data using an existing small-animal X-ray imager with a specialized specimen rotation system, or MiSpinner. This setup conforms to the cone-beam CT (CBCT) geometry, which demands high spatial calibration accuracy. Therefore, a simple but robust geometry calibration algorithm is necessary to ensure that the entire imaging system works properly and accurately. Methods: Because the rotation system is not permanently affixed, we propose a structure tensor-based two-step online (ST-TSO) geometry calibration algorithm. Specifically, two datasets are needed, namely, calibration and actual measurements. A calibration measurement captures the background of the system's forward X-ray projections. A study of the background image reveals the characteristics of the X-ray photon distribution and thus provides a reliable estimate of the imaging geometry origin. Actual measurements consist of an X-ray of the intended object, including possible geometry errors. A comprehensive image processing technique helps to detect spatial misalignment information. Accordingly, the first processing step employs a modified projection matrix-based calibration algorithm to estimate the relevant geometric parameters. Predicted parameters are then fine-tuned in a second processing step by an iterative strategy based on the symmetry property of the sum of projections. Virtual projections calculated from the parameters after two-step processing compensate for the scanning errors and are used for CT reconstruction. Experiments on phantom and mouse imaging data were performed to validate the calibration algorithm. Results: Once system correction was conducted, CBCT of a CT bar phantom and a cohort of euthanized mice were analyzed. No obvious structure error or spatial artifacts were observed, validating the accuracy of the proposed geometry calibration method. Digital phantom simulation indicated that, compared with the preset spatial values, errors in the final estimated parameters could be reduced to a 0.05° difference in dominant angle and a 0.5-pixel difference in dominant axis bias. The in-plane resolution view of the CT bar phantom revealed that the resolution approaches 150 μm. Conclusions: A constrained two-step online geometry calibration algorithm has been developed to calibrate an integrated X-ray imaging system, defined by a first-step analytical estimation and a second-step iterative fine-tuning. Test results have validated its accuracy in system correction, thus demonstrating the potential of the described system to be modified and adapted for preclinical research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
34. Domain‐adaptive denoising network for low‐dose CT via noise estimation and transfer learning.
- Author
-
Wang, Jiping, Tang, Yufei, Wu, Zhongyi, Tsui, Benjamin M. W., Chen, Wei, Yang, Xiaodong, Zheng, Jian, and Li, Ming
- Subjects
IMAGE denoising ,TRANSFER of training ,NOISE ,IMAGING phantoms ,INSPECTION & review ,COMPUTED tomography ,DEEP learning - Abstract
Background: In recent years, low-dose computed tomography (LDCT) has played an important role in diagnostic CT, reducing the potential adverse effects of X-ray radiation on patients while maintaining the same diagnostic image quality. Purpose: Deep learning (DL)-based methods have played an increasingly important role in the field of LDCT imaging. However, their performance is highly dependent on the consistency of feature distributions between training data and test data. Due to patients' breathing movements during data acquisition, paired LDCT and normal-dose CT images are difficult to obtain in realistic imaging scenarios. Moreover, LDCT images from simulation or clinical CT examination often have different feature distributions due to contamination by different amounts and types of image noise. If a network model trained with a simulated dataset is used to directly test clinical patients' LDCT data, its denoising performance may be degraded. Based on this, we propose a novel domain-adaptive denoising network (DADN) via noise estimation and transfer learning to resolve the out-of-distribution problem in LDCT imaging. Methods: To overcome the domain adaptation issue, a novel network model consisting of a reconstruction network and a noise estimation network was designed. The noise estimation network, based on a double-branch structure, is used for image noise extraction and adaptation. Meanwhile, the U-Net-based reconstruction network uses several spatially adaptive normalization modules to fuse the multi-scale noise input. Moreover, to facilitate the adaptation of the proposed DADN network to new imaging scenarios, we adopt a two-stage network training plan. In the first stage, a public simulated dataset is used for training. In the second, transfer-training stage, the network model is further fine-tuned with a torso phantom dataset while some parameters are frozen. The main reason for using the two-stage training scheme is that the feature distribution of image content from the public dataset is complex and diverse, whereas the feature distribution of the noise pattern from the torso phantom dataset is closer to realistic imaging scenarios. Results: In an evaluation study, the trained DADN model is applied to both the public and clinical patient LDCT datasets. Through the comparison of visual inspection and quantitative results, it is shown that the proposed DADN network model performs well in terms of noise and artifact suppression, while effectively preserving image contrast and details. Conclusions: In this paper, we have proposed a new DL network to overcome the domain adaptation problem in LDCT image denoising. Moreover, the results demonstrate the feasibility and effectiveness of the proposed DADN network model as a new DL-based LDCT image denoising method. [ABSTRACT FROM AUTHOR]
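The second-stage transfer training (fine-tuning with some parameters frozen) corresponds to a standard freeze-and-finetune pattern; a minimal PyTorch sketch is given below. Which layers the authors actually freeze is not stated in the abstract, so the keyword-based selection here is purely an assumption.

```python
import torch


def setup_transfer_stage(model, trainable_keywords=("noise",), lr=1e-5):
    """Freeze most parameters of a model pretrained on the simulated dataset
    and return an optimizer over the remaining trainable parameters
    (here, illustratively, the noise-estimation branch)."""
    for name, param in model.named_parameters():
        param.requires_grad = any(key in name for key in trainable_keywords)
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.Adam(trainable, lr=lr)
```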
- Published
- 2023
- Full Text
- View/download PDF
35. Advances in digital and physical anthropomorphic breast phantoms for x‐ray imaging.
- Author
-
Glick, Stephen J. and Ikejimba, Lynda C.
- Subjects
IMAGING phantoms ,BREAST cancer treatment ,CLINICAL trials ,TOMOSYNTHESIS ,COMPUTED tomography ,MAMMOGRAMS - Abstract
Purpose: With the advent of three-dimensional (3D) breast imaging modalities such as digital breast tomosynthesis (DBT) and dedicated breast CT (bCT), research into new anthropomorphic breast phantoms has accelerated. These breast phantoms are important for the optimization of new breast imaging systems, assessing new regulatory submissions to prove safety and effectiveness, and developing new approaches to acceptance and constancy testing of 3D breast imaging systems. This paper provides a review of current research investigating both digital and physical breast phantom development for use in x-ray based imaging. Methods: Two approaches for designing anthropomorphic digital breast phantoms are discussed: procedural model-based phantom generation, where breast features are expressed using mathematical models, and patient-based generation, where breast structures from tissue specimens or patient-based breast MR or CT volumes are segmented. Following this discussion, a review of physical anthropomorphic phantoms is given, with emphasis on the advantages and disadvantages of each approach. Conclusions: This paper provides a summary of the state-of-the-art in anthropomorphic breast phantom development for x-ray breast imaging. The primary advantage of model-based digital phantoms is that an unlimited number of phantoms with varying size, shape, and density can be generated. Current research on model-based breast phantoms is producing increasingly realistic breast models; however, they probably are not yet able to pass the so-called "fool the radiologist" visualization test. Empirical patient-based breast phantoms are typically based on clinical breast CT data and look more realistic. However, clinical breast CT images have limited spatial resolution and thus do not always portray the finer details in the breast. A number of innovative solutions have been proposed for fabricating physical anthropomorphic breast phantoms based on digital phantom models; however, a number of challenges remain, including realistic modeling of x-ray attenuation properties and accurately representing high-frequency structures within the breast. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
36. MultiIB‐TransUNet: Transformer with multiple information bottleneck blocks for CT and ultrasound image segmentation.
- Author
-
Li, Guangju, Jin, Dehu, Yu, Qi, Zheng, Yuanjie, and Qi, Meng
- Subjects
- *
TRANSFORMER models , *ULTRASONIC imaging , *COMPUTED tomography , *SURGICAL diagnosis , *CONVOLUTIONAL neural networks - Abstract
Background: Accurate medical image segmentation is crucial for disease diagnosis and surgical planning. Transformer networks offer a promising alternative for medical image segmentation because they can learn global features through self-attention mechanisms. To further enhance performance, many researchers have incorporated more Transformer layers into their models. However, this approach often increases the number of model parameters significantly, causing a potential rise in complexity. Moreover, medical image segmentation datasets usually have fewer samples, which leads to a risk of model overfitting. Purpose: This paper aims to design a medical image segmentation model that has fewer parameters and can effectively alleviate overfitting. Methods: We design a MultiIB-Transformer structure consisting of a single Transformer layer and multiple information bottleneck (IB) blocks. The Transformer layer is used to capture long-distance spatial relationships to extract global feature information. The IB block is used to compress noise and improve model robustness. The advantage of this structure is that it needs only one Transformer layer to achieve state-of-the-art (SOTA) performance, significantly reducing the number of model parameters. In addition, we designed a new skip connection structure. It needs only two 1 × 1 convolutions, so the high-resolution feature map can effectively carry both semantic and spatial information, thereby alleviating the semantic gap. Results: On the Breast UltraSound Images (BUSI) dataset, the proposed model achieves IoU and F1 scores of 67.75 and 87.78. On the Synapse multi-organ segmentation dataset, the parameter count (Param), Hausdorff Distance (HD), and Dice Similarity Coefficient (DSC) are 22.30, 20.04, and 81.83, respectively. Conclusions: Our proposed model (MultiIB-TransUNet) achieved superior results with fewer parameters compared to other models. [ABSTRACT FROM AUTHOR]
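The skip connection described above, built from just two 1 × 1 convolutions, can be read as projecting the encoder and decoder feature maps to a common channel width and combining them; the PyTorch sketch below follows that reading, with the summation and channel sizes being assumptions rather than details given in the abstract.

```python
import torch
import torch.nn as nn


class SkipFuse(nn.Module):
    """Skip connection using two 1x1 convolutions: one projects the
    high-resolution encoder feature, the other the decoder feature, and the
    sum carries both spatial and semantic information."""

    def __init__(self, enc_channels, dec_channels, out_channels):
        super().__init__()
        self.proj_enc = nn.Conv2d(enc_channels, out_channels, kernel_size=1)
        self.proj_dec = nn.Conv2d(dec_channels, out_channels, kernel_size=1)

    def forward(self, enc_feat, dec_feat):
        # enc_feat and dec_feat are assumed to share spatial dimensions here.
        return self.proj_enc(enc_feat) + self.proj_dec(dec_feat)
```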
- Published
- 2024
- Full Text
- View/download PDF
37. Synthetization of high‐dose images using low‐dose CT scans.
- Author
-
Hsieh, Jiang
- Subjects
- *
COMPUTED tomography , *NOISE control , *RADIATION exposure , *DEEP learning , *INSPECTION & review , *POWER spectra , *IMAGE processing - Abstract
Background: Radiation dose reduction has been the focus of many research activities in x-ray CT. Various approaches have been taken to minimize the dose to patients, ranging from the optimization of clinical protocols and refinement of the scanner hardware design to the development of advanced reconstruction algorithms. Although significant progress has been made, more advancements in this area are needed to minimize the radiation risks to patients. Purpose: Reconstruction algorithm-based dose reduction approaches focus mainly on the suppression of noise in the reconstructed images while preserving detailed anatomical structures. Such an approach effectively produces synthesized high-dose images (SHD) from the data acquired with low-dose scans. A representative example is model-based iterative reconstruction (MBIR). Despite its widespread deployment, its full adoption in a clinical environment is often limited by an undesirable image texture. Recent studies have shown that deep learning image reconstruction (DLIR) can overcome this shortcoming. However, the limited availability of high-quality clinical images for training and validation is often the bottleneck for its development. In this paper, we propose a novel approach to generate SHD with existing low-dose clinical datasets that overcomes both the noise texture issue and the data availability issue. Methods: Our approach is based on the observation that noise in the image can be effectively reduced by performing image processing orthogonal to the imaging plane. This process essentially creates an equivalent thick-slice image (TSI), and the characteristics of the TSI depend on the nature of the image processing. An advantage of this approach is its potential to reduce the impact on the noise texture. The resulting image, however, is likely corrupted by anatomical structural degradation due to partial volume effects. Careful examination has shown that the differential signal between the original and the processed image contains sufficient information to identify regions where anatomical structures are modified. The differential signal, unfortunately, also contains significant noise, which has to be removed. The noise removal can be accomplished by performing iterative noise reduction that preserves structural information. The processed differential signal is subsequently subtracted from the TSI to arrive at the SHD. Results: The algorithm was evaluated extensively with phantom and clinical datasets. For better visual inspection, difference images between the original and SHD were generated and carefully examined. Negligible residual structure could be observed. In addition to the qualitative inspection, quantitative analyses were performed on clinical images in terms of CT number consistency and noise reduction characteristics. Results indicate that no CT number bias is introduced by the proposed algorithm. In addition, the noise reduction capability is consistent across different patient anatomical regions. Further, simulated water phantom scans were utilized to generate the noise power spectrum (NPS) and demonstrate the preservation of the noise texture. Conclusions: We present a method to generate SHD datasets from regularly acquired low-dose CT scans. Images produced with the proposed approach exhibit excellent noise reduction with the desired noise texture. Extensive clinical and phantom studies have demonstrated the efficacy and robustness of our approach. Potential limitations of the current implementation are discussed and further research topics are outlined. [ABSTRACT FROM AUTHOR]
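One way to read the TSI-based pipeline in this abstract is sketched below: average along the slice direction to form the TSI, take the differential between the TSI and the original volume, suppress its noise, and subtract the processed differential from the TSI. The sign convention, the slab width, and the use of a median filter in place of the iterative noise reduction are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d, median_filter


def synthesize_high_dose(volume, slab_slices=5):
    """Sketch of thick-slice-based noise reduction: build the TSI by averaging
    along the slice axis, recover structural detail from the noisy differential
    signal, and subtract the denoised differential from the TSI."""
    vol = volume.astype(np.float32)
    tsi = uniform_filter1d(vol, size=slab_slices, axis=0)    # thick-slice image
    differential = tsi - vol                                 # structure change + noise
    structure = median_filter(differential, size=(1, 3, 3))  # placeholder denoiser
    return tsi - structure
```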
- Published
- 2024
- Full Text
- View/download PDF
38. Photon count rates estimated from 1980 clinical CT scans.
- Author
-
Szczykutowicz, Timothy P., Bujila, Robert, Yin, Zhye, Slavic, Scott, and Maltz, Jonathan
- Subjects
X-rays ,COMPUTED tomography ,PHOTON counting ,PHOTON detectors ,CLINICAL indications ,LUNGS ,TORSO - Abstract
Background: All photon counting detectors have a characteristic count rate above which their performance degrades. Degradation in the clinical setting takes the form of increased noise, reduced material quantification accuracy, and image artifacts. Count rate is a function of patient attenuation, beam filtration, scanner geometry, and X-ray technique. Purpose: To guide protocol and technology development in the photon counting space, knowledge of clinical count rates spanning the complete range of clinical indications and patient sizes is needed. In this paper, we use clinical data to characterize the range of computed tomography (CT) count rates. Methods: We retrospectively gathered 1980 patient exams spanning the entire body (head/neck/chest/abdomen/extremity) and sampled 36 951 axial image slices. We assigned the tissue labels air/lung/fat/soft tissue/bone to each voxel for each slice using CT number thresholds. We then modeled four different bowtie filters, 70/80/100/120/140 kV spectra, and a range of mA values. We forward-projected each slice to obtain detector-incident count rates, using the geometry of a GE Revolution Apex scanner. Our analysis divided the detector into thirds: the central one-third, one-third of the detector split into two equal regions adjacent to the central third, and the final one-third divided equally between the outer detector edges. We report the 99th percentile of counts to mimic the upper limit of count rates passing through a patient, as a function of patient water-equivalent diameter. We also report the percentage of patient scans, by body region, over different count rate thresholds for all combinations of bowtie and beam energy. Results: For routine exam types, we recorded count rates of approximately 3.5 × 10⁸ counts/mm²/s in the torso, extremities, and brain. For neck scans, we observed count rates near 6 × 10⁸ counts/mm²/s. Our simulations of 1000 mA, appropriately mimicking the mA needs for fast pediatric, fast thoracic, and cardiac scanning, resulted in count rates of over 10 × 10⁸ counts/mm²/s for the torso, extremities, and brain. At 1000 mA, for the neck region, we observed count rates close to 2 × 10⁹ counts/mm²/s. Importantly, we saw only a small change in maximum count rate needs over patient size, which we attribute to patient mis-positioning with respect to the bowtie filters. As expected, combinations of kV and bowtie filter with higher beam energies and wider/less attenuating bowtie fluence profiles lead to higher count rates relative to lower energies. The 99th–50th percentile count rate changed the most for the torso region, with a maximum variation of 3.9 × 10⁸ to 1.2 × 10⁷ counts/mm²/s. The head/neck/extremity regions had less than a 50% change in count rate from the 99th to 50th percentiles. Conclusions: Our results are the first to use a large patient cohort spanning all body regions to characterize count rates in CT. They should be useful in helping researchers understand count rates as a function of body region and mA for various combinations of bowtie filter designs and beam energies. Our results indicate clinical rates >1 × 10⁹ counts/mm²/s, but they do not predict the image quality impact of using a detector with a lower characteristic count rate. [ABSTRACT FROM AUTHOR]
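The dependence of detector-incident count rate on patient attenuation is, to first order, a Beer-Lambert relation; the toy calculation below illustrates the scale of the effect using a water-equivalent path length. The unattenuated rate and the effective attenuation coefficient are placeholders, not values from this study.

```python
import numpy as np


def detector_count_rate(n0_air, water_equiv_path_cm, mu_water_per_cm=0.2):
    """Rough detector-incident count rate behind a patient modeled as water of
    a given equivalent path length (monoenergetic Beer-Lambert approximation).

    n0_air: unattenuated count rate (counts/mm^2/s) for a given kV, bowtie, and mA.
    mu_water_per_cm: effective linear attenuation coefficient of water (illustrative).
    """
    return n0_air * np.exp(-mu_water_per_cm * water_equiv_path_cm)


# Example: a 30 cm water-equivalent path attenuates the beam by exp(-0.2 * 30) ≈ 2.5e-3.
```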
- Published
- 2022
- Full Text
- View/download PDF
39. Diagnosis after zooming in: A multilabel classification model by imitating doctor reading habits to diagnose brain diseases.
- Author
-
Wang, Ruiqian, Fu, Guanghui, Li, Jianqiang, and Pei, Yan
- Subjects
BRAIN diseases ,DEEP learning ,PHYSICIANS ,MACHINE learning ,COMPUTED tomography ,DIAGNOSIS - Abstract
Purpose: Computed tomography (CT) has the advantages of being low cost and noninvasive and is a primary diagnostic method for brain diseases. However, it is a challenge for junior radiologists to diagnose CT images accurately and comprehensively. It is necessary to build a system that can help doctors diagnose and that provides an explanation of its predictions. Despite the success of deep learning algorithms in the field of medical image analysis, the task of brain disease classification still faces challenges: researchers have paid little attention to the burden of complex manual labeling and to the incompleteness of prediction explanations. More importantly, most studies only measure the performance of the algorithm but do not measure its effectiveness in doctors' actual diagnoses. Methods: In this paper, we propose a model called DrCT2 that can detect brain diseases without using image-level labels and provide a more comprehensive explanation at both the slice and sequence levels. This model achieves reliable performance by imitating human expert reading habits: targeted scaling of primary images from the full slice scans and observation of suspicious lesions for diagnosis. We evaluated our model on two open-access data sets: CQ500 and the RSNA Intracranial Hemorrhage Detection Challenge. In addition, we defined three tasks to comprehensively evaluate model interpretability by measuring whether the algorithm can select key images with lesions. To verify the algorithm from the perspective of practical application, three junior radiologists were invited to participate in the experiments, comparing different aspects of performance before and after human–computer cooperation. Results: The method achieved F1-scores of 0.9370 on CQ500 and 0.8700 on the RSNA data set. The results show that our model has good interpretability under the premise of good performance. Human radiologist evaluation experiments have shown that our model can effectively improve diagnostic accuracy and efficiency. Conclusions: We proposed a model that can simultaneously detect multiple brain diseases. The report generated by the model can assist doctors in avoiding missed diagnoses, and it has good clinical application value. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
40. Decoupled pyramid correlation network for liver tumor segmentation from CT images.
- Author
-
Zhang, Yao, Yang, Jiawei, Liu, Yang, Tian, Jiang, Wang, Siyun, Zhong, Cheng, Shi, Zhongchao, Zhang, Yang, and He, Zhiqiang
- Subjects
LIVER tumors ,COMPUTED tomography ,PYRAMIDS ,MULTILEVEL models ,IMAGE segmentation ,DIAGNOSTIC imaging ,EYE tracking - Abstract
Purpose: Automated liver tumor segmentation from computed tomography (CT) images is a necessary prerequisite for interventions of hepatic abnormalities and surgery planning. However, accurate liver tumor segmentation remains challenging due to the large variability of tumor sizes and inhomogeneous texture. Recent advances in medical image segmentation based on fully convolutional networks (FCNs) drew on the success of learning discriminative pyramid features. In this paper, we propose a decoupled pyramid correlation network (DPC-Net) that exploits attention mechanisms to fully leverage both low- and high-level features embedded in FCNs to segment liver tumors. Methods: We first design a powerful pyramid feature encoder (PFE) to extract multilevel features from input images. Then we decouple the characteristics of features concerning the spatial dimension (i.e., height, width, depth) and the semantic dimension (i.e., channel). On top of that, we present two types of attention modules, spatial correlation (SpaCor) and semantic correlation (SemCor) modules, to recursively measure the correlation of multilevel features. The former selectively emphasizes global semantic information in low-level features with the guidance of high-level ones. The latter adaptively enhances spatial details in high-level features with the guidance of low-level ones. Results: We evaluate the DPC-Net on the MICCAI 2017 Liver Tumor Segmentation (LiTS) challenge data set. The Dice similarity coefficient (DSC) and average symmetric surface distance (ASSD) are employed for evaluation. The proposed method obtains a DSC of 76.4% and an ASSD of 0.838 mm for liver tumor segmentation, outperforming the state-of-the-art methods. It also achieves a competitive result with a DSC of 96.0% and an ASSD of 1.636 mm for liver segmentation. Conclusions: The experimental results show promising performance of DPC-Net for liver and tumor segmentation from CT images. Furthermore, the proposed SemCor and SpaCor can effectively model the multilevel correlation from both semantic and spatial dimensions. The proposed attention modules are lightweight and can be easily extended to other multilevel methods in an end-to-end manner. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
41. Recent advances on the development of phantoms using 3D printing for imaging with CT, MRI, PET, SPECT, and ultrasound.
- Author
-
Filippou, Valeria and Tsoumpas, Charalampos
- Subjects
THREE-dimensional printing ,IMAGING phantoms ,COMPUTED tomography ,MAGNETIC resonance imaging ,POSITRON emission tomography - Abstract
Purpose: Printing technology capable of producing three-dimensional (3D) objects has evolved in recent years and provides potential for developing reproducible and sophisticated physical phantoms. 3D printing technology can help rapidly develop relatively low-cost phantoms with appropriate complexity, which are useful in imaging or dosimetry measurements. The need for more realistic phantoms is emerging since imaging systems are now capable of acquiring multimodal and multiparametric data. This review addresses three main questions about the 3D printers currently in use and the materials they produce. The first question investigates whether the resolution of 3D printers is sufficient for existing imaging technologies. The second question explores whether the materials of 3D-printed phantoms can produce realistic images representing various tissues and organs as imaged by different modalities such as computed tomography (CT), positron emission tomography (PET), single-photon emission computed tomography (SPECT), magnetic resonance imaging (MRI), ultrasound (US), and mammography. The emergence of multimodal imaging increases the need for phantoms that can be scanned using different imaging modalities. The third question probes the feasibility and ease of "printing" radioactive or nonradioactive solutions during the printing process. Methods: A systematic review of medical imaging studies published after January 2013 was performed using strict inclusion criteria. The databases used were Scopus and Web of Knowledge with specific search terms. In total, 139 papers were identified; however, only 50 were classified as relevant for this paper. In this review, following an appropriate introduction and literature research strategy, all 50 articles are presented in detail. A summary of tables and example figures of the most recent advances in 3D printing for the purposes of phantoms across different imaging modalities is provided. Results: All 50 studies printed and scanned phantoms in CT, PET, SPECT, mammography, MRI, or US, or in a combination of those modalities. According to the literature, different parameters were evaluated depending on the imaging modality used. Almost all papers evaluated more than two parameters, the most common being Hounsfield units, density, attenuation, and speed of sound. Conclusions: The development of this field is rapidly evolving and becoming more refined. There is potential to reach the ultimate goal of using 3D phantoms to obtain feedback on imaging scanners and reconstruction algorithms more regularly. Although the development of imaging phantoms is evident, there are still some limitations to address. One is printing accuracy, which is limited by the printer properties. Another limitation is the materials available to print: there are not enough materials to mimic all tissue properties. For example, one material can mimic one property, such as the density of real tissue, but not another property, like speed of sound or attenuation. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
42. Technical Note: Emission expectation maximization look‐alike algorithms for x‐ray CT and other applications.
- Author
-
Zeng, Gengsheng L.
- Subjects
POSITRON emission tomography ,ALGORITHMS ,COMPUTED tomography ,COMPUTER simulation ,ANALYSIS of variance ,STATISTICAL weighting - Abstract
Purpose: In emission tomography, the expectation maximization (EM) algorithm is easy to use, with only one parameter to adjust: the number of iterations. On the other hand, the EM algorithms for transmission tomography are not so user-friendly and have many problems. This paper develops a new transmission algorithm similar to the emission EM algorithm. Methods: This paper develops a family of emission-EM-look-alike algorithms by expressing the emission EM algorithm in additive form and changing the weighting factor. One of the family members can be applied to transmission tomography such as x-ray computed tomography (CT). Results: Computer simulations are performed and compared with a similar algorithm from a different group using the transmission CT noise model. Our algorithm has the same convergence rate as theirs, and our algorithm provides a better contrast-to-noise ratio for lesion detection. Conclusions: For any noise variance function, an emission-EM-look-alike algorithm can be derived. This algorithm preserves many properties of the emission EM algorithm, such as multiplicative update, non-negativity, faster convergence for bright objects, and ease of implementation. [ABSTRACT FROM AUTHOR]
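For orientation, the emission ML-EM update that the paper takes as its starting point is sketched below in its usual multiplicative form. The transmission-capable family described in the abstract is obtained by rewriting this update in additive form and changing the weighting factor; that generalization is not reproduced here.

```python
import numpy as np


def emission_mlem(A, y, n_iter=50, eps=1e-12):
    """Standard emission ML-EM: x <- x * A^T(y / Ax) / (A^T 1).

    A: system matrix (n_rays x n_voxels), y: measured counts per ray.
    """
    x = np.ones(A.shape[1])
    sensitivity = A.T @ np.ones(A.shape[0])  # A^T 1
    for _ in range(n_iter):
        forward = A @ x
        ratio = y / np.maximum(forward, eps)
        x *= (A.T @ ratio) / np.maximum(sensitivity, eps)
    return x
```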
- Published
- 2018
- Full Text
- View/download PDF
43. Low-dose CT reconstruction using spatially encoded nonlocal penalty.
- Author
-
Kim, Kyungsang, El Fakhri, Georges, and Li, Quanzheng
- Subjects
COMPUTED tomography ,IMAGE reconstruction algorithms ,IMAGE quality analysis ,DNA damage ,MEDICAL physics - Abstract
Purpose: Computed tomography (CT) is one of the most used imaging modalities for imaging both symptomatic and asymptomatic patients. However, because of the high demand for lower radiation dose during CT scans, the reconstructed image can suffer from noise and artifacts due to the trade-off between the image quality and the radiation dose. The purpose of this paper is to improve the image quality of quarter dose images and to select the best hyperparameters using the regular dose image as ground truth. Methods: We first generated the axially stacked two-dimensional sinograms from the multislice raw projections with flying focal spots using a single slice rebinning method, which is an axially approximate method to provide simple implementation and efficient memory usage. To improve the image quality, a cost function containing the Poisson log-likelihood and spatially encoded nonlocal penalty is proposed. Specifically, an ordered subsets separable quadratic surrogates (OS-SQS) method for the log-likelihood is exploited and the patch-based similarity constraint with a spatially variant factor is developed to reduce the noise significantly while preserving features. Furthermore, we applied the Nesterov's momentum method for acceleration and the diminishing number of subsets strategy for noise consistency. Fast nonlocal weight calculation is also utilized to reduce the computational cost. Results: Datasets given by the Low Dose CT Grand Challenge were used for the validation, exploiting the training datasets with the regular and quarter dose data. The most important step in this paper was to fine-tune the hyperparameters to provide the best image for diagnosis. Using the regular dose filtered back-projection (FBP) image as ground truth, we could carefully select the hyperparameters by conducting a bias and standard deviation study, and we obtained the best images in a fixed number of iterations. We demonstrated that the proposed method with well selected hyperparameters improved the image quality using quarter dose data. The quarter dose proposed method was compared with the regular dose FBP, quarter dose FBP, and quarter dose l1-based 3-D TV method. We confirmed that the quarter dose proposed image was comparable to the regular dose FBP image and was better than images using other quarter dose methods. The reconstructed test images of the accreditation (ACR) CT phantom and 20 patients data were evaluated by radiologists at the Mayo clinic, and this method was awarded first place in the Low Dose CT Grand Challenge. Conclusion: We proposed the iterative CT reconstruction method using a spatially encoded nonlocal penalty and ordered subsets separable quadratic surrogates with the Nesterov's momentum and diminishing number of subsets. The results demonstrated that the proposed method with fine-tuned hyperparameters can significantly improve the image quality and provide accurate diagnostic features at quarter dose. The performance of the proposed method should be further improved for small lesions, and a more thorough evaluation using additional clinical data is required in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
44. Nonlinearly scaled prior image‐controlled frequency split for high‐frequency metal artifact reduction in computed tomography.
- Author
-
Anhaus, Julian A., Killermann, Philipp, Sedlmair, Martin, Winter, Jonas, Mahnken, Andreas H., and Hofmann, Christian
- Subjects
COMPUTED tomography ,METALS in surgery ,DENTAL fillings ,ENDOSSEOUS dental implants ,DENTAL implants ,THERAPEUTIC embolization - Abstract
Purpose: This paper introduces a new approach for the dedicated reduction of high-frequency metal artifacts, which applies a nonlinear scaling (NLS) transfer function to the high-frequency projection domain to reduce artifacts, while preserving edge information and anatomic detail by incorporating prior image information. Methods: An NLS function is applied to suppress high-frequency streak artifacts; to restrict the correction to metal projections only, scaling is performed in the sinogram domain. Anatomic information should be preserved and is excluded from scaling by incorporating a prior image from tissue classification. The corrected high-frequency sinogram is reconstructed and combined with the low-frequency component of a normalized metal artifact reduction (NMAR) image. Scans of different anthropomorphic phantoms were acquired (unilateral hip, bilateral hip, dental implants, and embolization coil). Multiple regions of interest (ROIs) were drawn around the metal implants and Hounsfield unit (HU) deviations were analyzed. Clinical data sets including single image slices of dental fillings, a bilateral hip implant, spinal fixation screws, and an aneurysm coil were reconstructed and assessed. Results: The prior image-controlled NLS can remove streak artifacts while preserving anatomic detail within the bone and soft tissue. The qualitative analysis of clinical cases showed a marked improvement within dental fillings and neuro coils, and a significant improvement within spinal screws or hip implants. The phantom scan measurements support this observation. In all phantom setups, the NLS-corrected result showed the lowest HU deviation and the best visualization of the data. Conclusions: The prior image-controlled NLS provides a method to reduce high-frequency streaks in metal-corrupted computed tomography (CT) data. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
45. Multiscale unsupervised domain adaptation for automatic pancreas segmentation in CT volumes using adversarial learning.
- Author
-
Zhu, Yan, Hu, Peijun, Li, Xiang, Tian, Yu, Bai, Xueli, Liang, Tingbo, and Li, Jingsong
- Subjects
PANCREAS ,PANCREATIC diseases ,COMPUTED tomography ,LEARNING strategies ,NETWORK performance - Abstract
Purpose: Computer-aided automatic pancreas segmentation is essential for the early diagnosis and treatment of pancreatic diseases. However, the annotation of pancreas images requires professional doctors and considerable expenditure. Due to imaging differences among institution populations, scanning devices, imaging protocols, and so on, model performance is prone to degrade significantly when models trained with domain-specific (usually institution-specific) datasets are directly applied to data from a new domain (other centers/institutions). In this paper, we propose a novel unsupervised domain adaptation method based on adversarial learning to address pancreas segmentation challenges posed by the lack of annotations and domain shift interference. Methods: A 3D semantic segmentation model with an attention module and a residual module is designed as the backbone pancreas segmentation model. In both the segmentation model and the domain adaptation discriminator network, a multiscale progressively weighted structure is introduced to acquire different fields of view. Features of labeled data and unlabeled data are fed in pairs into the proposed multiscale discriminator to learn domain-specific characteristics. Then the unlabeled data features with pseudodomain labels are fed to the discriminator to acquire domain-ambiguous information. With this adversarial learning strategy, the performance of the segmentation network is enhanced to segment unseen unlabeled data. Results: Experiments were conducted with two public annotated datasets as source datasets and one private dataset as the target dataset, where annotations were not used for the training process but only for evaluation. The 3D segmentation model achieves performance comparable to state-of-the-art pancreas segmentation methods on the source domain. After implementing our domain adaptation architecture, the average Dice similarity coefficient (DSC) of the segmentation model trained on the NIH-TCIA source dataset increases from 58.79% to 72.73% on the local hospital dataset, while the performance of the target-domain segmentation model transferred from the Medical Segmentation Decathlon (MSD) source dataset rises from 62.34% to 71.17%. Conclusions: Correlations of features across data domains are utilized to train the pancreas segmentation model on the unlabeled data domain, improving the generalization of the model. Our results demonstrate that the proposed method enables the segmentation model to make meaningful segmentations for data outside the training set. In the future, the proposed method has the potential to apply segmentation models trained on public datasets to clinical unannotated CT images from local hospitals, effectively assisting radiologists in clinical practice. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
46. Fast beta‐emitter Monte Carlo simulations and full patient dose calculations of targeted radionuclide therapy: introducing egs_mird.
- Author
-
Martinov, Martin P., Opara, Chidera, Thomson, Rowan M., and Lee, Ting‐Yim
- Subjects
MONTE Carlo method ,POSITRON emission tomography ,RADIOISOTOPES ,PROSTATE ,RECTUM ,COMPUTED tomography ,ELECTRON transport ,PROTON magnetic resonance spectroscopy - Abstract
Background: Targeted radionuclide therapy (TRT) is a fast-growing field garnering much interest, with several clinical trials currently underway and a steady increase in the development of treatment techniques. Unfortunately, within the field and in many clinical trials, the dosimetry calculation techniques used remain relatively simple, often using a mix of S-value calculations and kernel convolutions. Purpose: The common TRT calculation techniques, although very quick, can often ignore important aspects of patient anatomy and radionuclide distribution, as well as the interplay therein. This paper introduces egs_mird, a new Monte Carlo (MC) application built in EGSnrc, which allows users to model full patient tissue and density (using clinical CT images) and radionuclide distribution (using clinical PET images) for fast and detailed TRT dose calculation. Methods: The novel application egs_mird is introduced along with a general outline of the structure of egs_mird simulations. The general structure of the code and the track-length (TL) estimator scoring implementation for variance reduction are described. A new egs++ source class egs_internal_source, created to allow detailed patient-wide source distributions, and a modified version of egs_radionuclide_source, changed to work with egs_internal_source, are also described. The new code is compared to other MC calculations of S-value kernels of 131I, 90Y, and 177Lu in the literature, along with further self-validation using a histogram variant of the electron Fano test. Several full-patient 177Lu TRT prostate cancer treatment simulations are performed using a single set of patient DICOM CT and [18F]-DCFPyL PET data. Results: Good agreement is found between S-value kernels calculated using egs_mird with egs_internal_source and those found in the literature. Calculating 1000 doses (individual voxel uncertainties of ∼0.05%) in a voxel grid Fano test for monoenergetic 500 keV electrons and 177Lu electrons results in 94% and 99% of the doses being within 0.1% of the expected dose, respectively. For a hypothetical 177Lu treatment, patient prostate, rectum, bone marrow, and bladder dose volume histogram (DVH) results did not vary significantly when using the TL estimator and not modeling electron transport, when modeling bone marrow explicitly (rather than using generic tissue compositions), and when reducing the activity of voxels containing partial or full calcifications to half or none, respectively. Dose profiles through different regions demonstrate that there are some differences with model choices not seen in the DVHs. Simulations using the TL estimator can be completed in under 15 min (∼30 min when using standard interaction scoring). Conclusion: This work shows egs_mird to be a reliable MC code for computing TRT doses as realistically as the patient computed tomography (CT) and positron emission tomography (PET) data allow. Furthermore, the code can compute doses to sub-1% uncertainty within 15 min, with little to no optimization. Thus, this work supports the use of egs_mird for dose calculations in TRT. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
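For the egs_mird entry above: the abstract describes an egs++ source class (egs_internal_source) that distributes decays across the whole patient in proportion to a PET-derived activity map. The following is a minimal NumPy sketch of that sampling idea only, assuming a hypothetical activity array and uniform placement within each voxel; it is not EGSnrc code and not the authors' implementation.

```python
import numpy as np

def sample_decay_positions(activity, voxel_size, n_decays, rng=None):
    """Draw decay positions weighted by per-voxel activity.

    activity   : 3D array of relative activity (e.g., a PET volume), shape (nz, ny, nx)
    voxel_size : (dz, dy, dx) voxel dimensions in cm
    n_decays   : number of decay sites to sample
    """
    rng = np.random.default_rng() if rng is None else rng
    p = activity.ravel().astype(float)
    p /= p.sum()                                   # voxel selection probabilities
    flat_idx = rng.choice(p.size, size=n_decays, p=p)
    iz, iy, ix = np.unravel_index(flat_idx, activity.shape)
    offsets = rng.random((n_decays, 3))            # uniform position inside each voxel
    dz, dy, dx = voxel_size
    z = (iz + offsets[:, 0]) * dz
    y = (iy + offsets[:, 1]) * dy
    x = (ix + offsets[:, 2]) * dx
    return np.stack([x, y, z], axis=1)             # (n_decays, 3) positions in cm
```

In the actual application, each sampled position would seed the emission spectrum and electron transport (or track-length scoring) on the CT-derived density grid; here the positions are simply returned.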
47. Synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON): A statistical model based iterative image reconstruction method to eliminate limited-view artifacts and to mitigate the temporal-average artifacts in time-resolved CT.
- Author
-
Chen, Guang‐Hong and Li, Yinsheng
- Subjects
MEDICAL artifacts ,IMAGE reconstruction ,COMPUTED tomography ,STANDARD deviations ,DATA acquisition systems ,MEDICAL physics - Abstract
Purpose: In x-ray computed tomography (CT), a violation of the Tuy data sufficiency condition leads to limited-view artifacts. In some applications, it is desirable to use data corresponding to a narrow temporal window to reconstruct images with reduced temporal-average artifacts. However, the need to reduce temporal-average artifacts in practice may result in a violation of the Tuy condition and thus undesirable limited-view artifacts. In this paper, the authors present a new iterative reconstruction method, synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON), to eliminate limited-view artifacts using data acquired within an ultranarrow temporal window that severely violates the Tuy condition. Methods: In time-resolved contrast enhanced CT acquisitions, image contrast dynamically changes during data acquisition. Each image reconstructed from data acquired in a given temporal window represents one time frame and can be denoted as an image vector. Conventionally, each individual time frame is reconstructed independently. In this paper, all image frames are grouped into a spatial-temporal image matrix and are reconstructed together. Rather than the spatial and/or temporal smoothing regularizers commonly used in iterative image reconstruction, the nuclear norm of the spatial-temporal image matrix is used in SMART-RECON to regularize the reconstruction of all image time frames. This regularizer exploits the low-dimensional structure of the spatial-temporal image matrix to mitigate limited-view artifacts when an ultranarrow temporal window is desired to reduce temporal-average artifacts. Both numerical simulations in two-dimensional image slices with known ground truth and in vivo human subject data acquired in a contrast enhanced cone beam CT exam were used to validate the proposed SMART-RECON algorithm and to demonstrate its initial performance. Reconstruction errors and temporal fidelity of the reconstructed images were quantified using the relative root mean square error (rRMSE) and the universal quality index (UQI) in numerical simulations. The performance of the SMART-RECON algorithm was compared with that of the prior image constrained compressed sensing (PICCS) reconstruction, quantitatively in simulations and qualitatively in the human subject exam. Results: In numerical simulations, the 240° short-scan angular span was divided into four consecutive 60° angular subsectors. SMART-RECON reconstructed four high-temporal-fidelity images without limited-view artifacts. The average rRMSE is 16% and the UQIs are 0.96 and 0.95 for the two local regions of interest, respectively. In contrast, the corresponding average rRMSE and UQIs are 25%, 0.78, and 0.81, respectively, for the PICCS reconstruction. Note that only one filtered backprojection image can be reconstructed from the same data set, with an average rRMSE of 45% and UQIs of 0.71 and 0.79, respectively, to benchmark reconstruction accuracy. For in vivo contrast enhanced cone beam CT data acquired over a short-scan angular span of 200°, three 66° angular subsectors were used in SMART-RECON. The results demonstrated clear contrast differences across the three SMART-RECON reconstructed image volumes, without limited-view artifacts. In contrast, for the same angular sectors, PICCS could not reconstruct images that were free of limited-view artifacts and showed clear contrast differences across the three reconstructed image volumes.
Conclusions: In time-resolved CT, the proposed SMART-RECON algorithm provides a new means of eliminating limited-view artifacts using data acquired in an ultranarrow temporal window, corresponding to approximately 60° angular subsectors. [ABSTRACT FROM AUTHOR] (A minimal code sketch follows this entry.)
- Published
- 2015
- Full Text
- View/download PDF
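For the SMART-RECON entry above: the distinguishing regularizer is the nuclear norm of the spatial-temporal image matrix, which promotes a low-rank solution across time frames. The sketch below shows a generic proximal-gradient form of such a reconstruction, with singular-value thresholding as the nuclear-norm proximal step; the forward operator A, its adjoint At, the step size, and the regularization weight are placeholders, and this is not the authors' statistical model-based implementation.

```python
import numpy as np

def svt(X, tau):
    """Singular-value thresholding: proximal operator of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def low_rank_recon_step(X, A, At, sinograms, step, tau):
    """One proximal-gradient update of the spatial-temporal image matrix.

    X         : (n_voxels, n_frames) array, one column per time frame
    A, At     : single-frame forward projector and its adjoint (callables)
    sinograms : list of measured data vectors, one per frame
    """
    grad = np.zeros_like(X)
    for t, y in enumerate(sinograms):
        grad[:, t] = At(A(X[:, t]) - y)        # frame-wise data-fidelity gradient
    return svt(X - step * grad, step * tau)    # joint low-rank shrinkage across frames
```

Thresholding the singular values of the whole matrix couples the frames, which is what lets each ultranarrow-window frame borrow information from the others instead of being reconstructed alone from insufficient data.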
48. Segmentation of pulmonary nodules in CT images based on 3D‐UNET combined with three‐dimensional conditional random field optimization.
- Author
-
Wu, Wenhao, Gao, Lei, Duan, Huihong, Huang, Gang, Ye, Xiaodan, and Nie, Shengdong
- Subjects
MARKOV random fields ,RANDOM fields ,PULMONARY nodules ,IMAGE databases ,WEIGHT training ,LUNG cancer ,COMPUTED tomography ,OCCLUSION (Chemistry) - Abstract
Purpose: Pulmonary nodules are a potential manifestation of lung cancer. In computer‐aided diagnosis (CAD) of lung cancer, it is important to accurately extract the complete boundary of pulmonary nodules in computed tomography (CT) scans. This provides clinicians with important information, such as tumor size and density, that assists in subsequent diagnosis and treatment. Segmentation of lung nodules also plays a pivotal role in the molecular subtyping and radiomics of lung cancer. It is difficult for existing methods to use a single model to simultaneously segment the boundaries of multiple types of lung nodules in CT images. Method: To address this problem, this paper proposes a three‐dimensional (3D)‐UNET network model optimized by a 3D conditional random field (3D‐CRF) to segment pulmonary nodules. Building on 3D‐UNET, the 3D‐CRF is used to refine the sample output of the training set, so as to update the network weights during training, shorten the model training time, and reduce the model's loss. We selected 936 sets of pulmonary nodule data from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC‐IDRI) database to train and test the model, and used clinical data from partner hospitals for additional validation. Results and conclusions: The results show that our method is accurate and effective, and is particularly beneficial for segmenting adherent pulmonary nodules (juxta‐pleural and juxta‐vascular nodules) and ground‐glass pulmonary nodules (GGNs). [ABSTRACT FROM AUTHOR] (A minimal code sketch follows this entry.)
- Published
- 2020
- Full Text
- View/download PDF
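For the 3D-UNET entry above: the backbone is a 3D encoder-decoder with skip connections, refined by a 3D conditional random field. The PyTorch sketch below is a deliberately tiny two-level 3D U-Net in that spirit, with hypothetical channel counts and without the 3D-CRF refinement stage; it is not the authors' network.

```python
import torch
import torch.nn as nn

class DoubleConv3d(nn.Module):
    """Two 3x3x3 convolutions, each followed by batch norm and ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class TinyUNet3d(nn.Module):
    """Two-level 3D encoder-decoder with one skip connection."""
    def __init__(self, in_ch=1, n_classes=2, base=16):
        super().__init__()
        self.enc1 = DoubleConv3d(in_ch, base)
        self.down = nn.MaxPool3d(2)
        self.enc2 = DoubleConv3d(base, base * 2)
        self.up = nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = DoubleConv3d(base * 2, base)
        self.head = nn.Conv3d(base, n_classes, kernel_size=1)

    def forward(self, x):                       # x: (B, in_ch, D, H, W), even D/H/W
        e1 = self.enc1(x)
        e2 = self.enc2(self.down(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)                    # per-voxel class logits
```

The logits would be converted to per-voxel nodule probabilities (e.g., with a softmax) before any CRF-style refinement of the boundaries.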
49. Precise measurement of coronary stenosis diameter with CCTA using CT number calibration.
- Author
-
Chen, Zhennong, Contijoch, Francisco, Schluchter, Andrew, Grady, Leo, Schaap, Michiel, Stayman, Web, Pack, Jed, and McVeigh, Elliot
- Subjects
DIAMETER ,CORONARY artery stenosis ,INTRAVASCULAR ultrasonography ,CORONARY arteries ,CALIBRATION ,CARDIOGRAPHIC tomography ,COMPUTED tomography - Abstract
Purpose: Coronary x‐ray computed tomography angiography (CCTA) continues to develop as a noninvasive method for the assessment of coronary vessel geometry and the identification of physiologically significant lesions. The uncertainty of quantitative lesion diameter measurement due to limited spatial resolution and vessel motion reduces the accuracy of CCTA diagnoses. In this paper, we introduce a new technique called computed tomography (CT)‐number‐Calibrated Diameter to improve the accuracy of vessel and stenosis diameter measurements with CCTA. Methods: A calibration phantom containing cylindrical holes (diameters spanning from 0.8 mm through 4.0 mm), capturing the range of diameters found in human coronary vessels, was 3D‐printed. We also printed a human stenosis phantom with 17 tubular channels having the geometry of lesions derived from patient data. We acquired CT scans of the two phantoms with seven different imaging protocols. Calibration curves relating vessel intraluminal maximum voxel value (maximum CT number of a voxel, described in Hounsfield units, HU) to true diameter, and full‐width‐at‐half‐maximum (FWHM) to true diameter, were constructed for each CCTA protocol. In addition, we acquired scans with a small constant motion (15 mm/s) and used a motion‐correction reconstruction (Snapshot Freeze) algorithm to correct motion artifacts. We applied our technique to measure the lesion diameter of the 17 lesions in the stenosis phantom and compared the performance of CT‐number‐Calibrated Diameter to the ground truth diameter and a FWHM estimate. Results: In all cases, vessel intraluminal maximum voxel value vs diameter was found to have a simple functional form based on the two‐dimensional point spread function, yielding a constant maximum voxel value above a cutoff diameter and a decreasing maximum voxel value with decreasing diameter below the cutoff diameter. After normalization, focal spot size and reconstruction kernel were the principal determinants of the cutoff diameter and of the rate of maximum voxel value reduction with decreasing diameter. The small constant motion had a significant effect on the CT number calibration; however, the motion‐correction algorithm returned the maximum voxel value vs diameter curve to that of stationary vessels. The CT‐number‐calibration technique showed better performance than FWHM estimation of diameter, yielding high accuracy in the tested range (0.8 mm through 2.5 mm). We found a strong linear correlation between the smallest diameter in each of the 17 lesions measured by CT‐number‐Calibrated Diameter (DC) and the ground truth diameter (Dgt), (DC = 0.951 × Dgt + 0.023 mm, r = 0.998), with a slope very close to 1.0 and an intercept very close to 0 mm. Conclusions: Computed tomography‐number‐Calibrated Diameter is an effective method to enhance the accuracy of estimates of small vessel diameters and the degree of coronary stenosis in CCTA. [ABSTRACT FROM AUTHOR] (A minimal code sketch follows this entry.)
- Published
- 2019
- Full Text
- View/download PDF
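For the CT-number-calibrated-diameter entry above: below the cutoff diameter, the intraluminal maximum voxel value falls monotonically as the vessel gets smaller, so a phantom-derived calibration curve can be inverted to estimate diameter from a measured maximum HU. The sketch below shows only that lookup step; the calibration points are invented for illustration and do not reproduce the paper's point-spread-function-based functional form.

```python
import numpy as np

# Hypothetical calibration points: true phantom diameter (mm) vs.
# intraluminal maximum voxel value (HU), measured for one CCTA protocol.
cal_diam_mm = np.array([0.8, 1.0, 1.5, 2.0, 2.5])
cal_max_hu = np.array([180.0, 240.0, 330.0, 390.0, 420.0])

def calibrated_diameter(max_hu, cutoff_hu=420.0):
    """Estimate vessel diameter (mm) from the measured intraluminal maximum HU.

    Above the cutoff the maximum HU no longer varies with diameter, so the
    calibration is only informative below it (return None and fall back to
    a FWHM-type estimate in that case).
    """
    if max_hu >= cutoff_hu:
        return None
    return float(np.interp(max_hu, cal_max_hu, cal_diam_mm))

# Example: a measured maximum of 300 HU maps to roughly 1.3 mm
# with these made-up calibration points.
print(calibrated_diameter(300.0))
```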
50. StruNet: Perceptual and low‐rank regularized transformer for medical image denoising.
- Author
-
Ma, Yuhui, Yan, Qifeng, Liu, Yonghuai, Liu, Jiang, Zhang, Jiong, and Zhao, Yitian
- Subjects
IMAGE denoising ,COMPUTER-assisted image analysis (Medicine) ,DIAGNOSTIC imaging ,TRANSFORMER models ,OPTICAL coherence tomography ,COMPUTED tomography - Abstract
Background: Various types of noise artifacts inevitably exist in some medical imaging modalities due to limitations of imaging techniques, and these impair either clinical diagnosis or subsequent analysis. Recently, deep learning approaches have been rapidly developed and applied to medical images for noise removal and image quality enhancement. Nevertheless, due to the complexity and diversity of noise distributions in different medical imaging modalities, most existing deep learning frameworks are incapable of flexibly removing noise artifacts while retaining detailed information. As a result, it remains challenging to design an effective and unified medical image denoising method that works across a variety of noise artifacts and imaging modalities without requiring specialized knowledge of the task. Purpose: In this paper, we propose a novel encoder‐decoder architecture called Swin transformer‐based residual u‐shape Network (StruNet) for medical image denoising. Methods: StruNet adopts a well‐designed block as the backbone of the encoder‐decoder architecture, integrating Swin Transformer modules with a residual block in a parallel connection. The Swin Transformer modules effectively learn hierarchical representations of noise artifacts via a self‐attention mechanism in non‐overlapping shifted windows with cross‐window connections, while the residual block compensates for the loss of detailed information via a shortcut connection. Furthermore, a perceptual loss and a low‐rank regularization are incorporated into the loss function to constrain the denoising results toward feature‐level consistency and low‐rank characteristics, respectively. Results: To evaluate the performance of the proposed method, we conducted experiments on three medical imaging modalities: computed tomography (CT), optical coherence tomography (OCT), and optical coherence tomography angiography (OCTA). Conclusions: The results demonstrate that the proposed architecture yields promising performance in suppressing the multiform noise artifacts present in different imaging modalities. [ABSTRACT FROM AUTHOR] (A minimal code sketch follows this entry.)
- Published
- 2023
- Full Text
- View/download PDF
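For the StruNet entry above: the training objective combines a pixel-wise term with a perceptual (feature-space) term and a low-rank (nuclear-norm) regularizer. The PyTorch sketch below shows one plausible way to assemble such a loss; the L1 choices, the weights, the tensor on which the nuclear norm is taken, and the frozen feature extractor are all assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def combined_denoising_loss(denoised, clean, feat_extractor,
                            w_pixel=1.0, w_perceptual=0.1, w_lowrank=0.01):
    """Pixel + perceptual + nuclear-norm (low-rank) loss for a denoiser.

    denoised, clean : (B, C, H, W) tensors
    feat_extractor  : frozen pretrained network mapping images to feature maps
    """
    pixel = F.l1_loss(denoised, clean)
    # perceptual term: distance in the feature space of the frozen network
    perceptual = F.l1_loss(feat_extractor(denoised), feat_extractor(clean))
    # low-rank term: mean nuclear norm of each denoised image viewed as a matrix
    mats = denoised.flatten(1, 2)                           # (B, C*H, W)
    lowrank = torch.linalg.matrix_norm(mats, ord='nuc').mean()
    return w_pixel * pixel + w_perceptual * perceptual + w_lowrank * lowrank
```

The perceptual term pulls the output toward feature-level consistency with the clean target, while the nuclear-norm term penalizes high-rank (typically noise-like) structure in the output.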