144 results for "Myers KJ"
Search Results
2. Patient perspectives on restraint and seclusion experiences: a survey of former patients of New York State psychiatric facilities.
- Author
- Ray NK, Myers KJ, and Rappaport ME
- Abstract
The study reports the responses of 1,040 individuals to a mail survey related to their treatment and care while in psychiatric facilities in New York State. Approximately half of the respondents reported being subjected to restraint or seclusion during their hospital stays, and 94% of these individuals reported at least one complaint about the appropriateness of the use of restraint and seclusion and/or their care or monitoring during the restraint or seclusion episode. Use of restraints and seclusion also tended to be associated with more negative patient assessments of their overall hospital stay. Responses by former patients also provided strong affirmation of the importance patients attach to sincere staff attempts at less restrictive interventions prior to restraint and seclusion use. [ABSTRACT FROM AUTHOR]
- Published
- 1996
- Full Text
- View/download PDF
3. An Unusual Case of Infectious Mononucleosis: Treatment by Roentgen Therapy
- Author
- Myers KJ
- Subjects
- Pathology, Mononucleosis, Medical Records, Diagnosis, Differential, Biopsy, Ascites, Humans, Infectious Mononucleosis, Lymphatic Diseases, Cervix, Radiotherapy, Hodgkin Disease, Radiation Therapy, Lymph Nodes, Radiology
- Published
- 1957
4. Beyond videoconferencing: enhancing remote home assessments with 3D modeling technology.
- Author
- Kang J, Lee MJ, Kreider CM, LeBeau K, Findley K, Myers KJ, and Romero S
- Abstract
Purpose: Occupational therapists in the Veterans Health Administration have transitioned from in-person to videoconferencing for home assessments, benefiting Veterans living in remote and rural areas. However, videoconferencing has limitations, including a restricted field of view and poor video quality, which affect hazard-identification accuracy. This study aims to introduce and evaluate a three-dimensional (3D) model as an alternative technology for remote home assessments. Materials and Methods: We created 3D models using a 360-degree camera and a mobile app. Five occupational therapists individually completed virtual training and practice sessions to familiarize themselves with using the 3D model. Each participant then conducted a remote home assessment using the 3D model and completed questionnaires, the System Usability Scale (SUS), and semi-structured interviews. Results: Participants spent an average of 10 min training and practicing with the 3D model, and most reported maintained or increased confidence in using it. All participants successfully completed the assessments, which took an average of 17 min. They rated the 3D model as easy to use, with an average SUS score of 78.5. Participants preferred the 3D model over videoconferencing, noting that it effectively addressed current challenges, and expressed willingness to integrate it into clinical practice. Conclusion: This study demonstrates that 3D models offer a promising option for remote home assessments. With minimal training, occupational therapists could conduct more effective assessments. We recommend using 3D models to gain an initial understanding of the home environment before videoconferencing-based assessments, to enhance the remote assessment experience for occupational therapists and clients.
- Published
- 2024
- Full Text
- View/download PDF
5. Report on the AAPM grand challenge on deep generative modeling for learning medical image statistics.
- Author
- Deshpande R, Kelkar VA, Gotsis D, Kc P, Zeng R, Myers KJ, Brooks FJ, and Anastasio MA
- Abstract
Background: The findings of the 2023 AAPM Grand Challenge on Deep Generative Modeling for Learning Medical Image Statistics are reported in this Special Report. Purpose: The goal of this challenge was to promote the development of deep generative models (DGMs) for medical imaging and to emphasize the need for domain-relevant assessments via the analysis of relevant image statistics. Methods: As part of this Grand Challenge, a common training dataset and an evaluation procedure were developed for benchmarking DGMs for medical image synthesis. To create the training dataset, an established 3D virtual breast phantom was adapted. The resulting dataset comprised about 108,000 images of size 512 × 512. For the evaluation of submissions to the Challenge, an ensemble of 10,000 DGM-generated images from each submission was employed. The evaluation procedure consisted of two stages. In the first stage, a preliminary check for memorization and image quality (via the Fréchet Inception Distance [FID]) was performed. Submissions that passed the first stage were then evaluated for the reproducibility of image statistics corresponding to several feature families, including texture, morphology, image moments, fractal statistics, and skeleton statistics. A summary measure in this feature space was employed to rank the submissions. Additional analyses of the submissions were performed to assess DGM performance specific to individual feature families and the four classes in the training data, and to identify various artifacts. Results: Fifty-eight submissions from 12 unique users were received for this Challenge. Of the 12 unique users, 9 had submissions that passed the first stage of evaluation and were eligible for ranking. The top-ranked submission employed a conditional latent diffusion model, whereas the joint runners-up employed a generative adversarial network followed by another network for image super-resolution. In general, we observed that the overall ranking of the top 9 submissions according to our evaluation method (i) did not match the FID-based ranking, and (ii) differed with respect to individual feature families. Another important finding from our additional analyses was that different DGMs demonstrated similar kinds of artifacts. Conclusions: This Grand Challenge highlighted the need for domain-specific evaluation to further DGM design as well as deployment. It also demonstrated that the specification of a DGM may differ depending on its intended use. (© 2024 The Author(s). Medical Physics published by Wiley Periodicals LLC on behalf of American Association of Physicists in Medicine.)
- Published
- 2024
- Full Text
- View/download PDF
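The first-stage FID check described in the entry above reduces, for Gaussian-distributed feature vectors, to the closed-form Fréchet distance between two Gaussians. A minimal NumPy sketch of that formula (an illustration only, not the challenge's actual evaluation code, which computes features with an Inception network):

```python
import numpy as np

def sqrtm_psd(mat):
    """Matrix square root of a symmetric positive semi-definite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(mat)
    w = np.clip(w, 0.0, None)  # guard against tiny negative eigenvalues
    return (v * np.sqrt(w)) @ v.T

def frechet_distance(mu1, cov1, mu2, cov2):
    """Fréchet distance between N(mu1, cov1) and N(mu2, cov2):
    ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov1 cov2)^(1/2))."""
    s1 = sqrtm_psd(cov1)
    covmean = sqrtm_psd(s1 @ cov2 @ s1)  # symmetric rewriting of (cov1 @ cov2)^(1/2)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1) + np.trace(cov2) - 2.0 * np.trace(covmean))
```

With identity covariances the distance collapses to the squared distance between the means, which makes the function easy to sanity-check.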
6. Dissecting contributions of pulmonary arterial remodeling to right ventricular afterload in pulmonary hypertension.
- Author
- Neelakantan S, Mendiola EA, Zambrano B, Vang A, Myers KJ, Zhang P, Choudhary G, and Avazmohammadi R
- Abstract
Pulmonary hypertension (PH) is defined by an elevation in right ventricular (RV) afterload, characterized by increased hemodynamic pressure in the main pulmonary artery (PA). Elevations in RV afterload increase RV wall stress, resulting in RV remodeling and potentially RV failure. From a biomechanical standpoint, the primary drivers of RV afterload elevation are increased pulmonary vascular resistance (PVR) in the distal vasculature and decreased vessel compliance in the proximal PA. However, the individual contributions of the various vascular remodeling events to the progression of PA pressure elevation and altered vascular hemodynamics remain elusive. In this study, we used a subject-specific one-dimensional (1D) fluid-structure interaction (FSI) model to investigate the alteration of pulmonary hemodynamics in PH and to quantify the contributions of vascular stiffening and increased resistance to increased main pulmonary artery (MPA) pressure. We used a combination of subject-specific hemodynamic measurements, ex-vivo mechanical testing of arterial tissue specimens, and ex-vivo X-ray micro-tomography imaging to develop the 1D-FSI model and dissect the contribution of PA remodeling events to alterations in the MPA pressure waveform. Both the amplitude and the pulsatility of the MPA pressure waveform were analyzed. Our results indicated that increased distal resistance has the greatest effect on the increase in maximum MPA pressure, whereas increased stiffness causes significant elevations in the characteristic impedance. The method presented in this study serves as an essential step toward understanding the complex interplay between PA remodeling events that leads to the most severe adverse effects on RV function. Competing Interests: The authors declare no conflict of interest.
- Published
- 2024
- Full Text
- View/download PDF
7. Physics-informed motion registration of lung parenchyma across static CT images.
- Author
- Neelakantan S, Mukherjee T, Myers KJ, Rizi R, and Avazmohammadi R
- Abstract
Lung injuries, such as ventilator-induced lung injury and radiation-induced lung injury, can lead to heterogeneous alterations in the biomechanical behavior of the lungs. While imaging methods, e.g., X-ray and static computed tomography (CT), can point to regional alterations in lung structure between healthy and diseased tissue, they fall short of delineating timewise kinematic variations between the former and the latter. Image registration has gained recent interest as a tool to estimate the displacement experienced by the lungs during respiration via regional deformation metrics such as volumetric expansion and distortion. However, successful image registration commonly relies on a temporal series of image stacks with small displacements in the lungs across succeeding image stacks, which remains limited in static imaging. In this study, we present a finite element (FE) method to estimate strains from static images acquired at the end-expiration (EE) and end-inspiration (EI) timepoints, i.e., images with a large deformation between the two distant timepoints. Physiologically realistic loads were applied to the geometry obtained at EE to deform this geometry to match the geometry obtained at EI. The results indicated that the simulation could minimize the error between the two geometries. Using four-dimensional (4D) dynamic CT in a rat, the strain at an isolated transverse plane estimated by our method showed sufficient agreement with that estimated through non-rigid image registration using all the timepoints. Through the proposed method, we can estimate the lung deformation at any timepoint between EE and EI. The proposed method offers a tool to estimate timewise regional deformation in the lungs using only static images acquired at EE and EI.
- Published
- 2024
8. Assessing the impact of nodule features and software algorithm on pulmonary nodule measurement uncertainty for nodules sized 20 mm or less.
- Author
- Jirapatnakul A, Yip R, Myers KJ, Cai S, Henschke CI, and Yankelevitz D
- Abstract
Background: Measurements are not exact: if a measurement is repeated, a different value is obtained each time. The spread of these values is the measurement uncertainty. Understanding the measurement uncertainty of pulmonary nodules is important for proper interpretation of size and growth measurements. Larger measurement uncertainty may require longer follow-up intervals to be confident that any observed growth is due to actual growth rather than measurement uncertainty. We examined the influence of nodule features and software algorithm on the measurement uncertainty of small, solid pulmonary nodules. Methods: Volumes of 107 nodules were measured on 4-6 repeated computed tomography (CT) scans (Siemens Somatom AS, 100 kVp, 120 mA, 1.0 mm slice thickness reconstruction) prospectively obtained during CT-guided fine needle aspiration biopsy between 2015 and 2021 at the Department of Diagnostic, Molecular, and Interventional Radiology at the Icahn School of Medicine at Mount Sinai, using two different automated volumetric algorithms. For each, the coefficient of variation (standard deviation divided by the mean) of nodule volume measurements was determined. The following features were considered: diameter, location, vessel and pleural attachments, nodule surface area, and the extent of the nodule in the three acquisition dimensions of the scanner. Results: The median volume of the 107 nodules was 515.23 and 535.53 mm³ for algorithms #1 and #2, respectively, with excellent agreement (intraclass correlation coefficient = 0.98). The median coefficient of variation of nodule volume was low for both algorithms, but significantly different between them (4.6% vs. 8.7%, P<0.001). Both algorithms showed a trend of decreasing coefficient of variation with increasing nodule diameter, though it was significant only for algorithm #2. In a multiple linear regression model, the coefficient of variation of nodule volume was significantly associated with nodule volume (P=0.02), attachment to blood vessels (P=0.02), and nodule surface area (P=0.001) for algorithm #2. Correlations between the coefficient of variation (CoV) of nodule volume and the CoV of the x, y, and z measurements were, for algorithm #1, 0.29 (P=0.0021), 0.25 (P=0.009), and 0.80 (P<0.001), respectively, and for algorithm #2, 0.46 (P<0.001), 0.52 (P<0.001), and 0.58 (P<0.001), respectively. Conclusions: Even in the best-case scenario represented in this study, using the same measurement algorithm, scanner, and scanning protocol, considerable measurement uncertainty exists in volume measurement for nodules sized 20 mm or less. We found that measurement uncertainty was affected by interactions between nodule volume, algorithm, and shape complexity. Competing Interests: All authors have completed the ICMJE uniform disclosure form (available at https://qims.amegroups.com/article/view/10.21037/qims-23-1501/coif). A.J. received support as the Principal Investigator from a grant from the Prevent Cancer Foundation for this work. He also served as a co-chair, on an unpaid basis, of the RSNA QIBA Small Lung Nodule Volume committee from Jan 2023 to Feb 2024. R.Y. received support for this work from a grant from the Prevent Cancer Foundation. K.J.M. served as a co-chair, on an unpaid basis, of the RSNA QIBA Small Lung Nodule Volume committee from Jan 2023 to Feb 2024. She is the owner of Puente Solutions LLC, a for-profit company. She receives consulting fees from Annalise.ai; HeartLung; InformAI; Median iBiopsy; Malcova; Mt. Sinai School of Medicine; Sira; Voronoi; and VoxelCloud. C.I.H. is the President of, and serves on the board of, the Early Diagnosis and Treatment Research Foundation. She receives no compensation from the Foundation. The Foundation is established to provide grants for projects, conferences, and public databases for research on early diagnosis and treatment of diseases. C.I.H. is also a named inventor on a number of patents and patent applications relating to the evaluation of pulmonary nodules on CT scans of the chest, which are owned by Cornell Research Foundation (CRF). Since 2009, C.I.H. has not accepted any financial benefit from these patents, including royalties and any other proceeds related to the patents or patent applications owned by CRF. She is on the advisory board of Lunglife AI without compensation. D.Y. is a named inventor on a number of General Electric patents and patent applications related to the evaluation of chest diseases, including measurements of chest nodules. He has received financial compensation for the licensing of these patents. In addition, he is a consultant for and co-owner of Accumetra, a private company developing tools to improve the quality of CT imaging. He is on the advisory board of and owns equity in HeartLung, a company that develops software related to CT scans of the chest. He is on the medical advisory board of Median Technology, which is developing technology related to analyzing pulmonary nodules, and on the medical advisory board of Carestream, a company that develops radiography equipment. He is also on the advisory board of Lunglife AI. The other author has no conflicts of interest to declare. (2024 Quantitative Imaging in Medicine and Surgery. All rights reserved.)
- Published
- 2024
- Full Text
- View/download PDF
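The repeatability metric in the entry above, the coefficient of variation, is just the sample standard deviation of the repeated volume measurements divided by their mean. A minimal stdlib sketch (an illustration, not the study's software):

```python
import statistics

def coefficient_of_variation(volumes):
    """Spread of repeated volume measurements relative to their mean:
    sample standard deviation divided by the mean, returned as a fraction."""
    return statistics.stdev(volumes) / statistics.fmean(volumes)
```

For example, four repeated volumes of 500, 510, 520, and 530 mm³ give a CoV of about 2.5%, in the same ballpark as the 4.6% median reported for algorithm #1.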
9. Complete spatiotemporal quantification of cardiac motion in mice through enhanced acquisition and super-resolution reconstruction.
- Author
- Mukherjee T, Keshavarzian M, Fugate EM, Naeini V, Darwish A, Ohayon J, Myers KJ, Shah DJ, Lindquist D, Sadayappan S, Pettigrew RI, and Avazmohammadi R
- Abstract
The quantification of cardiac motion using cardiac magnetic resonance imaging (CMR) has shown promise as an early-stage marker for cardiovascular diseases. Despite the growing popularity of CMR-based myocardial strain calculations, measures of complete spatiotemporal strains (i.e., three-dimensional strains over the cardiac cycle) remain elusive. Complete spatiotemporal strain calculations are primarily hampered by poor spatial resolution, with the rapid motion of the cardiac wall also challenging the reproducibility of such strains. We hypothesize that a super-resolution reconstruction (SRR) framework that leverages combined image acquisitions at multiple orientations will enhance the reproducibility of complete spatiotemporal strain estimation. Two sets of CMR acquisitions were obtained for five wild-type mice, combining short-axis scans with radial and orthogonal long-axis scans. Super-resolution reconstruction, integrated with tissue classification, was performed to generate full four-dimensional (4D) images. The resulting enhanced and full 4D images enabled complete quantification of the motion in terms of 4D myocardial strains. Additionally, the effects of SRR in improving accurate strain measurements were evaluated using an in-silico heart phantom. The SRR framework revealed near-isotropic spatial resolution, high structural similarity, and minimal loss of contrast, which led to overall improvements in strain accuracy. In essence, a comprehensive methodology was generated to quantify complete and reproducible myocardial deformation, aiding in the much-needed standardization of complete spatiotemporal strain calculations. Competing Interests: S. Sadayappan provides consulting and collaborative research studies to the Leducq Foundation (CUREPLAN), Red Saree Inc., Greater Cincinnati Tamil Sangam, Novo Nordisk, Pfizer, AavantiBio, AstraZeneca, MyoKardia, Merck, and Amgen, but such work is unrelated to the content of this article. No other authors declare any conflicts of interest.
- Published
- 2024
- Full Text
- View/download PDF
10. MIDRC-MetricTree: a decision tree-based tool for recommending performance metrics in artificial intelligence-assisted medical image analysis.
- Author
- Drukker K, Sahiner B, Hu T, Kim GH, Whitney HM, Baughan N, Myers KJ, Giger ML, and McNitt-Gray M
- Abstract
Purpose: The Medical Imaging and Data Resource Center (MIDRC) was created to facilitate medical imaging machine learning (ML) research for tasks including early detection, diagnosis, prognosis, and assessment of treatment response related to the coronavirus disease 2019 pandemic and beyond. The purpose of this work was to create a publicly available metrology resource to assist researchers in evaluating the performance of their medical image analysis ML algorithms. Approach: An interactive decision tree, called MIDRC-MetricTree, has been developed, organized by the type of task that the ML algorithm was trained to perform. The criteria for this decision tree were that (1) users can select information such as the type of task, the nature of the reference standard, and the type of the algorithm output, and (2) based on the user input, recommendations are provided regarding appropriate performance evaluation approaches and metrics, including literature references and, when possible, links to publicly available software/code as well as short tutorial videos. Results: Five types of tasks were identified for the decision tree: (a) classification, (b) detection/localization, (c) segmentation, (d) time-to-event (TTE) analysis, and (e) estimation. As an example, the classification branch of the decision tree includes two-class (binary) and multiclass classification tasks and provides suggestions for methods, metrics, software/code recommendations, and literature references for situations where the algorithm produces either binary or non-binary (e.g., continuous) output and for reference standards with negligible or non-negligible variability and unreliability. Conclusions: The publicly available decision tree is a resource to assist researchers in conducting task-specific performance evaluations, including classification, detection/localization, segmentation, TTE, and estimation tasks. (© 2024 The Authors.)
- Published
- 2024
- Full Text
- View/download PDF
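As a toy illustration of the decision-tree idea in the entry above, a lookup from (task type, output type) to candidate metrics might look like the sketch below. The pairings shown are common choices in the evaluation literature, not the actual recommendations of the MIDRC-MetricTree tool:

```python
# Hypothetical miniature of a metric-recommendation tree: each leaf maps a
# (task, output type) pair to a list of candidate performance metrics.
RECOMMENDATIONS = {
    ("classification", "binary"): ["sensitivity and specificity", "accuracy"],
    ("classification", "continuous"): ["ROC AUC", "calibration curves"],
    ("detection", "marks"): ["FROC analysis"],
    ("segmentation", "mask"): ["Dice coefficient", "Hausdorff distance"],
    ("estimation", "continuous"): ["bias", "root-mean-square error"],
}

def recommend(task, output_type):
    """Return candidate metrics for a task/output pair, or a fallback message."""
    return RECOMMENDATIONS.get((task, output_type), ["no recommendation in this sketch"])
```

The real tool also branches on reference-standard variability and links out to software and tutorials; a dictionary keyed on more fields would extend the sketch in the obvious way.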
11. Longitudinal assessment of demographic representativeness in the Medical Imaging and Data Resource Center open data commons.
- Author
- Whitney HM, Baughan N, Myers KJ, Drukker K, Gichoya J, Bower B, Chen W, Gruszauskas N, Kalpathy-Cramer J, Koyejo S, Sá RC, Sahiner B, Zhang Z, and Giger ML
- Abstract
Purpose: The Medical Imaging and Data Resource Center (MIDRC) open data commons was launched to accelerate the development of artificial intelligence (AI) algorithms to help address the COVID-19 pandemic. The purpose of this study was to quantify the longitudinal representativeness of the demographic characteristics of the primary MIDRC dataset compared to the United States general population (US Census) and to COVID-19 positive case counts from the Centers for Disease Control and Prevention (CDC). Approach: The Jensen-Shannon distance (JSD), a measure of the similarity of two distributions, was used to longitudinally measure the representativeness of the distribution of (1) all unique patients in the MIDRC data relative to the 2020 US Census and (2) all unique COVID-19 positive patients in the MIDRC data relative to the case counts reported by the CDC. The distributions were evaluated in the demographic categories of age at index, sex, race, ethnicity, and the combination of race and ethnicity. Results: Representativeness of the MIDRC data by ethnicity and by the combination of race and ethnicity was impacted by the percentage of CDC case counts for which these attributes were not reported. The distributions by sex and race have retained their level of representativeness over time. Conclusion: The representativeness of the open medical imaging datasets in the curated public data commons at MIDRC has evolved over time as the number of contributing institutions and the overall number of subjects have grown. The use of metrics such as the JSD to support measurement of representativeness is one step needed for fair and generalizable AI algorithm development. (© 2023 The Authors.)
- Published
- 2023
- Full Text
- View/download PDF
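The Jensen-Shannon distance used in the entry above can be computed directly from two discrete demographic distributions. A minimal stdlib sketch (base-2 logarithm, so the distance lies in [0, 1]; an illustration, not the study's code):

```python
from math import log2, sqrt

def jensen_shannon_distance(p, q):
    """Base-2 Jensen-Shannon distance between two discrete distributions.
    Returns 0 for identical distributions and 1 for completely disjoint ones."""
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]

    def kl(a, b):  # Kullback-Leibler divergence, skipping zero-probability bins
        return sum(ai * log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return sqrt((kl(p, m) + kl(q, m)) / 2.0)
```

For example, comparing a dataset's age-bin proportions against census proportions bin by bin yields a single similarity score that can be tracked over time, as the study does.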
12. Sequestration of imaging studies in MIDRC: stratified sampling to balance demographic characteristics of patients in a multi-institutional data commons.
- Author
- Baughan N, Whitney HM, Drukker K, Sahiner B, Hu T, Kim GH, McNitt-Gray M, Myers KJ, and Giger ML
- Abstract
Purpose: The Medical Imaging and Data Resource Center (MIDRC) is a multi-institutional effort to accelerate medical imaging machine intelligence research and create a publicly available image repository/commons as well as a sequestered commons for performance evaluation and benchmarking of algorithms. After de-identification, approximately 80% of the medical images and associated metadata become part of the open commons and 20% are sequestered from the open commons. To ensure that both commons are representative of the population available, we introduced a stratified sampling method to balance the demographic characteristics across the two datasets. Approach: Our method uses multi-dimensional stratified sampling, in which several demographic variables of interest are sequentially used to separate the data into individual strata, each representing a unique combination of variables. Within each resulting stratum, patients are assigned to the open or sequestered commons. This algorithm was applied to an example dataset containing 5000 patients using the variables of race, age, sex at birth, ethnicity, COVID-19 status, and image modality, and the resulting demographic distributions were compared to naïve random sampling of the dataset over 2000 independent trials. Results: The resulting prevalence of each demographic variable matched the prevalence in the input dataset within one standard deviation. Mann-Whitney U test results supported the hypothesis that sequestration by stratified sampling provided more balanced subsets than naïve randomization, except for demographic subcategories with very low prevalence. Conclusions: The developed multi-dimensional stratified sampling algorithm can partition a large dataset while maintaining balance across several variables, superior to the balance achieved by naïve randomization. (© 2023 The Authors.)
- Published
- 2023
- Full Text
- View/download PDF
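The multi-dimensional stratified sampling described in the entry above can be sketched as grouping patients by a tuple of demographic variables and splitting each stratum roughly 80/20. This is an illustrative simplification with hypothetical field names, not the MIDRC implementation:

```python
import random
from collections import defaultdict

def stratified_split(patients, keys, open_fraction=0.8, seed=0):
    """Group patients into strata by the given demographic keys, then split
    each stratum ~80/20 into open vs. sequestered sets."""
    strata = defaultdict(list)
    for patient in patients:
        strata[tuple(patient[k] for k in keys)].append(patient)

    rng = random.Random(seed)  # fixed seed for a reproducible partition
    open_set, sequestered = [], []
    for group in strata.values():
        rng.shuffle(group)
        cut = round(len(group) * open_fraction)
        open_set.extend(group[:cut])
        sequestered.extend(group[cut:])
    return open_set, sequestered
```

Because the split is applied within each stratum, the prevalence of every key combination is preserved in both partitions, up to rounding in small strata, which mirrors the paper's finding that very low-prevalence subcategories are the hard case.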
13. Discrimination tasks in simulated low-dose CT noise.
- Author
- Abbey CK, Samuelson FW, Zeng R, Boone JM, Myers KJ, and Eckstein MP
- Subjects
- Humans, Phantoms, Imaging, Algorithms, Image Processing, Computer-Assisted methods, Tomography, X-Ray Computed
- Abstract
Background: This study reports the results of a set of discrimination experiments using simulated images that represent the appearance of subtle lesions in low-dose computed tomography (CT) of the lungs. Noise in these images has a characteristic ramp spectrum before apodization by noise-control filters. We consider three specific diagnostic features that determine whether a lesion is considered malignant or benign, two system-resolution levels, and four apodization levels, for a total of 24 experimental conditions. Purpose: The goal of the investigation is to better understand how well human observers perform subtle discrimination tasks like these, and the mechanisms of that performance. We use a forced-choice psychophysical paradigm to estimate observer efficiency and classification images. These measures quantify how effectively subjects can read the images, and how they use images to perform discrimination tasks across the different imaging conditions. Materials and Methods: The simulated CT images used as stimuli in the psychophysical experiments are generated from high-resolution objects passed through a modulation transfer function (MTF) before down-sampling to the image-pixel grid. Acquisition noise is then added with a ramp noise-power spectrum (NPS), with subsequent smoothing through apodization filters. The features considered are lesion size, an indistinct lesion boundary, and a nonuniform lesion interior. System resolution is implemented by an MTF with resolution (10% max.) of 0.47 or 0.58 cyc/mm. Apodization is implemented by a Shepp-Logan filter (sinc profile) with various cutoffs. Six medically naïve subjects participated in the psychophysical studies, entailing training and testing components for each condition. Training consisted of staircase procedures to find the 80%-correct threshold for each subject, and testing involved 2000 psychophysical trials at the threshold value for each subject. Human-observer performance is compared to the ideal observer to generate estimates of task efficiency. The significance of imaging factors is assessed using ANOVA. Classification images are used to estimate the linear template weights used by subjects to perform these tasks. Classification-image spectra are used to analyze subject weights in the spatial-frequency domain. Results: Overall, average observer efficiency is relatively low in these experiments (10%-40%) relative to detection and localization studies reported previously. We find significant effects of feature type and apodization level on observer efficiency. Somewhat surprisingly, system resolution is not a significant factor. Efficiency effects of the different features appear to be well explained by the profile of the linear templates in the classification images. Increasingly strong apodization is found both to increase the classification-image weights and to increase the mean frequency of the classification-image spectra. A secondary analysis of "unapodized" classification images shows that this is largely due to observers undoing (inverting) the effects of the apodization filters. Conclusions: These studies demonstrate that human observers can be relatively inefficient at feature-discrimination tasks in ramp-spectrum noise. Observers appear to adapt to the frequency suppression implemented in apodization filters, but there are residual effects that are not explained by spatial weighting patterns. The studies also suggest that the mechanisms for improving performance through the application of noise-control filters may require further investigation. (© 2023 The Authors. Medical Physics published by Wiley Periodicals LLC on behalf of American Association of Physicists in Medicine.)
- Published
- 2023
- Full Text
- View/download PDF
14. Assessing the Ability of Generative Adversarial Networks to Learn Canonical Medical Image Statistics.
- Author
- Kelkar VA, Gotsis DS, Brooks FJ, Kc P, Myers KJ, Zeng R, and Anastasio MA
- Abstract
In recent years, generative adversarial networks (GANs) have gained tremendous popularity for potential applications in medical imaging, such as medical image synthesis, restoration, reconstruction, and translation, as well as objective image quality assessment. Despite the impressive progress in generating high-resolution, perceptually realistic images, it is not clear whether modern GANs reliably learn the statistics that are meaningful to a downstream medical imaging application. In this work, the ability of a state-of-the-art GAN to learn the statistics of canonical stochastic image models (SIMs) relevant to objective assessment of image quality is investigated. It is shown that although the employed GAN successfully learned several basic first- and second-order statistics of the specific medical SIMs under consideration and generated images with high perceptual quality, it failed to correctly learn several per-image statistics pertinent to these SIMs, highlighting the urgent need to assess medical image GANs in terms of objective measures of image quality.
- Published
- 2023
- Full Text
- View/download PDF
15. Translating radiology reports into plain language using ChatGPT and GPT-4 with prompt learning: results, limitations, and potential.
- Author
- Lyu Q, Tan J, Zapadka ME, Ponnatapura J, Niu C, Myers KJ, Wang G, and Whitlow CT
- Abstract
The large language model ChatGPT has drawn extensive attention because of its human-like expression and reasoning abilities. In this study, we investigate the feasibility of using ChatGPT to translate radiology reports into plain language for patients and healthcare providers, so that they are better informed and can receive improved healthcare. Radiology reports from 62 low-dose chest computed tomography lung cancer screening scans and 76 brain magnetic resonance imaging metastases screening scans were collected in the first half of February for this study. According to evaluation by radiologists, ChatGPT can successfully translate radiology reports into plain language, with an average score of 4.27 on a five-point scale and averages of 0.08 places of missing information and 0.07 places of misinformation per report. The suggestions provided by ChatGPT are generally relevant, such as keeping up with follow-up appointments with doctors and closely monitoring any symptoms, and for about 37% of the 138 cases in total, ChatGPT offers specific suggestions based on findings in the report. ChatGPT also shows some randomness in its responses, with occasionally over-simplified or neglected information, which can be mitigated with a more detailed prompt. Furthermore, ChatGPT's results are compared with those of the newly released large language model GPT-4, showing that GPT-4 can significantly improve the quality of translated reports. Our results show that it is feasible to utilize large language models in clinical education, and further efforts are needed to address limitations and maximize their potential. (© 2023. The Author(s).)
- Published
- 2023
- Full Text
- View/download PDF
16. Development of metaverse for intelligent healthcare.
- Author
-
Wang G, Badal A, Jia X, Maltz JS, Mueller K, Myers KJ, Niu C, Vannier M, Yan P, Yu Z, and Zeng R
- Abstract
The metaverse integrates physical and virtual realities, enabling humans and their avatars to interact in an environment supported by technologies such as high-speed internet, virtual reality, augmented reality, mixed and extended reality, blockchain, digital twins and artificial intelligence (AI), all enriched by effectively unlimited data. The metaverse first emerged in social media and entertainment platforms, but its extension to healthcare could have a profound impact on clinical practice and human health. As a group of academic, industrial, clinical and regulatory researchers, we identify unique opportunities for metaverse approaches in the healthcare domain. A metaverse of 'medical technology and AI' (MeTAI) can facilitate the development, prototyping, evaluation, regulation, translation and refinement of AI-based medical practice, especially medical imaging-guided diagnosis and therapy. Here, we present metaverse use cases, including virtual comparative scanning, raw data sharing, augmented regulatory science and metaversed medical intervention. We discuss relevant issues on the ecosystem of the MeTAI metaverse including privacy, security and disparity. We also identify specific action items for coordinated efforts to build the MeTAI metaverse for improved healthcare quality, accessibility, cost-effectiveness and patient satisfaction., Competing Interests: Competing interests The authors declare no competing interests.
- Published
- 2022
- Full Text
- View/download PDF
17. Reaching the "Hard-to-Reach" Sexual and Gender Diverse Communities for Population-Based Research in Cancer Prevention and Control: Methods for Online Survey Data Collection and Management.
- Author
-
Myers KJ, Jaffe T, Kanda DA, Pankratz VS, Tawfik B, Wu E, McClain ME, Mishra SI, Kano M, Madhivanan P, and Adsul P
- Abstract
Purpose: Around 5% of the United States (U.S.) population identifies as Sexual and Gender Diverse (SGD), yet there is limited research around cancer prevention among these populations. We present the multi-pronged, low-cost, and systematic recruitment strategies used to reach SGD communities in New Mexico (NM), a largely rural, racially/ethnically "majority-minority" state., Methods: Our recruitment used: (1) the Every Door Direct Mail (EDDM) program of the United States Postal Service (USPS); (2) Google and Facebook advertisements; (3) organizational outreach via emails to publicly available SGD-friendly business contacts; and (4) personal outreach via flyers at clinical and community settings across NM. Guided by previous research, we provide detailed descriptions of the strategies used to check for fraudulent and suspicious online responses and ensure data integrity., Results: A total of 27,369 flyers were distributed through the EDDM program and 436,177 impressions were made through the Google and Facebook ads. We received a total of 6,920 responses on the eligibility survey. For the 5,037 eligible respondents, we received 3,120 (61.9%) complete responses. Of these, 13% (406/3120) were fraudulent/suspicious based on research-informed criteria and were removed. Final analysis included 2,534 respondents, of whom the majority (59.9%) reported hearing about the study from social media. Of the respondents, 49.5% were between 31 and 40 years of age, 39.5% were Black, Hispanic, or American Indian/Alaskan Native, and 45.9% had an annual household income below $50,000. Over half (55.3%) were assigned male, 40.4% were assigned female, and 4.3% were assigned intersex at birth. Transgender respondents made up 10.6% (n=267) of the sample. In terms of sexual orientation, 54.1% (n=1371) reported being gay or lesbian, 30% (n=749) bisexual, and 15.8% (n=401) queer.
A total of 756 (29.8%) respondents reported receiving a cancer diagnosis and among screen-eligible respondents, 66.2% reported ever having a Pap, 78.6% reported ever having a mammogram, and 84.1% reported ever having a colonoscopy. Over half of eligible respondents (58.7%) reported receiving Human Papillomavirus vaccinations., Conclusion: Study findings showcase effective strategies to reach communities, maximize data quality, and prevent the misrepresentation of data critical to improve health in SGD communities., Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2022 Myers, Jaffe, Kanda, Pankratz, Tawfik, Wu, McClain, Mishra, Kano, Madhivanan and Adsul.)
- Published
- 2022
- Full Text
- View/download PDF
18. Providers' Shift to Telerehabilitation at the U.S. Veterans Health Administration During COVID-19: Practical Applications.
- Author
-
Kreider CM, Hale-Gallardo J, Kramer JC, Mburu S, Slamka MR, Findley KE, Myers KJ, and Romero S
- Subjects
- Humans, Pandemics, Veterans Health, COVID-19, Telemedicine, Telerehabilitation
- Abstract
Telerehabilitation provides Veteran patients with necessary rehabilitation treatment. It enhances care continuity and reduces travel time for Veterans who face long distances to receive care at a Veterans Health Administration (VHA) medical facility. The onset of the COVID-19 pandemic necessitated a sudden shift to telehealth-including telerehabilitation, where a paucity of data-driven guidelines exist that are specific to the practicalities entailed in telerehabilitation implementation. This paper explicates gains in practical knowledge for implementing telerehabilitation that were accelerated during the rapid shift of VHA healthcare from out-patient rehabilitation services to telerehabilitation during the COVID-19 pandemic. Group and individual interviews with 12 VHA rehabilitation providers were conducted to examine, in-depth, the providers' implementation of telerehabilitation. Thematic analysis yielded nine themes: (i) Willingness to Give Telerehabilitation a Chance: A Key Ingredient; (ii) Creativity and Adaptability: Critical Attributes for Telerehabilitation Providers; (iii) Adapting Assessments; (iv) Adapting Interventions; (v) Role and Workflow Adaptations; (vi) Appraising for Self the Feasibility of the Telerehabilitation Modality; (vii) Availability of Informal, In-Person Support Improves Feasibility of Telerehabilitation; (viii) Shifts in the Expectations by the Patients and by the Provider; and (ix) Benefit and Anticipated Future of Telerehabilitation. This paper contributes an in-depth understanding of clinical reasoning considerations, supportive strategies, and practical approaches for engaging Veterans in telerehabilitation., Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2022 Kreider, Hale-Gallardo, Kramer, Mburu, Slamka, Findley, Myers and Romero.)
- Published
- 2022
- Full Text
- View/download PDF
19. Fifty years of SPIE Medical Imaging proceedings papers.
- Author
-
Nishikawa RM, Deserno TM, Madabhushi A, Krupinski EA, Summers RM, Hoeschen C, Mello-Thoms C, Myers KJ, Kupinski MA, and Siewerdsen JH
- Abstract
Purpose: To commemorate the 50th anniversary of the first SPIE Medical Imaging meeting, we highlight some of the important publications published in the conference proceedings. Approach: We determined the top cited and downloaded papers. We also asked members of the editorial board of the Journal of Medical Imaging to select their favorite papers. Results: There was very little overlap between the three methods of highlighting papers. The downloads were mostly recent papers, whereas the favorite papers were mostly older papers. Conclusions: The three different methods combined provide an overview of the highlights of the papers published in the SPIE Medical Imaging conference proceedings over the last 50 years., (© 2022 Society of Photo-Optical Instrumentation Engineers (SPIE).)
- Published
- 2022
- Full Text
- View/download PDF
20. Performance of a deep learning-based CT image denoising method: Generalizability over dose, reconstruction kernel, and slice thickness.
- Author
-
Zeng R, Lin CY, Li Q, Jiang L, Skopec M, Fessler JA, and Myers KJ
- Subjects
- Algorithms, Humans, Image Processing, Computer-Assisted, Neural Networks, Computer, Phantoms, Imaging, Radiation Dosage, Signal-To-Noise Ratio, Tomography, X-Ray Computed, Deep Learning
- Abstract
Purpose: Deep learning (DL) is rapidly finding applications in low-dose CT image denoising. While DL methods have the potential to improve image quality (IQ) over the filtered back projection (FBP) method and to produce images quickly, the performance generalizability of these data-driven methods is not yet fully understood. The main purpose of this work is to investigate the performance generalizability of a low-dose CT image denoising neural network on data acquired under different scan conditions, particularly with respect to three parameters: reconstruction kernel, slice thickness, and dose (noise) level. A secondary goal is to identify any underlying data property associated with the CT scan settings that might help predict the generalizability of the denoising network., Methods: We selected the residual encoder-decoder convolutional neural network (REDCNN) as an example of a low-dose CT image denoising technique in this work. To study how the network generalizes over the three imaging parameters, we grouped the CT volumes in the Low-Dose Grand Challenge (LDGC) data into three pairs of training datasets according to their imaging parameters, changing only one parameter in each pair. We trained REDCNN with them to obtain six denoising models. We tested each denoising model on datasets whose dose, reconstruction kernel, and slice thickness either matched or mismatched those of its training set, to evaluate the resulting changes in denoising performance. Denoising performance was evaluated on patient scans, simulated phantom scans, and physical phantom scans using IQ metrics including mean-squared error (MSE), contrast-dependent modulation transfer function (MTF), pixel-level noise power spectrum (pNPS), and low-contrast lesion detectability (LCD)., Results: REDCNN had larger MSE when the testing data differed from the training data in reconstruction kernel, but no significant MSE difference when the slice thickness of the testing data varied. 
REDCNN trained with quarter-dose data had slightly worse MSE (by 17%-80%) in denoising higher-dose images than REDCNN trained with mixed-dose data. The MTF tests showed that REDCNN trained with the two reconstruction kernels and slice thicknesses yielded images of similar resolution. However, REDCNN trained with mixed-dose data preserved low-contrast resolution better than REDCNN trained with quarter-dose data. In the pNPS test, it was found that REDCNN trained with smooth-kernel data could not remove high-frequency noise in test data reconstructed with a sharp kernel, possibly because the lack of high-frequency noise in the smooth-kernel data limited the trained model's ability to remove it. Finally, in the LCD test, REDCNN improved lesion detectability over the original FBP images regardless of whether the training and testing data had matching reconstruction kernels., Conclusions: REDCNN is observed to be poorly generalizable between reconstruction kernels, more robust in denoising data of arbitrary dose levels when trained with mixed-dose data, and not highly sensitive to slice thickness. It is known that the reconstruction kernel affects the in-plane pNPS shape of a CT image, whereas slice thickness and dose level do not, so it is possible that the generalizability of this CT image denoising network correlates strongly with the pNPS similarity between the testing and training data., (© 2021 American Association of Physicists in Medicine. This article has been contributed to by US Government employees and their work is in the public domain in the USA.)
- Published
- 2022
- Full Text
- View/download PDF
21. Special Issue Editorial: The SPIE Medical Imaging Symposium Celebrates 50 Years.
- Author
-
Myers KJ and Giger ML
- Abstract
The article introduces the JMI Special Issue Celebrating 50 Years of SPIE Medical Imaging., (© 2022 Society of Photo-Optical Instrumentation Engineers (SPIE).)
- Published
- 2022
- Full Text
- View/download PDF
22. Objective Task-Based Evaluation of Artificial Intelligence-Based Medical Imaging Methods: Framework, Strategies, and Role of the Physician.
- Author
-
Jha AK, Myers KJ, Obuchowski NA, Liu Z, Rahman MA, Saboury B, Rahmim A, and Siegel BA
- Subjects
- Humans, Positron-Emission Tomography, Artificial Intelligence, Physicians
- Abstract
Artificial intelligence-based methods are showing promise in medical imaging applications. There is substantial interest in clinical translation of these methods, requiring that they be evaluated rigorously. We lay out a framework for objective task-based evaluation of artificial intelligence methods. We provide a list of available tools to conduct this evaluation. We outline the important role of physicians in conducting these evaluation studies. The examples in this article are proposed in the context of PET scans with a focus on evaluating neural network-based methods. However, the framework is also applicable to evaluate other medical imaging modalities and other types of artificial intelligence methods., Competing Interests: Disclosure Nancy Obuchowski is a statistician for the Quantitative Imaging Biomarkers Alliance (QIBA). Other authors have no relevant financial disclosures., (Copyright © 2021 Elsevier Inc. All rights reserved.)
- Published
- 2021
- Full Text
- View/download PDF
23. CT metal artifact reduction algorithms: Toward a framework for objective performance assessment.
- Author
-
Vaishnav JY, Ghammraoui B, Leifer M, Zeng R, Jiang L, and Myers KJ
- Subjects
- Algorithms, Metals, Phantoms, Imaging, Artifacts, Tomography, X-Ray Computed
- Abstract
Purpose: Although several metal artifact reduction (MAR) algorithms for computed tomography (CT) scanning are commercially available, no quantitative, rigorous, and reproducible method exists for assessing their performance. The lack of assessment methods poses a challenge to regulators, consumers, and industry. We explored a phantom-based framework for assessing an important aspect of MAR performance: how applying MAR in the presence of metal affects model observer performance at a low-contrast detectability (LCD) task. This work is, to our knowledge, the first model observer-based framework for the evaluation of MAR algorithms in the published literature., Methods: We designed a numerical head phantom with metal implants. To incorporate an element of randomness, the phantom included a rotatable inset with an inhomogeneous background. We generated simulated projection data for the phantom. We applied two variants of a simple MAR algorithm, sinogram inpainting, to the projection data, which we then reconstructed using filtered backprojection. To assess how MAR affected observer performance, we examined the detectability of a signal at the center of a region of interest (ROI) by a channelized Hotelling observer (CHO). As a figure of merit, we used the area under the ROC curve (AUC)., Results: We used simulation to test our framework on two variants of the MAR technique of sinogram inpainting. We found that our method was able to resolve the difference between the two MAR algorithms' effects on LCD task performance, as well as the difference in task performance with and without MAR applied., Conclusion: We laid out a phantom-based framework for objective assessment of how MAR impacts low-contrast detectability, which we tested on two MAR algorithms. Our results demonstrate the importance of testing MAR performance over a range of object and imaging parameters, since applying MAR does not always improve the quality of an image for a given diagnostic task. 
Our framework is an initial step toward developing a more comprehensive objective assessment method for MAR, which would require developing additional phantoms and methods specific to various clinical applications of MAR, and increasing study efficiency., (Published 2020. This article is a U.S. Government work and is in the public domain in the USA. Medical Physics published by Wiley Periodicals LLC on behalf of American Association of Physicists in Medicine.)
- Published
- 2020
- Full Text
- View/download PDF
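The channelized Hotelling observer (CHO) used as the model observer in the abstract above can be sketched compactly. The following is an illustrative numpy toy, not the paper's code: rotationally symmetric Laguerre-Gauss channels, a Hotelling template trained on signal-present and signal-absent ROIs, and the AUC estimated with the Mann-Whitney statistic. The signal shape, noise model, and channel parameters are all invented for illustration.

```python
import numpy as np

def laguerre_gauss_channels(size, n_channels=5, a=8.0):
    """Rotationally symmetric Laguerre-Gauss channels, one per column."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = 2.0 * np.pi * (x**2 + y**2) / a**2
    chans = []
    for j in range(n_channels):
        # Laguerre polynomial L_j(r2) via the standard three-term recurrence
        L, Lm1 = np.ones_like(r2), np.zeros_like(r2)
        for k in range(1, j + 1):
            L, Lm1 = ((2 * k - 1 - r2) * L - (k - 1) * Lm1) / k, L
        u = np.exp(-r2 / 2.0) * L
        chans.append(u.ravel() / np.linalg.norm(u))
    return np.stack(chans, axis=1)               # shape (size*size, n_channels)

def cho_auc(sp, sa, channels):
    """Train a channelized Hotelling observer and return its empirical AUC."""
    v_sp = sp.reshape(len(sp), -1) @ channels    # channel outputs, signal-present
    v_sa = sa.reshape(len(sa), -1) @ channels    # channel outputs, signal-absent
    S = 0.5 * (np.cov(v_sp.T) + np.cov(v_sa.T))  # pooled channel covariance
    w = np.linalg.solve(S, v_sp.mean(0) - v_sa.mean(0))   # Hotelling template
    t_sp, t_sa = v_sp @ w, v_sa @ w
    # Mann-Whitney estimate of AUC = P(t_sp > t_sa)
    return float(np.mean(t_sp[:, None] > t_sa[None, :]))

rng = np.random.default_rng(1)
size, n = 32, 400
yy, xx = np.mgrid[:size, :size] - (size - 1) / 2.0
signal = 0.25 * np.exp(-(xx**2 + yy**2) / (2.0 * 3.0**2))  # faint Gaussian blob
sp = rng.normal(0.0, 1.0, (n, size, size)) + signal        # signal-present ROIs
sa = rng.normal(0.0, 1.0, (n, size, size))                 # signal-absent ROIs
# Resubstitution estimate (train and test on the same ROIs) -- fine for a toy
auc = cho_auc(sp, sa, laguerre_gauss_channels(size))
```

In a study like the one above, the same figure of merit would be computed on reconstructions with and without MAR applied to compare the algorithms' effects on low-contrast detectability.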
24. Computational reader design and statistical performance evaluation of an in-silico imaging clinical trial comparing digital breast tomosynthesis with full-field digital mammography.
- Author
-
Zeng R, Samuelson FW, Sharma D, Badal A, Christian GG, Glick SJ, Myers KJ, and Badano A
- Abstract
A recent study reported on an in-silico imaging trial that evaluated the performance of digital breast tomosynthesis (DBT) as a replacement for full-field digital mammography (FFDM) for breast cancer screening. In this in-silico trial, the whole imaging chain was simulated, including the breast phantom generation, the x-ray transport process, and computational readers for image interpretation. We focus on the design and performance characteristics of the computational reader in the above-mentioned trial. Location-known lesion (spiculated mass and clustered microcalcifications) detection tasks were used to evaluate the imaging system performance. The computational readers were designed based on the mechanism of a channelized Hotelling observer (CHO), and the reader models were selected to trend human performance. Parameters were tuned to ensure stable lesion detectability. A convolutional CHO that can adapt a round channel function to irregular lesion shapes was compared with the original CHO and was found to be suitable for detecting clustered microcalcifications but was less optimal in detecting spiculated masses. A three-dimensional CHO that operated on the multiple slices was compared with a two-dimensional (2-D) CHO that operated on three versions of 2-D slabs converted from the multiple slices and was found to be optimal in detecting lesions in DBT. Multireader multicase reader output analysis was used to analyze the performance difference between FFDM and DBT for various breast and lesion types. The results showed that DBT was more beneficial in detecting masses than detecting clustered microcalcifications compared with FFDM, consistent with the finding in a clinical imaging trial. Statistical uncertainty smaller than 0.01 standard error for the estimated performance differences was achieved with a dataset containing approximately 3000 breast phantoms. 
The computational reader design methodology presented provides evidence that model observers can be useful in-silico tools for supporting the performance comparison of breast imaging systems., (© 2020 Society of Photo-Optical Instrumentation Engineers (SPIE).)
- Published
- 2020
- Full Text
- View/download PDF
25. Human observer templates for lesion discrimination tasks.
- Author
-
Abbey CK, Samuelson FW, Zeng R, Boone JM, Eckstein MP, and Myers KJ
- Abstract
We investigate a series of two-alternative forced-choice (2AFC) discrimination tasks based on malignant features of abnormalities in low-dose lung CT scans. Three tasks are evaluated: a size-discrimination task, a boundary-sharpness task, and an irregular-interior task. Target and alternative signal profiles for these tasks are modulated by one of two system transfer functions and embedded in ramp-spectrum noise that has been apodized for noise control in one of four different ways. This gives the resulting images statistical properties related to weak ground-glass lesions in axial slices of low-dose lung CT images. We investigate observer performance in these tasks using a combination of statistical efficiency and classification images. We report results of 24 2AFC experiments involving the three tasks. A staircase procedure is used to find the approximate 80%-correct discrimination threshold in each task, with a subsequent set of 2,000 trials at this threshold. These data are used to estimate statistical efficiency with respect to the ideal observer for each task, and to estimate the observer template using the classification-image methodology. We find that efficiency varies between the tasks, with the lowest efficiency in the boundary-sharpness task and the highest in the irregular-interior task. All three tasks produce clearly visible patterns of positive and negative weighting in the classification images. The spatial-frequency plots of the classification images show how apodization results in larger weights at higher spatial frequencies.
- Published
- 2020
- Full Text
- View/download PDF
26. Special Section Guest Editorial: Evaluation Methodologies for Clinical AI.
- Author
-
Astley SM, Chen W, Myers KJ, and Nishikawa RM
- Abstract
The editorial introduces the Special Section on Evaluation Methodologies for Clinical AI., (© 2020 Society of Photo-Optical Instrumentation Engineers (SPIE).)
- Published
- 2020
- Full Text
- View/download PDF
27. Performance evaluation of computed tomography systems: Summary of AAPM Task Group 233.
- Author
-
Samei E, Bakalyar D, Boedeker KL, Brady S, Fan J, Leng S, Myers KJ, Popescu LM, Ramirez Giraldo JC, Ranallo F, Solomon J, Vaishnav J, and Wang J
- Subjects
- Contrast Media, Guidelines as Topic, Image Processing, Computer-Assisted, Quality Control, Radiation Dosage, Tomography, X-Ray Computed instrumentation, Tomography, X-Ray Computed standards, Societies, Medical, Tomography, X-Ray Computed methods
- Abstract
Background: The rapid development and complexity of new x-ray computed tomography (CT) technologies and the need for evidence-based optimization of image quality with respect to radiation and contrast media dose call for an updated approach to CT performance evaluation., Aims: This report offers updated guidelines for testing CT systems, with an enhanced focus on operational performance including iterative reconstruction and automatic exposure control (AEC) techniques., Materials and Methods: The report was developed based on a comprehensive review of best methods and practices in the scientific literature. The detailed methods include the assessment of 1) CT noise (magnitude, texture, nonuniformity, inhomogeneity), 2) resolution (task transfer function under varying conditions and its scalar reflections), 3) task-based performance (detectability, estimability), and 4) AEC performance (spatial, noise, and mA concordance of attenuation and exposure modulation). The methods include varying reconstruction and tube current modulation conditions, standardized testing protocols, and standardized quantities and metrology to facilitate tracking, benchmarking, and quantitative comparisons., Results: The methods, implemented in the cited publications, are robust and provide a representative reflection of CT system performance as used operationally in a clinical facility. The methods include recommendations for phantoms and phantom image analysis., Discussion: In line with the field's current trajectory toward quantitation and operational engagement, the stated methods offer quantitation that is more predictive of clinical performance than specification-based approaches. 
They can pave the way to approach performance testing of new CT systems not only in terms of acceptance testing (i.e., verifying a device meets predefined specifications), but also system commissioning (i.e., determining how the system can be used most effectively in clinical practice)., Conclusion: We offer a set of common testing procedures that can be utilized towards the optimal clinical utilization of CT imaging devices, benchmarking across varying systems and times, and a basis to develop future performance-based criteria for CT imaging., (© 2019 American Association of Physicists in Medicine.)
- Published
- 2019
- Full Text
- View/download PDF
28. Discrimination of Pulmonary Nodule Volume Change for Low- and High-contrast Tasks in a Phantom CT Study with Low-dose Protocols.
- Author
-
Gavrielides MA, Li Q, Zeng R, Berman BP, Sahiner B, Gong Q, Myers KJ, DeFilippo G, and Petrick N
- Subjects
- Area Under Curve, Early Detection of Cancer methods, Humans, Lung diagnostic imaging, Lung Neoplasms pathology, Phantoms, Imaging, ROC Curve, Radiation Dosage, Solitary Pulmonary Nodule pathology, Tumor Burden, Lung Neoplasms diagnostic imaging, Solitary Pulmonary Nodule diagnostic imaging, Tomography, X-Ray Computed methods
- Abstract
Rationale and Objectives: The quantitative assessment of volumetric CT for discriminating small changes in nodule size has been under-examined. This phantom study examined the effect of imaging protocol, nodule size, and measurement method on volume-based change discrimination across low- and high-contrast tasks., Materials and Methods: Eight spherical objects ranging in diameter from 5.0 mm to 5.75 mm and 8.0 mm to 8.75 mm in 0.25 mm increments were scanned within an anthropomorphic phantom with either a foam background (high-contrast task, ∼1000 HU object-to-background difference) or a gelatin background (low-contrast task, ∼50 to 100 HU difference). Ten repeat acquisitions were collected for each protocol with varying exposures, reconstructed slice thicknesses, and reconstruction kernels. Volume measurements were obtained using a matched-filter approach (MF) and a publicly available 3D segmentation-based tool (SB). Discrimination of nodule sizes was assessed using the area under the ROC curve (AUC)., Results: Using a low-dose (1.3 mGy), thin-slice (≤1.5 mm) protocol, changes of 0.25 mm in diameter were detected with AUC = 1.0 for all baseline sizes for the high-contrast task regardless of measurement method. For the more challenging low-contrast task and the same protocol, MF detected changes of 0.25 mm from baseline sizes ≥5.25 mm and volume changes ≥9.4% with AUC ≥ 0.81, whereas corresponding results for SB were poor (AUC within 0.49-0.60). Performance for SB was improved, but still inconsistent, when exposure was increased to 4.4 mGy., Conclusion: The reliable discrimination of small changes in pulmonary nodule size with low-dose, thin-slice CT protocols suitable for lung cancer screening was dependent on the inter-related effects of nodule-to-background contrast and measurement method., (Published by Elsevier Inc.)
- Published
- 2019
- Full Text
- View/download PDF
29. A data-efficient method for local noise power spectrum (NPS) estimation in FDK-reconstructed 3D cone-beam CT.
- Author
-
Zeng R, Torkaman M, Ning H, Zhuge Y, Miller R, and Myers KJ
- Subjects
- Humans, Signal-To-Noise Ratio, Algorithms, Cone-Beam Computed Tomography methods, Four-Dimensional Computed Tomography methods, Image Processing, Computer-Assisted methods, Lung Neoplasms diagnostic imaging, Phantoms, Imaging
- Abstract
Purpose: For computed tomography (CT) systems in which noise is nonstationary, a local noise power spectrum (NPS) is often needed to characterize its noise property. We have previously developed a data-efficient radial NPS method to estimate the two-dimensional (2D) local NPS for filtered back projection (FBP)-reconstructed fan-beam CT, utilizing the polar separability of CT NPS. In this work, we extend this method to estimate the three-dimensional (3D) local NPS for Feldkamp-Davis-Kress (FDK)-reconstructed cone-beam CT (CBCT) volumes., Methods: Starting from the 2D polar separability, we analyze the CBCT geometry and FDK image reconstruction process to derive the 3D expression of the polar separability for the CBCT local NPS. With the polar separability, the 3D local NPS of CBCT can be decomposed into a 2D radial NPS shape function and a one-dimensional (1D) angular amplitude function with certain geometrical transforms. The 2D radial NPS shape function is a global function characterizing the noise correlation structure, while the 1D angular amplitude function is a local function reflecting the varying local noise amplitudes. The 3D radial local NPS method is constructed from the polar separability. We evaluate the accuracy of the 3D radial local NPS method using simulated and real CBCT data by comparing the radial local NPS estimates to a reference local NPS in terms of normalized mean squared error (NMSE) and a task-based performance metric (lesion detectability)., Results: In both simulated and physical CBCT examples, a very small NMSE (<5%) was achieved by the radial local NPS method from as few as two scans, while for the traditional local NPS method, about 20 scans were needed to reach this accuracy. 
The results also showed that detectability-based system performance computed using local NPS estimates obtained from only two scans with the proposed method closely reflected the actual system performance., Conclusions: The polar separability greatly reduces the data dimensionality of the 3D CBCT local NPS. The radial local NPS method developed based on this property is shown to be capable of estimating the 3D local NPS from only two CBCT scans with acceptable accuracy. The minimal data requirement indicates the potential utility of local NPS in CBCT applications, even in clinical situations., (Published 2019. This article is a U.S. Government work and is in the public domain in the USA.)
- Published
- 2019
- Full Text
- View/download PDF
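For context on the entry above: the traditional local NPS estimator that the radial method improves upon averages periodograms of mean-subtracted ROIs from repeated scans, and its appetite for many scans is what motivates the data-efficient approach. Below is a hypothetical numpy sketch of that baseline estimator only (pixel size, ROI counts, and the white-noise input are invented for illustration; the paper's radial decomposition is not implemented here).

```python
import numpy as np

def local_nps(rois, px=1.0):
    """Baseline local NPS estimate from repeated-scan ROIs at one location.

    rois: (n, h, w) repeated acquisitions of the same ROI. The deterministic
    background is removed by subtracting the ensemble mean; the NPS is the
    ensemble-averaged 2D periodogram, scaled by the pixel area.
    """
    noise = rois - rois.mean(axis=0)
    periodograms = np.abs(np.fft.fft2(noise, axes=(-2, -1))) ** 2
    h, w = rois.shape[1:]
    return px * px * periodograms.mean(axis=0) / (h * w)

rng = np.random.default_rng(2)
sigma, n, size = 2.0, 50, 64
# White noise has a flat NPS, so it makes a convenient test input
rois = rng.normal(0.0, sigma, size=(n, size, size))
nps = local_nps(rois, px=1.0)

# Sanity check: integrating the NPS over frequency recovers the noise variance
var_from_nps = nps.sum() / (size * size * 1.0**2)
```

A Parseval-style check like `var_from_nps` is a quick way to validate any NPS implementation before applying it to reconstructed CT noise.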
30. Impact of prevalence and case distribution in lab-based diagnostic imaging studies.
- Author
-
Gallas BD, Chen W, Cole E, Ochs R, Petrick N, Pisano ED, Sahiner B, Samuelson FW, and Myers KJ
- Abstract
We investigated the effects of prevalence and case distribution on radiologist diagnostic performance, as measured by the area under the receiver operating characteristic curve (AUC) and sensitivity-specificity, in lab-based reader studies evaluating imaging devices. Our retrospective reader studies compared full-field digital mammography (FFDM) to screen-film mammography (SFM) for women with dense breasts. Mammograms were acquired from the prospective Digital Mammographic Imaging Screening Trial. We performed five reader studies that differed in terms of cancer prevalence and the distribution of noncancers. Twenty radiologists participated in each reader study. Using split-plot study designs, we collected recall decisions and multilevel scores from the radiologists for calculating sensitivity, specificity, and AUC. Differences in reader-averaged AUCs slightly favored SFM over FFDM (largest AUC difference: 0.047, SE = 0.023, p = 0.047), where the standard error accounts for reader and case variability. The differences were not significant at a level of 0.01 (0.05/5 reader studies). The differences in sensitivities and specificities were also indeterminate. Prevalence had little effect on AUC (largest difference: 0.02), whereas sensitivity increased and specificity decreased as prevalence increased. We found that AUC is robust to changes in prevalence, while radiologists were more aggressive with recall decisions as prevalence increased.
- Published
- 2019
- Full Text
- View/download PDF
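The central observation of the entry above, that AUC is robust to prevalence while operating-point behavior shifts with the case mix, follows from the fact that the empirical AUC depends only on the two class-conditional score distributions, not on how many cases are drawn from each. A toy numpy illustration with synthetic normal scores (not the study's data; distributions and thresholds are invented):

```python
import numpy as np

def auc_mann_whitney(pos, neg):
    """Empirical AUC: P(positive score > negative score), ties counted half."""
    pos = np.asarray(pos, float)[:, None]
    neg = np.asarray(neg, float)[None, :]
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())

rng = np.random.default_rng(3)
draw_pos = lambda n: rng.normal(1.0, 1.0, n)   # scores for diseased cases
draw_neg = lambda n: rng.normal(0.0, 1.0, n)   # scores for nondiseased cases

# Same underlying score distributions sampled at two prevalences: 50% and 10%.
# The AUC estimates agree up to sampling error, independent of the case mix.
auc_50 = auc_mann_whitney(draw_pos(500), draw_neg(500))
auc_10 = auc_mann_whitney(draw_pos(100), draw_neg(900))

# Sensitivity and specificity at one fixed threshold; a reader who moves the
# threshold as prevalence changes trades one against the other.
thr = 0.5
sens = float((draw_pos(2000) > thr).mean())
spec = float((draw_neg(2000) <= thr).mean())
```

This is why, in the study above, AUC barely moved across prevalence conditions while recall aggressiveness (the readers' effective threshold) did.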
31. Evaluation of Digital Breast Tomosynthesis as Replacement of Full-Field Digital Mammography Using an In Silico Imaging Trial.
- Author
-
Badano A, Graff CG, Badal A, Sharma D, Zeng R, Samuelson FW, Glick SJ, and Myers KJ
- Subjects
- Breast diagnostic imaging, Breast Neoplasms diagnostic imaging, Calcinosis diagnostic imaging, Computer Simulation, Female, Humans, ROC Curve, Mammography methods, Mammography standards
- Abstract
Importance: Expensive and lengthy clinical trials can delay regulatory evaluation of innovative technologies, affecting patient access to high-quality medical products. Simulation is increasingly being used in product development but rarely in regulatory applications., Objectives: To conduct a computer-simulated imaging trial evaluating digital breast tomosynthesis (DBT) as a replacement for digital mammography (DM) and to compare the results with a comparative clinical trial., Design, Setting, and Participants: The simulated Virtual Imaging Clinical Trial for Regulatory Evaluation (VICTRE) trial was designed to replicate a clinical trial that used human patients and radiologists. Images obtained with in silico versions of DM and DBT systems via fast Monte Carlo x-ray transport were interpreted by a computational reader detecting the presence of lesions. A total of 2986 synthetic image-based virtual patients with breast sizes and radiographic densities representative of a screening population and compressed thicknesses from 3.5 to 6 cm were generated using an analytic approach in which anatomical structures are randomly created within a predefined breast volume and compressed in the craniocaudal orientation. A positive cohort contained a digitally inserted microcalcification cluster or spiculated mass., Main Outcomes and Measures: The trial end point was the difference in area under the receiver operating characteristic curve between modalities for lesion detection. The trial was sized for an SE of 0.01 in the change in area under the curve (AUC), half the uncertainty in the comparative clinical trial., Results: In this trial, computational readers analyzed 31 055 DM and 27 960 DBT cases from 2986 virtual patients with the following Breast Imaging Reporting and Data System densities: 286 (9.6%) extremely dense, 1200 (40.2%) heterogeneously dense, 1200 (40.2%) scattered fibroglandular densities, and 300 (10.0%) almost entirely fat. 
The mean (SE) change in AUC was 0.0587 (0.0062) (P < .001) in favor of DBT. The change in AUC was larger for masses (mean [SE], 0.0903 [0.008]) than for calcifications (mean [SE], 0.0268 [0.004]), which was consistent with the findings of the comparative trial (mean [SE], 0.065 [0.017] for masses and -0.047 [0.032] for calcifications)., Conclusions and Relevance: The results of the simulated VICTRE trial are consistent with the performance seen in the comparative trial. While further research is needed to assess the generalizability of these findings, in silico imaging trials represent a viable source of regulatory evidence for imaging devices.
- Published
- 2018
- Full Text
- View/download PDF
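The primary endpoint of the VICTRE trial above is a difference in ROC AUC between modalities. A minimal sketch of the nonparametric (Mann-Whitney) AUC estimate such an endpoint rests on, using entirely synthetic reader scores rather than VICTRE data (the score distributions and sample sizes below are illustrative assumptions):

```python
import numpy as np

def auc_mann_whitney(pos_scores, neg_scores):
    """Nonparametric AUC: fraction of (diseased, non-diseased) score
    pairs ranked correctly, counting ties as one half."""
    pos = np.asarray(pos_scores, float)[:, None]
    neg = np.asarray(neg_scores, float)[None, :]
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())

rng = np.random.default_rng(0)
neg = rng.normal(0.0, 1.0, 500)       # reader scores, lesion-absent cases
dm_pos = rng.normal(1.0, 1.0, 500)    # lesion-present scores, modality "DM"
dbt_pos = rng.normal(1.4, 1.0, 500)   # lesion-present scores, modality "DBT"

auc_dm = auc_mann_whitney(dm_pos, neg)
auc_dbt = auc_mann_whitney(dbt_pos, neg)
delta_auc = auc_dbt - auc_dm          # the trial-style endpoint: change in AUC
```

`auc_mann_whitney` is the empirical version of P(score of a diseased case > score of a non-diseased case), which is exactly what an ROC AUC measures.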
32. Assessment of bulbar function in amyotrophic lateral sclerosis: validation of a self-report scale (Center for Neurologic Study Bulbar Function Scale).
- Author
-
Smith RA, Macklin EA, Myers KJ, Pattee GL, Goslin KL, Meekins GD, Green JR, Shefner JM, and Pioro EP
- Subjects
- Aged, Amyotrophic Lateral Sclerosis physiopathology, Diagnostic Self Evaluation, Female, Humans, Male, Middle Aged, Quality of Life, Amyotrophic Lateral Sclerosis diagnosis, Deglutition physiology, Speech physiology
- Abstract
Background and Purpose: Impaired bulbar functions of speech and swallowing are among the most serious consequences of amyotrophic lateral sclerosis (ALS). Despite this, clinical trials in ALS have rarely emphasized bulbar function as an endpoint. The rater-administered Amyotrophic Lateral Sclerosis Functional Rating Scale-Revised (ALSFRS-R) or various quality-of-life measures are commonly used to measure symptomatic benefit. Accordingly, we sought to evaluate the utility of measures specific to bulbar function in ALS., Methods: We assessed bulbar functions in 120 patients with ALS, with clinicians first making direct observations of the degree of speech, swallowing and salivation impairment in these subjects. Clinical diagnosis of bulbar impairment was then compared with ALSFRS-R scores, speech rate, time to swallow liquids and solids, and scores obtained when patients completed visual analog scales (VASs) and the newly developed 21-question self-administered Center for Neurologic Study Bulbar Function Scale (CNS-BFS)., Results: The CNS-BFS, ALSFRS-R, VAS and timed speech and swallowing were all concordant with clinician diagnosis. The self-report CNS-BFS and ALSFRS-R bulbar subscale best predicted clinician diagnosis with misclassification rates of 8% and 14% at the optimal cut-offs, respectively. In addition, the CNS-BFS speech and swallowing subscales outperformed both the bulbar component of the ALSFRS-R and speech and swallowing VASs when correlations were made between these scales and objective measures of timed reading and swallowing., Conclusions: Based on these findings and its relative ease of administration, we conclude that the CNS-BFS is a useful metric for assessing bulbar function in patients with ALS., (© 2018 The Authors. European Journal of Neurology published by John Wiley & Sons Ltd on behalf of European Academy of Neurology.)
- Published
- 2018
- Full Text
- View/download PDF
33. Physiological random processes in precision cancer therapy.
- Author
-
Henscheid N, Clarkson E, Myers KJ, and Barrett HH
- Subjects
- Humans, Antineoplastic Agents therapeutic use, Drug Delivery Systems methods, Models, Biological, Neoplasms drug therapy, Neoplasms physiopathology, Tomography, Emission-Computed
- Abstract
Many different physiological processes affect the growth of malignant lesions and their response to therapy. Each of these processes is spatially and genetically heterogeneous, dynamically evolving in time, controlled by many other physiological processes, and intrinsically random and unpredictable. The objective of this paper is to show that all of these properties of cancer physiology can be treated in a unified, mathematically rigorous way via the theory of random processes. We treat each physiological process as a random function of position and time within a tumor, defining the joint statistics of such functions via the infinite-dimensional characteristic functional. The theory is illustrated by analyzing several models of drug delivery and response of a tumor to therapy. To apply the methodology to precision cancer therapy, we use maximum-likelihood estimation with Emission Computed Tomography (ECT) data to estimate unknown patient-specific physiological parameters, ultimately demonstrating how to predict the probability of tumor control for an individual patient undergoing a proposed therapeutic regimen., Competing Interests: The authors have declared that no competing interests exist.
- Published
- 2018
- Full Text
- View/download PDF
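The "infinite-dimensional characteristic functional" referenced in the abstract above has a compact standard form. One common convention, for a random field f(r, t) probed by a test function ξ (the sign and 2π placement vary by author, so treat this as illustrative notation rather than a quotation from the paper):

```latex
% Characteristic functional of a random field f(\mathbf{r}, t);
% the expectation is over realizations of f.
\Psi_f(\xi) \;=\;
\Bigl\langle \exp\!\Bigl(-2\pi i \int \mathrm{d}\mathbf{r}\,\mathrm{d}t\,
\xi(\mathbf{r},t)\, f(\mathbf{r},t)\Bigr) \Bigr\rangle_{\!f}
```

It fully determines the joint statistics of the random field, just as an ordinary characteristic function determines the distribution of a scalar random variable.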
34. Gold Nanoparticles and Radio Frequency Field Interactions: Effects of Nanoparticle Size, Charge, Aggregation, Radio Frequency, and Ionic Background.
- Author
-
Mironava T, Arachchilage VT, Myers KJ, and Suchalkin S
- Abstract
In this study, we experimentally investigated the dependence of radio frequency (rf) absorption by gold nanoparticles (AuNPs) on frequency (10 kHz to 450 MHz), NP size (3.5, 17, and 36 nm), charge of the ligand shell (positive amino and negative carboxylic functional groups), aggregation state, and presence of electrolytes (0-1 M NaCl). In addition, we examined the effect of protein corona on the rf absorption by AuNPs. For the first time, rf energy absorption by AuNPs was analyzed in the 10 kHz to 450 MHz rf range. We have demonstrated that the previously reported rf heating of AuNPs can be solely attributed to the heating of the ionic background, and that AuNPs do not absorb noticeable rf energy regardless of the NP size, charge, aggregation, and presence of electrolytes. However, the formation of protein corona on the AuNP surface resulted in rf energy absorption by AuNP-albumin constructs, suggesting that protein corona might be partially responsible for the heating of AuNPs observed in vivo. The optimal frequency of rf absorption for the AuNP-albumin constructs is significantly higher than the conventional 13.56 MHz, suggesting that the heating of AuNPs in an rf field should be performed at considerably higher frequencies for better results in vivo.
- Published
- 2017
- Full Text
- View/download PDF
35. Optimization of digital breast tomosynthesis (DBT) acquisition parameters for human observers: effect of reconstruction algorithms.
- Author
-
Zeng R, Badano A, and Myers KJ
- Subjects
- Breast diagnostic imaging, Breast Neoplasms diagnostic imaging, Female, Humans, Image Processing, Computer-Assisted methods, Models, Theoretical, Algorithms, Breast pathology, Breast Neoplasms pathology, Image Processing, Computer-Assisted standards, Mammography methods, Phantoms, Imaging, Tomography, X-Ray methods
- Abstract
We showed in our earlier work that the choice of reconstruction methods does not affect the optimization of DBT acquisition parameters (angular span and number of views) using simulated breast phantom images in detecting lesions with a channelized Hotelling observer (CHO). In this work we investigate whether the model-observer-based conclusion is valid when using humans to interpret images. We used previously generated DBT breast phantom images and recruited human readers to find the optimal geometry settings associated with two reconstruction algorithms, filtered back projection (FBP) and simultaneous algebraic reconstruction technique (SART). The human reader results show that image quality trends as a function of the acquisition parameters are consistent between FBP and SART reconstructions. The consistent trends confirm that the optimization of DBT system geometry is insensitive to the choice of reconstruction algorithm. The results also show that humans perform better in SART-reconstructed images than in FBP-reconstructed images. In addition, we applied CHOs with three commonly used channel models: Laguerre-Gauss (LG) channels, square (SQR) channels, and sparse difference-of-Gaussian (sDOG) channels. We found that LG channels predict human performance trends better than SQR and sDOG channel models for the task of detecting lesions in tomosynthesis backgrounds. Overall, this work confirms that the choice of reconstruction algorithm is not critical for optimizing DBT system acquisition parameters.
- Published
- 2017
- Full Text
- View/download PDF
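The Laguerre-Gauss channelized Hotelling observer used in the abstract above can be sketched numerically. Everything below (channel width, Gaussian signal, white-noise backgrounds) is an illustrative stand-in for the study's phantom images, not the authors' code:

```python
import numpy as np
from numpy.polynomial.laguerre import Laguerre

def lg_channels(dim, width, n_channels):
    """Rotationally symmetric Laguerre-Gauss channel profiles on a
    dim x dim grid, each normalized to unit norm."""
    x = np.arange(dim) - (dim - 1) / 2.0
    xx, yy = np.meshgrid(x, x)
    t = 2.0 * np.pi * (xx**2 + yy**2) / width**2
    chans = []
    for n in range(n_channels):
        u = np.exp(-t / 2.0) * Laguerre.basis(n)(t)  # Gaussian x Laguerre poly
        chans.append((u / np.linalg.norm(u)).ravel())
    return np.stack(chans)                           # (n_channels, dim*dim)

def cho_detectability(sig_imgs, bkg_imgs, channels):
    """Hotelling SNR computed in the reduced channel space."""
    vs = sig_imgs.reshape(len(sig_imgs), -1) @ channels.T
    vb = bkg_imgs.reshape(len(bkg_imgs), -1) @ channels.T
    k = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vb, rowvar=False))
    dv = vs.mean(axis=0) - vb.mean(axis=0)
    return float(np.sqrt(dv @ np.linalg.solve(k, dv)))

rng = np.random.default_rng(0)
dim = 32
x = np.arange(dim) - (dim - 1) / 2.0
xx, yy = np.meshgrid(x, x)
signal = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))   # hypothetical lesion profile
bkg = rng.normal(0.0, 1.0, (400, dim, dim))        # white-noise backgrounds
sig = bkg + 0.5 * signal                           # signal-present images

ch = lg_channels(dim, width=14.0, n_channels=5)
snr = cho_detectability(sig, bkg, ch)
```

In a study like the one above, the real backgrounds would be reconstructed phantom slices and the SNR (or an AUC derived from the channel-output test statistic) would be compared across acquisition geometries.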
36. Impact of Reconstruction Algorithms and Gender-Associated Anatomy on Coronary Calcium Scoring with CT: An Anthropomorphic Phantom Study.
- Author
-
Li Q, Liu S, Myers KJ, Gavrielides MA, Zeng R, Sahiner B, and Petrick N
- Subjects
- Algorithms, Female, Humans, Male, Phantoms, Imaging, Radiographic Image Interpretation, Computer-Assisted methods, Reproducibility of Results, Signal-To-Noise Ratio, Tomography, X-Ray Computed methods, Coronary Artery Disease diagnostic imaging, Vascular Calcification diagnostic imaging
- Abstract
Rationale and Objectives: Different computed tomography imaging protocols and patient characteristics can impact the accuracy and precision of the calcium score and may lead to inconsistent patient treatment recommendations. The aim of this work was to determine the impact of reconstruction algorithm and gender characteristics on coronary artery calcium scoring based on a phantom study using computed tomography., Materials and Methods: Four synthetic heart vessels with vessel diameters corresponding to female and male left main and left circumflex arteries containing calcification-mimicking materials (200-1000 HU) were inserted into a thorax phantom and were scanned with and without female breast plates (male and female phantoms, respectively). Ten scans were acquired and were reconstructed at 3-mm slices using filtered back-projection (FBP) and iterative reconstruction with medium and strong denoising (IR3 and IR5) algorithms. Agatston and calcium volume scores were estimated for each vessel. Calcium scores for each vessel and the total calcium score (summation of all four vessels) were compared between the two phantoms to quantify the impact of the breast plates and reconstruction parameters. Calcium scores were also compared among vessels of different diameters to investigate the impact of the vessel size., Results: The calcium scores were significantly larger for FBP reconstruction (FBP > IR3 > IR5). Agatston scores (calcium volume score) for vessels in the male phantom scans were on average 4.8% (2.9%), 8.2% (7.1%), and 10.5% (9.4%) higher compared to those in the female phantom with FBP, IR3, and IR5, respectively, when exposure was conserved across phantoms. The total calcium scores from the male phantom were significantly larger than those from the female phantom (P < 0.05). 
In general, calcium volume scores were underestimated (up to about 50%) for smaller vessels, especially when scanned in the female phantom., Conclusions: Calcium scores significantly decreased with iterative reconstruction and tended to be underestimated for female anatomy (smaller vessels and presence of breast plates)., (Published by Elsevier Inc.)
- Published
- 2016
- Full Text
- View/download PDF
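The Agatston score compared across reconstructions above is computed from calcified area and peak attenuation. A simplified per-slice sketch (the clinical definition applies the weight per lesion on ~3 mm axial slices; the slice and pixel size below are made up for illustration):

```python
import numpy as np

def agatston_score(slices_hu, pixel_area_mm2):
    """Simplified Agatston scoring: for each axial slice, the area (mm^2)
    of pixels >= 130 HU times a weight set by the peak attenuation.
    (Clinically, the weight is applied per connected lesion, not per slice.)"""
    score = 0.0
    for sl in slices_hu:
        sl = np.asarray(sl)
        mask = sl >= 130
        if not mask.any():
            continue
        peak = sl[mask].max()
        weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
        score += mask.sum() * pixel_area_mm2 * weight
    return score

# One synthetic slice: a 10-pixel calcification peaking at 450 HU.
sl = np.full((64, 64), -50.0)
sl[10:12, 10:15] = 450.0
total = agatston_score([sl], pixel_area_mm2=0.25)  # 10 px * 0.25 mm^2 * 4 = 10.0
```

The weighting step is why iterative reconstruction matters in the study above: stronger denoising lowers peak HU values, which can drop a lesion into a lower weight bracket as well as shrink the thresholded area.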
37. Comparison of Channel Methods and Observer Models for the Task-Based Assessment of Multi-Projection Imaging in the Presence of Structured Anatomical Noise.
- Author
-
Park S, Zhang G, and Myers KJ
- Subjects
- Algorithms, Female, Humans, Least-Squares Analysis, Phantoms, Imaging, Signal-To-Noise Ratio, Breast diagnostic imaging, Imaging, Three-Dimensional methods, Mammography methods, Signal Processing, Computer-Assisted
- Abstract
Although Laguerre-Gauss (LG) channels are often used for the task-based assessment of multi-projection imaging, LG channels may not be the most reliable in providing performance trends as a function of system or object parameters for all situations. Partial least squares (PLS) channels are more flexible in adapting to background and signal data statistics and were shown to be more efficient for detection tasks involving 2D non-Gaussian random backgrounds (Witten et al., 2010). In this work, we investigate ways of incorporating spatial correlations in the multi-projection data space using 2D LG channels and two implementations of PLS in the channelized version of the 3D projection Hotelling observer (Park et al., 2010) (3Dp CHO). Our task is to detect spherical and elliptical 3D signals in the angular projections of a structured breast phantom ensemble. The single PLS (sPLS) incorporates the spatial correlation within each projection, whereas the combined PLS (cPLS) incorporates the spatial correlations both within each projection and across projections. The 3Dp CHO-R indirectly incorporates the spatial correlation from the response space (R), whereas the 3Dp CHO-C incorporates it from the channel space (C). The 3Dp CHO-R-sPLS has the potential to be a good surrogate observer when either the sample size is small or one training set is used for training both the PLS channels and the observer. So does the 3Dp CHO-C-cPLS when the sample size is large enough to provide a good-sized independent set for training PLS channels. Lastly, a stack of 2D LG channels used as 3D channels in the CHO-C model showed the capability of incorporating the spatial correlation between the multiple angular projections.
- Published
- 2016
- Full Text
- View/download PDF
38. Volume estimation of multidensity nodules with thoracic computed tomography.
- Author
-
Gavrielides MA, Li Q, Zeng R, Myers KJ, Sahiner B, and Petrick N
- Abstract
This work focuses on volume estimation of "multidensity" lung nodules in a phantom computed tomography study. Eight objects were manufactured by enclosing spherical cores within larger spheres of double the diameter but with a different density. Different combinations of outer-shell/inner-core diameters and densities were created. The nodules were placed within an anthropomorphic phantom and scanned with various acquisition and reconstruction parameters. The volumes of the entire multidensity object as well as the inner core of the object were estimated using a model-based volume estimator. Results showed percent volume bias across all nodules and imaging protocols with slice thicknesses [Formula: see text] ranging from [Formula: see text] to 6.6% for the entire object (standard deviation ranged from 1.5% to 7.6%), and within [Formula: see text] to 5.7% for the inner-core measurement (standard deviation ranged from 2.0% to 17.7%). Overall, the estimation error was larger for the inner-core measurements, which was expected due to the smaller size of the core. Reconstructed slice thickness was found to substantially affect volumetric error for both tasks; exposure and reconstruction kernel were not. These findings provide information for understanding uncertainty in volumetry of nodules that include multiple densities such as ground glass opacities with a solid component.
- Published
- 2016
- Full Text
- View/download PDF
39. Estimating local noise power spectrum from a few FBP-reconstructed CT scans.
- Author
-
Zeng R, Gavrielides MA, Petrick N, Sahiner B, Li Q, and Myers KJ
- Subjects
- Humans, Image Processing, Computer-Assisted methods, Signal-To-Noise Ratio, Tomography, X-Ray Computed
- Abstract
Purpose: Traditional ways to estimate the 2D CT noise power spectrum (NPS) involve an ensemble average of the power spectra of many noisy scans. When only a few scans are available, regions of interest are often extracted from different locations to obtain sufficient samples to estimate the NPS. Using image samples from different locations ignores the nonstationarity of CT noise and thus cannot accurately characterize its local properties. The purpose of this work is to develop a method to estimate the local NPS using only a few fan-beam CT scans., Methods: As a result of FBP reconstruction, the CT NPS has the same radial profile shape for all projection angles, with the magnitude varying with the noise level in the raw data measurement. This allows a 2D CT NPS to be factored into the product of a 1D angular and a 1D radial function in polar coordinates. The polar separability of the CT NPS greatly reduces the data requirement for estimating the NPS. The authors use this property to derive a radial NPS estimation method: in brief, the radial profile shape is estimated from a traditional NPS based on image samples extracted at multiple locations. The amplitudes are estimated by fitting the traditional local NPS to the estimated radial profile shape. The estimated radial profile shape and amplitudes are then combined to form a final estimate of the local NPS. The authors evaluated the accuracy of the radial NPS method and compared it to traditional NPS methods in terms of normalized mean squared error (NMSE) and signal detectability index., Results: For both simulated and real CT data sets, the local NPS estimated with no more than six scans using the radial NPS method was very close to the reference NPS, according to the metrics of NMSE and detectability index. Even with only two scans, the radial NPS method was able to achieve fairly good accuracy. 
Compared to those estimated using traditional NPS methods, the accuracy improvement was substantial when a few scans were available., Conclusions: The radial NPS method was shown to be accurate and efficient in estimating the local NPS of FBP-reconstructed 2D CT images. It presents strong advantages over traditional NPS methods when the number of scans is limited and can be extended to estimate the in-plane NPS of cone-beam CT and multislice helical CT scans.
- Published
- 2016
- Full Text
- View/download PDF
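The polar-separability idea above — the local NPS factors into an angular amplitude times a fixed radial shape — can be sketched on synthetic data. The ramp-like radial shape below is invented for illustration; this is not the authors' estimator code:

```python
import numpy as np

def binned_radial_shape(nps2d, nbins=32):
    """Angular average of a 2D NPS: the 1D radial profile shape."""
    n = nps2d.shape[0]
    y, x = np.indices((n, n)) - (n - 1) / 2.0
    r = np.hypot(x, y)
    edges = np.linspace(0.0, r.max() + 1e-9, nbins + 1)
    idx = np.digitize(r.ravel(), edges) - 1            # radial bin per pixel
    sums = np.bincount(idx, weights=nps2d.ravel(), minlength=nbins)[:nbins]
    counts = np.bincount(idx, minlength=nbins)[:nbins]
    return sums / np.maximum(counts, 1), idx

def fit_local_amplitude(local_nps, shape, idx):
    """Least-squares amplitude a such that a * shape(r) matches the local NPS."""
    model = shape[idx]
    return float((model @ local_nps.ravel()) / (model @ model))

# Synthetic polar-separable NPS: a ramp-like radial shape, two noise levels.
n = 64
y, x = np.indices((n, n)) - (n - 1) / 2.0
r = np.hypot(x, y)
reference_nps = r * np.exp(-r / 8.0)   # unit-amplitude radial shape on the grid
local_nps = 2.5 * reference_nps        # same shape, higher local noise level

shape, idx = binned_radial_shape(reference_nps)
amp = fit_local_amplitude(local_nps, shape, idx)   # recovers the factor 2.5
```

Because only the scalar amplitude must be fitted per location, far fewer scans are needed than for a full local 2D NPS estimate — the point of the method above.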
40. Radiance and photon noise: imaging in geometrical optics, physical optics, quantum optics and radiology.
- Author
-
Caucci L, Myers KJ, and Barrett HH
- Abstract
The statistics of detector outputs produced by an imaging system are derived from basic radiometric concepts and definitions. We show that a fundamental way of describing a photon-limited imaging system is in terms of a Poisson random process in spatial, angular, and wavelength variables. We begin the paper by recalling the concept of radiance in geometrical optics, radiology, physical optics, and quantum optics. The propagation and conservation laws for radiance in each of these domains are reviewed. Building upon these concepts, we distinguish four categories of imaging detectors that all respond in some way to the incident radiance, including the new category of photon-processing detectors (capable of measuring radiance on a photon-by-photon basis). This allows us to rigorously show how the concept of radiance is related to the statistical properties of detector outputs and to the information content of a single detected photon. A Monte-Carlo technique, which is derived from the Boltzmann transport equation, is presented as a way to estimate probability density functions to be used in reconstruction from photon-processing data.
- Published
- 2016
- Full Text
- View/download PDF
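The Poisson point-process description above implies that the photon count in every detector bin has variance equal to its mean. A quick synthetic check (the flat mean map is an arbitrary assumption, standing in for a radiance-derived mean image):

```python
import numpy as np

rng = np.random.default_rng(1)
mean_counts = np.full((64, 64), 50.0)   # mean photons per detector bin
frames = rng.poisson(mean_counts, size=(1000, 64, 64))  # repeated exposures

emp_mean = frames.mean(axis=0)
emp_var = frames.var(axis=0)
# For a Poisson process, variance ~= mean in every bin.
ratio = emp_var.mean() / emp_mean.mean()
```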
41. Statistical analysis of lung nodule volume measurements with CT in a large-scale phantom study.
- Author
-
Li Q, Gavrielides MA, Sahiner B, Myers KJ, Zeng R, and Petrick N
- Subjects
- Analysis of Variance, Data Interpretation, Statistical, Datasets as Topic, Likelihood Functions, Linear Models, Models, Biological, Phantoms, Imaging, Reproducibility of Results, Tomography, X-Ray Computed instrumentation, Lung diagnostic imaging, Tomography, X-Ray Computed methods
- Abstract
Purpose: To determine inter-related factors that contribute substantially to the measurement error of pulmonary nodule measurements with CT by assessing a large-scale dataset of phantom scans and to quantitatively validate the repeatability and reproducibility of a subset containing nodules and CT acquisitions consistent with the Quantitative Imaging Biomarker Alliance (QIBA) metrology recommendations., Methods: The dataset contains about 40 000 volume measurements of 48 nodules (5-20 mm, four shapes, three radiodensities) estimated by a matched-filter estimator from CT images involving 72 imaging protocols. Technical assessment was performed under a framework suggested by QIBA, which aimed to minimize the inconsistency of terminologies and techniques used in the literature. The accuracy and precision of lung nodule volume measurements were examined by analyzing the linearity, bias, variance, root mean square error (RMSE), repeatability, reproducibility, and the significant and substantial factors that contribute to the measurement error. Statistical methodologies including linear regression, analysis of variance, and restricted maximum likelihood were applied to estimate the aforementioned metrics. The analysis was performed on both the whole dataset and a subset meeting the criteria proposed in the QIBA Profile document., Results: Strong linearity was observed for all data. Size, slice thickness × collimation, and randomness in attachment to vessels or chest wall were the main sources of measurement error. Grouping the data by nodule size and slice thickness × collimation, the standard deviation (3.9%-28%) and RMSE (4.4%-68%) tended to increase with smaller nodule size and larger slice thickness. For 5, 8, 10, and 20 mm nodules with reconstruction slice thickness ≤0.8, 3, 3, and 5 mm, respectively, the measurements were almost unbiased (-3.0% to 3.0%). Repeatability coefficients (RCs) ranged from 6.2% to 40%. 
A pitch of 0.9, a detail kernel, and smaller slice thicknesses yielded better (smaller) RCs than a pitch of 1.2, a medium kernel, and larger slice thicknesses. Exposure showed no impact on RC. The overall reproducibility coefficient (RDC) was 45% and was reduced to about 20%-30% when the slice thickness and collimation were fixed. For nodules and CT imaging complying with the QIBA Profile (QIBA Profile subset), the measurements were highly repeatable and reproducible in spite of variations in nodule characteristics and imaging protocols. The overall measurement error was small and mostly due to the randomness in attachment. The bias, standard deviation, and RMSE grouped by nodule size and slice thickness × collimation in the QIBA Profile subset were within ±3%, 4%, and 5%, respectively. RCs were within 11%, and the overall RDC was 11%., Conclusions: The authors have performed a comprehensive technical assessment of lung nodule volumetry with a matched-filter estimator from CT scans of synthetic nodules and identified the main sources of measurement error among various nodule characteristics and imaging parameters. The results confirm that the QIBA Profile set is highly repeatable and reproducible. These phantom study results can serve as a bound on the clinical performance achievable with volumetric CT measurements of pulmonary nodules.
- Published
- 2015
- Full Text
- View/download PDF
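The repeatability coefficient (RC) quoted above follows the QIBA metrology convention RC = 1.96·√2·σ_w, where σ_w is the within-subject standard deviation over same-condition repeats. A sketch on synthetic repeat measurements (the 4% within-subject SD is an illustrative choice that lands the RC near the 11% figure reported above):

```python
import numpy as np

def repeatability_coefficient(measurements):
    """measurements: (n_subjects, n_replicates) array of same-condition
    repeats. Returns RC = 1.96 * sqrt(2) * within-subject SD."""
    m = np.asarray(measurements, dtype=float)
    resid = m - m.mean(axis=1, keepdims=True)
    within_var = (resid**2).sum() / (m.shape[0] * (m.shape[1] - 1))
    return 1.96 * np.sqrt(2.0) * np.sqrt(within_var)

rng = np.random.default_rng(42)
true_vals = rng.uniform(80.0, 120.0, 500)              # subject-level truths
reps = true_vals[:, None] + rng.normal(0.0, 4.0, (500, 2))  # 2 repeats each
rc = repeatability_coefficient(reps)                   # expect about 2.77 * 4
```

Two repeated measurements on the same subject under identical conditions are expected to differ by less than the RC for 95% of subjects, which is what makes it a useful clinical "is this change real?" threshold.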
42. Incretin-based medications for type 2 diabetes: an overview of reviews.
- Author
-
Gamble JM, Clarke A, Myers KJ, Agnew MD, Hatch K, Snow MM, and Davis EM
- Subjects
- Humans, Observational Studies as Topic, Randomized Controlled Trials as Topic, Diabetes Mellitus, Type 2 drug therapy, Dipeptidyl-Peptidase IV Inhibitors therapeutic use, Glucagon-Like Peptide-1 Receptor agonists, Incretins therapeutic use, Review Literature as Topic
- Abstract
Aims: To summarize evidence from and assess the quality of published systematic reviews evaluating the safety, efficacy and effectiveness of incretin-based medications used in the treatment of type 2 diabetes., Methods: We identified systematic reviews of randomized controlled trials or observational studies published in any language that evaluated the safety and/or effectiveness of glucagon-like peptide-1 (GLP-1) receptor agonists or dipeptidyl-peptidase-4 (DPP-4) inhibitors. Data sources included the Cochrane Library, PubMed, EMBASE, Web of Science, International Pharmaceutical Abstracts, the tables of contents of diabetes journals, and hand-searching of reference lists and clinical practice guidelines. The methodological quality of systematic reviews was independently assessed by two reviewers using the Assessment of Multiple Systematic Reviews (AMSTAR) checklist. Our study protocol was registered with PROSPERO (2013:CRD42013005149). The primary outcomes were pooled treatment effect estimates for glycaemic control, macrovascular and microvascular complications, and hypoglycaemic events., Results: We identified 467 unique citations, of which 84 systematic reviews met our inclusion criteria. There were 51 reviews that evaluated GLP-1 receptor agonists and 64 reviews that evaluated DPP-4 inhibitors. The median (interquartile range) AMSTAR score was 6 (3) out of 11 for quantitative and 1 (1) for non-quantitative reviews. Among the 66 quantitative systematic reviews, there were a total of 718 pooled treatment effect estimates reported for our primary outcomes and 1012 for secondary outcomes., Conclusions: Clinicians and policy makers, when using the results of systematic reviews to inform decision-making with regard to routine clinical care or healthcare policies for incretin-based medications, should consider the variability in the quality of reviews., (© 2015 John Wiley & Sons Ltd.)
- Published
- 2015
- Full Text
- View/download PDF
43. Evaluating the sensitivity of the optimization of acquisition geometry to the choice of reconstruction algorithm in digital breast tomosynthesis through a simulation study.
- Author
-
Zeng R, Park S, Bakic P, and Myers KJ
- Subjects
- Algorithms, Artifacts, Breast Neoplasms diagnostic imaging, Computer Simulation, Female, Humans, Imaging, Three-Dimensional, Likelihood Functions, Normal Distribution, Observer Variation, Signal-To-Noise Ratio, Tomography, X-Ray, Breast pathology, Breast Neoplasms pathology, Image Processing, Computer-Assisted methods, Mammography methods
- Abstract
Due to the limited number of views and limited angular span in digital breast tomosynthesis (DBT), the acquisition geometry design is an important factor that affects the image quality. Therefore, intensive studies have been conducted on optimizing the acquisition geometry. However, different reconstruction algorithms were used in most of the reported studies. Because each type of reconstruction algorithm yields images with its own resolution, noise properties, and artifact appearance, it is unclear whether the optimal geometry identified for a DBT system in one study generalizes to DBT systems using a reconstruction algorithm different from the one applied in that study. Hence, we investigated the effect of the reconstruction algorithm on the optimization of acquisition geometry parameters through carefully designed simulation studies. Our results show that various reconstruction algorithms, including filtered back-projection, the simultaneous algebraic reconstruction technique, the maximum-likelihood method, and the total-variation-regularized least-squares method, gave similar performance trends for the acquisition parameters for detecting lesions. The consistency of system ranking indicates that the choice of the reconstruction algorithm may not be critical for DBT system geometry optimization.
- Published
- 2015
- Full Text
- View/download PDF
44. Quantitative imaging biomarkers: a review of statistical methods for computer algorithm comparisons.
- Author
-
Obuchowski NA, Reeves AP, Huang EP, Wang XF, Buckler AJ, Kim HJ, Barnhart HX, Jackson EF, Giger ML, Pennello G, Toledano AY, Kalpathy-Cramer J, Apanasovich TV, Kinahan PE, Myers KJ, Goldgof DB, Barboriak DP, Gillies RJ, Schwartz LH, and Sullivan DC
- Subjects
- Bias, Computer Simulation, Humans, Phantoms, Imaging, Reference Standards, Reproducibility of Results, Algorithms, Biomarkers, Diagnostic Imaging, Research Design, Statistics as Topic
- Abstract
Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research., (© The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.)
- Published
- 2015
- Full Text
- View/download PDF
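For the "agreement studies without a reference standard" mentioned in the review above, a common disaggregate tool is Bland-Altman bias and 95% limits of agreement between two algorithms measuring the same cases. A sketch on synthetic volume measurements (all numbers are illustrative):

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman analysis: mean difference (bias) between two
    algorithms and the 95% limits of agreement of their differences."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(7)
truth = rng.uniform(100.0, 500.0, 300)          # unobserved true volumes
alg_a = truth + rng.normal(0.0, 5.0, 300)       # hypothetical algorithm A
alg_b = truth + 2.0 + rng.normal(0.0, 5.0, 300) # B reads ~2 units higher

bias, (loa_lo, loa_hi) = limits_of_agreement(alg_a, alg_b)
```

Note that agreement is established without ever using `truth`; the analysis only sees the paired outputs, which is exactly the setting of the no-reference-standard designs reviewed above.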
45. Volume estimation of low-contrast lesions with CT: a comparison of performances from a phantom study, simulations and theoretical analysis.
- Author
-
Li Q, Gavrielides MA, Zeng R, Myers KJ, Sahiner B, and Petrick N
- Subjects
- Humans, Liver Neoplasms pathology, Radiation Dosage, Computer Simulation, Image Processing, Computer-Assisted methods, Liver Neoplasms diagnostic imaging, Models, Theoretical, Phantoms, Imaging, Tomography, X-Ray Computed methods, Tumor Burden
- Abstract
Measurements of lung nodule volume with multi-detector computed tomography (MDCT) have been shown to be more accurate and precise than conventional lower-dimensional measurements. Quantifying the size of lesions is potentially more difficult when the object-to-background contrast is low, as with lesions in the liver. Physical phantom and simulation studies are often utilized to analyze the bias and variance of lesion size estimates because a ground truth or reference standard can be established. In addition, it may also be useful to derive theoretical bounds as another way of characterizing lesion sizing methods. The goal of this work was to study the performance of an MDCT system for a lesion volume estimation task with object-to-background contrast less than 50 HU, and to understand the relation among the performances obtained from phantom studies, simulations, and theoretical analysis. We performed both phantom and simulation studies, and analyzed the bias and variance of volume measurements estimated by a matched-filter-based estimator. We further corroborated results with a theoretical analysis to estimate the achievable performance bound, namely the Cramér-Rao lower bound (CRLB) on the variance of the size estimates. Results showed that estimates of non-attached solid small lesion volumes with object-to-background contrast of 31-46 HU can be accurate and precise, with percent bias less than 10.8% and standard deviation of percent error (SPE) less than 4.8%, in standard-dose scans. These results are consistent with theoretical (CRLB), computational (simulation) and empirical phantom bounds. The difference between the bounds is rather small (less than 1.9% in SPE), indicating that the theoretical- and simulation-based performance bounds can be good surrogates for physical phantom studies.
- Published
- 2015
- Full Text
- View/download PDF
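The CRLB invoked in the abstract above is the standard information inequality. For a scalar parameter θ (here, the lesion volume) and image data g with likelihood p(g; θ), any unbiased estimator satisfies:

```latex
% Cramer-Rao lower bound for a scalar parameter; holds for
% unbiased estimators, with I(\theta) the Fisher information.
\operatorname{Var}\bigl(\hat{\theta}\bigr) \;\ge\; \frac{1}{I(\theta)},
\qquad
I(\theta) \;=\;
\mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}
\ln p(\mathbf{g};\theta)\right)^{\!2}\right]
```

Comparing an estimator's empirical variance from phantom or simulation studies against 1/I(θ) is what lets the authors call the theoretical bound a surrogate for physical phantom work.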
46. Task-based measures of image quality and their relation to radiation dose and patient risk.
- Author
-
Barrett HH, Myers KJ, Hoeschen C, Kupinski MA, and Little MP
- Subjects
- Humans, Image Enhancement methods, Risk, Photons adverse effects, Radiation Dosage
- Abstract
The theory of task-based assessment of image quality is reviewed in the context of imaging with ionizing radiation, and objective figures of merit (FOMs) for image quality are summarized. The variation of the FOMs with the task, the observer and especially with the mean number of photons recorded in the image is discussed. Then various standard methods for specifying radiation dose are reviewed and related to the mean number of photons in the image and hence to image quality. Current knowledge of the relation between local radiation dose and the risk of various adverse effects is summarized, and some graphical depictions of the tradeoffs between image quality and risk are introduced. Then various dose-reduction strategies are discussed in terms of their effect on task-based measures of image quality.
- Published
- 2015
- Full Text
- View/download PDF
47. Pioneers in Medical Imaging: Honoring the Memory of Robert F. Wagner.
- Author
-
Myers KJ and Chen W
- Published
- 2014
- Full Text
- View/download PDF
48. RADIANCE AND PHOTON NOISE: Imaging in geometrical optics, physical optics, quantum optics and radiology.
- Author
-
Barrett HH, Myers KJ, and Caucci L
- Abstract
A fundamental way of describing a photon-limited imaging system is in terms of a Poisson random process in spatial, angular and wavelength variables. The mean of this random process is the spectral radiance. The principle of conservation of radiance then allows a full characterization of the noise in the image (conditional on viewing a specified object). To elucidate these connections, we first review the definitions and basic properties of radiance as defined in terms of geometrical optics, radiology, physical optics and quantum optics. The propagation and conservation laws for radiance in each of these domains are reviewed. Then we distinguish four categories of imaging detectors that all respond in some way to the incident radiance, including the new category of photon-processing detectors. The relation between the radiance and the statistical properties of the detector output is discussed and related to task-based measures of image quality and the information content of a single detected photon.
- Published
- 2014
- Full Text
- View/download PDF
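Editor's note: a minimal sketch (not from the paper; the mean counts are arbitrary) of the Poisson photon-noise point made in the abstract above: if detected counts are Poisson with mean set by the radiance, the pixel signal-to-noise ratio grows as the square root of the mean photon number:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mean photon counts per pixel at three exposure levels;
# exposure scales the Poisson mean, not the noise model itself.
for m in (10.0, 100.0, 1000.0):
    counts = rng.poisson(m, size=100_000)
    snr = counts.mean() / counts.std()
    print(f"mean {m:6.0f}: SNR = {snr:5.1f}, sqrt(mean) = {np.sqrt(m):5.1f}")
```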
49. Objective assessment of image quality and dose reduction in CT iterative reconstruction.
- Author
-
Vaishnav JY, Jung WC, Popescu LM, Zeng R, and Myers KJ
- Subjects
- Humans, Models, Theoretical, Phantoms, Imaging, Radiation Dosage, Tomography, X-Ray Computed instrumentation, Tomography, X-Ray Computed standards, Algorithms, Tomography, X-Ray Computed methods
- Abstract
Purpose: Iterative reconstruction (IR) algorithms have the potential to reduce radiation dose in CT diagnostic imaging. As these algorithms become available on the market, a standardizable method of quantifying the dose reduction that a particular IR method can achieve would be valuable. Such a method would assist manufacturers in making promotional claims about dose reduction, buyers in comparing different devices, physicists in independently validating the claims, and the United States Food and Drug Administration in regulating the labeling of CT devices. However, the nonlinear nature of commercially available IR algorithms poses challenges to objectively assessing image quality, a necessary step in establishing the amount of dose reduction that a given IR algorithm can achieve without compromising that image quality. This review paper seeks to consolidate information relevant to objectively assessing the quality of CT IR images, and thereby measuring the level of dose reduction that a given IR algorithm can achieve., Methods: The authors discuss task-based methods for assessing the quality of CT IR images and evaluating dose reduction., Results: The authors explain and review recent literature on signal detection and localization tasks in CT IR image quality assessment, the design of an appropriate phantom for these tasks, possible choices of observers (including human and model observers), and methods of evaluating observer performance., Conclusions: Standardizing the measurement of dose reduction is a problem of broad interest to the CT community and to public health. A necessary step in the process is the objective assessment of CT image quality, for which various task-based methods may be suitable. This paper attempts to consolidate recent literature that is relevant to the development and implementation of task-based methods for the assessment of CT IR image quality.
- Published
- 2014
- Full Text
- View/download PDF
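Editor's note: one family of model observers mentioned in the abstract above is the channelized Hotelling observer. The following is a hedged sketch only (not the paper's method; the Gaussian signal, unit-variance white noise, and simple Gaussian radial channels are invented stand-ins for the Laguerre-Gauss or Gabor channels used in the literature) of training such an observer and scoring a known-location detection task:

```python
import numpy as np

rng = np.random.default_rng(1)
size, n_train, n_test = 32, 400, 200

# Hypothetical known-location signal: Gaussian blob at the image center.
Y, X = np.mgrid[0:size, 0:size]
r2 = (X - size / 2) ** 2 + (Y - size / 2) ** 2
sig = 0.5 * np.exp(-r2 / (2 * 3.0**2))

# Simple Gaussian radial channels of increasing width.
widths = [2.0, 4.0, 8.0, 16.0]
channels = np.stack([np.exp(-r2 / (2 * w**2)).ravel() for w in widths])
channels /= np.linalg.norm(channels, axis=1, keepdims=True)

def channel_outputs(images):
    return images.reshape(len(images), -1) @ channels.T  # (n, 4)

def make_images(n, with_signal):
    noise = rng.normal(0.0, 1.0, (n, size, size))
    return noise + (sig if with_signal else 0.0)

# Train: channel means and covariance give the Hotelling template.
v1 = channel_outputs(make_images(n_train, True))
v0 = channel_outputs(make_images(n_train, False))
S = 0.5 * (np.cov(v1.T) + np.cov(v0.T))
w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))

# Test: detectability index d' from held-out observer scores.
t1 = channel_outputs(make_images(n_test, True)) @ w
t0 = channel_outputs(make_images(n_test, False)) @ w
dprime = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t1.var() + t0.var()))
print(f"CHO detectability d' = {dprime:.2f}")
```

A dose-reduction comparison of the kind the review describes would repeat this measurement on images reconstructed by FBP and by the IR algorithm at matched or reduced dose.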
50. CT image assessment by low contrast signal detectability evaluation with unknown signal location.
- Author
-
Popescu LM and Myers KJ
- Subjects
- Algorithms, Computer Simulation, Humans, Normal Distribution, Phantoms, Imaging, Poisson Distribution, ROC Curve, Reproducibility of Results, Signal Processing, Computer-Assisted, Image Processing, Computer-Assisted methods, Radiographic Image Interpretation, Computer-Assisted methods, Tomography, X-Ray Computed
- Abstract
Purpose: To devise a new methodology for CT image quality evaluation in order to assess the dose reduction potential of new iterative reconstruction algorithms (IRA)., Methods: Because of the nonlinear behavior of IRA, the authors propose a task-based methodology consisting of measuring the detectability of small, low contrast signals at random locations. The authors test, via simulations, a phantom design that facilitates human and numerical observer studies in such conditions. The setup allows for the random selection of regions of interest (ROI) around each signal, so that the relative signal location is unknown if the ROIs are shown separately. With such a setup one can perform signal detectability measurements with a variety of image reading arrangements and data analysis methods. In this work, the authors demonstrate the use of the localization relative operating characteristic method. The phantom design also allows for efficient image evaluation utilizing an automatic signal search technique and a recently developed nonparametric data analysis method using the exponential transformation of the free response characteristic curve., Results: The authors present the application of these methods by performing a comparison between the filtered back projection (FBP) algorithm and a polychromatic iterative image reconstruction algorithm. In this generic illustration of the image evaluation framework, the expected improved performance of the IRA over FBP is confirmed., Conclusions: The results demonstrate the ability of these methods to determine signal detectability indices with good accuracy from only a small number of image samples, of the order of a few tens.
- Published
- 2013
- Full Text
- View/download PDF
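Editor's note: a hedged sketch (not the authors' phantom or analysis; the ROI size, blob amplitude and width, and noise level are invented) of the unknown-location detection idea in the abstract above, using a scanning matched filter whose maximum response over all positions scores each ROI, with a nonparametric (Mann-Whitney) AUC over the scores:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)
size, n = 32, 100

# Hypothetical low-contrast signal: a small Gaussian blob placed at a
# random location within each signal-present ROI.
amp, sw = 2.0, 2.0
ty, tx = np.mgrid[0:9, 0:9]
template = np.exp(-((tx - 4) ** 2 + (ty - 4) ** 2) / (2 * sw**2))

def max_response(img):
    # Scanning matched filter: the maximum output over all positions
    # models detection when the signal location is unknown.
    windows = sliding_window_view(img, template.shape)
    return np.einsum("ijkl,kl->ij", windows, template).max()

def make_roi(with_signal):
    img = rng.normal(0.0, 1.0, (size, size))
    if with_signal:
        cy, cx = rng.integers(4, size - 4, 2)
        Y, X = np.mgrid[0:size, 0:size]
        img += amp * np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2 * sw**2))
    return img

sp = np.array([max_response(make_roi(True)) for _ in range(n)])
sa = np.array([max_response(make_roi(False)) for _ in range(n)])
auc = np.mean(sp[:, None] > sa[None, :])  # nonparametric AUC estimate
print(f"Scanning-observer AUC: {auc:.3f}")
```

Separating ROI extraction from scoring, as the phantom design above does, is what lets the same image set feed human readers and numerical observers alike.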