4 results for "Andrew J. Buckler"
Search Results
2. Statistical issues in the comparison of quantitative imaging biomarker algorithms using pulmonary nodule volume as an example
- Author
Anthony P. Reeves, Jayashree Kalpathy-Cramer, Andrew J. Buckler, Nancy A. Obuchowski, Gene Pennello, Huiman X. Barnhart, Hyun J. Kim, and Xiao-Feng Wang
- Subjects
Diagnostic Imaging; Statistics and Probability; Reproducibility; Phantoms, Imaging; Epidemiology; Intraclass correlation; Computer science; Clinical study design; Statistics as Topic; Coverage probability; Reproducibility of Results; Solitary Pulmonary Nodule; Contrast (statistics); Repeatability; Article; Imaging phantom; Bias; Health Information Management; Research Design; Humans; Biomarker (medicine); Algorithm; Algorithms; Biomarkers
- Abstract
Quantitative imaging biomarkers are being used increasingly in medicine to diagnose and monitor patients’ disease. The computer algorithms that measure quantitative imaging biomarkers have different technical performance characteristics. In this paper we illustrate the appropriate statistical methods for assessing and comparing the bias, precision, and agreement of computer algorithms. We use data from three studies of pulmonary nodules. The first study is a small phantom study used to illustrate metrics for assessing repeatability. The second study is a large phantom study allowing assessment of four algorithms’ bias and reproducibility for measuring tumor volume and the change in tumor volume. The third study is a small clinical study of patients whose tumors were measured on two occasions. This study allows a direct assessment of six algorithms’ performance for measuring tumor change. With these three examples we compare and contrast study designs and performance metrics, and we illustrate the advantages and limitations of various common statistical methods for quantitative imaging biomarker studies.
- Published
- 2014
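The repeatability and bias metrics this abstract refers to have standard textbook forms. The following is a minimal Python sketch, using made-up phantom test-retest volumes (none of the numbers come from the paper), of how the within-subject standard deviation, repeatability coefficient, within-subject coefficient of variation, and percent bias against a known phantom volume are commonly computed.

```python
# Minimal sketch of standard repeatability and bias summaries for a QIB algorithm.
# All values below are hypothetical phantom data, invented purely for illustration.
import numpy as np

# Hypothetical test-retest volumes (mm^3) from one algorithm on 5 phantom nodules,
# each scanned twice; true_volumes are the known phantom values.
scan1 = np.array([512.0, 498.0, 1020.0, 256.0, 760.0])
scan2 = np.array([520.0, 505.0, 1003.0, 262.0, 748.0])
true_volumes = np.array([500.0, 500.0, 1000.0, 250.0, 750.0])

# Within-subject variance from paired replicates: mean of d^2 / 2.
d = scan1 - scan2
within_sd = np.sqrt(np.mean(d**2) / 2.0)

# Repeatability coefficient: 95% of replicate differences fall within +/- RC.
rc = 1.96 * np.sqrt(2.0) * within_sd

# Within-subject coefficient of variation (useful when noise scales with nodule size).
wcv = within_sd / np.mean((scan1 + scan2) / 2.0)

# Percent bias of the averaged measurements relative to the known phantom volumes.
pct_bias = 100.0 * np.mean(((scan1 + scan2) / 2.0 - true_volumes) / true_volumes)

print(f"RC = {rc:.1f} mm^3, wCV = {100 * wcv:.1f}%, bias = {pct_bias:.1f}%")
```

In a reproducibility study the replicates come from different conditions (for example, different algorithms, scanners, or sites) rather than repeat scans of the same condition, but the same paired-difference summaries apply.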
3. Quantitative imaging biomarkers: A review of statistical methods for computer algorithm comparisons
- Author
Alicia Y. Toledano, Paul E. Kinahan, Erich P. Huang, Anthony P. Reeves, Daniel C. Sullivan, Xiao-Feng Wang, Kyle J. Myers, Andrew J. Buckler, Tatiyana V. Apanasovich, Daniel P. Barboriak, Maryellen L. Giger, Gene Pennello, Lawrence H. Schwartz, Nancy A. Obuchowski, Edward F. Jackson, Hyun J. Kim, Jayashree Kalpathy-Cramer, Dmitry B. Goldgof, Robert J. Gillies, and Huiman X. Barnhart
- Subjects
Diagnostic Imaging; Statistics and Probability; Quantitative imaging; Epidemiology; Computer science; Statistics as Topic; Article; Bias; Health Information Management; Humans; Computer Simulation; Digital reference; Reference Standards; Equivalence; Phantoms, Imaging; Clinical study design; Reproducibility of Results; Computer algorithm; Research Design; Clinical diagnosis; Data mining; Algorithms; Biomarkers
- Abstract
Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research.
- Published
- 2014
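One of the "designs without a reference standard" discussed in this abstract is an agreement study between two algorithms measuring the same cases. The sketch below (hypothetical volumes, invented solely for illustration and not taken from the paper) computes two common agreement summaries: Bland-Altman 95% limits of agreement and Lin's concordance correlation coefficient.

```python
# Minimal sketch of an agreement analysis between two QIB algorithms
# when no reference standard is available. Data are hypothetical.
import numpy as np

# Hypothetical nodule volumes (mm^3) measured by two algorithms on the same 8 scans.
alg_a = np.array([310.0, 540.0, 125.0, 980.0, 410.0, 220.0, 760.0, 655.0])
alg_b = np.array([298.0, 560.0, 118.0, 1010.0, 395.0, 230.0, 745.0, 670.0])

# Bland-Altman 95% limits of agreement for the paired differences.
diff = alg_a - alg_b
loa = (diff.mean() - 1.96 * diff.std(ddof=1),
       diff.mean() + 1.96 * diff.std(ddof=1))

# Lin's concordance correlation coefficient: penalizes both poor correlation
# and systematic shifts between the two algorithms.
cov_ab = np.cov(alg_a, alg_b, ddof=1)[0, 1]
ccc = (2 * cov_ab) / (alg_a.var(ddof=1) + alg_b.var(ddof=1)
                      + (alg_a.mean() - alg_b.mean()) ** 2)

print(f"mean difference = {diff.mean():.1f} mm^3, "
      f"95% LoA = ({loa[0]:.1f}, {loa[1]:.1f}), CCC = {ccc:.3f}")
```

Sample (ddof=1) variances are used throughout for simplicity; the original definition of the concordance correlation coefficient uses population moments, and the two differ only slightly at realistic sample sizes.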
4. Meta-analysis of the technical performance of an imaging procedure: Guidelines and statistical methodology
- Author
Jingjing Ye, Kingshuk Roy Choudhury, Paul E. Kinahan, Erich P. Huang, Edward F. Jackson, Alexander R. Guimaraes, Mithat Gonen, Gudrun Zahlmann, Anthony P. Reeves, Lisa M. McShane, Xiao-Feng Wang, and Andrew J. Buckler
- Subjects
Diagnostic Imaging; Statistics and Probability; Quantitative imaging; Epidemiology; Statistics as Topic; Guidelines as Topic; Machine learning; Article; Meta-Analysis as Topic; Health Information Management; Medical imaging; Humans; Medicine; Meta-regression; Reproducibility of Results; Biomarker; Technical performance; Identification (information); Research Design; Positron emission tomography; Meta-analysis; Data mining; Artificial intelligence; Biomarkers
- Abstract
Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test–retest repeatability data for illustrative purposes.
- Published
- 2014
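The quantitative summary this abstract describes is typically a random-effects meta-analysis of a per-study repeatability metric. Below is a minimal sketch of the standard DerSimonian-Laird approach applied to hypothetical log within-subject CV estimates (the numbers are invented, not the FDG-PET data analyzed in the paper). Note that the paper's point is precisely that this standard approach can break down when individual studies are small, so treat the sketch as the baseline method the paper compares against, not as its proposed alternative.

```python
# Minimal sketch of a DerSimonian-Laird random-effects meta-analysis of a
# repeatability metric. Per-study estimates and variances are hypothetical.
import numpy as np

# Hypothetical per-study estimates of log within-subject CV and their variances,
# e.g. from test-retest repeatability studies of different sizes.
estimates = np.array([np.log(0.10), np.log(0.12), np.log(0.08), np.log(0.15)])
variances = np.array([0.010, 0.020, 0.008, 0.030])

# Fixed-effect (inverse-variance) weights and pooled estimate.
w = 1.0 / variances
fixed = np.sum(w * estimates) / np.sum(w)

# Cochran's Q and the DerSimonian-Laird estimate of between-study variance tau^2.
q = np.sum(w * (estimates - fixed) ** 2)
df = len(estimates) - 1
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects weights, pooled estimate, and its standard error.
w_re = 1.0 / (variances + tau2)
pooled = np.sum(w_re * estimates) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))

print(f"pooled wCV = {np.exp(pooled):.3f} "
      f"(95% CI {np.exp(pooled - 1.96 * se):.3f} to {np.exp(pooled + 1.96 * se):.3f}), "
      f"tau^2 = {tau2:.4f}")
```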