12 results for "David Coz"
Search Results
2. DermGAN: Synthetic Generation of Clinical Skin Images with Pathology.
- Author
- Amirata Ghorbani, Vivek Natarajan, David Coz, and Yuan Liu
- Published
- 2019
3. A deep learning system for differential diagnosis of skin diseases.
- Author
- Yuan Liu, Ayush Jain, Clara Eng, David H. Way, Kang Lee, Peggy Bui, Kimberly Kanada, Guilherme de Oliveira Marinho, Jessica Gallegos, Sara Gabriele, Vishakha Gupta, Nalini Singh, Vivek Natarajan, Rainer Hofmann-Wellenhof, Gregory S. Corrado, Lily H. Peng, Dale R. Webster, Dennis Ai, Susan Huang, Yun Liu, R. Carter Dunn, and David Coz
- Published
- 2019
4. A deep learning system for differential diagnosis of skin diseases
- Author
- Kimberly Kanada, Vishakha Gupta, Vivek T. Natarajan, David H. Way, Susan Huang, Sara Gabriele, Clara H. Eng, Rainer Hofmann-Wellenhof, Dale R. Webster, R. Carter Dunn, Kang Lee, David Coz, Lily Peng, Jessica Gallegos, Greg S. Corrado, Peggy Bui, Guilherme de Oliveira Marinho, Nalini Singh, Dennis Ai, Yuan Liu, Ayush Jain, and Yun Liu
- Subjects
Male, Female, Adult, Middle Aged, Humans, Native Hawaiian or Other Pacific Islander, Hispanic or Latino, Alaskan Natives, White People, Asian, Black or African American, Indians (North American), Skin Neoplasms, Skin Diseases, Eczema, Acne Vulgaris, Melanoma, Psoriasis, Warts, Folliculitis, Keratosis (Seborrheic), Dermatitis (Seborrheic), Carcinoma (Squamous Cell), Carcinoma (Basal Cell), Diagnosis (Differential), Differential diagnosis, Medical diagnosis, Deep Learning, Artificial intelligence, Computer Vision and Pattern Recognition, Image and Video Processing, Computer science, Photography, Telemedicine, Teledermatology, Primary care, Physicians (Primary Care), Nurse practitioners, Economic shortage, Referral, Medical physics, Dermatologists, General Medicine
- Abstract
Skin conditions affect an estimated 1.9 billion people worldwide. A shortage of dermatologists causes long wait times and leads patients to seek dermatologic care from general practitioners. However, the diagnostic accuracy of general practitioners has been reported to be only 0.24-0.70 (compared to 0.77-0.96 for dermatologists), resulting in referral errors, delays in care, and errors in diagnosis and treatment. In this paper, we developed a deep learning system (DLS) to provide a differential diagnosis of skin conditions for clinical cases (skin photographs and associated medical histories). The DLS distinguishes between 26 skin conditions that represent roughly 80% of the volume of skin conditions seen in primary care. The DLS was developed and validated using de-identified cases from a teledermatology practice serving 17 clinical sites via a temporal split: the first 14,021 cases for development and the last 3,756 cases for validation. On the validation set, where a panel of three board-certified dermatologists defined the reference standard for every case, the DLS achieved 0.71 and 0.93 top-1 and top-3 accuracies respectively. For a random subset of the validation set (n=963 cases), 18 clinicians reviewed the cases for comparison. On this subset, the DLS achieved a 0.67 top-1 accuracy, non-inferior to board-certified dermatologists (0.63, p
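The top-1 and top-3 figures above are top-k accuracies: the fraction of cases whose reference diagnosis appears among the model's k highest-ranked conditions. A minimal sketch of that metric (the scores and labels below are invented for illustration, not the study's data):

```python
import numpy as np

def top_k_accuracy(probs, labels, k):
    """Fraction of cases whose true label is among the k highest-scoring classes."""
    # Column indices of the k largest scores in each row.
    top_k = np.argsort(probs, axis=1)[:, -k:]
    # A case counts as correct if its label appears anywhere in its top-k set.
    hits = np.any(top_k == labels[:, None], axis=1)
    return hits.mean()

# Hypothetical per-class scores for 4 cases over 3 conditions.
probs = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.2, 0.7],
    [0.5, 0.3, 0.2],
])
labels = np.array([0, 2, 2, 1])
print(top_k_accuracy(probs, labels, k=1))  # 0.5: cases 2 and 4 miss at top-1
print(top_k_accuracy(probs, labels, k=2))  # 1.0: every label is in the top 2
```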
- Published
- 2020
- Full Text
5. Race- and Ethnicity-Stratified Analysis of an Artificial Intelligence–Based Tool for Skin Condition Diagnosis by Primary Care Physicians and Nurse Practitioners (Preprint)
- Author
- Ayush Jain, David Way, Vishakha Gupta, Yi Gao, Guilherme de Oliveira Marinho, Jay Hartford, Rory Sayres, Kimberly Kanada, Clara Eng, Kunal Nagpal, Karen B DeSalvo, Greg S Corrado, Lily Peng, Dale R Webster, R Carter Dunn, David Coz, Susan J Huang, Yun Liu, Peggy Bui, and Yuan Liu
- Abstract
BACKGROUND: Many dermatologic cases are first evaluated by primary care physicians or nurse practitioners. OBJECTIVE: This study aimed to evaluate an artificial intelligence (AI)-based tool that assists with interpreting dermatologic conditions. METHODS: We developed an AI-based tool and conducted a randomized multi-reader, multi-case study (20 primary care physicians, 20 nurse practitioners, and 1047 retrospective teledermatology cases) to evaluate its utility. Cases were enriched and comprised 120 skin conditions. Readers were recruited to optimize for geographical diversity; the primary care physicians practiced across 12 states (2-32 years of experience, mean 11.3 years), and the nurse practitioners practiced across 9 states (2-34 years of experience, mean 13.1 years). To avoid memory effects from incomplete washout, each case was read once by each clinician either with or without AI assistance, with the assignment randomized. The primary analyses evaluated the top-1 agreement, defined as the agreement rate of the clinicians’ primary diagnosis with the reference diagnoses provided by a panel of dermatologists (per case: 3 dermatologists from a pool of 12, practicing across 8 states, with 5-13 years of experience, mean 7.2 years). We additionally conducted subgroup analyses stratified by cases’ self-reported race and ethnicity and measured the performance spread: the maximum performance minus the minimum across subgroups. RESULTS: The AI’s standalone top-1 agreement was 63%, and AI assistance was significantly associated with higher agreement with reference diagnoses. For primary care physicians, the increase in diagnostic agreement was 10% (P CONCLUSIONS: AI assistance was associated with significantly improved diagnostic agreement with dermatologists.
Across race and ethnicity subgroups, for both primary care physicians and nurse practitioners, the effect of AI assistance remained high at 8%-15%, and the performance spread was similar at 5%-7%.
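The "performance spread" used in the subgroup analysis is simply the maximum minus the minimum of a per-subgroup agreement rate. A minimal sketch with made-up records (each record is a subgroup label plus whether the primary diagnosis matched the reference):

```python
from collections import defaultdict

def subgroup_agreement(records):
    """Per-subgroup top-1 agreement: fraction of cases in each subgroup where
    the clinician's primary diagnosis matched the reference diagnosis."""
    hits, totals = defaultdict(int), defaultdict(int)
    for subgroup, matched in records:
        totals[subgroup] += 1
        hits[subgroup] += int(matched)
    return {g: hits[g] / totals[g] for g in totals}

def performance_spread(agreement_by_subgroup):
    """Spread = maximum subgroup performance minus minimum subgroup performance."""
    values = agreement_by_subgroup.values()
    return max(values) - min(values)

# Hypothetical (subgroup, matched-reference?) case records.
records = [("A", True), ("A", True), ("A", False), ("A", True),
           ("B", True), ("B", False), ("B", True), ("B", False)]
by_group = subgroup_agreement(records)   # {"A": 0.75, "B": 0.5}
print(performance_spread(by_group))      # 0.25
```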
- Published
- 2022
- Full Text
6. Development and Assessment of an Artificial Intelligence–Based Tool for Skin Condition Diagnosis by Primary Care Physicians and Nurse Practitioners in Teledermatology Practices
- Author
- Vishakha Gupta, Rory Sayres, Ayush Jain, David H. Way, Lily Peng, Yun Liu, Karen DeSalvo, Jay David Hartford, Yuan Liu, Peggy Bui, Clara H. Eng, Greg S. Corrado, Carter Dunn, Dale R. Webster, David Coz, Guilherme de Oliveira Marinho, Yi Gao, Susan Jen Huang, Kunal Nagpal, and Kimberly Kanada
- Subjects
Teledermatology, Telemedicine, Randomization, Referral, Education, Guinea Pigs, MEDLINE, General Medicine, Mobile Applications, Search Engine, Interquartile range, Artificial Intelligence, Animals, Humans, Medical history, Medical diagnosis, Original Investigation
- Abstract
IMPORTANCE: Most dermatologic cases are initially evaluated by nondermatologists such as primary care physicians (PCPs) or nurse practitioners (NPs). OBJECTIVE: To evaluate an artificial intelligence (AI)–based tool that assists with diagnoses of dermatologic conditions. DESIGN, SETTING, AND PARTICIPANTS: This multiple-reader, multiple-case diagnostic study developed an AI-based tool and evaluated its utility. Primary care physicians and NPs retrospectively reviewed an enriched set of cases representing 120 different skin conditions. Randomization was used to ensure each clinician reviewed each case either with or without AI assistance; each clinician alternated between batches of 50 cases in each modality. The reviews occurred from February 21 to April 28, 2020. Data were analyzed from May 26, 2020, to January 27, 2021. EXPOSURES: An AI-based assistive tool for interpreting clinical images and associated medical history. MAIN OUTCOMES AND MEASURES: The primary analysis evaluated agreement with reference diagnoses provided by a panel of 3 dermatologists for PCPs and NPs. Secondary analyses included diagnostic accuracy for biopsy-confirmed cases, biopsy and referral rates, review time, and diagnostic confidence. RESULTS: Forty board-certified clinicians, including 20 PCPs (14 women [70.0%]; mean experience, 11.3 [range, 2-32] years) and 20 NPs (18 women [90.0%]; mean experience, 13.1 [range, 2-34] years) reviewed 1048 retrospective cases (672 female [64.2%]; median age, 43 [interquartile range, 30-56] years; 41 920 total reviews) from a teledermatology practice serving 11 sites and provided 0 to 5 differential diagnoses per case (mean [SD], 1.6 [0.7]). The PCPs were located across 12 states, and the NPs practiced in primary care without physician supervision across 9 states.
Artificial intelligence assistance was significantly associated with higher agreement with reference diagnoses. For PCPs, the increase in diagnostic agreement was 10% (95% CI, 8%-11%; P
- Published
- 2021
7. Agreement Between Saliency Maps and Human-Labeled Regions of Interest: Applications to Skin Disease Classification
- Author
- Aaron Loh, Kang Lee, Christof Angermueller, David Coz, Nalini Singh, Susan Huang, and Yuan Liu
- Subjects
Data collection, Computer science, Disease classification, Pattern recognition, Agreement, Debugging, Quantitative analysis, Generalizability, Artificial intelligence, Spurious relationship, Classifier
- Abstract
We propose to systematically identify potentially problematic patterns in skin disease classification models via quantitative analysis of agreement between saliency maps and human-labeled regions of interest. We further compute summary statistics describing patterns in this agreement for various stratifications of input examples. Through this analysis, we discover candidate spurious associations learned by the classifier and suggest next steps to handle such associations. Our approach can be used as a debugging tool to systematically spot difficult examples and error categories. Insights from this analysis could guide targeted data collection and improve model generalizability.
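The abstract does not spell out its agreement statistic; one common way to quantify agreement between a saliency map and a human-labeled region of interest is intersection-over-union after thresholding the map. A hedged sketch under that assumption (the threshold, array shapes, and toy masks are illustrative, not from the paper):

```python
import numpy as np

def saliency_roi_iou(saliency, roi_mask, threshold=0.5):
    """Intersection-over-union between a binarized saliency map and a
    human-labeled region-of-interest mask (both HxW)."""
    salient = saliency >= threshold
    inter = np.logical_and(salient, roi_mask).sum()
    union = np.logical_or(salient, roi_mask).sum()
    return inter / union if union else 1.0  # both empty -> perfect agreement

# Toy 4x4 example: saliency in the top-left, ROI slightly offset.
saliency = np.zeros((4, 4))
saliency[:2, :2] = 0.9          # model attends to the top-left 2x2 block
roi = np.zeros((4, 4), dtype=bool)
roi[:2, 1:3] = True             # annotator marked a shifted 2x2 block
print(saliency_roi_iou(saliency, roi))  # 2 overlapping pixels / 6 in union
```

Per-stratum summaries like those in the paper would then just aggregate this score over subsets of input examples.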
- Published
- 2020
- Full Text
8. Race- and Ethnicity-Stratified Analysis of an Artificial Intelligence–Based Tool for Skin Condition Diagnosis by Primary Care Physicians and Nurse Practitioners
- Author
- Ayush Jain, David Way, Vishakha Gupta, Yi Gao, Guilherme de Oliveira Marinho, Jay Hartford, Rory Sayres, Kimberly Kanada, Clara Eng, Kunal Nagpal, Karen B DeSalvo, Greg S Corrado, Lily Peng, Dale R Webster, R Carter Dunn, David Coz, Susan J Huang, Yun Liu, Peggy Bui, and Yuan Liu
- Abstract
Background: Many dermatologic cases are first evaluated by primary care physicians or nurse practitioners. Objective: This study aimed to evaluate an artificial intelligence (AI)-based tool that assists with interpreting dermatologic conditions. Methods: We developed an AI-based tool and conducted a randomized multi-reader, multi-case study (20 primary care physicians, 20 nurse practitioners, and 1047 retrospective teledermatology cases) to evaluate its utility. Cases were enriched and comprised 120 skin conditions. Readers were recruited to optimize for geographical diversity; the primary care physicians practiced across 12 states (2-32 years of experience, mean 11.3 years), and the nurse practitioners practiced across 9 states (2-34 years of experience, mean 13.1 years). To avoid memory effects from incomplete washout, each case was read once by each clinician either with or without AI assistance, with the assignment randomized. The primary analyses evaluated the top-1 agreement, defined as the agreement rate of the clinicians’ primary diagnosis with the reference diagnoses provided by a panel of dermatologists (per case: 3 dermatologists from a pool of 12, practicing across 8 states, with 5-13 years of experience, mean 7.2 years). We additionally conducted subgroup analyses stratified by cases’ self-reported race and ethnicity and measured the performance spread: the maximum performance minus the minimum across subgroups. Results: The AI’s standalone top-1 agreement was 63%, and AI assistance was significantly associated with higher agreement with reference diagnoses. For primary care physicians, the increase in diagnostic agreement was 10% (P Conclusions: AI assistance was associated with significantly improved diagnostic agreement with dermatologists.
Across race and ethnicity subgroups, for both primary care physicians and nurse practitioners, the effect of AI assistance remained high at 8%-15%, and the performance spread was similar at 5%-7%. Acknowledgments: This work was funded by Google LLC. Conflicts of Interest: AJ, DW, VG, YG, GOM, JH, RS, CE, KN, KBD, GSC, LP, DRW, RCD, DC, Yun Liu, PB, and Yuan Liu are or were employees of Google and own Alphabet stock.
- Published
- 2022
- Full Text
9. Using a Deep Learning Algorithm and Integrated Gradients Explanation to Assist Grading for Diabetic Retinopathy
- Author
- Ankur Taly, Derek Wu, Jesse M. Smith, Zahra Rastegar, Anthony Joseph, Naama Hammel, Arunachalam Narayanaswamy, Arjun B. Sood, Katy Blumer, Shawn Xu, Dale R. Webster, David Coz, Lily Peng, Ehsan Rahimy, Rory Sayres, Scott Barb, Greg S. Corrado, Jonathan Krause, and Michael Shumski
- Subjects
Male, Female, Humans, Fundus (eye), Sensitivity and Specificity, Likert scale, Deep Learning, Disease severity, Diagnostic technology, Photography, Diagnosis (Computer-Assisted), Grading, Reference Standards, Diabetic Retinopathy, Ophthalmologists, Ophthalmology, Reproducibility of Results, ROC Curve, Artificial intelligence, Algorithms
- Abstract
To understand the impact of deep learning diabetic retinopathy (DR) algorithms on physician readers in computer-assisted settings. Evaluation of diagnostic technology. One thousand seven hundred ninety-six retinal fundus images from 1612 diabetic patients. Ten ophthalmologists (5 general ophthalmologists, 4 retina specialists, 1 retina fellow) read images for DR severity based on the International Clinical Diabetic Retinopathy disease severity scale in each of 3 conditions: unassisted, grades only, or grades plus heatmap. Grades-only assistance comprised a histogram of DR predictions (grades) from a trained deep-learning model. For grades plus heatmap, we additionally showed explanatory heatmaps. For each experiment arm, we computed sensitivity and specificity of each reader and the algorithm for different levels of DR severity against an adjudicated reference standard. We also measured accuracy (exact 5-class level agreement and Cohen's quadratically weighted κ), reader-reported confidence (5-point Likert scale), and grading time. Readers graded more accurately with model assistance than without for the grades-only condition (P < 0.001). Grades plus heatmaps improved accuracy for patients with DR (P < 0.001), but reduced accuracy for patients without DR (P = 0.006). Both forms of assistance increased readers' sensitivity for moderate-or-worse DR (unassisted: mean, 79.4% [95% CI, 72.3%-86.5%]; grades only: mean, 87.5% [95% CI, 85.1%-89.9%]; grades plus heatmap: mean, 88.7% [95% CI, 84.9%-92.5%]) without a corresponding drop in specificity (unassisted: mean, 96.6% [95% CI, 95.9%-97.4%]; grades only: mean, 96.1% [95% CI, 95.5%-96.7%]; grades plus heatmap: mean, 95.5% [95% CI, 94.8%-96.1%]). Algorithmic assistance increased the accuracy of retina specialists above that of the unassisted reader or model alone, and increased grading confidence and grading time across all readers. For most cases, grades plus heatmap was only as effective as grades only. Over the course of the experiment, grading time decreased across all conditions, although most sharply for grades plus heatmap. Deep learning algorithms can improve the accuracy of, and confidence in, DR diagnosis in an assisted read setting. They also may increase grading time, although these effects may be ameliorated with experience.
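The accuracy analysis above uses Cohen's quadratically weighted κ on the 5-point DR grading scale. A minimal sketch of that statistic (the grade vectors below are invented for illustration, not the study's data):

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, n_classes=5):
    """Cohen's kappa with quadratic weights, suited to ordinal grades 0..n-1:
    disagreements are penalized by the squared distance between grades."""
    rater_a, rater_b = np.asarray(rater_a), np.asarray(rater_b)
    # Observed joint distribution of grade pairs.
    observed = np.zeros((n_classes, n_classes))
    for a, b in zip(rater_a, rater_b):
        observed[a, b] += 1
    observed /= observed.sum()
    # Expected distribution under independence (outer product of marginals).
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Quadratic weights: w[i, j] = (i - j)^2 / (n - 1)^2.
    i, j = np.indices((n_classes, n_classes))
    weights = ((i - j) ** 2) / (n_classes - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

grades_reader = [0, 1, 2, 2, 4, 3, 0, 1]
grades_reference = [0, 1, 2, 3, 4, 3, 0, 2]
# High agreement: the two off-by-one errors cost little under quadratic weights.
print(round(quadratic_weighted_kappa(grades_reader, grades_reference), 3))
```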
- Published
- 2018
10. The role of outsiders working in communities of the poor
- Author
- Crosscombe, David Coz
12. Introducing Ethnobiology Letters
- Author
- Steve Wolverton, Cynthia Fowler, and David Cozzo
- Subjects
Human ecology; Anthropogeography (GF1-900)
- Published
- 2010
- Full Text
Discovery Service for Jio Institute Digital Library