16 results for "Ravesloot CJ"
Search Results
2. Tests, Quizzes, and Self-Assessments: How to Construct a High-Quality Examination
- Author
-
van der Gijp, Anouk, Ravesloot, CJ, Ten Cate, Olle Th J, van Schaik, JPJ, Webb, Emily M, and Naeger, David M
- Published
- 2016
3. Tests, Quizzes, and Self-Assessments: How to Construct a High-Quality Examination
- Author
-
van der Gijp, Anouk, Ravesloot, CJ, Ten Cate, Olle Th J, van Schaik, JPJ, Webb, Emily M, and Naeger, David M
- Published
- 2016
4. Fourteen years of progress testing in radiology residency training: experiences from The Netherlands.
- Author
-
Rutgers DR, van Raamt F, van Lankeren W, Ravesloot CJ, van der Gijp A, Ten Cate TJ, and van Schaik JPJ
- Subjects
- Follow-Up Studies, Humans, Netherlands, Reproducibility of Results, Clinical Competence, Educational Measurement methods, Forecasting, Internship and Residency, Radiology education
- Abstract
Objectives: To describe the development of the Dutch Radiology Progress Test (DRPT) for knowledge testing in radiology residency training in The Netherlands from its start in 2003 up to 2016.
Methods: We reviewed all DRPTs conducted since 2003. We assessed key changes and events in the test throughout the years, as well as resident participation and dispensation for the DRPT, test reliability and discriminative power of test items.
Results: The DRPT has been conducted semi-annually since 2003, except for 2015, when one digital DRPT failed. Key changes in these years were improvements in test analysis and feedback, test digitalization (2013) and inclusion of test items on nuclear medicine (2016). From 2003 to 2016, resident dispensation rates increased (Pearson's correlation coefficient 0.74, P < 0.01) to a maximum of 16%. Cronbach's alpha for test reliability varied between 0.83 and 0.93. The percentage of DRPT test items with negative item-rest correlations, indicating relatively poor discriminative power, varied between 4% and 11%.
Conclusions: Progress testing has proven feasible and sustainable in Dutch radiology residency training, keeping up with innovations in the radiological profession. Test reliability and discriminative power of test items have remained fair over the years, while resident dispensation rates have increased.
Key Points: • Progress testing allows for monitoring knowledge development from novice to senior trainee. • In postgraduate medical training, progress testing is used infrequently. • Progress testing is feasible and sustainable in radiology residency training.
- Published
- 2018
- Full Text
- View/download PDF
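The DRPT abstract above reports Cronbach's alpha (0.83 to 0.93) for test reliability and flags items with negative item-rest correlations as weakly discriminating. As a rough illustration only (the toy score matrix, function names, and all numbers below are invented, not taken from the study), both statistics can be computed from a residents-by-items matrix of 0/1 scores:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(scores[0])                      # number of items
    def var(xs):                            # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

def item_rest_correlation(scores, j):
    """Pearson correlation of item j with the rest score (total minus item j);
    a negative value flags an item with poor discriminative power."""
    item = [row[j] for row in scores]
    rest = [sum(row) - row[j] for row in scores]
    n = len(item)
    mi, mr = sum(item) / n, sum(rest) / n
    cov = sum((a - mi) * (b - mr) for a, b in zip(item, rest))
    sd_i = sum((a - mi) ** 2 for a in item) ** 0.5
    sd_r = sum((b - mr) ** 2 for b in rest) ** 0.5
    return cov / (sd_i * sd_r)

# Toy data: 5 residents x 4 items, 1 = correct, 0 = wrong (invented).
scores = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 1, 1],
]
print(round(cronbach_alpha(scores), 3))            # low alpha: tiny toy test
print(round(item_rest_correlation(scores, 3), 3))  # negative: item 3 discriminates poorly
```

On real progress-test data the matrix would have hundreds of rows and far more items; items whose item-rest correlation comes out negative are candidates for review, which is how a figure like the 4% to 11% in the abstract is typically obtained.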
5. Increasing Authenticity of Simulation-Based Assessment in Diagnostic Radiology.
- Author
-
van der Gijp A, Ravesloot CJ, Tipker CA, de Crom K, Rutgers DR, van der Schaaf MF, van der Schaaf IC, Mol CP, Vincken KL, Ten Cate OTJ, Maas M, and van Schaik JPJ
- Subjects
- Humans, Pilot Projects, Program Development, Program Evaluation, Clinical Decision-Making, Educational Measurement methods, Internship and Residency organization & administration, Radiology education, Simulation Training organization & administration
- Abstract
Introduction: Clinical reasoning in diagnostic imaging professions is a complex skill that requires processing of visual information and image manipulation skills. We developed a digital simulation-based test method to increase the authenticity of image interpretation skill assessment.
Methods: A digital application, allowing volumetric image viewing and manipulation, was used for three test administrations of the national Dutch Radiology Progress Test for residents. This study describes the development and implementation process in three phases. To assess the authenticity of the digital tests, perceived image quality and correspondence to clinical practice were evaluated and compared with previous paper-based tests (PTs). Quantitative and qualitative evaluation results were used to improve subsequent tests.
Results: Authenticity of the first digital test was not rated higher than that of the PTs. Test characteristics and environmental conditions, such as image manipulation options and ambient lighting, were optimized based on participants' comments. After adjustments in the third digital test, participants favored the image quality and clinical correspondence of the digital image questions over paper-based image questions.
Conclusions: Digital simulations can increase the authenticity of diagnostic radiology assessments compared with paper-based testing. However, authenticity does not necessarily increase with higher fidelity. It can be challenging to simulate the image interpretation task of clinical practice in a large-scale assessment setting because of technological limitations. Optimizing image manipulation options, the level of ambient light, time limits, and question types can help improve the authenticity of simulation-based radiology assessments.
- Published
- 2017
- Full Text
- View/download PDF
6. Predictors of Knowledge and Image Interpretation Skill Development in Radiology Residents.
- Author
-
Ravesloot CJ, van der Schaaf MF, Kruitwagen CLJJ, van der Gijp A, Rutgers DR, Haaring C, Ten Cate O, and van Schaik JPJ
- Subjects
- Adult, Educational Measurement statistics & numerical data, Female, Humans, Male, Netherlands, Radiologists standards, Retrospective Studies, Clinical Competence statistics & numerical data, Internship and Residency statistics & numerical data, Radiologists statistics & numerical data, Radiology education
- Abstract
Purpose: To investigate knowledge and image interpretation skill development in residency by studying scores on knowledge and image questions on radiology tests, mediated by the training environment.
Materials and Methods: Ethical approval for the study was obtained from the ethical review board of the Netherlands Association for Medical Education. Longitudinal test data of 577 of 2884 radiology residents who took semiannual progress tests during 5 years were retrospectively analyzed by using a nonlinear mixed-effects model taking training length as input variable. Tests included nonimage and image questions that assessed knowledge and image interpretation skill. Hypothesized predictors were hospital type (academic or nonacademic), training hospital, enrollment age, sex, and test date.
Results: Scores showed a curvilinear growth during residency. Image scores increased faster during the first 3 years of residency and reached a higher maximum than knowledge scores (55.8% vs 45.1%). For 1st-year residents, the slope of image score development was 16.8%, versus 12.4% for knowledge question scores. The training hospital environment appeared to be an important predictor of both knowledge and image interpretation skill development (the maximum score difference between training hospitals was 23.2%; P < .001).
Conclusion: Expertise developed rapidly in the initial years of radiology residency and leveled off in the 3rd and 4th training years. The shape of the curve was mainly influenced by the specific training hospital.
© RSNA, 2017. Online supplemental material is available for this article.
- Published
- 2017
- Full Text
- View/download PDF
7. How visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology.
- Author
-
van der Gijp A, Ravesloot CJ, Jarodzka H, van der Schaaf MF, van der Schaaf IC, van Schaik JPJ, and Ten Cate TJ
- Subjects
- Clinical Competence, Education, Medical, Humans, Attention physiology, Eye Movements physiology, Radiology education, Visual Perception physiology
- Abstract
Eye-tracking research has been conducted for decades to gain understanding of visual diagnosis, such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology domain aims to identify visual search patterns associated with high perceptual performance. The databases PubMed, EMBASE, ERIC, PsycINFO, Scopus and Web of Science were searched using 'visual perception' OR 'eye tracking' AND 'radiology' and synonyms. Two authors independently screened search results and included eye-tracking studies concerning visual skills in radiology published between January 1, 1994 and July 31, 2015. Two authors independently assessed study quality with the Medical Education Research Study Quality Instrument, and extracted study data with respect to design, participant and task characteristics, and variables. A thematic analysis was conducted to extract and arrange study results, and a textual narrative synthesis was applied for data integration and interpretation. The search resulted in 22 relevant full-text articles. Thematic analysis resulted in six themes that informed the relation between visual search and level of expertise: (1) time on task, (2) eye movement characteristics of experts, (3) differences in visual attention, (4) visual search patterns, (5) search patterns in cross-sectional stack imaging, and (6) teaching visual search strategies. Expert search was found to be characterized by a global-focal search pattern, which represents an initial global impression, followed by a detailed, focal search-to-find mode. Specific task-related search patterns, like drilling through CT scans and systematic search in chest X-rays, were found to be related to high expertise levels. One study investigated the teaching of visual search strategies and did not find a significant effect on perceptual performance. Eye-tracking literature in radiology indicates that several search patterns are related to high levels of expertise, but teaching novices to search as an expert may not be effective. Experimental research is needed to find out which search strategies can improve image perception in learners.
- Published
- 2017
- Full Text
- View/download PDF
8. Identifying error types in visual diagnostic skill assessment.
- Author
-
Ravesloot CJ, van der Gijp A, van der Schaaf MF, Huige JCBM, Ten Cate O, Vincken KL, Mol CP, and van Schaik JPJ
- Subjects
- Educational Measurement methods, Humans, Perception, Radiography methods, Reproducibility of Results, Clinical Competence, Diagnostic Errors classification, Radiology education, Students, Medical
- Abstract
Background: Misinterpretation of medical images is an important source of diagnostic error. Errors can occur in different phases of the diagnostic process. Insight into the error types made by learners is crucial for training and giving effective feedback. Most diagnostic skill tests, however, penalize diagnostic mistakes without an eye for the diagnostic process and the type of error. A radiology test with stepwise reasoning questions was used to distinguish error types in the visual diagnostic process. We evaluated the additional value of a stepwise question format, in comparison with only diagnostic questions, in radiology tests.
Methods: Medical students in a radiology elective (n = 109) took a radiology test including 11-13 cases in stepwise question format: marking an abnormality, describing the abnormality and giving a diagnosis. Errors were coded by two independent researchers as perception, analysis, diagnosis, or undefined. Erroneous cases were further evaluated for the presence of latent errors or partial knowledge. Inter-rater reliabilities and percentages of cases with latent errors and partial knowledge were calculated.
Results: The stepwise question-format procedure applied to 1351 cases completed by 109 medical students revealed 828 errors. Mean inter-rater reliability of error type coding was Cohen's κ = 0.79. Six hundred and fifty errors (79%) could be coded as perception, analysis or diagnosis errors. The stepwise question format revealed latent errors in 9% and partial knowledge in 18% of cases.
Conclusions: A stepwise question format can reliably distinguish error types in the visual diagnostic process, and reveals latent errors and partial knowledge.
- Published
- 2017
- Full Text
- View/download PDF
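The error-coding study above reports inter-rater reliability as Cohen's κ = 0.79. For readers unfamiliar with the statistic, here is a minimal sketch of how κ is computed for two raters assigning the study's error categories; the six example codings below are invented for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability that both raters pick the same category at
    # random, given each rater's own category frequencies.
    chance = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - chance) / (1 - chance)

# Invented codings of six erroneous cases by two raters, using the study's categories.
rater_1 = ["perception", "analysis", "diagnosis", "perception", "analysis", "diagnosis"]
rater_2 = ["perception", "analysis", "diagnosis", "analysis", "analysis", "diagnosis"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # → 0.75
```

Unlike raw percent agreement (5/6 ≈ 0.83 here), κ discounts the agreement the raters would reach by chance alone, which is why it is the standard reliability measure for coding tasks like this.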
9. Tests, Quizzes, and Self-Assessments: How to Construct a High-Quality Examination.
- Author
-
van der Gijp A, Ravesloot CJ, Ten Cate OT, van Schaik JP, Webb EM, and Naeger DM
- Subjects
- Education, Medical, Continuing, Guidelines as Topic, Humans, Educational Measurement methods, Radiology education, Self-Assessment
- Abstract
Objective: The purposes of this article are to highlight aspects of tests that increase or decrease their effectiveness and to provide guidelines for constructing high-quality tests in radiology.
Conclusion: Many radiologists help construct tests for a variety of purposes. Only well-constructed tests can provide reliable and valuable information about the test taker.
- Published
- 2016
- Full Text
- View/download PDF
10. The Importance of Human-Computer Interaction in Radiology E-learning.
- Author
-
den Harder AM, Frijlingh M, Ravesloot CJ, Oosterbaan AE, and van der Gijp A
- Subjects
- Humans, Radiology trends, Radiologists education, Radiology education, User-Computer Interface
- Abstract
With the development of cross-sectional imaging techniques and the transformation to digital reading of radiological imaging, e-learning might be a promising tool in undergraduate radiology education. In this systematic review of the literature, we evaluate the emergence of image interaction possibilities in radiology e-learning programs and evidence for effects of radiology e-learning on learning outcomes and perspectives of medical students and teachers. A systematic search in PubMed, EMBASE, Cochrane, ERIC, and PsycInfo was performed. Articles were screened by two authors and included when they concerned the evaluation of radiological e-learning tools for undergraduate medical students. Nineteen articles were included. Seven studies evaluated e-learning programs with image interaction possibilities. Students perceived e-learning with image interaction possibilities to be a useful addition to learning with hard-copy images and to be effective for learning 3D anatomy. Both e-learning programs with and without image interaction possibilities were found to improve radiological knowledge and skills. In general, students found e-learning programs easy to use, rated image quality high, and found the difficulty level of the courses appropriate. Furthermore, they felt that their knowledge and understanding of radiology improved by using e-learning. In conclusion, the addition of radiology e-learning in undergraduate medical education can improve radiological knowledge and image interpretation skills. Differences between the effects of e-learning with and without image interaction possibilities on learning outcomes are unknown and should be the subject of future research.
- Published
- 2016
- Full Text
- View/download PDF
11. The don't know option in progress testing.
- Author
-
Ravesloot CJ, Van der Schaaf MF, Muijtjens AM, Haaring C, Kruitwagen CL, Beek FJ, Bakker J, Van Schaik JP, and Ten Cate TJ
- Subjects
- Cross-Over Studies, Female, Humans, Knowledge, Male, Reproducibility of Results, Sex Factors, Educational Measurement methods, Educational Measurement standards, Internship and Residency methods, Internship and Residency standards, Radiology education
- Abstract
Formula scoring (FS) is the use of a don't know option (DKO) with subtraction of points for wrong answers. Its effect on the construct validity and reliability of progress test scores is a subject of discussion. Choosing a DKO may not only be affected by knowledge level but also by risk-taking tendency, and may thus introduce construct-irrelevant variance into the knowledge measurement. On the other hand, FS may result in more reliable test scores. To evaluate the impact of FS on the construct validity and reliability of progress test scores, a progress test for radiology residents was divided into two tests of 100 parallel items (A and B). Each test had a FS and a number-right (NR) version: A-FS, B-FS, A-NR, and B-NR. Participants (337) were randomly divided into two groups. One group took test A-FS followed by B-NR, and the second group took test B-FS followed by A-NR. Evidence for impaired construct validity was sought in a hierarchical regression analysis by investigating how much of the participants' FS-score variance was explained by the DKO score, compared to the contribution of knowledge level (NR score), while controlling for group, gender, and training length. Cronbach's alpha was used to estimate NR- and FS-score reliability per year group. NR score was found to explain 27% of the variance of FS [F(1,332) = 219.2, p < 0.0005], DKO score and the interaction of DKO and gender were found to explain 8% [F(2,330) = 41.5, p < 0.0005], and the interaction of DKO and NR explained 1.6% [F(1,329) = 16.6, p < 0.0005], supporting our hypothesis that FS introduces construct-irrelevant variance into the knowledge measurement. However, NR scores showed considerably lower reliabilities than FS scores (mean year-test-group Cronbach's alphas were 0.62 and 0.74, respectively). Decisions about FS with progress tests should be a careful trade-off between systematic and random measurement error.
- Published
- 2015
- Full Text
- View/download PDF
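To make the formula-scoring (FS) versus number-right (NR) contrast in the abstract concrete, here is a small sketch under the common assumption of a 1/(k-1) penalty for wrong answers on k-option items (the two hypothetical residents and their answer patterns are invented, not study data):

```python
def number_right(responses):
    """NR scoring: 1 point per correct answer; wrong answers and 'dk' both score 0."""
    return sum(r == "correct" for r in responses)

def formula_score(responses, options_per_item=4):
    """FS scoring: +1 for correct, -1/(k-1) for wrong, 0 for the don't-know option."""
    penalty = 1 / (options_per_item - 1)
    return sum(1 if r == "correct" else (-penalty if r == "wrong" else 0)
               for r in responses)

# Two hypothetical residents with the same knowledge (6 of 10 items known):
# one guesses on every unknown item, the other uses the don't-know option.
guesser  = ["correct"] * 6 + ["correct"] * 1 + ["wrong"] * 3  # 1 lucky, 3 unlucky guesses
cautious = ["correct"] * 6 + ["dk"] * 4

print(number_right(guesser), round(formula_score(guesser), 2))    # NR rewards guessing
print(number_right(cautious), round(formula_score(cautious), 2))  # FS equalizes the pair
```

The guesser's NR score (7) exceeds the cautious resident's (6) purely through risk-taking, while both get the same FS score (6.0): this is the construct-irrelevant variance the study measures, traded off against the higher reliability FS scores showed.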
12. Volumetric and two-dimensional image interpretation show different cognitive processes in learners.
- Author
-
van der Gijp A, Ravesloot CJ, van der Schaaf MF, van der Schaaf IC, Huige JC, Vincken KL, Ten Cate OT, and van Schaik JP
- Subjects
- Cognition, Humans, Clinical Competence, Cone-Beam Computed Tomography, Radiographic Image Interpretation, Computer-Assisted standards, Radiology education
- Abstract
Rationale and Objectives: In current practice, radiologists interpret digital images, including a substantial amount of volumetric images. We hypothesized that interpretation of a stack of a volumetric data set demands different skills than interpretation of two-dimensional (2D) cross-sectional images. This study aimed to investigate and compare knowledge and skills used for interpretation of volumetric versus 2D images.
Materials and Methods: Twenty radiology clerks were asked to think out loud while reading four or five volumetric computed tomography (CT) images in stack mode and four or five 2D CT images. Cases were presented in a digital testing program allowing stack viewing of volumetric data sets and changing views and window settings. Thoughts verbalized by the participants were registered and coded by a framework of knowledge and skills concerning three components: perception, analysis, and synthesis. The components were subdivided into 16 discrete knowledge and skill elements. A within-subject analysis was performed to compare cognitive processes during volumetric image readings versus 2D cross-sectional image readings.
Results: Most utterances contained knowledge and skills concerning perception (46%). A smaller part involved synthesis (31%) and analysis (23%). More utterances regarded perception in volumetric image interpretation than in 2D image interpretation (median 48% vs 35%; z = -3.9; P < .001). Synthesis was less prominent in volumetric than in 2D image interpretation (median 28% vs 42%; z = -3.9; P < .001). No differences were found in analysis utterances.
Conclusions: Cognitive processes in volumetric and 2D cross-sectional image interpretation differ substantially. Volumetric image interpretation draws predominantly on perceptual processes, whereas 2D image interpretation is mainly characterized by synthesis. The results encourage the use of volumetric images for teaching and testing perceptual skills. (Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.)
- Published
- 2015
- Full Text
- View/download PDF
13. Volumetric CT-images improve testing of radiological image interpretation skills.
- Author
-
Ravesloot CJ, van der Schaaf MF, van Schaik JP, ten Cate OT, van der Gijp A, Mol CP, and Vincken KL
- Subjects
- Education, Medical, Continuing, Female, Humans, Male, Netherlands, Radiographic Image Enhancement standards, Reproducibility of Results, Clinical Competence standards, Cone-Beam Computed Tomography, Educational Measurement standards, Radiology education, Students, Medical statistics & numerical data
- Abstract
Rationale and Objectives: Current radiology practice increasingly involves interpretation of volumetric data sets. In contrast, most radiology tests still contain only 2D images. We introduced a new testing tool that allows for stack viewing of volumetric images in our undergraduate radiology program. We hypothesized that tests with volumetric CT-images enhance test quality, in comparison with traditional completely 2D image-based tests, because they might better reflect the skills required for clinical practice.
Materials and Methods: Two groups of medical students (n = 139; n = 143), trained with 2D and volumetric CT-images, took a digital radiology test in two versions (A and B), each containing both 2D and volumetric CT-image questions. In a questionnaire, they were asked to comment on the representativeness for clinical practice, difficulty and user-friendliness of the test questions and testing program. Students' test scores and reliabilities, measured with Cronbach's alpha, of 2D and volumetric CT-image tests were compared.
Results: Estimated reliabilities (Cronbach's alphas) were higher for volumetric CT-image scores (version A: .51 and version B: .54) than for 2D CT-image scores (version A: .24 and version B: .37). Participants found volumetric CT-image tests more representative of clinical practice, and considered them to be less difficult than 2D CT-image questions. However, in one version (A), volumetric CT-image scores (M 80.9, SD 14.8) were significantly lower than 2D CT-image scores (M 88.4, SD 10.4) (p < .001). The volumetric CT-image testing program was considered user-friendly.
Conclusion: This study shows that volumetric image questions can be successfully integrated in students' radiology testing. The results suggest that the inclusion of volumetric CT-images might improve the quality of radiology tests by positively impacting perceived representativeness for clinical practice and increasing the reliability of the test. (Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.)
- Published
- 2015
- Full Text
- View/download PDF
14. Support for external validity of radiological anatomy tests using volumetric images.
- Author
-
Ravesloot CJ, van der Gijp A, van der Schaaf MF, Huige JC, Vincken KL, Mol CP, Bleys RL, ten Cate OT, and van Schaik JP
- Subjects
- Cadaver, Female, Humans, Male, Reproducibility of Results, Surveys and Questionnaires, Education, Medical, Undergraduate, Educational Measurement methods, Radiology education
- Abstract
Rationale and Objectives: Radiology practice has become increasingly based on volumetric images (VIs), but tests in medical education still mainly involve two-dimensional (2D) images. We created a novel, digital VI test and hypothesized that scores on this test would better reflect radiological anatomy skills than scores on a traditional 2D image test. To evaluate external validity, we correlated VI and 2D image test scores with anatomy cadaver-based test scores.
Materials and Methods: In 2012, 246 medical students completed one of two comparable versions (A and B) of a digital radiology test, each containing 20 2D image and 20 VI questions. Thirty-three of these participants also took a human cadaver anatomy test. Mean scores and reliabilities of the 2D image and VI subtests were compared and correlated with human cadaver anatomy test scores. Participants received a questionnaire about the perceived representativeness and difficulty of the radiology test.
Results: Human cadaver test scores were not correlated with 2D image scores, but were significantly correlated with VI scores (r = 0.44, P < .05). Cronbach's α reliability was 0.49 (A) and 0.65 (B) for the 2D image subtests and 0.65 (A) and 0.71 (B) for the VI subtests. Mean VI scores (74.4%, standard deviation 2.9) were significantly lower than 2D image scores (83.8%, standard deviation 2.4) in version A (P < .001). VI questions were considered more representative of clinical practice and education than 2D image questions, and less difficult (both P < .001).
Conclusions: VI tests show higher reliability, a significant correlation with human cadaver test scores, and are considered more representative of clinical practice than tests with 2D images. (Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.)
- Published
- 2015
- Full Text
- View/download PDF
15. Interpretation of radiological images: towards a framework of knowledge and skills.
- Author
-
van der Gijp A, van der Schaaf MF, van der Schaaf IC, Huige JC, Ravesloot CJ, van Schaik JP, and Ten Cate TJ
- Subjects
- Humans, Netherlands, Students, Medical, Clinical Competence standards, Radiographic Image Interpretation, Computer-Assisted standards, Radiology education
- Abstract
The knowledge and skills that are required for radiological image interpretation are not well documented, even though medical imaging is gaining importance. This study aims to develop a comprehensive framework of the knowledge and skills required for two-dimensional and multiplanar image interpretation in radiology. A mixed-method study approach was applied. First, a literature search was performed to identify knowledge and skills that are important for image interpretation. Three databases, PubMed, PsycINFO and Embase, were searched for studies using synonyms of image interpretation skills or visual expertise combined with synonyms of radiology. Empirical or review studies concerning knowledge and skills for medical image interpretation were included and relevant knowledge and skill items were extracted. Second, a preliminary framework was built and discussed with nine selected experts in individual semi-structured interviews. The expert team consisted of four radiologists, one radiology resident, two education scientists, one cognitive psychologist and one neuropsychologist. The framework was optimised based on the experts' comments. Finally, the framework was applied to empirical data, derived from verbal protocols of ten clerks interpreting two-dimensional and multiplanar radiological images. In consensus meetings, adjustments were made to resolve discrepancies between the framework and the verbal protocol data. We designed a framework with three main components of image interpretation: perception, analysis and synthesis. The literature study provided four knowledge and twelve skill items. As a result of the expert interviews, one skill item was added and the formulations of existing items were adjusted. The think-aloud experiment showed that all knowledge items and three of the skill items were applied within all three main components of the image interpretation process. The remaining framework items were apparent only within one of the main components. After combining two knowledge items, we finally identified three knowledge items and thirteen skills essential for image interpretation by trainees. The framework can serve as a guideline for education and assessment of two- and three-dimensional image interpretation. Further validation of the framework in larger study groups with different levels of expertise is needed.
- Published
- 2014
- Full Text
- View/download PDF
16. Computed tomography pulmonary angiography in acute pulmonary embolism: the effect of a computer-assisted detection prototype used as a concurrent reader.
- Author
-
Wittenberg R, Peters JF, van den Berk IA, Freling NJ, Lely R, de Hoop B, Horsthuis K, Ravesloot CJ, Weber M, Prokop WM, and Schaefer-Prokop CM
- Subjects
- Acute Disease, Angiography, Diagnosis, Differential, Female, Humans, Male, Middle Aged, Retrospective Studies, Sensitivity and Specificity, Pulmonary Embolism diagnostic imaging, Radiographic Image Interpretation, Computer-Assisted methods, Tomography, X-Ray Computed methods
- Abstract
Purpose: To assess the effect of computer-assisted detection (CAD) on diagnostic accuracy, reader confidence, and reading time when used as a concurrent reader for the detection of acute pulmonary embolism in computed tomography pulmonary angiography.
Materials and Methods: In this institutional review board-approved retrospective study, 6 observers with varying experience evaluated 158 negative and 38 positive consecutive computed tomography pulmonary angiographies (mean patient age 60 y; 115 women) without and with CAD as a concurrent reader. Readers were asked to determine the presence of pulmonary embolism, assess their diagnostic confidence using a 5-point scale, and document their reading time. Results were compared with an independent standard established by 2 readers, and a third chest radiologist was consulted in case of discordant findings.
Results: Using logistic regression for repeated measurements, we found a significant increase in readers' sensitivity (P < 0.001) without loss of specificity (P = 0.855), with the effects being reader dependent (P < 0.001). Sensitivities varied from 68% to 100% without CAD and from 76% to 100% with CAD. A 2-way analysis of variance showed a small but significant decrease in reading time (P < 0.001), with the duration varying between 24 and 208 seconds without CAD and between 17 and 196 seconds with CAD, and a significant increase in readers' confidence scores using CAD as a concurrent reader (P < 0.001).
Conclusions: CAD as a concurrent reader has the potential to increase readers' sensitivity and confidence with a decrease in reading time, without loss of specificity. The differences between readers, however, require further evaluation of CAD as a concurrent reader in a larger trial before stronger conclusions can be drawn.
- Published
- 2013
- Full Text
- View/download PDF
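The CAD study reports per-reader sensitivity and specificity; as a reminder of how those figures derive from a confusion matrix, here is a tiny sketch (the study did evaluate 38 positive and 158 negative cases, but this particular reader's hit counts are invented):

```python
def sensitivity(tp, fn):
    """True-positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

# Hypothetical reader on the study's case mix (38 PE-positive, 158 PE-negative):
# detects 30 of the 38 positives and correctly clears 150 of the 158 negatives.
tp, fn = 30, 8
tn, fp = 150, 8
print(round(sensitivity(tp, fn), 2), round(specificity(tn, fp), 2))  # → 0.79 0.95
```

A concurrent-reader gain in sensitivity without loss of specificity, as the abstract reports, means tp rises (fn falls) while the tn/fp split stays essentially unchanged.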
Discovery Service for Jio Institute Digital Library