10 results for "Huige, Josephine C B M"
Search Results
2. Support for external validity of radiological anatomy tests using volumetric images
- Author
-
Ravesloot, Cécile J., van der Gijp, Anouk, van der Schaaf, Marieke F., Huige, Josephine C B M, Vincken, Koen L., Mol, Christian P., Bleys, Ronald L A W, ten Cate, Olle T., van Schaik, Jan P J, Leerstoel Brekelmans, and Education and Learning: Development in Interaction
- Subjects
- Male, Pathology, medicine.medical_specialty, Testing, External validity, Correlation, Cronbach's alpha, Digital radiology, Cadaver, Surveys and Questionnaires, medicine, Humans, Radiology, Nuclear Medicine and imaging, Radiology test, business.industry, Reproducibility of Results, Volumetric datasets, Radiological anatomy, Radiological image interpretation, Test (assessment), Physical therapy, Radiology education, Volumetric images, Female, Educational Measurement, business, Radiology, Education, Medical, Undergraduate
- Abstract
RATIONALE AND OBJECTIVES: Radiology practice has become increasingly based on volumetric images (VIs), but tests in medical education still mainly involve two-dimensional (2D) images. We created a novel, digital, VI test and hypothesized that scores on this test would better reflect radiological anatomy skills than scores on a traditional 2D image test. To evaluate external validity we correlated VI and 2D image test scores with anatomy cadaver-based test scores. MATERIALS AND METHODS: In 2012, 246 medical students completed one of two comparable versions (A and B) of a digital radiology test, each containing 20 2D image and 20 VI questions. Thirty-three of these participants also took a human cadaver anatomy test. Mean scores and reliabilities of the 2D image and VI subtests were compared and correlated with human cadaver anatomy test scores. Participants received a questionnaire about perceived representativeness and difficulty of the radiology test. RESULTS: Human cadaver test scores were not correlated with 2D image scores, but significantly correlated with VI scores (r = 0.44, P < .05). Cronbach's α reliability was 0.49 (A) and 0.65 (B) for the 2D image subtests and 0.65 (A) and 0.71 (B) for VI subtests. Mean VI scores (74.4%, standard deviation 2.9) were significantly lower than 2D image scores (83.8%, standard deviation 2.4) in version A (P < .001). VI questions were considered more representative of clinical practice and education than 2D image questions and less difficult (both P < .001). CONCLUSIONS: VI tests show higher reliability, a significant correlation with human cadaver test scores, and are considered more representative for clinical practice than tests with 2D images.
- Published
- 2015
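The abstract above reports two key statistics: Cronbach's α for the reliability of each subtest and a Pearson correlation between volumetric-image test scores and human cadaver anatomy scores. The sketch below is not the authors' code; it only illustrates how these two statistics are conventionally computed, using fabricated placeholder data sized to the study (a 20-item subtest, 33 participants with cadaver scores).

```python
# Illustrative only: Cronbach's alpha for a subtest and a Pearson correlation
# with an external criterion, computed on fabricated placeholder data.
import numpy as np
from scipy.stats import pearsonr

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = examinees, columns = scored test items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
vi_items = rng.integers(0, 2, size=(33, 20))     # hypothetical 20-item VI subtest, 33 examinees
cadaver_scores = rng.uniform(50, 100, size=33)   # hypothetical cadaver anatomy test scores

alpha = cronbach_alpha(vi_items)
r, p = pearsonr(vi_items.mean(axis=1) * 100, cadaver_scores)  # percent-correct vs cadaver score
print(f"Cronbach's alpha = {alpha:.2f}, r = {r:.2f}, P = {p:.3f}")
```

With random placeholder data the printed values will not resemble those in the abstract; the point is only the mechanics of the two computations being reported.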
3. Volumetric and two-dimensional image interpretation show different cognitive processes in learners
- Author
-
van der Gijp, Anouk, Ravesloot, Cécile J., van der Schaaf, Marieke F., van der Schaaf, Irene C., Huige, Josephine C B M, Vincken, Koen L., Ten Cate, Olle Th J, and van Schaik, Jan P J
- Abstract
Rationale and Objectives: In current practice, radiologists interpret digital images, including a substantial amount of volumetric images. We hypothesized that interpretation of a stack of a volumetric data set demands different skills than interpretation of two-dimensional (2D) cross-sectional images. This study aimed to investigate and compare knowledge and skills used for interpretation of volumetric versus 2D images. Materials and Methods: Twenty radiology clerks were asked to think out loud while reading four or five volumetric computed tomography (CT) images in stack mode and four or five 2D CT images. Cases were presented in a digital testing program allowing stack viewing of volumetric data sets and changing views and window settings. Thoughts verbalized by the participants were registered and coded by a framework of knowledge and skills concerning three components: perception, analysis, and synthesis. The components were subdivided into 16 discrete knowledge and skill elements. A within-subject analysis was performed to compare cognitive processes during volumetric image readings versus 2D cross-sectional image readings. Results: Most utterances contained knowledge and skills concerning perception (46%). A smaller part involved synthesis (31%) and analysis (23%). More utterances regarded perception in volumetric image interpretation than in 2D image interpretation (Median 48% vs 35%; z=-3.9; P<.001). Synthesis was less prominent in volumetric than in 2D image interpretation (Median 28% vs 42%; z=-3.9; P<.001). No differences were found in analysis utterances. Conclusions: Cognitive processes in volumetric and 2D cross-sectional image interpretation differ substantially. Volumetric image interpretation draws predominantly on perceptual processes, whereas 2D image interpretation is mainly characterized by synthesis. The results encourage the use of volumetric images for teaching and testing perceptual skills.
- Published
- 2015
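The within-subject comparison reported above (e.g., a median perception share of 48% for volumetric versus 35% for 2D readings, z = -3.9) is the kind of paired, non-parametric contrast typically run as a Wilcoxon signed-rank test; treating it as such here is an assumption, not a statement of the authors' exact procedure. The sketch below uses invented per-participant percentages for the 20 readers.

```python
# Illustrative only: paired comparison of per-participant utterance percentages
# (volumetric vs 2D reading) with a Wilcoxon signed-rank test; data are invented.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
n_readers = 20
perception_volumetric = rng.normal(48, 8, n_readers)  # hypothetical % of perception utterances, stack viewing
perception_2d = rng.normal(35, 8, n_readers)          # hypothetical % of perception utterances, 2D viewing

stat, p = wilcoxon(perception_volumetric, perception_2d)
print(f"median VI = {np.median(perception_volumetric):.0f}%, "
      f"median 2D = {np.median(perception_2d):.0f}%, W = {stat:.1f}, P = {p:.4f}")
```

Reporting the result as a z value, as the abstract does, corresponds to the normal approximation of the signed-rank statistic.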
4. Support for external validity of radiological anatomy tests using volumetric images
- Author
-
Leerstoel Brekelmans, Education and Learning: Development in Interaction, Ravesloot, Cécile J., van der Gijp, Anouk, van der Schaaf, Marieke F., Huige, Josephine C B M, Vincken, Koen L., Mol, Christian P., Bleys, Ronald L A W, ten Cate, Olle T., and van Schaik, Jan P J
- Published
- 2015
5. Volumetric and two-dimensional image interpretation show different cognitive processes in learners
- Author
-
Leerstoel Brekelmans, Education and Learning: Development in Interaction, van der Gijp, Anouk, Ravesloot, Cécile J., van der Schaaf, Marieke F., van der Schaaf, Irene C., Huige, Josephine C B M, Vincken, Koen L., Ten Cate, Olle Th J, and van Schaik, Jan P J
- Published
- 2015
6. Support for external validity of radiological anatomy tests using volumetric images
- Author
-
Onderzoek Beeld, Arts-assistenten Radiologie, Beeldverwerking ISI, Brain, Other research (not in main researchprogram), Cancer, Anatomie, Circulatory Health, Expertisecentrum Alg., Arts-Assistenten Onderwijs Radiologie, Ravesloot, Cecile J., van der Gijp, Anouk, van der Schaaf, Marieke F, Huige, Josephine C B M, Vincken, Koen L, Mol, Christian P, Bleys, Ronald L A W, ten Cate, Olle T, and van Schaik, JPJ
- Published
- 2015
7. Volumetric and two-dimensional image interpretation show different cognitive processes in learners
- Author
-
Arts-assistenten Radiologie, MS Radiologie, Beeldverwerking ISI, Brain, Other research (not in main researchprogram), Cancer, Expertisecentrum Alg., Arts-Assistenten Onderwijs Radiologie, van der Gijp, Anouk, Ravesloot, C.J., van der Schaaf, Marieke F, van der Schaaf, Irene C, Huige, Josephine C B M, Vincken, Koen L, Ten Cate, Olle Th J, and van Schaik, JPJ
- Published
- 2015
8. Identifying error types in visual diagnostic skill assessment.
- Author
-
Ravesloot CJ, van der Gijp A, van der Schaaf MF, Huige JCBM, Ten Cate O, Vincken KL, Mol CP, and van Schaik JPJ
- Subjects
- Educational Measurement methods, Humans, Perception, Radiography methods, Reproducibility of Results, Clinical Competence, Diagnostic Errors classification, Radiology education, Students, Medical
- Abstract
Background: Misinterpretation of medical images is an important source of diagnostic error. Errors can occur in different phases of the diagnostic process, and insight into the error types made by learners is crucial for training and for giving effective feedback. Most diagnostic skill tests, however, penalize diagnostic mistakes without regard for the diagnostic process or the type of error. A radiology test with stepwise reasoning questions was used to distinguish error types in the visual diagnostic process. We evaluated the additional value of a stepwise question format, in comparison with diagnosis-only questions, in radiology tests. Methods: Medical students in a radiology elective (n=109) took a radiology test including 11-13 cases in stepwise question format: marking an abnormality, describing the abnormality, and giving a diagnosis. Errors were coded by two independent researchers as perception, analysis, diagnosis, or undefined. Erroneous cases were further evaluated for the presence of latent errors or partial knowledge. Inter-rater reliabilities and percentages of cases with latent errors and partial knowledge were calculated. Results: The stepwise question format applied to 1351 cases completed by 109 medical students revealed 828 errors. Mean inter-rater reliability of error type coding was Cohen's κ=0.79. Six hundred and fifty errors (79%) could be coded as perception, analysis, or diagnosis errors. The stepwise question format revealed latent errors in 9% and partial knowledge in 18% of cases. Conclusions: A stepwise question format can reliably distinguish error types in the visual diagnostic process, and it reveals latent errors and partial knowledge.
- Published
- 2017
- Full Text
- View/download PDF
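The reliability figure in the abstract above (Cohen's κ = 0.79 between two researchers coding each error as perception, analysis, diagnosis, or undefined) is a standard chance-corrected agreement measure. The following is a minimal sketch with fabricated codes, not the authors' data or code; `cohen_kappa_score` from scikit-learn is one common implementation.

```python
# Illustrative only: inter-rater agreement (Cohen's kappa) over two raters'
# error-type codes; the label sequences are fabricated placeholders.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["perception", "analysis", "diagnosis", "perception", "undefined",
           "diagnosis", "analysis", "perception", "diagnosis", "analysis"]
rater_2 = ["perception", "analysis", "diagnosis", "analysis", "undefined",
           "diagnosis", "analysis", "perception", "perception", "analysis"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")  # agreement corrected for chance
```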
9. Volumetric and two-dimensional image interpretation show different cognitive processes in learners.
- Author
-
van der Gijp A, Ravesloot CJ, van der Schaaf MF, van der Schaaf IC, Huige JC, Vincken KL, Ten Cate OT, and van Schaik JP
- Subjects
- Cognition, Humans, Clinical Competence, Cone-Beam Computed Tomography, Radiographic Image Interpretation, Computer-Assisted standards, Radiology education
- Abstract
Rationale and Objectives: In current practice, radiologists interpret digital images, including a substantial amount of volumetric images. We hypothesized that interpretation of a stack of a volumetric data set demands different skills than interpretation of two-dimensional (2D) cross-sectional images. This study aimed to investigate and compare knowledge and skills used for interpretation of volumetric versus 2D images. Materials and Methods: Twenty radiology clerks were asked to think out loud while reading four or five volumetric computed tomography (CT) images in stack mode and four or five 2D CT images. Cases were presented in a digital testing program allowing stack viewing of volumetric data sets and changing views and window settings. Thoughts verbalized by the participants were registered and coded by a framework of knowledge and skills concerning three components: perception, analysis, and synthesis. The components were subdivided into 16 discrete knowledge and skill elements. A within-subject analysis was performed to compare cognitive processes during volumetric image readings versus 2D cross-sectional image readings. Results: Most utterances contained knowledge and skills concerning perception (46%). A smaller part involved synthesis (31%) and analysis (23%). More utterances regarded perception in volumetric image interpretation than in 2D image interpretation (Median 48% vs 35%; z = -3.9; P < .001). Synthesis was less prominent in volumetric than in 2D image interpretation (Median 28% vs 42%; z = -3.9; P < .001). No differences were found in analysis utterances. Conclusions: Cognitive processes in volumetric and 2D cross-sectional image interpretation differ substantially. Volumetric image interpretation draws predominantly on perceptual processes, whereas 2D image interpretation is mainly characterized by synthesis. The results encourage the use of volumetric images for teaching and testing perceptual skills. (Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.)
- Published
- 2015
- Full Text
- View/download PDF
10. Support for external validity of radiological anatomy tests using volumetric images.
- Author
-
Ravesloot CJ, van der Gijp A, van der Schaaf MF, Huige JC, Vincken KL, Mol CP, Bleys RL, ten Cate OT, and van Schaik JP
- Subjects
- Cadaver, Female, Humans, Male, Reproducibility of Results, Surveys and Questionnaires, Education, Medical, Undergraduate, Educational Measurement methods, Radiology education
- Abstract
Rationale and Objectives: Radiology practice has become increasingly based on volumetric images (VIs), but tests in medical education still mainly involve two-dimensional (2D) images. We created a novel, digital, VI test and hypothesized that scores on this test would better reflect radiological anatomy skills than scores on a traditional 2D image test. To evaluate external validity we correlated VI and 2D image test scores with anatomy cadaver-based test scores. Materials and Methods: In 2012, 246 medical students completed one of two comparable versions (A and B) of a digital radiology test, each containing 20 2D image and 20 VI questions. Thirty-three of these participants also took a human cadaver anatomy test. Mean scores and reliabilities of the 2D image and VI subtests were compared and correlated with human cadaver anatomy test scores. Participants received a questionnaire about perceived representativeness and difficulty of the radiology test. Results: Human cadaver test scores were not correlated with 2D image scores, but significantly correlated with VI scores (r = 0.44, P < .05). Cronbach's α reliability was 0.49 (A) and 0.65 (B) for the 2D image subtests and 0.65 (A) and 0.71 (B) for VI subtests. Mean VI scores (74.4%, standard deviation 2.9) were significantly lower than 2D image scores (83.8%, standard deviation 2.4) in version A (P < .001). VI questions were considered more representative of clinical practice and education than 2D image questions and less difficult (both P < .001). Conclusions: VI tests show higher reliability, a significant correlation with human cadaver test scores, and are considered more representative for clinical practice than tests with 2D images. (Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.)
- Published
- 2015
- Full Text
- View/download PDF