43 results for "Lambert, W."
Search Results
2. Embedding a Coaching Culture into Programmatic Assessment
- Author
-
Svetlana Michelle King, Lambert W. T. Schuwirth, and Johanna H. Jordaan
- Subjects
coaching, programmatic assessment, educational change, case study, Education
- Abstract
Educational change in higher education is challenging and complex, requiring engagement with a multitude of perspectives and contextual factors. In this paper, we present a case study based on our experiences of enacting a fundamental educational change in a medical program; namely, the steps taken in the transition to programmatic assessment. Specifically, we reflect on the successes and failures in embedding a coaching culture into programmatic assessment. To do this, we refer to the principles of programmatic assessment as they apply to this case and conclude with some key lessons that we have learnt from engaging in this change process. Fostering a culture of programmatic assessment that supports learners to thrive through coaching has required compromise and adaptability, particularly in light of the changes to teaching and learning necessitated by the global pandemic. We continue to inculcate this culture and enact the principles of programmatic assessment with a focus on continuous quality improvement.
- Published
- 2022
- Full Text
- View/download PDF
3. Assessment in the context of problem-based learning
- Author
-
Cees P. M. van der Vleuten and Lambert W. T. Schuwirth
- Subjects
Cooperative learning, DONT KNOWS, Problem-based learning, IMPACT, Constructive alignment, Rote learning, Assessment, Article, CLINICAL SKILLS, Education, Programmatic assessment, CULTURE, Alternative assessment, Educational assessment, MEDICAL-EDUCATION, VALIDITY, Competency-based medical education, Independent study, Education, Medical, FEEDBACK, Progress test, General Medicine, COMPETENCE, Competency-Based Education, Progress testing, RELIABILITY, Psychology, Program Evaluation
- Abstract
Arguably, constructive alignment has been the major challenge for assessment in the context of problem-based learning (PBL). PBL focuses on promoting abilities such as clinical reasoning, team skills and metacognition. PBL also aims to foster self-directed learning and deep learning as opposed to rote learning. This has incentivized researchers in assessment to find possible solutions. Originally, these solutions were sought in developing the right instruments to measure these PBL-related skills. The search for these instruments has been accelerated by the emergence of competency-based education. With competency-based education, assessment moved away from purely standardized testing, relying more heavily on professional judgment of complex skills. Valuable lessons have been learned that are directly relevant for assessment in PBL. Later, solutions were sought in the development of new assessment strategies, initially again with individual instruments such as progress testing, but later through a more holistic approach to the assessment program as a whole. Programmatic assessment is such an integral approach to assessment. It focuses on optimizing learning through assessment, while at the same time gathering rich information that can be used for rigorous decision-making about learner progression. Programmatic assessment comes very close to achieving the desired constructive alignment with PBL, but its wide adoption—just like that of PBL—will take many years.
- Published
- 2019
- Full Text
- View/download PDF
4. Assuring the quality of programmatic assessment: Moving beyond psychometrics
- Author
-
Sebastian Uijtdehaage and Lambert W. T. Schuwirth
- Subjects
Psychometrics, Universities, education, Learning curves, Education, Programmatic assessment, Competency development, Humans, Quality, Longitudinal Studies, Students, Qualitative Research, Netherlands, Retrospective Studies, Reproducibility of Results, Competency-Based Education, Performance-relevant information, Outcome-based education, Commentary, Original Article, Clinical Competence, Educational Measurement, Psychology, Education, Veterinary
- Abstract
Introduction: Competency-based education (CBE) is now pervasive in health professions education. A foundational principle of CBE is to assess and identify the progression of competency development in students over time. It has been argued that a programmatic approach to assessment in CBE maximizes student learning. The aim of this study is to investigate if programmatic assessment, i.e., a system of assessment, can be used within a CBE framework to track progression of student learning within and across competencies over time. Methods: Three workplace-based assessment methods were used to measure the same seven competency domains. We performed a retrospective quantitative analysis of 327,974 assessment data points from 16,575 completed assessment forms from 962 students over 124 weeks using both descriptive (visualization) and modelling (inferential) analyses. This included multilevel random coefficient modelling and generalizability theory. Results: Random coefficient modelling indicated that variance due to differences in inter-student performance was highest (40%). The reliability coefficients of scores from assessment methods ranged from 0.86 to 0.90. Method and competency variance components were in the small-to-moderate range. Discussion: The current validation evidence provides cause for optimism regarding the explicit development and implementation of a program of assessment within CBE. The majority of the variance in scores appears to be student-related and reliable, supporting the psychometric properties as well as both formative and summative score applications.
- Published
- 2018
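The abstract above reports variance components and reliability coefficients side by side without showing how the two relate. As a rough illustration only, the sketch below (Python, with invented variance components that are not the study's data) shows how a generalizability-style reliability coefficient follows from a student (true-score) variance component and a residual component averaged over the number of observations per student.

```python
# Minimal, hypothetical illustration (not the study's analysis or data) of how a
# generalizability-style reliability coefficient is obtained from variance components.

def g_coefficient(var_student: float, var_residual: float, n_obs: int) -> float:
    """Relative reliability for a simple one-facet design: student (true-score)
    variance divided by itself plus the residual variance averaged over n_obs."""
    return var_student / (var_student + var_residual / n_obs)

# Invented components, loosely echoing the ~40% student share reported above.
var_student, var_residual = 0.40, 0.60
for n in (1, 5, 20):
    r = g_coefficient(var_student, var_residual, n)
    print(f"observations per student = {n:2d} -> reliability = {r:.2f}")
```

With enough observations per student, reliabilities in the 0.86-0.90 range reported above become plausible even when a single observation would be quite unreliable; that is the usual argument for aggregating many workplace-based assessments.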
5. Expertise in performance assessment: assessors' perspectives
- Author
-
Christoph Berendonk, Renée E. Stalmeijer, and Lambert W. T. Schuwirth
- Subjects
Semi-structured interview, Adult, Male, Performance appraisal, Health Knowledge, Attitudes, Practice, Faculty, Medical, Applied psychology, Exploratory research, Assessment, Grounded theory, Article, Education, Expertise development, Professional Competence, Surveys and Questionnaires, Humans, Qualitative Research, Netherlands, Assessment for learning, General Medicine, Middle Aged, Competency-Based Education, Personal development, Female, Faculty development, Psychology, Social psychology, Decision making
- Abstract
The recent rise of interest among the medical education community in individual faculty making subjective judgments about medical trainee performance appears to be directly related to the introduction of notions of integrated competency-based education and assessment for learning. Although it is known that assessor expertise plays an important role in performance assessment, the roles played by different factors remain to be unraveled. We therefore conducted an exploratory study with the aim of building a preliminary model to gain a better understanding of assessor expertise. Using a grounded theory approach, we conducted seventeen semi-structured interviews with individual faculty members who differed in professional background and assessment experience. The interviews focused on participants' perceptions of how they arrived at judgments about student performance. The analysis resulted in three categories (assessor characteristics, assessors' perceptions of the assessment tasks, and the assessment context) and three recurring themes within these categories (perceived challenges, coping strategies, and personal development). Central to understanding the key processes in performance assessment appear to be the dynamic interrelatedness of the different factors and the developmental nature of the processes. The results are supported by literature from the field of expertise development and are in line with findings from social cognition research. The conceptual framework has implications for faculty development and the design of programs of assessment.
- Published
- 2013
- Full Text
- View/download PDF
6. Modelling the pre-assessment learning effects of assessment: evidence in the validity chain
- Author
-
Francois J Cilliers, Lambert W T Schuwirth, and Cees P M van der Vleuten
- Subjects
Male, Educational measurement, Models, Educational, Students, Medical, Cross-sectional study, Applied psychology, Population, Psychological intervention, Assessment, Grounded theory, Education, Learning effect, Surveys and Questionnaires, Humans, Learning, education, Education, Medical, Reproducibility of Results, General Medicine, Cross-Sectional Studies, Female, Educational Measurement, Psychology, Social psychology, Pre-assessment
- Abstract
OBJECTIVES We previously developed a model of the pre-assessment learning effects of consequential assessment and started to validate it. The model comprises assessment factors, mechanism factors and learning effects. The purpose of this study was to continue the validation process. For stringency, we focused on a subset of assessment factor–learning effect associations that featured least commonly in a baseline qualitative study. Our aims were to determine whether these uncommon associations were operational in a broader but similar population to that in which the model was initially derived. METHODS A cross-sectional survey of 361 senior medical students at one medical school was undertaken using a purpose-made questionnaire based on a grounded theory and comprising pairs of written situational tests. In each pair, the manifestation of an assessment factor was varied. The frequencies at which learning effects were selected were compared for each item pair, using an adjusted alpha to assign significance. The frequencies at which mechanism factors were selected were calculated. RESULTS There were significant differences in the learning effect selected between the two scenarios of an item pair for 13 of this subset of 21 uncommon associations, even when a p-value of < 0.00625 was considered to indicate significance. Three mechanism factors were operational in most scenarios: agency, response efficacy, and response value. CONCLUSIONS For a subset of uncommon associations in the model, the role of most assessment factor–learning effect associations and the mechanism factors involved were supported in a broader but similar population to that in which the model was derived. Although model validation is an ongoing process, these results move the model one step closer to the stage of usefully informing interventions. Results illustrate how factors not typically included in studies of the learning effects of assessment could confound the results of interventions aimed at using assessment to influence learning.
- Published
- 2012
7. The use of progress testing
- Author
-
Lambert W. T. Schuwirth and Cees P. M. Van der Vleuten
- Subjects
Predictive validity, Appeal, Review Article, Benchmarking, Assessment, Collaboration, Popularity, Education, Progress testing, Equating, Learning, Educational, Curriculum, Competence, Activities
- Abstract
Progress testing is gaining ground rapidly after having been used almost exclusively in Maastricht and Kansas City. This increased popularity is understandable considering the intuitive appeal longitudinal testing has as a way to predict future competence and performance. Yet there are also important practicalities. Progress testing is longitudinal assessment in that it is based on subsequent equivalent, yet different, tests. The results of these are combined to determine the growth of functional medical knowledge for each student, enabling more reliable and valid decision making about promotion to a next study phase. The longitudinal integrated assessment approach has a demonstrable positive effect on student learning behaviour by discouraging binge learning. Furthermore, it leads to more reliable decisions as well as good predictive validity for future competence or retention of knowledge. Also, because of its integration and independence of local curricula, it can be used in a multi-centre collaborative production and administration framework, reducing costs, increasing efficiency and allowing for constant benchmarking. Practicalities include the relative unfamiliarity of faculty with the concept, the fact that remediation for students with a series of poor results is time consuming, the need to embed the instrument carefully into the existing assessment programme and the importance of equating subsequent tests to minimize test-to-test variability in difficulty. Where it has been implemented—collaboratively—progress testing has led to satisfaction, provided the practicalities are heeded well.
- Published
- 2012
- Full Text
- View/download PDF
8. Programmatic assessment and Kane’s validity perspective
- Author
-
Lambert W T Schuwirth and Cees P M van der Vleuten
- Subjects
Educational measurement, Psychometrics, Process, Judgement, Inference, Context, General Medicine, Education, Quality, Construct, Psychology, Social psychology, Cognitive psychology
- Abstract
CONTEXT: Programmatic assessment is a notion that implies that the strength of the assessment process results from a careful combination of various assessment instruments. Accordingly, no single instrument is superior to another, but each has its own strengths, weaknesses and purpose in a programme. Yet, in terms of psychometric methods, a one-size-fits-all approach is often used. Kane's views on validity as represented by a series of arguments provide a useful framework from which to highlight the value of different widely used approaches to improve the quality and validity of assessment procedures. METHODS: In this paper we discuss four inferences which form part of Kane's validity theory: from observations to scores; from scores to universe scores; from universe scores to target domain; and from target domain to construct. For each of these inferences, we provide examples and descriptions of approaches and arguments that may help to support the validity inference. CONCLUSIONS: As well as standard psychometric methods, a programme of assessment makes use of various other arguments, such as: item review and quality control, structuring and examiner training; probabilistic methods, saturation approaches and judgement processes; and epidemiological methods, collation, triangulation and member-checking procedures. In an assessment programme each of these can be used.
- Published
- 2011
- Full Text
- View/download PDF
9. General overview of the theories used in assessment: AMEE Guide No. 57
- Author
-
Lambert W. T. Schuwirth and Cees P. M. van der Vleuten
- Subjects
Health Knowledge, Attitudes, Practice, Models, Educational, Psychometrics, Scientific theory, Semantic network, Education, Classical test theory, Dreyfus model of skill acquisition, Cognition, Item response theory, Humans, Learning, Problem Solving, Education, Medical, General Medicine, Assessment for learning, United States, Evaluation Studies as Topic, Data Interpretation, Statistical, Clinical Competence, Explicit knowledge, Psychological Theory, Psychology, Social psychology, Cognitive psychology
- Abstract
There are no scientific theories that are uniquely related to assessment in medical education. There are many theories in adjacent fields, however, that can be informative for assessment in medical education, and in recent decades they have proven their value. In this AMEE Guide we discuss theories on expertise development and psychometric theories, and the relatively young and emerging framework of assessment for learning. Expertise theories highlight the multistage processes involved. The transition from novice to expert is characterised by an increase in the aggregation of concepts from isolated facts, through semantic networks to illness scripts and instance scripts. The latter two stages enable the expert to recognise the problem quickly and form a quick and accurate representation of the problem in his/her working memory. The striking difference between experts and novices is not per se the possession of more explicit knowledge but the superior organisation of knowledge in the expert's brain and its pairing with multiple real experiences, enabling not only better but also more efficient problem solving. Psychometric theories focus on the validity of the assessment (does it measure what it purports to measure?) and its reliability (are the outcomes of the assessment reproducible?). Validity is currently seen as building a train of arguments about how observations of behaviour (answering a multiple-choice question is also a behaviour) can best be translated into scores and how these can ultimately be used to make inferences about the construct of interest. Reliability theories can be categorised into classical test theory, generalisability theory and item response theory. All three approaches have specific advantages and disadvantages and different areas of application. Finally, in the Guide we discuss the phenomenon of assessment for learning as opposed to assessment of learning and its implications for current and future development and research.
- Published
- 2011
- Full Text
- View/download PDF
10. Opinion versus value; local versus global: what determines our future research agenda?
- Author
-
Paul Worley and Lambert W T Schuwirth
- Subjects
Male, Clinical Practice, Education, Medical, Research, Humans, Female, General Medicine, Social science, Education
- Published
- 2014
- Full Text
- View/download PDF
11. The mechanism of impact of summative assessment on medical students' learning
- Author
-
Francois J. Cilliers, Lambert W. Schuwirth, Hanelie J. Adendorff, Nicoline Herman, and Cees P. van der Vleuten
- Subjects
Male, Models, Educational, Educational measurement, Students, Medical, Higher education, education, Self-concept, Assessment, Article, Education, Formative assessment, Cognition, Agency, Pedagogy, Humans, Learning, Self-efficacy, Motivation, Medical education, Education, Medical, Determinants of action, Teaching, Mechanism of impact, General Medicine, Self Concept, Self Efficacy, Summative assessment, Educational Status, Female, Educational Measurement, Psychology
- Abstract
It has become axiomatic that assessment impacts powerfully on student learning, but there is a surprising dearth of research on how. This study explored the mechanism of impact of summative assessment on the process of learning of theory in higher education. Individual, in-depth interviews were conducted with medical students and analyzed qualitatively. The impact of assessment on learning was mediated through various determinants of action. Respondents’ learning behaviour was influenced by: appraising the impact of assessment; appraising their learning response; their perceptions of agency; and contextual factors. This study adds to scant extant evidence and proposes a mechanism to explain this impact. It should help enhance the use of assessment as a tool to augment learning.
- Published
- 2010
- Full Text
- View/download PDF
12. Combined formative and summative professional behaviour assessment approach in the bachelor phase of medical school: A Dutch perspective
- Author
-
Walther N. K. A. van Mook, Scheltus J. Van Luijk, Marij J. G. Fey-Schoenmakers, Guido Tans, Jan-Joost E. Rethans, Lambert W. Schuwirth, and Cees P. M. van der Vleuten
- Subjects
Male, Students, Medical, Medical school, General Medicine, Bachelor, Education, Formative assessment, Professional Competence, Undergraduate curriculum, Summative assessment, Pedagogy, Humans, Female, Curriculum, Psychology, Education, Medical, Undergraduate, Netherlands
- Abstract
Background: Teaching and assessment of professional behaviour (PB) have been receiving increasing attention in the educational literature and educational practice. Although the focus tends to be on summative aspects, it seems perfectly feasible to combine formative and summative approaches in one procedural approach. Aims and method: Although many examples of frameworks of professionalism and PB can be found in the literature, most originate from North America and only a few are designed in other continents. This article presents the framework for PB that is used at Maastricht medical school, the Netherlands. Results: The approach to PB used in the Dutch medical schools is described with special attention to 4 years (2005-2009) of experience with PB education in the first 3 years of the 6-year undergraduate curriculum of Maastricht medical school. Future challenges are identified. Conclusions: The adages 'Assessment drives learning' and 'They do not respect what you do not inspect' [Cohen JJ. 2006. Professionalism in medical education, an American perspective: From evidence to accountability. Med Educ 40, 607-617] suggest that formative and summative aspects of PB assessment can be combined within an assessment framework. Formative and summative assessments do not represent contrasting but rather complementary approaches. The Maastricht medical school framework combines the two approaches, as two sides of the same coin.
- Published
- 2010
- Full Text
- View/download PDF
13. Assessment of competence and progressive independence in postgraduate clinical training
- Author
-
Marja G K Dijksterhuis, Marlies Voorhuis, Pim W Teunissen, Lambert W T Schuwirth, Olle T J ten Cate, Didi D M Braat, and Fedde Scheele
- Subjects
Male, Higher education, education, Education, Obstetrics and gynaecology, Nursing, Humans, Independent practice, Competence, Netherlands, Internship and Residency, General Medicine, Focus Groups, Focus group, Competency-Based Education, Education, Medical, Graduate, Clinical training, Female, Clinical Competence, Educational Measurement, Clinical competence, Postgraduate training, Psychology
- Abstract
CONTEXT: At present, competency-based, outcome-focused training is gradually replacing more traditional master-apprentice teaching in postgraduate training. This change requires a different approach to the assessment of clinical competence, especially given the decisions that must be made about the level of independence allowed to trainees. METHODS: This study was set within postgraduate obstetrics and gynaecology training in the Netherlands. We carried out seven focus group discussions, four with postgraduate trainees from four training programmes and three with supervisors from three training programmes. During these discussions, we explored current opinions of supervisors and trainees about how to determine when a trainee is competent to perform a clinical procedure and the role of formal assessment in this process. RESULTS: When the focus group recordings were transcribed, coded and discussed, two higher-order themes emerged: factors that determine the level of competence of a trainee in a clinical procedure, and factors that determine the level of independence granted to a trainee or acceptable to a trainee. CONCLUSIONS: From our study, it is evident that both determining the level of competence of a trainee for a certain professional activity and making decisions about the degree of independence entrusted to a trainee are complex, multi-factorial processes, which are not always transparent. Furthermore, competence achieved in a certain clinical procedure does not automatically translate into more independent practice. We discuss the implications of our findings for the assessment of clinical competence and provide suggestions for a transparent assessment structure with explicit attention to progressive independence.
- Published
- 2009
- Full Text
- View/download PDF
14. Narrative information obtained during student selection predicts problematic study behavior
- Author
-
Mirjam G. A. oude Egbrink and Lambert W. T. Schuwirth
- Subjects
Students, Medical, education, MEDLINE, Exploratory research, Aptitude, Education, Test Taking Skills, Interviews as Topic, Selection, Humans, School Admission Criteria, Narrative, Association, Retrospective Studies, Medical education, Retrospective cohort study, General Medicine, Education, Medical, Graduate, Psychology, Forecasting, Clinical psychology
- Abstract
Introduction: Up to now, student selection for medical schools has merely been used to decide which applicants will be admitted. We investigated whether narrative information obtained during multiple mini-interviews (MMIs) can also be used to predict problematic study behavior. Methods: A retrospective exploratory study was performed on students who were selected into the four-year research master's program Physician-Clinical Investigator in 2007 and 2008 (n=60). First, counselors were asked for the most prevalent non-cognitive problems among their students. Second, MMI notes were analyzed to identify potential indicators for these problems. Third, a case-control study was performed to investigate the association between students exhibiting the non-cognitive problems and the presence of indicators for these problems in their MMI notes. Results: The most prevalent non-cognitive problems concerned planning and self-reflection. Potential indicators for these problems were identified in randomly chosen MMI notes. The case-control analysis demonstrated a significant association between indicators in the notes and actual planning problems (odds ratio: 9.33, p=0.003). No such evidence was found for self-reflection-related problems (odds ratio: 1.39, p=0.68). Conclusions: Narrative information obtained during MMIs contains predictive indicators for planning-related problems during study. This information would be useful for early identification of students at risk, which would enable focused counseling and interventions to improve their academic achievement.
- Published
- 2016
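The case-control analysis above reports odds ratios and p-values for the association between MMI-note indicators and later problems. As a point of reference only, the short sketch below shows how such an odds ratio (and a Wald-type confidence interval) is computed from a 2x2 table; the counts are hypothetical, chosen purely for illustration, and are not the study's data (a p-value would additionally come from, e.g., Fisher's exact test).

```python
import math

# Hypothetical 2x2 case-control table (not the study's data):
#            indicator present   indicator absent
# cases            a = 14              b = 6      (students with planning problems)
# controls         c = 9               d = 31     (students without planning problems)
a, b, c, d = 14, 6, 9, 31

odds_ratio = (a * d) / (b * c)                        # cross-product ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)          # standard error of ln(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```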
15. Benchmarking by cross-institutional comparison of student achievement in a progress test
- Author
-
Arno M M Muijtjens, Lambert W T Schuwirth, Janke Cohen-Schotanus, Arnold J N M Thoben, and Cees P M van der Vleuten
- Subjects
Educational measurement, education, General Medicine, Benchmarking, Academic achievement, Educational evaluation, Education, Test, Test score, Statistics, Medicine, Educational program, Reliability
- Abstract
OBJECTIVE To determine the effectiveness of single-point benchmarking and longitudinal benchmarking for inter-school educational evaluation. METHODS We carried out a mixed, longitudinal, cross-sectional study using data from 24 annual measurement moments (4 tests x 6 year groups) over 4 years for 4 annual progress tests assessing the graduation-level knowledge of all students from 3 co-operating medical schools. Participants included undergraduate medical students (about 5000) from 3 medical schools. The main outcome measures involved between-school comparisons of progress test results based on different benchmarking methods. RESULTS Variations in relative school performance across different tests and year groups indicate instability and low reliability of single-point benchmarking, which is subject to distortions as a result of school-test and year group-test interaction effects. Deviations of school means from the overall mean follow an irregular, noisy pattern obscuring systematic between-school differences. The longitudinal benchmarking method results in suppression of noise and revelation of systematic differences. The pattern of a school's cumulative deviations per year group gives a credible reflection of the relative performance of year groups. CONCLUSIONS Even with highly comparable curricula, single-point benchmarking can result in distortion of the results of comparisons. If longitudinal data are available, the information contained in a school's cumulative deviations from the overall mean can be used. In such a case, the mean test score across schools is a useful benchmark for cross-institutional comparison.
- Published
- 2007
- Full Text
- View/download PDF
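The abstract above contrasts noisy single-point benchmarking with a longitudinal benchmark built from cumulative deviations from the overall mean. The sketch below (Python, with invented scores used purely for illustration, not the study's data) shows the computation described: per test, each school's deviation from the cross-school mean, then the running sum of those deviations per school so that test-to-test noise averages out and systematic differences become visible.

```python
import numpy as np

# rows = schools, columns = successive progress tests; hypothetical mean scores
school_means = np.array([
    [62.1, 58.4, 65.0, 60.2],   # school A
    [60.8, 59.9, 63.1, 61.5],   # school B
    [59.5, 57.2, 61.8, 58.9],   # school C
])

overall_mean = school_means.mean(axis=0)     # cross-school benchmark per test
deviations = school_means - overall_mean     # single-point comparison (noisy)
cumulative = deviations.cumsum(axis=1)       # longitudinal benchmark (noise-suppressed)

print("per-test deviations:\n", deviations.round(2))
print("cumulative deviations:\n", cumulative.round(2))
```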
16. Origin bias of test items compromises the validity and fairness of curriculum comparisons
- Author
-
Arno M M Muijtjens, Lambert W T Schuwirth, Janke Cohen-Schotanus, and Cees P M van der Vleuten
- Subjects
education, curriculum, STUDENTS, Test validity, Scientific literature, education, medical, undergraduate, behavioral disciplines and activities, medical, selection bias, Curriculum, Netherlands, Selection bias, undergraduate, comparative study, General Medicine, PROGRESS TEST, curriculum, standards, Test, Test score, validation studies, standards, Potential confounder, clinical competence, standards, multicentre study, Psychology, Educational program, Clinical psychology, clinical competence
- Abstract
OBJECTIVE To determine whether items of progress tests used for inter-curriculum comparison favour students from the medical school where the items were produced (i.e. whether the origin bias of test items is a potential confounder in comparisons between curricula). METHODS We investigated scores of students from different schools on subtests consisting of progress test items constructed by authors from the different schools. In a cross-institutional collaboration between 3 medical schools, progress tests are jointly constructed and simultaneously administered to all students at the 3 schools. Test score data for 6 consecutive progress tests were investigated. Participants consisted of approximately 5000 undergraduate medical students from 3 medical schools. The main outcome measure was the difference between the scores on subtests of items constructed by authors from 2 of the collaborating schools (subtest difference score). RESULTS The subtest difference scores showed that students obtained better results on items produced at their own schools. This effect was more pronounced in Years 2-5 of the curriculum than in Year 1, and diminished in Year 6. CONCLUSIONS Progress test items were subject to origin bias. As a consequence, all participating schools should contribute equal numbers of test items if tests are to be used for valid and fair inter-curriculum comparisons.
- Published
- 2007
- Full Text
- View/download PDF
17. Factors inhibiting assessment of students' professional behaviour in the tutorial group during problem-based learning
- Author
-
Walther N K A van Mook, Willem S de Grave, Elise Huijssen-Huisman, Marianne de Witt-Luth, Diana H J M Dolmans, Arno M M Muijtjens, Lambert W Schuwirth, and Cees P M van der Vleuten
- Subjects
Students, Medical, Attitude of Health Personnel, education, Context, Education, Likert scale, Professional Competence, Patient-Centered Care, Pedagogy, medicine, Humans, Curriculum, Netherlands, Physician-Patient Relations, Medical education, Collaborative learning, Problem-Based Learning, General Medicine, Confirmatory factor analysis, Group Processes, Problem-based learning, Faculty development, Psychology, Education, Medical, Undergraduate
- Abstract
CONTEXT We addressed the assessment of professional behaviour in tutorial groups by investigating students' perceptions of the frequency and impact of critical incidents that impede this assessment and 5 factors underlying these critical incidents. METHODS A questionnaire asking students to rate the frequency and impact of 40 critical incidents relating to effective assessment of professional behaviour on a 5-point Likert scale was developed and sent to all undergraduate medical students in Years 2-4 of a 6-year undergraduate curriculum. RESULTS The response rate was 70% (n = 393). Important factors underlying critical incidents are: lack of effective interaction; lack of thoroughness; tutors' failure to confront students with unprofessional behaviour; lack of effort to find solutions, and lack of student motivation. Confirmatory factor analysis showed a good model fit. Because the relationship between frequency of occurrence and degree of impediment varies, the best information about the true impact of critical incidents and the underlying factors is provided by the product of frequency and degree of impediment. Frequency of occurrence remains stable and degree of impediment increases in Years 2-4. CONCLUSIONS The results of this study can be used to design and improve faculty development programmes aimed at improving assessment of professional behaviour. Training programmes should motivate tutors by providing background information as to why and how sound assessment of professional behaviour is to be performed and encourage tutors to confront students with and discuss all aspects of professional behaviour, as well as provide appropriate feedback.
- Published
- 2007
- Full Text
- View/download PDF
18. Broadening Perspectives on Clinical Performance Assessment: Rethinking the Nature of In-training Assessment
- Author
-
Marjan J. B. Govaerts, Cees P. M. van der Vleuten, Lambert W. T. Schuwirth, and Arno M. M. Muijtjens
- Subjects
Observer Variation, Educational measurement, Education, Medical, Psychometrics, Applied psychology, Clinical performance, MEDLINE, Reproducibility of Results, Social environment, Cognition, General Medicine, Education, Pedagogy, Humans, Clinical Competence, Educational Measurement, Psychology, On-the-job training, Objectivity
- Abstract
In-training assessment (ITA), defined as multiple assessments of performance in the setting of day-to-day practice, is an invaluable tool in assessment programmes which aim to assess professional competence in a comprehensive and valid way. Research on clinical performance ratings, however, consistently shows weaknesses concerning accuracy, reliability and validity. Attempts to improve the psychometric characteristics of ITA by focusing on standardisation and objectivity of measurement have thus far resulted in limited improvement of ITA practices. The aim of the paper is to demonstrate that the psychometric framework may limit more meaningful educational approaches to performance assessment, because it does not take into account key issues in the mechanics of the assessment process. Based on insights from other disciplines, we propose an approach to ITA that takes a constructivist, social-psychological perspective and integrates elements of theories of cognition, motivation and decision making. A central assumption in the proposed framework is that performance assessment is a judgment and decision making process, in which rating outcomes are influenced by interactions between individuals and the social context in which assessment occurs. The issues raised in the article and the proposed assessment framework bring forward a number of implications for current performance assessment practice. It is argued that focusing on the context of performance assessment may be more effective in improving ITA practices than focusing strictly on raters and rating instruments. Furthermore, the constructivist approach towards assessment has important implications for assessment procedures as well as the evaluation of assessment quality. Finally, it is argued that further research into performance assessment should contribute towards a better understanding of the factors that influence rating outcomes, such as rater motivation, assessment procedures and other contextual variables.
- Published
- 2006
- Full Text
- View/download PDF
19. Changing education, changing assessment, changing research?
- Author
-
Lambert W T Schuwirth and Cees P M van der Vleuten
- Subjects
Research design, Models, Educational, Education, Medical, Reproducibility of Results, General Medicine, Education, Formative assessment, Research Design, Humans, Research questions, Clinical Competence, Educational Measurement, Psychology, Competence
- Abstract
Background: In medical education, and in the assessment of medical competence and performance, important changes have taken place in the last 5 decades. These changes have affected the basic concepts in all 3 domains. Developments in education and assessment: In education, constructivism has provided a completely new view on how students learn best. In assessment, the change from trait-orientated to competency- or role-orientated thinking has given rise to a whole range of new approaches. Certain methods of education, such as problem-based learning (PBL), and assessment, however, are often seen as almost synonymous with the underlying concepts, and one tends to forget that it is the concept that is important and that a particular method is but 1 way of using a concept. When doing this, one runs the risk of confusing means and ends, which may hamper or slow down new developments. Lessons for research: A similar problem often seems to occur in research in medical education. Here too, methods – or, rather, methodologies – are confused with research questions. This may lead to an overemphasis on research that fits well-known methodologies (e.g. the randomised controlled trial) and neglect of what are sometimes even more important research questions because they do not fit well-known methodologies. Conclusion: In this paper we advocate a return to the underlying concepts and careful reflection on their use in various situations.
- Published
- 2004
- Full Text
- View/download PDF
20. Medical school benchmarking - From tools to programmes
- Author
-
Tim J. Wilkinson, Judith N. Hudson, Geoffrey J. McColl, Wendy C. Y. Hu, Brian C. Jolly, and Lambert W. T. Schuwirth
- Subjects
Educational measurement, Quality management, Knowledge management, education, Australia, Analogy, General Medicine, Benchmarking, Quality Improvement, Education, Conceptual framework, Blueprint, Humans, Learning, Educational Measurement, Schools, Medical, New Zealand
- Abstract
Background: Benchmarking among medical schools is essential, but may result in unwanted effects. Aim: To apply a conceptual framework to selected benchmarking activities of medical schools. Methods: We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. Results: The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Conclusion: Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.
- Published
- 2015
21. The impact of programmatic assessment on student learning: theory versus practice
- Author
-
Sylvia Heeneman, Andrea Oudkerk Pool, Lambert W T Schuwirth, Cees P M van der Vleuten, and Erik W Driessen
- Subjects
Cooperative learning, Adult, Male, Students, Medical, Teaching method, education, Context, Experiential learning, Education, Feedback, Formative assessment, Young Adult, Pedagogy, Humans, Learning, Qualitative Research, Netherlands, Medical education, General Medicine, Summative assessment, Education, Medical, Graduate, Active learning, Female, Educational Measurement, Thematic analysis, Psychology
- Abstract
Context: It is widely acknowledged that assessment can affect student learning. In recent years, attention has been called to 'programmatic assessment', which is intended to optimise both learning functions and decision functions at the programme level of assessment, rather than according to individual methods of assessment. Although the concept is attractive, little research into its intended effects on students and their learning has been conducted. Objectives: This study investigated the elements of programmatic assessment that students perceived as supporting or inhibiting learning, and the factors that influenced the active construction of their learning. Methods: The study was conducted in a graduate-entry medical school that implemented programmatic assessment. Thus, all assessment information, feedback and reflective activities were combined into a comprehensive, holistic programme of assessment. We used a qualitative approach and interviewed students (n=17) in the pre-clinical phase of the programme about their perceptions of programmatic assessment and learning approaches. Data were scrutinised using theory-based thematic analysis. Results: Elements from the comprehensive programme of assessment, such as feedback, portfolios, assessments and assignments, were found to have both supporting and inhibiting effects on learning. These supporting and inhibiting elements influenced students' construction of learning. Findings showed that: (i) students perceived formative assessment as summative; (ii) programmatic assessment was an important trigger for learning; and (iii) the portfolio's reflective activities were appreciated for their generation of knowledge, the lessons drawn from feedback, and the opportunities for follow-up. Some students, however, were less appreciative of reflective activities. For these students, the elements perceived as inhibiting seemed to dominate the learning response. Conclusions: The active participation of learners in their own learning is possible when learning is supported by programmatic assessment. Certain features of the comprehensive programme of assessment were found to influence student learning, and this influence can either support or inhibit students' learning responses.
- Published
- 2015
22. Procedures for establishing defensible programmes for assessing practice performance
- Author
-
Stephen R Lew, Gordon G Page, Lambert W T Schuwirth, Margarita Baron-Maldonado, Joelle M J Lescop, Neil S Paget, Lesley J Southgate, and Winifred B Wade
- Subjects
Medicine, General Medicine, Test validity, Project management, Education
- Abstract
The assessment of the performance of doctors in practice is becoming more widely accepted. While there are many potential purposes for such assessments, sometimes the consequences of the assessments will be 'high stakes'. In these circumstances, any of the many elements of the assessment programme may potentially be challenged. These assessment programmes therefore need to be robust, fair and defensible, taken from the perspectives of consumer, assessee and assessor. In order to inform the design of defensible programmes for assessing practice performance, a group of education researchers at the 10th Cambridge Conference adopted a project management approach to designing practice performance assessment programmes. This paper describes issues to consider in the articulation of the purposes and outcomes of the assessment, planning the programme, the administrative processes involved, including communication and preparation of assessees. Examples of key questions to be answered are provided, but further work is needed to test validity.
- Published
- 2002
- Full Text
- View/download PDF
23. Learning the ways of the force: advice to minority students in STEM fields on succeeding in graduate school
- Author
-
Lambert, W. Marcus
- Subjects
Minority students -- Training, Graduate students -- Training, Education
- Abstract
It's June, and by now you have selected a graduate program that will influence the next five to seven years of your life. For those just starting on this journey, [...]
- Published
- 2015
24. Medical Students Perceive Better Group Learning Processes when Large Classes Are Made to Seem Small
- Author
-
Juliette Hommes, Onyebuchi A. Arah, Willem de Grave, Lambert W. T. Schuwirth, Albert J. J. A. Scherpbier, and Gerard M. J. Bos
- Subjects
Male, Medical psychology, Students, Medical, Social Sciences, Training (Education), Randomized controlled trial, Sociology, Adaptive Training, Medicine and Health Sciences, Medicine, Psychology, Multidisciplinary, Schools, Education, Medical, Pedagogy, Collaborative learning, Principles of learning, Female, Curriculum, Research Article, Cooperative learning, Adult, Social Psychology, Universities, Science Policy, education, Context, Education, Young Adult, Clinical Research, Intervention, Learning, Humans, Students, Medical education, Cognitive Psychology, Group Processes, Science Education, Medical Education, Teaching Methods, Cognitive Science, Perception, Medical Humanities, Neuroscience
- Abstract
Objective: Medical schools struggle with large classes, which might interfere with the effectiveness of learning within small groups due to students being unfamiliar with fellow students. The aim of this study was to assess the effects of making a large class seem small on the students' collaborative learning processes. Design: A randomised controlled intervention study was undertaken to make a large class seem small, without the need to reduce the number of students enrolling in the medical programme. The class was divided into subsets: two small subsets (n = 50) as the intervention groups; a control group (n = 102) was mixed with the remaining students (the non-randomised group n∼100) to create one large subset. Setting: The undergraduate curriculum of the Maastricht Medical School, applying the Problem-Based Learning principles. In this learning context, students learn mainly in tutorial groups, composed randomly from a large class every 6–10 weeks. Intervention: The formal group learning activities were organised within the subsets. Students from the intervention groups met frequently within the formal groups, in contrast to the students from the large subset who hardly enrolled with the same students in formal activities. Main Outcome Measures: Three outcome measures assessed students' group learning processes over time: learning within formally organised small groups, learning with other students in the informal context and perceptions of the intervention. Results: Formal group learning processes were perceived more positively in the intervention groups from the second study year on, with a mean increase of β = 0.48. Informal group learning activities occurred almost exclusively within the subsets as defined by the intervention from the first week involved in the medical curriculum (E-I indexes > −0.69). Interviews tapped mainly positive effects and negligible negative side effects of the intervention. Conclusion: Better group learning processes can be achieved in large medical schools by making large classes seem small.
- Published
- 2014
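The abstract above summarises informal group learning with E-I indexes. As a point of reference only, the snippet below implements the standard external-internal (E-I) index from network analysis, which we assume is the measure referred to: external ties minus internal ties over all ties, ranging from -1 (all contacts within the own subgroup) to +1 (all contacts outside it). The counts used are hypothetical.

```python
def e_i_index(external_ties: int, internal_ties: int) -> float:
    """Krackhardt-Stern E-I index: (E - I) / (E + I)."""
    return (external_ties - internal_ties) / (external_ties + internal_ties)

# Hypothetical example: a student whose learning contacts lie mostly within the own subset.
print(e_i_index(external_ties=3, internal_ties=17))   # -> -0.7
```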
25. Web-based feedback after summative assessment: how do students engage?
- Author
-
Christopher J Harrison, Karen D Könings, Adrian Molyneux, Lambert W T Schuwirth, Valerie Wass, and Cees P M van der Vleuten
- Subjects
Self-assessment, Self-Assessment, Students, Medical, Context, Motivational Interviewing, Education, Formative assessment, Knowledge of results, Humans, Learning, Self-efficacy, Medical education, Internet, Motivation, Peer feedback, Goal orientation, General Medicine, Self Efficacy, Summative assessment, Clinical Competence, Educational Measurement, Psychology, Social psychology, Goals, Knowledge of Results, Psychological
- Abstract
CONTEXT: There is little research into how to deliver feedback to students after summative assessment effectively. The main aims of this study were to clarify how students engage with feedback in this context and to explore the roles of learning-related characteristics and previous and current performance. METHODS: A website was developed to deliver feedback about the objective structured clinical examination (OSCE) in various formats: station by station or on skills across stations. In total, 138 students (in the third year out of five) completed a questionnaire about goal orientation, motivation, self-efficacy, control of learning beliefs and attitudes to feedback. Individual website usage was analysed over an 8-week period. Latent class analyses were used to identify profiles of students, based on their use of different aspects of the feedback website. Differences in learning-related student characteristics between profiles were assessed using analyses of variance (ANOVAs). Individual website usage was related to OSCE performance. RESULTS: In total, 132 students (95.7%) viewed the website. The number of pages viewed ranged from two to 377 (median 102). Fifty per cent of students engaged comprehensively with the feedback, 27% used it in a minimal manner, whereas a further 23% used it in a more selective way. Students who were comprehensive users of the website scored higher on the value of feedback scale, whereas students who were minimal users scored higher on extrinsic motivation. Higher performing students viewed significantly more web pages showing comparisons with peers than weaker students did. Students who just passed the assessment made least use of the feedback. CONCLUSIONS: Higher performing students appeared to use the feedback more for positive affirmation than for diagnostic information. Those arguably most in need engaged least. We need to construct feedback after summative assessment in a way that will more effectively engage those students who need the most help.
- Published
- 2013
26. Programmatic assessment: From assessment of learning to assessment for learning
- Author
-
Lambert W. T. Schuwirth and Cees P. M. van der Vleuten
- Subjects
Knowledge management, Psychometrics, education, Judgement, Education, Formative assessment, Surveys and Questionnaires, Humans, Learning, Quality, Standards-based assessment, Education, Medical, General Medicine, Assessment for learning, Conceptual framework, Clinical Competence, Educational Measurement, Psychology, Construct
- Abstract
In assessment a considerable shift in thinking has occurred from assessment of learning to assessment for learning. This has important implications for the conceptual framework from which to approach the issue of assessment, but also with respect to the research agenda. The main conceptual changes pertain to programmes of assessment. This has led to a broadened perspective on the types of construct assessment tries to capture, the way information from various sources is collected and collated, the role of human judgement and the variety of psychometric methods to determine the quality of the assessment. Research into the quality of assessment programmes, how assessment influences learning and teaching, new psychometric models and the role of human judgement is much needed.
- Published
- 2011
27. Bad apples spoil the barrel: Addressing unprofessional behaviour
- Author
-
Walther N. K. A. van Mook, Simone L. Gorter, Willem S. de Grave, Scheltus J. van Luijk, Valerie Wass, Jan Harm Zwaveling, Lambert W. Schuwirth, and Cees P. M. van der Vleuten
- Subjects
Medical education, Personality Inventory, Whistleblowing, Interprofessional Relations, Acknowledgement, education, Medical school, General Medicine, Education, Patient safety, Education, Medical, Graduate, Personality, Humans, School Admission Criteria, Curriculum, Psychology, Professional Misconduct, Education, Medical, Undergraduate
- Abstract
Given the changes in society we are experiencing and the increasing focus on patient-centred care, the acknowledgement that medical education, including professionalism issues, needs to continue not only in residency programmes but also throughout a doctor's career is not surprising. Although most of the literature on professionalism pertains to learning and teaching professionalism issues, addressing unprofessional behaviour and related patient safety issues forms an alternative or perhaps complementary approach. This article describes the possibility of selecting applicants for a medical school based on personality characteristics, the attention to professional lapses in contemporary undergraduate training, as well as the magnitude, aetiology, surveillance and methods of dealing with reports of unprofessional behaviour in postgraduate education and CME.
- Published
- 2010
- Full Text
- View/download PDF
28. Evidence for validity within workplace assessment: the Longitudinal Evaluation of Performance (LEP)
- Author
-
Linda Prescott-Clements, Cees P M van der Vleuten, Lambert W T Schuwirth, Yvonne Hurst, and James S Rennie
- Subjects
Gerontology ,Evidence-based practice ,Students, Medical ,education ,Judgement ,Scientific literature ,Personal Satisfaction ,Education, Dental, Graduate ,Education ,Continuous assessment ,Feedback ,Formative assessment ,Cohort Studies ,Medicine ,Humans ,Longitudinal Studies ,Workplace ,Competence (human resources) ,Medical education ,business.industry ,General Medicine ,Evidence-based medicine ,Summative assessment ,Scotland ,Feasibility Studies ,Clinical Competence ,business - Abstract
OBJECTIVE The drive towards valid and reliable assessment methods for health professions' training is becoming increasingly focused on authentic models of workplace performance assessment. This study investigates the validity of such a method, longitudinal evaluation of performance (LEP), which has been implemented in the assessment of postgraduate dental trainees in Scotland. Although it is similar in format to the mini-CEX (mini clinical evaluation exercise) and other tools that use global ratings for assessing performance in the workplace, a number of differences exist in the way in which the LEP has been implemented. These include the use of a reference point for evaluators' judgement that represents the standard expected upon completion of the training, flexibility, a greater range of cases assessed and the use of frequency scores within feedback to identify trainees' progress over time. METHODS A range of qualitative and quantitative data were collected and analysed from 2 consecutive cohorts of trainees in Scotland (2002-03 and 2003-04). RESULTS There is rich evidence supporting the validity, educational impact and feasibility of the LEP. In particular, a great deal of support was given by trainers for the use of a fixed reference point for judgements, despite initial concerns that this might be demotivating to trainees. Trainers were highly positive about this approach and considered it useful in identifying trainees' progress and helping to drive learning. CONCLUSIONS The LEP has been successful in combining a strong formative approach to continuous assessment with the collection of evidence on performance within the workplace that (alongside other tools within an assessment system) can contribute towards a summative decision regarding competence.
- Published
- 2008
29. Benchmarking by cross-institutional comparison of student achievement in a progress test
- Author
-
Muijtjens, Arno M. M., Schuwirth, Lambert W. T., Cohen-Schotanus, Janke, Thoben, Arnold J. N. M., van der Vleuten, Cees P. M., van, der, Faculteit Medische Wetenschappen/UMCG, and Science in Healthy Ageing & healthcaRE (SHARE)
- Subjects
educational measurement ,programme evaluation ,education ,curriculum ,QUALITY ,educational, medical, undergraduate ,KNOWLEDGE ,benchmarking ,multicentre study [publication type] ,inter-institutional relations, schools, medical ,Netherlands - Abstract
OBJECTIVE To determine the effectiveness of single-point benchmarking and longitudinal benchmarking for inter-school educational evaluation. METHODS We carried out a mixed longitudinal, cross-sectional study using data from 24 annual measurement moments (4 tests × 6 year groups) collected over 4 years of 4 annual progress tests assessing the graduation-level knowledge of all students from 3 co-operating medical schools. Participants included the undergraduate medical students (about 5000) of the 3 medical schools. The main outcome measures involved between-school comparisons of progress test results based on different benchmarking methods. RESULTS Variations in relative school performance across different tests and year groups indicate the instability and low reliability of single-point benchmarking, which is subject to distortions as a result of school-by-test and year group-by-test interaction effects. Deviations of school means from the overall mean follow an irregular, noisy pattern that obscures systematic between-school differences. The longitudinal benchmarking method suppresses this noise and reveals systematic differences. The pattern of a school's cumulative deviations per year group gives a credible reflection of the relative performance of its year groups. CONCLUSIONS Even with highly comparable curricula, single-point benchmarking can distort the results of comparisons. If longitudinal data are available, the information contained in a school's cumulative deviations from the overall mean can be used. In that case, the mean test score across schools is a useful benchmark for cross-institutional comparison.
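To make the cumulative deviation method concrete, here is a minimal Python sketch (the scores and school labels are invented for illustration and are not taken from the study): each school's deviation from the cross-institutional mean score is computed per measurement moment and then accumulated, so that moment-to-moment noise tends to cancel while systematic differences build up.

```python
import numpy as np

# Hypothetical mean test scores per school at successive measurement moments
# (rows = measurement moments, columns = schools A, B, C).
scores = np.array([
    [61.2, 59.8, 60.5],
    [63.0, 62.1, 62.4],
    [58.7, 57.9, 59.3],
    [64.1, 63.0, 63.8],
])

# Benchmark: the cross-institutional mean score at each measurement moment.
benchmark = scores.mean(axis=1, keepdims=True)

# Deviation of each school from the benchmark, per moment.
deviations = scores - benchmark

# Cumulative deviations: noise at single moments tends to cancel out,
# while systematic between-school differences accumulate.
cumulative = deviations.cumsum(axis=0)

for i, school in enumerate(["A", "B", "C"]):
    print(f"School {school}: cumulative deviations {np.round(cumulative[:, i], 2)}")
```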
- Published
- 2008
30. A plea for new psychometric models in educational assessment
- Author
-
Lambert W T Schuwirth and Cees P M van der Vleuten
- Subjects
Reductionism ,Models, Educational ,Psychometrics ,Education, Medical ,Management science ,Probabilistic logic ,Construct validity ,Bayes Theorem ,General Medicine ,computer.software_genre ,Scientific modelling ,Education ,Educational assessment ,Credibility ,Educational Measurement ,Psychology ,computer ,Competence (human resources) ,Social psychology - Abstract
OBJECTIVE To describe the weaknesses of the current psychometric approach to assessment as a scientific model. DISCUSSION The current psychometric model has played a major role in improving the quality of assessment of medical competence. It is becoming increasingly difficult, however, to apply this model to modern assessment methods. The central assumption in the current model is that medical competence can be subdivided into separate, measurable, stable and generic traits. This assumption has several far-reaching implications. Perhaps the most important is that it requires a numerical and reductionist approach, and that aspects such as fairness, defensibility and credibility are by necessity mainly translated into reliability and construct validity. These translations are increasingly difficult to align with modern assessment methods such as the mini-CEX, 360-degree feedback and portfolios. This paper describes some of the weaknesses of the psychometric model and aims to open a discussion on a conceptually different statistical approach to quality of assessment. FUTURE DIRECTIONS We hope that the discussion opened by this paper will lead to the development of a conceptually different statistical approach to quality of assessment. A probabilistic or Bayesian approach would be worth exploring.
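The paper advocates exploring probabilistic approaches without prescribing a particular model; purely as a hypothetical illustration of what such an approach might look like (not the authors' method, and with an invented item count and pass standard), the sketch below uses a simple Beta-Binomial update to express the probability that an examinee's underlying proficiency exceeds a pass standard, given observed item scores.

```python
from scipy.stats import beta

# Hypothetical observations: 34 of 45 items answered correctly.
correct, items = 34, 45

# Uniform Beta(1, 1) prior on the examinee's underlying proficiency,
# updated with the binomial evidence to a Beta posterior.
posterior = beta(1 + correct, 1 + (items - correct))

# Probability that the true proficiency exceeds a pass standard of 0.60.
pass_standard = 0.60
print(f"P(proficiency > {pass_standard}) = {posterior.sf(pass_standard):.3f}")
```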
- Published
- 2006
31. Different written assessment methods: what can be said about their strengths and weaknesses?
- Author
-
Lambert W T Schuwirth and Cees P M van der Vleuten
- Subjects
Writing ,Reproducibility of Results ,General Medicine ,Education ,Undergraduate methods ,Surveys and Questionnaires ,Assessment methods ,Educational impact ,Humans ,Cognitive skill ,Clinical Competence ,Educational Measurement ,Psychology ,Social psychology ,Competence (human resources) ,Strengths and weaknesses ,Cognitive psychology ,Education, Medical, Undergraduate - Abstract
Introduction Written assessment techniques can be subdivided according to their stimulus format – what the question asks – and their response format – how the answer is recorded. The former is more important in determining the type of competence being asked for than the latter. It is nevertheless important to consider both when selecting the most appropriate types. Some major elements to consider when making such a selection are cueing effect, reliability, validity, educational impact and resource-intensiveness. Response formats Open-ended questions should be used solely to test aspects that cannot be tested with multiple-choice questions. In all other cases the loss of reliability and the higher resource-intensiveness represent a significant downside. In such cases, multiple-choice questions are not less valid than open-ended questions. Stimulus format When making this distinction, it is important to consider whether the question is embedded within a relevant case or context and cannot be answered without the case, or not. This appears to be more or less essential according to what is being tested by the question. Context-rich questions test other cognitive skills than do context-free questions. If knowledge alone is the purpose of the test, context-free questions may be useful, but if it is the application of knowledge or knowledge as a part of problem solving that is being tested, then context is indispensable. Conclusion Every format has its (dis)advantages and a combination of formats based on rational selection is more useful than trying to find or develop a panacea. The response format is less important in this respect than the stimulus.
- Published
- 2004
32. Controlled trial of effect of computer-based nutrition course on knowledge and practice of general practitioner trainees
- Author
-
Maiburg, Bas H. J., Rethans, Jan-Joost E., Schuwirth, Lambert W. T., Mathus-Vliegen, Lisbeth M. H., van Ree, Jan W., and Gastroenterology and Hepatology
- Subjects
education - Abstract
Nutrition education is not an integral part of either undergraduate or postgraduate medical education. Computer-based instruction on nutrition might be an attractive and appropriate tool to fill this gap. The study objective was to assess the degree to which computer-based instruction on nutrition improves the factual knowledge and practice behaviour of general practitioner (GP) trainees. We carried out a controlled experimental study, using a 79-item knowledge test and 3 visits by incognito standardized patients in a pretest-posttest design with 49 first-year GP trainees. The experimental group (n = 25) received an average of 6 h of newly developed computer-based instruction on nutrition. The control subjects (n = 24) took the standard vocational training program. The percentage of correct answers on the knowledge test increased from 30% at pretest to 42% at posttest in the experimental group, and from 36% to 37% in the control group. Analysis of covariance, with the pretest scores as covariate, showed a significant experimental versus control group difference at posttest: 9.2% (P = 0.002). The mean percentage of correctly performed items during the 3 standardized patient visits (assessed by checklists) increased in the experimental group from 20% at pretest to 36% at posttest, whereas the control group changed from 20% to 22%. Analysis of covariance, with the pretest scores as covariate, revealed a significant group difference at posttest: 13.7% (P
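Analysis of covariance with the pretest score as covariate can be expressed as an ordinary linear model. The sketch below is a generic illustration of that analysis in Python with simulated data, not a re-analysis of the study (group sizes, effect sizes and column names are invented).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical pre/post knowledge scores (%) for experimental and control trainees.
n = 25
df = pd.DataFrame({
    "group": ["experimental"] * n + ["control"] * n,
    "pre": np.concatenate([rng.normal(30, 5, n), rng.normal(36, 5, n)]),
})
df["post"] = df["pre"] + np.where(df["group"] == "experimental", 12, 1) + rng.normal(0, 4, 2 * n)

# ANCOVA as a linear model: posttest score explained by group,
# with the pretest score as covariate.
model = smf.ols("post ~ C(group) + pre", data=df).fit()
print(model.params)  # the C(group) coefficient is the covariate-adjusted group difference
```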
- Published
- 2003
33. Integrating Performance Assessment, Maintenance of Competence, and Continuing Professional Development of Community Pharmacists
- Author
-
Nancy E. Winslade, Robyn M. Tamblyn, Laurel K. Taylor, Lambert W. T. Schuwirth, and Cees P. M. Van der Vleuten
- Subjects
Knowledge management ,business.industry ,Best practice ,education ,Pharmacist ,Pharmacy ,Community Pharmacy Services ,General Medicine ,Pharmacists ,Education ,Professional Competence ,Continuing professional development ,Pharmaconomist ,Employee Performance Appraisal ,Special Articles ,Humans ,Medicine ,Pharmacy practice ,Education, Pharmacy, Continuing ,General Pharmacology, Toxicology and Pharmaceutics ,business ,Competence (human resources) - Abstract
Although a number of regulatory authorities are developing programs intended to ensure that health professionals continue to practice in a safe and effective manner, the design and implementation of these programs has been challenging. For the pharmacy profession, a novel framework is proposed that is performance based, applies to all community pharmacists, recognizes the powerful influence of external factors on an individual pharmacist's ability to perform to his/her highest level of capability, and can be effectively integrated with CPD. The framework expands upon current best practices in health professions assessment, and in doing so identifies a number of research questions. First, the use of databases as a source of performance data is central to the proposed framework and the validity of using such indicators as measures of quality of pharmacy practice remains to be evaluated, as does the validity of using pharmacy-based measures to reflect the performance of individual pharmacists employed at these pharmacies. Second, further research is needed to gain a better understanding of the varied source and nature of determinants of quality community pharmacy practice. Third, the tools and formats to assess the impact of these determinants on the daily practice of community pharmacists must be developed or modified from those used by other health professions. Fourth, the most effective strategies to overcome specific barriers documented to impact quality community pharmacy practice require evaluation. Finally, as with any assessment program, the efficiency and outcomes of the program must be evaluated to determine the impact on the quality and safety of community pharmacists' practice.
- Published
- 2007
- Full Text
- View/download PDF
34. Medical Students Perceive Better Group Learning Processes when Large Classes Are Made to Seem Small.
- Author
-
Hommes, Juliette, Arah, Onyebuchi A., de Grave, Willem, Schuwirth, Lambert W. T., Scherpbier, Albert J. J. A., and Bos, Gerard M. J.
- Subjects
MEDICAL students ,MEDICAL schools ,COLLABORATIVE learning ,MEDICAL education ,PROBLEM-based learning ,FORMAL groups - Abstract
Objective: Medical schools struggle with large classes, which might interfere with the effectiveness of learning within small groups because students are unfamiliar with fellow students. The aim of this study was to assess the effects of making a large class seem small on the students' collaborative learning processes. Design: A randomised controlled intervention study was undertaken to make a large class seem small, without the need to reduce the number of students enrolling in the medical programme. The class was divided into subsets: two small subsets (n = 50) served as the intervention groups; a control group (n = 102) was mixed with the remaining students (the non-randomised group, n ≈ 100) to create one large subset. Setting: The undergraduate curriculum of the Maastricht Medical School, applying problem-based learning principles. In this learning context, students learn mainly in tutorial groups, composed randomly from a large class every 6–10 weeks. Intervention: The formal group learning activities were organised within the subsets. Students from the intervention groups met frequently within the formal groups, in contrast to the students from the large subset, who hardly ever enrolled in formal activities with the same students. Main Outcome Measures: Three outcome measures assessed students' group learning processes over time: learning within formally organised small groups, learning with other students in the informal context and perceptions of the intervention. Results: Formal group learning processes were perceived more positively in the intervention groups from the second study year onwards, with a mean increase of β = 0.48. Informal group learning activities occurred almost exclusively within the subsets as defined by the intervention, from the students' first week in the medical curriculum (E-I indices > −0.69). Interviews revealed mainly positive effects and negligible negative side effects of the intervention. Conclusion: Better group learning processes can be achieved in large medical schools by making large classes seem small. [ABSTRACT FROM AUTHOR]
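The E-I index mentioned in the results contrasts a student's ties outside their own subset (external) with ties inside it (internal); in the standard Krackhardt-Stern formulation it equals (E - I) / (E + I), ranging from -1 (all ties internal) to +1 (all ties external). The sketch below computes it for a single hypothetical student; the numbers are invented and the study's sign convention may differ.

```python
def e_i_index(external_ties: int, internal_ties: int) -> float:
    """External-Internal (E-I) index: (E - I) / (E + I).
    -1 means all ties stay inside the own subset; +1 means all ties go outside it."""
    total = external_ties + internal_ties
    return (external_ties - internal_ties) / total if total else 0.0

# Hypothetical student: 2 informal study contacts outside the assigned subset,
# 11 inside it -> (2 - 11) / 13, roughly -0.69.
print(round(e_i_index(external_ties=2, internal_ties=11), 2))
```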
- Published
- 2014
- Full Text
- View/download PDF
35. Evidence for validity within workplace assessment: the Longitudinal Evaluation of Performance (LEP).
- Author
-
Prescott‐Clements, Linda, Van Der Vleuten, Cees P M, Schuwirth, Lambert W T, Hurst, Yvonne, and Rennie, James S
- Subjects
JOB evaluation ,JOB performance ,CLINICAL competence ,MEDICAL care ,CLINICAL medicine - Abstract
Objective The drive towards valid and reliable assessment methods for health professions’ training is becoming increasingly focused on authentic models of workplace performance assessment. This study investigates the validity of such a method, longitudinal evaluation of performance (LEP), which has been implemented in the assessment of postgraduate dental trainees in Scotland. Although it is similar in format to the mini-CEX (mini clinical evaluation exercise) and other tools that use global ratings for assessing performance in the workplace, a number of differences exist in the way in which the LEP has been implemented. These include the use of a reference point for evaluators’ judgement that represents the standard expected upon completion of the training, flexibility, a greater range of cases assessed and the use of frequency scores within feedback to identify trainees’ progress over time. Methods A range of qualitative and quantitative data were collected and analysed from 2 consecutive cohorts of trainees in Scotland (2002–03 and 2003–04). Results There is rich evidence supporting the validity, educational impact and feasibility of the LEP. In particular, a great deal of support was given by trainers for the use of a fixed reference point for judgements, despite initial concerns that this might be demotivating to trainees. Trainers were highly positive about this approach and considered it useful in identifying trainees’ progress and helping to drive learning. Conclusions The LEP has been successful in combining a strong formative approach to continuous assessment with the collection of evidence on performance within the workplace that (alongside other tools within an assessment system) can contribute towards a summative decision regarding competence. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
36. Assessing professional competence: from methods to programmes.
- Author
-
Van Der Vleuten, Cees P M and Schuwirth, Lambert W T
- Subjects
- *
MEDICAL education , *CURRICULUM , *COLLEGE curriculum , *EDUCATIONAL tests & measurements , *ACADEMIC achievement , *STANDARDS ,STUDY & teaching of medicine - Abstract
We use a utility model to illustrate that, firstly, selecting an assessment method involves context-dependent compromises, and secondly, that assessment is not a measurement problem but an instructional design problem, comprising educational, implementation and resource aspects. In the model, assessment characteristics are differently weighted depending on the purpose and context of the assessment. Of the characteristics in the model, we focus on reliability, validity and educational impact and argue that they are not inherent qualities of any instrument. Reliability depends not on structuring or standardisation but on sampling. Key issues concerning validity are authenticity and integration of competencies. Assessment in medical education addresses complex competencies and thus requires quantitative and qualitative information from different sources as well as professional judgement. Adequate sampling across judges, instruments and contexts can ensure both validity and reliability. Despite recognition that assessment drives learning, this relationship has been little researched, possibly because of its strong context dependence. When assessment should stimulate learning and requires adequate sampling, in authentic contexts, of the performance of complex competencies that cannot be broken down into simple parts, we need to make a shift from individual methods to an integral programme, intertwined with the education programme. Therefore, we need an instructional design perspective. Programmatic instructional design hinges on a careful description and motivation of choices, whose effectiveness should be measured against the intended outcomes. We should not evaluate individual methods, but provide evidence of the utility of the assessment programme as a whole. [ABSTRACT FROM AUTHOR]
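Purely as an illustration of the idea that assessment characteristics are weighted differently depending on the purpose and context of the assessment, here is a small Python sketch. The weighted-product form follows a commonly cited reading of the utility model and is an assumption, not a formula stated in this abstract; all characteristic values and weights are invented.

```python
def assessment_utility(characteristics: dict[str, float], weights: dict[str, float]) -> float:
    """Illustrative weighted-product utility: each characteristic (scored 0-1)
    is raised to a context-dependent weight and the results are multiplied,
    so a heavily weighted characteristic that scores poorly drags the
    overall utility down sharply."""
    utility = 1.0
    for name, value in characteristics.items():
        utility *= value ** weights.get(name, 1.0)
    return utility

# Hypothetical values for a high-stakes examination: reliability and validity
# are weighted heavily, cost efficiency much less so.
chars = {"reliability": 0.8, "validity": 0.9, "educational_impact": 0.7, "cost_efficiency": 0.4}
weights = {"reliability": 2.0, "validity": 2.0, "educational_impact": 1.0, "cost_efficiency": 0.5}
print(round(assessment_utility(chars, weights), 3))
```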
- Published
- 2005
- Full Text
- View/download PDF
37. Different written assessment methods: what can be said about their strengths and weaknesses?
- Author
-
Schuwirth, Lambert W T and Van Der Vleuten, Cees P M
- Subjects
- *
EDUCATION , *MEDICAL education , *STANDARDS , *CLINICAL competence , *MEDICAL care , *COLLEGE students - Abstract
Written assessment techniques can be subdivided according to their stimulus format – what the question asks – and their response format – how the answer is recorded. The former is more important in determining the type of competence being asked for than the latter. It is nevertheless important to consider both when selecting the most appropriate types. Some major elements to consider when making such a selection are cueing effect, reliability, validity, educational impact and resource-intensiveness. Open-ended questions should be used solely to test aspects that cannot be tested with multiple-choice questions. In all other cases the loss of reliability and the higher resource-intensiveness represent a significant downside. In such cases, multiple-choice questions are not less valid than open-ended questions. When making this distinction, it is important to consider whether the question is embedded within a relevant case or context and cannot be answered without the case, or not. This appears to be more or less essential according to what is being tested by the question. Context-rich questions test other cognitive skills than do context-free questions. If knowledge alone is the purpose of the test, context-free questions may be useful, but if it is the application of knowledge or knowledge as a part of problem solving that is being tested, then context is indispensable. Every format has its (dis)advantages and a combination of formats based on rational selection is more useful than trying to find or develop a panacea. The response format is less important in this respect than the stimulus. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
38. Assessing medical competence: finding the right answers.
- Author
-
Schuwirth, Lambert W. T.
- Subjects
- *
MEDICAL competence testing , *EVALUATION , *COMPREHENSION , *MEDICINE , *EXAMINATIONS , *EDUCATION - Abstract
The article focuses on the assessment of medical competence. One way of looking at assessment is to see it as a measurement of medical competence; examinations can then be regarded as diagnostic tools. If one laboratory test is far off the scale, it is assumed that there is something wrong with the patient. The diagnosis of medical incompetence, however, can be made only on the basis of various methods. Failures in comprehension may hamper or confuse, even though many of the underlying concepts in education and medicine are the same.
- Published
- 2004
- Full Text
- View/download PDF
39. Changing education, changing assessment, changing research?
- Author
-
Schuwirth, Lambert W T and Van Der Vleuten, Cees P M
- Subjects
MEDICAL education ,EDUCATIONAL evaluation ,CLINICAL competence ,MEDICAL competence testing ,PERFORMANCE - Abstract
In medical education and in the assessment of medical competence and performance, important changes have taken place over the last 5 decades. These changes have affected the basic concepts in all 3 domains. In education, constructivism has provided a completely new view of how students learn best. In assessment, the change from trait-orientated to competency- or role-orientated thinking has given rise to a whole range of new approaches. Certain methods of education, such as problem-based learning (PBL), and of assessment, however, are often seen as almost synonymous with the underlying concepts, and one tends to forget that it is the concept that is important and that a particular method is but 1 way of using a concept. In doing so, one runs the risk of confusing means and ends, which may hamper or slow down new developments. A similar problem often seems to occur in research in medical education. Here too, methods – or, rather, methodologies – are confused with research questions. This may lead to an overemphasis on research that fits well-known methodologies (e.g. the randomised controlled trial) and neglect of sometimes even more important research questions because they do not fit well-known methodologies. In this paper we advocate a return to the underlying concepts and careful reflection on their use in various situations. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
40. Differences in knowledge development exposed by multi-curricular progress test data
- Author
-
Arno M. M. Muijtjens, Lambert W. T. Schuwirth, Janke Cohen-Schotanus, Cees P. M. van der Vleuten, and Faculteit Medische Wetenschappen/UMCG
- Subjects
Higher education ,education ,Assessment ,Education ,Mathematics education ,Medicine ,Humans ,Curriculum ,Schools, Medical ,Medicine(all) ,Academic year ,business.industry ,General Medicine ,Benchmarking ,Between-school comparison ,Curriculum diagnosis ,Progress testing ,Trend analysis ,Interinstitutional Relations ,Knowledge ,Data Interpretation, Statistical ,Educational Measurement ,business ,Strengths and weaknesses ,Test data ,Education, Medical, Undergraduate ,Program Evaluation - Abstract
Progress testing provides data on the growth of students' knowledge over the course of the curriculum, obtained from the results of all students in the curriculum on periodic, similar tests pitched at end-of-curriculum level. Since 2001, three medical schools have jointly constructed and administered four progress tests annually. All students in the 6-year undergraduate curricula of these schools take the same tests, resulting in 24 distinct measurements per academic year (four tests for six student year groups), which may be used to compare performance between and within schools. Because single-point measurements had proven unreliable, we devised a method that uses cumulative information to compare schools' test performance. This cumulative deviation method involves calculating the deviations of schools' scores from the cross-institutional average score for the 24 measurement moments in a year. The current study shows that it appears to be feasible to use a combination of the cumulative deviation method and trend analysis for subdomains of medical knowledge to detect strengths and weaknesses in knowledge development in medical curricula. We illustrate the method by applying it to data from 16 consecutive progress tests administered to all students (4,300) of three medical schools in the academic years 2001/2002 through 2004/2005.
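The abstract combines the cumulative deviation method with trend analysis per knowledge subdomain. As a minimal illustration (the subdomain, the deviation values and the number of tests are invented, not taken from the study), the sketch below fits a least-squares line to one school's cumulative deviations on a subdomain across consecutive progress tests; the slope indicates whether the school is structurally gaining or losing ground relative to the cross-institutional benchmark.

```python
import numpy as np

# Hypothetical cumulative deviations of one school from the overall mean
# on a "clinical sciences" subdomain across 16 consecutive progress tests.
cumulative_dev = np.array([0.2, 0.5, 0.4, 0.9, 1.1, 1.0, 1.4, 1.6,
                           1.5, 1.9, 2.2, 2.1, 2.5, 2.8, 2.7, 3.1])
tests = np.arange(1, len(cumulative_dev) + 1)

# Trend analysis: a positive slope suggests the school is structurally gaining
# ground in this subdomain, a negative slope that it is losing ground.
slope, intercept = np.polyfit(tests, cumulative_dev, deg=1)
print(f"Trend: {slope:+.3f} score points per test")
```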
- Full Text
- View/download PDF
41. Assuring the quality of programmatic assessment: Moving beyond psychometrics.
- Author
-
Uijtdehaage, Sebastian and Schuwirth, Lambert W. T.
- Subjects
- *
MEDICAL personnel , *EDUCATION , *QUALITY , *TRIANGULATION , *ASSURANCE (Theology) - Abstract
The article discusses the concept of programmatic assessment in health professions education, which was introduced in 2005 and is rapidly gaining traction. It notes that some aspects of programmatic assessment extend existing assessment practices while others are quite new, and that meaningful triangulation across instruments, proportionality of decision making and diversity of quality assurance processes are fundamental to programmatic assessment.
- Published
- 2018
- Full Text
- View/download PDF
42. HEALTH CONDITIONS FOR GOOD TEACHING.
- Author
-
Lambert, W. R.
- Subjects
TEACHER attitudes ,DIGNITY ,PARENT-teacher relationships ,RESPECT ,EDUCATION - Abstract
The article reflects on the characteristics of a successful teacher. It notes the need for teachers to display a great deal of dignity, not only for the sake of discipline in their school, but also in order to garner the respect of parents. It also stresses the need for teachers to remain entirely free from any appearance of pedagogism outside school.
- Published
- 1881
43. Professionalism: Evolution of the concept
- Author
-
van Mook, Walther N.K.A., de Grave, Willem S., Wass, Valerie, O'Sullivan, Helen, Zwaveling, Jan Harm, Schuwirth, Lambert W., and van der Vleuten, Cees P.M.
- Subjects
- *
OCCUPATIONAL training , *EDUCATION , *TRAINING , *CAREER education - Abstract
Abstract: The concept of professionalism has undergone major changes over the millennia in general and over the last century in particular. This article, the first in a series of articles in this Journal on professionalism, attempts to provide the reader with a historical overview of the evolution of the concept of professionalism over time. As a result of these changes, medical school curricula and contemporary specialist training programs are increasingly becoming competence based, with professionalism becoming an integral part of a resident's training and assessment program. [Copyright Elsevier]
- Published
- 2009
- Full Text
- View/download PDF