332 results for "Eric S. Holmboe"
Search Results
2. Trust in Group Decisions: a scoping review
- Author
-
Jason E. Sapp, Dario M. Torre, Kelsey L. Larsen, Eric S. Holmboe, and Steven J. Durning
- Subjects
Trust, Group, Decision, Competency committee, Special aspects of education, LC8-6691, Medicine - Abstract
Abstract Background Trust is a critical component of competency committees given their high-stakes decisions. Research from outside of medicine on group trust has not focused on trust in group decisions, and “group trust” has not been clearly defined. The purpose was twofold: to examine the definition of trust in the context of group decisions and to explore what factors may influence trust from the perspective of those who rely on competency committees through a proposed group trust model. Methods The authors conducted a literature search of four online databases, seeking articles published on trust in group settings. Reviewers extracted, coded, and analyzed key data including definitions of trust and factors pertaining to group trust. Results The authors selected 42 articles for full text review. Although reviewers found multiple general definitions of trust, they were unable to find a clear definition of group trust and propose the following: a group-directed willingness to accept vulnerability to actions of the members based on the expectation that members will perform a particular action important to the group, encompassing social exchange, collective perceptions, and interpersonal trust. Additionally, the authors propose a model encompassing individual level factors (trustor and trustee), interpersonal interactions, group level factors (structure and processes), and environmental factors. Conclusions Higher degrees of trust at the individual and group levels have been associated with attitudinal and performance outcomes, such as quality of group decisions. Developing a deeper understanding of trust in competency committees may help these committees implement more effective and meaningful processes to make collective decisions.
- Published
- 2019
- Full Text
- View/download PDF
3. The Journey to Competency-based Medical Education - Implementing Milestones
- Author
-
Eric S. Holmboe
- Subjects
Medicine (General), R5-920 - Published
- 2017
- Full Text
- View/download PDF
4. Emergency Medicine: On the Frontlines of Medical Education Transformation
- Author
-
Eric S. Holmboe
- Subjects
Medicine, Medical emergencies. Critical care. Intensive care. First aid, RC86-88.9 - Abstract
Emergency medicine (EM) has always been on the frontlines of healthcare in the United States. I experienced this reality firsthand as a young general medical officer assigned to an emergency department (ED) in a small naval hospital in the 1980s. For decades the ED has been the only site where patients could not be legally denied care. Despite increased insurance coverage for millions of Americans as a result of the Affordable Care Act, ED directors report an increase in patient volumes in a recent survey.1 EDs care for patients from across the socioeconomic spectrum suffering from a wide range of clinical conditions. As a result, the ED is still one of the few components of the American healthcare system where social justice is enacted on a regular basis. Constant turbulence in the healthcare system, major changes in healthcare delivery, technological advances, and shifting demographic trends necessitate that EM constantly adapt and evolve as a discipline in this complex environment.
- Published
- 2015
- Full Text
- View/download PDF
5. Will Any Road Get You There? Examining Warranted and Unwarranted Variation in Medical Education
- Author
-
Eric S. Holmboe and Jennifer R. Kogan
- Subjects
Education, Medical, Education, Medical, Graduate, Humans, Curriculum, General Medicine, Delivery of Health Care, Quality Improvement, Quality of Health Care, Education - Abstract
Undergraduate and graduate medical education have long embraced uniqueness and variability in curricular and assessment approaches. Some of this variability is justified (warranted or necessary variation), but a substantial portion represents unwarranted variation. A primary tenet of outcomes-based medical education is ensuring that all learners acquire essential competencies to be publicly accountable to meet societal needs. Unwarranted variation in curricular and assessment practices contributes to suboptimal and variable educational outcomes and, by extension, risks graduates delivering suboptimal health care quality. Medical education can use lessons from the decades of study on unwarranted variation in health care as part of efforts to continuously improve the quality of training programs. To accomplish this, medical educators will first need to recognize the difference between warranted and unwarranted variation in both clinical care and educational practices. Addressing unwarranted variation will require cooperation and collaboration between multiple levels of the health care and educational systems using a quality improvement mindset. These efforts at improvement should acknowledge that some aspects of variability are not scientifically informed and do not support desired outcomes or societal needs. This perspective examines the correlates of unwarranted variation of clinical care in medical education and the need to address the interdependency of unwarranted variation occurring between clinical and educational practices. The authors explore the challenges of variation across multiple levels: community, institution, program, and individual faculty members. The article concludes with recommendations to improve medical education by embracing the principles of continuous quality improvement to reduce the harmful effect of unwarranted variation.
- Published
- 2022
6. The Urgency of Now: Rethinking and Improving Assessment Practices in Medical Education Programs
- Author
-
Eric S. Holmboe, Nora Y. Osman, Christina M. Murphy, and Jennifer R. Kogan
- Subjects
General Medicine, Education - Published
- 2023
7. Measuring Graduate Medical Education Outcomes to Honor the Social Contract
- Author
-
Robert L. Phillips, Brian C. George, Eric S. Holmboe, Andrew W. Bazemore, John M. Westfall, and Asaf Bitton
- Subjects
Education, Medical, Graduate, Physicians, Humans, Internship and Residency, General Medicine, United States, Education - Abstract
The graduate medical education (GME) system is heavily subsidized by the public in return for producing physicians who meet society's needs. Under the terms of this implicit social contract, decisions about how this funding is allocated are deferred to the individual training sites. Institutions receiving public funding face potential conflicts of interest, which have at times prioritized institutional purposes and needs over societal needs, highlighting that there is little public accountability for how such funding is used. The cost and institutional burden of assessing many fundamental GME outcomes, such as specialty, geographic physician distribution, training-imprinted cost behaviors, and populations served, could be mitigated as data sources and methods for assessing GME outcomes and guiding training improvement already exist. This new capacity to assess system-level outcomes could help institutions and policymakers strategically address the greatest public needs. Measurement of educational outcomes can also be used to guide training improvement at every level of the educational system (i.e., the individual trainee, individual teaching institution, and collective GME system levels). There are good examples of institutions, states, and training consortia that are already assessing and using GME outcomes in these ways. The ultimate outcome could be a GME system that better meets the needs of society and better honors what is now only an implicit social contract.
- Published
- 2022
8. Competency-Based Medical Education: Considering Its Past, Present, and a Post–COVID-19 Era
- Author
-
Michael S. Ryan, Subani Chandra, and Eric S. Holmboe
- Subjects
Medical education, Modalities, Coronavirus disease 2019 (COVID-19), education, Graduate medical education, MEDLINE, Virtual learning environment, General Medicine, Psychology, Education, Accreditation, Variety (cybernetics), Graduation - Abstract
Advancement toward competency-based medical education (CBME) has been hindered by inertia and a myriad of implementation challenges, including those associated with assessment of competency, accreditation/regulation, and logistical considerations. The COVID-19 pandemic disrupted medical education at every level. Time-in-training sometimes was shortened or significantly altered and there were reductions in the number and variety of clinical exposures. These and other unanticipated changes to existing models highlighted the need to advance the core principles of CBME. This manuscript describes the impact of COVID-19 on the ongoing transition to CBME, including the effects on training, curricular, and assessment processes for medical school and graduate medical education programs. The authors outline consequences of the COVID-19 disruption on learner training and assessment of competency, such as conversion to virtual learning modalities in medical school, redeployment of residents within health systems, and early graduation of trainees based on achievement of competency. Finally, the authors reflect on what the COVID-19 pandemic taught them about realization of CBME as the medical education community looks forward to a post-pandemic future.
- Published
- 2022
9. Advancing the science of health professions education through a shared understanding of terminology: a content analysis of terms for 'faculty'
- Author
-
Lambert Schuwirth, Anique Atherley, Steven J. Durning, Dujeepa D. Samarasekera, Pim W. Teunissen, Wendy Hu, Jennifer J. Cleland, Hiroshi Nishigori, Eric S. Holmboe, Lauren A. Maggio, Susan van Schalkwyk, RS: SHE - R1 - Research (OvO), Onderwijsontw & Onderwijsresearch, and Amsterdam Reproduction & Development (AR&D)
- Subjects
Literature study, Education, Terminology, Social group, Health care, Humans, Faculty terminology, Medical education, Mentors, Subject (documents), Ambiguity, Variety (linguistics), Faculty, Research Personnel, Content analysis, Health Occupations, CLARITY, Original Article, Psychology - Abstract
Introduction Health professions educators risk misunderstandings where terms and concepts are not clearly defined, hampering the field's progress. This risk is especially pronounced with ambiguity in describing roles. This study explores the variety of terms used by researchers and educators to describe "faculty", with the aim of facilitating definitional clarity and creating a shared terminology and approach to describing this term. Methods The authors analyzed journal article abstracts to identify the specific words and phrases used to describe individuals or groups of people referred to as faculty. To identify abstracts, PubMed articles indexed with the Medical Subject Heading "faculty" published between 2007 and 2017 were retrieved. Authors iteratively extracted data and used content analysis to identify patterns and themes. Results A total of 5,436 citations were retrieved, of which 3,354 were deemed eligible. Based on a sample of 594 abstracts (17.7%), we found 279 unique terms. The most commonly used terms accounted for approximately one-third of the sample and included faculty or faculty member/s (n = 252; 26.4%); teacher/s (n = 59; 6.2%) and medical educator/s (n = 26; 2.7%) were also well represented. Content analysis highlighted that the different descriptors authors used referred to four role types: healthcare (e.g., doctor, physician), education (e.g., educator, teacher), academia (e.g., professor), and/or relationship to the learner (e.g., mentor). Discussion Faculty are described using a wide variety of terms, which can be linked to four role descriptions. The authors propose a template for researchers and educators who want to refer to faculty in their papers. This is important to advance the field and increase readers' assessment of transferability.
- Published
- 2022
10. Faculty Perceptions of Frame of Reference Training to Improve Workplace-Based Assessment
- Author
-
Jennifer R. Kogan, Lisa N. Conforti, and Eric S. Holmboe
- Subjects
General Medicine, Education, Original Research - Abstract
Background Workplace-based assessment (WBA) is a key assessment strategy in competency-based medical education. However, its full potential has not been actualized secondary to concerns with reliability, validity, and accuracy. Frame of reference training (FORT), a rater training technique that helps assessors distinguish between learner performance levels, can improve the accuracy and reliability of WBA, but the effect size is variable. Understanding FORT's benefits and challenges helps improve this rater training technique. Objective To explore faculty's perceptions of the benefits and challenges associated with FORT. Methods Subjects were internal medicine and family medicine physicians (n = 41) who participated in a rater training intervention in 2018 consisting of in-person FORT followed by asynchronous online spaced learning. We assessed participants' perceptions of FORT in post-workshop focus groups and an end-of-study survey. Focus groups and survey free-text responses were coded using thematic analysis. Results All subjects participated in 1 of 4 focus groups and completed the survey. Four benefits of FORT were identified: (1) opportunity to apply skills frameworks via deliberate practice; (2) demonstration of the importance of certain evidence-based clinical skills; (3) practice that improved the ability to discriminate between resident skill levels; and (4) highlighting the importance of direct observation and the dangers of using proxy information in assessment. Challenges included time constraints and task repetitiveness. Conclusions Participants believe that FORT serves multiple purposes, including helping them distinguish between learner skill levels while demonstrating the impact of evidence-based clinical skills and the importance of direct observation.
- Published
- 2023
11. Using Learning Analytics to Examine Achievement of Graduation Targets for Systems-Based Practice and Practice-Based Learning and Improvement: A National Cohort of Vascular Surgery Fellows
- Author
-
Brigitte K. Smith, Stanley J. Hamstra, Abigail Luman, Erica L. Mitchell, Eric S. Holmboe, Kenji Yamazaki, Yoon Soo Park, and Ara Tekian
- Subjects
Systems Analysis, Graduate medical education, Learning analytics, Systems Theory, Health care, Milestone (project management), Humans, Learning, Curriculum, Accreditation, Surgeons, Medical education, Internship and Residency, General Medicine, Vascular surgery, Competency-Based Education, Education, Medical, Graduate, Educational Status, Surgery, Clinical Competence, Educational Measurement, Cardiology and Cardiovascular Medicine, Graduation - Abstract
Background Surgeons provide patient care in complex health care systems and must be able to participate in improving both personal performance and the performance of the system. The Accreditation Council for Graduate Medical Education (ACGME) Vascular Surgery Milestones are utilized to assess vascular surgery fellows' (VSFs') achievement of graduation targets in the competencies of Systems-Based Practice (SBP) and Practice-Based Learning and Improvement (PBLI). We investigate the predictive value of semiannual milestone ratings for final achievement within these competencies at the time of graduation. Methods National ACGME milestones data were utilized for analysis. All trainees entering 2-year vascular surgery fellowship programs in July 2016 were included in the analysis (n = 122). Predictive probability values (PPVs) were obtained for each SBP and PBLI sub-competency by biannual review period, to estimate the probability of VSFs not reaching the recommended graduation target based on their previous milestone ratings. Results The rate of nonachievement of the graduation target level 4.0 on the SBP and PBLI sub-competencies at the time of graduation for VSFs was 13.1–25.4%. At the first assessment time point, 6 months into the fellowship program, the PPV of the SBP and PBLI milestones for nonachievement of level 4.0 upon graduation ranged from 16.3–60.2%. Six months prior to graduation, the PPVs across the 6 sub-competencies ranged from 14.6–82.9%. Conclusions A significant percentage of VSFs do not achieve the ACGME Vascular Surgery Milestone targets for graduation in the competencies of SBP and PBLI, suggesting a need to improve curricula and assessment strategies in these domains across vascular surgery fellowship programs. Reported milestone levels across all time points are predictive of ultimate achievement upon graduation and should be utilized to provide targeted feedback and individualized learning plans to ensure graduates are prepared to engage in personal and health care system improvement once in unsupervised practice.
- Published
- 2021
12. Exploring the Association Between USMLE Scores and ACGME Milestone Ratings: A Validity Study Using National Data From Emergency Medicine
- Author
-
Stanley J. Hamstra, Michael A. Barone, Kenji Yamazaki, Sally A. Santen, Eric S. Holmboe, Daniel Jurich, Monica M. Cuddy, and John Burkhardt
- Subjects
Adult, Male, Graduate medical education, Bivariate analysis, Accreditation, Education, Young Adult, Linear regression, Milestone (project management), Humans, Discriminant validity, Internship and Residency, Reproducibility of Results, Research Reports, General Medicine, Middle Aged, Licensure, Medical, United States Medical Licensing Examination, United States, Confidence interval, Education, Medical, Graduate, Emergency medicine, Emergency Medicine, Multilevel Analysis, Regression Analysis, Female, Clinical Competence, Educational Measurement, Psychology, Incremental validity - Abstract
Supplemental Digital Content is available in the text. Purpose The United States Medical Licensing Examination (USMLE) sequence and the Accreditation Council for Graduate Medical Education (ACGME) milestones represent 2 major components along the continuum of assessment from undergraduate through graduate medical education. This study examines associations between USMLE Step 1 and Step 2 Clinical Knowledge (CK) scores and ACGME emergency medicine (EM) milestone ratings. Method In February 2019, subject matter experts (SMEs) provided judgments of expected associations for each combination of Step examination and EM subcompetency. The resulting sets of subcompetencies with expected strong and weak associations were selected for convergent and discriminant validity analysis, respectively. National-level data for 2013–2018 were provided; the final sample included 6,618 EM residents from 158 training programs. Empirical bivariate correlations between milestone ratings and Step scores were calculated, then those correlations were compared with the SMEs' judgments. Multilevel regression analyses were conducted on the selected subcompetencies, in which milestone ratings were the dependent variable, and Step 1 score, Step 2 CK score, and cohort year were independent variables. Results Regression results showed small but statistically significant positive relationships between Step 2 CK score and the subcompetencies (regression coefficients ranged from 0.02 [95% confidence interval (CI), 0.01–0.03] to 0.12 [95% CI, 0.11–0.13]; all P < .05), with the degree of association matching the SMEs' judgments for 7 of the 9 selected subcompetencies. For example, a 1 standard deviation increase in Step 2 CK score predicted a 0.12 increase in MK-01 milestone rating, when controlling for Step 1. Step 1 score showed a small statistically significant effect with only the MK-01 subcompetency (regression coefficient = 0.06 [95% CI, 0.05–0.07], P < .05).
Conclusions These results provide incremental validity evidence in support of Step 1 and Step 2 CK score and EM milestone rating uses.
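The effect sizes in this abstract are standardized regression coefficients: "a 1 standard deviation increase in Step 2 CK score predicted a 0.12 increase in milestone rating" is the usual reading of a regression fit on a z-scored predictor. A minimal sketch of that interpretation with synthetic data; every value below is hypothetical and not taken from the study:

```python
import numpy as np

# Hypothetical illustration of a standardized regression coefficient.
# Synthetic data only -- not the study's data.
rng = np.random.default_rng(0)

n = 1000
step2_ck = rng.normal(230, 15, n)                       # simulated exam scores
milestone = 3.0 + 0.008 * step2_ck + rng.normal(0, 0.3, n)

# Standardize the predictor so its unit becomes "1 standard deviation".
z = (step2_ck - step2_ck.mean()) / step2_ck.std()

# Ordinary least squares: milestone = b0 + b1 * z
X = np.column_stack([np.ones(n), z])
b0, b1 = np.linalg.lstsq(X, milestone, rcond=None)[0]

# b1 is the predicted change in milestone rating per 1 SD of Step 2 CK,
# analogous to the 0.12 reported in the abstract.
print(round(b1, 2))
```

Because the predictor is centered, the intercept `b0` equals the mean milestone rating, which is why such models are convenient for reporting "per 1 SD" effects.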
- Published
- 2021
13. Entrustment Unpacked: Aligning Purposes, Stakes, and Processes to Enhance Learner Assessment
- Author
-
Holly Caretta-Weyer, Eric S. Holmboe, Eric J. Warm, David A. Turner, Benjamin Kinnear, Cees P. M. van der Vleuten, Daniel J. Schumacher, RS: SHE - R1 - Research (OvO), and Onderwijsontw & Onderwijsresearch
- Subjects
Value (ethics), Knowledge management, Formative Feedback, Computer science, Process (engineering), Education, Formative assessment, Medical, Humans, Learning, Competence (human resources), Competency-Based Education/methods, Educational Measurement/methods, Education, Medical, Graduate/methods, General Medicine, Group decision making, Risk perception, Summative assessment, Graduate/methods, Key (cryptography), Clinical Competence - Abstract
Educators use entrustment, a common framework in competency-based medical education, in multiple ways, including frontline assessment instruments, learner feedback tools, and group decision making within promotions or competence committees. Within these multiple contexts, entrustment decisions can vary in purpose (i.e., intended use), stakes (i.e., perceived risk or consequences), and process (i.e., how entrustment is rendered). Each of these characteristics can be conceptualized as having 2 distinct poles: (1) purpose has formative and summative, (2) stakes has low and high, and (3) process has ad hoc and structured. For each characteristic, entrustment decisions often do not fall squarely at one pole or the other, but rather lie somewhere along a spectrum. While distinct, these continua can, and sometimes should, influence one another, and can be manipulated to optimally integrate entrustment within a program of assessment. In this article, the authors describe each of these continua and depict how key alignments between them can help optimize value when using entrustment in programmatic assessment within competency-based medical education. As they think through these continua, the authors will begin and end with a case study to demonstrate the practical application as it might occur in the clinical learning environment.
- Published
- 2021
14. Advancing Workplace-Based Assessment in Psychiatric Education
- Author
-
Jason R. Frank, Eric S. Holmboe, and John Young
- Subjects
Formative assessment, Psychiatry and Mental health, Medical education, Psychiatric education, Summative assessment, education, Direct observation, Key (cryptography), Independent practice, Psychology, Assessment for learning, Domain (software engineering) - Abstract
With the adoption of competency-based medical education, assessment has shifted from the traditional classroom domains of knows and knows how to the workplace domain of doing. This workplace-based assessment has 2 purposes: assessment of learning (summative feedback) and assessment for learning (formative feedback). What the trainee does becomes the basis for identifying growth edges and determining readiness for advancement and, ultimately, independent practice. High-quality workplace-based assessment programs require thoughtful choices about the framework of assessment, the tools themselves, the platforms used, and the contexts in which the assessments take place, with an emphasis on direct observation.
- Published
- 2021
15. Competency-Based Assessment in Psychiatric Education
- Author
-
John Young, Eric S. Holmboe, and Jason R. Frank
- Subjects
Medical education, education, Learning analytics, Coaching, Psychiatry and Mental health, Psychiatric education, Trustworthiness, Faculty development, Independent practice, Psychology, Health needs - Abstract
Medical education programs are failing to meet the health needs of patients and communities. Misalignments exist on multiple levels, including content (what trainees learn), pedagogy (how trainees learn), and culture (why trainees learn). To address these challenges effectively, competency-based assessment (CBA) for psychiatric medical education must simultaneously produce life-long learners who can self-regulate their own growth and trustworthy processes that determine and accelerate readiness for independent practice. The key to doing so effectively is situating assessment within a carefully designed system with several critical, interacting components: workplace-based assessment, ongoing faculty development, learning analytics, longitudinal coaching, and fit-for-purpose clinical competency committees.
- Published
- 2021
16. Program Directors Patient Safety and Quality Educators Network: A Learning Collaborative to Improve Resident and Fellow Physician Engagement
- Author
-
Robin Wagner, Kevin B. Weiss, Linda A. Headrick, Rebecca C. Jaffe, Abra L. Fant, Anne Gravel Sullivan, Cormac O. Maher, Jessica Donato, Deborah L. Benzil, Rebecca S. Miltner, Robin R. Hemphill, Eric S. Holmboe, Elizabeth R. Clewett, Deborah Smith Clements, Timothy P. Brigham, and Sanjeev Arora
- Subjects
ACGME News and Views, General Medicine, Education
- 2022
17. Can Rater Training Improve the Quality and Accuracy of Workplace-Based Assessment Narrative Comments and Entrustment Ratings? A Randomized Controlled Trial
- Author
-
Jennifer R. Kogan, C. Jessica Dine, Lisa N. Conforti, and Eric S. Holmboe
- Subjects
General Medicine, Education - Abstract
Prior research evaluating workplace-based assessment (WBA) rater training effectiveness has not measured improvement in narrative comment quality and accuracy, nor accuracy of prospective entrustment-supervision ratings. The purpose of this study was to determine whether rater training, using performance dimension and frame of reference training, could improve WBA narrative comment quality and accuracy. A secondary aim was to assess impact on entrustment rating accuracy. This single-blind, multi-institution, randomized controlled trial of a multifaceted, longitudinal rater training intervention consisted of in-person training followed by asynchronous online spaced learning. In 2018, investigators randomized 94 internal medicine and family medicine physicians involved with resident education. Participants assessed 10 scripted standardized resident-patient videos at baseline and follow-up. Differences in holistic assessment of narrative comment accuracy and specificity, accuracy of individual scenario observations, and entrustment rating accuracy were evaluated with t-tests. Linear regression assessed impact of participant demographics and baseline performance. Seventy-seven participants completed the study. At follow-up, the intervention group (n = 41), compared to the control group (n = 36), had higher scores for narrative holistic specificity (2.76 vs 2.31, P < .001, Cohen V = .30), accuracy (2.37 vs 2.06, P < .001, Cohen V = .20) and mean quantity of accurate (6.14 vs 4.33, P < .001), inaccurate (3.53 vs 2.41, P < .001), and overall observations (2.61 vs 1.92, P = .002, Cohen V = .47). In aggregate, the intervention group had more accurate entrustment ratings (58.1% vs 49.7%, P = .006, Phi = .30). Baseline performance was significantly associated with performance on final assessments. Quality and specificity of narrative comments improved with rater training; the effect was mitigated by inappropriate stringency. Training improved accuracy of prospective entrustment-supervision ratings, but the effect was more limited. Participants with lower baseline rating skill may benefit most from training.
- Published
- 2022
18. The Reliability of Graduate Medical Education Quality of Care Clinical Performance Measures
- Author
-
Jung G. Kim, Hector P. Rodriguez, Eric S. Holmboe, Kathryn M. McDonald, Lindsay Mazotti, Diane R. Rittenhouse, Stephen M. Shortell, and Michael H. Kanter
- Subjects
Education, Medical, Reproducibility of Results, Internship and Residency, General Medicine, Health Services, United States, Education, Education, Medical, Graduate, Clinical Research, Medical, Humans, Family Practice, Graduate, Digestive Diseases, Curriculum and Pedagogy, Original Research - Abstract
Background Graduate medical education (GME) program leaders struggle to incorporate quality measures in the ambulatory care setting, leading to knowledge gaps on how to provide feedback to residents and programs. While nationally collected quality of care data are available, their reliability for individual resident learning and for GME program improvement is understudied. Objective To examine the reliability of the Healthcare Effectiveness Data and Information Set (HEDIS) clinical performance measures in family medicine and internal medicine GME programs and to determine whether HEDIS measures can inform residents and their programs about their quality of care. Methods From 2014 to 2017, we collected HEDIS measures from 566 residents in 8 family medicine and internal medicine programs under one sponsoring institution. Intraclass correlation was performed to establish the patient sample sizes required for 0.70 and 0.80 reliability levels at the resident and program levels. Differences between the patient sample sizes required for reliable measurement and the actual patients cared for by residents were calculated. Results The highest reliability levels for residents (0.88) and programs (0.98) were found for the most frequently available HEDIS measure, colorectal cancer screening. At the GME program level, 87.5% of HEDIS measures had sufficient sample sizes for reliable measurement at alpha 0.7 and 75.0% at alpha 0.8. Most resident-level measurements were found to be less reliable. Conclusions GME programs may reliably evaluate HEDIS performance pooled at the program level, but less so at the resident level due to patient volume.
- Published
- 2022
19. A multispecialty ethnographic study of clinical competency committees (CCCs)
- Author
-
Andem Ekpenyong, Laura Edgar, LuAnn Wilkerson, and Eric S. Holmboe
- Subjects
Education, Medical, Graduate, Humans, Internship and Residency, General Medicine, Clinical Competence, Anthropology, Cultural, Education
Clinical competency committees (CCCs) assess residents' performance on their specialty-specific milestones; however, there is no 'one-size-fits-all' blueprint for accomplishing this. Thus, CCCs have had to develop their own procedures. The goal of this study was to examine these efforts to assist new programs embarking on this venture and established programs looking to improve their CCC practices and processes. We purposefully sampled CCCs across multiple specialties and institutions. Data from three sources were triangulated: (1) an online demographic survey, (2) ethnographic observations of CCC meetings, and (3) post-observation semi-structured interviews with the program director and/or CCC chairperson. Template analysis was used to build the coding structure. Sixteen observations were completed with 15 different CCCs at 9 institutions. Three main thematic categories that impact the operations of CCCs emerged: (1) membership structure and members' roles, (2) roles of the CCC in residency, and (3) CCC processes, including trainee presentation to the committee and decision-making. While effective practices were observed, substantial variation existed in all three thematic areas. While CCCs used some known effective practices, substantial variation in structure and processes was notable across CCCs. Future work should explore the impact of this variation on educational outcomes.
- Published
- 2022
20. Simulation-Based Assessments and Graduating Neurology Residents' Milestones: Status Epilepticus Milestones
- Author
-
Jeffrey H. Barsuk, Kenji Yamazaki, Yara Mikhaeil-Demo, George Culler, Eric S. Holmboe, Danny Bega, Jessica W. Templer, Elaine R. Cohen, Amar B. Bhatt, Elizabeth E. Gerard, Diane B. Wayne, and Neelofer Shafi
- Subjects
medicine.medical_specialty ,Neurology ,Graduate medical education ,Accreditation ,Skills management ,Cohort Studies ,03 medical and health sciences ,Status Epilepticus ,0302 clinical medicine ,Milestone (project management) ,medicine ,Humans ,030212 general & internal medicine ,Simulation based ,Original Research ,Chicago ,business.industry ,Internship and Residency ,General Medicine ,United States ,Checklist ,Education, Medical, Graduate ,Family medicine ,Clinical Competence ,Educational Measurement ,business ,030217 neurology & neurosurgery ,Cohort study - Abstract
Background The American Board of Psychiatry and Neurology and the Accreditation Council for Graduate Medical Education (ACGME) developed Milestones that provide a framework for resident assessment. However, the Milestones do not describe how programs should perform assessments. Objectives We evaluated graduating residents' status epilepticus (SE) identification and management skills and how they correlate with the ACGME Milestones for epilepsy and for management/treatment reported by their program's clinical competency committee (CCC). Methods We performed a cohort study of graduating neurology residents from 3 academic medical centers in Chicago in 2018. We evaluated residents' skills in identifying and managing SE using a simulation-based assessment (a 26-item checklist). Simulation-based assessment scores were compared to experience (the number of SE cases each resident reported identifying and managing during residency), self-confidence in identifying and managing these cases, and the end-of-residency Milestones assigned by a CCC based on end-of-rotation evaluations. Results Sixteen of 21 (76%) eligible residents participated in the study. The average SE checklist score was 15.6 of 26 checklist items correct (60%, SD 12.2%). There were no significant correlations between resident checklist performance and experience or self-confidence. The average participant's Milestone level for epilepsy and for management/treatment was high, at 4.3 of 5 (SD 0.4) and 4.4 of 5 (SD 0.4), respectively. There were no significant associations between checklist skills performance and the level of Milestone assigned. Conclusions Simulated SE skills performance of graduating neurology residents was poor. Our study suggests that end-of-rotation evaluations alone are inadequate for assigning Milestones for high-stakes clinical skills such as identification and management of SE.
- Published
- 2021
21. Assessing Interpersonal and Communication Skills
- Author
-
Jennifer R. Kogan, Eric S. Holmboe, and Liana Puscas
- Subjects
Physician-Patient Relations ,Communication ,Applied psychology ,Humans ,Internship and Residency ,Interpersonal Relations ,General Medicine ,Interpersonal communication ,Communication skills ,Psychology ,Perspectives - Published
- 2021
22. Reimagining Feedback for the Milestones Era
- Author
-
Eric S. Holmboe, Laura Edgar, Marygrace Zetkulic, and Andem Ekpenyong
- Subjects
Education, Medical, Graduate ,MEDLINE ,Humans ,Internship and Residency ,Library science ,General Medicine ,Psychology ,Feedback ,Perspectives - Published
- 2021
23. GME on the Frontlines—Health Impacts of COVID-19 Across ACGME-Accredited Programs
- Author
-
Eric S. Holmboe, Thomas J. Nasca, Lynne M. Kirk, and Lauren M. Byrne
- Subjects
2019-20 coronavirus outbreak ,medicine.medical_specialty ,Coronavirus disease 2019 (COVID-19) ,ACGME News and Views ,Political science ,Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) ,Family medicine ,medicine ,General Medicine ,Faculty medical ,Education ,Accreditation - Published
- 2021
24. Initial Implementation of Resident-Sensitive Quality Measures in the Pediatric Emergency Department
- Author
-
Brad Sobolewski, Eric S. Holmboe, Carol Carraccio, Jamiu O. Busari, Cees P. M. van der Vleuten, Abigail Martini, Terri L. Byczkowski, Daniel J. Schumacher, RS: SHE - R1 - Research (OvO), and Onderwijsontw & Onderwijsresearch
- Subjects
Pediatric emergency ,medicine.medical_specialty ,020205 medical informatics ,Composite score ,media_common.quotation_subject ,02 engineering and technology ,Pediatrics ,Education ,03 medical and health sciences ,0302 clinical medicine ,PHYSICIANS ,Head Injuries, Closed ,MEDICAL-EDUCATION ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Humans ,Quality (business) ,030212 general & internal medicine ,Asthma ,media_common ,Quality Indicators, Health Care ,Quality of Health Care ,OUTCOMES ,business.industry ,Medical record ,General Medicine ,ASSOCIATION ,CARE ,medicine.disease ,Bronchiolitis ,Family medicine ,Closed head injury ,Disease Progression ,business ,Emergency Service, Hospital ,Acute asthma exacerbation - Abstract
Purpose A lack of quality measures aligned with residents' work led to the development of resident-sensitive quality measures (RSQMs). This study sought to describe how often residents complete RSQMs, both individually and collectively, when they are implemented in the clinical environment. Method During academic year 2017-2018, categorical pediatric residents in the Cincinnati Children's Hospital Medical Center pediatric emergency department were assessed using RSQMs for acute asthma exacerbation (21 RSQMs), bronchiolitis (23 RSQMs), and closed head injury (19 RSQMs). Following eligible patient encounters, all individual RSQMs for the illnesses of interest were extracted from the health record. Frequencies of 3 performance classifications (opportunity and met, opportunity and not met, or no opportunity) were detailed for each RSQM. A composite score for each encounter was calculated by determining the proportion of individual RSQMs performed out of the total possible RSQMs that could have been performed. Results Eighty-three residents cared for 110 patients with asthma, 112 with bronchiolitis, and 77 with closed head injury during the study period. Residents had the opportunity to meet the RSQMs in most encounters, but exceptions existed. There was a wide range in the frequency of residents meeting RSQMs in encounters in which the opportunity existed. One closed head injury measure was met in all encounters in which the opportunity existed. Across illnesses, some RSQMs were met in almost all encounters, while others were met in far fewer encounters. RSQM composite scores demonstrated significant range and variation as well: asthma: mean = 0.81 (standard deviation [SD] = 0.11) and range = 0.47-1.00; bronchiolitis: mean = 0.62 (SD = 0.12) and range = 0.35-0.91; closed head injury: mean = 0.63 (SD = 0.10) and range = 0.44-0.89. Conclusions Individually and collectively, RSQMs can distinguish variations in the tasks residents perform across patient encounters.
- Published
- 2020
25. Conditions Influencing Collaboration Among the Primary Care Disciplines as They Prepare the Future Primary Care Physician Workforce
- Author
-
Carol Carraccio, Larry A. Green, Eric J. Warm, Erin K. Thayer, Eric S. Holmboe, Patricia A. Carney, and M. Patrice Eiff
- Subjects
Medical education ,Complete data ,business.industry ,education ,Stressor ,Primary care physician ,Identity (social science) ,Primary care ,Interprofessional education ,Health care ,Workforce ,Family Practice ,business ,Psychology - Abstract
Background and Objectives: Much can be gained by the three primary care disciplines collaborating on efforts to transform residency training toward interprofessional collaborative practice. We describe findings from a study designed to align the primary care disciplines toward implementing interprofessional education. Methods: In this mixed methods study, we included faculty, residents, and other interprofessional learners in family medicine, internal medicine, and pediatrics from nine institutions across the United States. We administered a web-based survey in April/May of 2018 and used qualitative analyses of field notes to study resident exposure to team-based care during training, estimates of career choice in programs that are innovating, and the supportive and challenging conditions that influence collaboration among the three disciplines. Complete data capture was attained for 96.3% of participants. Results: Among family medicine resident graduates, an estimated 87.1% chose to go into primary care, compared to 12.4% of internal medicine and 36.5% of pediatric resident graduates. Qualitative themes found to positively influence cross-disciplinary collaboration included relationship development, communication of shared goals, alignment with health system and other institutional initiatives, and professional identity as primary care physicians. Challenges included concerns expressed by participants that, by working together, the disciplines would experience a loss of identity and become indistinguishable from one another. Another qualitative finding, and a great concern, was that overwhelming stressors plague primary care training programs in the current health care climate. These include competing demands, disruptive transitions, and lack of resources. Conclusions: Uniting the primary care disciplines in educational and clinical transformation toward interprofessional collaborative practice is challenging to accomplish.
- Published
- 2020
26. Factors Associated With Family Medicine and Internal Medicine First-Year Residents’ Ambulatory Care Training Time
- Author
-
Jung G. Kim, Hector P. Rodriguez, Stephen M. Shortell, Eric S. Holmboe, Bruce Fuller, and Diane R. Rittenhouse
- Subjects
Adult ,medicine.medical_specialty ,Time Factors ,020205 medical informatics ,education ,Training time ,Graduate medical education ,MEDLINE ,Context (language use) ,02 engineering and technology ,Environment ,Medicare ,Accreditation ,Education ,03 medical and health sciences ,0302 clinical medicine ,Ambulatory care ,Internal medicine ,Ambulatory Care ,Internal Medicine ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Humans ,030212 general & internal medicine ,Receipt ,Medicaid ,Internship and Residency ,General Medicine ,United States ,Cross-Sectional Studies ,Education, Medical, Graduate ,Family medicine ,Family Practice - Abstract
PURPOSE Despite the importance of training in ambulatory care settings for residents to acquire important competencies, little is known about the organizational and environmental factors influencing the relative amount of time primary care residents train in ambulatory care during residency. The authors examined factors associated with postgraduate year 1 (PGY-1) residents' ambulatory care training time in Accreditation Council for Graduate Medical Education (ACGME)-accredited primary care programs. METHOD U.S.-accredited family medicine (FM) and internal medicine (IM) programs' 2016-2017 National Graduate Medical Education (GME) Census data from 895 programs within 550 sponsoring institutions (representing 13,077 PGY-1s) were linked to the 2016 Centers for Medicare and Medicaid Services Cost Reports and the 2015-2016 Area Health Resource File. Multilevel regression models examined the association of GME program characteristics, sponsoring institution characteristics, geography, and environmental factors with PGY-1 residents' percentage of time spent in ambulatory care. RESULTS PGY-1 mean (standard deviation, SD) percent time spent in ambulatory care was 25.4% (SD, 0.4) for both FM and IM programs. In adjusted analyses (% increase [standard error, SE]), larger faculty size (0.03% [SE, 0.01], P < .001), sponsoring institution's receipt of Teaching Health Center (THC) funding (6.6% [SE, 2.7], P < .01), and accreditation warnings (4.8% [SE, 2.5], P < .05) were associated with a greater proportion of PGY-1 time spent in ambulatory care. Programs caring for higher proportions of Medicare beneficiaries spent relatively less time in ambulatory care (< 0.5% [SE, 0.2], P < .01). CONCLUSIONS Ambulatory care time for PGY-1s varies among ACGME-accredited primary care residency programs due to the complex context and factors primary care GME programs operate under.
Larger ACGME-accredited FM and IM programs and those receiving federal THC GME funding had relatively more PGY-1 time spent in ambulatory care settings. These findings inform policies to increase resident exposure in ambulatory care, potentially improving learning, competency achievement, and primary care access.
- Published
- 2020
27. Use of Resident-Sensitive Quality Measure Data in Entrustment Decision Making: A Qualitative Study of Clinical Competency Committee Members at One Pediatric Residency
- Author
-
Jamiu O. Busari, Cees P. M. van der Vleuten, Brad Sobolewski, Carol Carraccio, Lorelei Lingard, Abigail Martini, Sue E Poynter, Eric S. Holmboe, Daniel J. Schumacher, RS: SHE - R1 - Research (OvO), and Onderwijsontw & Onderwijsresearch
- Subjects
Male ,Educational measurement ,Faculty, Medical ,020205 medical informatics ,media_common.quotation_subject ,02 engineering and technology ,Pediatrics ,Grounded theory ,Education ,03 medical and health sciences ,0302 clinical medicine ,IMPLEMENTATION ,0202 electrical engineering, electronic engineering, information engineering ,Milestone (project management) ,Humans ,Quality (business) ,030212 general & internal medicine ,Qualitative Research ,Quality of Health Care ,media_common ,WORK ,Medical education ,Data collection ,Internship and Residency ,General Medicine ,Summative assessment ,Education, Medical, Graduate ,Grounded Theory ,Committee Membership ,Female ,Clinical Competence ,Educational Measurement ,Psychology ,Inclusion (education) ,Qualitative research - Abstract
Purpose Resident-sensitive quality measures (RSQMs) are quality measures that are likely performed by an individual resident and are important to care quality for a given illness of interest. This study sought to explore how individual clinical competency committee (CCC) members interpret, use, and prioritize RSQMs alongside traditional assessment data when making a summative entrustment decision. Method In this constructivist grounded theory study, 19 members of the pediatric residency CCC at Cincinnati Children's Hospital Medical Center were purposively and theoretically sampled between February and July 2019. Participants were provided a deidentified resident assessment portfolio with traditional assessment data (milestone and/or entrustable professional activity ratings as well as narrative comments from 5 rotations) and RSQM performance data for 3 acute, common diagnoses in the pediatric emergency department (asthma, bronchiolitis, and closed head injury) from the emergency medicine rotation. Data collection consisted of 2 phases: (1) observation and think out loud while participants reviewed the portfolio and (2) semistructured interviews to probe participants' reviews. Analysis moved from close readings to coding and theme development, followed by the creation of a model illustrating theme interaction. Data collection and analysis were iterative. Results Five dimensions for how participants interpret, use, and prioritize RSQMs were identified: (1) ability to orient to RSQMs: confusing to self-explanatory; (2) propensity to use RSQMs: reluctant to enthusiastic; (3) RSQM interpretation: requires contextualization to self-evident; (4) RSQMs for assessment decisions: not sticky to sticky; and (5) expectations for residents: potentially unfair to fair to use RSQMs. The interactions among these dimensions generated 3 RSQM data user profiles: eager incorporation, willing incorporation, and disinclined incorporation. Conclusions Participants used RSQMs to varying extents in their review of resident data and found such data helpful to varying degrees, supporting the inclusion of RSQMs as resident assessment data for CCC review.
- Published
- 2020
28. Health Systems Science in Medical Education: Unifying the Components to Catalyze Transformation
- Author
-
Jed D. Gonzalo, Anna Chang, Daniel R. Wolpaw, Michael Dekhtyar, Eric S. Holmboe, and Stephanie R. Starr
- Subjects
020205 medical informatics ,Social Determinants of Health ,02 engineering and technology ,Population health ,Education ,03 medical and health sciences ,Professional Competence ,0302 clinical medicine ,Political science ,Health care ,0202 electrical engineering, electronic engineering, information engineering ,Humans ,Systems thinking ,030212 general & internal medicine ,Social determinants of health ,Curriculum ,Accreditation ,Medical education ,Education, Medical ,Population Health ,business.industry ,Core competency ,General Medicine ,Quality Improvement ,United States ,Systems science ,business ,Delivery of Health Care ,Medical Informatics - Abstract
Medical education exists in the service of patients and communities and must continually calibrate its focus to ensure the achievement of these goals. To close gaps in U.S. health outcomes, medical education is steadily evolving to better prepare providers with the knowledge and skills to lead patient- and systems-level improvements. Systems-related competencies, including high-value care, quality improvement, population health, informatics, and systems thinking, are needed to achieve this but are often curricular islands in medical education, dependent on local context, and have lacked a unifying framework. The third pillar of medical education, health systems science (HSS), complements the basic and clinical sciences and integrates the full range of systems-related competencies. Despite the movement toward HSS, there remains uncertainty and significant inconsistency in the application of HSS concepts and nomenclature within health care and medical education. In this article, the authors (1) explore the historical context of several key systems-related competency areas; (2) describe HSS and highlight a schema crosswalk between HSS and systems-related national competency recommendations, accreditation standards, national and local curricula, educator recommendations, and textbooks; and (3) articulate 6 rationales for the use and integration of a broad HSS framework within medical education. These rationales include: (1) ensuring core competencies are not marginalized, (2) accounting for related and integrated competencies in curricular design, (3) providing the foundation for comprehensive assessments and evaluations, (4) providing a clear learning pathway for the undergraduate-graduate-workforce continuum, (5) facilitating a shift toward a national standard, and (6) catalyzing a new professional identity as systems citizens.
Continued movement toward a cohesive framework will better align the clinical and educational missions by cultivating the next generation of systems-minded health care professionals.
- Published
- 2020
29. Entrustment decisions: Implications for curriculum development and assessment
- Author
-
Ara Tekian, Eric S. Holmboe, Trudie Roberts, Olle ten Cate, and John J. Norcini
- Subjects
Education, Medical ,020205 medical informatics ,Appeal ,Internship and Residency ,Professional practice ,02 engineering and technology ,General Medicine ,Trust ,Competency-Based Education ,Education ,Test (assessment) ,03 medical and health sciences ,0302 clinical medicine ,Work (electrical) ,Blueprint ,0202 electrical engineering, electronic engineering, information engineering ,Curriculum development ,Humans ,Engineering ethics ,Clinical Competence ,Curriculum ,030212 general & internal medicine ,Psychology ,Postgraduate level - Abstract
With increased interest in the use of entrustable professional activities (EPAs) in undergraduate and postgraduate medical education come questions about their implications for curriculum development and assessment. This paper addresses some of those questions, discussed at a symposium of the 2017 AMEE conference, by presenting the components of an EPA, describing their importance and application, identifying their implications for assessment, and pinpointing some of the challenges they pose in undergraduate and postgraduate settings. It defines entrustment, describes the three levels of trust, and presents the trainee and supervisor factors that influence it, as well as its perceived benefits and risks. Two aspects of EPAs have implications for assessment: units of professional practice and decisions based on entrustment, which impact an assessment's blueprint, test methods, scores, and standards. In an undergraduate setting EPAs have great appeal, but work is needed to identify and develop a robust assessment system for core EPAs. At the postgraduate level, there is tension between the granularity of the competencies and the integrated nature of the EPAs. Even though work remains, EPAs offer an important step in the evolution of competency-based education.
- Published
- 2020
30. Comparison of Male and Female Resident Milestone Assessments During Emergency Medicine Residency Training
- Author
-
Stanley J. Hamstra, Eric S. Holmboe, Lalena M. Yarris, Kenji Yamazaki, and Sally A. Santen
- Subjects
medicine.medical_specialty ,020205 medical informatics ,business.industry ,Multilevel model ,Graduate medical education ,MEDLINE ,Research Reports ,Regression analysis ,02 engineering and technology ,General Medicine ,Education ,03 medical and health sciences ,0302 clinical medicine ,Emergency medicine ,0202 electrical engineering, electronic engineering, information engineering ,Milestone (project management) ,Medicine ,030212 general & internal medicine ,business ,Residency training ,Accreditation ,Graduation - Abstract
Purpose A previous study found that milestone ratings at the end of training were higher for male than for female residents in emergency medicine (EM). However, that study was restricted to a sample of 8 EM residency programs and used individual faculty ratings from milestone reporting forms that were designed for use by the program's Clinical Competency Committee (CCC). The objective of this study was to investigate whether similar results would be found when examining the entire national cohort of EM milestone ratings reported by programs after CCC consensus review. Method This study examined longitudinal milestone ratings for all EM residents (n = 1,363; 125 programs) reported to the Accreditation Council for Graduate Medical Education every 6 months from 2014 to 2017. A multilevel linear regression model was used to estimate differences in slope for all subcompetencies, and predicted marginal means between genders were compared at time of graduation. Results There were small but statistically significant differences between males' and females' increase in ratings from initial rating to graduation on 6 of the 22 subcompetencies. Marginal mean comparisons at time of graduation demonstrated gender effects for 4 patient care subcompetencies. For these subcompetencies, males were rated as performing better than females; differences ranged from 0.048 to 0.074 milestone ratings. Conclusions In this national dataset of EM resident milestone assessments by CCCs, males and females were rated similarly at the end of their training for the majority of subcompetencies. Statistically significant but small absolute differences were noted in 4 patient care subcompetencies.
- Published
- 2020
31. Building the Bridge to Quality
- Author
-
Emma Vaux, Eric S. Holmboe, Eric J. Warm, Fiona Moss, Greg Ogrinc, Kaveh G. Shojania, Linda A. Headrick, Jason R. Frank, Brian M. Wong, and Karyn D. Baum
- Subjects
Canada ,Consensus ,Quality management ,020205 medical informatics ,Process (engineering) ,media_common.quotation_subject ,MEDLINE ,International Educational Exchange ,02 engineering and technology ,Bridge (nautical) ,Education ,03 medical and health sciences ,Patient safety ,0302 clinical medicine ,Physicians ,0202 electrical engineering, electronic engineering, information engineering ,Humans ,Learning ,Quality (business) ,Patient Reported Outcome Measures ,030212 general & internal medicine ,Road map ,Clinical care ,media_common ,Ontario ,Surgeons ,Medical education ,Standard of Care ,General Medicine ,Quality Improvement ,Health Occupations ,Clinical Competence ,Patient Safety ,Psychology ,Delivery of Health Care - Abstract
Current models of quality improvement and patient safety (QIPS) education are not fully integrated with clinical care delivery, representing a major impediment toward achieving widespread QIPS competency among health professions learners and practitioners. The Royal College of Physicians and Surgeons of Canada organized a 2-day consensus conference in Niagara Falls, Ontario, Canada, called Building the Bridge to Quality, in September 2016. Its goal was to convene an international group of educational and health system leaders, educators, frontline clinicians, learners, and patients to engage in a consensus-building process and generate a list of actionable strategies that individuals and organizations can use to better integrate QIPS education with clinical care. Four strategic directions emerged: prioritize the integration of QIPS education and clinical care, build structures and implement processes to integrate QIPS education and clinical care, build capacity for QIPS education at multiple levels, and align educational and patient outcomes to improve quality and patient safety. Individuals and organizations can refer to the specific tactics associated with the 4 strategic directions to create a road map of targeted actions most relevant to their organizational starting point. To achieve widespread change, collaborative efforts and alignment of intrinsic and extrinsic motivators are needed on an international scale to shift the culture of educational and clinical environments and build bridges that connect training programs and clinical environments, align educational and health system priorities, and improve both learning and care, with the ultimate goal of achieving improved outcomes and experiences for patients, their families, and communities.
- Published
- 2020
32. Using Longitudinal Milestones Data and Learning Analytics to Facilitate the Professional Development of Residents
- Author
-
Stanley J. Hamstra, Thomas J. Nasca, Kenji Yamazaki, and Eric S. Holmboe
- Subjects
medicine.medical_specialty ,020205 medical informatics ,Specialty ,MEDLINE ,02 engineering and technology ,Accreditation ,Education ,Formative assessment ,03 medical and health sciences ,0302 clinical medicine ,Predictive Value of Tests ,Internal Medicine ,0202 electrical engineering, electronic engineering, information engineering ,Milestone (project management) ,Humans ,Learning ,Medicine ,Longitudinal Studies ,030212 general & internal medicine ,business.industry ,Professional development ,Internship and Residency ,Research Reports ,General Medicine ,Probability Theory ,Competency-Based Education ,United States ,Predictive value of tests ,Family medicine ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,Emergency Medicine ,Predictive power ,Family Practice ,business ,Graduation - Abstract
Supplemental Digital Content is available in the text. Purpose To investigate the effectiveness of using national, longitudinal milestones data to provide formative assessments to identify residents at risk of not achieving recommended competency milestone goals by residency completion. The investigators hypothesized that specific, lower milestone ratings at earlier time points in residency would be predictive of not achieving recommended Level (L) 4 milestones by graduation. Method In 2018, the investigators conducted a longitudinal cohort study of emergency medicine (EM), family medicine (FM), and internal medicine (IM) residents who completed their residency programs from 2015 to 2018. They calculated predictive values and odds ratios, adjusting for nesting within programs, for specific milestone rating thresholds at 6-month intervals for all subcompetencies within each specialty. They used final milestones ratings (May–June 2018) as the outcome variables, setting L4 as the ideal educational outcome. Results The investigators included 1,386 (98.9%) EM residents, 3,276 (98.0%) FM residents, and 7,399 (98.0%) IM residents in their analysis. The percentage of residents not reaching L4 by graduation ranged from 11% to 31% in EM, 16% to 53% in FM, and 5% to 15% in IM. Using a milestone rating of L2.5 or lower at the end of post-graduate year 2, the predictive probability of not attaining the L4 milestone graduation goal ranged from 32% to 56% in EM, 32% to 67% in FM, and 15% to 36% in IM. Conclusions Longitudinal milestones ratings may provide educationally useful, predictive information to help individual residents address potential competency gaps, but the predictive power of the milestones ratings varies by specialty and subcompetency within these 3 adult care specialties.
- Published
- 2020
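The predictive values and odds ratios described in the abstract above can be illustrated with a minimal sketch over a 2×2 table of early-rating flags versus graduation outcomes. The counts below are hypothetical, for illustration only; they are not the study's data, and the real analysis also adjusted for nesting of residents within programs.

```python
def predictive_value(flagged_not_l4: int, flagged_total: int) -> float:
    """Predictive probability that a resident flagged early (e.g., rated <= L2.5)
    does not reach the L4 milestone goal by graduation."""
    return flagged_not_l4 / flagged_total

def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Cross-product odds ratio for a 2x2 table:
    a = flagged & not L4, b = flagged & reached L4,
    c = not flagged & not L4, d = not flagged & reached L4."""
    return (a * d) / (b * c)

# Hypothetical counts: 100 flagged residents, 40 of whom miss L4;
# 900 unflagged residents, 100 of whom miss L4.
print(predictive_value(40, 100))        # → 0.4
print(odds_ratio(40, 60, 100, 800))     # ≈ 5.33
```

Under these made-up counts, a flagged resident has a 40% predictive probability of missing L4, consistent in spirit with the 15%–67% range the study reports across specialties and subcompetencies.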
33. The Use of Learning Analytics to Enable Detection of Underperforming Trainees: An Analysis of National Vascular Surgery Trainee ACGME Milestones Assessment Data
- Author
-
Kenji Yamazaki, Ara Tekian, Erica L. Mitchell, Stanley J. Hamstra, Yoon Soo Park, Brigitte K. Smith, and Eric S. Holmboe
- Subjects
medicine.medical_specialty ,Medical education ,Assessment data ,business.industry ,Learning analytics ,medicine ,Surgery ,Vascular surgery ,business - Abstract
This study aims to investigate at-risk scores of semiannual Accreditation Council for Graduate Medical Education (ACGME) Milestone ratings for vascular surgical trainees' final achievement of competency targets. ACGME Milestones assessments have been collected since 2015 for vascular surgery. It is unclear whether milestone ratings throughout training predict achievement of recommended performance targets upon graduation. National ACGME Milestones data were utilized for analyses. All trainees completing 2-year vascular surgery fellowships (VSFs) in June 2018 and 5-year integrated vascular surgery residencies (IVSRs) in June 2019 were included. A generalized estimating equations model was used to obtain at-risk scores for each of the 31 sub-competencies by semiannual review periods, to estimate the probability of trainees achieving the recommended graduation target based on their previous ratings. 122 VSFs (95.3%) and 52 IVSRs (100%) were included. VSFs and IVSRs did not achieve level 4.0 competency targets at a rate of 1.6-25.4% across sub-competencies, which was not significantly different between the two groups for any of the sub-competencies (p = 0.161-0.999). Trainees were found to be at greater risk of not achieving competency targets when lower milestone ratings were assigned, and at later time-points in training. At a milestone rating of ≤ 2.5, with one year remaining prior to graduation, the at-risk score for not achieving the target level 4.0 milestone ranged from 2.9% to 77.9% for VSFs and from 33.3% to 75.0% for IVSRs. The ACGME Milestones provide early diagnostic and predictive information for vascular surgery trainees' achievement of competence at completion of training.
- Published
- 2022
34. The Power of Contribution and Attribution in Assessing Educational Outcomes for Individuals, Teams, and Programs
- Author
-
Dana Sall, Eric J. Warm, Daniel J. Schumacher, Carol Carraccio, Jamiu O. Busari, Cees P. M. van der Vleuten, Matthew Kelleher, Benjamin Kinnear, Eric Dornoff, Eric S. Holmboe, Abigail Martini, RS: SHE - R1 - Research (OvO), and Onderwijsontw & Onderwijsresearch
- Subjects
Male ,Program evaluation ,020205 medical informatics ,IMPACT ,Nurses ,02 engineering and technology ,Certification ,Outcome (game theory) ,Education ,03 medical and health sciences ,0302 clinical medicine ,PATIENT OUTCOMES ,Physicians ,Outcome Assessment, Health Care ,MEDICAL-EDUCATION ,Health care ,0202 electrical engineering, electronic engineering, information engineering ,Humans ,030212 general & internal medicine ,Quality of Health Care ,Medical education ,Education, Medical ,business.industry ,General Medicine ,Emergency department ,Competency-Based Education ,Patient Discharge ,Personal development ,Incentive ,Female ,Clinical Competence ,Educational Measurement ,Emergency Service, Hospital ,business ,Attribution ,Psychology ,Delivery of Health Care ,Program Evaluation - Abstract
Recent discussions have brought attention to the utility of contribution analysis for evaluating the effectiveness and outcomes of medical education programs, especially for complex initiatives such as competency-based medical education. Contribution analysis focuses on the extent to which different entities contribute to an outcome. Given that health care is provided by teams, contribution analysis is well suited to evaluating the outcomes of care delivery. Furthermore, contribution analysis plays an important role in analyzing program- and system-level outcomes that inform program evaluation and program-level improvements for the future. Equally important in health care, however, is the role of the individual. In the overall contribution of a team to an outcome, some aspects of this outcome can be attributed to individual team members. For example, a recently discharged patient with an unplanned return to the emergency department to seek care may not have understood the discharge instructions given by the nurse or may not have received any discharge guidance from the resident physician. In this example, if it is the nurse's responsibility to provide discharge instructions, that activity is attributed to him or her. This and other activities attributed to different individuals (e.g., nurse, resident) combine to contribute to the outcome for the patient. Determining how to tease out such attributions is important for several reasons. First, it is physicians, not teams, that graduate and are granted certification and credentials for medical practice. Second, incentive-based payment models focus on the quality of care provided by an individual. Third, an individual can use data about his or her performance on the team to help drive personal improvement.
In this article, the authors explore how attribution and contribution analyses can be used in a complementary fashion to discern which outcomes can and should be attributed to individuals, which to teams, and which to programs.
- Published
- 2019
35. Racial and Ethnic Differences in Internal Medicine Residency Assessments
- Author
-
Dowin, Boatright, Nientara, Anderson, Jung G, Kim, Eric S, Holmboe, William A, McDade, Tonya, Fancher, Cary P, Gross, Sarwat, Chaudhry, Mytien, Nguyen, Max Jordan, Nguemeni Tiako, Eve, Colson, Yunshan, Xu, Fangyong, Li, James D, Dziura, and Somnath, Saha
- Subjects
General Medicine - Abstract
Importance: Previous studies have demonstrated racial and ethnic inequities in medical student assessments, awards, and faculty promotions at academic medical centers. Few data exist about similar racial and ethnic disparities at the level of graduate medical education. Objective: To examine the association between race and ethnicity and performance assessments among a national cohort of internal medicine residents. Design, Setting, and Participants: This retrospective cohort study evaluated assessments of performance for 9026 internal medicine residents from the graduating classes of 2016 and 2017 at Accreditation Council for Graduate Medical Education (ACGME)–accredited internal medicine residency programs in the US. Analyses were conducted between July 1, 2020, and June 30, 2022. Main Outcomes and Measures: The primary outcome was midyear and year-end total ACGME Milestone scores for underrepresented in medicine (URiM [Hispanic only; non-Hispanic American Indian, Alaska Native, or Native Hawaiian/Pacific Islander only; or non-Hispanic Black/African American]) and Asian residents compared with White residents as determined by their Clinical Competency Committees and residency program directors. Differences in scores between Asian and URiM residents compared with White residents were also compared for each of the 6 competency domains as supportive outcomes. Results: The study cohort included 9026 residents from 305 internal medicine residency programs. Of these residents, 3994 (44.2%) were female, 3258 (36.1%) were Asian, 1216 (13.5%) were URiM, and 4552 (50.4%) were White.
In the fully adjusted model, no difference was found in the initial midyear total Milestone scores between URiM and White residents, but there was a difference between Asian and White residents, which favored White residents (mean [SD] difference in scores for Asian residents: −1.27 [0.38]). Conclusions and Relevance: In this cohort study, URiM and Asian internal medicine residents received lower ratings on performance assessments than their White peers during the first and second years of training, which may reflect racial bias in assessment. This disparity in assessment may limit opportunities for physicians from minoritized racial and ethnic groups and hinder physician workforce diversity.
- Published
- 2022
36. Competency-based medical education across the continuum: How well aligned are medical school EPAs to residency milestones?
- Author
-
Eric S. Holmboe, Sally A. Santen, William F. Iobst, and Michael S. Ryan
- Subjects
Medical education ,Continuum (measurement) ,Graduate medical education ,Medical school ,Internship and Residency ,Context (language use) ,General Medicine ,Competency-Based Education ,Education ,Patient safety ,Education, Medical, Graduate ,Humans ,Clinical Competence ,Educational Measurement ,Psychology ,Schools, Medical ,Education, Medical, Undergraduate - Abstract
INTRODUCTION Competency-based medical education (CBME) provides a framework for describing learner progression throughout training. However, specific approaches to CBME implementation vary widely across educational settings. Alignment between the various methods used across the continuum is critical to support transitions and assess learner performance. The purpose of this study was to investigate alignment between CBME frameworks used in undergraduate medical education (UME) and graduate medical education (GME) settings using the US context as a model. METHOD The authors analyzed content from the core entrustable professional activities for entering residency (Core EPAs; UME model) and residency milestones (GME model). From that analysis, they performed a series of cross-walk activities to investigate alignment between the frameworks. After independent review, the authors discussed findings until consensus was reached. RESULTS Some alignment was found for activities associated with history taking, physical examination, differential diagnosis, patient safety, and interprofessional care; however, there were far more examples of misalignment. CONCLUSIONS These findings highlight the challenges of creating alignment between assessment frameworks across the continuum of training and have implications for assessment and for the persistence of the educational gap between UME and GME. The authors provide four next steps to improve the continuum of education.
- Published
- 2021
37. In Reply to Ibrahim and Abdel-Razig
- Author
-
Michael S, Ryan, Eric S, Holmboe, and Subani, Chandra
- Subjects
General Medicine ,Education - Published
- 2022
38. An Empirical Investigation Into Milestones Factor Structure Using National Data Derived From Clinical Competency Committees
- Author
-
Stanley J. Hamstra, Kenji Yamazaki, and Eric S. Holmboe
- Subjects
Medical education ,Specialty ,Internship and Residency ,General Medicine ,Confirmatory factor analysis ,Education ,Accreditation ,Obstetrics and gynaecology ,Ambulatory care ,Sample size determination ,Education, Medical, Graduate ,Milestone ,Humans ,Clinical Competence ,Educational Measurement ,Psychology ,Categorical variable
Purpose To investigate whether milestone data obtained from clinical competency committee (CCC) ratings in a single specialty reflected the 6 general competency domains framework. Method The authors examined milestone ratings from all 275 U.S. Accreditation Council for Graduate Medical Education-accredited categorical obstetrics and gynecology (OBGYN) programs from July 1, 2018, to June 30, 2019. The sample size ranged from 1,371 to 1,438 residents from 275 programs across 4 postgraduate years (PGYs), each with 2 assessment periods. The OBGYN milestones reporting form consisted of 28 subcompetencies under the 6 general competency domains. Milestone ratings were determined by each program's CCC. Intraclass correlations (ICCs) and design effects were calculated for each subcompetency by PGY and assessment period. A multilevel confirmatory factor analysis (CFA) perspective was used, and the pooled within-program covariance matrix was obtained to compare the fit of the 6-domain factor model against 3 other plausible models. Results Milestone ratings from 5,618 OBGYN residents were examined. Moderate to high ICCs and design effects greater than 2.0 were prevalent among all subcompetencies for both assessment periods, warranting the use of the multilevel approach in applying CFA to the milestone data. The theory-aided split-patient care (PC) factor model, which used the 6 general competency domains but also included 3 factors within the PC domain (obstetric technical skills, gynecology technical skills, and ambulatory care), was consistently shown as the best-fitting model across all PGY by assessment period conditions, except for one. Conclusions The findings indicate that in addition to using the 6 general competency domains framework in their rating process, CCCs may have further distinguished the PC competency domain into 3 meaningful factors. 
This study provides internal structure validity evidence for the milestones within a single specialty and may shed light on CCCs' understanding of the distinctive content embedded within the milestones.
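The abstract's criterion of "design effects greater than 2.0" reflects Kish's formula, DEFF = 1 + (m − 1)·ICC, where m is the average cluster (program) size: when the design effect exceeds 2.0, program-level clustering is too strong to ignore and a multilevel model is warranted. A minimal sketch of that arithmetic, with illustrative (not study-derived) values:

```python
def design_effect(icc: float, avg_cluster_size: float) -> float:
    """Kish design effect for clustered ratings: DEFF = 1 + (m - 1) * ICC,
    where m is the average number of residents rated per program."""
    return 1.0 + (avg_cluster_size - 1.0) * icc

def warrants_multilevel(icc: float, avg_cluster_size: float,
                        threshold: float = 2.0) -> bool:
    """Rule of thumb used in the abstract: treat the data hierarchically
    when the design effect exceeds 2.0."""
    return design_effect(icc, avg_cluster_size) > threshold

# Hypothetical values: ~5 residents per program and a moderate ICC of 0.30
# give DEFF = 1 + 4 * 0.30 = 2.2, which crosses the 2.0 threshold.
print(design_effect(icc=0.30, avg_cluster_size=5.0))
print(warrants_multilevel(icc=0.30, avg_cluster_size=5.0))
```

With a small ICC (say 0.05) the same cluster size yields DEFF = 1.2, and ordinary single-level analysis would suffice; the multilevel CFA in the study is justified precisely because the observed ICCs were moderate to high.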
- Published
- 2021
39. Entrustment Unpacked: Aligning Purposes, Stakes, and Processes to Enhance Learner Assessment
- Author
-
Benjamin, Kinnear, Eric J, Warm, Holly, Caretta-Weyer, Eric S, Holmboe, David A, Turner, Cees, van der Vleuten, and Daniel J, Schumacher
- Subjects
Formative Feedback ,Education, Medical, Graduate ,Humans ,Learning ,Clinical Competence ,Educational Measurement ,Competency-Based Education - Abstract
Educators use entrustment, a common framework in competency-based medical education, in multiple ways, including frontline assessment instruments, learner feedback tools, and group decision making within promotions or competence committees. Within these multiple contexts, entrustment decisions can vary in purpose (i.e., intended use), stakes (i.e., perceived risk or consequences), and process (i.e., how entrustment is rendered). Each of these characteristics can be conceptualized as having 2 distinct poles: (1) purpose ranges from formative to summative, (2) stakes from low to high, and (3) process from ad hoc to structured. For each characteristic, entrustment decisions often do not fall squarely at one pole or the other, but rather lie somewhere along a spectrum. While distinct, these continua can, and sometimes should, influence one another, and can be manipulated to optimally integrate entrustment within a program of assessment. In this article, the authors describe each of these continua and depict how key alignments between them can help optimize value when using entrustment in programmatic assessment within competency-based medical education. As they think through these continua, the authors begin and end with a case study to demonstrate the practical application as it might occur in the clinical learning environment.
- Published
- 2021
40. The Dissolution of the Step 2 Clinical Skills Examination and the Duty of Medical Educators to Step Up the Effectiveness of Clinical Skills Assessment
- Author
-
Jennifer R. Kogan, Eric S. Holmboe, and Karen E. Hauer
- Subjects
Medical education ,Education, Medical ,education ,MEDLINE ,General Medicine ,United States ,Education ,Accreditation ,Coproduction ,Humans ,Augmented reality ,Clinical Competence ,Educational Measurement ,Psychology ,Competence ,Duty ,Schools, Medical
In this Invited Commentary, the authors explore the implications of the dissolution of the Step 2 Clinical Skills Examination (Step 2 CS) for medical student clinical skills assessment. The authors describe the need for medical educators (at both the undergraduate and graduate levels) to work collaboratively to improve medical student clinical skills assessment to assure the public that medical school graduates have the requisite skills to begin residency training. The authors outline 6 specific recommendations for how to capitalize on the discontinuation of Step 2 CS to improve clinical skills assessment: (1) defining national, end-of-clerkship, and transition-to-residency standards for required clinical skills and for levels of competence; (2) creating a national resource for standardized patient, augmented reality, and virtual reality assessments; (3) improving workplace-based assessment through local collaborations and national resources; (4) improving learner engagement in and coproduction of assessments; (5) requiring, as a new standard for accreditation, medical schools to establish and maintain competency committees; and (6) establishing a national registry of assessment data for research and evaluation. Together, these actions will help the medical education community earn the public's trust by enhancing the rigor of assessment to ensure the mastery of skills that are essential to providing safe, high-quality care for patients.
- Published
- 2021
41. Milestones in Family Medicine: Lessons for the Specialty
- Author
-
Warren P. Newton, Eric S. Holmboe, and Deborah S. Clements
- Subjects
Family medicine ,Specialty ,Humans ,Internship and Residency ,Clinical Competence ,Family Practice ,Accreditation
- 2021
42. Expanding boundaries: a transtheoretical model of clinical reasoning and diagnostic error
- Author
-
Valerie J. Lang, Eric Wilson, Eric S. Holmboe, Steven J. Durning, Joseph J. Rencic, Colleen M. Seifert, Dario Torre, and Michelle Daniel
- Subjects
Health Policy ,Biochemistry (medical) ,Clinical Biochemistry ,Applied psychology ,Public Health, Environmental and Occupational Health ,Transtheoretical model ,Clinical reasoning ,MEDLINE ,Medicine (miscellaneous) ,Clinical Reasoning ,Cognition ,Transtheoretical Model ,Humans ,Diagnostic Errors ,Psychology - Published
- 2020
43. A National Study of Longitudinal Consistency in ACGME Milestone Ratings by Clinical Competency Committees
- Author
-
Michael S. Beeson, Sally A. Santen, Melissa A. Barton, Stanley J. Hamstra, Eric S. Holmboe, and Kenji Yamazaki
- Subjects
Educational measurement ,Urology ,Graduate medical education ,MEDLINE ,Education ,Milestone ,Humans ,Longitudinal Studies ,Competence ,Accreditation ,Multilevel model ,Internship and Residency ,Reproducibility of Results ,Research Reports ,General Medicine ,Education, Medical, Graduate ,Family medicine ,Emergency Medicine ,Multilevel Analysis ,National study ,Clinical Competence ,Educational Measurement ,Radiology ,Psychology
Supplemental Digital Content is available in the text. Purpose To investigate whether clinical competency committees (CCCs) were consistent in applying milestone ratings for first-year residents over time or whether ratings increased or decreased. Method Beginning in December 2013, the Accreditation Council for Graduate Medical Education (ACGME) initiated a phased-in requirement for reporting milestones; emergency medicine (EM), diagnostic radiology (DR), and urology (UR) were among the earliest reporting specialties. The authors analyzed CCC milestone ratings of first-year residents from 2013 to 2016 from all ACGME-accredited EM, DR, and UR programs for which they had data. The number of first-year residents in these programs ranged from 2,838 to 2,928 over this time period. The program-level average milestone rating for each subcompetency was regressed onto the time of observation using a random coefficient multilevel regression model. Results National average program-level milestone ratings of first-year residents decreased significantly over the observed time period for 32 of the 56 subcompetencies examined. None of the other subcompetencies showed a significant change. National average in-training examination scores for each of the specialties remained essentially unchanged over the time period, suggesting that differences between the cohorts were not likely an explanatory factor. Conclusions The findings indicate that CCCs tend to become more stringent or maintain consistency in their ratings of beginning residents over time. One explanation for these results is that CCCs may become increasingly comfortable in assigning lower ratings when appropriate. This finding is consistent with an increase in confidence with the milestone rating process and the quality of feedback it provides.
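The trend analysis in this abstract amounts to asking whether program-level average ratings drift across reporting periods. A simplified sketch using ordinary least squares follows; the study itself used a random coefficient multilevel regression, and the ratings below are hypothetical values chosen only to illustrate what a negative slope looks like.

```python
def slope(xs, ys):
    """Ordinary least-squares slope of y on x: a simplified, single-program
    stand-in for the random coefficient multilevel model used in the study."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

# Hypothetical program-level average first-year milestone ratings across
# six semiannual reporting periods (2013-2016). A negative slope indicates
# increasingly stringent (or at least not inflating) CCC ratings over time.
periods = [0, 1, 2, 3, 4, 5]
avg_ratings = [3.1, 3.0, 2.95, 2.9, 2.85, 2.8]
print(round(slope(periods, avg_ratings), 3))  # → -0.057
```

In the actual model, each program gets its own intercept and slope, and the question is whether the average slope across programs differs significantly from zero; the least-squares slope above is the per-program building block of that analysis.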
- Published
- 2019
44. A Feasibility Study to Attribute Patients to Primary Interns on Inpatient Ward Teams Using Electronic Health Record Data
- Author
-
Jamiu O. Busari, Carol Carraccio, Daniel J. Schumacher, Danny T. Y. Wu, Dana Sall, Eric J. Warm, Karthikeyan Meganathan, Eric S. Holmboe, Lezhi Li, Daniel P. Schauer, Cees P. M. van der Vleuten, Matthew Kelleher, Benjamin Kinnear, RS: SHE - R1 - Research (OvO), and Onderwijsontw & Onderwijsresearch
- Subjects
Adult ,Male ,COMPETENCES ,health care facilities, manpower, and services ,education ,Graduate medical education ,MEDLINE ,Education ,Young Adult ,Electronic health record ,MEDICAL-EDUCATION ,Internal Medicine ,Electronic Health Records ,Humans ,health care economics and organizations ,Ohio ,Quality of Health Care ,Patient Care Team ,Internship and Residency ,General Medicine ,Quality Improvement ,Education, Medical, Graduate ,Family medicine ,Feasibility Studies ,Female ,Clinical Competence
Purpose To inform graduate medical education (GME) outcomes at the individual resident level, this study sought a method for attributing care for individual patients to individual interns based on "footprints" in the electronic health record (EHR). Method Primary interns caring for patients on an internal medicine inpatient service were recorded daily by five attending physicians of record at University of Cincinnati Medical Center in August 2017 and January 2018. These records were considered gold standard identification of primary interns. The following EHR variables were explored to determine representation of primary intern involvement in care: postgraduate year, progress note author, discharge summary author, physician order placement, and logging clicks in the patient record. These variables were turned into quantitative attributes (e.g., progress note author: yes/no), and informative attributes were selected and modeled using a decision tree algorithm. Results A total of 1,511 access records were generated; 116 were marked as having a primary intern assigned. All variables except discharge summary author displayed at least some level of importance in the models. The best model achieved 78.95% sensitivity, 97.61% specificity, and an area under the receiver operating characteristic curve of approximately 91%. Conclusions This study successfully predicted primary interns caring for patients on inpatient teams using EHR data with excellent model performance. This provides a foundation for attributing patients to primary interns for the purposes of determining patient diagnoses and complexity the interns see as well as supporting continuous quality improvement efforts in GME.
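The model's headline numbers are standard confusion-matrix quantities: sensitivity is the share of true primary-intern records the model catches, and specificity is the share of non-primary records it correctly rejects. A minimal sketch of the arithmetic; the confusion counts below are hypothetical, chosen only to show how figures like the reported 78.95% sensitivity arise, and are not the study's data.

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int):
    """Sensitivity and specificity for a binary attribution model
    (does this intern's EHR 'footprint' mark them as the primary intern?)."""
    sensitivity = tp / (tp + fn)   # true positives among actual primaries
    specificity = tn / (tn + fp)   # true negatives among non-primaries
    return sensitivity, specificity

# Hypothetical confusion counts for illustration only.
sens, spec = binary_metrics(tp=90, fp=33, tn=1362, fn=24)
print(round(100 * sens, 2), round(100 * spec, 2))
```

The asymmetry in the study's metrics (high specificity, lower sensitivity) is typical when positives are rare, as here, where only 116 of 1,511 access records had a primary intern assigned.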
- Published
- 2019
45. Developing Resident-Sensitive Quality Measures
- Author
-
Daniel J. Schumacher, Abigail Martini, Eric S. Holmboe, Carol Carraccio, Cees P. M. van der Vleuten, Jamiu O. Busari, Kartik Varadarajan, RS: SHE - R1 - Research (OvO), and Onderwijsontw & Onderwijsresearch
- Subjects
Delphi Technique ,ERRORS ,Pediatrics ,Documentation ,PATIENT OUTCOMES ,PROGRAMS ,Stakeholder Participation ,Contextual variable ,Head Injuries, Closed ,quality care ,MEDICAL-EDUCATION ,Humans ,PERSPECTIVE ,outcomes-based assessment ,Medical education ,FOCUS ,Disease Management ,Internship and Residency ,Nominal group ,Focus Groups ,Priority areas ,Focus group ,Asthma ,Hospital medicine ,resident assessment ,Education, Medical, Graduate ,Pediatrics, Perinatology and Child Health ,General pediatrics ,Bronchiolitis ,Clinical Competence ,Educational Measurement ,Psychology
OBJECTIVE: Despite the need for quality measures relevant to the work residents complete, few attempts have been made to address this gap. Resident-sensitive quality measures (RSQMs) can help fill this void. This study engaged resident and supervisor stakeholders to develop and inform next steps in creating such measures.METHODS: Two separate nominal group techniques (NGTs), one with residents and one with faculty and fellow supervisors, were used to generate RSQMs for 3 specific illnesses (asthma, bronchiolitis, and closed head injury) as well as general care for the pediatric emergency department. Two separate Delphi processes were then used to prioritize identified RSQMs. The measures produced by each group were compared side by side, illuminating similarities and differences that were explored through focus groups with residents and supervisors. These focus groups also probed future settings in which to develop RSQMs.RESULTS: In the NGT and Delphi groups, residents and supervisors placed considerable focus on measures in 3 areas across the illnesses of interest: 1) appropriate medication dosing, 2) documentation, and 3) information provided at patient discharge. Focus groups highlighted hospital medicine and general pediatrics as priority areas for developing future RSQMs but also noted contextual variables that influence the application of similar measures in different settings. Residents and supervisors had both similar as well as unique insights into developing RSQMs.CONCLUSIONS: This study continues to pave the path forward in developing future RSQMs by exploring specific settings, measures, and stakeholders to consider when undertaking this work.
- Published
- 2019
46. Managing tensions in assessment
- Author
-
Cees P. M. van der Vleuten, Marjan J. B. Govaerts, Eric S. Holmboe, RS: SHE - R1 - Research (OvO), and Onderwijsontw & Onderwijsresearch
- Subjects
PERCEPTIONS ,WORKPLACE-BASED ASSESSMENT ,Context (language use) ,STUDENTS ,Assessment ,Education ,Formative assessment ,CULTURE ,Leverage (negotiation) ,Perception ,Health care ,MEDICAL-EDUCATION ,Learning ,Sociology ,VALIDITY ,LICENSING EXAMINATIONS ,FEEDBACK ,General Medicine ,Health professions ,Summative assessment ,PARADOX ,Accountability ,SUMMATIVE ASSESSMENT
Context In health professions education, assessment systems are bound to be rife with tensions as they must fulfil formative and summative assessment purposes, be efficient and effective, and meet the needs of learners and education institutes, as well as those of patients and health care organisations. The way we respond to these tensions determines the fate of assessment practices and reform. In this study, we argue that traditional ‘fix‐the‐problem’ approaches (i.e. either–or solutions) are generally inadequate and that we need alternative strategies to help us further understand, accept and actually engage with the multiple recurring tensions in assessment programmes. Methods Drawing from research in organisation science and health care, we outline how the Polarity Thinking™ model and its ‘both–and’ approach offer ways to systematically leverage assessment tensions as opportunities to drive improvement, rather than as intractable problems. In reviewing the assessment literature, we highlight and discuss exemplars of specific assessment polarities and tensions in educational settings. Using key concepts and principles of the Polarity Thinking™ model, and two examples of common tensions in assessment design, we describe how the model can be applied in a stepwise approach to the management of key polarities in assessment. Discussion Assessment polarities and tensions are likely to surface with the continued rise of complexity and change in education and health care organisations. With increasing pressures of accountability in times of stretched resources, assessment tensions and dilemmas will become more pronounced. We propose to add to our repertoire of strategies for managing key dilemmas in education and assessment design through the adoption of the polarity framework. 
Its ‘both–and’ approach may advance our efforts to transform assessment systems to meet complex 21st century education, health and health care needs. The authors argue that traditional ‘either‐or’ solutions to assessment problems need to be replaced with ‘both‐and’ strategies if we are to advance our efforts to transform assessment in the era of competency‐based education.
- Published
- 2019
47. Advancing Workplace-Based Assessment in Psychiatric Education: Key Design and Implementation Issues
- Author
-
John Q, Young, Jason R, Frank, and Eric S, Holmboe
- Subjects
Education, Medical, Graduate ,Humans ,Learning ,Clinical Competence ,Workplace ,Competency-Based Education - Abstract
With the adoption of competency-based medical education, assessment has shifted from the traditional classroom domains of knows and knows how to the workplace domain of doing. This workplace-based assessment has 2 purposes: assessment of learning (summative feedback) and assessment for learning (formative feedback). What the trainee does becomes the basis for identifying growth edges and determining readiness for advancement and ultimately independent practice. High-quality workplace-based assessment programs require thoughtful choices about the framework of assessment, the tools themselves, the platforms used, and the contexts in which the assessments take place, with an emphasis on direct observation.
- Published
- 2021
48. Longitudinal Milestone Assessment Extending Through Subspecialty Training: The Relationship Between ACGME Internal Medicine Residency Milestones and Subsequent Pulmonary and Critical Care Fellowship Milestones
- Author
-
Kenji Yamazaki, Joshua L. Denson, Lekshmi Santhosh, Tisha Wang, Eric S. Holmboe, Janae K. Heath, Alison S. Clay, and W. Graham Carlos
- Subjects
Adult ,Male ,Critical Care ,education ,MEDLINE ,Graduate medical education ,Subspecialty ,Critical Care Fellowship ,Education ,Accreditation ,Cohort Studies ,Social Skills ,Internal medicine ,Individualized learning ,Milestone ,Internal Medicine ,Pulmonary Medicine ,Humans ,Fellowships and Scholarships ,health care economics and organizations ,Retrospective Studies ,Communication ,Internship and Residency ,Retrospective cohort study ,General Medicine ,Logistic Models ,Education, Medical, Graduate ,Female ,Clinical Competence ,Educational Measurement
Purpose The Accreditation Council for Graduate Medical Education (ACGME) milestones were implemented across medical subspecialties in 2015. Although milestones were proposed as a longitudinal assessment tool potentially providing opportunities for early implementation of individualized learning plans in fellowship, the association of subspecialty fellowship ratings with prior residency ratings remains unclear. This study aimed to assess the relationship between internal medicine (IM) residency milestones and pulmonary-critical care medicine (PCCM) fellowship milestones. Method A multicenter retrospective cohort analysis was conducted for all PCCM trainees enrolled in ACGME-accredited PCCM fellowship programs in 2017-2018 who had complete prior IM milestone ratings from 2014-2017. Only professionalism and interpersonal and communication skills (ICS) were included based on shared anchors between IM and PCCM milestones. Using a generalized estimating equations model, the association of PCCM milestones ≤ 2.5 during the first year of fellowship with corresponding IM subcompetencies was assessed at each time-point, nested by program. Statistical significance was determined using logistic regression. Results The study included 354 unique PCCM fellows. For both ICS and professionalism subcompetencies, fellows with higher IM ratings were less likely to obtain PCCM ratings ≤ 2.5 during the first fellowship year. Each ICS subcompetency was significantly associated with future lapses in fellowship (ICS01: β = -0.67, P = 0.003; ICS02: β = -0.70, P = 0.001; ICS03: β = -0.60, P = 0.004) at various residency timepoints. Similar associations were noted for PROF03 (β = -0.57, P = 0.007). Conclusions Findings demonstrated an association between IM milestone ratings and low milestone ratings during PCCM fellowship. IM trainees with low ratings in several professionalism and ICS subcompetencies were more likely to be rated ≤ 2.5 during their first year in PCCM fellowship. 
This highlights a potential use of longitudinal milestones to target educational gaps at the beginning of PCCM fellowship.
- Published
- 2021
49. The Transformational Path Ahead: Competency-Based Medical Education in Family Medicine
- Author
-
Eric S. Holmboe
- Subjects
Medical education ,Education, Medical ,Graduate medical education ,Equity ,Internship and Residency ,Primary care ,Competency-Based Education ,Transformational leadership ,Education, Medical, Graduate ,Family medicine ,Health care ,Humans ,Clinical Competence ,Psychology ,Family Practice
Competency-based medical education (CBME) is an outcomes-based approach that has taken root in residency training nationally and internationally. CBME explicitly places the patient, family, and community at the center of training with the primary goals of concomitantly improving both educational and clinical outcomes. Family medicine, as the foundational primary care discipline, has always embraced the importance of linking training with health system needs and performance since its inception. While CBME is no longer a new concept, full implementation of this outcomes-based approach has been daunting and challenging. Gaps in the effectiveness, safety, equity, efficiency, timeliness, and patient/family centeredness of health and health care in the United States continue to be persistent and pernicious. These gaps summon family medicine and the entire graduate medical education system to take stock of its current state and to examine how more fully embracing an outcomes-based educational approach can help to close these gaps. This article provides a brief history of the CBME movement, and more importantly, its key underlying educational principles and science. I will explore the key inflection points of progress, including the identification of core CBME components, the introduction of competency Milestones, experimental pilots of time-variable training, advancements in mastery-based learning, and advances in work-based assessment, within the context of family medicine. I will conclude with suggestions for accelerating the adoption and implementation of CBME within family medicine residency training.
- Published
- 2021
50. Purposeful Imprinting in Graduate Medical Education: Opportunities for Partnership
- Author
-
Eric S. Holmboe, Andrew Bazemore, Robert L. Phillips, and Brian C. George
- Subjects
Medical education ,Education, Medical, Graduate ,General partnership ,Graduate medical education ,MEDLINE ,Humans ,Internship and Residency ,Family Practice ,Psychology ,Imprinting (organizational theory) - Published
- 2021