68 results for "Touchie C"
Search Results
2. Entrustment Decision Making in Clinical Training
- Author
- ten Cate, O., Hart, D., Ankel, F., Busari, J., Englander, R., Glasgow, N., Holmboe, E., Iobst, W., Lovell, E., Snell, L. S., Touchie, C., Van Melle, E., and Wycliffe-Jones (University of Calgary)
- Abstract
The decision to trust a medical trainee with the critical responsibility to care for a patient is fundamental to clinical training. When carefully and deliberately made, such decisions can serve as significant stimuli for learning and also shape the assessment of trainees. Holding back entrustment decisions too much may hamper the trainee’s development toward unsupervised practice. When carelessly made, however, they jeopardize patient safety. Entrustment decision-making processes, therefore, deserve careful analysis.
- Published
- 2016
3. Assessing change in clinical teaching skills: are we up for the challenge?
- Author
- Marks MB, Wood TJ, Nuth J, Touchie C, O'Brien H, and Dugan A
- Abstract
BACKGROUND: The faculty development community has been challenged to more rigorously assess program impact and move beyond traditional outcomes of knowledge tests and self-ratings. PURPOSE: The purpose was to (a) assess our ability to measure supervisors' feedback skills as demonstrated in a clinical setting and (b) compare the results with traditional outcome measures of faculty development interventions. METHODS: A pre-post study design was used. Resident and expert ratings of supervisors' demonstrated feedback skills were compared with traditional outcomes, including a knowledge test and participant self-evaluation. RESULTS: Pre-post knowledge increased significantly (pre = 61%, post = 85%; p < .001), as did participants' self-evaluation scores (pre = 4.13, post = 4.79; p < .001). Participants' self-evaluations were moderately to poorly correlated with resident (pre r = .20, post r = .08) and expert ratings (pre r = .43, post r = -.52). Residents and experts would need to evaluate 110 and 200 participants, respectively, to reach significance. CONCLUSIONS: It is possible to measure feedback skills in a clinical setting. Although traditional outcome measures show a significant effect, demonstrating change in teaching behaviors used in practice will require larger scale studies than are typically undertaken.
- Published
- 2008
- Full Text
- View/download PDF
4. Teaching the musculoskeletal examination: are patient educators as effective as rheumatology faculty?
- Author
- Humphrey-Murto S, Smith CD, Touchie C, and Wood TJ
- Abstract
BACKGROUND: Effective education of clinical skills is essential if doctors are to meet the needs of patients with rheumatic disease, but shrinking faculty numbers have made clinical teaching difficult. A solution to this problem is to utilize patient educators. PURPOSE: This study evaluates the teaching effectiveness of patient educators compared to rheumatology faculty for the musculoskeletal (MSK) examination. METHOD: Sixty-two 2nd-year medical students were randomized to receive instruction from patient educators or faculty. Tutorial groups received instruction during three 3-hour sessions. Clinical skills were evaluated by a 9-station objective structured clinical examination (OSCE). Students completed a tutor evaluation form to assess their level of satisfaction with the process. RESULTS: Faculty-taught students received a higher overall mark (66.5% vs. 62.1%), and more patient educator-taught students failed the OSCE (5 vs. 0, p = 0.02). Students rated faculty educators higher than patient educators (4.13 vs. 3.58 on a 5-point Likert scale). CONCLUSION: Rheumatology faculty appear to be more effective teachers of the MSK physical exam than patient educators.
- Published
- 2004
- Full Text
- View/download PDF
5. Four-day incubation for detection of bacteremia using the BACTEC 9240
- Author
- Johnson, A. S., Touchie, C., Haldane, D. J., and Forward, K. R.
- Published
- 2000
- Full Text
- View/download PDF
6. Cervical cancer screening among HIV-positive women: Retrospective cohort study from a tertiary care HIV clinic
- Author
- Leece, P., Kendall, C., Touchie, C., Pottie, K., Angel, J. B., and Jaffey, J.
- Subjects
- Adult, Ontario, Vaginal Smears, Primary Health Care, Research, Uterine Cervical Neoplasms, Middle Aged, Viral Load, CD4 Lymphocyte Count, HIV Seropositivity, Humans, Female, Early Detection of Cancer, Retrospective Studies
- Abstract
OBJECTIVE: To determine the rate of cervical screening among HIV-positive women who received care at a tertiary care clinic, and to determine whether screening rates were influenced by having a primary care provider. DESIGN: Retrospective chart review. SETTING: Tertiary care outpatient clinic in Ottawa, Ont. PARTICIPANTS: HIV-positive women receiving care at the Ottawa Hospital General Campus Immunodeficiency Clinic between July 1, 2002, and June 30, 2005. MAIN OUTCOME MEASURES: Whether patients had primary care providers and whether they received cervical screening. We recorded information on patient demographics, HIV status, primary care providers, and cervical screening, including date, results, and type of health care provider ordering the screening. RESULTS: Fifty-eight percent (126 of 218) of the women had at least 1 cervical screening test during the 3-year period. Thirty-three percent (42 of 126) of the women who underwent cervical screening had at least 1 abnormal test result. The proportion of women who did not have any cervical tests performed was higher among women who did not have primary care providers (8 of 12 [67%] vs 84 of 206 [41%]; relative risk 1.6, 95% confidence interval 1.06 to 2.52; P < .05), although this group was small. CONCLUSION: Despite the high proportion of abnormal cervical screening test results among HIV-positive women, screening rates remained low. Our results support our hypothesis that women who do not have primary care providers are less likely to undergo cervical screening.
7. Stakeholder perceptions and experiences of competency-based training with entrustable professional activities (SPECTRE): protocol of a systematic review and thematic synthesis of qualitative research.
- Author
- Phung J, Cowley L, Sikora L, Humphrey-Murto S, LaDonna KA, Touchie C, and Khalife R
- Subjects
- Humans, Stakeholder Participation, Curriculum, Competency-Based Education, Qualitative Research, Systematic Reviews as Topic, Clinical Competence
- Abstract
Background: Competency-Based Medical Education (CBME) aims to align educational outcomes with the demands of modern healthcare. Entrustable Professional Activities (EPAs) serve as key tools for feedback and professional development within CBME. With the growing body of literature on EPAs, there is a need to synthesize existing research on stakeholders' experiences and perceptions to enhance understanding of the implementation and impact of EPAs. In this synthesis, we will address the following research question: How are Entrustable Professional Activities experienced and perceived by stakeholders in various healthcare settings, and what specific challenges and successes do they encounter during their implementation? Methods: Using Thomas and Harden's thematic synthesis method, we will systematically review and integrate findings from qualitative and mixed-methods research on EPAs. The process includes a purposive literature search, assessment of evidence quality, data extraction, and synthesis to combine descriptive and analytical themes. Discussion: This study aims to provide insights into the use of EPAs for competency-based education, reflecting diverse contexts and viewpoints, and identifying literature gaps. The outcomes will guide curriculum and policy development, improve educational practices, and set future research directions, ultimately aligning CBME with clinical realities. Trial Registration: Not required. Competing Interests: Declarations. Ethics approval and consent to participate: Not applicable. Consent for publication: Not applicable. Competing interests: The authors declare no competing interests. (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
8. Can all roads lead to competency? School-level effects in licensing examination scores.
- Author
- Kulasegaram K, Archibald D, Ma IB, Chahine S, Kirpalani A, Wilson C, Ross B, Cameron E, Hogenbirk J, Barber C, Burgess R, MEd EK, Touchie C, and Grierson L
- Abstract
At the foundation of research concerned with professional training is the idea of an assumed causal chain between the policies and practices of education and the eventual behaviours of those who graduate from these programs. In medicine, given the social accountability to ensure that teaching and learning give way to a health human resource that is willing and able to provide the healthcare that patients and communities need, it is of critical importance to generate evidence regarding this causal relationship. One question that medical education scholars ask regularly is the degree to which the unique features of training programs and learning environments impact trainee achievement of the intended learning outcomes. To date, this evidence has been difficult to generate because data pertaining to learners are only rarely systematically brought together across institutions or periods of training. We describe new research which leverages an inter-institutional, data-driven approach to investigate the influence of school-level factors on the licensing outcomes of medical students. Specifically, we bring together sociodemographic, admissions, and in-training assessment variables pertaining to medical trainee graduates at each of the six medical schools in Ontario, Canada into multilevel stepwise regression models that determine the degree of association between these variables and graduate performances on the Medical Council of Canada Qualifying Examinations (Part 1, n = 1097 observations; Part 2, n = 616 observations), established predictors of downstream physician performance. As part of this analysis, we include an anonymized school-level (School 1, School 2) independent variable in each of these models. Our results demonstrate that the variable most strongly associated with performance on both the first and second parts of the licensing examinations is prior academic achievement, notably clerkship performance. Ratings of biomedical knowledge were also significantly associated with the first examination, while clerkship OSCE scores and enrollment in a family medicine residency were significantly associated with Part 2. Small but significant school effects were found in both models, accounting for 4% and 2% of the variance in the first and second examinations, respectively. These findings highlight that school enrollment plays a minor role relative to individual student performance in influencing examination outcomes. Competing Interests: Declarations. Competing interests: The authors declare no competing interests. Conflict of interest: Claire Touchie was the Chief Medical Education Officer of the Medical Council of Canada (MCC) when the research was being done, and Ilona Bartman is employed by the MCC. Ethics approval: Ethics approval was provided by the following Research Ethics Boards: University of Western Health Sciences Ethics Board (11087); University of Toronto Health Sciences Ethics Boards (009134); McMaster University Health Sciences Ethics Board; University of Ottawa Research Ethics Board; and the Northern Ontario School of Medicine. Additionally, this research was governed and approved by the Data Governance Board of the Council of Ontario Faculties of Medicine. (© 2024. The Author(s), under exclusive licence to Springer Nature B.V.)
- Published
- 2024
- Full Text
- View/download PDF
9. Learning Plan Use in Undergraduate Medical Education: A Scoping Review.
- Author
- Romanova A, Touchie C, Ruller S, Kaka S, Moschella A, Zucker M, Cole V, and Humphrey-Murto S
- Subjects
- Humans, Learning, Students, Medical/psychology, Students, Medical/statistics & numerical data, Competency-Based Education/methods, Clinical Competence, Curriculum, Education, Medical, Undergraduate/methods
- Abstract
Purpose: How best to support self-regulated learning (SRL) skills development and track trainees' progress along their competency-based medical education learning trajectory is unclear. Learning plans (LPs) may be the answer; however, information on their use in undergraduate medical education (UME) is limited. This study summarizes the literature regarding LP use in UME, explores the student's role in LP development and implementation, and identifies additional research areas. Method: MEDLINE, Embase, PsycInfo, Education Source, and Web of Science databases were searched for articles published from database inception to March 6, 2024, and relevant reference lists were manually searched. The review included studies of undergraduate medical students, studies of LP use, and studies of the UME stage in any geographic setting. Data were analyzed using quantitative and qualitative content analyses. Results: The database search found 7,871 titles and abstracts, with an additional 25 found from the manual search, for a total of 7,896 articles, of which 39 met inclusion criteria. Many LPs lacked a guiding framework. LPs were associated with self-reported improvements in SRL skill development, learning structure, and learning outcomes. Barriers to their use for students and faculty were the time needed to create and implement LPs, lack of training on LP development and implementation, and lack of engagement. Facilitators included SRL skill development, LP cocreation, and guidance by a trained mentor. Identified research gaps include objective outcome measures, longitudinal impact beyond UME, a standardized framework for LP development and quality assessment, and training on SRL skills and LPs. Conclusions: This review demonstrates the variability of LP use in UME. LPs appear to have potential to support medical student education and facilitate translation of SRL skills into residency training. Successful use requires training and an experienced mentor. However, more research is required to determine whether the benefits of LPs outweigh the resources required for their use. (Copyright © 2024 the Association of American Medical Colleges.)
- Published
- 2024
- Full Text
- View/download PDF
10. Implicit versus explicit first impressions in performance-based assessment: will raters overcome their first impressions when learner performance changes?
- Author
- Wood TJ, Daniels VJ, Pugh D, Touchie C, Halman S, and Humphrey-Murto S
- Subjects
- Humans, Female, Male, Observer Variation, Adult, Video Recording, Educational Measurement/methods, Educational Measurement/standards, Clinical Competence/standards, Judgment
- Abstract
First impressions can influence rater-based judgments, but their contribution to rater bias is unclear. Research suggests raters can overcome first impressions in experimental exam contexts with explicit first impressions, but these findings may not generalize to a workplace context with implicit first impressions. The study had two aims: first, to assess whether first impressions affect raters' judgments when workplace performance changes; and second, to determine whether explicitly stating these impressions affects subsequent ratings compared to implicitly formed first impressions. Physician raters viewed six videos where learner performance either changed (Strong to Weak or Weak to Strong) or remained consistent. Raters were assigned to two groups. Group one (n = 23, Explicit) made a first impression global rating (FIGR), then scored learners using the Mini-CEX. Group two (n = 22, Implicit) scored learners at the end of the video solely with the Mini-CEX. For the Explicit group, in the Strong to Weak condition, the FIGR (M = 5.94) was higher than the Mini-CEX global rating (GR) (M = 3.02, p < .001). In the Weak to Strong condition, the FIGR (M = 2.44) was lower than the Mini-CEX GR (M = 3.96, p < .001). There was no difference between the FIGR and the Mini-CEX GR in the consistent condition (M = 6.61 and M = 6.65, respectively; p = .84). There were no statistically significant differences in any of the conditions when comparing the two groups' Mini-CEX GRs. Therefore, raters adjusted their judgments based on the learners' performances. Furthermore, raters who made their first impressions explicit showed similar rater bias to raters who followed a more naturalistic process. (© 2023. The Author(s), under exclusive licence to Springer Nature B.V.)
- Published
- 2024
- Full Text
- View/download PDF
11. Protocol for a scoping review study on learning plan use in undergraduate medical education.
- Author
- Romanova A, Touchie C, Ruller S, Cole V, and Humphrey-Murto S
- Subjects
- Humans, Clinical Competence, Competency-Based Education/methods, Scoping Reviews as Topic, Education, Medical, Undergraduate/methods, Learning
- Abstract
Background: The current paradigm of competency-based medical education and learner-centredness requires learners to take an active role in their training. However, deliberate and planned continual assessment and performance improvement is hindered by the fragmented nature of many medical training programs. Attempts to bridge this continuity gap between supervision and feedback through learner handover have been controversial. Learning plans are an alternate educational tool that helps trainees identify their learning needs and facilitates longitudinal assessment by providing supervisors with a roadmap of their goals. Informed by self-regulated learning theory, learning plans may be the answer to tracking trainees' progress along their learning trajectory. The purpose of this study is to summarise the literature regarding learning plan use specifically in undergraduate medical education and explore the student's role in all stages of learning plan development and implementation. Methods: Following Arksey and O'Malley's framework, a scoping review will be conducted to explore the use of learning plans in undergraduate medical education. Literature searches will be conducted using multiple databases by a librarian with expertise in scoping reviews. Through an iterative process, inclusion and exclusion criteria will be developed and a data extraction form refined. Data will be analysed using quantitative and qualitative content analyses. Discussion: By summarising the literature on learning plan use in undergraduate medical education, this study aims to better understand how to support self-regulated learning in undergraduate medical education. The results from this project will inform future scholarly work in competency-based medical education at the undergraduate level and have implications for improving feedback and supporting learners at all levels of competence. Scoping Review Registration: Open Science Framework osf.io/wvzbx. (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
12. Data sharing and big data in health professions education: Ottawa consensus statement and recommendations for scholarship.
- Author
- Kulasegaram KM, Grierson L, Barber C, Chahine S, Chou FC, Cleland J, Ellis R, Holmboe ES, Pusic M, Schumacher D, Tolsgaard MG, Tsai CC, Wenghofer E, and Touchie C
- Subjects
- Humans, Consensus, Information Dissemination, Big Data, Health Occupations/education
- Abstract
Changes in digital technology, increasing volumes of data collection, and advances in methods have the potential to unleash the value of big data generated through the education of health professionals. Coupled with this potential are legitimate concerns about how data can be used or misused in ways that limit autonomy, limit equity, or harm stakeholders. This consensus statement is intended to address these issues by foregrounding the ethical imperatives for engaging with big data as well as the potential risks and challenges. Recognizing the wide and ever-evolving scope of big data scholarship, we focus on foundational issues for framing and engaging in research. We ground our recommendations in the context of big data created through data sharing across and within the stages of the continuum of the education and training of health professionals. Ultimately, the goal of this statement is to support a culture of trust and quality for big data research to deliver on its promises for health professions education (HPE) and the health of society. Based on expert consensus and review of the literature, we report 19 recommendations covering (1) framing scholarship and research, (2) considering unique ethical practices, (3) governance of data sharing collaborations that engage stakeholders, (4) best practices for data sharing processes, (5) the importance of knowledge translation, and (6) advancing the quality of scholarship through multidisciplinary collaboration. The recommendations were modified and refined based on feedback from the 2022 Ottawa Conference attendees and subsequent public engagement. Adoption of these recommendations can help HPE scholars share data ethically and engage in high-impact big data scholarship, which in turn can help the field meet the ultimate goal: high-quality education that leads to high-quality healthcare.
- Published
- 2024
- Full Text
- View/download PDF
13. How much is enough? Proposing achievement thresholds for core EPAs of graduating medical students in Canada.
- Author
- Harvey A, Paget M, McLaughlin K, Busche K, Touchie C, Naugler C, and Desy J
- Subjects
- Humans, Pandemics, Canada, Clinical Competence, Competency-Based Education/methods, Students, Medical, COVID-19/epidemiology, Internship and Residency
- Abstract
Purpose: The transition towards Competency-Based Medical Education at the Cumming School of Medicine was accelerated by the reduced clinical time caused by the COVID-19 pandemic. The purpose of this study was to define a standard protocol for setting Entrustable Professional Activity (EPA) achievement thresholds and examine their feasibility within the clinical clerkship. Methods: Achievement thresholds for each of the 12 Association of Faculties of Medicine of Canada (AFMC) EPAs for graduating Canadian medical students were set using sequential rounds of revision by three consecutive groups of stakeholders and evaluation experts. Structured communication was guided by a modified Delphi technique. The feasibility and consequences of these thresholds were then assessed by tracking EPA completion by the graduating class of 2021. Results: The threshold-setting process resulted in set EPA achievement levels ranging from 1 to 8 across the 12 AFMC EPAs. Estimates were stable after the first round for 9 of 12 EPAs. Overall, 96.27% of EPAs were successfully completed by clerkship students despite the shortened clinical period. Feasibility was predicted by the slowing rate of EPA accumulation over time during the clerkship. Conclusion: The process described led to consensus on EPA achievement thresholds. Successful completion of the assigned thresholds was feasible within the shortened clerkship.
- Published
- 2023
- Full Text
- View/download PDF
14. Family Physician Quality Improvement Plans: A Realist Inquiry Into What Works, for Whom, Under What Circumstances.
- Author
- Roy M, Lockyer J, and Touchie C
- Abstract
Introduction: Evaluation of quality improvement programs shows variable impact on physician performance and often neglects to examine how implementation varies across contexts and the mechanisms that affect uptake. Realist evaluation enables the generation, refinement, and testing of theories of change by unpacking what works, for whom, under what circumstances, and why. This study used realist methods to explore relationships between outcomes, mechanisms (resources and reasoning), and context factors of a national multisource feedback (MSF) program. Methods: Linked data for 50 physicians were examined to determine relationships between action plan completion status (outcomes); MSF ratings, MSF comments, and prescribing data (resource mechanisms); a report summarizing the conversation between a facilitator and physician (reasoning mechanism); and practice risk factors (context). Working backward from outcomes enabled exploration of similarities and differences in mechanisms and context. Results: The derived model showed that the completion status of plans was influenced by the interaction of resource and reasoning mechanisms, with context mediating the relationships. Two patterns emerged. Physicians who implemented all their plans within six months received feedback with consistent messaging, reviewed data ahead of facilitation, coconstructed plan(s) with the facilitator, and had fewer risks to competence (dyscompetence). Physicians who were unable to implement any plans had data with fewer repeated messages and did not incorporate these into plans; had plans that were difficult, needed to involve others, or were physician-led; and were at higher risk for dyscompetence. Discussion: Evaluation of quality improvement initiatives should examine program outcomes taking into consideration the interplay of resources, reasoning, and risk factors for dyscompetence. Competing Interests: Disclosures: C. Touchie is a paid consultant for the Medical Council of Canada. M. Roy and J. Lockyer have no conflict of interest. (Copyright © 2022 The Alliance for Continuing Education in the Health Professions, the Association for Hospital Medical Education, and the Society for Academic Continuing Medical Education.)
- Published
- 2023
- Full Text
- View/download PDF
15. Exploring Content Relationships Among Components of a Multisource Feedback Program.
- Author
- Roy M, Kain N, and Touchie C
- Subjects
- Humans, Feedback, Surveys and Questionnaires, Quality Improvement, Clinical Competence, Physicians
- Abstract
Introduction: A new multisource feedback (MSF) program was specifically designed to support physician quality improvement (QI) around the CanMEDS roles of Collaborator, Communicator, and Professional. Quantitative ratings and qualitative comments are collected from a sample of physician colleagues, co-workers (C), and patients (PT). These data are supplemented with self-ratings and given back to physicians in individualized reports. Each physician reviews the report with a trained feedback facilitator and creates one to three action plans for QI. This study explores how the content of the four aforementioned MSF program components supports the elicitation and translation of feedback into a QI plan for change. Methods: Data included survey items, rater comments, a portion of facilitator reports, and action plan components for 159 physicians. Word frequency queries were used to identify common words and explore relationships among data sources. Results: Overlap between high-frequency words in surveys and rater comments was substantial. The language used to describe goals in physician action plans was highly related to respondent comments, but less so to survey items. High-frequency words in facilitator reports related heavily to action plan content. Discussion: All components of the program relate to one another, indicating that each plays a part in the process. Patterns of overlap suggest unique functions conducted by program components. This demonstration of coherence across components is one piece of evidence that supports the program's validity. Competing Interests: Disclosures: The authors declare no conflict of interest. (Copyright © 2021 The Alliance for Continuing Education in the Health Professions, the Association for Hospital Medical Education, and the Society for Academic Continuing Medical Education.)
- Published
- 2022
- Full Text
- View/download PDF
16. Cancel culture: exploring the unintended consequences of cancelling the Canadian national licensing clinical examination.
- Author
- Touchie C and Pugh D
- Abstract
Assessment drives learning. However, when it comes to high-stakes examinations (e.g., for licensure or certification), these assessments of learning may be seen by some as unnecessary hurdles. Licensing clinical skills assessments in particular have come under fire over the years. Recently, assessments such as the Medical Council of Canada Qualifying Examination Part II, a clinical skills objective structured clinical examination, have been permanently cancelled. The authors explore potential consequences of this cancellation, including those that are inadvertent and undesirable. Next steps for clinical skills assessment are explored. Competing Interests: D. Pugh is a paid employee of the Medical Council of Canada. C. Touchie is a paid consultant advisor for the Medical Council of Canada. The views expressed in this manuscript are those of the authors and do not necessarily reflect the views of the Medical Council of Canada. (© 2022 Touchie, Pugh; licensee Synergies Partners.)
- Published
- 2022
- Full Text
- View/download PDF
17. Who can do this procedure? Using entrustable professional activities to determine curriculum and entrustment in anesthesiology - An international survey.
- Author
- Burkhart CS, Dell-Kuster S, and Touchie C
- Subjects
- Clinical Competence, Competency-Based Education/methods, Curriculum, Humans, Surveys and Questionnaires, Anesthesiology/education, Internship and Residency
- Abstract
Introduction: As competency-based curricula receive increasing attention in postgraduate medical education, Entrustable Professional Activities (EPAs) are gaining in popularity. The aim of this survey was to determine the use of EPAs in anesthesiology training programs across Europe and North America. Methods: A survey was developed and distributed to anesthesiology residency training program directors in Switzerland, Germany, Austria, the Netherlands, the USA, and Canada. A convergent design mixed-methods approach was used to analyze both quantitative and qualitative data. Results: The survey response rate was 38% (108 of 284). Seven percent of respondents used EPAs for making entrustment decisions. Fifty-three percent of institutions had not implemented any specific system to make such decisions. The majority of respondents agreed that EPAs should become an integral part of the training of residents in anesthesiology as they are universal and easy to use. Conclusion: Although recommended by several national societies, EPAs are used in few anesthesiology training programs. Over half of responding programs have no specific system for making entrustment decisions. Although several countries are adopting or planning to adopt EPAs and national societies recommend EPAs as a framework for their competency-based programs, few programs yet use them to make "competence" decisions.
- Published
- 2022
- Full Text
- View/download PDF
18. Written-Based Progress Testing: A Scoping Review.
- Author
- Dion V, St-Onge C, Bartman I, Touchie C, and Pugh D
- Subjects
- Humans, Delivery of Health Care, Knowledge
- Abstract
Purpose: Progress testing is an increasingly popular form of assessment in which a comprehensive test is administered to learners repeatedly over time. To inform potential users, this scoping review aimed to document barriers, facilitators, and potential outcomes of the use of written progress tests in higher education. Method: The authors followed Arksey and O'Malley's scoping review methodology to identify and summarize the literature on progress testing. They searched 6 databases (Academic Search Complete, CINAHL, ERIC, Education Source, MEDLINE, and PsycINFO) on 2 occasions (May 22, 2018, and April 21, 2020) and included articles written in English or French and pertaining to written progress tests in higher education. Two authors screened articles against the inclusion criteria (90% agreement); data extraction was then performed by pairs of authors. Using a snowball approach, the authors also screened additional articles identified from the included reference lists. They completed a thematic analysis through an iterative process. Results: A total of 104 articles were included. The majority of progress tests used a multiple-choice and/or true-or-false question format (95 [91.3%]) and were administered 4 times a year (38 [36.5%]). The most documented source of validity evidence was internal consistency (38 [36.5%]). Four major themes were identified: (1) barriers and challenges to the implementation of progress testing (e.g., need for additional resources); (2) established collaboration as a facilitator of progress testing implementation; (3) factors that increase the acceptance of progress testing (e.g., formative use); and (4) outcomes and consequences of progress test use (e.g., progress testing contributes to an increase in knowledge). Conclusions: Progress testing appears to have a positive impact on learning, and there is significant validity evidence to support its use. Although progress testing is resource- and time-intensive, strategies such as collaboration with other institutions may facilitate its use. (Copyright © 2021 by the Association of American Medical Colleges.)
- Published
- 2022
- Full Text
- View/download PDF
19. Wrestling With the Invincibility Myth: Exploring Physicians' Resistance to Wellness and Resilience-Building Interventions.
- Author
- LaDonna KA, Cowley L, Touchie C, LeBlanc VR, and Spilg EG
- Subjects
- Burnout, Psychological, Humans, Work-Life Balance, Burnout, Professional/prevention & control, Medicine, Physicians
- Abstract
Purpose: Physicians are expected to provide compassionate, error-free care while navigating systemic challenges and organizational demands. Many are burning out. While organizations are scrambling to address the burnout crisis, physicians often resist interventions aimed at enhancing their wellness and building their resilience. The purpose of this research was to empirically study this phenomenon. Method: Constructivist grounded theory was used to inform the iterative data collection and analysis process. In spring 2018, 22 faculty physicians working in Canada participated in semistructured interviews to discuss their experiences of wellness and burnout, their perceptions of wellness initiatives, and how their experiences and perceptions influence their uptake of the rapidly proliferating strategies aimed at nurturing their resilience. Themes were identified using constant comparative analysis. Results: Participants suggested that the values of compassion espoused by health care organizations do not extend to physicians, and they described feeling dehumanized by professional values steeped in an invincibility myth in which physicians are expected to be "superhuman" and "sacrifice everything" for medicine. Participants described that professional values and organizational norms impeded work-life balance, hindered personal and professional fulfillment, and discouraged disclosure of struggles. In turn, participants seemed to resist wellness and resilience-building interventions focused on fixing individuals rather than broader systemic, organizational, and professional issues. Participants perceived that efforts aimed at building individual resilience are futile without changes in professional values and sustained organizational support. Conclusions: Findings suggest that professional and organizational norms and expectations trigger feelings of dehumanization for some physicians. These feelings likely exacerbate burnout and may partly explain physicians' resistance to resilience-building strategies. Mitigating burnout and developing and sustaining a resilient physician workforce will require both individual resistance to problematic professional values and an institutional commitment to creating a culture of compassion for patients and physicians alike. (Copyright © 2022 by the Association of American Medical Colleges.)
- Published
- 2022
- Full Text
- View/download PDF
20. Are raters influenced by prior information about a learner? A review of assimilation and contrast effects in assessment.
- Author
- Humphrey-Murto S, Shaw T, Touchie C, Pugh D, Cowley L, and Wood TJ
- Subjects
- Humans, Judgment, Observer Variation, Research Personnel, Clinical Competence, Educational Measurement
- Abstract
Understanding which factors can impact rater judgments in assessments is important to ensure quality ratings. One such factor is whether prior performance information (PPI) about learners influences subsequent decision making. The information can be acquired directly, when the rater sees the same learner, or different learners, over multiple performances, or indirectly, when the rater is provided with external information about the same learner prior to rating a performance (i.e., learner handover). The purpose of this narrative review was to summarize and highlight key concepts from multiple disciplines regarding the influence of PPI on subsequent ratings, discuss implications for assessment, and provide a common conceptualization to inform research. Key findings include (a) assimilation (rater judgments are biased towards the PPI) occurs with indirect PPI and contrast (rater judgments are biased away from the PPI) with direct PPI; (b) negative PPI appears to have a greater effect than positive PPI; (c) when viewing multiple performances, context effects of indirect PPI appear to diminish over time; and (d) context effects may occur with any level of target performance. Furthermore, some raters are not susceptible to context effects, but it is unclear what factors are predictive. Rater expertise and training do not consistently reduce effects. Making raters more accountable, providing specific standards, and reducing rater cognitive load may reduce context effects. Theoretical explanations for these findings will be discussed. (© 2021. The Author(s), under exclusive licence to Springer Nature B.V. part of Springer Nature.)
- Published
- 2021
- Full Text
- View/download PDF
21. On the validity of summative entrustment decisions.
- Author
- Touchie C, Kinnear B, Schumacher D, Caretta-Weyer H, Hamstra SJ, Hart D, Gruppen L, Ross S, Warm E, and Ten Cate O
- Subjects
- Clinical Competence, Competency-Based Education, Decision Making, Humans, Trust, Internship and Residency, Physicians
- Abstract
Health care revolves around trust. Patients are often in a position that gives them no other choice than to trust the people taking care of them. Educational programs thus have the responsibility to develop physicians who can be trusted to deliver safe and effective care, ultimately making a final decision to entrust trainees to graduate to unsupervised practice. Such entrustment decisions deserve to be scrutinized for their validity. This end-of-training entrustment decision is arguably the most important one, although earlier entrustment decisions, for smaller units of professional practice, should also be scrutinized for their validity. Validity of entrustment decisions implies a defensible argument that can be analyzed in components that together support the decision. According to Kane, building a validity argument is a process designed to support inferences of scoring, generalization across observations, extrapolation to new instances, and implications of the decision. A lack of validity can be caused by inadequate evidence in terms of, according to Messick, content, response process, internal structure (coherence) and relationship to other variables, and in misinterpreted consequences. These two leading frameworks (Kane and Messick) in educational and psychological testing can be well applied to summative entrustment decision-making. The authors elaborate the types of questions that need to be answered to arrive at defensible, well-argued summative decisions regarding performance to provide a grounding for high-quality safe patient care.
- Published
- 2021
- Full Text
- View/download PDF
22. Clarifying essential terminology in entrustment.
- Author
- Schumacher DJ, Cate OT, Damodaran A, Richardson D, Hamstra SJ, Ross S, Hodgson J, Touchie C, Molgaard L, Gofton W, and Carraccio C
- Subjects
- Clinical Competence, Competency-Based Education, Education, Medical, Graduate, Prospective Studies, Retrospective Studies, Education, Medical, Undergraduate, Internship and Residency
- Abstract
With the rapid uptake of entrustable professional activities and entrustment decision-making as an approach in undergraduate and graduate education in medicine and other health professions, there is a risk of confusion in the use of new terminologies. The authors seek to clarify the use of many words related to the concept of entrustment, based on existing literature, with the aim to establish logical consistency in their use. The list of proposed definitions includes independence, autonomy, supervision, unsupervised practice, oversight, general and task-specific trustworthiness, trust, entrust(ment), entrustable professional activity, entrustment decision, entrustability, entrustment-supervision scale, retrospective and prospective entrustment-supervision scales, and entrustment-based discussion. The authors conclude that a shared understanding of the language around entrustment is critical to strengthen bridges among stages of training and practice, such as undergraduate medical education, graduate medical education, and continuing professional development. Shared language and understanding provide the foundation for consistency in interpretation and implementation across the educational continuum.
- Published
- 2021
- Full Text
- View/download PDF
23. How biased are you? The effect of prior performance information on attending physician ratings and implications for learner handover.
- Author
- Shaw T, Wood TJ, Touchie C, Pugh D, and Humphrey-Murto SM
- Subjects
- Adult, Canada, Competency-Based Education, Educational Measurement/methods, Female, Humans, Internship and Residency/standards, Male, Middle Aged, Sex Factors, Clinical Competence/standards, Educational Measurement/standards, Internship and Residency/organization & administration, Observer Variation
- Abstract
Learner handover (LH), the process of sharing information about learners between faculty supervisors, allows for the longitudinal assessment that is fundamental to the competency-based education model. However, the potential to bias future assessments has been raised as a concern. The purpose of this study was to determine whether prior performance information such as LH influences the assessment of learners in the clinical context. Between December 2017 and June 2018, forty-two faculty members and final-year residents from the Department of Medicine at the University of Ottawa were assigned to one of three study groups through quasi-randomisation, taking into account gender, speciality, and rater experience. In a counterbalanced design, each group received either positive, negative, or no LH prior to watching six simulated learner-patient encounter videos. Participants rated each video using the mini-CEX and completed a questionnaire on their general impressions of LH. A significant difference in mean mini-CEX competency scale scores between the negative (M = 5.29) and positive (M = 5.97) LH groups (P < .001, d = 0.81) was noted. Similar findings were found for the single overall clinical competence ratings. In the post-study questionnaire, 22/28 (78%) of participants had correctly deduced the purpose of the study and 14/28 (50%) felt LH did not influence their assessments. LH influenced mini-CEX scores despite raters' awareness of the potential for bias. These results suggest that LH could influence a rater's performance assessment, and careful consideration of the potential implications of LH is required.
- Published
- 2021
- Full Text
- View/download PDF
24. Will I publish this abstract? Determining the characteristics of medical education oral abstracts linked to publication.
- Author
- Guay JM, Wood TJ, Touchie C, Ta CA, and Halman S
- Abstract
Background: Prior studies have shown that most conference submissions fail to be published. Understanding factors that facilitate publication may be of benefit to authors. Using data from the Canadian Conference on Medical Education (CCME), our goal was to identify characteristics of conference submissions that predict the likelihood of publication, with a specific focus on the utility of peer-review ratings. Methods: Study characteristics (scholarship type, methodology, population, sites, institutions) from all oral abstracts from 2011-2015 and peer-review ratings for 2014-2015 were extracted by two raters. Publication data were obtained using online database searches. The impact of variables on publication success was analyzed using logistic regressions. Results: In total, 953 oral abstracts were reviewed from 2011 to 2015. Overall, the publication rate was 30.5% (291/953). Of the 531 abstracts with peer-review ratings from 2014 and 2015, 162 (31%) were published. Of the nine analyzed variables, those associated with greater odds of publication were multiple vs. single institutions (odds ratio [OR] = 1.72), post-graduate research vs. others (OR = 1.81), and peer-review ratings (OR = 1.60). Factors with decreased odds of publication were curriculum development (OR = 0.17) and innovation vs. others (OR = 0.22). Conclusion: Similar to other studies, the publication rate of CCME presentations is low. However, peer ratings were predictive of publication success, suggesting that ratings could be a useful form of feedback to authors. Competing Interests: Conflicts of interest: Timothy J. Wood, Claire Touchie, and Samantha Halman are current or past members of the CCME scientific planning committee, but no other conflicts are identified. Conference abstract selection is a multi-rater process and thus none of the authors are uniquely responsible for abstract selection. (© 2020 Guay, Wood, Touchie, Ta, Halman; licensee Synergies Partners.)
- Published
- 2020
- Full Text
- View/download PDF
25. The Influence of Prior Performance Information on Ratings of Current Performance and Implications for Learner Handover: A Scoping Review.
- Author
- Humphrey-Murto S, LeBlanc A, Touchie C, Pugh D, Wood TJ, Cowley L, and Shaw T
- Subjects
- Educational Measurement/methods, Humans, Motivation, Time Factors, Work Performance/standards, Educational Measurement/standards, Observer Variation, Work Performance/education
- Abstract
Purpose: Learner handover (LH) is the sharing of information about trainees between faculty supervisors. This scoping review aimed to summarize key concepts across disciplines surrounding the influence of prior performance information (PPI) on current performance ratings and implications for LH in medical education., Method: The authors used the Arksey and O'Malley framework to systematically select and summarize the literature. Cross-disciplinary searches were conducted in six databases in 2017-2018 for articles published after 1969. To represent PPI relevant to LH in medical education, eligible studies included within-subject indirect PPI for work-type performance and rating of an individual current performance. Quantitative and thematic analyses were conducted., Results: Of 24,442 records identified through database searches and 807 through other searches, 23 articles containing 24 studies were included. Twenty-two studies (92%) reported an assimilation effect (current ratings were biased toward the direction of the PPI). Factors modifying the effect of PPI were observed, with larger effects for highly polarized PPI, negative (vs positive) PPI, and early (vs subsequent) performances. Specific standards, rater motivation, and certain rater characteristics mitigated context effects, whereas increased rater processing demands heightened them. Mixed effects were seen with nature of the performance and with rater expertise and training., Conclusions: PPI appears likely to influence ratings of current performance, and an assimilation effect is seen with indirect PPI. Whether these findings generalize to medical education is unknown, but they should be considered by educators wanting to implement LH. Future studies should explore PPI in medical education contexts and real-world settings.
- Published
- 2019
- Full Text
- View/download PDF
26. Plus ça change, plus c'est pareil: Making a continued case for the use of MCQs in medical education.
- Author
- Pugh D, De Champlain A, and Touchie C
- Subjects
- Cognition, Competency-Based Education, Computer-Assisted Instruction/methods, Humans, Education, Medical, Undergraduate, Educational Measurement/methods
- Abstract
Despite the increased emphasis on the use of workplace-based assessment in competency-based education models, there is still an important role for the use of multiple-choice questions (MCQs) in the assessment of health professionals. The challenge, however, is to ensure that MCQs are developed in a way that allows educators to derive meaningful information about examinees' abilities. As educators' needs for high-quality test items have evolved, so has our approach to developing MCQs. This evolution has been reflected in a number of ways, including the use of different stimulus formats; the creation of novel response formats; the development of new approaches to problem conceptualization; and the incorporation of technology. The purpose of this narrative review is to provide the reader with an overview of how our understanding of the use of MCQs in the assessment of health professionals has evolved to better measure clinical reasoning and to improve both efficiency and item quality.
- Published
- 2019
- Full Text
- View/download PDF
27. Choosing Our Own Pathway to Competency-Based Undergraduate Medical Education.
- Author
- Veale P, Busche K, Touchie C, Coderre S, and McLaughlin K
- Subjects
- Adult, Canada, Female, Humans, Male, North America, Young Adult, Clinical Competence, Competency-Based Education/organization & administration, Curriculum, Education, Medical, Undergraduate/organization & administration, Educational Measurement/methods, Students, Medical/psychology
- Abstract
After many years in the making, an increasing number of postgraduate medical education (PGME) training programs in North America are now adopting a competency-based medical education (CBME) framework based on entrustable professional activities (EPAs) that, in turn, encompass a larger number of competencies and training milestones. Following the lead of PGME, CBME is now being incorporated into undergraduate medical education (UME) in an attempt to improve integration across the medical education continuum and to facilitate a smooth transition from clerkship to residency by ensuring that all graduates are ready for indirect supervision of required EPAs on day one of residency training. The Association of Faculties of Medicine of Canada recently finalized its list of 12 EPAs, which closely parallels the list of 13 EPAs published earlier by the Association of American Medical Colleges, and defines the "core" EPAs that are an expectation of all medical school graduates. In this article, the authors focus on important, practical considerations for the transition to CBME that they feel have not been adequately addressed in the existing literature. They suggest that the transition to CBME should not threaten diversity in UME or require a major curricular upheaval. However, each UME program must make important decisions that will define its version of CBME, including which terminology to use when describing the construct being evaluated, which rating tools and raters to include in the assessment program, and how to make promotion decisions based on all of the available data on EPAs.
- Published
- 2019
- Full Text
- View/download PDF
28. Overcoming the barriers of teaching physical examination at the bedside: more than just curriculum design.
- Author
- Rousseau M, Könings KD, and Touchie C
- Subjects
- Adult, Attitude of Health Personnel, Female, Focus Groups, Humans, Male, Qualitative Research, Clinical Competence/standards, Curriculum, Education, Medical, Graduate, Internship and Residency, Physical Examination/standards, Point-of-Care Testing/standards
- Abstract
Background: Physicians in training must achieve a high degree of proficiency in performing physical examinations and must strive to become experts in the field. Concerns are emerging about physicians' abilities to perform these basic skills, which are essential for clinical decision making. Learning at the bedside has the potential to support skill acquisition through deliberate practice. Previous skills improvement programs targeted at teaching physical examinations have been successful at increasing the frequency of performing and teaching physical examinations. It remains unclear what barriers might persist after such program implementation. This study explores residents' and physicians' perceptions of physical examination teaching at the bedside following the implementation of a new structured bedside curriculum: what are the potentially persisting barriers and proposed solutions for improvement? Methods: The study used a constructivist approach, employing qualitative inductive thematic analysis oriented to constructing an understanding of the barriers to and facilitators of physical examination teaching in the context of a new bedside curriculum. Participants took part in individual interviews and subsequently in focus groups. Transcripts were coded and themes were identified. Results: Data analyses yielded three main themes: (1) the culture of teaching physical examination at the bedside is shaped and threatened by the lack of hospital support, physicians' motivation and expertise, residents' attitudes, and dependence on technology; (2) the hospital environment makes bedside teaching difficult because of its chaotic nature, time constraints, and conflicting responsibilities; and (3) structured physical examination curricula create missed opportunities by being restrictive and pose difficulties in identifying patients with findings. Conclusions: Despite the implementation of a structured bedside curriculum for physical examination teaching, our study suggests that cultural, environmental, and curriculum-related barriers remain important issues to be addressed. Institutions wishing to develop and implement similar bedside curricula should prioritize recruitment of expert clinical teachers, recognizing their time and efforts. Teaching should be delivered in a protected environment, away from clinical duties, and with patients with real findings. Physicians must value teaching and learning of physical examination skills, with multiple hands-on opportunities for direct role modeling, coaching, observation, and deliberate practice. Ideally, clinical teachers should master the art of combining both patient care and educational activities.
- Published
- 2018
- Full Text
- View/download PDF
29. Can physician examiners overcome their first impression when examinee performance changes?
- Author
- Wood TJ, Pugh D, Touchie C, Chan J, and Humphrey-Murto S
- Subjects
- Adult, Female, Humans, Judgment, Male, Middle Aged, Socioeconomic Factors, Clinical Competence/standards, Educational Measurement/methods, Educational Measurement/standards, Observer Variation
- Abstract
There is an increasing focus on factors that influence the variability of rater-based judgments. First impressions are one such factor. First impressions are judgments about people that are made quickly and based on little information. Under some circumstances, these judgments can be predictive of subsequent decisions. A concern for both examinees and test administrators is whether the relationship remains stable when the performance of the examinee changes. That is, once a first impression is formed, to what degree will an examiner be willing to modify it? The purpose of this study is to determine the degree to which first impressions influence final ratings when the performance of examinees changes within the context of an objective structured clinical examination (OSCE). Physician examiners (n = 29) viewed seven videos of examinees (i.e., actors) performing a physical exam on a single OSCE station. They rated the examinees' clinical abilities on a six-point global rating scale after 60 seconds (the first impression global rating, or FIGR). They then observed the examinee for the remainder of the station and provided a final global rating (GRS). For three of the videos, the examinees' performance remained consistent throughout. For two videos, examinee performance changed from initially strong to weak, and for two videos, performance changed from initially weak to strong. The mean FIGR for the Consistent condition (M = 4.80) and the Strong to Weak condition (M = 4.87) were higher than their respective GRS ratings (M = 3.93 and M = 2.73), with a greater decline for the Strong to Weak condition. The mean FIGR for the Weak to Strong condition (M = 3.60) was lower than the corresponding mean GRS (M = 4.81). This pattern of findings suggests that raters were willing to change their judgments based on examinee performance. Future work should explore the impact of making a first impression judgment explicit versus implicit and the role of context in the relationship between a first impression and a subsequent judgment.
- Published
- 2018
- Full Text
- View/download PDF
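The record above compares mean first-impression (FIGR) and final (GRS) ratings within each performance condition. A minimal sketch of such a paired comparison, assuming a long-format table of ratings; the column names and values are invented, not taken from the study:

```python
# Paired comparison of first-impression (FIGR) vs. final (GRS) ratings by
# condition. Data layout and numbers are hypothetical.
import pandas as pd
from scipy import stats

ratings = pd.DataFrame({
    "rater":     [1, 1, 2, 2, 3, 3],
    "condition": ["consistent", "strong_to_weak"] * 3,
    "figr":      [5, 5, 4, 5, 5, 5],   # rating after 60 s
    "grs":       [4, 3, 4, 2, 4, 3],   # rating after the full station
})

for condition, group in ratings.groupby("condition"):
    # Each rater contributes both ratings, so a paired t-test applies.
    t, p = stats.ttest_rel(group["figr"], group["grs"])
    print(f"{condition}: FIGR={group['figr'].mean():.2f}, "
          f"GRS={group['grs'].mean():.2f}, t={t:.2f}, p={p:.3f}")
```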
30. A Call to Investigate the Relationship Between Education and Health Outcomes Using Big Data.
- Author
-
Chahine S, Kulasegaram KM, Wright S, Monteiro S, Grierson LEM, Barber C, Sebok-Syer SS, McConnell M, Yen W, De Champlain A, and Touchie C
- Subjects
- Humans, Big Data, Education, Medical statistics & numerical data, Needs Assessment, Outcome Assessment, Health Care statistics & numerical data
- Abstract
There exists an assumption that improving medical education will improve patient care. While seemingly logical, this premise has rarely been investigated. In this Invited Commentary, the authors propose the use of big data to test this assumption. The authors present example research studies linking education and patient care outcomes and argue that big data could considerably ease the investigations needed to test this assumption. They also propose that collaboration is needed to link educational and health care data. They then introduce a grassroots initiative, inclusive of universities in one Canadian province and national licensing organizations, that is working to collect, organize, link, and analyze big data to study the relationship between pedagogical approaches to medical training and patient care outcomes. While the authors acknowledge the challenges and issues associated with harnessing big data, they believe the benefits outweigh them. Medical education research needs to go beyond the outcomes of training to study practice and clinical outcomes as well. Without a coordinated effort to harness big data, policy makers, regulators, medical educators, and researchers are left with sometimes costly guesses and assumptions about what works and what does not. As the social, time, and financial investments in medical education continue to increase, it is imperative to understand the relationship between education and health outcomes.
- Published
- 2018
- Full Text
- View/download PDF
31. Assessment Pearls for Competency-Based Medical Education.
- Author
-
Humphrey-Murto S, Wood TJ, Ross S, Tavares W, Kvern B, Sidhu R, Sargeant J, and Touchie C
- Subjects
- Humans, Clinical Competence, Competency-Based Education, Educational Measurement, Internship and Residency
- Published
- 2017
- Full Text
- View/download PDF
32. EQual, a Novel Rubric to Evaluate Entrustable Professional Activities for Quality and Structure.
- Author
-
Taylor DR, Park YS, Egan R, Chan MK, Karpinski J, Touchie C, Snell LS, and Tekian A
- Subjects
- Canada, Curriculum, Humans, Reproducibility of Results, Clinical Competence, Competency-Based Education, Education, Medical, Graduate, Internal Medicine education, Internship and Residency
- Abstract
Purpose: Entrustable professional activities (EPAs) have become a cornerstone of assessment in competency-based medical education (CBME). Increasingly, EPAs are being adopted that do not conform to EPA standards. This study aimed to develop and validate a scoring rubric to evaluate EPAs for alignment with their purpose, and to identify substandard EPAs., Method: The EQual rubric was developed and revised by a team of education scholars with expertise in EPAs. It was then applied by four residency program directors/CBME leads (PDs) and four nonclinician support staff to 31 stage-specific EPAs developed for internal medicine in the Royal College of Physicians and Surgeons of Canada's Competency by Design framework. Results were analyzed using a generalizability study to evaluate overall reliability, with the EPAs as the object of measurement. Item-level analysis was performed to determine the reliability and discrimination of each item. Scores from the PDs were also compared with decisions about revisions made independently by the education scholars group., Results: The EQual rubric demonstrated high reliability in the G-study, with a phi-coefficient of 0.84 when applied by the PDs, and moderate reliability when applied by the support staff, at 0.67. Item-level analysis identified three items that performed poorly, with low item discrimination and low interrater reliability indices. Scores from support staff correlated only moderately with those from PDs. Using the pre-established cut score, PDs identified 9 of the 10 EPAs deemed to require major revision., Conclusions: EQual rubric scores reliably measured the alignment of EPAs with standards described in the literature. Further, its application accurately identified EPAs requiring major revisions.
- Published
- 2017
- Full Text
- View/download PDF
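The phi-coefficients above come from a generalizability study with EPAs as the object of measurement. A sketch of how phi can be estimated for a fully crossed EPA x rater design using standard variance-component formulas; the ratings below are invented, and the authors' exact G-study model may differ:

```python
# Two-way crossed design (EPA x rater), EPAs as the object of measurement.
# Variance components are estimated from the ANOVA mean squares; data invented.
import numpy as np

scores = np.array([   # rows = EPAs, columns = raters
    [4.0, 4.5, 4.2, 4.1],
    [3.0, 2.8, 3.2, 3.1],
    [4.8, 4.6, 4.9, 4.7],
    [2.1, 2.5, 2.0, 2.4],
])
n_p, n_r = scores.shape

grand = scores.mean()
p_means = scores.mean(axis=1)
r_means = scores.mean(axis=0)

ms_p = n_r * ((p_means - grand) ** 2).sum() / (n_p - 1)     # EPA effect
ms_r = n_p * ((r_means - grand) ** 2).sum() / (n_r - 1)     # rater effect
resid = scores - p_means[:, None] - r_means[None, :] + grand
ms_e = (resid ** 2).sum() / ((n_p - 1) * (n_r - 1))         # residual

var_p = max((ms_p - ms_e) / n_r, 0.0)   # EPA (universe-score) variance
var_r = max((ms_r - ms_e) / n_p, 0.0)   # rater variance
var_e = ms_e                            # interaction/error variance

# Phi: absolute-agreement reliability of the mean over n_r raters.
phi = var_p / (var_p + (var_r + var_e) / n_r)
print(f"phi = {phi:.2f}")
```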
33. The influence of first impressions on subsequent ratings within an OSCE station.
- Author
-
Wood TJ, Chan J, Humphrey-Murto S, Pugh D, and Touchie C
- Subjects
- Clinical Competence standards, Education, Medical standards, Faculty, Medical standards, Humans, Medical History Taking standards, Observer Variation, Reproducibility of Results, Videotape Recording, Education, Medical methods, Educational Measurement methods, Educational Measurement standards, Faculty, Medical psychology
- Abstract
Competency-based assessment is placing increasing emphasis on the direct observation of learners. For this process to produce valid results, it is important that raters provide quality judgments that are accurate. Unfortunately, the quality of these judgments is variable, and the factors that influence their accuracy are not clearly understood. One such factor is first impressions: judgments about people we do not know, made quickly and based on very little information. This study explores the influence of first impressions in an OSCE. Specifically, the purpose is to begin to examine the accuracy of a first impression and its influence on subsequent ratings. We created six videotapes of history-taking performance. Each video was scripted from the real performance of one of six examinee residents within a single OSCE station; each performance was re-enacted and videotaped with six different actors playing the examinees and one actor playing the patient. A total of 23 raters (i.e., physician examiners) reviewed each video and were asked to make a global judgment of the examinee's clinical abilities after 60 s (First Impression GR) on a six-point global rating scale, and then to rate their confidence in the accuracy of that judgment on a five-point rating scale (Confidence GR). After making these ratings, raters watched the remainder of the examinee's performance and made another global rating of performance (Final GR) before moving on to the next video. First impression ratings of ability varied across examinees and were moderately correlated with expert ratings (r = .59, 95% CI [-.13, .90]). There were significant differences in mean ratings for three examinees. Correlations between first impression and final ratings ranged from .05 to .56 but were significant for only three examinees. Raters' confidence in their first impression was not related to the likelihood of their changing the rating between the first impression and a subsequent rating. The findings suggest that first impressions could play a role in explaining variability in judgments, but their importance was determined by the videotaped performance of the examinees. More work is needed to clarify the conditions that support or discourage the use of first impressions.
- Published
- 2017
- Full Text
- View/download PDF
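The interval reported above (r = .59, 95% CI [-.13, .90]) is very wide because only six examinee videos were rated. A sketch of one standard way to build such an interval, the Fisher z-transformation; the paper's exact method is not stated, so treat this as illustrative:

```python
# Fisher z confidence interval for a Pearson correlation; n is the number of
# paired observations (here, six examinee videos).
import math

def pearson_ci(r, n, z_crit=1.96):
    z = math.atanh(r)               # Fisher transform of r
    se = 1.0 / math.sqrt(n - 3)     # standard error on the z scale
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

lo, hi = pearson_ci(0.59, 6)
print(f"r = .59, 95% CI [{lo:.2f}, {hi:.2f}]")   # wide, as expected with n = 6
```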
34. Core principles of assessment in competency-based medical education.
- Author
-
Lockyer J, Carraccio C, Chan MK, Hart D, Smee S, Touchie C, Holmboe ES, and Frank JR
- Subjects
- Education, Medical standards, Educational Measurement standards, Feedback, Humans, Psychometrics, Clinical Competence, Competency-Based Education, Education, Medical methods, Educational Measurement methods, Learning
- Abstract
The meaningful assessment of competence is critical for the implementation of effective competency-based medical education (CBME). Timely ongoing assessments are needed along with comprehensive periodic reviews to ensure that trainees continue to progress. New approaches are needed to optimize the use of multiple assessors and assessments; to synthesize the data collected from multiple assessors and multiple types of assessments; to develop faculty competence in assessment; and to ensure that relationships between the givers and receivers of feedback are appropriate. This paper describes the core principles of assessment for learning and assessment of learning. It addresses several ways to ensure the effectiveness of assessment programs, including using the right combination of assessment methods and conducting careful assessor selection and training. It provides a reconceptualization of the role of psychometrics and articulates the importance of a group process in determining trainees' progress. In addition, it notes that, to reach its potential as a driver in trainee development, quality care, and patient safety, CBME requires effective information management and documentation as well as ongoing consideration of ways to improve the assessment system.
- Published
- 2017
- Full Text
- View/download PDF
35. Direct Observation of Clinical Skills Feedback Scale: Development and Validity Evidence.
- Author
-
Halman S, Dudek N, Wood T, Pugh D, Touchie C, McAleer S, and Humphrey-Murto S
- Subjects
- Clinical Competence, Humans, Students, Medical, Competency-Based Education, Education, Medical, Graduate, Feedback
- Abstract
Construct: This article describes the development of, and validity evidence behind, a new rating scale to assess feedback quality in the clinical workplace., Background: Competency-based medical education has mandated a shift to learner-centeredness, authentic observation, and frequent formative assessments with a focus on the delivery of effective feedback. Because feedback has been shown to be of variable quality and effectiveness, an assessment of feedback quality in the workplace is important to ensure we are providing trainees with optimal learning opportunities. The purposes of this project were to develop a rating scale for the quality of verbal feedback in the workplace (the Direct Observation of Clinical Skills Feedback Scale [DOCS-FBS]) and to gather validity evidence for its use., Approach: Two panels of experts (local and national) took part in a nominal group technique to identify features of high-quality feedback. Through multiple iterations and reviews, nine features were developed into the DOCS-FBS. Four rater types (residents, n = 21; medical students, n = 8; faculty, n = 12; educators, n = 12) used the DOCS-FBS to rate videotaped feedback encounters of variable quality. The psychometric properties of the scale were determined using a generalizability analysis. Participants also completed a survey, rated on 5-point Likert-type scales, on the ease of use, clarity, knowledge acquisition, and acceptability of the scale., Results: Mean video ratings ranged from 1.38 to 2.96 out of 3 and followed the intended pattern, suggesting that the tool allowed raters to distinguish between examples of higher and lower quality feedback. There were no significant differences between rater types (range = 2.36-2.49), suggesting that all groups of raters used the tool in the same way. The generalizability coefficients for the scale ranged from 0.97 to 0.99. Item-total correlations were all above 0.80, suggesting some redundancy among items. Participants found the scale easy to use (M = 4.31/5) and clear (M = 4.23/5), and most would recommend its use (M = 4.15/5). Use of the DOCS-FBS was acceptable to both trainees (M = 4.34/5) and supervisors (M = 4.22/5)., Conclusions: The DOCS-FBS can reliably differentiate between feedback encounters of higher and lower quality. The scale has been shown to have excellent internal consistency. We foresee the DOCS-FBS being used to provide objective evidence, through formal assessment of feedback quality, that faculty development efforts aimed at improving feedback skills can yield results.
- Published
- 2016
- Full Text
- View/download PDF
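The item-total correlations above (all > 0.80) flagged possible redundancy among the nine DOCS-FBS items. A sketch of corrected item-total correlations over an assumed encounters-by-items rating matrix, simulated here:

```python
# Corrected item-total correlations: each item vs. the sum of the other items.
# A rating matrix (encounters x 9 items) is simulated via a shared factor.
import numpy as np

rng = np.random.default_rng(0)
shared = rng.normal(size=(50, 1))                  # common "feedback quality"
items = shared + 0.4 * rng.normal(size=(50, 9))    # 9 correlated item ratings

for i in range(items.shape[1]):
    rest = np.delete(items, i, axis=1).sum(axis=1)  # total excluding item i
    r = np.corrcoef(items[:, i], rest)[0, 1]
    print(f"item {i + 1}: corrected item-total r = {r:.2f}")
```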
36. Using cognitive models to develop quality multiple-choice questions.
- Author
-
Pugh D, De Champlain A, Gierl M, Lai H, and Touchie C
- Subjects
- Competency-Based Education, Humans, Education, Medical, Undergraduate, Educational Measurement methods, Educational Measurement standards, Models, Psychological
- Abstract
With the recent interest in competency-based education, educators are being challenged to develop more assessment opportunities. As such, there is increased demand for exam content development, which can be a very labor-intensive process. An innovative solution to this challenge has been the use of automatic item generation (AIG) to develop multiple-choice questions (MCQs). In AIG, computer technology is used to generate test items from cognitive models (i.e. representations of the knowledge and skills required to solve a problem). The main advantage yielded by AIG is efficiency in generating items. Although the technology for AIG relies on a linear programming approach, the same principles can also be used to improve the traditional committee-based processes used in the development of MCQs. Using this approach, content experts deconstruct their clinical reasoning process to develop a cognitive model which, in turn, is used to create MCQs. This approach is appealing because it: (1) is efficient; (2) has been shown to produce items with psychometric properties comparable to those generated using a traditional approach; and (3) can be used to assess higher-order skills (i.e. application of knowledge). The purpose of this article is to provide a novel framework for the development of high-quality MCQs using cognitive models.
- Published
- 2016
- Full Text
- View/download PDF
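As a toy illustration of the AIG workflow described above, a cognitive model can be encoded as an item template plus variables whose value combinations map to a keyed answer. Everything below, including the template, the clinical content and the distractor pool, is invented for illustration and is not drawn from the article:

```python
# Toy AIG: a stem template whose variable slots are filled from a (fabricated)
# cognitive model mapping feature sets to a keyed diagnosis.
from itertools import product

TEMPLATE = ("A {age}-year-old patient presents with {findings}. "
            "What is the most likely diagnosis?")

MODEL = [  # (features, keyed answer) -- invented, not clinical guidance
    ({"findings": "painless jaundice and weight loss"}, "pancreatic cancer"),
    ({"findings": "jaundice, fever and right upper quadrant pain"},
     "ascending cholangitis"),
]
DISTRACTOR_POOL = ["viral hepatitis", "choledocholithiasis", "hemolytic anemia"]

def generate_items(ages=(45, 70)):
    items = []
    for age, (features, key) in product(ages, MODEL):
        items.append({
            "stem": TEMPLATE.format(age=age, **features),
            "options": sorted([key] + DISTRACTOR_POOL),
            "key": key,
        })
    return items

for item in generate_items():
    print(item["stem"])
    for letter, option in zip("ABCD", item["options"]):
        marker = "*" if option == item["key"] else " "
        print(f" {marker}{letter}. {option}")
```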
37. Do OSCE progress test scores predict performance in a national high-stakes examination?
- Author
-
Pugh D, Bhanji F, Cole G, Dupre J, Hatala R, Humphrey-Murto S, Touchie C, and Wood TJ
- Subjects
- Canada, Internal Medicine education, Clinical Competence standards, Educational Measurement methods, Internship and Residency standards, Licensure, Medical
- Abstract
Context: Progress tests, in which learners are repeatedly assessed on equivalent content at different times in their training and provided with feedback, would seem to lend themselves well to a competency-based framework, which requires more frequent formative assessments. The objective structured clinical examination (OSCE) progress test is a relatively new form of assessment used to assess the progression of clinical skills. The purpose of this study was to establish further evidence for the use of an OSCE progress test by demonstrating an association between scores from this assessment method and those from a national high-stakes examination., Methods: Eight years of data from an Internal Medicine Residency OSCE (IM-OSCE) progress test were compared with scores on the Royal College of Physicians and Surgeons of Canada Comprehensive Objective Examination in Internal Medicine (RCPSC IM examination), which comprises both a written and a performance-based component (n = 180). Correlations between scores on the two examinations were calculated. Logistic regression analyses were performed comparing IM-OSCE progress test scores with an 'elevated risk of failure' on either component of the RCPSC IM examination., Results: Correlations between scores from the IM-OSCE (for PGY-1 to PGY-4 residents) and those from the RCPSC IM examination ranged from 0.316 (p = 0.001) to 0.554 (p < 0.001) for the performance-based component and from 0.305 (p = 0.002) to 0.516 (p < 0.001) for the written component. Logistic regression models demonstrated that PGY-2 and PGY-4 scores from the IM-OSCE were predictive of an 'elevated risk of failure' on both components of the RCPSC IM examination., Conclusions: This study provides further evidence for the use of OSCE progress testing by demonstrating a correlation between scores from an OSCE progress test and a national high-stakes examination. Furthermore, there is evidence that OSCE progress test scores are predictive of future performance on a national high-stakes examination., (© 2016 John Wiley & Sons Ltd.)
- Published
- 2016
- Full Text
- View/download PDF
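A sketch of the kind of logistic regression described above, with progress-test scores predicting an 'elevated risk of failure' flag; the data are simulated, and statsmodels is simply one convenient choice:

```python
# Logistic regression of an 'elevated risk of failure' flag on scaled OSCE
# progress-test scores. Data are simulated; names are not from the study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
osce = rng.normal(500, 100, size=180)              # scaled progress-test scores
p_risk = 1 / (1 + np.exp((osce - 430) / 40))       # lower score -> higher risk
flag = rng.binomial(1, p_risk)                     # 1 = elevated risk of failure

model = sm.Logit(flag, sm.add_constant(osce)).fit(disp=False)
print(model.params)       # intercept and slope on the log-odds scale
print(model.pvalues)
```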
38. Entrustment Decision Making in Clinical Training.
- Author
-
Ten Cate O, Hart D, Ankel F, Busari J, Englander R, Glasgow N, Holmboe E, Iobst W, Lovell E, Snell LS, Touchie C, Van Melle E, and Wycliffe-Jones K
- Subjects
- Humans, Clinical Competence, Competency-Based Education methods, Decision Making, Education, Medical, Graduate methods, Internship and Residency methods, Interprofessional Relations
- Abstract
The decision to trust a medical trainee with the critical responsibility to care for a patient is fundamental to clinical training. When carefully and deliberately made, such decisions can serve as significant stimuli for learning and also shape the assessment of trainees. Holding back entrustment decisions too much may hamper the trainee's development toward unsupervised practice. When carelessly made, however, they jeopardize patient safety. Entrustment decision-making processes, therefore, deserve careful analysis.Members (including the authors) of the International Competency-Based Medical Education Collaborative conducted a content analysis of the entrustment decision-making process in health care training during a two-day summit in September 2013 and subsequently reviewed the pertinent literature to arrive at a description of the critical features of this process, which informs this article.The authors discuss theoretical backgrounds and terminology of trust and entrustment in the clinical workplace. The competency-based movement and the introduction of entrustable professional activities force educators to rethink the grounds for assessment in the workplace. Anticipating a decision to grant autonomy at a designated level of supervision appears to align better with health care practice than do most current assessment practices. The authors distinguish different modes of trust and entrustment decisions and elaborate five categories, each with related factors, that determine when decisions to trust trainees are made: the trainee, supervisor, situation, task, and the relationship between trainee and supervisor. The authors' aim in this article is to lay a theoretical foundation for a new approach to workplace training and assessment.
- Published
- 2016
- Full Text
- View/download PDF
39. Feedback in the OSCE: What Do Residents Remember?
- Author
-
Humphrey-Murto S, Mihok M, Pugh D, Touchie C, Halman S, and Wood TJ
- Subjects
- Educational Measurement, Humans, Physicians, Surveys and Questionnaires, Tape Recording, Feedback, Internal Medicine education, Internship and Residency, Mental Recall
- Abstract
Theory: The move to competency-based education has heightened the importance of direct observation of clinical skills and effective feedback. The Objective Structured Clinical Examination (OSCE) is widely used for assessment and affords an opportunity for both direct observation and feedback to occur simultaneously. For feedback to be effective, it should include direct observation, assessment of performance, provision of feedback, reflection, decision making, and use of the feedback for learning and change., Hypotheses: If one of the goals of feedback is to engage students in thinking about their performance (i.e., reflection), it would seem imperative that they can recall this feedback both immediately and into the future. This study explores recall of feedback in the context of an OSCE. Specifically, its purposes were to (a) determine the amount and accuracy of feedback that trainees remember immediately after an OSCE and 1 month later, and (b) assess whether prompting immediate recall improved delayed recall., Methods: Internal medicine residents received 2 minutes of verbal feedback from physician examiners in the context of an OSCE. The feedback was audio-recorded and later transcribed. Residents were randomly allocated to an immediate recall group (immediate-RG; n = 10) or a delayed recall group (delayed-RG; n = 8). The immediate-RG completed a questionnaire prompting recall of the feedback received immediately after the OSCE, and again 1 month later. The delayed-RG completed the questionnaire only 1 month after the OSCE. The total number and accuracy of feedback points provided by examiners were compared with the points recalled by residents. Recall at 1 month was also compared between the immediate-RG and the delayed-RG., Results: Physician examiners provided considerably more feedback points (M = 16.3) than the residents recalled immediately after the OSCE (M = 2.61, p < .001). There was no significant difference between the number of feedback points recalled upon completion of the OSCE (M = 2.61) and 1 month later (M = 1.96, p = .06, Cohen's d = .70). Prompting immediate recall did not improve later recall. The mean accuracy score for feedback recall immediately after the OSCE was 4.3/9, or 'somewhat representative'; at 1 month the score dropped to 3.5/9, or 'not representative' (ns)., Conclusion: Residents recall very few feedback points immediately after the OSCE and 1 month later. The feedback points that are recalled are neither very accurate nor representative of the feedback actually provided.
- Published
- 2016
- Full Text
- View/download PDF
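A sketch of the effect size reported above. The paper does not state its exact convention for Cohen's d, so this uses one common choice for paired data, the mean difference divided by the standard deviation of the differences; the recall counts are invented:

```python
# Paired Cohen's d: mean of the differences over the SD of the differences.
import numpy as np

def cohens_d_paired(first, second):
    diffs = np.asarray(first, float) - np.asarray(second, float)
    return diffs.mean() / diffs.std(ddof=1)

immediate = [3, 2, 4, 1, 3, 2, 3, 2, 4, 2]   # feedback points recalled (invented)
one_month = [2, 2, 3, 1, 2, 1, 2, 2, 3, 1]
print(f"d = {cohens_d_paired(immediate, one_month):.2f}")
```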
40. The OSCE progress test--Measuring clinical skill development over residency training.
- Author
-
Pugh D, Touchie C, Humphrey-Murto S, and Wood TJ
- Subjects
- Humans, Ontario, Retrospective Studies, Clinical Competence standards, Educational Measurement methods, Internal Medicine education, Internship and Residency
- Abstract
Purpose: The purpose of this study was to explore the use of an objective structured clinical examination for Internal Medicine residents (IM-OSCE) as a progress test for clinical skills., Methods: Data from eight administrations of an IM-OSCE were analyzed retrospectively. Data were scaled to a mean of 500 and a standard deviation (SD) of 100. A time-based comparison, treating post-graduate year (PGY) as a repeated-measures factor, was used to determine how residents' performance progressed over time., Results: Residents' total IM-OSCE scores (n = 244) increased over training, from a mean of 445 (SD = 84) in PGY-1 to 534 (SD = 71) in PGY-3 (p < 0.001). In an analysis of sub-scores including only those who participated in the IM-OSCE in all three years of training (n = 46), mean structured oral scores increased from 464 (SD = 92) to 533 (SD = 83) (p < 0.001), physical examination scores increased from 464 (SD = 82) to 520 (SD = 75) (p < 0.001), and procedural skills scores increased from 495 (SD = 99) to 555 (SD = 67) (p = 0.033). There was no significant change in communication scores (p = 0.97)., Conclusions: The IM-OSCE can be used to demonstrate the progression of clinical skills throughout residency training. Although most of the clinical skills assessed improved as residents progressed through their training, communication skills did not appear to change.
- Published
- 2016
- Full Text
- View/download PDF
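The standardization described above, scaling to a mean of 500 and an SD of 100, is a linear transformation of the raw scores. A minimal sketch; the raw totals are invented, and the use of the sample SD is an assumption:

```python
# Linear rescaling of raw totals to mean 500, SD 100 within an administration.
import numpy as np

raw = np.array([12.0, 15.5, 9.0, 18.0, 14.0])              # invented raw totals
scaled = 500 + 100 * (raw - raw.mean()) / raw.std(ddof=1)  # sample SD assumed
print(scaled.round())   # mean 500, SD 100 by construction
```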
41. Using Automatic Item Generation to Improve the Quality of MCQ Distractors.
- Author
-
Lai H, Gierl MJ, Touchie C, Pugh D, Boulais AP, and De Champlain A
- Subjects
- Automation, Humans, Jaundice diagnosis, Jaundice therapy, Models, Educational, Psychometrics, Computer-Assisted Instruction methods, Education, Medical, Undergraduate methods, Educational Measurement methods, Quality Improvement
- Abstract
Construct: Automatic item generation (AIG) is an alternative method for producing large numbers of test items that integrates cognitive modeling with computer technology to systematically generate multiple-choice questions (MCQs). The purpose of our study is to describe and validate a method of generating plausible but incorrect distractors. Initial applications of AIG demonstrated its effectiveness in producing test items. However, expert review of the initial items identified a key limitation: the generation of implausible incorrect options, or distractors, might limit the applicability of items in real testing situations., Background: Medical educators require test items in large quantities to facilitate the continual assessment of student knowledge. Traditional item development processes are time-consuming and resource intensive. Studies have validated the quality of generated items through content expert review, but no study has yet documented how generated items perform in a test administration or validated AIG through student responses to generated test items., Approach: To validate our refined AIG method for generating plausible distractors, we collected psychometric evidence from a field test of the generated items. A three-step process was used to generate test items in the area of jaundice. At least 455 Canadian and international medical graduates responded to each of the 13 generated items embedded in a high-stakes exam administration. Item difficulty, discrimination, and index of discrimination estimates were calculated for the correct option as well as for each distractor., Results: Item analysis results for the correct options suggest that the generated items measured candidate performance across a range of ability levels while providing a consistent level of discrimination for each item. Results for the distractors reveal that the generated items differentiated the low- from the high-performing candidates., Conclusions: Previous research on AIG highlighted how this item development method can produce high-quality stems and correct options for MCQ exams. The purpose of the current study was to describe, illustrate, and evaluate a method for modeling plausible but incorrect options. The evidence provided in this study demonstrates that AIG can produce psychometrically sound test items. More important, by adapting the distractors to match the unique features presented in the stem and correct option, the automated generation of MCQs has the potential to produce plausible distractors and yield large numbers of high-quality items for medical education.
- Published
- 2016
- Full Text
- View/download PDF
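A sketch of the classical item-analysis statistics named above: difficulty as the proportion answering correctly, and discrimination as the point-biserial correlation between the item score and the total exam score. Responses are simulated:

```python
# Classical item analysis: difficulty (proportion correct) and point-biserial
# discrimination (item score vs. total score). Responses are simulated.
import numpy as np

rng = np.random.default_rng(2)
n = 455
ability = rng.normal(size=n)                       # latent candidate ability
correct = (ability + rng.normal(size=n)) > 0.3     # 0/1 responses to one item
total = ability + rng.normal(scale=0.5, size=n)    # stand-in for exam total

difficulty = correct.mean()
discrimination = np.corrcoef(correct.astype(float), total)[0, 1]
print(f"difficulty p = {difficulty:.2f}, point-biserial r = {discrimination:.2f}")
```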
42. The promise, perils, problems and progress of competency-based medical education.
- Author
-
Touchie C and ten Cate O
- Subjects
- Educational Measurement, Internship and Residency, Clinical Competence, Competency-Based Education, Education, Medical methods
- Abstract
Context: Competency-based medical education (CBME) is being adopted wholeheartedly by organisations worldwide in the hope of meeting today's expectations for training a competent doctor. But are we, as medical educators, fulfilling this promise?, Methods: The authors explore, through a personal viewpoint, the problems identified with CBME and the progress made through the development of milestones and entrustable professional activities (EPAs)., Results: Proponents of CBME have strong reasons to keep developing and supporting this broad movement in medical education. Critics, however, have legitimate reservations. The authors observe that the recent increase in use of milestones and EPAs can strengthen the purpose of CBME and counter some of the concerns voiced, if properly implemented., Conclusions: The authors conclude with suggestions for the future and how using EPAs could lead us one step closer to the goals of not only competency-based medical education but also competency-based medical practice., (© 2015 John Wiley & Sons Ltd.)
- Published
- 2016
- Full Text
- View/download PDF
43. A procedural skills OSCE: assessing technical and non-technical skills of internal medicine residents.
- Author
-
Pugh D, Hamstra SJ, Wood TJ, Humphrey-Murto S, Touchie C, Yudkowsky R, and Bordage G
- Subjects
- Adult, Female, Humans, Male, Models, Educational, Ontario, Reproducibility of Results, Clinical Competence, Education, Medical, Graduate standards, Educational Measurement methods, Internal Medicine education, Internship and Residency
- Abstract
Internists are required to perform a number of procedures that demand mastery of technical and non-technical skills; however, formal assessment of these skills is often lacking. The purpose of this study was to develop, implement, and gather validity evidence for a procedural skills objective structured clinical examination (PS-OSCE) for internal medicine (IM) residents, to assess their technical and non-technical skills when performing procedures. Thirty-five first- to third-year IM residents participated in a 5-station PS-OSCE, which combined partial task models, standardized patients, and allied health professionals. Formal blueprinting was performed, and content experts were used to develop the cases and rating instruments. Examiners underwent a frame-of-reference training session to prepare them for their rater role. Scores were compared by level of training and experience, and with evaluation data from a non-procedural OSCE (IM-OSCE). Reliability was calculated using generalizability analyses. Reliabilities for the technical and non-technical scores were 0.68 and 0.76, respectively. Third-year residents scored significantly higher than first-year residents on the technical (73.5% vs. 62.2%) and non-technical (83.2% vs. 75.1%) components of the PS-OSCE (p < 0.05). Residents who had performed the procedures more frequently scored higher on three of the five stations (p < 0.05). There was a moderate disattenuated correlation (r = 0.77) between the IM-OSCE scores and the technical component of the PS-OSCE scores. The PS-OSCE is a feasible method for assessing multiple competencies related to performing procedures, and this study provides validity evidence to support its use as an in-training examination.
- Published
- 2015
- Full Text
- View/download PDF
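The disattenuated correlation reported above corrects an observed correlation for unreliability in both measures. A sketch of the standard formula; the observed r and the IM-OSCE reliability below are placeholders, since only the PS-OSCE reliabilities (0.68 and 0.76) are given in the record:

```python
# Correction for attenuation: r_true = r_obs / sqrt(rel_x * rel_y).
import math

def disattenuate(r_obs, rel_x, rel_y):
    return r_obs / math.sqrt(rel_x * rel_y)

# 0.68 is the reported technical-score reliability; the observed r (0.55) and
# the IM-OSCE reliability (0.79) are placeholders for illustration.
print(f"disattenuated r = {disattenuate(0.55, 0.68, 0.79):.2f}")
```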
44. Supervising incoming first-year residents: faculty expectations versus residents' experiences.
- Author
-
Touchie C, De Champlain A, Pugh D, Downing S, and Bordage G
- Subjects
- Attitude of Health Personnel, Faculty, Medical, Humans, Male, Ontario, Professional Practice standards, Clinical Competence standards, Internship and Residency standards
- Abstract
Context: First-year residents begin clinical practice in settings in which attending staff and senior residents are available to supervise their work. There is an expectation that, while being supervised and as they become more experienced, residents will gradually take on more responsibilities and function independently., Objectives: This study was conducted to define 'entrustable professional activities' (EPAs) and determine the extent of agreement between the level of supervision expected by clinical supervisors (CSs) and the level of supervision reported by first-year residents., Methods: Using a nominal group technique, subject matter experts (SMEs) from multiple specialties defined EPAs for incoming residents; these represented a set of activities to be performed independently by residents by the end of the first year of residency, regardless of specialty. We then surveyed CSs and first-year residents from one institution in order to compare the levels of supervision expected and received during the day and night for each EPA., Results: The SMEs defined 10 EPAs (e.g. completing admission orders, obtaining informed consent) that were ratified by a national panel. A total of 113 CSs and 48 residents completed the survey. Clinical supervisors had the same expectations regardless of time of day. For three EPAs (managing i.v. fluids, obtaining informed consent, obtaining advance directives) the level of supervision reported by first-year residents was lower than that expected by CSs (p < 0.001) regardless of time of day (i.e. day or night). For four more EPAs (initiating the management of a critically ill patient, handing over the care of a patient to colleagues, writing a discharge prescription, coordinating a patient discharge) differences applied only to night-time work (p ≤ 0.001)., Conclusions: First-year residents reported performing EPAs with less supervision than expected by CSs, especially during the night. Using EPAs to guide the content of the undergraduate curriculum and during examinations could help better align CSs' and residents' expectations about early residency supervision., (© 2014 John Wiley & Sons Ltd.)
- Published
- 2014
- Full Text
- View/download PDF
45. Progress testing: is there a role for the OSCE?
- Author
-
Pugh D, Touchie C, Wood TJ, and Humphrey-Murto S
- Subjects
- Analysis of Variance, Canada, Educational Measurement methods, Educational Measurement standards, Faculty, Medical, Humans, Reproducibility of Results, Clinical Competence standards, Competency-Based Education standards, Educational Measurement statistics & numerical data, Internship and Residency, Psychometrics
- Abstract
Context: The shift from a time-based to a competency-based framework in medical education has created a need for frequent formative assessments. Many educational programmes use some form of written progress test to identify areas of strength and weakness and to promote continuous improvement in their learners. However, the role of performance-based assessments, such as objective structured clinical examinations (OSCEs), in progress testing remains unclear., Objective: The aims of this paper are to describe the use of an OSCE to assess learners at different stages of training, to describe a structure for reporting scores, and to provide evidence for the psychometric properties of different rating tools., Methods: A 10-station OSCE was administered to internal medicine residents in postgraduate years (PGYs) 1-4. Candidates were assessed using a checklist (CL), a global rating scale (GRS) and a training level rating scale (TLRS). Reliability was calculated for each measure using Cronbach's alpha. Differences in performance by year of training were explored using analysis of variance (ANOVA). Correlations between scores obtained using the different rating instruments were calculated., Results: Sixty-nine residents participated in the OSCE. Inter-station reliability was greater using the TLRS (0.88) than the CL (0.84) or GRS (0.79). With all three rating instruments, scores varied significantly by year of training (p < 0.001). Scores from the different rating instruments were highly correlated: CL and GRS, r = 0.93; CL and TLRS, r = 0.90; and GRS and TLRS, r = 0.94 (p < 0.001). Candidates received feedback on their performance relative to examiner expectations for their PGY level., Conclusions: Scores were found to have high reliability and demonstrated significant differences in performance by year of training. This provides evidence for the validity of using scores achieved on an OSCE as markers of progress in learners at different levels of training. Future studies will focus on assessing individual progress on the OSCE over time., (© 2014 John Wiley & Sons Ltd.)
- Published
- 2014
- Full Text
- View/download PDF
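Inter-station reliability above was estimated with Cronbach's alpha over a candidates-by-stations score matrix. A minimal sketch with simulated scores:

```python
# Cronbach's alpha over a candidates x stations score matrix (simulated).
import numpy as np

def cronbach_alpha(scores):
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of station variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

rng = np.random.default_rng(3)
stations = rng.normal(size=(69, 10)) + rng.normal(size=(69, 1))  # shared ability
print(f"alpha = {cronbach_alpha(stations):.2f}")
```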
46. The impact of cueing on written examinations of clinical decision making: a case study.
- Author
-
Desjardins I, Touchie C, Pugh D, Wood TJ, and Humphrey-Murto S
- Subjects
- Abdomen, Acute diagnostic imaging, Clinical Competence standards, Decision Making, Diagnosis, Differential, Educational Measurement statistics & numerical data, Heart Failure diagnostic imaging, Humans, Radiography, Random Allocation, Cues, Education, Medical, Educational Measurement methods, Students, Medical psychology
- Abstract
Context: Selected-response (SR) formats (e.g. multiple-choice questions) and constructed-response (CR) formats (e.g. short-answer questions) are commonly used to test the knowledge of examinees. Scores on SR formats are typically higher than scores on CR formats. This difference is often attributed to examinees being cued by the options within an SR question, but there could be alternative explanations. The purpose of this study was to expand on previous work on the cueing effect of SR formats by directly contrasting conditions that support cueing versus memory of previously seen questions., Methods: During an objective structured clinical examination, students (n = 144) completed two consecutive stations in which they were presented with the same written cases but in different formats. Group 1 students were presented with CR questions followed by SR questions; Group 2 students were presented with the questions in the reverse order. Participants were asked to describe their testing experience., Results: Selected-response scores (M = 4.21/10) were statistically higher than CR scores (M = 3.82/10). However, there was no significant interaction between sequence and format (F(1,142) = 1.59, p = 0.21, ηp² = 0.01), with scores increasing from 3.49/10 to 4.06/10 in the group that started with CR and decreasing from 4.38/10 to 4.15/10 in the group that started with SR. Correlations between SR scores and CR scores were high (CR first = 0.78, SR first = 0.89). Questionnaire results indicated that students felt the SR format was easier and led to cueing., Conclusion: To better understand test performance, it is important to know how different response formats can influence results. Because SR scores were higher than CR scores irrespective of the format seen first, the pattern is consistent with what would be expected if cueing, rather than memory for prior questions, led to higher SR scores. This could have implications for test designers, especially when selecting question formats., (© 2014 John Wiley & Sons Ltd.)
- Published
- 2014
- Full Text
- View/download PDF
47. Teaching and assessing procedural skills: a qualitative study.
- Author
-
Touchie C, Humphrey-Murto S, and Varpio L
- Subjects
- Feedback, Focus Groups, Humans, Internal Medicine education, Internal Medicine standards, Internship and Residency methods, Internship and Residency standards, Teaching methods, Clinical Competence standards, Educational Measurement methods, Teaching standards
- Abstract
Background: Graduating Internal Medicine residents must possess sufficient skills to perform a variety of medical procedures, yet little is known about residents' experiences of acquiring procedural skills proficiency, of practicing these techniques, or of being assessed on their proficiency. The purpose of this study was to qualitatively investigate residents' 1) experiences of the acquisition of procedural skills and 2) perceptions of the procedural skills assessment methods available to them., Methods: Focus groups were conducted in the weeks following an assessment of procedural skills incorporated into an objective structured clinical examination (OSCE). Using fundamental qualitative description, emergent themes were identified and analyzed., Results: Residents perceived procedural skills assessment on the OSCE as a useful formative tool for direct observation and immediate feedback. This positive reaction was regularly expressed in conjunction with frustration with the available assessment systems. Participants reported that proficiency was acquired through resident-directed learning, with no formal mechanism to ensure the acquisition or maintenance of skills., Conclusions: The acquisition and assessment of procedural skills in Internal Medicine programs should move toward a more structured system of teaching, deliberate practice and objective assessment. We propose that directed, self-guided learning might meet these needs.
- Published
- 2013
- Full Text
- View/download PDF
48. Spontaneous tumour lysis syndrome.
- Author
-
Kekre N, Djordjevic B, and Touchie C
- Subjects
- Aged, Carcinoma, Hepatocellular complications, Carcinoma, Hepatocellular pathology, Fatal Outcome, Humans, Liver pathology, Liver Neoplasms complications, Liver Neoplasms pathology, Male, Tumor Lysis Syndrome etiology, Tumor Lysis Syndrome pathology, Tumor Lysis Syndrome prevention & control, Tumor Lysis Syndrome diagnosis
- Published
- 2012
- Full Text
- View/download PDF
49. Cervical cancer screening among HIV-positive women. Retrospective cohort study from a tertiary care HIV clinic.
- Author
-
Leece P, Kendall C, Touchie C, Pottie K, Angel JB, and Jaffey J
- Subjects
- Adult, CD4 Lymphocyte Count, Female, Humans, Middle Aged, Ontario, Retrospective Studies, Viral Load, Early Detection of Cancer statistics & numerical data, HIV Seropositivity, Primary Health Care, Uterine Cervical Neoplasms diagnosis, Vaginal Smears statistics & numerical data
- Abstract
Objective: To determine the rate of cervical screening among HIV-positive women who received care at a tertiary care clinic, and to determine whether screening rates were influenced by having a primary care provider., Design: Retrospective chart review., Setting: Tertiary care outpatient clinic in Ottawa, Ont., Participants: HIV-positive women receiving care at the Ottawa Hospital General Campus Immunodeficiency Clinic between July 1, 2002, and June 30, 2005., Main Outcome Measures: Whether patients had primary care providers and whether they received cervical screening. We recorded information on patient demographics, HIV status, primary care providers, and cervical screening, including the date, results, and type of health care provider ordering the screening., Results: Fifty-eight percent (126 of 218) of the women had at least 1 cervical screening test during the 3-year period. Thirty-three percent (42 of 126) of the women who underwent cervical screening had at least 1 abnormal test result. The proportion of women who did not have any cervical tests performed was higher among women who did not have primary care providers (8 of 12 [67%] vs 84 of 206 [41%]; relative risk 1.6, 95% confidence interval 1.06 to 2.52, P < .05), although this group was small., Conclusion: Despite the high proportion of abnormal cervical screening test results among HIV-positive women, screening rates remained low. Our results support our hypothesis that women who do not have primary care providers are less likely to undergo cervical screening.
- Published
- 2010
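The relative risk reported above can be reproduced from the stated counts, 8 of 12 women unscreened among those without a primary care provider versus 84 of 206 among those with one; the log-scale interval below is a standard construction and matches the reported 95% CI:

```python
# Relative risk with a log-scale 95% CI, from the counts in the abstract.
import math

def relative_risk(a, n1, c, n2, z=1.96):
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)   # SE of log(RR)
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

rr, lo, hi = relative_risk(8, 12, 84, 206)
print(f"RR = {rr:.1f}, 95% CI [{lo:.2f}, {hi:.2f}]")   # 1.6 [1.06, 2.52]
```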
50. How clinical features are presented matters to weaker diagnosticians.
- Author
-
Eva KW, Wood TJ, Riddle J, Touchie C, and Bordage G
- Subjects
- Diagnosis, Humans, Clinical Competence, Language, Terminology as Topic
- Abstract
Objectives: This study aimed to test the extent to which the use of medicalese (i.e. formal medical terminology and semantic qualifiers) alters the test performance of medical graduates; to tease apart the extent to which any observed differences are driven by language difficulties versus differences in medical knowledge; and to assess the impact of varying the language used to present clinical features on the ability of the test to consistently discriminate between candidates., Methods: Six clinical cases were manipulated in the context of pilot items on the Canadian national qualifying examination. Features indicative of two diagnoses were presented uniformly in lay terms, medical terminology and semantic qualifiers, respectively, and in mixed combinations (e.g. features of one diagnosis were presented using lay terminology and features of the other using medicalese). The rate at which the indicated diagnoses were named was considered as a function of language used, site of training, birthplace and medical knowledge (as measured by overall performance on the examination)., Results: In the mixed conditions, Canadian medical graduates were not influenced by the language used to present the cases, whereas international medical graduates (IMGs) were more likely to favour the diagnosis associated with medical terminology relative to that associated with lay terms. This was true regardless of whether the entire sample or only North American-born candidates were considered. Within the IMG cohort, high performers were not influenced by the language manipulation, whereas low performers were. Uniform use of lay terminology resulted in the highest test reliability compared with the other experimental conditions., Conclusions: The results indicate that the influence of medical terminology is driven more by substandard medical knowledge than by the language issues that challenge some candidates. Implications for both the assessment and education of medical professionals are discussed.
- Published
- 2010
- Full Text
- View/download PDF