9 results for "Meagan Rawls"
Search Results
2. Understanding Academic Self-Concept and Performance in Medical Education
- Author
Kelly M. Harrell, Meagan Rawls, J.K. Stringer, Cherie D. Edwards, Sally A. Santen, and Diane Biskobing
- Subjects
General Medicine, Education
- Published
- 2023
3. Examining Bloom’s Taxonomy in Multiple Choice Questions: Students’ Approach to Questions
- Author
Alicia Richards, Diane M. Biskobing, Sally A. Santen, Meagan Rawls, J. K. Stringer, Eun D. Lee, Jean M. Bailey, and Robert A. Perera
- Subjects
Medicine (miscellaneous), Education, Assessment, Bloom's taxonomy, Multiple choice questions, Clinical reasoning, Mathematics education, Medical students, Psychology
- Abstract
Background: Analytic thinking skills are important to the development of physicians. Educators and licensing boards therefore utilize multiple-choice questions (MCQs) to assess this knowledge and these skills. MCQs are written under two assumptions: that they can be written as higher or lower order according to Bloom's taxonomy, and that students will perceive questions to be at the same taxonomic level as intended. This study seeks to understand students' approach to questions by analyzing differences in students' perception of the Bloom's level of MCQs in relation to their knowledge and confidence.

Methods: A total of 137 students responded to practice endocrine MCQs. Participants indicated their answer to each question, their interpretation of it as higher or lower order, and their degree of confidence in their response.

Results: Although there was no significant association between students' average performance on the content and their question classification (higher or lower), individual students who were less confident in their answer were more than five times as likely (OR = 5.49) to identify a question as higher order than their more confident peers. Students who responded incorrectly to an MCQ were four times as likely to identify it as higher order as peers who responded correctly.

Conclusions: The results suggest that higher-performing, more confident students rely on identifying patterns (even if the question was intended to be higher order). In contrast, less confident students engage in higher-order, analytic thinking even when the question is intended to be lower order. A better understanding of the processes through which students interpret MCQs will help us better understand the development of clinical reasoning skills.
- Published
- 2021
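The odds ratio reported in the abstract above (OR = 5.49 for less confident students) comes from a standard 2×2 contingency calculation. A minimal sketch, using made-up counts for illustration rather than the study's data:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table: (a/b) / (c/d) = (a*d) / (b*c).

    a, b = group 1 counts (outcome present / absent)
    c, d = group 2 counts (outcome present / absent)
    """
    return (a * d) / (b * c)

# Hypothetical counts: 40 of 60 low-confidence students classify a
# question as higher order, versus 15 of 77 high-confidence students.
print(round(odds_ratio(40, 20, 15, 62), 2))  # -> 8.27
```

An OR above 1 means the first group has higher odds of the outcome; identical odds in both groups give OR = 1.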
4. Rapid Feedback: Assessing Pre-clinical Teaching in the Era of Online Learning
- Author
Daniel Walden, Meagan Rawls, Sally A. Santen, Moshe Feldman, Anna Vinnikova, and Alan Dow
- Subjects
Medicine (miscellaneous), Education
- Abstract
Medical schools vary in their approach to providing feedback to faculty. The purpose of this study was to test the effects of rapid student feedback in a course utilizing novel virtual learning methods.

Second-year medical students were offered an optional, short questionnaire at the end of each class session and asked to provide feedback within 48 hours. At the close of each survey, results were emailed to faculty. After the course, students and faculty were asked to rate the effectiveness of this method. This study did not affect administration of the usual end-of-course summative evaluations.

Ninety-one percent of students who participated noted increased engagement in the feedback process, but only 18% on average chose to participate. Faculty rated rapid feedback as more actionable than summative feedback (67%), 50% rated it as more specific, and 42% rated it as more helpful. Some wrote that comments were too granular, and others noted a negative personal emotional response.

Rapid feedback engaged students, provided actionable feedback, and increased communication between students and instructors, suggesting that this approach added value. Care must be taken to reduce the student burden and support the relational aspects of the process.
- Published
- 2022
5. Grounded Theory Approach to Building Antiracism Cultures in Undergraduate Medical Education
- Author
T’keyah Vaughan, Cherie Edwards, Meagan Rawls, Nastassia Savage, Katherine Donowitz, Priyadarshini Pattath, Tiffany Camp, and Debbie DiazGranados
- Subjects
General Medicine, Education
- Published
- 2022
6. Developing Comprehensive Strategies to Evaluate Medical School Curricula
- Author
Moshe Feldman, Meagan Rawls, Susan R. DiGiovanni, Sally A. Santen, Courtney Blondino, and Sara Weir
- Subjects
Program evaluation, Medical education, Learning environment, Lifelong learning, Medicine (miscellaneous), Focus group, Education, Patient safety, Curriculum, Accreditation, Psychology
- Abstract
Evaluation of a medical school curriculum is important for documenting outcomes, demonstrating the effectiveness of learning, engaging in quality improvement, and meeting accreditation requirements. This monograph provides a roadmap and resource for medical schools to meaningfully evaluate their curriculum based on specific metrics. The method of evaluation includes an examination of Kirkpatrick's levels of outcomes: reactions, learning, behavior, and impact. It is important that student outcomes are mapped to curricular objectives.

There are specific outcomes that may be used to determine whether the curriculum has met the institution's goals. The first is comparison to national metrics (the United States Medical Licensing Examinations and the Association of American Medical Colleges Graduation Questionnaire). Second, medical schools collect internal program metrics, which include specific student performance measures such as the number of students graduating, attrition, and matching to specialty. Further, schools may examine student performance and surveys in the preclerkship and clinical phases (e.g., grades, course failures, survey responses about the curriculum), including qualitative responses on surveys or in focus groups.

Because the learning environment is critical to learning, a deep dive into the environment and mistreatment may be important for program evaluation; this may be performed by examining the Graduation Questionnaire, internal surveys, and mistreatment reporting. Finally, there are numerous attitudinal instruments that may help medical schools understand their students' development at one point or over time, including measurements of stress, wellness, burnout, lifelong learning, and attitudes toward patient safety. Together, examining this composite of outcomes helps schools understand and improve the medical school curriculum.
- Published
- 2018
7. Order matters: Alternative survey forms may impact responses
- Author
Meagan Rawls and Moshe Feldman
- Subjects
Surveys and Questionnaires, Humans, General Medicine, Psychology, Education
- Published
- 2020
8. Versatility in multiple mini-interview implementation: Rater background does not significantly influence assessment scoring
- Author
Keith D. Baker, Meagan Rawls, Sally A. Santen, Moshe Feldman, and Roy T. Sabo
- Subjects
Medical education, Students, Medical, Interview, Medical school, Reproducibility of Results, Cognition, General Medicine, Education, Surveys and Questionnaires, School Admission Criteria, Reliability (statistics), Schools, Medical, Psychology
- Abstract
The medical school admissions process seeks to assess a core set of cognitive and non-cognitive competencies that reflect professional readiness and institutional mission alignment. The standardized format of multiple mini-interviews (MMIs) can enhance these assessments, and many medical schools have therefore switched to MMIs for candidate interviews. However, because MMIs are resource-intensive, admissions deans use interviewers from a variety of backgrounds and professions. Here, we analyzed the MMI process for the 2018 admissions cycle at the VCU School of Medicine, where 578 applicants were interviewed by 126 raters from five distinct backgrounds: clinical faculty, basic science faculty, medical students, medical school administrative staff, and community members. We found that interviewer background did not significantly influence MMI scoring, allaying a potential concern about the consistency and reliability of the assessment.
- Published
- 2019
9. 461. Electronic Hand Hygiene Compliance Monitoring Systems: Not All Are Created Equal
- Author
Nadia Masroor, Allison Fisher, Meagan Rawls, Michelle Doll, Nital Appelbaum, Michael P. Stevens, Gonzalo Bearman, Trina Trimmer, and Kaila Cooper
- Subjects
Infectious Diseases, Oncology, Hygiene, Medical emergency, Compliance Monitoring
- Abstract
Background: While direct observation is considered the gold standard for hand hygiene (HH) surveillance, there is growing interest in the implementation of electronic monitoring systems, which claim to accurately capture individual-level HH performance.

Methods: Two types of electronic hand hygiene monitoring systems (EHHMS) were trialed at an 865-bed academic medical center over an 18-month period. Each type of EHHMS was piloted in two inpatient units, and hospital employees who had contact with patients and/or the patient environment were eligible to participate. In each trial, participants received standard training and were then asked to wear EHHMS badges while continuing their normal workflow. Methods of assessment included regular review of EHHMS reports, an inter-rater reliability analysis comparing each EHHMS to direct observation by a trained HH observer, and a qualitative electronic survey to assess the acceptability of the EHHMS. The HH compliance goal was set at 90%.

Results: In the first pilot, 279 employees volunteered to trial the Type A EHHMS for 14 weeks, with an overall HH compliance of 30% (87,688 opportunities). In the second pilot, 169 employees volunteered to trial the Type B EHHMS for 12 weeks, with an overall HH compliance of 93% (363,272 opportunities). The voluntary survey response rate was 32% (90/279) for Type A and 40% (67/169) for Type B. The majority of respondents consistently used the EHHMS in their daily workflow (Type A: 82%, 68/83; Type B: 82%, 55/67), and only a minority reported feeling apprehensive about using it (Type A: 19%, 16/83; Type B: 22%, 15/67).
Inter-rater reliability assessment of piloted EHHMS:

Type | Unit   | Beds | Technology compliance | HH observer compliance | Kappa statistic | Technology accuracy
Type A | Unit 1 | 20 | 15% (N = 86)  | 90.8% (N = 308) | 0.039 | 11%
Type A | Unit 2 | 30 | 42% (N = 98)  | 89% (N = 470)   | 0.180 | 54%
Type B | Unit 3 | 30 | 93% (N = 116) | 90% (N = 48)    | 0.81  | 97%
Type B | Unit 4 | 30 | 87% (N = 141) | 92% (N = 60)    | 0.74  | 95%

Conclusion: The Type B EHHMS captured our healthcare workers' HH performance during clinical workflow with greater accuracy and captured more HH events than Type A. EHHMS may provide an alternative method for capturing HH compliance in the healthcare setting. Hospitals considering the use of an EHHMS should assess the technology's ability to accurately capture HH performance in the clinical workflow prior to full hospital-wide implementation.

Disclosures: All authors: No reported disclosures.
- Published
- 2018
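The kappa statistics in the abstract above measure chance-corrected agreement between each electronic system and the trained observer. A minimal sketch of Cohen's kappa for a 2×2 agreement table, using made-up counts for illustration rather than the study's data:

```python
def cohens_kappa(both_yes, a_only, b_only, both_no):
    """Cohen's kappa for two raters making yes/no judgments.

    both_yes = both raters say "compliant", both_no = both say "not",
    a_only / b_only = only rater A / only rater B says "compliant".
    """
    n = both_yes + a_only + b_only + both_no
    p_o = (both_yes + both_no) / n  # observed agreement
    # chance agreement from each rater's marginal "yes"/"no" rates
    p_yes = ((both_yes + a_only) / n) * ((both_yes + b_only) / n)
    p_no = ((b_only + both_no) / n) * ((a_only + both_no) / n)
    p_e = p_yes + p_no
    return (p_o - p_e) / (1 - p_e)

# Hypothetical counts for 100 jointly observed hand-hygiene opportunities:
print(round(cohens_kappa(80, 10, 5, 5), 2))  # -> 0.32
```

Kappa near 0 (as for Type A's units) means agreement no better than chance, while values above roughly 0.7 (as for Type B's units) indicate substantial agreement.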
Discovery Service for Jio Institute Digital Library