Teaching practicing surgeons what not to do: An analysis of instruction fluidity during a simulation-based continuing medical education course
- Author
- Carla M. Pugh, Ajit K. Sachdeva, Alexandra A. Rosser, David Williamson Shaffer, Sarah A. Jung, and Martha Godfrey
- Subjects
- Medical education, Continuing medical education, Simulation training, Teaching, Learning, Curriculum, Surgeons, Herniorrhaphy, Laparoscopy, Clinical competence
- Abstract
Background: Interest is growing in simulation-based continuing medical education courses for practicing surgeons. However, little research has explored the instruction employed during these courses. This study examines the instructional practices used during an annual simulation-based continuing medical education course.
Methods: Audio–video data were collected from surgeon instructors (n = 12) who taught a simulated laparoscopic hernia repair continuing medical education course across 2 years. Surgeon learners (n = 58) were grouped by their self-reported laparoscopic and hernia repair experience. Instructors' transcribed dialogue was automatically coded for 5 types of responses to learners' questions: anecdotes, confirming, correcting, guidance, and what not to do. Differences in these responses were measured against the progress of the simulations and across learners with different experience levels. Postcourse interviews with instructors were conducted for additional qualitative validation.
Results: t tests of instructor responses revealed that instructors were significantly more likely to answer in forms coded as anecdotes when responding to relative experts and in forms coded as what not to do when responding to novices. Linear regressions of each code against the normalized progression of each simulation revealed a significant relationship between progression through a simulation and the frequency of the what not to do code for less-experienced learners. Postcourse interviews revealed that instructors continuously assess participants throughout a session and modify their teaching strategies accordingly.
Conclusion: Instructors significantly modified the focus of their teaching as a function of their learners' self-reported experience levels, their own assessment of learner needs, and learner progression through the training sessions.
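The automated coding step described in the Methods can be illustrated with a brief sketch. The abstract does not describe the study's actual classifier or code definitions, so the snippet below assumes a simple keyword-matching scheme; the keyword lists and the `code_utterance` function are hypothetical and illustrative only.

```python
# A minimal sketch, assuming a simple keyword-matching scheme, of how
# transcribed instructor dialogue might be automatically assigned the five
# response codes named in the abstract. The actual coding method used in
# the study is not specified here; all keyword patterns are hypothetical.
import re

CODES = {
    "anecdote":       [r"\bone time\b", r"\bI once\b", r"\bin my experience\b"],
    "confirming":     [r"\bthat's right\b", r"\bexactly\b", r"\bwell done\b"],
    "correcting":     [r"\binstead\b", r"\bnot quite\b", r"\bactually\b"],
    "guidance":       [r"\btry\b", r"\bnext\b", r"\bstart by\b"],
    "what_not_to_do": [r"\bdon't\b", r"\bnever\b", r"\bavoid\b"],
}

def code_utterance(utterance: str) -> list[str]:
    """Return every code whose keyword patterns match the utterance."""
    text = utterance.lower()
    return [code for code, patterns in CODES.items()
            if any(re.search(p, text) for p in patterns)]

print(code_utterance("Don't pull on the mesh there; instead, try regrasping."))
# -> ['correcting', 'guidance', 'what_not_to_do']
```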
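The two statistical analyses reported in the Results can likewise be sketched. The snippet below, using synthetic data, shows a t test comparing how often a code appears with novice versus relatively expert learners, and a linear regression of code frequency against normalized progression through a simulation. The sample sizes echo the abstract's n = 12 instructors and n = 58 learners, but all values, effect sizes, and variable names are illustrative, not the study's data.

```python
# A minimal sketch, with hypothetical data, of the analyses the abstract
# describes: (1) a t test comparing instructors' use of the "what not to do"
# code across learner experience levels, and (2) a linear regression of code
# frequency against normalized progression (0-1) through a simulation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Per-instructor counts of the "what not to do" code, split by the
# self-reported experience level of the learners taught (hypothetical).
wntd_with_novices = rng.poisson(6, size=12)
wntd_with_experts = rng.poisson(2, size=12)

# (1) t test: is the code used significantly more often with novices?
t_stat, p_value = stats.ttest_ind(wntd_with_novices, wntd_with_experts)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# (2) Linear regression: code frequency vs. normalized progression through
# the simulated procedure, pooled across sessions (hypothetical trend).
progression = rng.uniform(0, 1, size=58)
code_frequency = 4 - 3 * progression + rng.normal(0, 0.5, size=58)
slope, intercept, r, p, se = stats.linregress(progression, code_frequency)
print(f"slope = {slope:.2f}, r^2 = {r**2:.2f}, p = {p:.4f}")
```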
- Published
- 2019