34 results for "Gallagher, Anthony G."
Search Results
2. Live Observational Objective Assessment of Operative Performance in a Cadaveric Model is Equivalent to Delayed Video-Based Assessment.
- Author
- Angelo, Richard L., St. Pierre, Pat, Tauro, Joe, and Gallagher, Anthony G.
- Abstract
Purpose: The purpose of our study was to compare real-time, live observational scoring with delayed retrospective video review of operative performance and to determine whether the evaluation method affected the attainment of proficiency benchmarks. Methods: Sixteen arthroscopy/sports medicine fellows and 2 senior residents completed training to perform arthroscopic Bankart repairs (ABRs) and arthroscopic rotator cuff repairs (ARCRs) using a proficiency-based progression curriculum. Each final operative performance for 15 randomly selected ABRs and 13 ARCRs performed on cadavers was scored live (observation during the operative performance) and on delayed video review (6-8 weeks) by 1 of 15 trained raters using validated metric-based (step and error) assessment tools. The inter-rater reliability (IRR) of live versus video review by a single rater was calculated, and changes to the trainee's attainment of the proficiency benchmarks were noted. The correlation coefficient (r) and the R2 were also calculated for the paired scores from the randomly selected performances. Results: No significant differences in the observed IRR agreement or the attainment of the proficiency benchmarks were found when comparing live to video assessment for either ABR or ARCR. The correlation coefficients r and R2 were considerably lower than the agreement coefficient (IRR) for rotator cuff steps (e.g., R2 = 0.74 vs. IRR = 0.97, P = 0.001), Bankart errors (R2 = 0.73 vs. IRR = 0.98, P = 0.006), and rotator cuff errors (R2 = 0.48 vs. IRR = 0.98, P = 0.0002). Conclusions: Real-time live and delayed video-based scoring are essentially equivalent for the metric-based assessment of operative performance in ABRs and ARCRs.
When the IRR agreement coefficient was compared with the correlation coefficients, the former showed greater homogeneity and measurement precision. Clinical Relevance: Metric-based live scoring is reliable and accurate for operative performance assessment, including high-stakes evaluations.
- Published
- 2021
3. Perceptual, visuospatial, and psychomotor abilities correlate with duration of training required on a virtual-reality flexible endoscopy simulator
- Author
- Ritter, E. Matt, McClusky, David A., Gallagher, Anthony G., Enochsson, Lars, and Smith, C. Daniel
- Subjects
Medical colleges -- Training, Synthetic training device industry -- Training, Endoscopic surgery -- Training, Endoscopy -- Training, Health
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.amjsurg.2006.03.003
Byline: E. Matt Ritter (a)(b), David A. McClusky (a), Anthony G. Gallagher (a), Lars Enochsson (c), C. Daniel Smith (a)
Keywords: Abilities; Flexible endoscopy; Objective assessment; Simulation; Visuospatial abilities
Abstract: Trainees acquire endoscopic skills at different rates. Fundamental abilities testing could predict the amount of training required to reach a performance goal on a virtual-reality simulator.
Author Affiliation: (a) Emory Simulation, Training, and Robotics (E*STAR) Laboratory, Emory Endosurgery Unit, Department of Surgery, Emory University School of Medicine, Atlanta, GA, United States; (b) National Capitol Area Medical Simulation Center, Norman M. Rich Department of Surgery, Uniformed Services University, 4301 Jones Bridge Rd., Bethesda, MD 20814, United States; (c) Center for Advanced Medical Simulation, Karolinska Institute at Karolinska University Hospital, Stockholm, Sweden
Article History: Received 4 July 2005; Revised 15 March 2006
- Published
- 2006
4. Video-assisted surgery represents more than a loss of three-dimensional vision
- Author
- Gallagher, Anthony G., Ritter, E. Matt, Lederman, Andrew B., McClusky, David A., and Smith, C. Daniel
- Subjects
Health
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.amjsurg.2004.04.008
Byline: Anthony G. Gallagher (a)(b), E. Matt Ritter (a), Andrew B. Lederman (a), David A. McClusky (a), C. Daniel Smith (a)
Keywords: Video-assisted surgery; Video imaging; Laparoscopy; Depth perception
Abstract: Loss of depth cues is a major challenge facing surgeons performing video-assisted surgery (VAS). Whether the degradation of image quality from a video-displayed image plays a direct role in performance of VAS has not been studied.
Author Affiliation: (a) Emory Endosurgery Unit, Department of Surgery, H-124, Emory University School of Medicine, 1364 Clifton Road, NE, Atlanta, GA 30322, USA; (b) School of Psychology, Queens University, Belfast, Northern Ireland
Article History: Received 26 September 2003; Revised 15 April 2004
- Published
- 2005
5. A comparison between randomly alternating imaging, normal laparoscopic imaging, and virtual reality training in laparoscopic psychomotor skill acquisition
- Author
- Jordan, Julie-Anne, Gallagher, Anthony G., McGuigan, Jim, McGlade, Kieran, and McClure, Neil
- Subjects
Virtual reality -- Usage, Laparoscopic surgery -- Study and teaching, Surgeons -- Training, Health
- Published
- 2000
6. Virtual reality training for the operating room and cardiac catheterisation laboratory
- Author
- Gallagher, Anthony G. and Cates, Christopher U.
- Subjects
Cardiologists -- Practice, Cardiologists -- Training, Cardiac catheterization -- Training, Medical errors -- Prevention, Heart -- Surgery, Heart -- Training
- Published
- 2004
7. Proficiency-based Progression Training: A Scientific Approach to Learning Surgical Skills
- Author
- Gallagher, Anthony G., De Groote, Ruben, Paciotti, Marco, and Mottrie, Alexandre
- Published
- 2022
8. Inter-rater Reliability for Metrics Scored in a Binary Fashion-Performance Assessment for an Arthroscopic Bankart Repair.
- Author
- Gallagher, Anthony G., Ryu, Richard K.N., Pedowitz, Robert A., Henn, Patrick, and Angelo, Richard L.
- Abstract
Purpose: To determine the inter-rater reliability (IRR) of a procedure-specific checklist scored in a binary fashion for the evaluation of surgical skill and whether it meets the minimum level of agreement (≥0.8 between 2 raters) required for high-stakes assessment. Methods: In a prospective, randomized, and blinded fashion, and after detailed assessment training, 10 Arthroscopy Association of North America Master/Associate Master faculty arthroscopic surgeons (in 5 pairs) with an average of 21 years of surgical experience assessed the video-recorded 3-anchor arthroscopic Bankart repair performance of 44 postgraduate year 4 or 5 residents from 21 Accreditation Council for Graduate Medical Education orthopaedic residency training programs from across the United States. Results: No paired scores of resident surgeon performance evaluated by the 5 teams of faculty assessors dropped below the 0.8 IRR level (mean = 0.93; range, 0.84-0.99; standard deviation = 0.035). A comparison between the 5 assessor groups with a 1-factor analysis of variance showed no significant difference between the groups (P = .205). Pearson's product-moment correlation coefficient revealed a strong and statistically significant negative correlation (-0.856, P < .000), indicating that as intraoperative error rate scores increased, the IRR decreased. Conclusions: Arthroscopy Association of North America shoulder faculty raters from across the United States showed high levels of IRR in the assessment of an arthroscopic 3-anchor Bankart repair procedure. All paired assessments were above the 0.8 level, and the mean IRR across all resident assessments was 0.93, indicating that they could be used for high-stakes decisions. Clinical Relevance: With the move toward outcomes-based performance evaluation for graduate medical education, high-stakes assessments of surgical skill will require robust, reliable measurement tools that are able to withstand challenge.
Surgical checklists employing metrics scored in a binary fashion meet this need and can show high (>80%) IRR.
- Published
- 2018
9. Objective Assessment of Knot-Tying Proficiency With the Fundamentals of Arthroscopic Surgery Training Program Workstation and Knot Tester.
- Author
- Pedowitz, Robert A., Nicandri, Gregg T., Angelo, Richard L., Ryu, Richard K.N., and Gallagher, Anthony G.
- Abstract
Purpose: To assess a new method for biomechanical assessment of arthroscopic knots and to establish proficiency benchmarks using the Fundamentals of Arthroscopic Surgery Training (FAST) Program workstation and knot tester. Methods: The first study group included 20 faculty at an Arthroscopy Association of North America resident arthroscopy course (19.9 ± 8.25 years in practice). The second group comprised 30 experienced surgeons attending an Arthroscopy Association of North America fall course (17.1 ± 19.3 years in practice). The training group included 44 postgraduate year 4 or 5 orthopaedic residents in a randomized, prospective study of proficiency-based training, with 3 subgroups: group A, standard training (n = 14); group B, workstation practice (n = 14); and group C, proficiency-based progression using the knot tester (n = 16). Each subject tied 5 arthroscopic knots backed up by 3 reversed hitches on alternating posts. Knots were tied under video control around a metal mandrel through a cannula within an opaque dome (FAST workstation). Each suture loop was stressed statically at 15 lb for 15 seconds. A calibrated sizer measured loop expansion. Knot failure was defined as loop expansion of 3 mm or greater. Results: In the faculty group, 24% of knots "failed" under load. Performance was inconsistent: 12 faculty had all knots pass, whereas 2 had all knots fail. In the second group of practicing surgeons, 21% of the knots failed under load. Overall, 56 of 250 knots (22%) tied by experienced surgeons failed. For the postgraduate year 4 or 5 residents, the aggregate knot failure rate was 26% for the 220 knots tied. Group C residents had an 11% knot failure rate (half the overall faculty rate, P = .013). Conclusions: The FAST workstation and knot tester offer a simple and reproducible educational approach for enhancement of arthroscopic knot-tying skills.
Our data suggest that there is significant room for improvement in the quality and consistency of these important arthroscopic skills, even for experienced arthroscopic surgeons. Level of Evidence: Level II, prospective comparative study.
- Published
- 2015
10. A Proficiency-Based Progression Training Curriculum Coupled With a Model Simulator Results in the Acquisition of a Superior Arthroscopic Bankart Skill Set.
- Author
- Angelo, Richard L., Ryu, Richard K.N., Pedowitz, Robert A., Beach, William, Burns, Joseph, Dodds, Julie, Field, Larry, Getelman, Mark, Hobgood, Rhett, McIntyre, Louis, and Gallagher, Anthony G.
- Abstract
Purpose: To determine the effectiveness of proficiency-based progression (PBP) training using simulation, both compared with the same training without proficiency requirements and compared with a traditional resident course, for learning to perform an arthroscopic Bankart repair (ABR). Methods: In a prospective, randomized, blinded study, 44 postgraduate year 4 or 5 orthopaedic residents from 21 Accreditation Council for Graduate Medical Education-approved US orthopaedic residency programs were randomly assigned to 1 of 3 skills training protocols for learning to perform an ABR: group A, traditional (routine Arthroscopy Association of North America Resident Course) (control, n = 14); group B, simulator (modified curriculum adding a shoulder model simulator) (n = 14); or group C, PBP (PBP plus the simulator) (n = 16). At the completion of training, all subjects performed a 3-suture-anchor ABR on a cadaveric shoulder, which was videotaped and scored in blinded fashion with the use of previously validated metrics. Results: The PBP-trained group (group C) made 56% fewer objectively assessed errors than the traditionally trained group (group A) (P = .011) and 41% fewer than group B (P = .049); both comparisons were statistically significant. The proficiency benchmark was achieved on the final repair by 68.7% of participants in group C compared with 36.7% in group B and 28.6% in group A.
Compared with group A, group B participants were 1.4 times, group C participants were 5.5 times, and the group C participants who met all intermediate proficiency benchmarks were 7.5 times as likely to achieve the final proficiency benchmark. Conclusions: A PBP training curriculum and protocol, coupled with the use of a shoulder model simulator and previously validated metrics, produces a superior arthroscopic Bankart skill set when compared with traditional and simulator-enhanced training methods. Clinical Relevance: Surgical training combining PBP and a simulator is efficient and effective. Patient safety could be improved if surgical trainees participated in PBP training using a simulator before treating surgical patients.
- Published
- 2015
11. The Bankart Performance Metrics Combined With a Cadaveric Shoulder Create a Precise and Accurate Assessment Tool for Measuring Surgeon Skill.
- Author
- Angelo, Richard L., Ryu, Richard K.N., Pedowitz, Robert A., and Gallagher, Anthony G.
- Abstract
Purpose: To determine whether previously validated performance metrics for an arthroscopic Bankart repair (ABR), coupled with a cadaveric shoulder, constitute a valid assessment tool able to discriminate between the performances of experienced and novice surgeons, and to establish a proficiency benchmark for an ABR using a cadaveric shoulder. Methods: Ten master/associate master faculty from an Arthroscopy Association of North America Resident Course (experienced group) were compared with 12 postgraduate year 4 and postgraduate year 5 orthopaedic residents (novice group). Each group was instructed to perform a diagnostic arthroscopy and a 3-suture-anchor Bankart repair on a cadaveric shoulder. The procedure was videotaped in its entirety and independently scored in blinded fashion by a pair of trained reviewers. Scoring was based on defined and previously validated metrics for an ABR and included steps, errors, "sentinel" (more serious) errors, and time. Results: The inter-rater reliability was 0.92. Novice surgeons made significantly more errors (5.86 v 2.95, P = .013), showed more performance variability (SD, 1.86 v 0.55), and took longer to perform the procedure (45.5 minutes v 25.9 minutes, P < .001). The greatest difference in errors related to suture delivery and management, exclusive of knot tying (1.95 v 0.45, P = .024). Conclusions: The assessment tool composed of validated arthroscopic Bankart metrics coupled with a cadaveric shoulder accurately distinguishes the performance of experienced from novice orthopaedic surgeons. A benchmark based on the mean performance of the experienced group includes completion of a 3-anchor Bankart repair with no more than 3 total errors and 1 sentinel error. Clinical Relevance: Validated procedural metrics combined with the use of a cadaveric shoulder can be used to assess the performance of an ABR. The methodology used may serve as a template for outcomes-based procedural skills training in general.
- Published
- 2015
12. The Bankart Performance Metrics Combined With a Shoulder Model Simulator Create a Precise and Accurate Training Tool for Measuring Surgeon Skill.
- Author
- Angelo, Richard L., Pedowitz, Robert A., Ryu, Richard K.N., and Gallagher, Anthony G.
- Abstract
Purpose: To determine whether a dry shoulder model simulator coupled with previously validated performance metrics for an arthroscopic Bankart repair (ABR) is a valid tool able to discriminate between the performance of experienced and novice surgeons, and to establish a proficiency benchmark for an ABR using a model simulator. Methods: We compared an experienced group of arthroscopic shoulder surgeons (Arthroscopy Association of North America faculty) (n = 12) with a novice group of postgraduate year 4 or 5 orthopaedic residents (n = 7). All surgeons were instructed to perform a diagnostic arthroscopy and a 3-suture-anchor Bankart repair on a dry shoulder model. Each procedure was videotaped in its entirety and scored in blinded fashion independently by 2 trained reviewers. Scoring used previously validated metrics for an ABR and included steps, errors, and "sentinel" (more serious) errors. Results: The inter-rater reliability among pairs of raters averaged 0.93. The experienced group made 63% fewer errors, committed 79% fewer sentinel errors, and performed the procedure in 42% less time than the novice group (all significant differences). The greatest differences in errors between the groups involved anchor preparation and insertion, suture delivery and management, and knot tying. Conclusions: The tool comprising the validated ABR metrics coupled with a dry shoulder model simulator accurately distinguishes between the performance of experienced and novice orthopaedic surgeons. A performance benchmark based on the mean performance of the experienced group includes completion of a 3-anchor Bankart repair with no more than 4 total errors and 1 sentinel error. Clinical Relevance: The combination of performance metrics and an arthroscopic shoulder model simulator can be used to improve the effectiveness of surgical skills training for an ABR. The methodology used may serve as a template for outcomes-based procedural skills training in general.
- Published
- 2015
13. Metric Development for an Arthroscopic Bankart Procedure: Assessment of Face and Content Validity.
- Author
- Angelo, Richard L., Ryu, Richard K.N., Pedowitz, Robert A., and Gallagher, Anthony G.
- Abstract
Purpose: To establish the metrics (operational definitions) necessary to characterize a reference arthroscopic Bankart procedure, and to seek consensus from experienced shoulder arthroscopists on the appropriateness of the steps and errors identified. Methods: Three experienced arthroscopic shoulder surgeons and an experimental psychologist (comprising the Metrics Group) deconstructed an arthroscopic Bankart procedure. Fourteen full-length videos were analyzed to identify the essential steps and potential errors. Sentinel (i.e., more serious) errors were defined as those either (1) potentially jeopardizing the procedure outcome or (2) creating iatrogenic damage to the shoulder. The metrics were stress-tested for clarity and for the ability to be scored in binary fashion during video review as either occurring or not occurring. The metrics were then subjected to analysis by a panel of 27 experienced arthroscopic shoulder surgeons to obtain face and content validity using a modified Delphi Panel methodology (consensus opinion of experienced surgeons rendered by cyclical deliberations). Results: Forty-five steps and 13 phases characterizing an arthroscopic Bankart procedure were identified. Seventy-seven procedural errors were specified, with 20 designated as sentinel errors. The modified Delphi Panel deliberation produced the following changes: 2 metrics were deleted, 1 was added, and 5 were modified. Consensus on the resulting Bankart metrics was obtained, and face and content validity were verified. Conclusions: This study confirms that a core group of experienced arthroscopic surgeons is able to perform task deconstruction of an arthroscopic Bankart repair and create unambiguous step and error definitions (metrics) that accurately characterize the essential components of the procedure. Analysis and revision by a larger panel of experienced arthroscopists validated the Bankart metrics.
Clinical Relevance: The ability to perform task deconstruction and validate the resulting metrics will play a key role in improving surgical skills training and assessing trainee progression toward proficiency.
- Published
- 2015
14. Objective assessment of surgical performance and its impact on a national selection programme of candidates for higher surgical training in plastic surgery.
- Author
- Carroll, Sean M., Kennedy, A.M., Traynor, Oscar, and Gallagher, Anthony G.
- Subjects
PLASTIC surgeons, PLASTIC surgery, ROBUST control, SIMULATION methods & models, OPERATIVE surgery, SURGERY career counseling, TRAINING
- Abstract
Summary: Objective: The objective of this study was to develop and validate a transparent, fair, and objective assessment programme for the selection of surgical trainees into higher surgical training (HST) in plastic surgery in the Republic of Ireland. Methods: Thirty-four individuals applied for HST in plastic surgery at the Royal College of Surgeons in Ireland (RCSI) in the academic years 2005–2006 and 2006–2007. Eighteen were short-listed for interview and further assessment. All applicants were required to report on their undergraduate educational performance and their postgraduate professional development. Short-listed applicants completed validated objective assessment simulations of surgical skills, an interview, and an assessment of their suitability for a career in surgery. Results: When applicants' short-listing scores were combined with their interview scores and the assessment of their suitability for a career in surgery, individuals who were selected for HST in plastic surgery performed significantly better than those who were not (P < 0.002). When the assessment of technical skills scores was added, the significance level of this difference increased further (P < 0.0001), as did the statistical power of the difference (to 99.9%), thus increasing the robustness of the selection package. Conclusion: The results from this study suggest that the assessment protocol we used to select individuals for HST in plastic surgery reliably and statistically significantly discriminated between the performances of candidates.
- Published
- 2009
15. Proficiency-Based Progression Surgical Training: Preparation for Finishing School.
- Author
- Angelo, Richard L. and Gallagher, Anthony G.
- Published
- 2021
16. Learning Curves and Reliability Measures for Virtual Reality Simulation in the Performance Assessment of Carotid Angiography
- Author
- Patel, Amar D., Gallagher, Anthony G., Nicholson, William J., and Cates, Christopher U.
- Subjects
VIRTUAL reality in medicine, OCCUPATIONAL training, ARTERIOGRAPHY, SURGICAL stents, ANGIOGRAPHY, VASCULAR surgery
- Abstract
Objectives: Improvement in performance as measured by metric-based procedural errors must be demonstrated if virtual reality (VR) simulation is to be used as a valid means of proficiency assessment and improvement in procedural-based medical skills. Background: The Food and Drug Administration requires completion of VR simulation training for physicians learning to perform carotid stenting. Methods: Interventional cardiologists (n = 20) participating in the Emory NeuroAnatomy Carotid Training program underwent an instructional course on carotid angiography and then performed five serial simulated carotid angiograms on the Vascular Interventional System Trainer (VIST) VR simulator (Mentice AB, Gothenburg, Sweden). Of the subjects, 90% completed the full assessment. Procedure time (PT), fluoroscopy time (FT), contrast volume, and composite catheter handling errors (CE) were recorded by the simulator. Results: An improvement was noted in PT, contrast volume, FT, and CE when comparing the subjects' first and last simulations (all p < 0.05). The internal consistency of the VIST VR simulator as assessed with standardized coefficient alpha was high (range 0.81 to 0.93), except for FT (alpha = 0.36). Test-retest reliability was high for CE (r = 0.9, p = 0.0001). Conclusions: A learning curve with improved performance was demonstrated on the VIST simulator. This study represents the largest collection of such data to date in carotid VR simulation and is the first report to establish the internal consistency of the VIST simulator and its test-retest reliability across several metrics. These metrics are fundamental benchmarks in the validation of any measurement device. Composite catheter handling errors represent measurable dynamic metrics with high test-retest reliability that are required for the high-stakes assessment of procedural skills.
- Published
- 2006
17. Objective psychomotor skills assessment of experienced and novice flexible endoscopists with a virtual reality simulator
- Author
- Ritter, E. Matt, McClusky III, David A., Lederman, Andrew B., Gallagher, Anthony G., and Smith, C. Daniel
- Subjects
MOTOR ability testing, ENDOSCOPY, COMPUTER simulation, EDUCATIONAL tests & measurements, DIAGNOSIS, BIOLOGICAL models, PSYCHOLOGY of movement, RESEARCH evaluation, USER interfaces
- Abstract
The objective of this study was to determine whether the GI Mentor II virtual reality simulator can distinguish the psychomotor skills of intermediately experienced endoscopists from those of novices, and do so with a high level of consistency and reliability. A total of five intermediate and nine novice endoscopists were evaluated using the EndoBubble abstract psychomotor task. Each subject performed three repetitions of the task. Performance and error data were recorded for each trial. The intermediate group performed better than the novice group in each trial. The differences were significant in trial 1 for balloons popped (P = .001), completion time (P = .04), and errors (P = .03). Trial 2 showed significance only for balloons popped (P = .002). Trial 3 showed significance for balloons popped (P = .004) and errors (P = .008). The novice group showed significant improvement between trials 1 and 3 (P < .05). No improvement was noted in the intermediate group. Measures of consistency and reliability were greater than 0.8 in both groups, with the exception of novice completion time, where test-retest reliability was 0.74. The GI Mentor II simulator can distinguish between novice and intermediate endoscopists and assesses skills with the levels of consistency and reliability required for high-stakes assessment.
- Published
- 2003
18. Psychomotor skills assessment in practicing surgeons experienced in performing advanced laparoscopic procedures
- Author
- Gallagher, Anthony G., Smith, C. Daniel, Bowers, Steven P., Seymour, Neal E., Pearson, Adam, McNatt, Steven, Hananel, David, and Satava, Richard M.
- Subjects
ENDOSCOPIC surgery, PSYCHOMOTOR disorders, SURGEONS
- Abstract
Background: Minimally invasive surgery (MIS) has introduced a new and unique set of psychomotor skills for a surgeon to acquire and master. Although assessment technologies have been proposed, precise and objective psychomotor skills assessment of surgeons performing laparoscopic procedures has not been detailed. Study Design: Two hundred ten surgeons attending the 2001 annual meeting of the American College of Surgeons in New Orleans who reported having completed more than 50 laparoscopic procedures participated. Subjects were required to complete one box-trainer laparoscopic cutting task and a similar virtual reality task. These tasks were specifically designed to test only psychomotor, not cognitive, skills. Both tasks were completed twice. Performance of the tasks was assessed and analyzed. Demographic and laparoscopic experience data were also collected. Results: Complete data were available on 195 surgeons. In this group, surgeons performed the box-trainer task better with their dominant hand (p < 0.0001), and there was a strong and statistically significant correlation between trials (r = 0.47-0.64, p < 0.0001). After raw data were transformed to z-scores (mean = 0, SD = 1), between 2% and 12% of surgeons were shown to perform more than two standard deviations from the mean; some surgeons' performance was 20 standard deviations from the mean. Minimally Invasive Surgical Trainer Virtual Reality metrics demonstrated high measurement consistency as assessed by coefficient alpha (α = 0.849). Conclusions: Objective assessment of laparoscopic psychomotor skills is now possible. Surgeons who had performed more than 50 laparoscopic procedures showed considerable variability in their performance on a simple laparoscopic and virtual reality task. Approximately 10% of surgeons tested performed the task significantly worse than the group's average performance.
Studies such as this may form the methodology for establishing criterion levels and performance objectives in the objective assessment of the technical skills component of surgical competence.
- Published
- 2003
19. Surgical competence and surgical proficiency: definitions, taxonomy, and metrics
- Author
- Satava, Richard M., Gallagher, Anthony G., and Pellegrini, Carlos A.
- Published
- 2003
20. A Proficiency-Based Progression Simulation Training Curriculum to Acquire the Skills Needed in Performing Arthroscopic Bankart and Rotator Cuff Repairs-Implementation and Impact.
- Author
- Angelo, Richard L., St Pierre, Pat, Tauro, Joe, Gallagher, Anthony G., and Shoulder PBP Instructional Faculty
- Abstract
Purpose: To investigate the impact of a proficiency-based progression (PBP) curriculum employed to teach trainees the skills needed to demonstrate proficiency for an arthroscopic Bankart repair (ABR) and an arthroscopic rotator cuff repair (ARCR) by objectively comparing pre- and immediate post-course performances. Methods: In a prospective study, 16 arthroscopy/sports medicine fellows and 2 senior residents (complete group: N = 18) were randomly assigned to perform a precourse cadaveric ABR (Bankart subgroup: N = 6), ARCR (cuff subgroup: N = 6), or basic skills on a shoulder simulator (N = 6). After completing a PBP training curriculum, all 18 registrants performed both an ABR and an ARCR, scored in real time by trained raters using previously validated metrics. Results: The Bankart subgroup made 58% fewer objectively assessed errors at the completion of the course than at baseline (P = .004, confidence interval -1.449 to -0.281), and performance variability was substantially reduced (standard deviation = 5.89 vs 2.81). The cuff subgroup also made 58% fewer errors (P = .001, confidence interval -1.376 to 0.382) and showed a similar reduction in performance variability (standard deviation = 5.42 vs 2.1). Only one subject's precourse baseline performance met the proficiency benchmark, compared with 89% and 83% of all registrants on the final ABR and ARCR cadaveric assessments, respectively. Conclusions: The results of this study reject the null hypothesis. They demonstrate that the implementation of a PBP simulation curriculum to train the skills necessary to perform arthroscopic Bankart and rotator cuff repairs results in a large and statistically significant improvement in the trainees' ability to meet the 2 related performance benchmarks.
Proficiency was demonstrated by 89% and 83% of the trainees for an ABR and an ARCR, respectively, in a two-and-one-half-day course. Clinical Relevance: Surgical training employing a PBP curriculum is efficient and effective and has the potential to improve patient safety.
- Published
- 2021
- Full Text
- View/download PDF
21. Arthroscopic Rotator Cuff Repair Metrics: Establishing Face, Content, and Construct Validity in a Cadaveric Model.
- Author
-
Angelo, Richard L., Tauro, Joe, St. Pierre, Pat, Ross, Glen, Voloshin, Ilya, Shafer, Ben, Ryu, Richard K. N., McIntyre, Louis, and Gallagher, Anthony G.
- Abstract
Purpose: To create arthroscopic rotator cuff repair (ARCR) performance metrics and determine their face and content validity, to confirm construct validity of the metrics coupled with a cadaveric shoulder, and to establish a performance benchmark for the procedure on a cadaveric shoulder.Methods: Five experienced arthroscopic shoulder surgeons created step, error, and sentinel error metrics for an ARCR. Fourteen shoulder arthroscopy faculty members from the Arthroscopy Association of North America formed the modified Delphi panel to assess face and content validity. Eight Arthroscopy Association of North America shoulder arthroscopy faculty members (experienced group) were compared with 9 postgraduate year 4 or 5 orthopaedic residents (novice group) in their ability to perform an ARCR. Instructions were given to perform a diagnostic arthroscopy and a 2-anchor, 4-simple suture repair of a 2-cm supraspinatus tear. The procedure was videotaped in its entirety and independently scored in blinded fashion by trained, paired reviewers.Results: Delphi panel consensus for 42 steps and 66 potential errors was obtained. Overall performance assessment showed a mean inter-rater reliability of 0.93. Novice surgeons completed 17% fewer steps (32.1 vs 37.5, P = .001) and enacted 2.5 times more errors than the experienced group (6.21 vs 2.5, P = .012). Fifty percent of the experienced group members and none of the novice group members achieved the proficiency benchmark of a minimum of 37 steps completed with 3 or fewer errors.Conclusions: Face validity and content validity for the ARCR metrics, along with construct validity for the metrics and cadaveric shoulder, were verified. A proficiency benchmark was established based on the mean performance of an experienced group of arthroscopic shoulder surgeons.Clinical Relevance: Validated procedural metrics combined with the use of a cadaveric shoulder can be used to accurately assess the performance of an ARCR. 
[ABSTRACT FROM AUTHOR]- Published
- 2020
- Full Text
- View/download PDF
22. Augmenting, not cheating!
- Author
-
Smith, Simon D., Gallagher, Anthony G., and Henn, Patrick
- Published
- 2013
- Full Text
- View/download PDF
23. Deliberate practice using validated metrics improves skill acquisition in performance of ultrasound-guided peripheral nerve block in a simulated setting.
- Author
-
Ahmed, Osman M., Azher, Imran, Gallagher, Anthony G., Breslin, Dara S., O'Donnell, Brian D., and Shorten, George D.
- Subjects
- *
PERIPHERAL nervous system , *NERVE block , *DIAGNOSTIC ultrasonic imaging , *RANDOMIZED controlled trials , *ANESTHESIOLOGISTS , *THERAPEUTICS - Abstract
Study objectives The aim of this study was to compare the effects of deliberate vs. self-guided practice (both using validated metrics) on the acquisition of needling skills by novice learners. Design Randomized controlled study. Setting Simulation lab, Department of Anesthesia, St. Vincent's Hospital, Dublin. Subjects Eighteen medical students. Interventions Students were assigned to either (i) deliberate practice (n = 10) or (ii) self-guided practice (n = 8) groups. After completion of a 'learning phase', subjects attempted to perform a predefined task, which entailed advancing a needle towards a target on a phantom gel under ultrasound guidance. Subsequently, all subjects practiced this task using predefined metrics. Only subjects in the deliberate practice group had an expert anesthesiologist present during practice. Immediately after completing the 'practice phase', all subjects attempted to perform the same task and, on the following day, made two further attempts in succession. Two trained consultant anesthesiologists independently assessed a video of each performance using the predefined metrics. Measurements Number of procedural steps completed and number of errors made. Main results Compared with novices who self-guided their practice using metrics, those who undertook expert-supervised deliberate practice using metrics completed more steps (performance metrics) immediately after practice (median [range], 14.5 [12–15] vs. 3 [1–10], p < 0.0001) and 24 h later (15 [12–15] vs. 4.5 [1–11], p < 0.0001 and 15 [11–15] vs. 4 [2–14], p < 0.0001). They also made fewer errors immediately after practice (median [range], 0 [0–0] vs. 5 [3–8], p < 0.0001) and 24 h later (0 [0–3] vs. 6.5 [3–8], p < 0.0001 and 0 [0–3] vs. 4 [2–7], p < 0.0001). Conclusion Combining deliberate practice with metrics improved acquisition of needling skills. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
24. Virtual Reality Simulation Training in a High-fidelity Procedure Suite: Operator Appraisal.
- Author
-
Lonn, Lars, Edmond, John J., Marco, Jean, Kearney, Peter P., and Gallagher, Anthony G.
- Abstract
Abstract: Purpose: To assess the face and content validity of a novel, full-physics, full-procedural virtual reality simulation housed in a hybrid procedure suite. Methods and Materials: After completing 60 minutes of hands-on training in uterine artery embolization and coronary angioplasty, 24 radiologists and 18 cardiologists with a mean of 10 years of endovascular experience assessed the functionality of a comprehensive hybrid procedure suite simulation (Orcamp; Orzone, Gothenburg, Sweden). Results: C-arm and operating table functionality and realism were reliably (α = 0.89–0.92) rated highly (80/100). Performance realism of the catheter, guide wire, fluoroscopy image, electrocardiogram, and vital signs readout also reliably and statistically significantly predicted subjects' overall positive assessment (mean = 87/100) of the simulation experience in a multiple regression model (α = .83; r = 0.85 and r2 = 0.67; P < .0001). Conclusions: This study reports a quantitative evaluation of a comprehensive simulation of an authentic procedure suite for image-guided intravascular procedures. This new facility affords trainers the opportunity to provide higher fidelity training of operative technical, procedural, and management skills in the realistic context of a complete procedure suite with all its complexities and potential distractions. [Copyright Elsevier]- Published
- 2012
- Full Text
- View/download PDF
25. The effects of excessive alcohol consumption on laparoscopic surgical performance the following day
- Author
-
Gallagher, Anthony G., Boyle, Emily, Toner, Paul, Seymour, Neal E., Andersen, Dana K., and Satava, Richard M.
- Published
- 2008
- Full Text
- View/download PDF
26. Discrimination, reliability, sensitivity, and specificity of metric-based assessment of an unstable pertrochanteric 31A2 intramedullary nailing procedure performed by experienced and novice surgeons.
- Author
-
Kojima, Kodi E, Graves, Matt, Taha, Wa'el, Ghidinelli, Monica, Struelens, Bernard, Aliaga, Jorge Alberto Amaya, Cunningham, Mike, Joeris, Alexander, and Gallagher, Anthony G.
- Subjects
- *
INTRAMEDULLARY rods , *INTRAMEDULLARY fracture fixation , *TRAINING of surgeons , *ORTHOPEDISTS , *HIP fractures , *SURGEONS , *FRACTURE fixation , *CLINICAL competence , *ORTHOPEDICS ,RESEARCH evaluation - Abstract
Introduction: Identifying objective performance metrics for surgical training in orthopedic surgery is imperative for effective training and patient safety. The objective of this study was to determine if an internationally agreed, metric-based objective assessment of video recordings of an unstable pertrochanteric 31A2 intramedullary nailing procedure distinguished between the performance of experienced and novice orthopedic surgeons.Materials and Methods: Previously agreed procedure metrics (i.e., 15 phases of the procedure, 75 steps, 88 errors, and 28 sentinel errors) were used to assess a closed reduction and standard cephalomedullary nail fixation with a single cephalic element of an unstable pertrochanteric 31A2 fracture. Experienced surgeons, trained to assess the performance metrics with an interrater reliability (IRR) > 0.8, assessed 14 videos from 10 novice surgeons (orthopaedic residents/trainees) and 20 videos from 14 experienced surgeons (orthopaedic surgeons), blinded to group and procedure order.Results: The mean IRR of procedure assessments was 0.97. No statistically significant differences were observed between the two groups for Procedure Steps, Errors, Sentinel Errors, and Total Errors. A small number of Experienced surgeons made a similar number of Total Errors as the weakest performing Novices. When the scores of each group were divided at the median Total Error score, large differences were observed between the Experienced surgeons who made the fewest errors and the Novices making the most errors (p < 0.001). Experienced surgeons who made the most errors made significantly more than their Experienced peers (p < 0.003) and the best performing Novices (p < 0.001). 
Error metrics assessed with Area Under the Curve demonstrated good to excellent Sensitivity and Specificity (0.807-0.907).Discussion: Binary performance metrics previously agreed by an international Delphi meeting discriminated between the objectively assessed video-recorded performance of Experienced and Novice orthopedic surgeons when group scores were subdivided at the median for Total Errors. Error metrics discriminated best and also demonstrated good to excellent Sensitivity and Specificity. Some very experienced surgeons performed similarly to the Novice group surgeons who made the most errors.Conclusions: The procedure metrics used in this study reliably distinguish Novice and Experienced orthopaedic surgeons' performance and will underpin quality-assured novice training. [ABSTRACT FROM AUTHOR]- Published
- 2022
- Full Text
- View/download PDF
27. Proficiency-based virtual reality training significantly reduces the error rate for residents during their first 10 laparoscopic cholecystectomies
- Author
-
Ahlberg, Gunnar, Enochsson, Lars, Gallagher, Anthony G., Hedman, Leif, Hogman, Christian, McClusky, David A., Ramel, Stig, Smith, C. Daniel, and Arvidsson, Dag
- Subjects
- *
VIRTUAL reality , *TRAINING , *SURGICAL errors , *CHOLECYSTECTOMY - Abstract
Abstract: Background: Virtual reality (VR) training has previously been shown to improve intraoperative performance during part of a laparoscopic cholecystectomy. The aim of this study was to assess the effect of proficiency-based VR training on the outcome of the first 10 entire cholecystectomies performed by novices. Methods: Thirteen laparoscopically inexperienced residents were randomized to either (1) VR training until a predefined expert level of performance was reached or (2) a control group. Videotapes of each resident's first 10 procedures were reviewed independently in a blinded fashion and scored for predefined errors. Results: The VR-trained group consistently made significantly fewer errors (P = .0037); residents in the control group made, on average, 3 times as many errors and required 58% more surgical time. Conclusions: The results of this study show that training on the VR simulator to a level of proficiency significantly improves intraoperative performance during a resident's first 10 laparoscopic cholecystectomies. [Copyright Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
28. Orsi Consensus Meeting on European Robotic Training (OCERT): Results from the First Multispecialty Consensus Meeting on Training in Robot-assisted Surgery.
- Author
-
Vanlander, Aude E., Mazzone, Elio, Collins, Justin W., Mottrie, Alexandre M., Rogiers, Xavier M., van der Poel, Henk G., Van Herzeele, Isabelle, Satava, Richard M., and Gallagher, Anthony G.
- Subjects
- *
SURGICAL robots , *ROBOTS , *ROBOTICS , *PERFORMANCE standards , *EDUCATIONAL standards - Abstract
To improve patient outcomes in robotic surgery, robotic training and education need to be modernised and augmented. The skills and performance levels of trainees need to be objectively assessed before they operate on real patients. The main goal of the first Orsi Consensus Meeting on European Robotic Training (OCERT) was to establish the opinions of experts from different scientific societies on standardised robotic training pathways and training methodology. After a 2-d consensus conference, 36 experts identified 23 key statements allotted to three themes: training standardisation pathways, validation metrics, and implementation prerequisites and certification. After two rounds of Delphi voting, consensus was obtained for 22 of 23 questions among these three categories. Participants agreed that societies should drive and support the implementation of benchmarked training using validated proficiency-based pathways. All courses should deliver an internationally agreed curriculum with performance standards and be accredited by universities/professional societies, and trainees should receive a certificate approved by professional societies and/or universities after successful completion of the robotic training courses. This OCERT meeting established a basis for bringing surgical robotic training out of the operating room by seeking input and consensus across surgical specialties for an objective, validated, and standardised training programme with transparent, metric-based training outcomes. The Orsi Consensus Meeting on European Robotic Training (OCERT) is an international, multidisciplinary, Delphi-panel study of scientific societies and experts focused on training in robotic surgery. The panel achieved consensus that standardised international training pathways should be the basis for a structured, validated, replicable, and certified approach to implementation of robotic technology. 
[ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
29. A validation study of intraoperative performance metrics for training novice cardiac resynchronization therapy implanters.
- Author
-
Mascheroni, Jorio, Mont, Lluís, Stockburger, Martin, Patwala, Ashish, Retzlaff, Hartwig, and Gallagher, Anthony G.
- Subjects
- *
CARDIAC pacing , *KEY performance indicators (Management) , *VIRTUAL reality , *PERFORMANCE theory , *TEST validity - Abstract
Pacing/cardiac resynchronization therapy (CRT) implant training currently lacks a common system to objectively assess trainee ability to perform required tasks at predetermined performance levels. The purpose of this study was to examine, primarily, the construct validity and reliability and, secondarily, the discriminative validity of novel intraoperative performance metrics developed for a reference approach to training novice CRT implanters. Fifteen novice and eleven experienced CRT implanters performed a 3-lead implant procedure on a virtual reality simulator. Performances were video-recorded, then independently scored using predefined metrics endorsed by an international panel of experts. First, Novice and Experienced group scores were compared for steps completed and errors made. Second, each group was split in two around its median score and subgroup scores were compared. The mean number of scored metrics per performance was 108, and the inter-rater reliability for scoring was 0.947. Compared with novices, experienced implanters completed more procedural Steps correctly (mean 87% vs. 73%, p = 0.001) and made fewer procedural Errors (6.3 vs. 11.2, p = 0.005), Critical Errors (1.8 vs. 4.4, p = 0.004), and total errors (8.1 vs. 15.6, p = 0.002). Furthermore, the differences between the two Novice subgroups were 25% for steps completed correctly and 94% for total errors made (p < 0.001); the differences between the two Experienced subgroups were, respectively, 16% and 191% (p < 0.001). The procedure metrics used in this study reliably distinguish novice and experienced CRT implanters' performances. The metrics further differentiated performance levels within a group with similar experience. These performance metrics will underpin quality-assured novice implanter training. • Performance metrics underpin simulation-based training curriculum to proficiency. • Construct validity of novel Metrics for training novice CRT implanters was assessed. 
• Novice and Experienced CRT operator performances were reliably scored using Metrics. • Metrics consistently distinguished between groups' objectively assessed performance. • Metrics differentiated objectively assessed performance levels within each group. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
30. International expert consensus on a scientific approach to training novice cardiac resynchronization therapy implanters using performance quality metrics.
- Author
-
Mascheroni, Jorio, Mont, Lluís, Stockburger, Martin, Patwala, Ashish, Retzlaff, Hartwig, and Gallagher, Anthony G.
- Subjects
- *
KEY performance indicators (Management) , *CARDIAC pacing , *HEART function tests - Abstract
Pacing and Cardiac Resynchronization Therapy (CRT) procedural training for novice operators usually takes place in-vivo, and methods vary across countries/institutions. No common system exists to objectively assess trainee ability to perform required tasks at predetermined performance levels prior to in-vivo practice. We sought to characterize and validate with experts a reference approach to pacing/CRT implants based on objective and explicit performance quality metrics, for the development of a reproducible, simulation-based training curriculum aimed at operator proficiency. Three experienced CRT implanters, a behavioural scientist, and two engineers performed a detailed task deconstruction of the pacing/CRT procedure and identified the performance metrics (phases, steps, errors, critical errors) that constitute an optimal CRT implant for training purposes. The metrics were stress tested to determine reliability and score-ability and then subjected to detailed systematic review by an international panel of 15 expert implanters in a modified Delphi process. Thirteen procedure phases were identified, consisting of 196 steps, 122 errors, and 50 critical errors. The expert panel deliberation added 16 metrics, deleted 12, and modified 43. Unanimous panel consensus on the resulting CRT procedure metrics was obtained, which verified face and content validity. A reference pacing/CRT procedure and metrics created by a core group of experts accurately characterize the essential components of performance and were endorsed by an international panel of experienced peers. The metrics will underpin quality-assured novice implanter training. • Performance metrics underpin simulation-based training curriculum to proficiency. • Detailed CRT reference procedure and performance metrics were defined. • Metrics identified phases, steps and errors constituting optimal CRT implant. • International expert consensus panel concurred with the performance metrics. 
• The CRT performance metrics are valid and can be objectively and reliably scored. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
31. AO international consensus panel for metrics on a closed reduction and fixation of a 31A2 pertrochanteric fracture.
- Author
-
Kojima, Kodi, Graves, Matt, Taha, Wa'el, Cunningham, Mike, Joeris, Alexander, and Gallagher, Anthony G.
- Subjects
- *
INTERNAL fixation in fractures , *ORTHOPEDISTS , *FRACTURE fixation , *ORTHOPEDIC implants , *SURGEONS - Abstract
Background: The foundations of an effective and evidence-based training program are the metrics, which characterize optimal performance.Purposes: To develop, operationally define, and seek consensus from procedure experts on the metrics that best characterize a reference approach to the performance of a closed reduction and internal fixation of a 31A2 unstable pertrochanteric fracture with a cephalomedullary nail with distal locking through the proximal guide.Methods: A Metrics Group consisting of 3 senior orthopaedic surgeons, a surgeon/medical scientist, an education expert, and a behavioural scientist deconstructed the performance of the selected fixation procedure and defined performance metrics. At a modified Delphi meeting, 32 senior orthopaedic and trauma surgeons from 18 countries critiqued these metrics and their operational definitions before reaching consensus.Results: Initially, performance metrics consisting of 14 Phases with 62 Steps, 84 errors, and 20 Sentinel errors were identified that characterize the safe and effective performance of the procedure. During the Delphi panel meeting these were modified, and consensus was reached on 15 Phases (1 added; p = 0.967) with 75 Steps (14 added and 1 deleted; p = 0.028), 88 errors (10 added and 6 deleted; p = 0.47), and 28 Sentinel errors (8 added; p = 0.107). Pre- and post-Delphi characterizations were highly correlated (r = 0.81-0.94).Conclusions: Surgical procedures can be broken down into the constituent, essential, and elemental tasks necessary for the safe and effective completion of a reference approach to a specified procedure. Procedure experts from 18 countries reached consensus on performance metrics for the fixation procedure. This metric-based characterization should form the basis of more quantitative validation studies to guide the construction of a proficiency-based progression training curriculum. [ABSTRACT FROM AUTHOR]- Published
- 2018
- Full Text
- View/download PDF
32. Corrigendum to 'Discrimination, reliability, sensitivity, and specificity of metric-based assessment of an unstable pertrochanteric 31A2 intramedullary nailing procedure performed by experienced and novice surgeons' [Injury Vol 53, Issue 8(2022) 2832-2838]
- Author
-
Kojima, Kodi E., Graves, Matt, Taha, Wa'el, Ghidinelli, Monica, Struelens, Bernard, Aliaga, Jorge Alberto Amaya, Cunningham, Mike, Joeris, Alexander, and Gallagher, Anthony G.
- Subjects
- *
INTRAMEDULLARY fracture fixation , *INTRAMEDULLARY rods , *SURGEONS , *WOUNDS & injuries - Published
- 2023
- Full Text
- View/download PDF
33. Prospective, Randomized, Double-Blind Trial of Curriculum-Based Training for Intracorporeal Suturing and Knot Tying
- Author
-
Van Sickle, Kent R., Ritter, E. Matt, Baghai, Mercedeh, Goldenberg, Adam E., Huang, Ih-Ping, Gallagher, Anthony G., and Smith, C. Daniel
- Subjects
- *
SUTURING , *OPERATIVE surgery , *CLINICAL trials ,STUDY & teaching of medicine - Abstract
Background: Advanced surgical skills such as laparoscopic suturing are difficult to learn in an operating room environment. The use of simulation within a defined skills-training curriculum is attractive for instructor, trainee, and patient. This study examined the impact of a curriculum-based approach to laparoscopic suturing and knot tying. Study Design: Senior surgery residents in a university-based general surgery residency program were prospectively enrolled and randomized to receive either a simulation-based laparoscopic suturing curriculum (TR group, n=11) or standard clinical training (NR group, n=11). During a laparoscopic Nissen fundoplication, placement of two consecutive intracorporeally knotted sutures was video recorded for analysis. Operative performance was assessed by two reviewers blinded to subject training status using a validated, error-based system to an interrater agreement of ≥80%. Performance measures assessed were time, errors, and needle manipulations, and comparisons between groups were made using an unpaired t-test. Results: Compared with NR subjects, TR subjects performed significantly faster (total time, 526±189 seconds versus 790±171 seconds; p < 0.004), made significantly fewer errors (total errors, 25.6±9.3 versus 37.1±10.2; p < 0.01), and made 35% fewer excess needle manipulations (18.5±10.5 versus 27.3±8.6; p < 0.05). Conclusions: Subjects who receive simulation-based training demonstrate superior intraoperative performance of a highly complex surgical skill. Integration of such skills training should become standard in a surgical residency's skills curriculum. [Copyright Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
34. Prospective, randomized, double-blind trial of curriculum-based training for intracorporeal suturing and knot-tying
- Author
-
Van Sickle, Kent R., Ritter, E. Matt, Baghai, Mercedeh, Goldenberg, Adam, Huang, Ih-Ping, Gallagher, Anthony G., and Smith, C. Daniel
- Published
- 2007
- Full Text
- View/download PDF