Clinical performance comparators in audit and feedback: a review of theory and evidence.
- Authors
- Gude WT, Brown B, van der Veer SN, Colquhoun HL, Ivers NM, Brehaut JC, Landis-Lewis Z, Armitage CJ, de Keizer NF, and Peek N
- Subjects
- Benchmarking, Health Services Research, Humans, Randomized Controlled Trials as Topic, Clinical Audit/standards, Evidence-Based Practice, Formative Feedback, Models, Theoretical, Process Assessment, Health Care, Quality Improvement/standards
- Abstract
Background: Audit and feedback (A&F) is a common quality improvement strategy with highly variable effects on patient care. It is unclear how A&F effectiveness can be maximised. Since the core mechanism of action of A&F depends on drawing attention to a discrepancy between actual and desired performance, we aimed to understand current and best practices in the choice of performance comparator.

Methods: We described current choices for performance comparators by conducting a secondary review of randomised trials of A&F interventions, and we identified the associated mechanisms that might have implications for effective A&F by reviewing theories and empirical studies from a recent qualitative evidence synthesis.

Results: Across 146 trials, feedback recipients' performance was most frequently compared against the performance of others (benchmarks; 60.3%). Other comparators included recipients' own performance over time (trends; 9.6%) and target standards (explicit targets; 11.0%), and 13% of trials used a combination of these options. Of the trials featuring benchmarks, 42% compared against mean performance. Eight trials (5.5%) provided a rationale for using a specific comparator. We distilled the mechanisms of each comparator from 12 behavioural theories, 5 randomised trials, and 42 qualitative A&F studies.

Conclusion: Clinical performance comparators in the published literature were poorly informed by theory and did not explicitly account for mechanisms reported in qualitative studies. Based on our review, we argue that there is considerable opportunity to improve the design of performance comparators by (1) providing tailored comparisons rather than benchmarking everyone against the mean, (2) limiting the number of comparators displayed while providing more comparative information upon request, to balance the feedback's credibility and actionability, (3) providing performance trends but not trends alone, and (4) encouraging feedback recipients to set personal, explicit targets guided by relevant information.
- Published
- 2019