1. Outlier identification and monitoring of institutional or clinician performance: an overview of statistical methods and application to national audit data
- Authors
- Menelaos Pavlou, Gareth Ambler, Rumana Z. Omar, Andrew T. Goodwin, Uday Trivedi, Peter Ludman, and Mark de Belder
- Subjects
- Outlier detection, Funnel plot, Random effects model, Overdispersion, Public aspects of medicine, RA1-1270
- Abstract
Background: Institutions or clinicians (units) are often compared according to a performance indicator such as in-hospital mortality. Several approaches have been proposed for the detection of outlying units, whose performance deviates from the overall performance.
Methods: We provide an overview of three approaches commonly used to monitor institutional performance for outlier detection: the common-mean model, the 'Normal-Poisson' random effects model and the 'Logistic' random effects model. For the latter we also propose a visualisation technique. The common-mean model assumes that the underlying true performance of all units is equal and that any observed variation between units is due to chance. Even after case-mix adjustment, this assumption is often violated due to overdispersion, and a post-hoc correction may need to be applied. The random effects models relax this assumption and explicitly allow the true performance to differ between units, thus offering a more flexible approach. We discuss the strengths and weaknesses of each approach and illustrate their application using audit data from England and Wales on Adult Cardiac Surgery (ACS) and Percutaneous Coronary Intervention (PCI).
Results: In general, the overdispersion-corrected common-mean model and the random effects approaches produced similar p-values for the detection of outliers. For the ACS dataset (41 hospitals), three outliers were identified in total, but only one was identified by all of the above methods. For the PCI dataset (88 hospitals), seven outliers were identified in total, but only two were identified by all methods. The common-mean model uncorrected for overdispersion produced several more outliers.
The similar p-values across all three approaches can be attributed to the fact that the between-hospital variance was relatively small in both datasets, resulting in only a mild violation of the common-mean assumption; in this situation, the overdispersion correction worked well.
Conclusion: If the common-mean assumption is likely to hold, all three methods are appropriate for outlier detection and their results should be similar. Random effects methods may be the preferred approach when the common-mean assumption is likely to be violated.
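To make the common-mean logic concrete, the sketch below flags outlying hospitals from (deaths, cases) counts, with and without a multiplicative overdispersion correction. The hospital names and counts are entirely hypothetical (not from the ACS or PCI audits), and the correction is a simplified version of the approach described above: no case-mix adjustment and no winsorisation of the z-scores.

```python
import math

# Hypothetical in-hospital mortality counts per hospital: (deaths, cases).
# Illustrative numbers only -- not taken from the ACS or PCI datasets.
units = {
    "A": (30, 1000),
    "B": (25, 900),
    "C": (60, 1100),  # markedly higher observed rate
    "D": (28, 950),
    "E": (22, 800),
}

# Common-mean model: every unit is benchmarked against the pooled rate.
total_deaths = sum(d for d, _ in units.values())
total_cases = sum(n for _, n in units.values())
p0 = total_deaths / total_cases

def z_score(deaths, cases, p=p0):
    """Standardised deviation of a unit's observed rate from the common mean."""
    se = math.sqrt(p * (1 - p) / cases)
    return (deaths / cases - p) / se

z = {u: z_score(d, n) for u, (d, n) in units.items()}

# Simple multiplicative overdispersion correction: estimate phi as the
# mean squared z-score (floored at 1) and deflate each z by sqrt(phi).
phi = max(1.0, sum(v * v for v in z.values()) / len(z))
z_corr = {u: v / math.sqrt(phi) for u, v in z.items()}

# Flag units beyond the conventional 3-sigma funnel-plot control limit.
outliers_raw = sorted(u for u, v in z.items() if abs(v) > 3)
outliers_corr = sorted(u for u, v in z_corr.items() if abs(v) > 3)

print(outliers_raw)   # ['C'] -- flagged by the uncorrected model
print(outliers_corr)  # []    -- no longer flagged once overdispersion is absorbed
```

With these illustrative counts, hospital C exceeds the 3-sigma limit under the uncorrected common-mean model but not after the correction, mirroring the finding above that the uncorrected model produces more outliers.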
- Published
- 2023