101. Unpacking quality indicators: how much do they reflect differences in the quality of care?
- Author
- Jill Tinmouth
- Subjects
quality measurement, performance measures, performance measurement, performance indicators, health policy, primary care, Primary Health Care, Ambulatory care, Accreditation, Neoplasms, Cross-Sectional Studies, Reproducibility of Results, Humans
- Abstract
Just over 50 years ago, Avedis Donabedian published his seminal paper, which sought to define and specify the ‘quality of health care’, articulating the now paradigmatic triad of structure, process and outcome for measuring healthcare quality.1 In recent years, we have seen the rapid expansion of increasingly inexpensive information technology capability and capacity, facilitating the collection and analysis of large healthcare data sets. These technological advances fuel the current proliferation of performance measurement in healthcare.2 Increasingly, in an effort to improve care, many cancer health systems, including those in England,3 the USA4 and Canada,5 6 are publicly reporting performance indicators, generally derived from these large data sets. Not surprisingly, differences in the prevention, early detection and/or treatment of cancer are often invoked to explain observed differences in performance across jurisdictions.6–9 Given the considerable effort and resources invested in performance measurement, as well as the potential for adverse consequences if it is done poorly,10 it is important to get it right. Determining the effectiveness of healthcare performance measurement is challenging,11 particularly at the health system level.
Often, performance measurement is implemented uniformly across an entire system, making well-designed controlled analyses difficult or impossible12 13 and leaving evaluations vulnerable to secular trends.14 At the physician level, audit and feedback studies report variable results: meta-analyses show a modest benefit overall,15–17 but an important proportion of interventions were ineffective or only minimally effective, with a few studies suggesting a negative effect on performance.16 This heterogeneity is likely due to the complexity of the endeavour and its many moving parts, which include the behaviour targeted, the recipients of the feedback, their environment, the use of cointerventions and the components of the audit and feedback intervention itself.18 The latter generally comprises performance indicators, often derived from large healthcare data sets; however, …
- Published
- 2017