Dougal S Hargreaves, Carolyn Steele Gray, Bejoy Nambiar, Nicolette Sheridan, G. Ross Baker, Walter P. Wodchis, Jean-Louis Denis, Martin J. Connolly, Ann McKillop, Peter Carswell, Timothy Kenealy, James Shaw, and Tim Colbourn
Introduction

A recent WHO multi-country study on maternal and newborn health concluded that there was no evidence of an association between high coverage with essential interventions and reduced mortality in health care facilities, or improvement in other outcomes.1 According to Horton, the missing ingredient in this relationship is quality of care.2 Quality improvement (QI) in healthcare has adopted techniques mainly from industries such as manufacturing, and these have been used widely in Europe and the US. However, evidence for the success of these techniques in healthcare remains inconclusive, especially in low- and middle-income countries, and there have been limited efforts to critically analyse the techniques used in quality improvement interventions. One of the main challenges in evaluating quality improvement is the complexity of the interventions themselves and the complex nature of the systems in which they are implemented. Robust evidence regarding quality improvement interventions in resource-poor settings is generally lacking.

The MaiKhanda trial examined the effect of QI interventions and community women's groups on maternal and newborn mortality in three central districts of Malawi.3 The impact evaluation of the effect of the QI interventions on newborn mortality, using a cluster randomized controlled trial (cRCT) approach, remained inconclusive. We use a Theory-Based Evaluation (TBE) approach to understand why the improvement interventions undertaken by MaiKhanda for newborn care did not show an effect. Absence of effect could be attributed to a failure of theory, a failure of implementation, an evaluation failure, or a combination of these. Our primary objective was to understand the mechanisms by which the QI interventions worked (or did not) and to explore the interaction between the various factors that mediated the lack of effect on neonatal mortality observed in the cluster randomized controlled trial.

Methods

Our research strategy consisted of developing a post-hoc Theory of Change, consolidating and synthesizing all the available evidence using an appropriate framework, and analysing the program and implementation theory using theory-based approaches to evaluation. Data synthesis was conducted using the Consolidated Framework for Implementation Research (CFIR).4 The synthesis takes into consideration the various reports and documents accumulated through the life of the project and complements the process evaluation studies conducted during the same period. In doing so, it draws a multi-dimensional picture of the intervention, which provides insights into the evolution of the project. The framework is comprehensive, covering five major domains and a range of constructs, not all of which were included in our study. As this was a post-hoc analysis, the choice of constructs was based on the availability of data rather than on prioritizing the key constructs to consider. CFIR helps to produce structured and comprehensive data that is then used to analyse the program theory in relation to the intervention outcome.
The program theory thus generated for the MaiKhanda intervention is compared with the program theory of the Michigan Keystone Project, which used similar collaborative methods to successfully reduce central venous line bloodstream infections in 106 participating ICUs.5 The rationale for such a comparison is that, while the interventions themselves are unique and specific to their contexts, the program theory underlying the use of collaborative methods in both interventions is the same and therefore comparable. Theories offer a higher level of abstraction that can be compared across different settings.6

Results

The key finding from analysis of the program theory is that intervention strategies similar to those that triggered successful mechanisms for improvement in the Keystone Project failed to generate such mechanisms in the MaiKhanda project. The Model for Improvement used in MaiKhanda was built around Deming's improvement theory7 and Rogers' diffusion of innovations theory.8 The former considers improvement as a product of subject matter knowledge and profound knowledge. Subject matter knowledge on essential and emergency newborn care was generally lacking among health care providers in Malawi. Similarly, understanding variation within health systems is an acquired skill. While the implementing partners provided ample opportunities for the Malawian health system to learn the Institute for Healthcare Improvement (IHI) Model for Improvement, QI teams generally lacked the capacity to collate data and analyse variation between health facilities. QI was a fairly new concept in Malawi, and MaiKhanda's attempts to embed it within the existing health system were limited by the challenges of the health systems context, MaiKhanda's own organizational transition, and the QI and clinical capacity of health care providers.

The main challenge for MaiKhanda was to simultaneously implement and sustain the various change packages it had introduced in the different facilities. While there were isolated instances of successful intervention activities within MaiKhanda, the project did not build enough momentum to generate mechanisms across a critical mass of facilities that would eventually result in improved newborn outcomes. This can be attributed to the implementation strength, context and complexity of MaiKhanda's interventions, and is explored further using the implementation theory. Implementation was based on diffusion theories, in which better performing facilities were to act as role models for other facilities to emulate. The cRCT design for the impact evaluation required random allocation of the improvement facilities, and this conflicted with innovation diffusion theories, which prescribe a gradual, organic spread of the interventions by strategically engaging innovators and early adopters. Limitations of the evaluation design notwithstanding, the implementation strength, characterized by the dose, duration, intensity and specificity of the intervention, was sub-optimal. Implementation strength is not the only factor triggering an intervention mechanism, and it cannot be measured independently of the intervention's complexity or context. For example, MaiKhanda struggled to show an effect of its interventions despite having a long pre-intervention period to refine them, while the Michigan study produced results within an 18-month period.
This could be because of other factors related to intervention complexity, such as the long implementation chain for intervention delivery, the subjective perceptions of the agents (QI teams) regarding QI, and contextual factors such as organizational readiness, the health systems context, QI team capacity to deliver QI interventions, and MaiKhanda's own internal capacity. Human agency is at the heart of implementation, and the intervention required more continuous and prolonged time and effort than was anticipated to engage and train the health facility QI teams in the improvement model.

One of the key factors affecting the uptake of strategies was MaiKhanda's positioning within the health system and the degree of influence it could exert on other actors. This factor has a significant role to play in a country where projects are donor-supported and perhaps also donor-driven. The intervention period also saw MaiKhanda going through rapid organizational transition, which affected intervention implementation on the ground. Furthermore, MaiKhanda's own understanding of QI concepts was evolving gradually, and this, coupled with its long implementation chain, influenced the QI teams' subjective understanding of QI concepts. Health facility staff also lacked the necessary skills and knowledge related to the management of newborn health.

Limited resources within the health facilities meant that gains achieved in some aspects of the intervention could not be sustained in the long run. External contextual factors such as fuel shortages contributed to poor implementation. Policy changes, such as the government ban on traditional birth attendants (TBAs), affected intervention uptake and resulted in an increase in health facility deliveries, overwhelming the already under-resourced staff in the health facilities. It is conceivable that quality improvement was not at the top of their priority list, yet ‘motivation’ to be involved in QI Collaboratives remained high. In resource-constrained settings, ‘motivation’ can be influenced by the lure of personal incentives (such as per diems for attending workshops and meetings) as much as by an individual's commitment to broader social gains (i.e. a reduction in newborn case fatality rates in their facility). The improvement model was competing against other existing models, and it was difficult to secure sufficient stakeholder commitment to the prescribed model: expectations were fuelled by poverty and poor governance structures, and a culture of “perdiemitis” was prevalent in the Malawian health care system.9

Discussion

As is evident from the study, a single research method cannot do justice to the evaluation of the complex set of factors that influence newborn outcomes. We propose a research strategy that includes developing a Theory of Change, followed by evaluation of the program theory, measurement of implementation strength, analysis of the implementation theory, and comparison of these in relation to the outcomes of the intervention observed through the impact evaluation.
The results arising from such a comprehensive evaluation will contribute to the growth of improvement science through the accumulation of knowledge and explanation, rather than merely a bedrock of observational facts. More generally, we propose that the design, implementation and evaluation of QI activities, particularly in resource-poor settings, should consider five key principles: whole-systems thinking, accountability, a participatory approach, an evidence base, and the adaptation of innovative methods.10