7 results for "Savitz L"
Search Results
2. Data Cleaning in the Evaluation of a Multi-Site Intervention Project.
- Author
- Welch G, von Recklinghausen F, Taenzer A, Savitz L, and Weiss L
- Abstract
Context: The High Value Healthcare Collaborative (HVHC) sepsis project was a two-year multi-site project in which Member health care delivery systems worked on improving sepsis care using a dissemination & implementation framework designed by HVHC. As part of the project evaluation, participating Members provided 5 data submissions over the project period. Members created data files using a uniform specification, but the data sources and methods used to create the data sets differed. Extensive data cleaning was necessary to produce a data set usable for the evaluation analysis., Case Description: HVHC was the coordinating center for the project and received and cleaned all data submissions. Submissions received 3 sequentially more detailed levels of checking by HVHC. The most detailed level evaluated validity by comparing values within-Member over time and between Members. For a subset of episodes, Member-submitted data were compared to matched Medicare claims data., Findings: Inconsistencies in data submissions, particularly for length-of-stay variables, were common in early submissions and decreased with subsequent submissions. Multiple resubmissions were sometimes required to get clean data. Data checking also uncovered a systematic difference in the way Medicare and some Members defined an intensive care unit stay., Conclusions: Data checking is critical for ensuring valid analytic results for projects using electronic health record data. It is important to budget sufficient resources for data checking. Interim data submissions and checks help find anomalies early. Data resubmissions should be checked, as fixes can introduce new errors. Communicating with those responsible for creating the data set provides critical information.
- Published
- 2017
- Full Text
- View/download PDF
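The kind of automated consistency check described in this abstract can be illustrated with a minimal sketch. The record fields (`admit`, `discharge`, `los_days`) and the rule are hypothetical, not taken from the HVHC data specification; the point is the pattern of flagging internally inconsistent episodes for resubmission.

```python
from datetime import date

# Hypothetical episode records; field names are illustrative,
# not from the actual HVHC data specification.
episodes = [
    {"member": "A", "admit": date(2015, 1, 1), "discharge": date(2015, 1, 6), "los_days": 5},
    {"member": "A", "admit": date(2015, 2, 1), "discharge": date(2015, 2, 4), "los_days": 4},
    {"member": "B", "admit": date(2015, 3, 10), "discharge": date(2015, 3, 12), "los_days": 2},
]

def check_los_consistency(records):
    """Flag episodes whose reported length of stay does not match the date span."""
    return [r for r in records
            if (r["discharge"] - r["admit"]).days != r["los_days"]]

flagged = check_los_consistency(episodes)
# flagged contains the second episode: a 3-day date span reported as 4 days
```

In practice such rules would be one layer in the tiered checking the abstract describes, with later layers comparing values across submissions and against external claims data.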
3. The Effect of the Hospital Readmission Reduction Program on the Duration of Observation Stays: Using Regression Discontinuity to Estimate Causal Effects.
- Author
- Albritton J, Belnap T, and Savitz L
- Abstract
Research Objective: Determine whether hospitals are increasing the duration of observation stays following index admission for heart failure to avoid potential payment penalties from the Hospital Readmission Reduction Program., Study Design: The Hospital Readmission Reduction Program applies a 30-day cutoff after which readmissions are no longer penalized. Given this seemingly arbitrary cutoff, we use regression discontinuity design, a quasi-experimental research design that can be used to make causal inferences., Population Studied: The High Value Healthcare Collaborative includes member healthcare systems covering 57% of the nation's hospital referral regions. We used Medicare claims data including all patients residing within these regions. The study included patients with index admissions for heart failure from January 1, 2012, to June 30, 2015, and a subsequent observation stay within 60 days. We excluded hospitals with fewer than 25 heart failure readmissions in a year or fewer than 5 observation stays in a year, and patients with subsequent observation stays at a different hospital., Principal Findings: Overall, there was no discontinuity at the 30-day cutoff in the duration of observation stays, the percent of observation stays over 12 hours, or the percent of observation stays over 24 hours. In the sub-analysis, the discontinuity was significant for non-penalized hospitals., Conclusion: The findings reveal evidence that the HRRP has resulted in an increase in the duration of observation stays for some non-penalized hospitals.
- Published
- 2017
- Full Text
- View/download PDF
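Regression discontinuity, as used in this study, estimates the jump in an outcome at a cutoff by fitting the outcome separately on each side and differencing the fits at the cutoff. A minimal sketch under simplifying assumptions (noiseless synthetic data, simple linear fits, hypothetical variable names; the actual study's estimation is more involved):

```python
def simple_ols(xs, ys):
    """Closed-form least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def rd_estimate(days_to_stay, outcome, cutoff=30):
    """Fit a line on each side of the cutoff and return the jump
    in the predicted outcome at the cutoff."""
    left = [(d, y) for d, y in zip(days_to_stay, outcome) if d < cutoff]
    right = [(d, y) for d, y in zip(days_to_stay, outcome) if d >= cutoff]
    al, bl = simple_ols([d for d, _ in left], [y for _, y in left])
    ar, br = simple_ols([d for d, _ in right], [y for _, y in right])
    return (ar + br * cutoff) - (al + bl * cutoff)

# Synthetic observation-stay durations with a known jump of 3.0 hours at day 30
days = list(range(20, 41))
obs_hours = [2 + 0.1 * d + (3.0 if d >= 30 else 0.0) for d in days]
jump = rd_estimate(days, obs_hours)  # recovers the 3.0-hour discontinuity
```

The design's causal claim rests on the assumption that hospitals cannot precisely sort stays around the cutoff, so any jump at day 30 is attributable to the penalty rule rather than to underlying patient differences.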
4. Analytical Methods for a Learning Health System: 4. Delivery System Science.
- Author
- Stoto M, Parry G, and Savitz L
- Abstract
The last in a series of four papers on how learning health systems can use routinely collected electronic health data (EHD) to advance knowledge and support continuous learning, this review describes how delivery system science provides a systematic means to answer questions that arise in translating complex interventions to other practice settings. When the focus is on translation and spread of innovations, the questions are different than in evaluative research. Causal inference is not the main issue, but rather one must ask: How and why does the intervention work? What works for whom and in what contexts? How can a model be amended to work in new settings? In these settings, organizational factors and design, infrastructure, policies, and payment mechanisms all influence an intervention's success, so a theory-driven formative evaluation approach that considers the full path of the intervention from activities to engage participants and change how they act to the expected changes in clinical processes and outcomes is needed. This requires a scientific approach to quality improvement that is characterized by a basis in theory; iterative testing; clear, measurable process and outcomes goals; appropriate analytic methods; and documented results. To better answer the questions that arise in delivery system science, this paper introduces a number of standard qualitative research approaches that can be applied in a learning health system: Pawson and Tilley's "realist evaluation," theory-based evaluation approaches, mixed-methods and case study research approaches, and the "positive deviance" approach.
- Published
- 2017
- Full Text
- View/download PDF
5. Analytical Methods for a Learning Health System: 1. Framing the Research Question.
- Author
- Stoto M, Oakes M, Stuart E, Savitz L, Priest EL, and Zurovac J
- Abstract
Learning health systems use routinely collected electronic health data (EHD) to advance knowledge and support continuous learning. Even without randomization, observational studies can play a central role as the nation's health care system embraces comparative effectiveness research and patient-centered outcomes research. However, neither the breadth, timeliness, and volume of the available information nor sophisticated analytics allow analysts to confidently infer causal relationships from observational data. Nevertheless, depending on the research question, careful study design and appropriate analytical methods can improve the utility of EHD. The introduction to a series of four papers, this review begins with a discussion of the kinds of research questions that EHD can help address, noting how different evidence and assumptions are needed for each. We argue that when the question involves describing the current (and likely future) state of affairs, causal inference is not relevant, so randomized clinical trials (RCTs) are not necessary. When the question is whether an intervention improves outcomes of interest, causal inference is critical, but appropriately designed and analyzed observational studies can yield valid results that better balance internal and external validity than typical RCTs. When the question is one of translation and spread of innovations, a different set of questions comes into play: How and why does the intervention work? How can a model be amended or adapted to work in new settings? In these "delivery system science" settings, causal inference is not the main issue, so a range of quantitative, qualitative, and mixed research designs are needed. We then describe why RCTs are regarded as the gold standard for assessing cause and effect, how alternative approaches relying on observational data can be used to the same end, and how observational studies of EHD can be effective complements to RCTs. We also describe how RCTs can be a model for designing rigorous observational studies, building an evidence base through iterative studies that build upon each other (i.e., confirmation across multiple investigations).
- Published
- 2017
- Full Text
- View/download PDF
6. Analytical Methods for a Learning Health System: 2. Design of Observational Studies.
- Author
- Stoto M, Oakes M, Stuart E, Priest EL, and Savitz L
- Abstract
The second paper in a series on how learning health systems can use routinely collected electronic health data (EHD) to advance knowledge and support continuous learning, this review summarizes study design approaches, including choosing appropriate data sources, and methods for design and analysis of natural and quasi-experiments. The primary strength of study design approaches described in this section is that they study the impact of a deliberate intervention in real-world settings, which is critical for external validity. These evaluation designs address estimating the counterfactual - what would have happened if the intervention had not been implemented. At the individual level, epidemiologic designs focus on identifying situations in which bias is minimized. Natural and quasi-experiments focus on situations where the change in assignment breaks the usual links that could lead to confounding, reverse causation, and so forth. And because these observational studies typically use data gathered for patient management or administrative purposes, the possibility of observation bias is minimized. The disadvantages are that one cannot necessarily attribute the effect to the intervention (as opposed to other things that might have changed), and the results do not indicate what about the intervention made a difference. Because they cannot rely on randomization to establish causality, program evaluation methods demand a more careful consideration of the "theory" of the intervention and how it is expected to play out. A logic model describing this theory can help to design appropriate comparisons, account for all influential variables in a model, and help to ensure that evaluation studies focus on the critical intermediate and long-term outcomes as well as possible confounders.
- Published
- 2017
- Full Text
- View/download PDF
7. Introduction of an Area Deprivation Index Measuring Patient Socioeconomic Status in an Integrated Health System: Implications for Population Health.
- Author
- Knighton AJ, Savitz L, Belnap T, Stephenson B, and VanDerslice J
- Abstract
Introduction: Intermountain Healthcare is a fully integrated delivery system based in Salt Lake City, Utah. As a learning healthcare system with a mission of performance excellence, it became apparent that population health management and our efforts to move towards shared accountability would require additional patient-centric metrics in order to provide the right care to the right patients at the right time. Several European countries have adopted social deprivation indices to measure the impact that social determinants can have on health. Such indices provide a geographic, area-based measure of how socioeconomically deprived residents of that area are on average. Intermountain's approach was to identify a proxy measure that did not require front-line data collection and could be standardized for our patient population, leading us to the area deprivation index, or ADI. This paper describes the specifications and calculation of an ADI for the state of Utah. Results are presented along with three use cases demonstrating the potential application of an ADI to quality improvement in a learning healthcare system., Case Description: The Utah ADI shows promise as a proxy for patient-reported measures reflecting key socioeconomic indicators, useful for tailoring patient interventions to improve health care delivery and patient outcomes. Strengths of this approach include a consistent, standardized measurement of social determinants, use of more granular block-group-level measures, and a limited data capture burden for front-line teams. While the methodology is generalizable to other communities, results of this index are limited to block groups within the state of Utah and will differ from national calculations or calculations for other states. The use of composite measures to evaluate individual characteristics must also be approached with care. Other limitations with the use of U.S. Census data include the use of estimates and missing data., Conclusion: Initial applications in three meaningfully different areas of an integrated health system provide initial evidence of the ADI's broad applicability in addressing the impact of social determinants on health. The variation in socioeconomic status by quintile also has potential clinical significance, though more research is needed to link variation in ADI with variation in health outcomes overall and by disease type.
- Published
- 2016
- Full Text
- View/download PDF
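The quintile ranking mentioned in this abstract can be sketched, very roughly, as standardizing several block-group indicators, summing them into a composite, and binning block groups into quintiles. The indicator names and equal weighting here are hypothetical; the actual Utah ADI follows the published ADI methodology with many more Census measures.

```python
import statistics

# Hypothetical block-group indicators (higher = more deprived); illustrative only.
# bg1 is the least deprived, bg5 the most.
block_groups = {
    f"bg{i}": {"pct_poverty": 5.0 * i, "pct_no_hs_diploma": 2.0 * i, "pct_unemployed": 1.0 * i}
    for i in range(1, 6)
}

def adi_quintiles(bgs):
    """Z-score each indicator across block groups, sum into a composite score,
    then rank block groups into quintiles (1 = least deprived, 5 = most)."""
    keys = list(bgs)
    composite = {k: 0.0 for k in keys}
    for ind in next(iter(bgs.values())):
        vals = [bgs[k][ind] for k in keys]
        m, sd = statistics.mean(vals), statistics.pstdev(vals)
        for k in keys:
            composite[k] += (bgs[k][ind] - m) / sd if sd else 0.0
    ranked = sorted(keys, key=composite.get)
    n = len(ranked)
    return {k: (i * 5) // n + 1 for i, k in enumerate(ranked)}

quintiles = adi_quintiles(block_groups)
```

Because the index is built entirely from area-level Census data, no front-line data collection is required, which is the low-burden property the paper emphasizes.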
Discovery Service for Jio Institute Digital Library