833 results for "principal stratification"
Search Results
202. Evaluating Candidate Principal Surrogate Endpoints.
- Author
- Gilbert, Peter B. and Hudgens, Michael G.
- Subjects
- BIOMARKERS, BIOCHEMISTRY, IMMUNE response, BIOMETRY, VACCINES
- Abstract
Frangakis and Rubin (2002, Biometrics 58, 21–29) proposed a new definition of a surrogate endpoint (a “principal” surrogate) based on causal effects. We introduce an estimand for evaluating a principal surrogate, the causal effect predictiveness (CEP) surface, which quantifies how well causal treatment effects on the biomarker predict causal treatment effects on the clinical endpoint. Although the CEP surface is not identifiable due to missing potential outcomes, it can be identified by incorporating a baseline covariate(s) that predicts the biomarker. Given case–cohort sampling of such a baseline predictor and the biomarker in a large blinded randomized clinical trial, we develop an estimated likelihood method for estimating the CEP surface. This estimation assesses the “surrogate value” of the biomarker for reliably predicting clinical treatment effects for the same or similar setting as the trial. A CEP surface plot provides a way to compare the surrogate value of multiple biomarkers. The approach is illustrated by the problem of assessing an immune response to a vaccine as a surrogate endpoint for infection. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
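A compact way to write the estimand this entry describes, in notation we are assuming rather than quoting from the paper (Y(z) is the potential clinical endpoint and S(z) the potential biomarker under assignment z, with the risk difference as the contrast):

```latex
% Risk-difference version of the causal effect predictiveness (CEP) surface (assumed notation).
\[
  \mathrm{risk}_z(s_1, s_0) = \Pr\{Y(z) = 1 \mid S(1) = s_1,\; S(0) = s_0\}, \qquad z = 0, 1,
\]
\[
  \mathrm{CEP}(s_1, s_0) = \mathrm{risk}_1(s_1, s_0) - \mathrm{risk}_0(s_1, s_0).
\]
% A biomarker with high surrogate value has CEP near zero where s_1 = s_0 and far from
% zero where the causal effect on the biomarker, s_1 - s_0, is large.
```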
203. Does Finasteride Affect the Severity of Prostate Cancer? A Causal Sensitivity Analysis.
- Author
- SHEPHERD, Bryan E., REDMAN, Mary W., and ANKERST, Donna P.
- Subjects
- FINASTERIDE, PROSTATE cancer, SENSITIVITY analysis, CANCER risk factors, CANCER treatment, BIOPSY
- Abstract
In 2003 Thompson and colleagues reported that daily use of finasteride reduced the prevalence of prostate cancer by 25% compared to placebo. These results were based on the double-blind randomized Prostate Cancer Prevention Trial (PCPT), which followed 18,882 men with no prior or current indications of prostate cancer annually for 7 years. Enthusiasm for the risk reduction afforded by the chemopreventative agent and adoption of its use in clinical practice, however, was severely dampened by the additional finding in the trial of an increased absolute number of high-grade (Gleason score ≥ 7) cancers on the finasteride arm. The question arose as to whether this finding truly implied that finasteride increased the risk of more severe prostate cancer or was a study artifact due to a series of possible postrandomization selection biases, including differences among treatment arms in patient characteristics of cancer cases, differences in biopsy verification of cancer status due to increased sensitivity of prostate-specific antigen under finasteride, differential grading by biopsy due to prostate volume reduction by finasteride, and nonignorable dropout. Via a causal inference approach implementing inverse probability weighted estimating equations, this analysis addresses the question of whether finasteride caused more severe prostate cancer by estimating the mean treatment difference in prostate cancer severity between finasteride and placebo for the principal stratum of participants who would have developed prostate cancer regardless of treatment assignment. We perform sensitivity analyses that sequentially adjust for the numerous potential postrandomization biases conjectured in the PCPT. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
204. Causal Inference in Randomized Experiments With Mediational Processes.
- Author
- Jo, Booil
- Subjects
- CAUSATION (Philosophy), INFERENCE (Logic), STOCHASTIC processes, STRUCTURAL equation modeling, PARAMETER estimation, MONTE Carlo method
- Abstract
This article links the structural equation modeling (SEM) approach with the principal stratification (PS) approach, both of which have been widely used to study the role of intermediate posttreatment outcomes in randomized experiments. Despite the potential benefit of such integration, the 2 approaches have been developed in parallel with little interaction. This article proposes the cross-model translation (CMT) approach, in which parameter estimates are translated back and forth between the PS and SEM models. First, without involving any particular identifying assumptions, translation between PS and SEM parameters is carried out on the basis of their close conceptual connection. Monte Carlo simulations are used to further clarify the relation between the 2 approaches under particular identifying assumptions. The study concludes that, under the common goal of causal inference, what makes a practical difference is the choice of identifying assumptions, not the modeling framework itself. The CMT approach provides a common ground in which the PS and SEM approaches can be jointly considered, focusing on their common inferential problems. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
205. The Truncation-by-Death Problem: What To Do in an Experimental Evaluation When the Outcome Is Not Always Defined.
- Author
- McConnell, Sheena, Stuart, Elizabeth A., and Devaney, Barbara
- Subjects
- EVALUATION, RESEARCH, OUTCOME assessment (Social services), STATISTICS, DIFFERENCES, EXPERIMENTS
- Abstract
Although experiments are viewed as the gold standard for evaluation, some of their benefits may be lost when, as is common, outcomes are not defined for some sample members. In evaluations of marriage interventions, for example, a key outcome—relationship quality—is undefined when a couple splits up. This article shows how treatment-control differences in mean outcomes can be misleading when outcomes are not defined for everyone and discusses ways to identify the seriousness of the problem. Potential solutions to the problem are described, including approaches that rely on simple treatment-control differences-in-means as well as more complex modeling approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
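A small invented numerical example of the problem this abstract describes (the numbers are ours, purely for illustration): suppose there are 100 couples per arm, the control arm ends with 50 intact couples of mean relationship quality 8, and the treatment prevents 30 break-ups without changing anyone's quality.

```latex
% Hypothetical numbers: the treatment keeps 30 extra couples (mean quality 5) intact.
\[
  \bar Y^{\mathrm{intact}}_{\mathrm{control}} = 8, \qquad
  \bar Y^{\mathrm{intact}}_{\mathrm{treatment}} = \frac{50 \times 8 + 30 \times 5}{80} = 6.875 .
\]
% The naive comparison among couples with a defined outcome (6.875 vs. 8) suggests harm,
% even though the treatment lowered no couple's quality; comparing only the principal
% stratum of couples who would stay intact under either condition removes this
% composition bias.
```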
206. Semiparametric estimation of treatment effects given base-line covariates on an outcome measured after a post-randomization event occurs.
- Author
- Jemiai, Yannis, Rotnitzky, Andrea, Shepherd, Bryan E., and Gilbert, Peter B.
- Subjects
- ANALYSIS of covariance, RANDOM data (Statistics), COUNTERFACTUALS (Logic), MISSING data (Statistics), STRUCTURAL frame models, VACCINES
- Abstract
We consider estimation, from a double-blind randomized trial, of treatment effect within levels of base-line covariates on an outcome that is measured after a post-treatment event E has occurred, in the subpopulation of subjects who would experience event E regardless of treatment. Specifically, we consider estimation of the parameters γ indexing models for the outcome mean conditional on treatment and base-line covariates in this subpopulation. Such parameters are not identified from randomized trial data but become identified if it is additionally assumed that the subpopulation of subjects who would experience event E under the second treatment but not under the first is empty, and that a parametric model holds for the conditional probability that a subject experiences event E if assigned to the first treatment, given that the subject would experience the event if assigned to the second treatment, his or her outcome under the second treatment, and his or her pretreatment covariates. We develop a class of estimating equations whose solutions comprise, up to asymptotic equivalence, all consistent and asymptotically normal estimators of γ under these two assumptions. In addition, we derive a locally semiparametric efficient estimator of γ. We apply our methods to estimate the effect on mean viral load of vaccine versus placebo after infection with human immunodeficiency virus (the event E) in a placebo-controlled randomized acquired immune deficiency syndrome vaccine trial. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
207. Principal Stratification Designs to Estimate Input Data Missing Due to Death.
- Author
- Frangakis, Constantine E., Rubin, Donald B., An, Ming-Wen, and MacKenzie, Ellen
- Subjects
- WOUNDS & injuries, DEATH, MATHEMATICAL variables, MORTALITY, PREDICTIVE validity, COHORT analysis
- Abstract
We consider studies of cohorts of individuals after a critical event, such as an injury, with the following characteristics. First, the studies are designed to measure “input” variables, which describe the period before the critical event, and to characterize the distribution of the input variables in the cohort. Second, the studies are designed to measure “output” variables, primarily mortality after the critical event, and to characterize the predictive (conditional) distribution of mortality given the input variables in the cohort. Such studies often possess the complication that the input data are missing for those who die shortly after the critical event because the data collection takes place after the event. Standard methods of dealing with the missing inputs, such as imputation or weighting methods based on an assumption of ignorable missingness, are known to be generally invalid when the missingness of inputs is nonignorable, that is, when the distribution of the inputs is different between those who die and those who live. To address this issue, we propose a novel design that obtains and uses information on an additional key variable—a treatment or externally controlled variable, which if set at its “effective” level, could have prevented the death of those who died. We show that the new design can be used to draw valid inferences for the marginal distribution of inputs in the entire cohort, and for the conditional distribution of mortality given the inputs, also in the entire cohort, even under nonignorable missingness. The crucial framework that we use is principal stratification based on the potential outcomes, here mortality under both levels of treatment. We also show using illustrative preliminary injury data that our approach can reveal results that are more reasonable than the results of standard methods, in relatively dramatic ways. Thus, our approach suggests that the routine collection of data on variables that could be used as possible treatments in such studies of inputs and mortality should become common. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
208. Application of the Principal Stratification Approach to the Faenza Randomized Experiment on Breast Self-Examination.
- Author
- Mattei, A. and Mealli, F.
- Subjects
- MEDICAL self-examination, MISSING data (Statistics), NONCOMPLIANCE, CAUSAL models, DATA analysis
- Abstract
In this article we present an extended framework based on the principal stratification approach (Frangakis and Rubin, 2002, Biometrics 58, 21–29), for the analysis of data from randomized experiments which suffer from treatment noncompliance, missing outcomes following treatment noncompliance, and “truncation by death.” We are not aware of any previous work that addresses all these complications jointly. This framework is illustrated in the context of a randomized trial of breast self-examination. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
209. Sensitivity Analyses Comparing Time-to-Event Outcomes Existing Only in a Subset Selected Postrandomization.
- Author
- Shepherd, Bryan E., Gilbert, Peter B., and Lumley, Thomas
- Subjects
- AIDS, PREVENTIVE medicine, HIV, ANTIVIRAL agents, PROBABILITY theory, MEDICAL research, STATISTICS
- Abstract
In some randomized studies, researchers are interested in determining the effect of treatment assignment on outcomes that may exist only in a subset chosen after randomization. For example, in preventative human immunodeficiency virus (HIV) vaccine efficacy trials, it is of interest to determine whether randomization to vaccine affects postinfection outcomes that may be right-censored. Such outcomes in these trials include time from infection diagnosis to initiation of antiretroviral therapy and time from infection diagnosis to acquired immune deficiency syndrome. Here we present sensitivity analysis methods for making causal comparisons on these postinfection outcomes. We focus on estimating the survival causal effect, defined as the difference between probabilities of not yet experiencing the event in the vaccine and placebo arms, conditional on being infected regardless of treatment assignment. This group is referred to as the always-infected principal stratum. Our key assumption is monotonicity: that subjects randomized to the vaccine arm who become infected would have been infected had they been randomized to placebo. We propose nonparametric, semiparametric, and parametric methods for estimating the survival causal effect. We apply these methods to the first Phase III preventative HIV vaccine trial, VaxGen's trial of AIDSVAX B/B. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
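In notation we are assuming for illustration (S(z) is the infection indicator and T(z) the post-infection time to event under assignment z, with z = 1 for vaccine), the estimand described above can be written as:

```latex
% Survival causal effect in the always-infected principal stratum (assumed notation).
\[
  \mathrm{SCE}(t) = \Pr\{T(1) > t \mid S(1) = S(0) = 1\} \;-\; \Pr\{T(0) > t \mid S(1) = S(0) = 1\}.
\]
% Under the monotonicity assumption stated in the abstract, every vaccine-arm infection
% would also have occurred under placebo, so all vaccine-arm infecteds belong to the
% always-infected stratum; the sensitivity analysis then concerns which placebo-arm
% infecteds belong to it.
```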
210. Estimating efficacy in a proposed randomized trial with initial and later non-compliance.
- Author
- Baker, Stuart G., Frangakis, Constantine, and Lindeman, Karen S.
- Subjects
- CESAREAN section, OBSTETRICS, HEALTH outcome assessment, RANDOMIZED controlled trials, NONCOMPLIANCE, LABOR (Obstetrics), PROBABILITY theory
- Abstract
A controversial topic in obstetrics is the effect of walking on the probability of Caesarean section among women in labour. A major reason for the controversy is the presence of non-compliance that complicates the estimation of efficacy, the effect of treatment received on outcome. The intent-to-treat method does not estimate efficacy, and estimates of efficacy that are based directly on treatment received may be biased because they are not protected by randomization. However, when non-compliance occurs immediately after randomization, the use of a potential outcomes model with reasonable assumptions has made it possible to estimate efficacy and still to retain the benefits of randomization to avoid selection bias. In this obstetrics application, non-compliance occurs initially and later in one arm. Consequently some parameters cannot be uniquely estimated without making strong assumptions. This difficulty is circumvented by a new study design involving an additional randomization group and a novel potential outcomes model (principal stratification). [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
211. Defining and Estimating Intervention Effects for Groups that will Develop an Auxiliary Outcome.
- Author
- Joffe, Marshall M., Small, Dylan, and Chi-Yuan Hsu
- Subjects
- DECISION theory, MATHEMATICAL variables, ESTIMATION theory, CANCER, NEPHROLOGY, POPULATION
- Abstract
It has recently become popular to define treatment effects for subsets of the target population characterized by variables not observable at the time a treatment decision is made. Characterizing and estimating such treatment effects is tricky; the most popular but naive approach inappropriately adjusts for variables affected by treatment and so is biased. We consider several appropriate ways to formalize the effects: principal stratification, stratification on a single potential auxiliary variable, stratification on an observed auxiliary variable and stratification on expected levels of auxiliary variables. We then outline identifying assumptions for each type of estimand. We evaluate the utility of these estimands and estimation procedures for decision making and understanding causal processes, contrasting them with the concepts of direct and indirect effects. We motivate our development with examples from nephrology and cancer screening, and use simulated data and real data on cancer screening to illustrate the estimation methods. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
212. Eliciting a Counterfactual Sensitivity Parameter.
- Author
- Shepherd, Bryan E., Gilbert, Peter B., and Mehrotra, Devan V.
- Subjects
- ANALYSIS of variance, RANDOMIZED controlled trials, HIV, VIRAL vaccines, MEDICAL research, MEDICAL statistics
- Abstract
Sensitivity analyses, wherein estimation is performed for each of a range of values of a sensitivity parameter, are of particular use in causal inference, where estimands often are identified because of untestable assumptions. Sensitivity parameters may have counterfactual interpretations, making their elicitation especially challenging. This article describes our experience eliciting a counterfactual sensitivity parameter to be used in the analysis of an ongoing HIV vaccine trial. We include instructions given to 10 subject-matter experts, the chosen ranges of our eight responders and some of their comments, their ranges applied to data from an earlier trial, and a brief discussion of some general issues regarding counterfactual elicitation. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
213. Population Stratification, Adjustment for
- Author
- Todd L. Edwards and Xiaoyi Gao
- Subjects
- Geography, Association (object-oriented programming), Principal stratification, Statistics, Principal component analysis, Score method, Econometrics, Multidimensional scaling, Population stratification, Stratification (mathematics), Genetic association
- Abstract
Population stratification is a major concern in genetic association studies. Failure to control it effectively can lead to excess false-positive results and failure to detect true associations. Many methods have been designed to adjust for population stratification, which mainly belong to the following categories: (1) genomic control, (2) structured association, (3) principal component or multidimensional scaling adjustment, (4) stratification score method and (5) other approaches. No method is likely to be superior in all situations. Care needs to be taken to ensure that the assumptions of the method are met and that the method is used for its intended purpose. Keywords: population stratification; genomic control; structured association; principal components; multidimensional scaling; stratification score
- Published
- 2017
- Full Text
- View/download PDF
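The principal component adjustment listed as category (3) above is straightforward to sketch. The snippet below is a minimal illustration on synthetic data (variable names and model choices are our assumptions, not code from the entry): compute the top principal components of the genotype matrix and include them as ancestry covariates in the single-SNP association model.

```python
# Minimal sketch of principal component adjustment for population stratification.
# Synthetic data; variable names and model choices are illustrative assumptions.
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_subjects, n_snps = 500, 1000
genotypes = rng.binomial(2, 0.3, size=(n_subjects, n_snps)).astype(float)  # allele counts 0/1/2
phenotype = rng.binomial(1, 0.2, size=n_subjects)                          # binary trait
test_snp = genotypes[:, 0]                                                 # SNP being tested

# Top principal components of the standardized genotype matrix serve as ancestry covariates.
standardized = (genotypes - genotypes.mean(axis=0)) / (genotypes.std(axis=0) + 1e-8)
ancestry_pcs = PCA(n_components=10).fit_transform(standardized)

# Single-SNP logistic association test, adjusted for the ancestry PCs.
design = sm.add_constant(np.column_stack([test_snp, ancestry_pcs]))
fit = sm.Logit(phenotype, design).fit(disp=0)
print("SNP log-odds ratio:", fit.params[1], "p-value:", fit.pvalues[1])
```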
214. Inferring causal directions from uncertain data
- Author
- Guiming Luo, Weifeng Ma, and Yulai Zhang
- Subjects
Uncertain data ,Computer science ,Principal stratification ,05 social sciences ,Inference ,Regression analysis ,Probability density function ,010501 environmental sciences ,computer.software_genre ,01 natural sciences ,Causal system ,Artificial Intelligence ,Control and Systems Engineering ,0502 economics and business ,Data mining ,Noise (video) ,050207 economics ,Electrical and Electronic Engineering ,computer ,0105 earth and related environmental sciences - Abstract
Causal knowledge discovery is an essential task in many disciplines. Inferring the knowledge of causal directions from the measurement data of two correlated variables is one of the most basic but non-trivial problems in the research of causal discovery. Most of the existing methods assume that at least one of the variables is strictly measured. In practice, uncertain data with observation error widely exists and is unavoidable for both the cause and the effect. Correct causal relationships will be blurred by such noise. A causal direction inference method based on the errors-in-variables (EIV) model is proposed in this work. All variables are assumed to be measured with observation errors in the errors-in-variables models. Causal directions will be inferred by computing the correlation coefficients between the regression model functions and the probability density functions on both of the possible causal directions. Experiments are done on artificial data sets and real-world data sets to illustrate the performance of the proposed method.
- Published
- 2017
- Full Text
- View/download PDF
215. Principal Score Methods: Assumptions, Extensions, and Practical Considerations
- Author
- Avi Feller, Luke Miratrix, and Fabrizia Mealli
- Subjects
Program evaluation ,Computer science ,Principal stratification ,05 social sciences ,Principal (computer security) ,050401 social sciences methods ,Conditional probability ,Context (language use) ,01 natural sciences ,Ignorability ,Education ,010104 statistics & probability ,0504 sociology ,Causal inference ,Covariate ,Econometrics ,0101 mathematics ,Social Sciences (miscellaneous) - Abstract
Researchers addressing posttreatment complications in randomized trials often turn to principal stratification to define relevant assumptions and quantities of interest. One approach for the subsequent estimation of causal effects in this framework is to use methods based on the “principal score,” the conditional probability of belonging to a certain principal stratum given covariates. These methods typically assume that stratum membership is as good as randomly assigned, given these covariates. We clarify the key assumption in this context, known as principal ignorability, and argue that versions of this assumption are quite strong in practice. We describe these concepts in terms of both one- and two-sided noncompliance and propose a novel approach for researchers to “mix and match” principal ignorability assumptions with alternative assumptions, such as the exclusion restriction. Finally, we apply these ideas to randomized evaluations of a job training program and an early childhood education program. O...
- Published
- 2017
- Full Text
- View/download PDF
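As a concrete sketch of estimation with a principal score, under one-sided noncompliance, monotonicity, and principal ignorability (the strong assumption the abstract cautions about): fit a compliance model given covariates in the treatment arm, then weight control-arm units by the predicted score. Data and column names below are hypothetical.

```python
# Sketch of principal score weighting under one-sided noncompliance.
# Assumes monotonicity (no treatment uptake in the control arm) and principal
# ignorability (stratum membership independent of potential outcomes given X).
# Data and column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def complier_average_effect(df: pd.DataFrame) -> float:
    """df columns: 'z' (assignment), 'd' (treatment received), 'y' (outcome), 'x1', 'x2'."""
    covariates = ["x1", "x2"]
    treated = df[df["z"] == 1]
    control = df[df["z"] == 0]

    # Principal score Pr(complier | X): under monotonicity, compliance status is
    # observed in the treatment arm, so the model can be fit there.
    score_model = LogisticRegression(max_iter=1000).fit(treated[covariates], treated["d"])

    # Complier mean under treatment: observed compliers in the treatment arm.
    mean_y1 = treated.loc[treated["d"] == 1, "y"].mean()

    # Complier mean under control: compliers are not individually identified, so weight
    # control units by their estimated principal score (principal ignorability).
    weights = score_model.predict_proba(control[covariates])[:, 1]
    mean_y0 = np.average(control["y"], weights=weights)

    return mean_y1 - mean_y0
```

This is the weighting estimator in its simplest form; the abstract above stresses that principal ignorability can be strong in practice and discusses mixing it with alternatives such as the exclusion restriction.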
216. Links between causal effects and causal association for surrogacy evaluation in a gaussian setting
- Author
- Anna Conlon, Karla Diaz-Ordaz, Michael R. Elliott, Jeremy M. G. Taylor, and Yun Li
- Subjects
Statistics and Probability ,Estimation ,Epidemiology ,Gaussian ,Principal stratification ,01 natural sciences ,Outcome (probability) ,010104 statistics & probability ,03 medical and health sciences ,symbols.namesake ,0302 clinical medicine ,Causal inference ,Similarity (psychology) ,Econometrics ,symbols ,Identifiability ,030212 general & internal medicine ,0101 mathematics ,Causal model ,Mathematics - Abstract
Two paradigms for the evaluation of surrogate markers in randomized clinical trials have been proposed: the causal effects paradigm and the causal association paradigm. Each of these paradigms relies on assumptions that must be made to proceed with estimation and to validate a candidate surrogate marker (S) for the true outcome of interest (T). We consider the setting in which S and T are Gaussian and are generated from structural models that include an unobserved confounder. Under the assumed structural models, we relate the quantities used to evaluate surrogacy within both the causal effects and causal association frameworks. We review some of the common assumptions made to aid in estimating these quantities and show that assumptions made within one framework can imply strong assumptions within the alternative framework. We demonstrate that there is a similarity, but not an exact correspondence, between the quantities used to evaluate surrogacy within each framework, and show that the conditions for identifiability of the surrogacy parameters are different from the conditions that lead to a correspondence of these quantities.
- Published
- 2017
- Full Text
- View/download PDF
217. Augmented Designs to Assess Immune Response in Vaccine Trials.
- Author
- Follmann, Dean
- Subjects
- VACCINATION, CLINICAL trials, IMMUNE response, HIV infections, MEDICAL research, DESIGN
- Abstract
This article introduces methods for use in vaccine clinical trials to help determine whether the immune response to a vaccine is actually causing a reduction in the infection rate. This is not easy because immune response to the (say HIV) vaccine is only observed in the HIV vaccine arm. If we knew what the HIV-specific immune response in placebo recipients would have been, had they been vaccinated, this immune response could be treated essentially like a baseline covariate and an interaction with treatment could be evaluated. Relatedly, the rate of infection by this baseline covariate could be compared between the two groups and a causative role of immune response would be supported if infection risk decreased with increasing HIV immune response only in the vaccine group. We introduce two methods for inferring this HIV-specific immune response. The first involves vaccinating everyone before baseline with an irrelevant vaccine, for example, rabies. Randomization ensures that the relationship between the immune responses to the rabies and HIV vaccines observed in the vaccine group is the same as what would have been seen in the placebo group. We infer a placebo volunteer's response to the HIV vaccine using their rabies response and a prediction model from the vaccine group. The second method entails vaccinating all uninfected placebo patients at the closeout of the trial with the HIV vaccine and recording immune response. We pretend this immune response at closeout is what they would have had at baseline. We can then infer what the distribution of immune response among placebo infecteds would have been. Such designs may help elucidate the role of immune response in preventing infections. More pointedly, they could be helpful in the decision to improve or abandon an HIV vaccine with mediocre performance in a phase III trial. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
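A deliberately simplified sketch of the first design idea above (predicting the HIV-specific response of placebo recipients from their response to the irrelevant baseline vaccine); the two-step plug-in and the variable names are our illustration, not the estimation procedure used in the article.

```python
# Simplified sketch of the baseline-predictor idea: use the response to an irrelevant
# baseline vaccine (e.g., rabies), measured in everyone, to predict the HIV-specific
# response placebo recipients would have had if vaccinated, then test whether infection
# risk varies with the (predicted) immune response differently by arm.
# Variable names and the two-step plug-in are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression

def augmented_design_analysis(df: pd.DataFrame):
    """df columns: 'arm' (1 = HIV vaccine, 0 = placebo), 'rabies' (baseline response),
    'hiv_response' (observed only in the vaccine arm), 'infected' (0/1)."""
    vaccine_arm = df[df["arm"] == 1]

    # Step 1: learn how the rabies response predicts the HIV-specific response.
    predictor = LinearRegression().fit(vaccine_arm[["rabies"]], vaccine_arm["hiv_response"])

    # Step 2: impute the response each subject would have had if vaccinated, treat it
    # like a baseline covariate, and test its interaction with treatment assignment.
    df = df.copy()
    df["s_hat"] = predictor.predict(df[["rabies"]])
    design = sm.add_constant(
        np.column_stack([df["arm"], df["s_hat"], df["arm"] * df["s_hat"]])
    )
    fit = sm.Logit(df["infected"], design).fit(disp=0)
    return fit  # the interaction term probes a causal role of the immune response
```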
218. A Comparison of Eight Methods for the Dual-Endpoint Evaluation of Efficacy in a Proof-of-Concept HIV Vaccine Trial.
- Author
- Mehrotra, Devan V., Xiaoming Li, and Gilbert, Peter B.
- Subjects
- HIV infections, VACCINATION, HIV, CELLULAR immunity, DISEASES
- Abstract
To support the design of the world's first proof-of-concept (POC) efficacy trial of a cell-mediated immunity-based HIV vaccine, we evaluate eight methods for testing the composite null hypothesis of no-vaccine effect on either the incidence of HIV infection or the viral load set point among those infected, relative to placebo. The first two methods use a single test applied to the actual values or ranks of a burden-of-illness (BOI) outcome that combines the infection and viral load endpoints. The other six methods combine separate tests for the two endpoints using unweighted or weighted versions of the two-part z, Simes', and Fisher's methods. Based on extensive simulations that were used to design the landmark POC trial, the BOI methods are shown to have generally low power for rejecting the composite null hypothesis (and hence advancing the vaccine to a subsequent large-scale efficacy trial). The unweighted Simes' and Fisher's combination methods perform best overall. Importantly, this conclusion holds even after the test for the viral load component is adjusted for bias that can be introduced by conditioning on a postrandomization event (HIV infection). The adjustment is derived using a selection bias model based on the principal stratification framework of causal inference. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
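For reference, the unweighted Simes and Fisher combinations of two endpoint p-values, reported above as performing best overall, can be computed in a few lines; the code below is a generic sketch, not taken from the paper.

```python
# Generic two-endpoint p-value combinations (unweighted Simes and Fisher).
# p1 and p2 are the component p-values (e.g., infection and viral load endpoints);
# the example values are illustrative.
import math
from scipy.stats import chi2

def simes_two(p1: float, p2: float) -> float:
    """Unweighted Simes combination of two p-values: min(2 * p(1), p(2))."""
    lo, hi = min(p1, p2), max(p1, p2)
    return min(2 * lo, hi)

def fisher_two(p1: float, p2: float) -> float:
    """Fisher's combination: -2 * sum(log p) is chi-squared with 4 df under the null."""
    statistic = -2.0 * (math.log(p1) + math.log(p2))
    return chi2.sf(statistic, df=4)

# Reject the composite null at level 0.05 if the combined p-value falls below 0.05.
print(simes_two(0.03, 0.20), fisher_two(0.03, 0.20))
```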
219. Identification and estimation of causal effects with outcomes truncated by death
- Author
- Thomas S. Richardson, Linbo Wang, and Xiao-Hua Zhou
- Subjects
Statistics and Probability ,FOS: Computer and information sciences ,Survivor average causal effect ,General Mathematics ,media_common.quotation_subject ,Principal stratification ,01 natural sciences ,Methodology (stat.ME) ,010104 statistics & probability ,03 medical and health sciences ,0302 clinical medicine ,Statistics ,Model parameterization ,030212 general & internal medicine ,0101 mathematics ,Statistics - Methodology ,media_common ,Mathematics ,Selection bias ,Applied Mathematics ,Substitution (logic) ,Instrumental variable ,Articles ,Agricultural and Biological Sciences (miscellaneous) ,Identification (information) ,Variable (computer science) ,Causal inference ,Observational study ,Statistics, Probability and Uncertainty ,General Agricultural and Biological Sciences - Abstract
It is common in medical studies that the outcome of interest is truncated by death, meaning that a subject has died before the outcome could be measured. In this case, restricted analysis among survivors may be subject to selection bias. Hence, it is of interest to estimate the survivor average causal effect, defined as the average causal effect among the subgroup consisting of subjects who would survive under either exposure. In this paper, we consider the identification and estimation problems of the survivor average causal effect. We propose to use a substitution variable in place of the latent membership in the always-survivor group. The identification conditions required for a substitution variable are conceptually similar to conditions for a conditional instrumental variable, and may apply to both randomized and observational studies. We show that the survivor average causal effect is identifiable with use of such a substitution variable, and propose novel model parameterizations for estimation of the survivor average causal effect under our identification assumptions. Our approaches are illustrated via simulation studies and a data analysis.
- Published
- 2017
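In potential-outcome notation that we are assuming for illustration (S(a) is survival and Y(a) the outcome under exposure a), the estimand of this paper is:

```latex
% Survivor average causal effect (SACE), defined on the always-survivor stratum.
\[
  \mathrm{SACE} = E\{Y(1) - Y(0) \mid S(1) = 1,\; S(0) = 1\}.
\]
% Membership in the always-survivor stratum is latent; per the abstract, the
% substitution variable plays a role analogous to a conditional instrumental
% variable in identifying this quantity.
```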
220. Sensitivity Analyses Comparing Outcomes Only Existing in a Subset Selected Post-Randomization, Conditional on Covariates, with Application to HIV Vaccine Trials.
- Author
- Shepherd, Bryan E., Gilbert, Peter B., Jemiai, Yannis, and Rotnitzky, Andrea
- Subjects
- VIRAL load, HIV infections, VACCINATION, REGRESSION analysis, CLINICAL trials
- Abstract
In many experiments, researchers would like to compare, between treatments, an outcome that only exists in a subset of participants selected after randomization. For example, in preventive HIV vaccine efficacy trials it is of interest to determine whether randomization to vaccine causes lower HIV viral load, a quantity that only exists in participants who acquire HIV. To make a causal comparison and account for potential selection bias we propose a sensitivity analysis following the principal stratification framework set forth by Frangakis and Rubin (2002, Biometrics 58, 21–29). Our goal is to assess the average causal effect of treatment assignment on viral load at a given baseline covariate level in the always infected principal stratum (those who would have been infected whether they had been assigned to vaccine or placebo). We assume stable unit treatment values (SUTVA), randomization, and that subjects randomized to the vaccine arm who became infected would also have become infected if randomized to the placebo arm (monotonicity). It is not known which of those subjects infected in the placebo arm are in the always infected principal stratum, but this can be modeled conditional on covariates, the observed viral load, and a specified sensitivity parameter. Under parametric regression models for viral load, we obtain maximum likelihood estimates of the average causal effect conditional on covariates and the sensitivity parameter. We apply our methods to the world's first phase III HIV vaccine trial. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
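To make the sensitivity-parameter idea concrete, the estimand and one commonly used logistic selection model are sketched below; the parameterization is our illustration and may differ from the paper's exact form.

```latex
% Estimand: average causal effect on viral load within the always-infected stratum,
% at baseline covariate level X = x (assumed notation).
\[
  \mathrm{ACE}(x) = E\{Y(1) - Y(0) \mid S(1) = S(0) = 1,\; X = x\}.
\]
% Illustrative selection model (not necessarily the paper's exact parameterization):
% among subjects infected under placebo, membership in the always-infected stratum is
% tilted by the placebo viral load through a fixed, analyst-specified beta:
\[
  \Pr\{S(1) = 1 \mid S(0) = 1,\; Y(0) = y,\; X = x\} = \mathrm{expit}\{\alpha(x) + \beta\, y\},
\]
% with alpha(x) pinned down by the identifiable infection probabilities under
% monotonicity, and the analysis repeated over a plausible range of beta.
```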
221. Polydesigns and Causal Inference.
- Author
- Fan Li and Frangakis, Constantine E.
- Subjects
- NEEDLE exchange programs, CAUSAL models, INFERENCE (Logic), METHODOLOGY, MEDICAL research
- Abstract
In an increasingly common class of studies, the goal is to evaluate causal effects of treatments that are only partially controlled by the investigator. In such studies there are two conflicting features: (1) a model on the full cohort design and data can identify the causal effects of interest, but can be sensitive to extreme regions of that design's data, where model specification can have more impact; and (2) models on a reduced design (i.e., a subset of the full data), for example, conditional likelihood on matched subsets of data, can avoid such sensitivity, but do not generally identify the causal effects. We propose a framework to assess how inference is sensitive to designs by exploring combinations of both the full and reduced designs. We show that using such a “polydesign” framework generates a rich class of methods that can identify causal effects and that can also be more robust to model specification than methods using only the full design. We discuss implementation of polydesign methods, and provide an illustration in the evaluation of a needle exchange program. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
222. Causal Vaccine Effects on Binary Postinfection Outcomes.
- Author
- Hudgens, Michael G. and Halloran, M. Elizabeth
- Subjects
- PUBLIC health, VACCINES, MEDICAL research, CLINICAL trials, ROTAVIRUSES, WHOOPING cough vaccines
- Abstract
The effects of vaccine on postinfection outcomes, such as disease, death, and secondary transmission to others, are important scientific and public health aspects of prophylactic vaccination. As a result, evaluation of many vaccine effects condition on being infected. Conditioning on an event that occurs posttreatment (in our case, infection subsequent to assignment to vaccine or control) can result in selection bias. Moreover, because the set of individuals who would become infected if vaccinated is likely not identical to the set of those who would become infected if given control, comparisons that condition on infection do not have a causal interpretation. In this article we consider identifiability and estimation of causal vaccine effects on binary postinfection outcomes. Using the principal stratification framework, we define a postinfection causal vaccine efficacy estimand in individuals who would be infected regardless of treatment assignment. The estimand is shown to be not identifiable under the standard assumptions of the stable unit treatment value, monotonicity, and independence of treatment assignment. Thus selection models are proposed that identify the causal estimand. Closed-form maximum likelihood estimators (MLEs) are then derived under these models, including those assuming maximum possible levels of positive and negative selection bias. These results show the relations between the MLE of the causal estimand and two commonly used estimators for vaccine effects on postinfection outcomes. For example, the usual intent-to-treat estimator is shown to be an upper bound on the postinfection causal vaccine effect provided that the magnitude of protection against infection is not too large. The methods are used to evaluate postinfection vaccine effects in a clinical trial of a rotavirus vaccine candidate and in a field study of a pertussis vaccine. Our results show that pertussis vaccination has a significant causal effect in reducing disease severity. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
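One way to write the postinfection estimand described above, in notation we are assuming (S(z) is infection and Y(z) the binary postinfection outcome under assignment z, with z = 1 for vaccine):

```latex
% Causal vaccine efficacy on a binary postinfection outcome, defined in the stratum of
% subjects who would become infected under either assignment (assumed notation).
\[
  \mathrm{VE}_I = 1 - \frac{\Pr\{Y(1) = 1 \mid S(1) = S(0) = 1\}}
                           {\Pr\{Y(0) = 1 \mid S(1) = S(0) = 1\}}.
\]
% Stratum membership is unobserved, so this quantity is identified only under added
% selection models; the paper relates its MLE to common estimators such as the
% intent-to-treat contrast on the postinfection outcome.
```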
223. Validation of surrogate endpoints in cancer clinical trials via principal stratification with an application to a prostate cancer trial
- Author
- Shiro Tanaka, Yutaka Matsuyama, and Yasuo Ohashi
- Subjects
Male ,Statistics and Probability ,Oncology ,medicine.medical_specialty ,Endpoint Determination ,Epidemiology ,Principal stratification ,Estimating equations ,computer.software_genre ,01 natural sciences ,010104 statistics & probability ,03 medical and health sciences ,Prostate cancer ,0302 clinical medicine ,Meta-Analysis as Topic ,Neoplasms ,Internal medicine ,medicine ,Clinical endpoint ,Humans ,0101 mathematics ,Proportional Hazards Models ,Randomized Controlled Trials as Topic ,business.industry ,Surrogate endpoint ,Prostatic Neoplasms ,Cancer ,Prostate-Specific Antigen ,medicine.disease ,Causality ,Logistic Models ,Clinical Trials, Phase III as Topic ,030220 oncology & carcinogenesis ,Causal inference ,Data mining ,business ,computer ,Algorithms ,Biomarkers - Abstract
Increasing attention has been focused on the use and validation of surrogate endpoints in cancer clinical trials. Previous literature on validation of surrogate endpoints is classified into four approaches: the proportion explained approach; the indirect effects approach; the meta-analytic approach; and the principal stratification approach. The mainstream in cancer research has seen the application of a meta-analytic approach. However, VanderWeele (2013) showed that all four of these approaches potentially suffer from the surrogate paradox. It was also shown that, if a principal surrogate satisfies additional criteria called one-sided average causal sufficiency, the surrogate cannot exhibit a surrogate paradox. Here, we propose a method for estimating principal effects under a monotonicity assumption. Specifically, we consider cancer clinical trials which compare a binary surrogate endpoint and a time-to-event clinical endpoint under two naturally ordered treatments (e.g. combined therapy vs. monotherapy). Estimation based on a mean score estimating equation will be implemented by the expectation-maximization algorithm. We will also apply the proposed method as well as other surrogacy criteria to evaluate the surrogacy of prostate-specific antigen using data from a phase III advanced prostate cancer trial, clarifying the complementary roles of both the principal stratification and meta-analytic approaches in the evaluation of surrogate endpoints in cancer. Copyright © 2017 John Wiley & Sons, Ltd.
- Published
- 2017
- Full Text
- View/download PDF
224. Latent Class Survival Models Linked by Principal Stratification to Investigate Heterogenous Survival Subgroups Among Individuals With Early-Stage Kidney Cancer
- Author
- Robert G. Uzzo, Yu-Ning Wong, and Brian L. Egleston
- Subjects
Statistics and Probability ,Oncology ,medicine.medical_specialty ,Extramural ,business.industry ,Principal stratification ,medicine.disease ,01 natural sciences ,Article ,010104 statistics & probability ,03 medical and health sciences ,0302 clinical medicine ,Internal medicine ,Statistics ,medicine ,030212 general & internal medicine ,0101 mathematics ,Statistics, Probability and Uncertainty ,Stage (cooking) ,business ,Kidney cancer ,Survival analysis - Abstract
Rates of kidney cancer have been increasing, with small incidental tumors experiencing the fastest growth rates. Much of the increase could be due to increased use of CT scans, MRIs, and ultrasounds for unrelated conditions. Many tumors might never have been detected or become symptomatic in the past. This suggests that many patients might benefit from less aggressive therapy, such as active surveillance by which tumors are surgically removed only if they become sufficiently large. However, it has been difficult for clinicians to identify subgroups of patients for whom treatment might be especially beneficial or harmful. In this work, we use a principal stratification framework to estimate the proportion and characteristics of individuals who have large or small hazard rates of death in two treatment arms. This allows us to assess who might be helped or harmed by aggressive treatment. We also use Weibull mixture models. This work differs from much previous work in that the survival classes upon which principal stratification is based are latent variables. That is, survival class is not an observed variable. We apply this work using Surveillance Epidemiology and End Results-Medicare claims data. Clinicians can use our methods for investigating treatments with heterogeneous effects.
- Published
- 2017
- Full Text
- View/download PDF
225. Estimating marginal causal effects in a secondary analysis of case-control data
- Author
- Torbjörn Lind, Ingeborg Waernbaum, and Emma Persson
- Subjects
Statistics and Probability ,Matching (statistics) ,Epidemiology ,Average treatment effect ,Principal stratification ,Causal effect ,food and beverages ,030209 endocrinology & metabolism ,01 natural sciences ,010104 statistics & probability ,03 medical and health sciences ,0302 clinical medicine ,Secondary analysis ,Propensity score matching ,Statistics ,Econometrics ,0101 mathematics ,Case control data ,Event (probability theory) ,Mathematics - Abstract
When an initial case-control study is performed, data can be used in a secondary analysis to evaluate the effect of the case-defining event on later outcomes. In this paper, we study the example in ...
- Published
- 2017
- Full Text
- View/download PDF
226. Effects of Kindergarten Retention Policy on Children's Cognitive Growth in Reading and Mathematics.
- Author
- Hong, Guanglei and Raudenbush, Stephen W.
- Subjects
- GRADING of students, EARLY childhood education, PRESCHOOLS, GRADE repetition, EDUCATIONAL psychology, MATHEMATICS education, LONGITUDINAL method
- Abstract
Grade retention has been controversial for many years, and current calls to end social promotion have lent new urgency to this issue. On the one hand, a policy of retaining in grade those students making slow progress might facilitate instruction by making classrooms more homogeneous academically. On the other hand, grade retention might harm high-risk students by limiting their learning opportunities. Analyzing data from the US Early Childhood Longitudinal Study Kindergarten cohort with the technique of multilevel propensity score stratification, we find no evidence that a policy of grade retention in kindergarten improves average achievement in mathematics or reading. Nor do we find evidence that the policy benefits children who would be promoted under the policy. However, the evidence does suggest that children who are retained learn less than they would have had they instead been promoted. The negative effect of grade retention on those retained has little influence on the overall mean achievement of children attending schools with a retention policy because the fraction of children retained in those schools is quite small. Nevertheless, the effect of retention on the retainees is considerable. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
227. Causal Inference Using Potential Outcomes: Design, Modeling, Decisions.
- Author
- Rubin, Donald B.
- Subjects
- ANALYSIS of covariance, REGRESSION analysis, MATHEMATICAL statistics, MULTIPLE comparisons (Statistics), STATISTICS, EXPERIMENTAL design, BAYESIAN analysis, VALUES (Ethics)
- Abstract
Causal effects are defined as comparisons of potential outcomes under different treatments on a common set of units. Observed values of the potential outcomes are revealed by the assignment mechanism: a probabilistic model for the treatment each unit receives as a function of covariates and potential outcomes. Fisher made tremendous contributions to causal inference through his work on the design of randomized experiments, but the potential outcomes perspective applies to other complex experiments and nonrandomized studies as well. As noted by Kempthorne in his 1976 discussion of Savage's Fisher lecture, Fisher never bridged his work on experimental design and his work on parametric modeling, a bridge that appears nearly automatic with an appropriate view of the potential outcomes framework, where the potential outcomes and covariates are given a Bayesian distribution to complete the model specification. Also, this framework crisply separates scientific inference for causal effects and decisions based on such inference, a distinction evident in Fisher's discussion of tests of significance versus tests in an accept/reject framework. But Fisher never used the potential outcomes framework, originally proposed by Neyman in the context of randomized experiments, and as a result he provided generally flawed advice concerning the use of the analysis of covariance to adjust for posttreatment concomitants in randomized trials. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
228. Direct and Indirect Causal Effects via Potential Outcomes.
- Author
- Rubin, Donald B.
- Subjects
- ANTHRAX vaccines, BIOMARKERS, CAUSATION (Philosophy), CAUSAL models, STATISTICS, HEALTH outcome assessment, PHARMACOLOGY, MEDICAL sciences
- Abstract
The use of the concept of ‘direct’ versus ‘indirect’ causal effects is common, not only in statistics but also in many areas of social and economic sciences. The related terms of ‘biomarkers’ and ‘surrogates’ are common in pharmacological and biomedical sciences. Sometimes this concept is represented by graphical displays of various kinds. The view here is that there is a great deal of imprecise discussion surrounding this topic and, moreover, that the most straightforward way to clarify the situation is by using potential outcomes to define causal effects. In particular, I suggest that the use of principal stratification is key to understanding the meaning of direct and indirect causal effects. A current study of anthrax vaccine will be used to illustrate ideas. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
229. Methodology for Evaluating a Partially Controlled Longitudinal Treatment Using Principal Stratification, With Application to a Needle Exchange Program.
- Author
- Frangakis, Constantine E., Brookmeyer, Ronald S., Varadhan, Ravi, Safaeian, Mahboobeh, Vlahov, David, and Strathdee, Steffanie A.
- Subjects
- NEEDLE exchange programs, HIV, STATISTICS, LONGITUDINAL method
- Abstract
We consider studies for evaluating the short-term effect of a treatment of interest on a time-to-event outcome. The studies we consider are partially controlled in the following sense: (1) Subjects' exposure to the treatment of interest can vary over time, but this exposure is not directly controlled by the study; (2) subjects' follow-up time is not directly controlled by the study; and (3) the study directly controls another factor that can affect subjects' exposure to the treatment of interest as well as subjects' follow-up time. When factors (1) and (2) are both present in the study, evaluating the treatment of interest using standard methods, including instrumental variables, does not generally estimate treatment effects. We develop the methodology for estimating the effect of treatment in this setting of partially controlled studies under explicit assumptions using the framework for principal stratification for causal inference. We illustrate our methods by a study to evaluate the efficacy of the Baltimore Needle Exchange Program to reduce the risk of human immunodeficiency virus (HIV) transmission, using data on distance of the program's sites from the subjects. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
230. Conditional separable effects
- Author
- Mats J. Stensrud, James M. Robins, Aaron Sarvet, Eric J. Tchetgen Tchetgen, and Jessica G. Young
- Subjects
Statistics and Probability ,FOS: Computer and information sciences ,truncation ,Mathematics - Statistics Theory ,Statistics Theory (math.ST) ,outcomes ,identifiability ,principal stratification ,Methodology (stat.ME) ,models ,FOS: Mathematics ,identification ,mediation ,estimands ,Statistics, Probability and Uncertainty ,causal inference ,separable effects ,Statistics - Methodology - Abstract
Researchers are often interested in treatment effects on outcomes that are only defined conditional on a post-treatment event status. For example, in a study of the effect of different cancer treatments on quality of life at end of follow-up, the quality of life of individuals who die during the study is undefined. In these settings, a naive contrast of outcomes conditional on the post-treatment variable is not an average causal effect, even in a randomized experiment. Therefore the effect in the principal stratum of those who would have the same value of the post-treatment variable regardless of treatment, such as the always survivors in a truncation by death setting, is often advocated for causal inference. While this principal stratum effect is a well defined causal contrast, it is often hard to justify that it is relevant to scientists, patients or policy makers, and it cannot be identified without relying on unfalsifiable assumptions. Here we formulate alternative estimands, the conditional separable effects, that have a natural causal interpretation under assumptions that can be falsified in a randomized experiment. We provide identification results and introduce different estimators, including a doubly robust estimator derived from the nonparametric influence function. As an illustration, we estimate a conditional separable effect of chemotherapies on quality of life in patients with prostate cancer, using data from a randomized clinical trial.
- Published
- 2020
- Full Text
- View/download PDF
231. Principal Stratification Approach to Broken Randomized Experiments: A Case Study of School Choice Vouchers in New York City.
- Author
- Barnard, John, Frangakis, Constantine E., Hill, Jennifer L., and Rubin, Donald B.
- Subjects
- EDUCATION, INNER cities, CITIES & towns, EDUCATION policy, SCHOLARSHIPS
- Abstract
The precarious state of the educational system in the inner cities of the United States, as well as its potential causes and solutions, have been popular topics of debate in recent years. Part of the difficulty in resolving this debate is the lack of solid empirical evidence regarding the true impact of educational initiatives. The efficacy of so-called "school choice" programs has been a particularly contentious issue. A current multimillion dollar program, the School Choice Scholarship Foundation Program in New York, randomized the distribution of vouchers in an attempt to shed some light on this issue. This is an important time for school choice, because on June 27, 2002 the U.S. Supreme Court upheld the constitutionality of a voucher program in Cleveland that provides scholarships both to secular and religious private schools. Although this study benefits immensely from a randomized design, it suffers from complications common to such research with human subjects: noncompliance with assigned "treatments" and missing data. Recent work has revealed threats to valid estimates of experimental effects that exist in the presence of noncompliance and missing data, even when the goal is to estimate simple intention-to-treat effects. Our goal was to create a better solution when faced with both noncompliance and missing data. This article presents a model that accommodates these complications that is based on the general framework of "principal stratification" and thus relies on more plausible assumptions than standard methodology. Our analyses revealed positive effects on math scores for children who applied to the program from certain types of schools - those with average test scores below the citywide median. Among these children, the effects are stronger for children who applied in the first grade and for African-American children. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
232. Effects in Adherent Subjects
- Author
- Thomas Permutt
- Subjects
Statistics and Probability ,business.industry ,Principal stratification ,education ,Pharmaceutical Science ,Subject Characteristics ,Missing data ,behavioral disciplines and activities ,01 natural sciences ,Outcome (game theory) ,010104 statistics & probability ,03 medical and health sciences ,0302 clinical medicine ,health services administration ,mental disorders ,Medicine ,Treatment effect ,030212 general & internal medicine ,0101 mathematics ,business ,health care economics and organizations ,Dropout (neural networks) ,Clinical psychology - Abstract
Dropouts confound the treatment effect when the outcome and the dropout process both depend on subject characteristics. If dropout is unrelated to treatment, there is an unconfounded effect, but it...
- Published
- 2018
- Full Text
- View/download PDF
233. Overview of Analyses for Principal Stratification Intercurrent Event Strategies
- Author
- Geert Molenberghs, Craig Mallinckrodt, Bohdana Ratitch, and Ilya Lipkovich
- Subjects
- History, Principal stratification, Event (relativity), Cartography
- Published
- 2019
- Full Text
- View/download PDF
234. Overview of Principal Stratification Methods
- Author
- Craig Mallinckrodt, Geert Molenberghs, Ilya Lipkovich, and Bohdana Ratitch
- Subjects
- Climatology, Principal stratification, Geology
- Published
- 2019
- Full Text
- View/download PDF
235. A comparison of methods to estimate the survivor average causal effect in the presence of missing data: a simulation study
- Author
- Robyn H. Guymer, Robert Finger, Amalia Karahalios, Jessica Kasza, Julie A. Simpson, and Myra B. McGuinness
- Subjects
Male ,Simulation study ,Epidemiology ,Principal stratification ,Iron ,Missing data ,Marginal structural model ,Health Informatics ,Logistic regression ,01 natural sciences ,Sensitivity and Specificity ,010104 statistics & probability ,03 medical and health sciences ,0302 clinical medicine ,Sex Factors ,Bias ,Covariate ,Statistics ,Medicine ,Humans ,Computer Simulation ,030212 general & internal medicine ,Unmeasured confounding ,0101 mathematics ,lcsh:R5-920 ,Models, Statistical ,Survival bias ,business.industry ,Macular degeneration ,Confounding ,Correction ,16. Peace & justice ,Survival Analysis ,Diet ,Causality ,Death ,Causal inference ,Data Interpretation, Statistical ,Propensity score matching ,Female ,business ,lcsh:Medicine (General) ,Sensitivity analysis ,Iron, Dietary ,Research Article - Abstract
Background Attrition due to death and non-attendance are common sources of bias in studies of age-related diseases. A simulation study is presented to compare two methods for estimating the survivor average causal effect (SACE) of a binary exposure (sex-specific dietary iron intake) on a binary outcome (age-related macular degeneration, AMD) in this setting. Methods A dataset of 10,000 participants was simulated 1200 times under each scenario with outcome data missing dependent on measured and unmeasured covariates and survival. Scenarios differed by the magnitude and direction of effect of an unmeasured confounder on both survival and the outcome, and whether participants who died following a protective exposure would also die if they had not received the exposure (validity of the monotonicity assumption). The performance of a marginal structural model (MSM, weighting for exposure, survival and missing data) was compared to a sensitivity approach for estimating the SACE. As an illustrative example, the SACE of iron intake on AMD was estimated using data from 39,918 participants of the Melbourne Collaborative Cohort Study. Results The MSM approach tended to underestimate the true magnitude of effect when the unmeasured confounder had opposing directions of effect on survival and the outcome. Overestimation was observed when the unmeasured confounder had the same direction of effect on survival and the outcome. Violation of the monotonicity assumption did not increase bias. The estimates were similar between the MSM approach and the sensitivity approach assessed at the sensitivity parameter of 1 (assuming no survival bias). In the illustrative example, high iron intake was found to be protective of AMD (adjusted OR 0.57, 95% CI 0.40–0.82) using complete case analysis via traditional logistic regression. The adjusted SACE odds ratio did not differ substantially from the complete case estimate, ranging from 0.54 to 0.58 for each of the SACE methods. Conclusions On average, MSMs with weighting for exposure, missing data and survival produced biased estimates of the SACE in the presence of an unmeasured survival-outcome confounder. The direction and magnitude of effect of unmeasured survival-outcome confounders should be considered when assessing exposure-outcome associations in the presence of attrition due to death.
- Published
- 2019
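A bare-bones sketch of the weighting approach compared in this simulation study (inverse-probability weights for exposure, survival, and missing outcomes multiplied together and used in a weighted outcome regression); column names, model forms, and the plug-in implementation are our assumptions, not the authors' code.

```python
# Sketch of combined inverse-probability weighting for exposure, survival, and missing
# outcome data, followed by a weighted (marginal structural) outcome model.
# Column names and model specifications are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def weighted_outcome_model(df: pd.DataFrame):
    """df columns: 'exposure', 'survived', 'observed', 'outcome', confounders 'c1', 'c2'."""
    # 1) Exposure weights: inverse probability of the exposure actually received.
    p_exp = smf.logit("exposure ~ c1 + c2", data=df).fit(disp=0).predict(df)
    w_exp = df["exposure"] / p_exp + (1 - df["exposure"]) / (1 - p_exp)

    # 2) Survival weights: inverse probability of surviving to outcome assessment.
    p_surv = smf.logit("survived ~ exposure + c1 + c2", data=df).fit(disp=0).predict(df)

    # 3) Missingness weights: inverse probability of providing the outcome, fit among survivors.
    survivors = df[df["survived"] == 1]
    p_obs = smf.logit("observed ~ exposure + c1 + c2", data=survivors).fit(disp=0).predict(df)

    # Analysis set: survivors with an observed outcome, weighted by the product of weights.
    analysis = df[(df["survived"] == 1) & (df["observed"] == 1)].copy()
    analysis["w"] = (w_exp / (p_surv * p_obs)).loc[analysis.index]

    exog = sm.add_constant(analysis["exposure"])
    fit = sm.GLM(analysis["outcome"], exog, family=sm.families.Binomial(),
                 freq_weights=analysis["w"]).fit()
    return fit  # the exposure coefficient is the weighted (MSM) estimate of the association
```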
236. Statistical methods for adjusting estimates of treatment effectiveness for patient nonadherence in the context of time-to-event outcomes and health technology assessment: A systematic review of methodological papers
- Author
- Nicholas Latimer, Paul Tappenden, James Fotheringham, Ruth Wong, Dyfrig A. Hughes, Simon Dixon, and Abualbishr Alshreef
- Subjects
medicine.medical_specialty ,Technology Assessment, Biomedical ,Principal stratification ,Cost-Benefit Analysis ,Reviews ,Context (language use) ,01 natural sciences ,medication nonadherence ,survival analysis ,Medication Adherence ,010104 statistics & probability ,03 medical and health sciences ,0302 clinical medicine ,Medicine ,Humans ,Medical physics ,030212 general & internal medicine ,0101 mathematics ,causal inference ,Event (probability theory) ,Probability ,Proportional Hazards Models ,Randomized Controlled Trials as Topic ,business.industry ,Health Policy ,cost-effectiveness analysis ,Health technology ,Cost-effectiveness analysis ,Pharmacometrics ,Clinical trial ,Treatment Outcome ,Causal inference ,noncompliance ,business - Abstract
Introduction. Medication nonadherence can have a significant negative impact on treatment effectiveness. Standard intention-to-treat analyses conducted alongside clinical trials do not make adjustments for nonadherence. Several methods have been developed that attempt to estimate what treatment effectiveness would have been in the absence of nonadherence. However, health technology assessment (HTA) needs to consider effectiveness under real-world conditions, where nonadherence levels typically differ from those observed in trials. With this analytical requirement in mind, we conducted a review to identify methods for adjusting estimates of treatment effectiveness in the presence of patient nonadherence to assess their suitability for use in HTA. Methods. A “Comprehensive Pearl Growing” technique, with citation searching and reference checking, was applied across 7 electronic databases to identify methodological papers for adjusting time-to-event outcomes for nonadherence using individual patient data. A narrative synthesis of identified methods was conducted. Methods were assessed in terms of their ability to reestimate effectiveness based on alternative, suboptimal adherence levels. Results. Twenty relevant methodological papers covering 12 methods and 8 extensions to those methods were identified. Methods are broadly classified into 4 groups: 1) simple methods, 2) principal stratification methods, 3) generalized methods (g-methods), and 4) pharmacometrics-based methods using pharmacokinetics and pharmacodynamics (PKPD) analysis. Each method makes specific assumptions and has associated limitations. Five of the 12 methods are capable of adjusting for real-world nonadherence, with only g-methods and PKPD considered appropriate for HTA. Conclusion. A range of statistical methods is available for adjusting estimates of treatment effectiveness for nonadherence, but most are not suitable for use in HTA. G-methods and PKPD appear to be more appropriate to estimate effectiveness in the presence of real-world adherence.
- Published
- 2019
237. Identifying and estimating principal causal effects in a multi-site trial of Early College High Schools
- Author
-
Lo-Hua Yuan, Avi Feller, and Luke Miratrix
- Subjects
Statistics and Probability ,Principal stratification ,media_common.quotation_subject ,education ,covariate restrictions ,Distribution (economics) ,Principal causal effects ,law.invention ,principal stratification ,Randomized controlled trial ,law ,Voting ,Leverage (statistics) ,media_common ,Actuarial science ,business.industry ,Early College High School ,Principal (computer security) ,Intervention (law) ,multi-site randomized trials ,Modeling and Simulation ,Statistics, Probability and Uncertainty ,noncompliance ,Psychology ,business ,Graduation - Abstract
Randomized trials are often conducted with separate randomizations across multiple sites such as schools, voting districts, or hospitals. These sites can differ in important ways, including the site’s implementation quality, local conditions, and the composition of individuals. An important question in practice is whether—and under what assumptions—researchers can leverage this cross-site variation to learn more about the intervention. We address these questions in the principal stratification framework, which describes causal effects for subgroups defined by post-treatment quantities. We show that researchers can estimate certain principal causal effects via the multi-site design if they are willing to impose the strong assumption that the site-specific effects are independent of the site-specific distribution of stratum membership. We motivate this approach with a multi-site trial of the Early College High School Initiative, a unique secondary education program with the goal of increasing high school graduation rates and college enrollment. Our analyses corroborate previous studies suggesting that the initiative had positive effects for students who would have otherwise attended a low-quality high school, although power is limited.
- Published
- 2019
238. Bayesian methods for multiple mediators: Relating principal stratification and causal mediation in the analysis of power plant emission controls
- Author
-
Chanmin Kim, Christine Choirat, Joseph W. Hogan, Corwin M. Zigler, and Michael J. Daniels
- Subjects
FOS: Computer and information sciences ,0301 basic medicine ,Statistics and Probability ,Pollution ,Power station ,natural indirect effect ,Computer science ,Ambient $\mathrm{PM}_{2.5}$ ,media_common.quotation_subject ,Principal stratification ,Bayesian probability ,01 natural sciences ,Article ,Methodology (stat.ME) ,Bayesian nonparametrics ,010104 statistics & probability ,03 medical and health sciences ,Feature (machine learning) ,Econometrics ,0101 mathematics ,Air quality index ,Statistics - Methodology ,media_common ,Principal (computer security) ,multipollutants ,030104 developmental biology ,Gaussian copula ,Modeling and Simulation ,Causal inference ,Statistics, Probability and Uncertainty - Abstract
Emission control technologies installed on power plants are a key feature of many air pollution regulations in the US. While such regulations are predicated on the presumed relationships between emissions, ambient air pollution, and human health, many of these relationships have never been empirically verified. The goal of this paper is to develop new statistical methods to quantify these relationships. We frame this problem as one of mediation analysis to evaluate the extent to which the effect of a particular control technology on ambient pollution is mediated through causal effects on power plant emissions. Since power plants emit various compounds that contribute to ambient pollution, we develop new methods for multiple intermediate variables that are measured contemporaneously, may interact with one another, and may exhibit joint mediating effects. Specifically, we propose new methods leveraging two related frameworks for causal inference in the presence of mediating variables: principal stratification and causal mediation analysis. We define principal effects based on multiple mediators, and also introduce a new decomposition of the total effect of an intervention on ambient pollution into the natural direct effect and natural indirect effects for all combinations of mediators. Both approaches are anchored to the same observed-data models, which we specify with Bayesian nonparametric techniques. We provide assumptions for estimating principal causal effects, then augment these with an additional assumption required for causal mediation analysis. The two analyses, interpreted in tandem, provide the first empirical investigation of the presumed causal pathways that motivate important air quality regulatory policies.
- Published
- 2019
- Full Text
- View/download PDF
239. Simultaneous Inference of Treatment Effect Modification by Intermediate Response Endpoint Principal Strata with Application to Vaccine Trials
- Author
-
Ying Huang, Yingying Zhuang, and Peter B. Gilbert
- Subjects
Statistics and Probability ,Oncology ,medicine.medical_specialty ,Principal stratification ,Inference ,Biostatistics ,Placebo ,01 natural sciences ,Article ,law.invention ,Dengue ,010104 statistics & probability ,03 medical and health sciences ,0302 clinical medicine ,Randomized controlled trial ,law ,Resampling ,Internal medicine ,Outcome Assessment, Health Care ,Clinical endpoint ,Humans ,Medicine ,030212 general & internal medicine ,0101 mathematics ,Dengue vaccine ,Randomized Controlled Trials as Topic ,Vaccines ,business.industry ,General Medicine ,Models, Theoretical ,Vaccine efficacy ,Clinical Trials, Phase III as Topic ,Statistics, Probability and Uncertainty ,business ,Biomarkers - Abstract
In randomized clinical trials, researchers are often interested in identifying an inexpensive intermediate study endpoint (typically a biomarker) that is a strong effect modifier of the treatment effect on a longer-term clinical endpoint of interest. Motivated by randomized placebo-controlled preventive vaccine efficacy trials, within the principal stratification framework a pseudo-score type estimator has been proposed to estimate disease risks conditional on the counter-factual biomarker of interest under each treatment assignment to vaccine or placebo, yielding an estimator of biomarker conditional vaccine efficacy. This method can be used for trial designs that use baseline predictors of the biomarker and/or designs that vaccinate disease-free placebo recipients at the end of the trial. In this article, we utilize the pseudo-score estimator to estimate the biomarker conditional vaccine efficacy adjusting for baseline covariates. We also propose a perturbation resampling method for making simultaneous inference on conditional vaccine efficacy over the values of the biomarker. We illustrate our method with datasets from two phase 3 dengue vaccine efficacy trials.
- Published
- 2019
- Full Text
- View/download PDF
240. Subtleties in the interpretation of hazard contrasts
- Author
-
Torben Martinussen, Stijn Vansteelandt, and Per Kragh Andersen
- Subjects
Randomised study ,Principal stratification ,media_common.quotation_subject ,Hazard ratio ,01 natural sciences ,010104 statistics & probability ,03 medical and health sciences ,0302 clinical medicine ,Econometrics ,Humans ,Treatment effect ,Computer Simulation ,030212 general & internal medicine ,0101 mathematics ,PRINCIPAL STRATIFICATION ,media_common ,Proportional Hazards Models ,Proportional hazards model ,Interpretation (philosophy) ,Applied Mathematics ,COX ,General Medicine ,Survival analysis ,Hazard ,Causality ,Hazard difference ,Surprise ,Mathematics and Statistics ,Data Interpretation, Statistical ,SURVIVAL ,Psychology ,Cox regression - Abstract
The hazard ratio is one of the most commonly reported measures of treatment effect in randomised trials, yet the source of much misinterpretation. This point was made clear by Hernan (Epidemiology (Cambridge, Mass) 21(1):13-15, 2010) in a commentary, which emphasised that the hazard ratio contrasts populations of treated and untreated individuals who survived a given period of time, populations that will typically fail to be comparable-even in a randomised trial-as a result of different pressures or intensities acting on different populations. The commentary has been very influential, but also a source of surprise and confusion. In this note, we aim to provide more insight into the subtle interpretation of hazard ratios and differences, by investigating in particular what can be learned about a treatment effect from the hazard ratio becoming 1 (or the hazard difference 0) after a certain period of time. We further define a hazard ratio that has a causal interpretation and study its relationship to the Cox hazard ratio, and we also define a causal hazard difference. These quantities are of theoretical interest only, however, since they rely on assumptions that cannot be empirically evaluated. Throughout, we will focus on the analysis of randomised experiments.
- Published
- 2019
- Full Text
- View/download PDF
241. A Bayesian Nonparametric Approach for Evaluating the Causal Effect of Treatment in Randomized Trials with Semi-Competing Risks
- Author
-
Daniel O. Scharfstein, Peter Müller, Yanxun Xu, and Michael J. Daniels
- Subjects
Statistics and Probability ,FOS: Computer and information sciences ,Computer science ,Principal stratification ,Inference ,Machine learning ,computer.software_genre ,01 natural sciences ,law.invention ,Methodology (stat.ME) ,010104 statistics & probability ,03 medical and health sciences ,0302 clinical medicine ,Randomized controlled trial ,law ,Humans ,Computer Simulation ,0101 mathematics ,Statistics - Methodology ,Event (probability theory) ,Randomized Controlled Trials as Topic ,business.industry ,Bayes Theorem ,General Medicine ,3. Good health ,Causality ,Identification (information) ,Estimand ,Terminal and nonterminal symbols ,030220 oncology & carcinogenesis ,Causal inference ,Artificial intelligence ,Statistics, Probability and Uncertainty ,business ,computer ,Algorithms - Abstract
Summary We develop a Bayesian nonparametric (BNP) approach to evaluate the causal effect of treatment in a randomized trial where a nonterminal event may be censored by a terminal event, but not vice versa (i.e., semi-competing risks). Based on the idea of principal stratification, we define a novel estimand for the causal effect of treatment on the nonterminal event. We introduce identification assumptions, indexed by a sensitivity parameter, and show how to draw inference using our BNP approach. We conduct simulation studies and illustrate our methodology using data from a brain cancer trial. The R code implementing our model and algorithm is available for download at https://github.com/YanxunXu/BaySemiCompeting.
- Published
- 2019
242. The role of mastery learning in an intelligent tutoring system: Principal stratification on a latent variable
- Author
-
Adam Sales and John F. Pane
- Subjects
0301 basic medicine ,Statistics and Probability ,Principal stratification ,educational technology ,01 natural sciences ,Algebra I ,Bayesian ,Intelligent tutoring system ,principal stratification ,010104 statistics & probability ,03 medical and health sciences ,Mathematics education ,0101 mathematics ,TUTOR ,Curriculum ,computer.programming_language ,latent variables ,Educational technology ,Cognitive tutor ,item response theory ,Mastery learning ,030104 developmental biology ,Modeling and Simulation ,Statistics, Probability and Uncertainty ,computer ,Causal inference - Abstract
Students in Algebra I classrooms typically learn at different rates and struggle at different points in the curriculum—a common challenge for math teachers. Cognitive Tutor Algebra I (CTA1), an educational computer program, addresses such student heterogeneity via what they term “mastery learning,” where students progress from one section of the curriculum to the next by demonstrating appropriate “mastery” at each stage. However, when students are unable to master a section’s skills even after trying many problems, they are automatically promoted to the next section anyway. Does promotion without mastery impair the program’s effectiveness? ¶ At least in certain domains, CTA1 was recently shown to improve student learning on average in a randomized effectiveness study. This paper uses student log data from that study in a continuous principal stratification model to estimate the relationship between students’ potential mastery and the CTA1 treatment effect. In contrast to extant principal stratification applications, a student’s propensity to master worked sections here is never directly observed. Consequently we embed an item-response model, which measures students’ potential mastery, within the larger principal stratification model. We find that the tutor may, in fact, be more effective for students who are more frequently promoted (despite unsuccessfully completing sections of the material). However, since these students are distinctive in their educational strength (as well as in other respects), it remains unclear whether this enhanced effectiveness can be directly attributed to aspects of the mastery learning program.
- Published
- 2019
243. Defining treatment effects: A regulatory perspective
- Author
-
Thomas Permutt
- Subjects
Pharmacology ,Clinical Trials as Topic ,Models, Statistical ,Drug Industry ,Management science ,Computer science ,Principal stratification ,Perspective (graphical) ,Harmonization ,Guidelines as Topic ,General Medicine ,01 natural sciences ,Clinical trial ,010104 statistics & probability ,03 medical and health sciences ,0302 clinical medicine ,Estimand ,Research Design ,Data Interpretation, Statistical ,Humans ,030212 general & internal medicine ,0101 mathematics ,Randomized Controlled Trials as Topic - Abstract
The proposed addendum to the International Conference on Harmonization document, Statistical Principles for Clinical Trials, can be read in two ways. There is a new framework for talking about estimands, but is it about fitting present methods into the framework? Or is it about changing methods? My answer: some of each. Where different methods are needed, there are challenging problems in estimating some desirable estimands, but there may also be desirable estimands that can be estimated easily and robustly.
- Published
- 2019
244. Translating questions to estimands in randomized clinical trials with intercurrent events.
- Author
-
Stensrud MJ and Dukes O
- Subjects
- Causality, Data Interpretation, Statistical, Humans, Randomized Controlled Trials as Topic, Models, Statistical, Research Design
- Abstract
Intercurrent (post-treatment) events occur frequently in randomized trials, and investigators often express interest in treatment effects that suitably take account of these events. Contrasts that naively condition on intercurrent events do not have a straight-forward causal interpretation, and the practical relevance of other commonly used approaches is debated. In this work, we discuss how to formulate and choose an estimand, beyond the marginal intention-to-treat effect, from the point of view of a decision maker and drug developer. In particular, we argue that careful articulation of a practically useful research question should either reflect decision making at this point in time or future drug development. Indeed, a substantially interesting estimand is simply a formalization of the (plain English) description of a research question. A common feature of estimands that are practically useful is that they correspond to possibly hypothetical but well-defined interventions in identifiable (sub)populations. To illustrate our points, we consider five examples that were recently used to motivate consideration of principal stratum estimands in clinical trials. In all of these examples, we propose alternative causal estimands, such as conditional effects, sequential regime effects, and separable effects, that correspond to explicit research questions of substantial interest., (© 2022 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.)
- Published
- 2022
- Full Text
- View/download PDF
245. Handling parametric assumptions in principal causal effect estimation using Gaussian mixtures.
- Author
-
Jo B
- Subjects
- Causality, Humans, Normal Distribution, Models, Statistical
- Abstract
Given the latent stratum membership, principal stratification models with continuous outcomes naturally fit in the parametric estimation framework of Gaussian mixtures. However, with models that are not nonparametrically identified, relying on parametric mixture modeling has been mostly discouraged as a way of identifying principal effects. This study revisits this rather deserted use of parametric mixture modeling, which may open up various possibilities in principal stratification modeling. The main problem with using the parametric mixture modeling approach is that it is hard to assess the quality of principal effect estimates given its reliance on parametric conditions. As a way of assessing the estimation quality in this situation, this study proposes that we use parametric mixture modeling in two different ways, with and without the assurance of nonparametric identification. The key identifying assumption employed in this study is the moving exclusion restriction, a flexible version of the standard exclusion restriction assumption. This assumption is used as a temporary vehicle to help assess the quality of principal effect estimates obtained relying on parametric mixture modeling. The study presents promising results, showing the possibility of using parametric mixture modeling as an accessible tool for causal inference., (© 2022 John Wiley & Sons Ltd.)
- Published
- 2022
- Full Text
- View/download PDF
246. Assessing complier average causal effects from longitudinal trials with multiple endpoints and treatment noncompliance: An application to a study of Arthritis Health Journal.
- Author
-
Guo L, Qian Y, and Xie H
- Subjects
- Causality, Computer Simulation, Humans, Randomized Controlled Trials as Topic, Treatment Outcome, Arthritis therapy, Patient Compliance
- Abstract
Treatment noncompliance often occurs in longitudinal randomized controlled trials (RCTs) on human subjects, and can greatly complicate treatment effect assessment. The complier average causal effect (CACE) informs the intervention efficacy for the subpopulation who would comply regardless of assigned treatment and has been considered as patient-oriented treatment effects of interest in the presence of noncompliance. Real-world RCTs evaluating multifaceted interventions often employ multiple study endpoints to measure treatment success. In such trials, limited sample sizes, low compliance rates, and small to moderate effect sizes on individual endpoints can significantly reduce the power to detect CACE when these correlated endpoints are analyzed separately. To overcome the challenge, we develop a multivariate longitudinal potential outcome model with stratification on latent compliance types to efficiently assess multivariate CACEs (MCACE) by combining information across multiple endpoints and visits. Evaluation using simulation data shows a significant increase in the estimation efficiency with the MCACE model, including up to 50% reduction in standard errors (SEs) of CACE estimates and 1-fold increase in the power to detect CACE. Finally, we apply the proposed MCACE model to an RCT on Arthritis Health Journal online tool. Results show that the MCACE analysis detects significant and beneficial intervention effects on two of the six endpoints while estimating CACEs for these endpoints separately fail to detect treatment effect on any endpoint., (© 2022 John Wiley & Sons Ltd.)
- Published
- 2022
- Full Text
- View/download PDF
247. Assessing treatment benefit in the presence of placebo response using the sequential parallel comparison design.
- Author
-
Liu X, Kim C, Han Z, Lim P, Roychoudhury S, Fava M, and Doros G
- Subjects
- Bias, Computer Simulation, Humans, Placebo Effect, Research Design
- Abstract
In clinical trials, placebo response is considered a beneficial effect arising from multiple factors, including the patient's expectations for the treatment. Its presence makes the classical parallel study design suboptimal and can bias the inference. The sequential parallel comparison design (SPCD), a two-stage design where the first stage is a classical parallel study design, followed by another parallel design among placebo subjects from the first stage, was proposed to address the shortcomings of the classical design. In SPCD, in lieu of treatment effect, a weighted average of the mean treatment difference in Stage I among all randomized subjects and the mean treatment difference in Stage II among placebo non-responders was proposed as the efficacy measure. However, by linking two possibly different populations, this weighted average lacks interpretability, and the choice of weight remains controversial. In this work, under the principal stratification framework, we propose a causal estimand for the treatment effect under each of three clinically important principal strata: Always Responders, Never Responders, and Drug-only Responders. To make the stratum treatment effect identifiable, we introduce a set of assumptions and two sensitivity parameters. By further considering the strata as latent characteristics, the sensitivity parameters can be estimated. An extensive simulation study is conducted to evaluate the operating characteristics of the proposed method. Finally, we apply our method on the ADAPT-A study data to assess the benefit of low-dose aripiprazole adjunctive to antidepressant therapy treatment., (© 2022 John Wiley & Sons Ltd.)
- Published
- 2022
- Full Text
- View/download PDF
248. Assessing treatment benefit in the presence of placebo response using the Sequential Parallel Comparison Design
- Author
-
Liu, Xiaoyan
- Subjects
- Biostatistics, Estimand, Placebo response, Principal stratification, SPCD, Treatment effect
- Abstract
In clinical trials, placebo response is considered a beneficial effect arising from multiple factors, including the patient’s expectations for the treatment. Due to the presence of placebo response, the classical parallel design often fails to declare an efficacious treatment. The Sequential Parallel Comparison Design (SPCD), a two-stage design where the first stage is a classical parallel trial, followed by another parallel trial among placebo patients from the first stage, was proposed to mitigate the placebo response. In SPCD, in lieu of treatment effect, a weighted average of the mean treatment difference in Stage I among all randomized patients and the mean treatment difference in Stage II among placebo non-responders was proposed as the efficacy measure. However, by mixing two possibly different populations, this weighted average lacks interpretability, the choice of weight remains controversial, and the classification of patients into placebo responders and non-responders via hard criterion-based rule warrants careful consideration. In this work, we first elaborate and study the shortcomings surrounding this efficacy measure, which motivates us to propose causal estimands for clinically meaningful principal strata under the principal stratification framework. To make the estimands identifiable, we invoke a set of assumptions and introduce two sensitivity parameters. Meanwhile, in the absence of a clinically proven criterion for classifying responders and non-responders, we additionally suggest estimating the response status and sensitivity parameters via the Expectation-Maximization (EM) algorithm by treating the principal strata as full latent characteristics. Next, we further refine and alternatively propose a more consistent and sophisticated EM procedure for classification, point estimation, and hypothesis testing. Finally, we evaluate our methods with extensive simulation studies and apply them to an actual SPCD study of antidepressant therapy to assess the benefit of low-dose aripiprazole adjunctive to antidepressant therapy treatment, the ADAPT-A trial. In conclusion, we believe this is an important step toward a more rigorous and transparent approach to evaluating the treatment benefit in the presence of placebo response.
- Published
- 2022
249. Evaluation of treatment effect modification by biomarkers measured pre- and post-randomization in the presence of non-monotone missingness.
- Author
-
Zhuang Y, Huang Y, and Gilbert PB
- Subjects
- Biomarkers, Humans, Random Allocation, Treatment Outcome, Dengue prevention & control, Research Design
- Abstract
In vaccine studies, an important research question is to study effect modification of clinical treatment efficacy by intermediate biomarker-based principal strata. In settings where participants entering a trial may have prior exposure and therefore variable baseline biomarker values, clinical treatment efficacy may further depend jointly on a biomarker measured at baseline and measured at a fixed time after vaccination. This makes it important to conduct a bivariate effect modification analysis by both the intermediate biomarker-based principal strata and the baseline biomarker values. Existing research allows this assessment if the sampling of baseline and intermediate biomarkers follows a monotone pattern, i.e., if participants who have the biomarker measured post-randomization would also have the biomarker measured at baseline. However, additional complications in study design could happen in practice. For example, in a dengue correlates study, baseline biomarker values were only available from a fraction of participants who have biomarkers measured post-randomization. How to conduct the bivariate effect modification analysis in these studies remains an open research question. In this article, we propose approaches for bivariate effect modification analysis in the complicated sampling design based on an estimated likelihood framework. We demonstrate advantages of the proposed method over existing methods through numerical studies and illustrate our method with data sets from two phase 3 dengue vaccine efficacy trials., (© The Author 2020. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.)
- Published
- 2022
- Full Text
- View/download PDF
250. Clarifying selection bias in cluster randomized trials.
- Author
-
Li F, Tian Z, Bobb J, Papadogeorgou G, and Li F
- Subjects
- Bias, Causality, Computer Simulation, Humans, Intention to Treat Analysis, Randomized Controlled Trials as Topic, Selection Bias, Research Design
- Abstract
Background: In cluster randomized trials, patients are typically recruited after clusters are randomized, and the recruiters and patients may not be blinded to the assignment. This often leads to differential recruitment and consequently systematic differences in baseline characteristics of the recruited patients between intervention and control arms, inducing post-randomization selection bias. We aim to rigorously define causal estimands in the presence of selection bias. We elucidate the conditions under which standard covariate adjustment methods can validly estimate these estimands. We further discuss the additional data and assumptions necessary for estimating causal effects when such conditions are not met., Methods: Adopting the principal stratification framework in causal inference, we clarify there are two average treatment effect (ATE) estimands in cluster randomized trials: one for the overall population and one for the recruited population. We derive analytical formula of the two estimands in terms of principal-stratum-specific causal effects. Furthermore, using simulation studies, we assess the empirical performance of the multivariable regression adjustment method under different data generating processes leading to selection bias., Results: When treatment effects are heterogeneous across principal strata, the average treatment effect on the overall population generally differs from the average treatment effect on the recruited population. A naïve intention-to-treat analysis of the recruited sample leads to biased estimates of both average treatment effects. In the presence of post-randomization selection and without additional data on the non-recruited subjects, the average treatment effect on the recruited population is estimable only when the treatment effects are homogeneous between principal strata, and the average treatment effect on the overall population is generally not estimable. The extent to which covariate adjustment can remove selection bias depends on the degree of effect heterogeneity across principal strata., Conclusion: There is a need and opportunity to improve the analysis of cluster randomized trials that are subject to post-randomization selection bias. For studies prone to selection bias, it is important to explicitly specify the target population that the causal estimands are defined on and adopt design and estimation strategies accordingly. To draw valid inferences about treatment effects, investigators should (1) assess the possibility of heterogeneous treatment effects, and (2) consider collecting data on covariates that are predictive of the recruitment process, and on the non-recruited population from external sources such as electronic health records.
- Published
- 2022
- Full Text
- View/download PDF
Catalog
Discovery Service for Jio Institute Digital Library
For full access to our library's resources, please sign in.