Search Results
2. Approaches to treatment effect heterogeneity in the presence of confounding.
- Author
- Anoke, Sarah C., Normand, Sharon-Lise, and Zigler, Corwin M.
- Subjects
- MEDICARE beneficiaries, THERAPEUTICS, HETEROGENEITY, COMPUTER simulation, RESEARCH, TREATMENT effect heterogeneity, RESEARCH methodology, REGRESSION analysis, EVALUATION research, COMPARATIVE studies, ATTRIBUTION (Social psychology), RESEARCH funding, PROBABILITY theory
- Abstract
The literature on causal effect estimation tends to focus on the population mean estimand, which is less informative as medical treatments are becoming more personalized and there is increasing awareness that subpopulations of individuals may experience a group-specific effect that differs from the population average. In fact, it is possible that there is underlying systematic effect heterogeneity that is obscured by focusing on the population mean estimand. In this context, understanding which covariates contribute to this treatment effect heterogeneity (TEH) and how these covariates determine the differential treatment effect (TE) is an important consideration. Towards such an understanding, this paper briefly reviews three approaches used in making causal inferences and conducts a simulation study to compare these approaches according to their performance in an exploratory evaluation of TEH when the heterogeneous subgroups are not known a priori. Performance metrics include the detection of any heterogeneity, the identification and characterization of heterogeneous subgroups, and unconfounded estimation of the TE within subgroups. The methods are then deployed in a comparative effectiveness evaluation of drug-eluting versus bare-metal stents among 54 099 Medicare beneficiaries in the continental United States admitted to a hospital with acute myocardial infarction in 2008. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
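As notational context for the estimands contrasted in the abstract above, a minimal sketch in standard potential-outcomes notation (the notation is assumed here, not quoted from the article):

```latex
% Population mean estimand (ATE) versus the conditional effect (CATE);
% Y(1), Y(0) are potential outcomes, X the covariates.
\mathrm{ATE} = E[\,Y(1) - Y(0)\,], \qquad
\tau(x) = E[\,Y(1) - Y(0) \mid X = x\,].
% Treatment effect heterogeneity (TEH) means \tau(x) varies with x,
% which the single summary ATE can obscure.
```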
3. Covariance estimators for generalized estimating equations (GEE) in longitudinal analysis with small samples.
- Author
- Wang, Ming, Kong, Lan, Li, Zheng, and Zhang, Lijun
- Subjects
- ANTICONVULSANTS, HEAD, CLINICAL trials, COMPARATIVE studies, COMPUTER simulation, COMPUTER software, EPILEPSY, GABA, LONGITUDINAL method, RESEARCH methodology, MEDICAL cooperation, RESEARCH, RESEARCH funding, SAMPLE size (Statistics), EVALUATION research, RESEARCH bias, STATISTICAL models, ANATOMY, THERAPEUTICS
- Abstract
Generalized estimating equations (GEE) is a general statistical method to fit marginal models for longitudinal data in biomedical studies. The variance-covariance matrix of the regression parameter coefficients is usually estimated by a robust "sandwich" variance estimator, which does not perform satisfactorily when the sample size is small. To reduce the downward bias and improve efficiency, several modified variance estimators have been proposed for bias correction or efficiency improvement. In this paper, we provide a comprehensive review of recent developments in modified variance estimators and compare their small-sample performance theoretically and numerically through simulation and real data examples. In particular, Wald tests and t-tests based on different variance estimators are used for hypothesis testing, and guidance on appropriate sample sizes for each estimator is provided, based on numerical results, for preserving the type I error rate in general cases. Moreover, we develop a user-friendly R package "geesmv" incorporating all of these variance estimators for public use in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
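The "geesmv" package above is for R. As a rough Python analogue of comparing robust versus small-sample-corrected GEE standard errors, a minimal sketch assuming statsmodels' GEE API (its "bias_reduced" option is the Mancl-DeRouen correction, one estimator of the type reviewed, not the paper's own implementation):

```python
# Minimal sketch: compare robust ("sandwich") and bias-reduced GEE standard
# errors on simulated small-sample longitudinal data. Assumes statsmodels'
# GEE API; the paper's estimators live in the R package "geesmv".
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_subj, n_time = 20, 4                       # deliberately small sample
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_subj), n_time),
    "time": np.tile(np.arange(n_time), n_subj),
    "trt": np.repeat(rng.integers(0, 2, n_subj), n_time),
})
subj_eff = np.repeat(rng.normal(0, 1, n_subj), n_time)   # within-subject correlation
df["y"] = 0.5 * df["trt"] + 0.2 * df["time"] + subj_eff + rng.normal(0, 1, len(df))

model = sm.GEE.from_formula("y ~ trt + time", groups="id", data=df,
                            family=sm.families.Gaussian(),
                            cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()
print("robust SE:      ", res.standard_errors(cov_type="robust"))
print("bias-reduced SE:", res.standard_errors(cov_type="bias_reduced"))
```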
4. Optimal treatment allocation for placebo-treatment comparisons in trials with discrete-time survival endpoints.
- Author
- Moerbeek, Mirjam and Wong, Weng-Kee
- Subjects
- RISPERIDONE, DRUG therapy for schizophrenia, ANTIPSYCHOTIC agents, BIOLOGICAL assay, CLINICAL trials, EXPERIMENTAL design, LONGITUDINAL method, RESEARCH funding, SURVIVAL analysis (Biometry), SAMPLE size (Statistics), PROPORTIONAL hazards models, THERAPEUTICS
- Abstract
In many randomized controlled trials, treatment groups are of equal size, but this is not necessarily the best choice. This paper provides a methodology for calculating optimal treatment allocations in longitudinal trials when multiple treatment groups are compared with a placebo group and the comparisons may have unequal importance. The focus is on trials with a survival endpoint measured in discrete time. We assume the underlying survival process is Weibull and show that the values of the Weibull parameters affect the optimal treatment allocation scheme in an interesting way. Additionally, we incorporate different cost considerations at the subject and measurement levels and determine the optimal number of time periods. We also show that when many events occur at the beginning of the trial, fewer time periods are more efficient. As an application, we revisit a risperidone maintenance treatment trial in schizophrenia, use our proposed methodology to redesign it, and compare the merits of our optimal design. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
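For reference, one standard way to induce discrete-time event probabilities from an underlying Weibull survival process (an assumed parameterization; the paper's may differ):

```latex
% Weibull survival function with scale \lambda and shape \gamma:
S(t) = \exp\{-(\lambda t)^{\gamma}\}.
% Discrete-time hazard for period j (probability of an event in period j,
% given survival to its start):
p_j = 1 - \frac{S(j)}{S(j-1)}
    = 1 - \exp\{-\lambda^{\gamma}\,(j^{\gamma} - (j-1)^{\gamma})\}.
% \gamma < 1 front-loads events, consistent with the finding that fewer
% time periods are more efficient when many events occur early.
```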
5. A comparison of imputation methods in a longitudinal randomized clinical trial.
- Author
- Tang, Lingqi, Song, Juwon, Belin, Thomas R., and Unützer, Jürgen
- Subjects
- MENTAL depression, THERAPEUTICS, CLINICAL trials, COMPARATIVE studies, COMPUTER simulation, LONGITUDINAL method, RESEARCH methodology, MEDICAL cooperation, RESEARCH, RESEARCH funding, SYSTEM analysis, EVALUATION research, RANDOMIZED controlled trials, STATISTICAL models
- Abstract
It is common for longitudinal clinical trials to face problems of item non-response, unit non-response, and drop-out. In this paper, we compare two alternative methods of handling multivariate incomplete data across a baseline assessment and three follow-up time points in a multi-centre randomized controlled trial of a disease management programme for late-life depression. One approach combines hot-deck (HD) multiple imputation using a predictive mean matching method for item non-response and the approximate Bayesian bootstrap for unit non-response. A second method is based on a multivariate normal (MVN) model using PROC MI in SAS software V8.2. These two methods are contrasted with a last observation carried forward (LOCF) technique and available-case (AC) analysis in a simulation study where replicate analyses are performed on subsets of the originally complete cases. Missing-data patterns were simulated to be consistent with missing-data patterns found in the originally incomplete cases, and observed complete data means were taken to be the targets of estimation. Not surprisingly, the LOCF and AC methods had poor coverage properties for many of the variables evaluated. Multiple imputation under the MVN model performed well for most variables but produced less than nominal coverage for variables with highly skewed distributions. The HD method consistently produced close to nominal coverage, with interval widths that were roughly 7 per cent larger on average than those produced from the MVN model. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
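To make the predictive-mean-matching ingredient concrete, a generic single-variable sketch in Python (illustrative only; the paper's procedure combines hot-deck multiple imputation with the approximate Bayesian bootstrap):

```python
# Minimal predictive mean matching (PMM) sketch for one incomplete variable:
# regress y on x among complete cases, then impute each missing y with the
# observed y of the donor whose predicted mean is closest. Production PMM
# typically samples from the k nearest donors and repeats across imputations.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)
y[rng.random(n) < 0.3] = np.nan               # ~30% item non-response

obs = ~np.isnan(y)
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X[obs], y[obs], rcond=None)  # OLS on complete cases
pred = X @ beta                                         # predicted means, everyone

y_imp = y.copy()
for i in np.flatnonzero(~obs):
    donor = np.argmin(np.abs(pred[obs] - pred[i]))      # closest predicted mean
    y_imp[i] = y[obs][donor]                            # borrow donor's real value
print("imputed mean:", y_imp.mean())
```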
6. The choice of prior distribution for a covariance matrix in multivariate meta-analysis: a simulation study.
- Author
- Hurtado Rúa, Sandra M., Mazumdar, Madhu, and Strawderman, Robert L.
- Subjects
- PERIODONTAL disease treatment, CARDIOVASCULAR agents, ALGORITHMS, COMPUTER simulation, MULTIVARIATE analysis, PROBABILITY theory, RESEARCH funding, STATISTICS, STROKE, CAUSAL models, STATISTICAL models, THERAPEUTICS
- Abstract
Bayesian meta-analysis is an increasingly important component of clinical research, with multivariate meta-analysis a promising tool for studies with multiple endpoints. Model assumptions, including the choice of priors, are crucial aspects of multivariate Bayesian meta-analysis (MBMA) models. In a given model, two different prior distributions can lead to different inferences about a particular parameter. A simulation study was performed in which the impact of families of prior distributions for the covariance matrix of a multivariate normal random effects MBMA model was analyzed. Inferences about effect sizes were not particularly sensitive to prior choice, but the related covariance estimates were. A few families of prior distributions with small relative biases, tight mean squared errors, and close to nominal coverage for the effect size estimates were identified. Our results demonstrate the need for sensitivity analysis and suggest some guidelines for choosing prior distributions in this class of problems. The MBMA models proposed here are illustrated in a small meta-analysis example from the periodontal field and a medium meta-analysis from the study of stroke. Copyright © 2015 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
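A minimal sketch of the model class, with the inverse-Wishart shown as one standard example of a prior family for the covariance matrix (the specific families compared in the paper are not reproduced here):

```latex
% Multivariate random-effects meta-analysis: study i reports estimates y_i
% with known within-study covariance S_i; \theta_i is the study-level effect.
y_i \mid \theta_i \sim N_p(\theta_i, S_i), \qquad
\theta_i \mid \mu, \Sigma \sim N_p(\mu, \Sigma).
% One standard prior family for the between-study covariance \Sigma:
\Sigma \sim \mathrm{Inverse\text{-}Wishart}(\nu, \Psi).
% The simulation question is how inference on \mu and especially \Sigma
% shifts across such prior families.
```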
7. Identification of the optimal treatment regimen in the presence of missing covariates.
- Author
- Huang, Ying and Zhou, Xiao-Hua
- Subjects
- ALZHEIMER'S disease, IDENTIFICATION, THERAPEUTICS, EXPERIMENTAL design, COMPUTER simulation, RESEARCH, RESEARCH methodology, MEDICAL cooperation, EVALUATION research, COMPARATIVE studies, RESEARCH funding
- Abstract
Covariates associated with treatment-effect heterogeneity can potentially be used to make personalized treatment recommendations towards best clinical outcomes. Methods for treatment-selection rule development that directly maximize treatment-selection benefits have attracted much interest in recent years, due to the robustness of these methods to outcome modeling. In practice, the task of treatment-selection rule development can be further complicated by missingness in data. Here, we consider the identification of optimal treatment-selection rules for a binary disease outcome when measurements of an important covariate from study participants are partly missing. Under the missing at random assumption, we develop a robust estimator of treatment-selection rules under the direct-optimization paradigm. This estimator targets the maximum selection benefit to the population under correct specification of at least one mechanism from each of two sets: the missing-data or conditional covariate distribution, and the treatment assignment or disease outcome model. We evaluate and compare performance of the proposed estimator with alternative direct-optimization estimators through extensive simulation studies. We demonstrate the application of the proposed method through a real data example from an Alzheimer's disease study, developing covariate combinations to guide treatment. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
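In the direct-optimization framing described above, the target can be written as follows (standard notation, assumed rather than quoted from the paper):

```latex
% d(X) \in \{0, 1\} is a treatment-selection rule based on covariates X;
% Y(a) is the potential outcome under treatment a. The value of a rule is
V(d) = E\big[\,Y(1)\,d(X) + Y(0)\,(1 - d(X))\,\big].
% Direct optimization seeks the d maximizing V(d) (equivalently, the
% selection benefit relative to a default policy) rather than first
% fitting a full outcome model and deriving d from it.
```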
8. To add or not to add a new treatment arm to a multiarm study: A decision-theoretic framework.
- Author
- Lee, Kim May, Wason, James, and Stallard, Nigel
- Subjects
- THERAPEUTICS, ARM, INVESTIGATIONAL therapies, CLINICAL trials, TIME trials, OSTEOARTHRITIS treatment, KNEE diseases, STATISTICS, RESEARCH, MULTIVARIATE analysis, COLD therapy, RESEARCH methodology, NERVE block, EVALUATION research, MEDICAL cooperation, MEDICAL protocols, COMPARATIVE studies, DECISION making, OSTEOARTHRITIS, RESEARCH funding
- Abstract
Multiarm clinical trials, which compare several experimental treatments against control, are frequently recommended for their efficiency gains. In practice, not all potential treatments may be ready to be tested in a phase II/III trial at the same time. It has become appealing to allow new treatment arms to be added to ongoing clinical trials using a "platform" trial approach. To the best of our knowledge, many aspects of when to add arms to an existing trial have not been explored in the literature. Most work on adding arm(s) assumes that a new arm is opened whenever a new treatment becomes available. This strategy may prolong the overall duration of a study or reduce the marginal power for each hypothesis if the adaptation is not well accommodated. Within a two-stage trial setting, we propose a decision-theoretic framework for deciding whether or not to add a new treatment arm based on the observed stage-one treatment responses. To account for the different aims of multiarm studies, we define utility in two ways: one for a trial that aims to maximise the number of rejected hypotheses, and the other for a trial that is declared a success when at least one hypothesis is rejected. Our framework shows that it is not always optimal to add a new treatment arm to an existing trial. We illustrate the framework with a case study of a completed trial in knee osteoarthritis. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
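The two utilities mentioned in the abstract can be sketched as follows (notation assumed, not the paper's):

```latex
% K hypotheses H_1, ..., H_K in the multiarm study; R_k = 1 if H_k is rejected.
U_{\text{count}} = \sum_{k=1}^{K} R_k, \qquad
U_{\text{any}} = \mathbf{1}\Big\{\textstyle\sum_{k=1}^{K} R_k \ge 1\Big\}.
% The decision rule adds the new arm after stage one only if doing so
% increases the expected utility E[U \mid \text{stage-one data}].
```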
9. Matching algorithms for causal inference with multiple treatments.
- Author
- Scotina, Anthony D. and Gutman, Roee
- Subjects
- THERAPEUTICS, CLINICAL trials, ALGORITHMS, EXPERIMENTAL design, STATISTICS, RESEARCH, RESEARCH methodology, PATIENT readmissions, EVALUATION research, MEDICAL cooperation, NURSING care facilities, COMPARATIVE studies, CAUSAL inference, RESEARCH funding, PROBABILITY theory
- Abstract
Randomized clinical trials are ideal for estimating causal effects, because the distributions of background covariates are similar in expectation across treatment groups. When estimating causal effects using observational data, matching is a commonly used method to replicate the covariate balance achieved in a randomized clinical trial. Matching algorithms have a rich history dating back to the mid-1900s but have been used mostly to estimate causal effects between two treatment groups. When there are more than two treatments, estimating causal effects requires additional assumptions and techniques. We propose several novel matching algorithms that address the drawbacks of the current methods, and we use simulations to compare current and new methods. All of the methods display improved covariate balance in the matched sets relative to the prematched cohorts. In addition, we provide advice to investigators on which matching algorithms are preferred for different covariate distributions. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
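A minimal Python sketch of one ingredient common to such methods, nearest-neighbor matching on generalized propensity scores estimated by multinomial regression; this illustrates the general idea only, not the paper's proposed algorithms:

```python
# Sketch: estimate generalized propensity scores (GPS) for three treatments
# via multinomial logistic regression, then match each group-0 unit to its
# nearest neighbor (in GPS space) within each other treatment group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 300
X = rng.normal(size=(n, 4))                   # background covariates
t = rng.integers(0, 3, size=n)                # three treatment groups

gps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)  # n x 3

ref = np.flatnonzero(t == 0)
matches = {}
for g in (1, 2):
    pool = np.flatnonzero(t == g)
    # Euclidean distance between GPS vectors of reference and pool units
    d = np.linalg.norm(gps[ref][:, None, :] - gps[pool][None, :, :], axis=2)
    matches[g] = pool[d.argmin(axis=1)]       # nearest neighbor per reference
print({g: m[:5] for g, m in matches.items()}) # first few matched indices
```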
10. Tutorial on statistical considerations on subgroup analysis in confirmatory clinical trials.
- Author
- Alosh, Mohamed, Huque, Mohammad F., Bretz, Frank, and D'Agostino, Ralph B., Sr.
- Subjects
- ASPIRIN, RECOMBINANT proteins, BLOOD proteins, STROKE prevention, MYOCARDIAL infarction, CLINICAL trials, RESEARCH evaluation, RESEARCH funding, SEPSIS, STATISTICS, SURVIVAL analysis (Biometry), DATA analysis, TICLOPIDINE, TREATMENT effectiveness, STATISTICAL models, PREVENTION, THERAPEUTICS
- Abstract
Clinical trials target patients who are expected to benefit from a new treatment under investigation. However, the magnitude of the treatment benefit, if it exists, often depends on the patient baseline characteristics. It is therefore important to investigate the consistency of the treatment effect across subgroups to ensure a proper interpretation of positive study findings in the overall population. Such assessments can provide guidance on how the treatment should be used. However, great care has to be taken when interpreting consistency results. An observed heterogeneity in treatment effect across subgroups can arise because of chance alone, whereas true heterogeneity may be difficult to detect by standard statistical tests because of their low power. This tutorial considers issues related to subgroup analyses and their impact on the interpretation of findings of completed trials that met their main objectives. In addition, we provide guidance on the design and analysis of clinical trials that account for the expected heterogeneity of treatment effects across subgroups by establishing treatment benefit in a pre-defined targeted subgroup and/or the overall population. Copyright © 2016 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
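The standard consistency assessment in this setting is a treatment-by-subgroup interaction test; a generic sketch (not a formula quoted from the tutorial):

```latex
% Y outcome, T treatment indicator, S subgroup indicator, g a link function:
g\big(E[Y]\big) = \beta_0 + \beta_1 T + \beta_2 S + \beta_3 (T \times S).
% H_0: \beta_3 = 0 (effect homogeneous across subgroups). Such interaction
% tests are typically underpowered, which is why both chance heterogeneity
% and missed true heterogeneity are real risks.
```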
11. Fractional Brownian motion and multivariate-t models for longitudinal biomedical data, with application to CD4 counts in HIV-positive patients.
- Author
- Stirrup, Oliver T., Babiker, Abdel G., Carpenter, James R., and Copas, Andrew J.
- Subjects
- THERAPEUTICS, HIV infection epidemiology, COMPARATIVE studies, HIV infections, LONGITUDINAL method, RESEARCH methodology, MEDICAL cooperation, MULTIVARIATE analysis, PROBABILITY theory, REGRESSION analysis, RESEARCH, RESEARCH funding, STATISTICS, EVALUATION research, TREATMENT effectiveness, STATISTICAL models, CD4 lymphocyte count
- Abstract
Longitudinal data are widely analysed using linear mixed models, with 'random slopes' models particularly common. However, when modelling, for example, longitudinal pre-treatment CD4 cell counts in HIV-positive patients, the incorporation of non-stationary stochastic processes such as Brownian motion has been shown to lead to a more biologically plausible model and a substantial improvement in model fit. In this article, we propose two further extensions. Firstly, we propose the addition of a fractional Brownian motion component, and secondly, we generalise the model to follow a multivariate-t distribution. These extensions are biologically plausible, and each demonstrated substantially improved fit on application to example data from the Concerted Action on SeroConversion to AIDS and Death in Europe study. We also propose novel procedures for residual diagnostic plots that allow such models to be assessed. Cohorts of patients were simulated from the previously reported and newly developed models in order to evaluate differences in predictions made for the timing of treatment initiation under different clinical management strategies. A further simulation study was performed to demonstrate the substantial biases in parameter estimates of the mean slope of CD4 decline with time that can occur when random slopes models are applied in the presence of censoring because of treatment initiation, with the degree of bias found to depend strongly on the treatment initiation rule applied. Our findings indicate that researchers should consider more complex and flexible models for the analysis of longitudinal biomarker data, particularly when there are substantial missing data, and that the parameter estimates from random slopes models must be interpreted with caution. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
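For reference, the covariance function that defines the fractional Brownian motion component (a standard definition; sigma is a scale parameter):

```latex
% Fractional Brownian motion B_H with Hurst index H \in (0, 1):
\mathrm{Cov}\big(B_H(s), B_H(t)\big)
  = \frac{\sigma^2}{2}\big(s^{2H} + t^{2H} - |t - s|^{2H}\big), \quad s, t \ge 0.
% H = 1/2 recovers standard Brownian motion; H \ne 1/2 gives correlated
% (persistent or anti-persistent) increments.
```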
12. Designing multi-arm multi-stage clinical trials using a risk-benefit criterion for treatment selection.
- Author
- Jaki, Thomas and Hampson, Lisa V.
- Subjects
- ANTI-infective agents, HETEROCYCLIC compounds, ANGIOTENSIN receptors, BIOLOGICAL assay, CLINICAL trials, COMPUTER simulation, DECISION making, EXPERIMENTAL design, INSULIN resistance, PATIENT safety, RESEARCH funding, SAMPLE size (Statistics), HIV seroconversion, STATISTICAL models, THERAPEUTICS
- Abstract
Multi-arm clinical trials that compare several active treatments to a common control have been proposed as an efficient means of making an informed decision about which of several treatments should be evaluated further in a confirmatory study. Additional efficiency is gained by incorporating interim analyses and, in particular, seamless Phase II/III designs have been the focus of recent research. Common to much of this work is the constraint that selection and formal testing should be based on a single efficacy endpoint, despite the fact that in practice, safety considerations will often play a central role in determining selection decisions. Here, we develop a multi-arm multi-stage design for a trial with an efficacy and a safety endpoint. The safety endpoint is explicitly considered in the formulation of the problem, the selection of the experimental arm, and hypothesis testing. The design extends group-sequential ideas and considers the scenario where a minimal safety requirement must be fulfilled and the treatment yielding the best combined safety and efficacy trade-off satisfying this constraint is selected for further testing. The treatment with the best trade-off is selected at the first interim analysis, while the trial as a whole may comprise J analyses. We show that the design controls the familywise error rate in the strong sense and illustrate the method through an example and simulation. We find that the design is robust to misspecification of the correlation between the endpoints and requires a similar number of subjects to a trial based on efficacy alone for moderately correlated endpoints. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
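One plausible form of the constrained safety-efficacy selection rule described above (the notation, the penalty lambda, and the safety cap c are illustrative placeholders, not the paper's):

```latex
% \hat\theta_k^{E}, \hat\theta_k^{S}: stage-one efficacy and safety estimates
% for arm k. Among arms meeting the minimal safety requirement, select
k^{\star} = \arg\max_{k \,:\, \hat\theta_k^{S} \le c}
            \big\{ \hat\theta_k^{E} - \lambda\, \hat\theta_k^{S} \big\},
% then test k^{\star} group-sequentially over the remaining analyses
% (up to J in total) with strong familywise error control.
```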
13. Bayesian restricted spatial regression for examining session features and patient outcomes in open-enrollment group therapy studies.
- Author
- Paddock, Susan M., Leininger, Thomas J., and Hunter, Sarah B.
- Subjects
- THERAPEUTICS, COGNITIVE therapy, COMPUTER software, MENTAL depression, GROUP psychotherapy, PROBABILITY theory, RESEARCH funding, STATISTICS, SUBSTANCE abuse, TREATMENT effectiveness, DISEASE complications
- Abstract
Group-based interventions have been developed for treating patients across a range of health conditions. Enrollment into such groups often occurs on an open (or rolling) basis. Conditional autoregression modeling of random session effects has been proposed to account for the expected correlation in session effects associated with the overlap in patient participation from session to session. However, when the analytic objective is to examine the relationship between a fixed-effect session feature and a patient outcome using conditional autoregression, confounding might arise if the fixed session feature of interest and the random session effects vary across sessions in similar ways, resulting in bias and an inflated standard error for that session feature. Motivated by the goal of examining the relationships between outcomes and the session features of leader and session module theme, we applied restricted spatial regression to the analysis of patient outcomes collected from 132 participants in an open-enrollment group for treating depression among patients of a residential alcohol and other drug treatment program, adapting the approach to the multilevel data structure of open-enrollment group data. Compared with standard conditional autoregression, the restricted regression approach resulted in more precise estimates of the regression coefficients of the module theme and leader predictor variables. The restricted regression approach provides an important analytic tool for group therapy researchers investigating the relationship between key components of open-enrollment group therapy interventions and patient outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
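The core restricted-regression device, sketched in generic mixed-model notation (this follows the restricted spatial regression literature generally; the paper's multilevel adaptation is not reproduced):

```latex
% X: fixed-effect design (session features); \theta: CAR-distributed random
% session effects. Restricted regression replaces \theta by its projection
% onto the orthogonal complement of the column space of X:
P_X = X (X^{\top} X)^{-1} X^{\top}, \qquad
\theta_{\mathrm{res}} = (I - P_X)\,\theta,
% so the random effects can no longer absorb (confound) variation that
% belongs to the fixed session features, tightening their estimates.
```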
14. Multiple imputation for harmonizing longitudinal non-commensurate measures in individual participant data meta-analysis.
- Author
- Siddique, Juned, Reiter, Jerome P., Brincks, Ahnalee, Gibbons, Robert D., Crespi, Catherine M., and Brown, C. Hendricks
- Subjects
- FLUOXETINE, SECOND-generation antidepressants, ADOLESCENT psychology, CALIBRATION, MENTAL depression, EXPERIMENTAL design, LONGITUDINAL method, META-analysis, RESEARCH funding, TREATMENT effectiveness, STATISTICAL models, THERAPEUTICS
- Abstract
There are many advantages to individual participant data meta-analysis for combining data from multiple studies. These advantages include greater power to detect effects, increased sample heterogeneity, and the ability to perform more sophisticated analyses than meta-analyses that rely on published results. However, a fundamental challenge is that it is unlikely that variables of interest are measured the same way in all of the studies to be combined. We propose that this situation can be viewed as a missing data problem in which some outcomes are entirely missing within some trials and use multiple imputation to fill in missing measurements. We apply our method to five longitudinal adolescent depression trials where four studies used one depression measure and the fifth study used a different depression measure. None of the five studies contained both depression measures. We describe a multiple imputation approach for filling in missing depression measures that makes use of external calibration studies in which both depression measures were used. We discuss some practical issues in developing the imputation model including taking into account treatment group and study. We present diagnostics for checking the fit of the imputation model and investigate whether external information is appropriately incorporated into the imputed values. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
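A schematic of the data layout this turns into a missing-data problem, with a toy "calibration" study in which both scales are observed and scikit-learn's IterativeImputer standing in for the authors' imputation model (which additionally conditions on treatment group and study):

```python
# Sketch: trials where depression scale A is observed in studies 1-4 and
# scale B only in study 5, so each scale is entirely missing in some studies.
# Study 0 plays the role of an external calibration study with both scales.
# IterativeImputer is only a stand-in for the paper's imputation model.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(3)
rows = []
for study in range(6):
    n = 40
    latent = rng.normal(size=n)                  # shared depression severity
    a = 10 + 4 * latent + rng.normal(0, 1, n)    # scale A
    b = 50 + 9 * latent + rng.normal(0, 2, n)    # scale B
    if 1 <= study <= 4:
        b[:] = np.nan                            # scale B never measured
    elif study == 5:
        a[:] = np.nan                            # scale A never measured
    rows.append(pd.DataFrame({"study": study, "A": a, "B": b}))
df = pd.concat(rows, ignore_index=True)

# Single imputation shown for brevity; rerunning with sample_posterior=True
# and different seeds would give multiple imputations.
df[["A", "B"]] = IterativeImputer(random_state=0).fit_transform(df[["A", "B"]])
```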
15. More powerful two-sample tests for differences in repeated measures of adverse effects in psychiatric trials when only some patients may be at risk.
- Author
- McMahon, Robert P., Arndt, Stephan, and Conley, Robert R.
- Subjects
- DRUG therapy for schizophrenia, ANTIPSYCHOTIC agents, BASAL ganglia diseases, BENZODIAZEPINES, CHLORPROMAZINE, CLINICAL trials, COMPARATIVE studies, COMPUTER simulation, RESEARCH methodology, MEDICAL cooperation, PSYCHIATRY, RESEARCH, RESEARCH funding, STATISTICS, DATA analysis, TRANQUILIZING drugs, EVALUATION research, BLIND experiment, THERAPEUTICS
- Abstract
Common adverse effect measures in psychiatric trials are typically analysed with repeated measures ANOVA, despite having distributions which violate key assumptions of that method; moreover, some adverse effects may be concentrated in vulnerable subgroups of participants. For testing treatment differences in adverse effects, we propose using Kendall's tau-b as a summary measure of within-participant trends in adverse events, in conjunction with a weighted modification of a rank test proposed by Conover and Salsburg. Data on extrapyramidal side effects from a controlled clinical trial conducted in persons with treatment-resistant schizophrenia were used to compare the proposed analysis with repeated measures ANOVA using mixed models and with alternative tests for treatment differences in tau-b trend scores. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
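A minimal sketch of the proposed summary-measure step: scipy's kendalltau computes tau-b by default; the Mann-Whitney comparison at the end is a plain stand-in for the weighted Conover-Salsburg rank test the authors actually use:

```python
# Sketch: compute each participant's Kendall tau-b between visit number and
# repeated adverse-effect severity, then compare the per-participant trend
# scores between arms. scipy.stats.kendalltau returns tau-b by default.
import numpy as np
from scipy.stats import kendalltau, mannwhitneyu

rng = np.random.default_rng(4)
n_per_arm, n_visits = 30, 5
visits = np.arange(n_visits)

def trend_scores(drift):
    scores = []
    for _ in range(n_per_arm):
        sev = rng.poisson(2, n_visits) + drift * visits  # severity over visits
        tau, _ = kendalltau(visits, sev)
        scores.append(0.0 if np.isnan(tau) else tau)     # flat series: tau undefined
    return np.array(scores)

arm_a = trend_scores(drift=0.6)   # adverse effect worsens over time
arm_b = trend_scores(drift=0.0)   # no trend
stat, p = mannwhitneyu(arm_a, arm_b, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
```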