9 results for "Liu, Guanghan"
Search Results
2. A novel power prior approach for borrowing historical control data in clinical trials.
- Author
- Shi Y, Li W, and Liu GF
- Subjects
- Bayes Theorem, Sample Size, Computer Simulation, Models, Statistical, Research Design
- Abstract
There has been increased interest in borrowing information from historical control data to improve the statistical power for hypothesis testing, thereby reducing the required sample sizes in clinical trials. To account for heterogeneity between the historical and current trials, power priors are often used to discount the information borrowed from the historical data. However, choosing a fixed power prior parameter can be challenging in applications. The modified power prior approach, which defines a random power parameter with an initial prior to control the amount of historical information borrowed, may not directly account for heterogeneity between the trials. In this paper, we propose a novel approach that selects the power prior based on direct measures of the distributional differences between the historical and current control data under normality assumptions. Simulations are conducted to compare the performance of the proposed approach with current approaches (e.g. the commensurate prior, meta-analytic-predictive, and modified power prior methods). The results show that the proposed power prior improves the study power while controlling the type I error within a tolerable limit when the distribution of the historical control data is similar to that of the current control data. The method is developed for both superiority and non-inferiority trials and is illustrated with an example from vaccine clinical trials.
- Published
- 2023
- Full Text
- View/download PDF
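The fixed power prior that this abstract builds on has a simple closed form in the normal, known-variance case: raising the historical likelihood to a power a0 is equivalent to discounting the historical sample size by a0. A minimal sketch (the function name and the flat-initial-prior, known-variance simplifications are ours; the paper's data-driven choice of a0 is not reproduced here):

```python
import numpy as np

def power_prior_posterior(y_cur, y_hist, a0, sigma2):
    """Posterior mean and variance of the control mean under a fixed
    power prior with power a0 in [0, 1], normal data with known
    variance sigma2, and a flat initial prior.

    Raising the historical likelihood to a0 discounts the historical
    sample size: n0 observations count as a0 * n0.
    """
    n, n0 = len(y_cur), len(y_hist)
    eff_n = n + a0 * n0                                  # effective sample size
    post_mean = (n * np.mean(y_cur) + a0 * n0 * np.mean(y_hist)) / eff_n
    post_var = sigma2 / eff_n
    return post_mean, post_var
```

With a0 = 0 no historical information is borrowed; with a0 = 1 the two datasets are simply pooled, and the posterior variance shrinks accordingly.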
3. SMIM: A unified framework of survival sensitivity analysis using multiple imputation and martingale.
- Author
- Yang S, Zhang Y, Liu GF, and Guan Q
- Subjects
- Computer Simulation, Survival Analysis, Models, Statistical, Research Design
- Abstract
Censored survival data are common in clinical trials. We propose a unified framework for sensitivity analysis to censoring at random in survival data using multiple imputation and martingale theory, called SMIM. The proposed framework adopts δ-adjusted and control-based models, indexed by a sensitivity parameter, covering censoring at random and a wide collection of censoring-not-at-random assumptions. It also targets a broad class of treatment effect estimands defined as functionals of the treatment-specific survival functions, accounting for missing data due to censoring. Multiple imputation facilitates the use of simple full-sample estimation; however, Rubin's standard combining rule may overestimate the variance for inference in the sensitivity analysis framework. We decompose the multiple imputation estimator into a martingale series based on the sequential construction of the estimator and propose wild bootstrap inference by resampling the martingale series. The new bootstrap inference has a theoretical guarantee of consistency and is computationally efficient compared with its nonparametric bootstrap counterpart. We evaluate the finite-sample performance of the proposed SMIM through simulation and an application to an HIV clinical trial. (© 2021 The International Biometric Society.)
- Published
- 2023
- Full Text
- View/download PDF
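The δ-adjusted models mentioned in this abstract inflate the post-censoring hazard by a sensitivity parameter δ, with δ = 1 recovering censoring at random. A toy sketch under a constant baseline hazard, where memorylessness lets an exponential residual time be added past the censoring time (the function name and the constant-hazard simplification are ours; SMIM itself is far more general):

```python
import numpy as np

def delta_adjusted_impute(cens_time, base_hazard, delta, rng):
    """Impute an event time beyond a censoring time when the
    post-censoring hazard is assumed to be delta * base_hazard.

    delta = 1 corresponds to censoring at random; delta > 1 penalizes
    dropouts.  With a constant hazard, memorylessness makes the extra
    time exponential with rate delta * base_hazard.
    """
    residual = rng.exponential(1.0 / (delta * base_hazard))
    return cens_time + residual
```

Drawing many such imputations per subject and analyzing each completed dataset gives the multiple imputation layer; SMIM's contribution is the martingale-based wild bootstrap for valid variance estimation on top of it.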
4. Analysis of time-to-event data using a flexible mixture model under a constraint of proportional hazards.
- Author
- Liu GF and Liao JJZ
- Subjects
- Computer Simulation, Data Interpretation, Statistical, Humans, Likelihood Functions, Models, Statistical, Neoplasms metabolism, Neoplasms mortality, Neoplasms therapy, Proportional Hazards Models, Survival Analysis, Time Factors, Treatment Outcome, Randomized Controlled Trials as Topic statistics & numerical data, Research Design statistics & numerical data
- Abstract
The Cox proportional hazards (PH) model evaluates the effects of covariates of interest under the PH assumption without specifying the baseline hazard. In clinical trial applications, however, an explicitly estimated hazard or cumulative survival function for each treatment group helps to assess and interpret the treatment difference. In this paper, we propose using a flexible mixture model under the PH constraint to fit the underlying survival functions. Simulations are conducted to evaluate its performance and show that the proposed mixture PH model performs very similarly to the Cox PH model in terms of hazard ratio estimation, bias, confidence interval coverage, type-I error, and testing power. Application to several real clinical trial examples demonstrates that the results from this approach are almost identical to those from the Cox PH model. The explicitly estimated hazard function for each treatment group provides additional useful information and aids the interpretation of hazard comparisons.
- Published
- 2020
- Full Text
- View/download PDF
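The PH constraint described in this abstract means the treatment survival curve is the baseline curve raised to the hazard ratio, S1(t) = S0(t)^HR, with the baseline S0 modeled flexibly. A minimal sketch using a mixture of exponentials as the flexible baseline (the mixture family and function names are illustrative assumptions, not the paper's exact specification):

```python
import numpy as np

def mixture_surv(t, weights, rates):
    """Baseline survival S0(t) as a mixture of exponentials."""
    t = np.asarray(t, dtype=float)
    return sum(w * np.exp(-r * t) for w, r in zip(weights, rates))

def ph_surv(t, weights, rates, hr):
    """Treatment survival under the PH constraint: S1(t) = S0(t) ** hr."""
    return mixture_surv(t, weights, rates) ** hr
```

Because S0(t) ≤ 1, any hr > 1 pushes S1 below S0 everywhere, which is exactly the proportional hazards relationship, while the mixture keeps the baseline shape flexible.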
5. Estimand framework: Delineating what to be estimated with clinical questions of interest in clinical trials.
- Author
- Jin M and Liu G
- Subjects
- Causality, Data Interpretation, Statistical, Humans, Drugs, Investigational, Research Design
- Abstract
ICH (International Council for Harmonisation) E9 (R1) (2019) proposes a framework for defining estimands in clinical trials. Although the concept of an estimand was proposed earlier, when the US Food and Drug Administration (FDA) issued its panel report on handling missing data in clinical trials, many details, including the attributes and different strategies, were not developed until the recent ICH E9 (R1) addendum. A clearly defined estimand should consider five attributes: the patient population, the treatment regimen of interest, the endpoint/variables, the handling of intercurrent events (IEs), and the summary measure for assessing the treatment effect. To evaluate the underlying treatment effects of a new investigational drug or biologic product, it is desirable to consider estimands that are aligned with the objectives of the study and that are meaningful to stakeholders such as physicians, patients, health authorities, and payers. In this paper, the concepts, attributes, and strategies of the estimand framework are reviewed and illustrated with clinical trial examples. Some common estimands and their associated scientific questions are discussed within a causal inference framework for longitudinal clinical trials. (Copyright © 2020 Elsevier Inc. All rights reserved.)
- Published
- 2020
- Full Text
- View/download PDF
6. Two-level approaches to missing data in longitudinal trials with daily patient-reported outcomes.
- Author
- Jin M, Feng D, Liu G, and Wan S
- Subjects
- Bias, Computer Simulation, Humans, Patient Reported Outcome Measures, Research Design, Sleep Initiation and Maintenance Disorders drug therapy
- Abstract
In longitudinal clinical trials with daily patient-reported outcomes, the analysis endpoints are often defined as the average of the daily diary outcomes over a treatment cycle (such as a month or a week). Conventional methods often handle missing data at the cycle level by averaging the available daily values, treating the cycle average as missing if the number of days with available outcomes in the treatment cycle falls below a certain threshold. This was the method used in a case study of a phase 3 clinical trial evaluating a treatment for insomnia with daily patient-reported outcomes. Such methods may introduce bias. Motivated by this, we propose methods to impute missing daily outcomes. Specifically, we define a two-level missing pattern for clinical trials with daily patient-reported outcomes and propose two-level methods to impute missing data at the daily level. Beyond the standard multiple imputation methods, we derive analytic formulas for the proposed two-level methods to reduce computational burden and improve the variance estimates. The proposed two-level methods provide more powerful approaches to estimating the treatment difference than the conventional cycle-level methods, as demonstrated by theoretical development and simulation studies. In addition, the methods are applied to the motivating phase 3 insomnia trial.
- Published
- 2020
- Full Text
- View/download PDF
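The conventional cycle-level handling that this abstract improves on can be made concrete: average the observed daily values and declare the whole cycle missing when too few days are available. A sketch of that baseline rule (the function name and nan convention are ours; the paper's two-level daily imputation replaces this):

```python
import numpy as np

def cycle_average(daily, min_days):
    """Conventional cycle-level endpoint: the mean of the available
    daily diary values, set to missing (nan) when fewer than
    min_days values were observed in the cycle."""
    daily = np.asarray(daily, dtype=float)
    observed = daily[~np.isnan(daily)]
    if observed.size < min_days:
        return float("nan")
    return float(observed.mean())
```

The abstract's point is that this rule both discards partially observed cycles and ignores the within-cycle missingness pattern; imputing at the daily level avoids both sources of bias.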
7. Control-based imputation for sensitivity analyses in informative censoring for recurrent event data.
- Author
- Gao F, Liu GF, Zeng D, Xu L, Lin B, Diao G, Golm G, Heyse JF, and Ibrahim JG
- Subjects
- Computer Simulation, Diabetes Mellitus, Type 2 drug therapy, Humans, Hypoglycemic Agents therapeutic use, Randomized Controlled Trials as Topic methods, Sitagliptin Phosphate therapeutic use, Clinical Trials as Topic methods, Data Interpretation, Statistical, Models, Statistical, Research Design
- Abstract
In clinical trials, missing data commonly arise through nonadherence to the randomized treatment or to study procedures. For trials in which recurrent event endpoints are of interest, conventional analyses using the proportional intensity model or count models assume that the data are missing at random, an assumption that cannot be tested using the observed data alone. Thus, sensitivity analyses are recommended. We implement control-based multiple imputation as a sensitivity analysis for recurrent event data. We model the recurrent events using a piecewise exponential proportional intensity model with frailty and sample the parameters from the posterior distribution. We impute the number of events after dropout and correct the variance estimation using a bootstrap procedure. We apply the method to a sitagliptin study. (Copyright © 2017 John Wiley & Sons, Ltd.)
- Published
- 2017
- Full Text
- View/download PDF
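The core imputation step in this abstract can be caricatured with a constant control-arm intensity: events after dropout are drawn from the control rate over the remaining planned follow-up. A deliberately simplified sketch (the paper's piecewise exponential model with frailty and its posterior sampling are not reproduced; names are ours):

```python
import numpy as np

def impute_post_dropout_events(control_rate, dropout_time, end_time, rng):
    """Control-based imputation sketch for recurrent events: after
    dropout, a subject's unobserved event count is drawn from the
    control-arm intensity over the remaining planned follow-up."""
    exposure = max(end_time - dropout_time, 0.0)
    return rng.poisson(control_rate * exposure)
```

In the actual method the rate is piecewise over time, includes a subject-level frailty, and is itself sampled from its posterior, so that imputation uncertainty propagates into the bootstrap variance correction.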
8. Bayesian methods for the design and analysis of noninferiority trials.
- Author
- Gamalo-Siebers M, Gao A, Lakshminarayanan M, Liu G, Natanegara F, Railkar R, Schmidli H, and Song G
- Subjects
- Data Interpretation, Statistical, Humans, Placebos, Treatment Outcome, Bayes Theorem, Randomized Controlled Trials as Topic, Research Design
- Abstract
The gold standard for evaluating the treatment efficacy of a medical product is a placebo-controlled trial. However, when the use of placebo is considered unethical or impractical, a viable alternative for evaluating treatment efficacy is a noninferiority (NI) study, in which a test treatment is compared to an active control treatment. The minimal objective of such a study is to determine whether the test treatment is superior to placebo. An assumption is made that if the active control treatment remains efficacious, as was observed when it was compared against placebo, then a test treatment with efficacy comparable to the active control, within a certain range, must also be superior to placebo. Because of this assumption, the design, implementation, and analysis of NI trials present challenges for sponsors and regulators. In designing and analyzing NI trials, substantial historical data are often required on the active control treatment and placebo. Bayesian approaches provide a natural framework for synthesizing the historical data in the form of prior distributions that can be used effectively in the design and analysis of an NI clinical trial. Despite a flurry of recent research activity on Bayesian approaches in medical product development, there are still substantial gaps in the recognition and acceptance of Bayesian approaches in NI trial design and analysis. The Bayesian Scientific Working Group of the Drug Information Association provides a coordinated effort to target the education and implementation issues surrounding Bayesian approaches for NI trials. In this article, we review both frequentist and Bayesian approaches to NI trials and elaborate on the implementation of two common Bayesian methods: the hierarchical prior method and the meta-analytic-predictive approach. Simulations are conducted to investigate the properties of the Bayesian methods, and some real clinical trial examples are presented for illustration.
- Published
- 2016
- Full Text
- View/download PDF
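One of the two Bayesian methods named in this abstract, the meta-analytic-predictive (MAP) approach, can be sketched in the normal-normal case: pool historical estimates by inverse total variance, then widen the posterior of the overall mean by the between-trial variance to predict the control parameter in a new trial (the function name and the known-τ, flat-prior simplifications are ours; practical MAP priors place a prior on τ as well):

```python
import numpy as np

def map_prior(est, se, tau):
    """Meta-analytic-predictive prior sketch (normal-normal model,
    known between-trial SD tau, flat prior on the overall mean).

    Historical estimates est[i] ~ N(mu, se[i]**2 + tau**2); returns the
    predictive mean and variance for the control parameter of a new trial.
    """
    est = np.asarray(est, dtype=float)
    w = 1.0 / (np.asarray(se, dtype=float) ** 2 + tau ** 2)
    mu_hat = np.sum(w * est) / np.sum(w)
    var_mu = 1.0 / np.sum(w)
    return mu_hat, var_mu + tau ** 2   # between-trial variation widens the prior
```

Larger τ down-weights each historical trial and inflates the predictive variance, so less historical information is borrowed, mirroring the discounting role of the power parameter in power prior methods.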
9. A note on effective sample size for constructing confidence intervals for the difference of two proportions.
- Author
- Liu GF
- Subjects
- Bayes Theorem, Computer Simulation, Confidence Intervals, Humans, Linear Models, Logistic Models, Models, Statistical, Sample Size, Clinical Trials as Topic methods, Outcome Assessment, Health Care methods, Research Design
- Abstract
Proportion differences are often used to estimate and test treatment effects in clinical trials with binary outcomes. To adjust for other covariates or for intra-subject correlation among repeated measures, logistic regression or longitudinal data analysis models such as generalized estimating equations or generalized linear mixed models may be used. However, these models are often based on the logit link, which yields parameter estimates and comparisons on the log-odds-ratio scale rather than the proportion-difference scale. A two-step method has been proposed in the literature to approximate confidence intervals for the proportion difference using the concept of effective sample size; however, the performance of this two-step method was not investigated in the original paper. In this note, we examine the properties of the two-step method and propose an adjustment to the effective sample size formula based on Bayesian information theory. Simulations are conducted to evaluate the performance and show that the modified effective sample size improves the coverage of the confidence intervals. (Copyright © 2012 John Wiley & Sons, Ltd.)
- Published
- 2012
- Full Text
- View/download PDF
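The effective sample size idea in this note can be sketched directly: match the model-based standard error of each proportion to a binomial variance to get an implied n, then plug those n's into a standard interval for the difference, such as Newcombe's hybrid score interval (the Bayesian adjustment the note proposes is not reproduced; the function names are ours):

```python
import math

def effective_n(p_hat, se_p):
    """Effective sample size implied by a model-based estimate: match
    the binomial variance p(1-p)/n to the model-based SE squared."""
    return p_hat * (1.0 - p_hat) / se_p ** 2

def wilson(p, n, z=1.959963984540054):
    """Wilson score interval for one proportion (n may be fractional)."""
    denom = 1.0 + z * z / n
    center = (p + z * z / (2.0 * n)) / denom
    half = z * math.sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n)) / denom
    return center - half, center + half

def newcombe_diff_ci(p1, se1, p2, se2):
    """Two-step CI for p1 - p2: plug the effective sample sizes into
    Newcombe's hybrid score interval."""
    l1, u1 = wilson(p1, effective_n(p1, se1))
    l2, u2 = wilson(p2, effective_n(p2, se2))
    d = p1 - p2
    lower = d - math.sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    upper = d + math.sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
    return lower, upper
```

The two-step structure lets the model-based estimates (which carry the covariate adjustment) drive an interval expressed on the clinically interpretable proportion-difference scale.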
Discovery Service for Jio Institute Digital Library