1,272 results
Search Results
2. Subgroup analysis and interpretation for phase 3 confirmatory trials: White paper of the EFSPI/PSI working group on subgroup analysis.
- Author
- Dane, Aaron, Spencer, Amy, Rosenkranz, Gerd, Lipkovich, Ilya, and Parke, Tom
- Subjects
- *SUBGROUP analysis (Experimental design), *TEAMS in the workplace, *CLINICAL trials, *LABELS
- Abstract
Subgroup‐by‐treatment interaction assessments are routinely performed when analysing clinical trials and are particularly important for phase 3 trials, where the results may affect regulatory labelling. Interpretation of such interactions is particularly difficult: on the one hand a subgroup finding can be due to chance, but equally such analyses are known to have a low chance of detecting differential treatment effects across subgroup levels, and so may overlook important differences in therapeutic efficacy. EMA have therefore issued draft guidance on the use of subgroup analyses in this setting. Although this guidance provides clear proposals on the importance of pre‐specification of likely subgroup effects and how to use this when interpreting trial results, it is less clear which analysis methods would be reasonable, and how to interpret apparent subgroup effects in terms of whether further evaluation or action is necessary. A PSI/EFSPI Working Group has therefore been investigating a focused set of analysis approaches to assess treatment effect heterogeneity across subgroups in confirmatory clinical trials that take account of the number of subgroups explored, as well as the ability of each method to detect such subgroup heterogeneity. This evaluation has shown that the plotting of standardised effects, the bias‐adjusted bootstrapping method, and the SIDES method all perform more favourably than traditional approaches such as investigating all subgroup‐by‐treatment interactions individually or applying a global test of interaction. Therefore, these approaches should be considered to aid interpretation and provide context for observed results from subgroup analyses conducted for phase 3 clinical trials. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
3. Comments on D.A. Noe's papers on noncompartmental pharmacokinetic analysis: Performance characteristics of the adjusted r2 algorithm for determining the start of the terminal disposition phase and comparison with a simple r2 algorithm and a visual inspection method https://onlinelibrary.wiley.com/doi/10.1002/pst.1979 and Criteria for reporting noncompartmental estimates of half‐life and area under the curve extrapolated to infinity https://onlinelibrary.wiley.com/doi/10.1002/pst.1978
- Author
- Weiner, Daniel and Teuscher, Nathan S.
- Subjects
- *INSPECTION & review, *INFINITY (Mathematics), *ALGORITHMS, *ESTIMATES, *CURVES
- Abstract
The author of the two articles has presented interesting research that points to bias in algorithms commonly used to estimate the terminal elimination rate constants for non-compartmental analysis of pharmacokinetic data. In conclusion, the author proposes a visual inspection algorithm that is applicable to pharmacokinetic data analysts performing NCA analyses, and we thank the author for opening a dialogue on this issue. [Extracted from the article]
- Published
- 2020
- Full Text
- View/download PDF
4. Obituary: Sir David Cox.
- Subjects
- STATISTICS, PROPORTIONAL hazards models
- Abstract
His name has been attached to the Cox process, a stochastic process model he developed in a 1955 paper and, most prominently, to the Cox model, a semi-parametric regression framework for identifying factors that influence the time to an event occurring. This paper, published 9 years after his PhD, and after publishing 44 more specialised papers and a book, perhaps represents David Cox's first work on purely theoretical statistics. Sir David Cox, who died on January 18, 2022, was arguably the most influential statistician of the latter half of the 20th Century. [Extracted from the article]
- Published
- 2022
- Full Text
- View/download PDF
5. Just say no to data listings!
- Author
- Navarro, Mercidita, Brucken, Nancy, Yang, Aiming, and Ball, Greg
- Subjects
- DATA structures, USER experience, DATA modeling
- Abstract
Sponsor companies often create voluminous static listings for Clinical Study Reports (CSRs) and regulatory submissions, and possibly for internal use to review participant‐level data. This is likely due to the perception that they are required and/or lack of knowledge of various alternatives. However, there are other ways of viewing clinical study data that can provide an improved user experience, and are made possible by standard data structures such as the Study Data Tabulation Model (SDTM). The purpose of this paper is to explore some alternatives to providing a complete set of static listings and make a case for sponsors to begin considering these alternatives. We will discuss the recommendations from the PHUSE white paper, "Data Listings in Clinical Study Reports." [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
6. A model‐assisted design for partially or completely ordered groups.
- Author
- Celum, Connor and Conaway, Mark
- Abstract
This paper proposes a trial design for locating group‐specific doses when groups are partially or completely ordered by dose sensitivity. Previous trial designs for partially ordered groups are model‐based, whereas the proposed method is model‐assisted, providing clinicians with a simpler design. The proposed method performs similarly to model‐based methods, providing simplicity without losing accuracy. Additionally, to the best of our knowledge, this is the first paper on dose‐finding for partially ordered groups with convergence results. To generalize the proposed method, a framework is introduced that allows partial orders to be transferred to a grid format with a known ordering across rows but an unknown ordering within rows. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Discussion on the paper 'Prediction of accrual closure date in multi-center clinical trials with discrete-time Poisson process models', by Gong Tang, Yuan Kong, Chung-Chou Ho Chang, Lan Kong, and Joseph P. Costantino.
- Author
- Anisimov, Vladimir V.
- Published
- 2012
- Full Text
- View/download PDF
8. Authors' reply to the letter to the editor on the paper 'Prediction of accrual closure date in multi-center clinical trials with discrete-time Poisson process models'.
- Author
- Tang, Gong, Kong, Yuan, Chang, Chung-Chou Ho, Kong, Lan, and Costantino, Joseph P.
- Published
- 2012
- Full Text
- View/download PDF
9. The analysis of the AB/BA cross-over trial in the medical literature.
- Author
- Senn, Stephen and Lee, Sally
- Published
- 2004
- Full Text
- View/download PDF
10. From innovative thinking to pharmaceutical industry implementation: Some success stories.
- Author
- Walley, Rosalind and Brayshaw, Nigel
- Subjects
- PHARMACEUTICAL industry, SUCCESS, STATISTICIANS
- Abstract
In industry, successful innovation involves not only developing new statistical methodology, but also ensuring that this methodology is implemented successfully. This includes enabling applied statisticians to understand the method, its benefits and limitations and empowering them to implement the new method. This will include advocacy, influencing in‐house and external stakeholders, such that these stakeholders are receptive to the new methodology. In this paper, we describe some industry successes and focus on our colleague, Andy Grieve's role in these. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
11. Quantification of follow‐up time in oncology clinical trials with a time‐to‐event endpoint: Asking the right questions.
- Author
- Rufibach, Kaspar, Grinsted, Lynda, Li, Jiang, Weber, Hans Jochen, Zheng, Cheng, and Zhou, Jiangxiu
- Subjects
- CLINICAL trials, RANDOMIZED controlled trials, DRUG development, ONCOLOGY
- Abstract
For the analysis of a time‐to‐event endpoint in a single‐arm or randomized clinical trial, it is generally perceived that interpretation of a given estimate of the survival function, or the comparison between two groups, hinges on some quantification of the amount of follow‐up. Typically, a median of some loosely defined quantity is reported. However, whatever median is reported typically does not answer the question(s) trialists actually have in terms of follow‐up quantification. In this paper, inspired by the estimand framework, we formulate a comprehensive list of relevant scientific questions that trialists have when reporting time‐to‐event data. We illustrate how these questions should be answered, and that reference to an unclearly defined follow‐up quantity is not needed at all. In drug development, key decisions are made based on randomized controlled trials, and we therefore also discuss relevant scientific questions not only when looking at a time‐to‐event endpoint in one group, but also for comparisons. We find that different thinking about some of the relevant scientific questions around follow‐up is required depending on whether a proportional hazards assumption can be made or other patterns of survival functions are anticipated, for example, delayed separation, crossing survival functions, or the potential for cure. We conclude the paper with practical recommendations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
12. Conditional assurance: the answer to the questions that should be asked within drug development.
- Author
- Temple, Jane R. and Robertson, Jon R.
- Subjects
- DRUG development, DECISION making, RATE of return, TREATMENT effectiveness
- Abstract
In this paper, we extend the use of assurance for a single study to explore how meeting a study's pre‐defined success criteria could update our beliefs about the true treatment effect and impact the assurance of subsequent studies. This concept of conditional assurance, the assurance of a subsequent study conditional on success in an initial study, can be used to assess the de‐risking potential of the study requiring immediate investment, to ensure it provides value within the overall development plan. If the planned study does not discharge sufficient later‐phase risk, alternative designs and/or success criteria should be explored. By transparently laying out the different design options and their associated risks, decision makers can make quantitative investment choices based on their risk tolerance levels and potential return on investment. This paper lays out the derivation of conditional assurance, discusses how changing the design of a planned study will impact the conditional assurance of a future study, and presents a simple illustrative example of how this methodology could be used to transparently compare development plans to aid decision making within an organisation. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
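The conditional assurance idea described in the entry above lends itself to a small Monte Carlo illustration: sample the unknown treatment effect from a prior, simulate both studies, and compare the assurance of the later study with and without conditioning on success of the earlier one. The sketch below is a hedged, minimal version under assumed normal endpoints and a normal prior; all design inputs are hypothetical and are not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2021)

# Hypothetical design inputs (not from the paper): a normal prior on the true
# treatment effect and two studies summarised by the SE of their effect estimates.
prior_mean, prior_sd = 0.3, 0.2
se_phase2, se_phase3 = 0.15, 0.08
alpha = 0.025                              # one-sided significance level
z_crit = stats.norm.ppf(1 - alpha)

n_sim = 200_000
delta = rng.normal(prior_mean, prior_sd, n_sim)   # draw the unknown true effect
z2 = rng.normal(delta / se_phase2, 1.0)           # phase 2 test statistic
z3 = rng.normal(delta / se_phase3, 1.0)           # phase 3 test statistic
success2 = z2 > z_crit
success3 = z3 > z_crit

assurance2 = success2.mean()                      # unconditional assurance, phase 2
assurance3 = success3.mean()                      # unconditional assurance, phase 3
conditional_assurance = success3[success2].mean() # phase 3 assurance given phase 2 success

print(f"Assurance of phase 2:             {assurance2:.3f}")
print(f"Assurance of phase 3:             {assurance3:.3f}")
print(f"Conditional assurance of phase 3: {conditional_assurance:.3f}")
```

The gap between the unconditional and conditional assurance of the later study is one way to read the "de-risking" value of the earlier study that the abstract discusses.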
13. Number of Repetitions in Re‐Randomization Tests.
- Author
- Zhang, Yilong, Zhao, Yujie, Wang, Bingjun, and Luo, Yiwen
- Subjects
- *MONTE Carlo method, *NUMERICAL analysis, *INFERENTIAL statistics, *SAMPLE size (Statistics), *CLINICAL trials
- Abstract
In covariate‐adaptive or response‐adaptive randomization, the treatment assignment and outcome can be correlated. Under this situation, the re‐randomization test is a straightforward and attractive method to provide valid statistical inferences. In this paper, we investigate the number of repetitions in re‐randomization tests. This is motivated by a group sequential design in clinical trials, where the nominal significance bound can be very small at an interim analysis. Accordingly, re‐randomization tests lead to a very large number of required repetitions, which may be computationally intractable. To reduce the number of repetitions, we propose an adaptive procedure and compare it with multiple approaches under predefined criteria. Monte Carlo simulations are conducted to show the performance of the different approaches with a limited sample size. We also suggest strategies to reduce total computation time and provide practical guidance on preparing, executing, and reporting before and after data are unblinded at an interim analysis, so one can complete the computation within a reasonable time frame. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
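The computational burden described in the entry above is easy to see with a toy re-randomization test. The sketch below uses hypothetical data and simple two-arm complete randomization (not the covariate- or response-adaptive schemes studied in the paper): it estimates a Monte Carlo p-value and shows why a small nominal bound at an interim analysis forces many repetitions, since the Monte Carlo standard error of the estimated p-value is roughly sqrt(p(1-p)/B) for B repetitions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical outcomes for a small two-arm trial.
y = rng.normal(0.4, 1.0, 60)                        # observed responses
assign = rng.permutation(np.repeat([0, 1], 30))     # original 1:1 assignment

def test_stat(y, a):
    # Difference in means between the two arms.
    return y[a == 1].mean() - y[a == 0].mean()

t_obs = test_stat(y, assign)

def rerandomization_pvalue(y, assign, n_rep, rng):
    """Monte Carlo re-randomization p-value under complete randomization."""
    count = 0
    for _ in range(n_rep):
        a_new = rng.permutation(assign)             # re-draw assignment, keep outcomes fixed
        count += test_stat(y, a_new) >= t_obs
    return (count + 1) / (n_rep + 1)                # add-one correction for a valid p-value

for n_rep in (1_000, 10_000):
    p_hat = rerandomization_pvalue(y, assign, n_rep, rng)
    mc_se = np.sqrt(p_hat * (1 - p_hat) / n_rep)
    print(f"B = {n_rep:6d}: p-hat = {p_hat:.4f}  (Monte Carlo SE ~ {mc_se:.4f})")

# Resolving a group-sequential interim bound of, say, 0.0001 with ~10% relative
# error needs roughly p(1-p) / (0.1*p)^2 repetitions, i.e. about a million, which
# is why the paper studies adaptive ways to reduce the number of repetitions.
print("Repetitions for 10% relative error at p = 1e-4:",
      int(1e-4 * (1 - 1e-4) / (0.1 * 1e-4) ** 2))
```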
14. An Adaptive Three‐Arm Comparative Clinical Endpoint Bioequivalence Study Design With Unblinded Sample Size Re‐Estimation and Optimized Allocation Ratio.
- Author
- Hinds, David and Sun, Wanjie
- Subjects
- *FALSE positive error, *GENERIC drugs, *SAMPLE size (Statistics), *ERROR rates, *COST control
- Abstract
A three‐arm comparative clinical endpoint bioequivalence (BE) study is often used to establish BE between a locally acting generic drug (T) and a reference drug (R), where superiority needs to be established for T and R over placebo (P) and equivalence needs to be established for T vs. R. Sometimes, when study design parameters are uncertain, a fixed design study may be under‐ or over‐powered and result in study failure or unnecessary cost. In this paper, we propose a two‐stage adaptive clinical endpoint BE study with unblinded sample size re‐estimation, standard or maximum combination method, optimized allocation ratio, optional re‐estimation of the effect size based on likelihood estimation, and optional re‐estimation of the R and P treatment means at interim analysis, which have not been done previously. Our proposed method guarantees control of the Type 1 error rate analytically. It helps to reduce the average sample size when the original fixed design is overpowered and increases the sample size and power when the original study and group sequential design are under‐powered. Our proposed adaptive design can help generic drug sponsors cut cost and improve success rates, making clinical endpoint BE studies more affordable and more generic drugs accessible to the public. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Alone, together: On the benefits of Bayesian borrowing in a meta‐analytic setting.
- Author
- Harari, Ofir, Soltanifar, Mohsen, Verhoek, Andre, and Heeg, Bart
- Subjects
- RANDOMIZED controlled trials, TREATMENT effectiveness
- Abstract
It is common practice to use a hierarchical Bayesian model to inform a pediatric randomized controlled trial (RCT) with adult data, using a prespecified borrowing fraction parameter (BFP). This implicitly assumes that the BFP is intuitive and corresponds to the degree of similarity between the populations. Generalizing this model to any K≥1 historical studies naturally leads to empirical Bayes meta‐analysis. In this paper we calculate the Bayesian BFPs and study the factors that drive them. We prove that simultaneous mean squared error reduction relative to an uninformed model is always achievable through application of this model. Power and sample size calculations for a future RCT, designed to be informed by multiple external RCTs, are also provided. Potential applications include inference on treatment efficacy from independent trials involving either heterogeneous patient populations or different therapies from a common class. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. Replenishing the pipeline: A quantitative approach to optimising the sourcing of new projects.
- Author
- Wiklund, Stig Johan, Önnheim, Magnus, and Ytterstad, Magnus
- Subjects
- BUDGET, BUSINESS revenue, SUBSET selection, ASSETS (Accounting), PHARMACEUTICAL industry
- Abstract
Large pharmaceutical companies maintain a portfolio of assets, some of which are projects under development while others are on the market and generating revenue. The budget allocated to R&D may not always be sufficient to fund all the available projects for development. Much attention has been paid to the selection of optimal subsets of available projects to fit within the available budget. In this paper, we argue the need for a forward‐looking approach to portfolio decision‐making. We develop a quantitative model that allows the portfolio management to evaluate the need for future inflow of new projects to achieve revenue at desired levels, often aspiring to a certain annual revenue growth. Optimisation methods are developed for the presented model, allowing an optimal choice of number, timing and type of projects to be added to the portfolio. The proposed methodology allows for a proactive approach to portfolio management, prioritisation, and optimisation. It provides a quantitatively based support for strategic decisions regarding the efforts needed to secure the future development pipeline and revenue stream of the company. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
17. Improved inference for MCP‐Mod approach using time‐to‐event endpoints with small sample sizes.
- Author
- Diniz, Márcio A., Gallardo, Diego I., and Magalhães, Tiago M.
- Subjects
- SAMPLE size (Statistics), FALSE positive error, COVARIANCE matrices, REGRESSION analysis, MULTIPLE comparisons (Statistics)
- Abstract
The Multiple Comparison Procedures with Modeling Techniques (MCP‐Mod) framework has recently been approved by the U.S. Food and Drug Administration and the European Medicines Agency as fit‐for‐purpose for phase II studies. Nonetheless, this approach relies on the asymptotic properties of Maximum Likelihood (ML) estimators, which might not be reasonable for small sample sizes. In this paper, we derive improved ML estimators and corrections for their covariance matrices in the censored Weibull regression model based on the corrective and preventive approaches. We performed two simulation studies to evaluate ML and improved ML estimators with their covariance matrices in (i) a regression framework and (ii) the MCP‐Mod framework. We show that improved ML estimators are less biased than ML estimators, yielding Wald‐type statistics that control the type I error without loss of power in both frameworks. Therefore, we recommend the use of improved ML estimators in the MCP‐Mod approach to control the type I error at its nominal value for sample sizes ranging from 5 to 25 subjects per dose. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
18. Estimation of multivariate treatment effects in contaminated clinical trials.
- Author
- Ye, Zi and Harrar, Solomon W.
- Subjects
- TREATMENT effectiveness, CLINICAL trials, SAMPLE size (Statistics), EXPECTATION-maximization algorithms, GINGIVAL recession, BRAIN-computer interfaces, ELECTROENCEPHALOGRAPHY
- Abstract
The paper addresses estimating and testing treatment effects with multivariate outcomes in clinical trials where imperfect diagnostic devices are used to assign subjects to treatment groups. The paper focuses on the pre‐post design and proposes two novel methods for estimating and testing treatment effects. In addition, methods for sample size and power calculations are developed. The methods are compared with each other and with a traditional method in a simulation study. The new methods show significant advantages in terms of power, coverage probability, and required sample size. The application of the methods is illustrated with data from electroencephalogram (EEG) recordings of alcoholic and control subjects. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
19. Exploring Stratification Strategies for Population‐ Versus Randomization‐Based Inference.
- Author
- Novelli, Marco and Rosenberger, William F.
- Subjects
- *SUBGROUP analysis (Experimental design), *TRIAL practice, *CLINICAL trials
- Abstract
Stratification on important variables is a common practice in clinical trials, since ensuring cosmetic balance on known baseline covariates is often deemed to be a crucial requirement for the credibility of the experimental results. However, the actual benefits of stratification are still debated in the literature. Other authors have shown that it does not improve efficiency in large samples and improves it only negligibly in smaller samples. This paper investigates different subgroup analysis strategies, with a particular focus on the potential benefits in terms of inferential precision of prestratification versus both poststratification and post hoc regression adjustment. For each of these approaches, the pros and cons of population‐based versus randomization‐based inference are discussed. The effects of the presence of a treatment‐by‐covariate interaction and the variability in the patient responses are also taken into account. Our results show that, in general, prestratifying does not provide substantial benefit. On the contrary, it may be deleterious, in particular for randomization‐based procedures in the presence of a chronological bias. Even when there is treatment‐by‐covariate interaction, prestratification may backfire by considerably reducing the inferential precision. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Potency Assay Variability Estimation in Practice.
- Author
- Li, Hang, Witkos, Tomasz M., Umlauf, Scott, and Thompson, Christopher
- Subjects
- *BIOLOGICAL assay, *DRUG development
- Abstract
During the drug development process, testing potency plays an important role in the quality assessment required for the manufacturing and marketing of biologics. Due to multiple operational and biological factors, higher variability is usually observed in bioassays compared with physicochemical methods. In this paper, we discuss different sources of bioassay variability and how this variability can be statistically estimated. In addition, we propose an algorithm to estimate the variability of reportable results associated with different numbers of runs and their corresponding out‐of‐specification (OOS) rates under a given specification. Numerical experiments are conducted on multiple assay formats to elucidate the empirical distribution of bioassay variability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Futility Interim Analysis Based on Probability of Success Using a Surrogate Endpoint.
- Author
- Fougeray, Ronan, Vidot, Loïck, Ratta, Marco, Teng, Zhaoyang, Skanji, Donia, and Saint‐Hilary, Gaëlle
- Subjects
- *CLINICAL trials, *FRUSTRATION, *PROBABILITY theory, *SCIENTIFIC community, *OVERALL survival
- Abstract
In clinical trials with time‐to‐event data, the evaluation of treatment efficacy can be a long and complex process, especially when considering long‐term primary endpoints. Using surrogate endpoints that correlate with the primary endpoint has become a common practice to accelerate decision‐making. Moreover, the ethical need to minimize sample size and the practical need to optimize available resources have encouraged the scientific community to develop methodologies that leverage historical data. Relying on the general theory of group sequential design and using a Bayesian framework, the methodology described in this paper exploits a documented historical relationship between a clinical "final" endpoint and a surrogate endpoint to build an informative prior for the primary endpoint, using surrogate data from an early interim analysis of the clinical trial. The predictive probability of success of the trial is then used to define a futility‐stopping rule. The methodology demonstrates substantial enhancements in trial operating characteristics when there is good agreement between current and historical data. Furthermore, incorporating a robust approach that combines the surrogate prior with a vague component mitigates the impact of minor prior‐data conflicts while maintaining acceptable performance even in the presence of significant prior‐data conflicts. The proposed methodology was applied to design a Phase III clinical trial in metastatic colorectal cancer, with overall survival as the primary endpoint and progression‐free survival as the surrogate endpoint. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Sample size calculation for comparing two ROC curves.
- Author
- Jung, Sin‐Ho
- Subjects
- *RECEIVER operating characteristic curves, *SAMPLE size (Statistics), *FALSE positive error
- Abstract
Biomarkers are key components of personalized medicine. In this paper, we consider biomarkers taking continuous values that are associated with disease status (case versus control). The performance of such a biomarker is evaluated by the area under the curve (AUC) of its receiver operating characteristic curve. Oftentimes, two biomarkers are collected from each subject to test if one has a larger AUC than the other. We propose a simple non‐parametric statistical test for comparing the performance of two biomarkers. We also present a simple sample size calculation method for this test statistic. Our sample size formula requires specification of AUC values (or the standardized effect size of each biomarker between cases and controls together with the correlation coefficient between the two biomarkers), the prevalence of cases in the study population, the type I error rate, and the power. Through simulations, we show that the proposed test comparing two biomarkers controls the type I error rate accurately and that the proposed sample size closely maintains the specified statistical power. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
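The abstract above lists the inputs of the proposed sample size formula without giving the formula itself. As a rough stand-in, the sketch below sizes a comparison of two correlated AUCs using the classic Hanley–McNeil variance approximation and a normal-approximation power calculation; it illustrates the kind of calculation involved, not the author's proposed method, and all numeric inputs are hypothetical.

```python
import numpy as np
from scipy import stats

def hanley_mcneil_var(auc, n_case, n_control):
    """Hanley-McNeil (1982) approximation to Var(AUC-hat)."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    return (auc * (1 - auc)
            + (n_case - 1) * (q1 - auc**2)
            + (n_control - 1) * (q2 - auc**2)) / (n_case * n_control)

def sample_size_two_aucs(auc1, auc2, rho, prevalence, alpha=0.05, power=0.80):
    """Smallest total n so a two-sided z-test of AUC1 = AUC2 reaches the target power.

    rho is the assumed correlation between the two AUC estimates (both biomarkers
    are measured on the same subjects); prevalence is the case proportion."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    delta = abs(auc1 - auc2)
    for n in range(20, 100_000):
        n_case = max(int(round(n * prevalence)), 2)
        n_control = max(n - n_case, 2)
        v1 = hanley_mcneil_var(auc1, n_case, n_control)
        v2 = hanley_mcneil_var(auc2, n_case, n_control)
        var_diff = v1 + v2 - 2 * rho * np.sqrt(v1 * v2)  # variance of the AUC difference
        if delta / np.sqrt(var_diff) >= z_a + z_b:
            return n
    raise ValueError("no n found in search range")

# Hypothetical planning values.
print(sample_size_two_aucs(auc1=0.80, auc2=0.72, rho=0.4, prevalence=0.3))
```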
23. Assessing the performance of group‐based trajectory modeling method to discover different patterns of medication adherence.
- Author
- Diop, Awa, Gupta, Alind, Mueller, Sabrina, Dron, Louis, Harari, Ofir, Berringer, Heather, Kalatharan, Vinusha, Park, Jay J. H., Mésidor, Miceline, and Talbot, Denis
- Subjects
- *PATIENT compliance, *GENERATING functions
- Abstract
It is well known that medication adherence is critical to patient outcomes and can decrease patient mortality. The Pharmacy Quality Alliance (PQA) has recognized and identified medication adherence as an important indicator of medication‐use quality. Hence, there is a need to use the right methods to assess medication adherence. The PQA has endorsed the proportion of days covered (PDC) as the primary method of measuring adherence. Although easy to calculate, the PDC has several drawbacks as a measure of adherence: it is a deterministic approach that cannot capture the complexity of a dynamic phenomenon. Group‐based trajectory modeling (GBTM) is increasingly proposed as an alternative to capture heterogeneity in medication adherence. The main goal of this paper is to demonstrate, through a simulation study, the ability of GBTM to capture treatment adherence when compared to its deterministic PDC analogue and to nonparametric longitudinal K‐means. A time‐varying treatment was generated as a quadratic function of time, baseline, and time‐varying covariates. Three trajectory models are considered, combining a cat's cradle effect and a rainbow effect. The performance of GBTM was compared to the PDC and longitudinal K‐means using the absolute bias, the variance, the c‐statistics, the relative bias, and the relative variance. For all explored scenarios, we find that GBTM captured different patterns of medication adherence better than the PDC and longitudinal K‐means, with lower relative bias and variance even under model misspecification. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
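The proportion of days covered (PDC) that the entry above contrasts with GBTM is simple to compute from fill records. A minimal sketch follows, using hypothetical fill data; production PDC implementations typically also shift overlapping supplies forward, which this sketch handles only crudely by counting each covered day once.

```python
from datetime import date, timedelta

def pdc(fills, start, end):
    """Proportion of days covered between start and end (inclusive).

    fills: list of (fill_date, days_supply). Days covered by more than one fill
    are counted once -- a crude way of handling overlapping fills."""
    period_days = (end - start).days + 1
    covered = set()
    for fill_date, days_supply in fills:
        for k in range(days_supply):
            day = fill_date + timedelta(days=k)
            if start <= day <= end:
                covered.add(day)
    return len(covered) / period_days

# Hypothetical 90-day observation window with three 30-day fills and a late refill.
fills = [(date(2023, 1, 1), 30), (date(2023, 2, 10), 30), (date(2023, 3, 15), 30)]
print(round(pdc(fills, start=date(2023, 1, 1), end=date(2023, 3, 31)), 3))
```

A single number like this is exactly the "deterministic" summary the paper argues cannot reflect dynamic adherence patterns over time.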
24. On the relative conservativeness of Bayesian logistic regression method in oncology dose‐finding studies.
- Author
- Yang, Cheng‐Han, Cheng, Guanghui, and Lin, Ruitao
- Subjects
- *LOGISTIC regression analysis, *REGRESSION analysis, *ONCOLOGY, *DRUG overdose
- Abstract
The Bayesian logistic regression method (BLRM) is a widely adopted and flexible design for finding the maximum tolerated dose in oncology phase I studies. However, the BLRM design has been criticized in the literature for being overly conservative due to the use of the overdose control rule. Recently, a discussion paper titled "Improving the performance of Bayesian logistic regression model with overall control in oncology dose‐finding studies" in Statistics in Medicine has proposed an overall control rule to address the "excessive conservativeness" of the standard BLRM design. In this short communication, we discuss the relative conservativeness of the standard BLRM design and also suggest a dose‐switching rule to further enhance its performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. The flaw of averages: Bayes factors as posterior means of the likelihood ratio.
- Author
- Liu, Charles C., Yu, Ron Xiaolong, and Aitkin, Murray
- Subjects
- *MEDICAL periodicals, *RESEARCH personnel
- Abstract
As an alternative to the Frequentist p‐value, the Bayes factor (or ratio of marginal likelihoods) has been regarded as one of the primary tools for Bayesian hypothesis testing. In recent years, several researchers have begun to re‐analyze results from prominent medical journals, as well as from trials for FDA‐approved drugs, to show that Bayes factors often give divergent conclusions from those of p‐values. In this paper, we investigate the claim that Bayes factors are straightforward to interpret as directly quantifying the relative strength of evidence. In particular, we show that for nested hypotheses with consistent priors, the Bayes factor for the null over the alternative hypothesis is the posterior mean of the likelihood ratio. By re‐analyzing 39 results previously published in the New England Journal of Medicine, we demonstrate how the posterior distribution of the likelihood ratio can be computed and visualized, providing useful information beyond the posterior mean alone. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
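For the point-null special case, the identity highlighted in the entry above is easy to verify numerically: with H0: theta = 0 nested in H1: theta ~ pi(theta), the Bayes factor BF01 = f(x|0) / integral of f(x|theta) pi(theta) d(theta) equals the posterior mean, under H1, of the likelihood ratio f(x|0)/f(x|theta). The sketch below checks this in a conjugate normal example with hypothetical numbers; it is an illustration of the identity only, not a re-analysis of the published results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: sample mean from n observations with known sigma.
n, sigma = 50, 1.0
xbar = 0.25
se = sigma / np.sqrt(n)

# H1 prior: theta ~ N(0, tau^2); H0: theta = 0.
tau = 0.5

# Closed-form marginal likelihood under H1 (normal-normal conjugacy) and BF01.
m1 = stats.norm.pdf(xbar, loc=0.0, scale=np.sqrt(se**2 + tau**2))
f0 = stats.norm.pdf(xbar, loc=0.0, scale=se)
bf01_exact = f0 / m1

# Posterior of theta under H1, then the posterior mean of the likelihood ratio
# LR(theta) = f(x | 0) / f(x | theta).
post_var = 1.0 / (1.0 / tau**2 + 1.0 / se**2)
post_mean = post_var * xbar / se**2
theta = rng.normal(post_mean, np.sqrt(post_var), 1_000_000)
lr = f0 / stats.norm.pdf(xbar, loc=theta, scale=se)

print(f"BF01 (ratio of marginal likelihoods): {bf01_exact:.4f}")
print(f"Posterior mean of LR (Monte Carlo):   {lr.mean():.4f}")
```

The two printed values agree up to Monte Carlo error, which is the "flaw of averages" point: the Bayes factor is only a posterior mean, and the full posterior distribution of the likelihood ratio carries additional information.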
26. To Dilute or Not to Dilute: Nominal Titer Dosing for Genetic Medicines.
- Author
- Faya, Paul and Zhang, Tianhui
- Subjects
- *TITERS, *DRUG labeling, *GENE therapy, *ADENO-associated virus, *TECHNICAL specifications
- Abstract
Recombinant adeno‐associated virus (AAV) has become a popular platform for many gene therapy applications. The strength of AAV‐based products is a critical quality attribute that affects the efficacy of the drug and is measured as the concentration of vector genomes, or physical titer. Because the dosing of patients is based on the titer measurement, it is critical for manufacturers to ensure that the measured titer of the drug product is close to the actual concentration of the batch. Historically, dosing calculations have been performed using the measured titer, which is reported on the drug product label. However, due to recent regulatory guidance, sponsors are now expected to label the drug product with nominal or “target” titer. This new expectation for gene therapy products can pose a challenge in the presence of process and analytical variability. In particular, the manufacturer must decide if a dilution of the drug substance is warranted at the drug product stage to bring the strength in line with the nominal value. In this paper, we present two straightforward statistical methods to aid the manufacturer in the dilution decision. These approaches use the understanding of process and analytical variability to compute probabilities of achieving the desired drug product titer. We also provide an approach for determining an optimal assay replication strategy for achieving the desired probability of meeting drug product release specifications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Principled leveraging of external data in the evaluation of diagnostic devices via the propensity score‐integrated composite likelihood approach.
- Author
- Song, Changhong, Li, Heng, Chen, Wei‐Chen, Lu, Nelson, Tiwari, Ram, Wang, Chenguang, Xu, Yunling, and Yue, Lilly Q.
- Subjects
- MEDICAL supplies, SENSITIVITY & specificity (Statistics), DATA analysis, EXPERIMENTAL design
- Abstract
In the area of diagnostics, it is common practice to leverage external data to augment a traditional study of diagnostic accuracy consisting of prospectively enrolled subjects to potentially reduce the time and/or cost needed for the performance evaluation of an investigational diagnostic device. However, the statistical methods currently being used for such leveraging may not clearly separate study design and outcome data analysis, and they may not adequately address possible bias due to differences in clinically relevant characteristics between the subjects constituting the traditional study and those constituting the external data. This paper is intended to draw attention in the field of diagnostics to the recently developed propensity score‐integrated composite likelihood approach, which originally focused on therapeutic medical products. This approach applies the outcome‐free principle to separate study design and outcome data analysis and can mitigate bias due to imbalance in covariates, thereby increasing the interpretability of study results. While this approach was conceived as a statistical tool for the design and analysis of clinical studies for therapeutic medical products, here, we will show how it can also be applied to the evaluation of sensitivity and specificity of an investigational diagnostic device leveraging external data. We consider two common scenarios for the design of a traditional diagnostic device study consisting of prospectively enrolled subjects, which is to be augmented by external data. The reader will be taken through the process of implementing this approach step‐by‐step following the outcome‐free principle that preserves study integrity. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
28. Incorporating historical information to improve dose optimization design with toxicity and efficacy endpoints: iBOIN‐ET.
- Author
- Zhao, Yunqi, Liu, Rachael, and Takeda, Kentaro
- Subjects
- ERROR probability, DRUG development, CELLULAR therapy, ANTINEOPLASTIC agents, INFORMATION design
- Abstract
In modern oncology drug development, adaptive designs have been proposed to identify the recommended phase 2 dose. Conventional dose‐finding designs focus on the identification of the maximum tolerated dose (MTD). However, designs ignoring efficacy could put patients at risk by pushing to the MTD. Especially in immuno‐oncology and cell therapy, the complex dose‐toxicity and dose‐efficacy relationships make such MTD‐driven designs more questionable. Additionally, it is not uncommon to have data available from other studies that target a similar mechanism of action and patient population. Due to the high variability in phase I trials, it is beneficial to borrow historical study information into the design when available. This helps to increase model efficiency and accuracy and provides dose‐specific recommendation rules that avoid toxic dose levels and increase the chance of allocating patients to potentially efficacious dose levels. In this paper, we propose the iBOIN‐ET design, which uses a prior distribution extracted from historical studies to minimize the probability of decision error. The proposed design utilizes the concept of a skeleton for both toxicity and efficacy data, coupled with a prior effective sample size to control the amount of historical information to be incorporated. Extensive simulation studies across a variety of realistic settings are reported, including a comparison of the iBOIN‐ET design to other model‐based and model‐assisted approaches. The proposed novel design demonstrates superior performance in the percentage of selecting the correct optimal dose (OD), the average number of patients allocated to the correct OD, and overdosing control during the dose escalation process. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. Vaccine clinical trials with dynamic borrowing of historical controls: Two retrospective studies.
- Author
- Callegaro, Andrea, Karkada, Naveen, Aris, Emmanuel, and Zahaf, Toufik
- Subjects
- VACCINE trials, VACCINE development, HUMAN papillomavirus vaccines, VACCINE effectiveness, FALSE positive error
- Abstract
Traditional vaccine efficacy trials usually use fixed designs with fairly large sample sizes. Recruiting a large number of subjects requires longer time and higher costs. Furthermore, vaccine developers are more than ever facing the need to accelerate vaccine development to fulfill the public's medical needs. A possible approach to accelerate development is to use the method of dynamic borrowing of historical controls in clinical trials. In this paper, we evaluate the feasibility and the performance of this approach in vaccine development by retrospectively analyzing two real vaccine studies: a relatively small immunological trial (typical early phase study) and a large vaccine efficacy trial (typical Phase 3 study) assessing prophylactic human papillomavirus vaccine. Results are promising, particularly for early development immunological studies, where the adaptive design is feasible, and control of type I error is less relevant. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
30. Statistical considerations for assessing precision of heterogeneous duplicate measurements: An application to pharmaceutical bioanalysis.
- Author
- Quiroz, Jorge and Roychoudhury, Satrajit
- Subjects
- DISTRIBUTION (Probability theory), GAUSSIAN distribution, GAUSSIAN mixture models, STATISTICAL software
- Abstract
Duplicate analysis is a strategy commonly used to assess the precision of bioanalytical methods. In some cases, duplicate analysis may rely on pooling data generated across organizations. Despite being generated under comparable conditions, organizations may produce duplicate measurements with different precision. Thus, these pooled data consist of a heterogeneous collection of duplicate measurements. Precision estimates are often expressed as relative difference indexes (RDI), such as the relative percentage difference (RPD). Empirical evidence indicates that the frequency distribution of RDI values from heterogeneous data exhibits sharper peaks and heavier tails than normal distributions. Therefore, traditional normal‐based models may yield faulty or unreliable estimates of precision from heterogeneous duplicate data. In this paper, we survey the application of mixture models that satisfactorily represent the distribution of RDI values from heterogeneous duplicate data. A simulation study was conducted to compare the performance of the different models in providing reliable estimates and inferences for percentiles calculated from RDI values. These models are readily accessible to practitioners for study implementation through the use of modern statistical software. The utility of mixture models is explained in detail using a numerical example. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
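As context for the entry above, the relative percentage difference (RPD) of a duplicate pair is a standard quantity, |x1 - x2| divided by the pair mean, times 100. The sketch below computes RPD values and an empirical percentile from pooled duplicates; the simulated two-organization data are a hypothetical stand-in for the heterogeneous pooled data discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def rpd(x1, x2):
    """Relative percentage difference of a duplicate pair: |x1 - x2| / mean * 100."""
    return np.abs(x1 - x2) / ((x1 + x2) / 2.0) * 100.0

def simulate_duplicates(n_pairs, cv, mean=100.0):
    # Duplicates from one organization with a given analytical CV.
    true = rng.lognormal(np.log(mean), 0.3, n_pairs)
    x1 = true * (1 + rng.normal(0, cv, n_pairs))
    x2 = true * (1 + rng.normal(0, cv, n_pairs))
    return rpd(x1, x2)

# Pool duplicates from two organizations with different precision.
pooled = np.concatenate([simulate_duplicates(300, cv=0.05),
                         simulate_duplicates(300, cv=0.15)])

# The pooled RPD distribution is heavier-tailed than either component alone,
# which is the motivation for the mixture models surveyed in the paper.
print("median RPD :", round(np.median(pooled), 2))
print("95th pct   :", round(np.percentile(pooled, 95), 2))
```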
31. Estimands for overall survival in clinical trials with treatment switching in oncology.
- Author
- Manitz, Juliane, Kan‐Dobrosky, Natalia, Buchner, Hannes, Casadebaig, Marie‐Laure, Degtyarev, Evgeny, Dey, Jyotirmoy, Haddad, Vincent, Jie, Fei, Martin, Emily, Mo, Mindy, Rufibach, Kaspar, Shentu, Yue, Stalbovskaya, Viktoriya, Tang, Rui, Yung, Godwin, and Zhou, Jiangxiu
- Subjects
- OVERALL survival, CLINICAL trials, ONCOLOGY, SURVIVAL analysis (Biometry), THERAPEUTICS
- Abstract
An addendum of the ICH E9 guideline on Statistical Principles for Clinical Trials was released in November 2019 introducing the estimand framework. This new framework aims to align trial objectives and statistical analyses by requiring a precise definition of the inferential quantity of interest, that is, the estimand. This definition explicitly accounts for intercurrent events, such as switching to new anticancer therapies for the analysis of overall survival (OS), the gold standard in oncology. Traditionally, OS in confirmatory studies is analyzed using the intention‐to‐treat (ITT) approach comparing treatment groups as they were initially randomized regardless of whether treatment switching occurred and regardless of any subsequent therapy (treatment‐policy strategy). Regulatory authorities and other stakeholders often consider ITT results as most relevant. However, the respective estimand only yields a clinically meaningful comparison of two treatment arms if subsequent therapies are already approved and reflect clinical practice. We illustrate different scenarios where subsequent therapies are not yet approved drugs and thus do not reflect clinical practice. In such situations the hypothetical strategy could be more meaningful from patient's and prescriber's perspective. The cross‐industry Oncology Estimand Working Group (www.oncoestimand.org) was initiated to foster a common understanding and consistent implementation of the estimand framework in oncology clinical trials. This paper summarizes the group's recommendations for appropriate estimands in the presence of treatment switching, one of the key intercurrent events in oncology clinical trials. We also discuss how different choices of estimands may impact study design, data collection, trial conduct, analysis, and interpretation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
32. Sample size calculation for recurrent event data with additive rates models.
- Author
- Zhu, Liang, Li, Yimei, Tang, Yongqiang, Shen, Liji, Onar‐Thomas, Arzu, and Sun, Jianguo
- Subjects
- SAMPLE size (Statistics), EXPERIMENTAL design
- Abstract
This paper discusses the design of clinical trials where the primary endpoint is a recurrent event with the focus on the sample size calculation. For the problem, a few methods have been proposed but most of them assume a multiplicative treatment effect on the rate or mean number of recurrent events. In practice, sometimes the additive treatment effect may be preferred or more appealing because of its intuitive clinical meaning and straightforward interpretation compared to a multiplicative relationship. In this paper, new methods are presented and investigated for the sample size calculation based on the additive rates model for superiority, non‐inferiority, and equivalence trials. They allow for flexible baseline rate function, staggered entry, random dropout, and overdispersion in event numbers, and simulation studies show that the proposed methods perform well in a variety of settings. We also illustrate how to use the proposed methods to design a clinical trial based on real data. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
33. The individual‐level surrogate threshold effect in a causal‐inference setting with normally distributed endpoints.
- Author
- Van der Elst, Wim, Abad, Ariel Alonso, Coppenolle, Hans, Meyvisch, Paul, and Molenberghs, Geert
- Subjects
- TREATMENT effectiveness, CAUSAL inference, INFORMATION theory
- Abstract
In the meta‐analytic surrogate evaluation framework, the trial‐level coefficient of determination R^2_trial quantifies the strength of the association between the expected causal treatment effects on the surrogate (S) and the true (T) endpoints. Burzykowski and Buyse supplemented this metric of surrogacy with the surrogate threshold effect (STE), which is defined as the minimum value of the causal treatment effect on S for which the predicted causal treatment effect on T exceeds zero. The STE supplements R^2_trial with a more direct clinically interpretable metric of surrogacy. Alonso et al. proposed to evaluate surrogacy based on the strength of the association between the individual (rather than expected) causal treatment effects on S and T. In the current paper, the individual‐level surrogate threshold effect (ISTE) is introduced in the setting where S and T are normally distributed variables. ISTE is defined as the minimum value of the individual causal treatment effect on S for which the lower limit of the prediction interval around the individual causal treatment effect on T exceeds zero. The newly proposed methodology is applied in a case study, and it is illustrated that ISTE has an appealing clinical interpretation. The R package surrogate implements the methodology and a web appendix (supporting information) that details how the analyses can be conducted in practice is provided. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
34. Decomposition analysis as a framework for understanding heterogeneity of treatment effects in non‐randomized health care studies.
- Subjects
- TREATMENT effectiveness, PROPENSITY score matching, MEDICAL care, DUMMY variables
- Abstract
This paper uses the decomposition framework from the economics literature to examine the statistical structure of treatment effects estimated with observational data compared to those estimated from randomized studies. It begins with the estimation of treatment effects using a dummy variable in regression models and then presents the decomposition method from economics, which estimates separate regression models for the comparison groups and recovers the treatment effect using bootstrapping methods. This method shows that the overall treatment effect is a weighted average of structural relationships of patient features with outcomes within each treatment arm and differences in the distributions of these features across the arms. In large randomized trials, it is assumed that the distribution of features across arms is very similar. Importantly, randomization not only balances observed features but also unobserved ones. Applying high‐dimensional balancing methods such as propensity score matching to the observational data causes the distributional terms of the decomposition model to be eliminated, but unobserved features may still not be balanced in the observational data. Finally, a correction for non‐random selection into the treatment groups is introduced via a switching regime model. Theoretically, the treatment effect estimates obtained from this model should be the same as those from a randomized trial. However, there are significant challenges in identifying instrumental variables that are necessary for estimating such models. At a minimum, decomposition models are useful tools for understanding the relationship between treatment effects estimated from observational versus randomized data. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
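The decomposition referred to in the entry above is, in its simplest two-group linear form, the Oaxaca–Blinder decomposition: fit a separate regression in each arm and split the mean outcome difference into a part due to different covariate distributions and a part due to different coefficients (the structural, or treatment, part). A hedged sketch with simulated data follows; the bootstrap step the abstract mentions is indicated only in a comment to keep the example short.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated observational data: treated subjects have somewhat different covariates.
n = 2000
treated = rng.binomial(1, 0.5, n)
x = rng.normal(0.3 * treated, 1.0, n)                       # covariate imbalance across groups
y = 1.0 + 0.8 * x + 0.5 * treated + rng.normal(0, 1, n)     # true treatment effect = 0.5

def ols(X, y):
    # Least-squares coefficients with an intercept column prepended.
    X1 = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

b1 = ols(x[treated == 1], y[treated == 1])                  # arm-specific regressions
b0 = ols(x[treated == 0], y[treated == 0])
xbar1 = np.array([1.0, x[treated == 1].mean()])
xbar0 = np.array([1.0, x[treated == 0].mean()])

gap = y[treated == 1].mean() - y[treated == 0].mean()
covariate_part = (xbar1 - xbar0) @ b0                       # due to different covariate distributions
structural_part = xbar1 @ (b1 - b0)                         # due to different coefficients (treatment)

print(f"raw mean difference : {gap: .3f}")
print(f"covariate part      : {covariate_part: .3f}")
print(f"structural part     : {structural_part: .3f}")
# In practice the two parts would be bootstrapped to obtain standard errors,
# as the abstract describes.
```

Because each arm's regression reproduces its own mean exactly, the two parts add up to the raw mean difference, which is the bookkeeping identity the decomposition framework rests on.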
35. Improving precision and power in randomized trials with a two‐stage study design: Stratification using clustering method.
- Author
- Ye, Xuan, Lu, Nelson, and Xu, Yunling
- Subjects
- EXPERIMENTAL design, RANDOMIZED controlled trials, TREATMENT effectiveness
- Abstract
In a randomized controlled trial (RCT), it is possible to improve precision and power and reduce the sample size by appropriately adjusting for baseline covariates. There are multiple statistical methods to adjust for prognostic baseline covariates, such as ANCOVA. In this paper, we propose a clustering‐based stratification method for adjusting for prognostic baseline covariates. Clusters (strata) are formed based only on the prognostic baseline covariates, not on outcome data or treatment assignment. Therefore, the clustering procedure can be completed prior to the availability of outcome data. The treatment effect is estimated in each cluster, and the overall treatment effect is derived by combining all cluster‐specific treatment effect estimates. The proposed implementation of the procedure is described, and simulation studies and an example are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
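Following the description in the entry above, here is a hedged sketch of the general idea: cluster subjects on prognostic baseline covariates only (k-means here), estimate the treatment effect within each cluster, and combine the cluster-specific estimates with inverse-variance weights. The clustering algorithm, the number of clusters, and the weighting are illustrative choices, not necessarily those proposed by the authors.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Simulated RCT: two prognostic covariates, 1:1 randomization, continuous outcome.
n = 1000
X = rng.normal(size=(n, 2))                                  # baseline covariates only
treat = rng.binomial(1, 0.5, n)
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * treat + rng.normal(0, 1, n)

# Step 1: form strata from covariates alone (no outcome, no treatment labels),
# so this step can be done before outcome data are available.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Step 2: estimate the treatment effect in each cluster, then combine by
# inverse-variance weighting.
est, var = [], []
for k in np.unique(labels):
    yk, tk = y[labels == k], treat[labels == k]
    d = yk[tk == 1].mean() - yk[tk == 0].mean()
    v = yk[tk == 1].var(ddof=1) / (tk == 1).sum() + yk[tk == 0].var(ddof=1) / (tk == 0).sum()
    est.append(d)
    var.append(v)

w = 1.0 / np.array(var)
overall = np.sum(w * np.array(est)) / w.sum()
se = np.sqrt(1.0 / w.sum())

naive = y[treat == 1].mean() - y[treat == 0].mean()
print(f"unadjusted estimate : {naive:.3f}")
print(f"stratified estimate : {overall:.3f} (SE {se:.3f})")
```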
36. On implementing Jeffreys' substitution likelihood for Bayesian inference concerning the medians of unknown distributions.
- Author
- Grieve, Andrew P.
- Subjects
- BAYESIAN field theory, STATISTICAL models, BAYESIAN analysis, PARAMETRIC modeling, STATISTICAL sampling
- Abstract
When statisticians are uncertain as to which parametric statistical model to use to analyse experimental data, they will often resort to a non‐parametric approach. The purpose of this paper is to provide insight into a simple approach to take when the appropriate parametric model is unclear and a Bayesian analysis is planned. I introduce an approximate, or substitution, likelihood, first proposed by Harold Jeffreys in 1939, and show how to implement the approach combined with both a non‐informative and an informative prior to provide a random sample from the posterior distribution of the median of the unknown distribution. The first example I use to demonstrate the approach is a within‐patient bioequivalence design; I then show how to extend the approach to a parallel‐group design. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
37. Sample size re‐estimation in Phase 2 dose‐finding: Conditional power versus Bayesian predictive power.
- Author
- Liu, Qingyang, Hu, Guanyu, Ye, Binqi, Wang, Susan, and Wu, Yaoshi
- Subjects
- SAMPLE size (Statistics), FALSE positive error, CLINICAL trials
- Abstract
Unblinded sample size re‐estimation (SSR) is often planned in a clinical trial when there is large uncertainty about the true treatment effect. For proof‐of‐concept (PoC) in a Phase II dose‐finding study, a contrast test can be adopted to leverage information from all treatment groups. In this article, we propose two‐stage SSR designs using frequentist conditional power (CP) and Bayesian predictive power (PP) for both single and multiple contrast tests. The Bayesian SSR can be implemented under a wide range of prior settings to incorporate different prior knowledge. Taking the adaptivity into account, all type I errors of the final analyses in this paper are rigorously protected. Simulation studies are carried out to demonstrate the advantages of unblinded SSR in multi‐arm trials. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
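For a single-contrast, normal-endpoint case, the difference between the two interim metrics named in the entry above can be shown with a short simulation: conditional power fixes an assumed treatment effect for the remainder of the trial, while Bayesian predictive power averages over the posterior of the effect given the interim data. This is a generic illustration with hypothetical numbers, not the SSR decision rules of the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Hypothetical interim situation: n1 of n_total per-arm subjects observed,
# known outcome variance, one-sided alpha.
n1, n_total, sigma, alpha = 50, 100, 1.0, 0.025
delta_hat1 = 0.18                                  # observed interim effect estimate
z_crit = stats.norm.ppf(1 - alpha)
se_final = sigma * np.sqrt(2 / n_total)
se_stage2 = sigma * np.sqrt(2 / (n_total - n1))

def final_z(delta_hat2):
    # Pool interim and second-stage estimates into the final effect estimate.
    delta_final = (n1 * delta_hat1 + (n_total - n1) * delta_hat2) / n_total
    return delta_final / se_final

n_sim = 200_000

# Conditional power: assume the true effect equals the interim estimate.
d2_cp = rng.normal(delta_hat1, se_stage2, n_sim)
cp = (final_z(d2_cp) > z_crit).mean()

# Bayesian predictive power: draw the true effect from its posterior given the
# interim data (flat prior => posterior N(delta_hat1, 2*sigma^2/n1)), then simulate.
delta_post = rng.normal(delta_hat1, sigma * np.sqrt(2 / n1), n_sim)
d2_pp = rng.normal(delta_post, se_stage2)
pp = (final_z(d2_pp) > z_crit).mean()

print(f"conditional power (effect fixed at interim estimate): {cp:.3f}")
print(f"Bayesian predictive power (flat prior)              : {pp:.3f}")
```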
38. Statistical modeling approaches for the comparison of dissolution profiles.
- Author
- Pourmohamad, Tony and Ng, Hon Keung Tony
- Subjects
- STATISTICAL models, MONTE Carlo method, WIENER processes, PARAMETER estimation, DRUG development
- Abstract
Dissolution studies are a fundamental component of pharmaceutical drug development, yet many studies rely upon the f1 and f2 model‐independent approach, which is not capable of accounting for uncertainty in parameter estimation when comparing dissolution profiles. In this paper, we deal with the issue of uncertainty quantification by proposing several model‐dependent approaches for assessing the similarity of two dissolution profiles. We take a statistical modeling approach and allow the dissolution data to be modeled using either a Dirichlet distribution, a gamma process model, or a Wiener process model. These parametric forms are shown to be reasonable assumptions that are capable of modeling dissolution data well. Furthermore, based on a given statistical model, we are able to use the f1 difference factor and f2 similarity factor to test the equivalence of two dissolution profiles via bootstrap confidence intervals. Illustrations highlighting the success of our methods are provided for both Monte Carlo simulation studies and real dissolution data sets. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
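For reference, the model-independent f1 difference and f2 similarity factors mentioned in the entry above have standard closed forms. The sketch below computes them for a hypothetical pair of dissolution profiles and adds a naive bootstrap interval for f2; the bootstrap here simply resamples units' profiles and is only a simplified stand-in for the model-based bootstrap intervals proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def f1(ref, test):
    """f1 difference factor: 100 * sum|R - T| / sum R (per-time-point profile means)."""
    return 100.0 * np.sum(np.abs(ref - test)) / np.sum(ref)

def f2(ref, test):
    """f2 similarity factor: 50 * log10(100 / sqrt(1 + mean squared difference))."""
    msd = np.mean((ref - test) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

# Hypothetical percent-dissolved measurements: 12 units x 4 time points per product.
ref_units = np.clip(rng.normal([35, 55, 75, 90], 3.0, size=(12, 4)), 0, 100)
test_units = np.clip(rng.normal([30, 52, 73, 89], 3.0, size=(12, 4)), 0, 100)

print("f1 =", round(f1(ref_units.mean(0), test_units.mean(0)), 2))
print("f2 =", round(f2(ref_units.mean(0), test_units.mean(0)), 2))

# Naive bootstrap interval for f2: resample units within each product.
boot = []
for _ in range(2000):
    r = ref_units[rng.integers(0, 12, 12)].mean(0)
    t = test_units[rng.integers(0, 12, 12)].mean(0)
    boot.append(f2(r, t))
print("f2 90% bootstrap CI:", np.round(np.percentile(boot, [5, 95]), 2))
```

An f2 of 50 or more is the conventional similarity threshold, which is why quantifying the uncertainty around the point estimate matters.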
39. Calculation of confidence intervals for a finite population size.
- Author
- Julious, Steven A.
- Subjects
- CONFIDENCE intervals, POPULATION
- Abstract
For any estimate of response, confidence intervals are important as they help quantify a plausible range of values for the population response. However, there may be instances in clinical research when the population size is finite, but we wish to take a sample from the population and make inference from this sample. Instances with a fixed population size include a clinical audit of patient records, or a clinical trial in which a researcher checks for transcription errors against patient notes. In this paper, we describe how confidence intervals can be calculated for a finite population. These confidence intervals are narrower than confidence intervals that treat the sample as coming from an infinite population. For the extreme case where a 100% sample of the population is taken, there is no sampling error and the estimate is simply the population response. The methods in the paper are described using a case study from clinical data management. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
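The narrowing described in the entry above comes from the finite population correction (FPC). A hedged sketch for a proportion follows; the exact interval construction in the paper may differ, but the FPC factor sqrt((N - n)/(N - 1)) is the standard device, and it shrinks to zero when the whole population is sampled.

```python
import math
from scipy import stats

def finite_population_ci(successes, n, N, conf=0.95):
    """Normal-approximation CI for a proportion with the finite population correction."""
    p = successes / n
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    fpc = math.sqrt((N - n) / (N - 1)) if N > 1 else 0.0
    se = math.sqrt(p * (1 - p) / n) * fpc
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical audit: 20% of sampled records contain a transcription error.
# Compare a finite population of 500 records, an effectively infinite population,
# and a 100% sample of the finite population (interval collapses to a point).
for n, N in [(200, 500), (200, 10**9), (500, 500)]:
    p, lo, hi = finite_population_ci(successes=int(0.2 * n), n=n, N=N)
    print(f"n={n:4d}, N={N:>10}: {p:.3f} ({lo:.3f}, {hi:.3f})")
```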
40. On kernel machine learning for propensity score estimation under complex confounding structures.
- Author
- Zou, Baiming, Mi, Xinlei, Tighe, Patrick J., Koch, Gary G., and Zou, Fei
- Subjects
- MACHINE learning, ELECTRONIC health records, PHYSICIANS, ALGORITHMS, POSTOPERATIVE pain
- Abstract
Post‐marketing data offer rich information and cost‐effective resources for physicians and policy‐makers to address critical scientific questions in clinical practice. However, the complex confounding structures (e.g., nonlinear and nonadditive interactions) embedded in these observational data often pose major analytical challenges for proper analysis to draw valid conclusions. Furthermore, often made available as electronic health records (EHRs), these data are usually massive, with hundreds of thousands of observational records, which introduces additional computational challenges. In this paper, for comparative effectiveness analysis, we propose a statistically robust yet computationally efficient propensity score (PS) approach to adjust for the complex confounding structures. Specifically, we propose a kernel‐based machine learning method for flexible and robust PS modeling to obtain valid PS estimates from observational data with complex confounding structures. The estimated propensity score is then used in the second‐stage analysis to obtain a consistent average treatment effect estimate. An empirical variance estimator based on the bootstrap is adopted. A split‐and‐merge algorithm is further developed to reduce the computational workload of the proposed method for big data and to obtain a valid variance estimator of the average treatment effect estimate as a by‐product. As shown by extensive numerical studies and an application to comparative effectiveness analysis of postoperative pain EHR data, the proposed approach consistently outperforms other competing methods, demonstrating its practical utility. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
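The two-stage idea in the entry above, flexible kernel-based PS estimation followed by a treatment effect estimate that uses the estimated PS, can be sketched generically with off-the-shelf tools. The sketch below uses a Gaussian-process classifier for the PS and inverse-probability weighting for the average treatment effect, with a small bootstrap for the variance; it is a schematic stand-in, not the authors' specific kernel method or their split-and-merge algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(8)

# Simulated observational data with a nonlinear, nonadditive confounding structure.
n = 300
X = rng.normal(size=(n, 2))
p_treat = 1 / (1 + np.exp(-(np.sin(X[:, 0]) + X[:, 0] * X[:, 1])))
a = rng.binomial(1, p_treat)
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 1.0 * a + rng.normal(0, 1, n)   # true ATE = 1.0

def ipw_ate(X, a, y):
    # Stage 1: kernel-based (Gaussian process) propensity score model.
    ps = GaussianProcessClassifier(kernel=1.0 * RBF(1.0)).fit(X, a).predict_proba(X)[:, 1]
    ps = np.clip(ps, 0.05, 0.95)                    # trim extreme weights
    # Stage 2: inverse-probability-weighted average treatment effect.
    return np.mean(a * y / ps) - np.mean((1 - a) * y / (1 - ps))

ate = ipw_ate(X, a, y)

# Small bootstrap for an empirical variance estimate (kept tiny for speed).
boots = []
for _ in range(20):
    idx = rng.integers(0, n, n)
    boots.append(ipw_ate(X[idx], a[idx], y[idx]))

print(f"IPW ATE estimate: {ate:.3f}  (bootstrap SE ~ {np.std(boots, ddof=1):.3f})")
```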
41. Simulation‐based sample size calculations of marginal proportional means models for recurrent events with competing risks.
- Author
- Furberg, Julie Funch, Andersen, Per Kragh, Scheike, Thomas, and Ravn, Henrik
- Abstract
In randomised controlled trials, the outcome of interest could be recurrent events, such as hospitalisations for heart failure. If mortality rates are non‐negligible, both recurrent events and competing terminal events need to be addressed when formulating the estimand and statistical analysis is no longer trivial. In order to design future trials with primary recurrent event endpoints with competing risks, it is necessary to be able to perform power calculations to determine sample sizes. This paper introduces a simulation‐based approach for power estimation based on a proportional means model for recurrent events and a proportional hazards model for terminal events. The simulation procedure is presented along with a discussion of what the user needs to specify to use the approach. The method is flexible and based on marginal quantities which are easy to specify. However, the method introduces a lack of a certain type of dependence. This is explored in a sensitivity analysis which suggests that the power is robust in spite of that. Data from a randomised controlled trial, LEADER, is used as the basis for generating data for a future trial. Finally, potential power gains of recurrent event methods as opposed to first event methods are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Statistical approaches to evaluate in vitro dissolution data against proposed dissolution specifications.
- Author
-
Li, Fasheng, Nickerson, Beverly, Van Alstine, Les, and Wang, Ke
- Abstract
In vitro dissolution testing is a regulatory-required critical quality measure for solid-dose pharmaceutical drug products. Setting acceptance criteria that meet compendial requirements is necessary for a product to be filed and approved for marketing. Statistical approaches for analyzing dissolution data, setting specifications, and visualizing results can vary according to product requirements, company practices, and scientific judgement. This paper provides a general description of the steps taken in evaluating and setting in vitro dissolution specifications at release and on stability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
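The abstract above describes evaluating dissolution data against proposed specifications; one typical building block is estimating the probability that a lot passes a staged compendial-style acceptance test. The sketch below is a hedged Monte Carlo illustration assuming a USP <711>-style three-stage scheme and an assumed lot mean and standard deviation; it is not the procedure described in the paper.
```python
# Hedged sketch: Monte Carlo estimate of the probability that a lot passes a
# staged compendial-style dissolution test (assumed scheme: stage 1 = 6 units
# each >= Q+5; stage 2 = 12 units with mean >= Q and none < Q-15; stage 3 =
# 24 units with mean >= Q, at most 2 units < Q-15 and none < Q-25).
import numpy as np

rng = np.random.default_rng(2)

def pass_stage(units_mean=82.0, units_sd=4.0, Q=80.0):
    u = rng.normal(units_mean, units_sd, 24)          # % dissolved for 24 units
    s1, s2, s3 = u[:6], u[:12], u
    if np.all(s1 >= Q + 5):
        return 1                                      # passes at stage 1
    if s2.mean() >= Q and np.all(s2 >= Q - 15):
        return 2                                      # passes at stage 2
    if s3.mean() >= Q and np.sum(s3 < Q - 15) <= 2 and np.all(s3 >= Q - 25):
        return 3                                      # passes at stage 3
    return 0                                          # fails

results = np.array([pass_stage() for _ in range(20000)])
print("P(pass overall)    =", np.mean(results > 0))
print("P(pass at stage 1) =", np.mean(results == 1))
```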
43. Estimation of the odds ratio from multi‐stage randomized trials.
- Author
-
Cao, Shiwei and Jung, Sin‐Ho
- Abstract
A multi‐stage design for a randomized trial allows early termination of the study when, during its course, the experimental arm is found to have low or high efficacy compared to the control. In such a trial, the early stopping rule results in bias in the maximum likelihood estimator of the treatment effect. We consider multi‐stage randomized trials with a dichotomous outcome, such as treatment response, and investigate the estimation of the odds ratio. Typically, randomized phase II cancer clinical trials have two‐stage designs with small sample sizes, which makes estimation of the odds ratio more challenging. In this paper, we evaluate several existing estimation methods for the odds ratio and propose bias‐corrected estimators for randomized multi‐stage trials, including randomized phase II cancer clinical trials. Through numerical studies, the proposed estimators are shown to have smaller bias and smaller mean squared error overall. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
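To make the bias problem in the abstract above concrete, here is a hedged simulation sketch of a two-stage randomized trial with an assumed futility stopping rule. It shows how the unadjusted (continuity-corrected) maximum likelihood log odds ratio drifts away from the truth under early stopping; it does not implement the paper's bias-corrected estimators, and all design parameters are illustrative.
```python
# Hedged sketch: bias of the naive log odds ratio in a two-stage trial
# with an assumed futility stopping rule after stage 1.
import numpy as np

rng = np.random.default_rng(3)
p_ctrl, p_exp = 0.2, 0.35                 # assumed true response rates
true_log_or = np.log(p_exp / (1 - p_exp) / (p_ctrl / (1 - p_ctrl)))
n1, n2 = 20, 20                           # per-arm sizes at stages 1 and 2

def log_or(x_e, n_e, x_c, n_c):
    # 0.5 continuity correction keeps the estimate finite.
    return np.log((x_e + 0.5) / (n_e - x_e + 0.5)) - \
           np.log((x_c + 0.5) / (n_c - x_c + 0.5))

estimates = []
for _ in range(20000):
    x_e1, x_c1 = rng.binomial(n1, p_exp), rng.binomial(n1, p_ctrl)
    if x_e1 - x_c1 <= 0:                  # stop early for futility
        estimates.append(log_or(x_e1, n1, x_c1, n1))
    else:                                 # continue to stage 2 and pool
        x_e2, x_c2 = rng.binomial(n2, p_exp), rng.binomial(n2, p_ctrl)
        estimates.append(log_or(x_e1 + x_e2, n1 + n2, x_c1 + x_c2, n1 + n2))

print(f"true log OR     : {true_log_or:.3f}")
print(f"mean naive log OR: {np.mean(estimates):.3f}  (shifted by early stopping)")
```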
44. A propensity score‐integrated approach for leveraging external data in a randomized controlled trial with time‐to‐event endpoints.
- Author
-
Chen, Wei‐Chen, Lu, Nelson, Wang, Chenguang, Li, Heng, Song, Changhong, Tiwari, Ram, Xu, Yunling, and Yue, Lilly Q.
- Abstract
In a randomized controlled trial (RCT) with a time‐to‐event endpoint, some commonly used statistical tests for various aspects of survival differences, such as the survival probability at a fixed time point, the survival function up to a specific time point, and the restricted mean survival time, may not be directly applicable when external data are leveraged to augment one arm (or both arms) of the RCT. In this paper, we propose a propensity score‐integrated approach to extend such tests when external data are leveraged. Simulation studies are conducted to evaluate the operating characteristics of three propensity score‐integrated statistical tests, and an illustrative example is given to demonstrate how these proposed procedures can be implemented. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
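A hedged sketch of one building block behind the kinds of tests discussed above: a weighted Kaplan-Meier estimator and the restricted mean survival time (RMST) computed from it, with external control subjects down-weighted. The constant weight of 0.5, the data, and the comparison are purely illustrative assumptions; the actual propensity-score-derived weighting and the proposed tests are not reproduced here.
```python
# Hedged sketch: weighted Kaplan-Meier and RMST, applied to a control arm that
# mixes trial controls (weight 1) with down-weighted external controls.
import numpy as np

def weighted_km(time, event, weight):
    """Weighted Kaplan-Meier survival curve evaluated at the observed event times."""
    order = np.argsort(time)
    time, event, weight = time[order], event[order], weight[order]
    at_risk = np.cumsum(weight[::-1])[::-1]            # weighted number at risk
    times, surv, s = [], [], 1.0
    for t in np.unique(time[event == 1]):
        d = weight[(time == t) & (event == 1)].sum()   # weighted events at t
        n = at_risk[np.searchsorted(time, t)]          # weighted at risk just before t
        s *= 1.0 - d / n
        times.append(t); surv.append(s)
    return np.array(times), np.array(surv)

def rmst(times, surv, tau):
    """Area under the step survival curve up to tau."""
    grid = np.concatenate([[0.0], times[times <= tau], [tau]])
    step = np.concatenate([[1.0], surv[times <= tau]])
    return np.sum(step * np.diff(grid))

rng = np.random.default_rng(4)
t_trt = rng.exponential(12.0, 100); e_trt = (t_trt < 24).astype(int)
t_ctl = rng.exponential(9.0, 150);  e_ctl = (t_ctl < 24).astype(int)
w_ctl = np.concatenate([np.ones(100), np.full(50, 0.5)])   # last 50 are external

tau = 18.0
d_rmst = rmst(*weighted_km(np.minimum(t_trt, 24), e_trt, np.ones(100)), tau) - \
         rmst(*weighted_km(np.minimum(t_ctl, 24), e_ctl, w_ctl), tau)
print(f"Weighted RMST difference up to tau={tau}: {d_rmst:.2f}")
```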
45. Digital twins and Bayesian dynamic borrowing: Two recent approaches for incorporating historical control data.
- Author
-
Burman, Carl‐Fredrik, Hermansson, Erik, Bock, David, Franzén, Stefan, and Svensson, David
- Abstract
Recent years have seen increasing interest in incorporating external control data in the design and evaluation of randomized clinical trials (RCTs). This may decrease costs and shorten inclusion times by reducing sample sizes. For small populations with limited recruitment, this can be especially important. Bayesian dynamic borrowing (BDB) has been a popular choice, as it claims to protect against potential prior-data conflict. Digital twins (DT) have recently been proposed as another method to utilize historical data. DT, also known as PROCOVA™, is based on constructing a prognostic score from historical control data, typically using machine learning. This score is included in a pre‐specified ANCOVA as the primary analysis of the RCT. The promise of this idea is increased power while guaranteeing strong type 1 error control. In this paper, we apply analytic derivations and simulations to analyze and discuss examples of these two approaches. We conclude that BDB and DT, although similar in scope, have fundamental differences which need to be considered in the specific application. Inflation of the type 1 error is a serious issue for BDB, while more evidence of a tangible benefit of DT in real RCTs is needed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
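A hedged sketch of a PROCOVA-style workflow as summarized in the abstract above: fit a prognostic model on historical control data, score the RCT participants, and include the prognostic score as a covariate in a pre-specified ANCOVA. The data generation, the random forest choice, and all names are assumptions for illustration; this is not the PROCOVA™ implementation.
```python
# Hedged sketch: prognostic covariate adjustment using a model trained on
# historical controls, followed by a pre-specified ANCOVA on the RCT data.
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)

def outcome(X, treat=0.0):
    # Assumed outcome model: nonlinear prognostic signal + additive treatment effect 0.5.
    return 2.0 * np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.5 * treat + rng.normal(0, 1, len(X))

# Historical control data -> prognostic model.
X_hist = rng.normal(size=(2000, 5))
prog_model = RandomForestRegressor(n_estimators=200, random_state=0)
prog_model.fit(X_hist, outcome(X_hist))

# Current RCT: randomize, observe outcomes, score participants.
n = 300
X_rct = rng.normal(size=(n, 5))
treat = rng.binomial(1, 0.5, n)
y = outcome(X_rct, treat)
prog_score = prog_model.predict(X_rct)

# Pre-specified ANCOVA: outcome ~ treatment + prognostic score.
design = sm.add_constant(np.column_stack([treat, prog_score]))
fit = sm.OLS(y, design).fit()
print("adjusted treatment estimate:", round(fit.params[1], 3), " SE:", round(fit.bse[1], 3))

# Unadjusted comparison (typically a larger standard error).
fit0 = sm.OLS(y, sm.add_constant(treat.astype(float))).fit()
print("unadjusted estimate:        ", round(fit0.params[1], 3), " SE:", round(fit0.bse[1], 3))
```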
46. A dynamic power prior approach to non‐inferiority trials for normal means.
- Author
-
Mariani, Francesco, De Santis, Fulvio, and Gubbiotti, Stefania
- Subjects
- *
INVESTIGATIONAL therapies , *GOVERNMENT agencies , *INFORMATION resources management , *NEW trials , *MARKOV chain Monte Carlo - Abstract
Non‐inferiority trials compare new experimental therapies to standard ones (active control). In these experiments, historical information on the control treatment is often available. This makes Bayesian methodology appealing, since it provides a natural way to exploit information from past studies. In the present paper, we suggest the use of previous data for constructing the prior distribution of the control effect parameter. Specifically, we consider a dynamic power prior that allows the level of borrowing to be discounted in the presence of heterogeneity between past and current control data. The discount parameter of the prior is based on the Hellinger distance between the posterior distributions of the control parameter based, respectively, on historical and current data. We develop the methodology for comparing normal means and handle unknown variances using MCMC. We also provide a simulation study to analyze the proposed test in terms of frequentist size and power, as is usually requested by regulatory agencies. Finally, we compare the proposal with some existing methods and illustrate an application to a real case study. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
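A hedged numerical sketch of the dynamic power prior idea in the abstract above, simplified to a normal mean with known variance: the discount parameter is derived from the closed-form Hellinger distance between the posteriors based on historical and current control data separately. The flat initial prior, the mapping a0 = 1 − H, and all numbers are assumptions made here for illustration; the paper treats the variance as unknown and uses MCMC.
```python
# Hedged sketch: Hellinger-distance-based discount for a power prior on a
# normal control mean with known variance (conjugate updating).
import numpy as np

def hellinger_normal(m1, s1, m2, s2):
    """Hellinger distance between N(m1, s1^2) and N(m2, s2^2), closed form."""
    bc = np.sqrt(2 * s1 * s2 / (s1**2 + s2**2)) * \
         np.exp(-(m1 - m2)**2 / (4 * (s1**2 + s2**2)))
    return np.sqrt(1 - bc)

sigma = 10.0                                  # assumed known outcome SD
n0, ybar0 = 200, 51.0                         # historical control summary (illustrative)
n,  ybar  = 100, 48.0                         # current control summary (illustrative)

# Posteriors under a flat prior, each data source on its own.
H = hellinger_normal(ybar0, sigma / np.sqrt(n0), ybar, sigma / np.sqrt(n))
a0 = 1.0 - H                                  # dynamic discount in [0, 1] (assumed mapping)

# Power-prior posterior: historical likelihood raised to a0, i.e. effective size a0*n0.
post_prec = (a0 * n0 + n) / sigma**2
post_mean = (a0 * n0 * ybar0 + n * ybar) / (a0 * n0 + n)
print(f"Hellinger distance: {H:.3f}, discount a0: {a0:.3f}")
print(f"Posterior for control mean: N({post_mean:.2f}, {1/np.sqrt(post_prec):.2f}^2)")
```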
47. Frequentist and Bayesian tolerance intervals for setting specification limits for left‐censored gamma distributed drug quality attributes.
- Author
-
Montes, Richard O.
- Subjects
- *
STANDARD deviations , *GAMMA distributions , *MAXIMUM likelihood statistics , *PARAMETER estimation - Abstract
Tolerance intervals from quality attribute measurements are used to establish specification limits for drug products. Some attribute measurements may be below the reporting limit, that is, left‐censored. When the data have a long right-skewed tail, a gamma distribution may be applicable. This paper compares maximum likelihood estimation (MLE) and Bayesian methods for estimating the shape and scale parameters of censored gamma distributions and for calculating tolerance intervals under varying sample sizes and extents of censoring. The noninformative reference prior and the maximal data information prior (MDIP) are used to compare the impact of prior choice. The metrics used are bias and root mean square error for parameter estimation, and average length and confidence coefficient for tolerance interval evaluation. It is shown that the Bayesian method using the reference prior performs better overall than MLE for the scenarios evaluated. When the sample size is small, the Bayesian method using the MDIP yields overly conservative, too-wide tolerance intervals that are an unsuitable basis for specification setting. The metrics for all methods worsened with increasing extent of censoring but improved with increasing sample size, as expected. This study demonstrates that although MLE is relatively simple and available in user‐friendly statistical software, depending on the scenario it falls short of producing tolerance limits that accurately and precisely maintain the stated confidence. The Bayesian method using the noninformative reference prior, although computationally intensive and requiring considerable statistical programming, produces tolerance limits that are practically useful for specification setting. Real‐world examples are provided to illustrate the findings from the simulation study. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
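A hedged sketch of the likelihood machinery behind the abstract above: maximum likelihood estimation of a gamma distribution from left-censored data, where values below the reporting limit contribute the log CDF at that limit. The simulated data and the naive plug-in percentile at the end are illustrative only; they are not the MLE or Bayesian tolerance-interval procedures compared in the paper.
```python
# Hedged sketch: MLE for a gamma distribution with left-censoring at a
# reporting limit, via the censored log-likelihood.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(7)
shape_true, scale_true, rl = 2.0, 1.5, 1.0           # rl = reporting limit (assumed)
x = rng.gamma(shape_true, scale_true, 50)
observed = x[x >= rl]
n_cens = np.sum(x < rl)                              # only the censored count is known

def neg_loglik(log_params):
    a, s = np.exp(log_params)                        # keep shape, scale > 0
    ll = stats.gamma.logpdf(observed, a, scale=s).sum()
    ll += n_cens * stats.gamma.logcdf(rl, a, scale=s)  # censored contributions
    return -ll

fit = optimize.minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
a_hat, s_hat = np.exp(fit.x)
print(f"MLE shape: {a_hat:.2f}, scale: {s_hat:.2f}")

# Naive plug-in 99th percentile; a tolerance limit would additionally account
# for estimation uncertainty (e.g., bootstrap or a Bayesian posterior).
print("plug-in 99th percentile:", round(stats.gamma.ppf(0.99, a_hat, scale=s_hat), 2))
```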
48. Conditional power and information fraction calculations at an interim analysis for random coefficient models.
- Author
-
Lewis, Sandra A., Carroll, Kevin J., DeVries, Todd, and Barratt, Jonathan
- Subjects
- *
IGA glomerulonephritis , *PANEL analysis , *GLOMERULAR filtration rate - Abstract
Random coefficient (RC) models are commonly used in clinical trials to estimate the rate of change over time in longitudinal data. Using a surrogate endpoint for accelerated approval together with a confirmatory longitudinal endpoint to demonstrate clinical benefit is a strategy implemented across various therapeutic areas, including immunoglobulin A nephropathy. Understanding conditional power (CP) and information fraction calculations for RC models may help in the design of clinical trials as well as provide support for the confirmatory endpoint at the time of accelerated approval. This paper provides calculation methods, with practical examples, for determining CP at an interim analysis for an RC model with longitudinal data, such as estimated glomerular filtration rate (eGFR) assessments used to estimate the rate of change in eGFR (the eGFR slope). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
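A hedged sketch of a generic conditional power calculation under the standard Brownian-motion (B-value) approximation, given an interim Z-statistic and an information fraction. For a random coefficient model the information fraction would itself be derived from the longitudinal design and variance components, which is not reproduced here; the numbers below are illustrative assumptions rather than the paper's worked examples.
```python
# Hedged sketch: conditional power from the interim Z-statistic, the information
# fraction, and an assumed drift (expected final Z) under the B-value approximation.
from scipy.stats import norm

def conditional_power(z_interim, info_frac, expected_final_z, alpha=0.025):
    """P(final Z > z_{1-alpha} | interim data), with drift = expected final Z."""
    b = z_interim * info_frac**0.5                  # B-value at the interim
    drift = expected_final_z * (1.0 - info_frac)    # expected remaining increment
    return norm.sf((norm.ppf(1 - alpha) - b - drift) / (1.0 - info_frac)**0.5)

# Example: interim Z = 1.6 at 50% information; the design effect corresponds to
# an expected final Z of 2.8 (about 80% unconditional power).
print("CP under design effect :", round(conditional_power(1.6, 0.5, 2.8), 3))
# CP under the current trend: plug in the interim estimate of the drift.
print("CP under current trend :", round(conditional_power(1.6, 0.5, 1.6 / 0.5**0.5), 3))
```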
49. Propensity score‐incorporated adaptive design approaches when incorporating real‐world data.
- Author
-
Lu, Nelson, Chen, Wei‐Chen, Li, Heng, Song, Changhong, Tiwari, Ram, Wang, Chenguang, Xu, Yunling, and Yue, Lilly Q.
- Subjects
- *
SAMPLE size (Statistics) - Abstract
The propensity score‐integrated composite likelihood (PSCL) method is one method that can be used to design and analyze a study in which real‐world data (RWD) are leveraged to augment a prospectively designed clinical study. In PSCL, strata are formed based on propensity scores (PS) so that subjects from the current study and the RWD sources who are similar in terms of baseline covariates are placed in the same stratum; the composite likelihood method is then applied to down‐weight the information from the RWD. While PSCL was originally proposed for a fixed design, it can be extended to an adaptive design framework with the purpose of either potentially claiming early success or re‐estimating the sample size. In this paper, a general strategy tailored to the features of PSCL is proposed. For claiming early success, Fisher's combination test is utilized. When the purpose is to re‐estimate the sample size, the proposed procedure is based on the test proposed by Cui, Hung, and Wang. The implementation of these two procedures is demonstrated via an example. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
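A hedged sketch of the two generic combination devices named in the abstract above: Fisher's combination of independent stage-wise p-values, and a Cui-Hung-Wang-type weighted Z statistic whose weights are fixed by the originally planned stage sizes so that they do not change after sample size re-estimation. The stage-wise p-values and Z-values would come from the PSCL analyses of each stage; the values here are illustrative.
```python
# Hedged sketch: Fisher's combination test and a CHW-type weighted Z statistic
# with pre-fixed stage weights.
import numpy as np
from scipy import stats

def fisher_combination(p1, p2, alpha=0.025):
    """Reject if -2(ln p1 + ln p2) exceeds the chi-square(4 df) critical value."""
    stat = -2.0 * (np.log(p1) + np.log(p2))
    return stat, stat > stats.chi2.ppf(1 - alpha, df=4)

def chw_weighted_z(z1, z2, n1_planned, n2_planned, alpha=0.025):
    """Weighted Z with weights sqrt(n1/N), sqrt(n2/N) fixed at the original design."""
    w1 = np.sqrt(n1_planned / (n1_planned + n2_planned))
    w2 = np.sqrt(n2_planned / (n1_planned + n2_planned))
    z = w1 * z1 + w2 * z2
    return z, z > stats.norm.ppf(1 - alpha)

print(fisher_combination(0.04, 0.03))                          # illustrative p-values
print(chw_weighted_z(1.5, 2.1, n1_planned=100, n2_planned=100))  # illustrative Z-values
```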
50. Treatment effect measures under nonproportional hazards.
- Author
-
Snapinn, Steven, Jiang, Qi, and Ke, Chunlei
- Subjects
SURVIVAL rate ,TREATMENT effectiveness ,SURVIVAL analysis (Biometry) ,HAZARDS - Abstract
In a clinical trial with a time‐to‐event endpoint the treatment effect can be measured in various ways. Under proportional hazards all reasonable measures (such as the hazard ratio and the difference in restricted mean survival time) are consistent in the following sense: take any control group survival distribution such that the hazard rate remains above zero; if there is no benefit by one measure there is no benefit by any measure, and as the magnitude of treatment benefit increases by one measure it increases by all measures. Under nonproportional hazards, however, survival curves can cross, and the direction of the effect for any pair of measures can be inconsistent. In this paper we critically evaluate a variety of treatment effect measures in common use and identify flaws with them. In particular, we demonstrate that a treatment's benefit has two distinct and independent dimensions, which can be measured by the difference in the survival rate at the end of follow‐up and the difference in restricted mean survival time, and that commonly used measures do not adequately capture both dimensions. We demonstrate that a generalized hazard difference, which can be estimated by the difference in exposure‐adjusted subject incidence rates, captures both dimensions, and that its inverse, the number of patient‐years of follow‐up that results in one fewer event (the NYNT), is an easily interpretable measure of the magnitude of clinical benefit. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
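A hedged arithmetic sketch of the two quantities highlighted at the end of the abstract above: the exposure-adjusted subject incidence rate in each arm, their difference as an estimate of the generalized hazard difference, and its inverse, the NYNT. The event counts and patient-years are made-up illustrative numbers, not data from the paper.
```python
# Hedged sketch: exposure-adjusted incidence rates, their difference, and the NYNT.
def eair(first_events, patient_years):
    """Exposure-adjusted incidence rate: subjects with an event per patient-year."""
    return first_events / patient_years

rate_ctrl = eair(first_events=120, patient_years=900.0)   # control arm (illustrative)
rate_trt  = eair(first_events=90,  patient_years=950.0)   # treatment arm (illustrative)

ghd = rate_ctrl - rate_trt      # generalized hazard difference (per patient-year)
nynt = 1.0 / ghd                # patient-years of follow-up per event avoided
print(f"EAIR control: {rate_ctrl:.3f}, EAIR treatment: {rate_trt:.3f}")
print(f"Generalized hazard difference: {ghd:.3f} per patient-year; NYNT: {nynt:.1f}")
```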