25 results for "Morris, Tim P"
Search Results
2. Categorisation of continuous covariates for stratified randomisation: How should we adjust?
- Author
- Sullivan, Thomas R., Morris, Tim P., Kahan, Brennan C., Cuthbert, Alana R., and Yelland, Lisa N.
- Subjects
- MEDICAL periodicals, MISSING data (Statistics), LOGISTIC regression analysis, SPLINES, MEDICAL publishing
- Abstract
To obtain valid inference following stratified randomisation, treatment effects should be estimated with adjustment for stratification variables. Stratification sometimes requires categorisation of a continuous prognostic variable (eg, age), which raises the question: should adjustment be based on randomisation categories or underlying continuous values? In practice, adjustment for randomisation categories is more common. We reviewed trials published in general medical journals and found none of the 32 trials that stratified randomisation based on a continuous variable adjusted for continuous values in the primary analysis. Using data simulation, this article evaluates the performance of different adjustment strategies for continuous and binary outcomes where the covariate‐outcome relationship (via the link function) was either linear or non‐linear. Given the utility of covariate adjustment for addressing missing data, we also considered settings with complete or missing outcome data. Analysis methods included linear or logistic regression with no adjustment for the stratification variable, adjustment for randomisation categories, or adjustment for continuous values assuming a linear covariate‐outcome relationship or allowing for non‐linearity using fractional polynomials or restricted cubic splines. Unadjusted analysis performed poorly throughout. Adjustment approaches that misspecified the underlying covariate‐outcome relationship were less powerful and, alarmingly, biased in settings where the stratification variable predicted missing outcome data. Adjustment for randomisation categories tends to involve the highest degree of misspecification, and so should be avoided in practice. To guard against misspecification, we recommend use of flexible approaches such as fractional polynomials and restricted cubic splines when adjusting for continuous stratification variables in randomised trials. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
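The spline-based adjustment this abstract recommends is beyond a toy sketch, but its core point, that randomisation categories discard covariate information, can be illustrated with hypothetical data (all numbers below are invented; plain-Python R² calculations stand in for the paper's regression models):

```python
import random
import statistics

random.seed(1)

# Hypothetical illustration: outcome depends linearly on age; compare how much
# outcome variance is explained by the continuous covariate vs. the coarse
# randomisation categories (age < 50 vs. >= 50).
n = 2000
age = [random.uniform(18, 80) for _ in range(n)]
y = [0.5 * a + random.gauss(0, 5) for a in age]

def r2_continuous(x, y):
    # R^2 from simple least-squares regression of y on x
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    return sxy ** 2 / (sxx * syy)

def r2_categorical(x, y, cut=50):
    # R^2 from a two-level factor: between-group sum of squares / total
    my = statistics.fmean(y)
    groups = {False: [], True: []}
    for xi, yi in zip(x, y):
        groups[xi >= cut].append(yi)
    ss_between = sum(len(g) * (statistics.fmean(g) - my) ** 2 for g in groups.values())
    ss_total = sum((yi - my) ** 2 for yi in y)
    return ss_between / ss_total

print(r2_continuous(age, y), r2_categorical(age, y))
```

Under this linear data-generating mechanism the binary categorisation always explains less outcome variance than the continuous values, which is one intuition for the power loss the simulation study reports.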
3. Comment on Oberman & Vink: Should we fix or simulate the complete data in simulation studies evaluating missing data methods?
- Author
- Morris, Tim P., White, Ian R., Cro, Suzie, Bartlett, Jonathan W., Carpenter, James R., and Pham, Tra My
- Abstract
For simulation studies that evaluate methods of handling missing data, we argue that generating partially observed data by fixing the complete data and repeatedly simulating the missingness indicators is a superficially attractive idea but only rarely appropriate to use. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
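The comment's argument can be made concrete with a hypothetical sketch: if the complete data are fixed across repetitions, every complete-data quantity is constant, so the simulation conditions on a single realised dataset (invented data; not the authors' own example):

```python
import random
import statistics

random.seed(2)

n_reps, n = 200, 100

# Strategy A: regenerate the complete data in every repetition.
regen_means = []
for _ in range(n_reps):
    complete = [random.gauss(0, 1) for _ in range(n)]
    regen_means.append(statistics.fmean(complete))

# Strategy B: fix one complete dataset; only the missingness indicators
# would vary across repetitions, so complete-data quantities never do.
fixed = [random.gauss(0, 1) for _ in range(n)]
fixed_means = [statistics.fmean(fixed) for _ in range(n_reps)]

# Between-repetition variability of the complete-data mean under each strategy
print(statistics.pstdev(regen_means), statistics.pstdev(fixed_means))
```

The fixed-data strategy shows zero between-repetition variability in complete-data quantities, so any performance claims are conditional on that one realisation, which is one reason fixing is only rarely appropriate.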
4. Phases of methodological research in biostatistics—Building the evidence base for new methods.
- Author
- Heinze, Georg, Boulesteix, Anne‐Laure, Kammer, Michael, Morris, Tim P., and White, Ian R.
- Abstract
Although new biostatistical methods are published at a very high rate, many of these developments are not trustworthy enough to be adopted by the scientific community. We propose a framework to think about how a piece of methodological work contributes to the evidence base for a method. Similar to the well‐known phases of clinical research in drug development, we propose to define four phases of methodological research. These four phases cover (I) proposing a new methodological idea while providing, for example, logical reasoning or proofs, (II) providing empirical evidence, first in a narrow target setting, then (III) in an extended range of settings and for various outcomes, accompanied by appropriate application examples, and (IV) investigations that establish a method as sufficiently well‐understood to know when it is preferred over others and when it is not; that is, its pitfalls. We suggest basic definitions of the four phases to provoke thought and discussion rather than devising an unambiguous classification of studies into phases. Too many methodological developments finish before phase III/IV, but we give two examples with references. Our concept rebalances the emphasis to studies in phases III and IV, that is, carefully planned method comparison studies and studies that explore the empirical properties of existing methods in a wider range of problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. The marginality principle revisited: Should "higher‐order" terms always be accompanied by "lower‐order" terms in regression analyses?
- Author
- Morris, Tim P., van Smeden, Maarten, and Pham, Tra My
- Abstract
The marginality principle guides analysts to avoid omitting lower‐order terms from models in which higher‐order terms are included as covariates. Lower‐order terms are viewed as "marginal" to higher‐order terms. We consider how this principle applies to three cases: regression models that may include the ratio of two measured variables; polynomial transformations of a measured variable; and factorial arrangements of defined interventions. For each case, we show that which terms or transformations are considered to be lower‐order, and therefore marginal, depends on the scale of measurement, which is frequently arbitrary. Understanding the implications of this point leads to an intuitive understanding of the curse of dimensionality. We conclude that the marginality principle may be useful to analysts in some specific cases but caution against invoking it as a context‐free recipe. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
6. A new approach to evaluating loop inconsistency in network meta‐analysis.
- Author
- Turner, Rebecca M., Band, Tim, Morris, Tim P., Fisher, David J., Higgins, Julian P. T., Carpenter, James R., and White, Ian R.
- Subjects
- DEGREES of freedom, MULTIPLE comparisons (Statistics)
- Abstract
In network meta‐analysis, studies evaluating multiple treatment comparisons are modeled simultaneously, and estimation is informed by a combination of direct and indirect evidence. Network meta‐analysis relies on an assumption of consistency, meaning that direct and indirect evidence should agree for each treatment comparison. Here we propose new local and global tests for inconsistency and demonstrate their application to three example networks. Because inconsistency is a property of a loop of treatments in the network meta‐analysis, we locate the local test in a loop. We define a model with one inconsistency parameter that can be interpreted as loop inconsistency. The model builds on the existing ideas of node‐splitting and side‐splitting in network meta‐analysis. To provide a global test for inconsistency, we extend the model across multiple independent loops with one degree of freedom per loop. We develop a new algorithm for identifying independent loops within a network meta‐analysis. Our proposed models handle treatments symmetrically, locate inconsistency in loops rather than in nodes or treatment comparisons, and are invariant to choice of reference treatment, making the results less dependent on model parameterization. For testing global inconsistency in network meta‐analysis, our global model uses fewer degrees of freedom than the existing design‐by‐treatment interaction approach and has the potential to increase power. To illustrate our methods, we fit the models to three network meta‐analyses varying in size and complexity. Local and global tests for inconsistency are performed and we demonstrate that the global model is invariant to choice of independent loops. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
7. Two‐stage or not two‐stage? That is the question for IPD meta‐analysis projects.
- Author
- Riley, Richard D., Ensor, Joie, Hattle, Miriam, Papadimitropoulou, Katerina, and Morris, Tim P.
- Subjects
- RESEARCH personnel, META-analysis, RANDOM effects model, BEST practices, TREATMENT effectiveness
- Abstract
Individual participant data meta‐analysis (IPDMA) projects obtain, check, harmonise and synthesise raw data from multiple studies. When undertaking the meta‐analysis, researchers must decide between a two‐stage or a one‐stage approach. In a two‐stage approach, the IPD are first analysed separately within each study to obtain aggregate data (e.g., treatment effect estimates and standard errors); then, in the second stage, these aggregate data are combined in a standard meta‐analysis model (e.g., common‐effect or random‐effects). In a one‐stage approach, the IPD from all studies are analysed in a single step using an appropriate model that accounts for clustering of participants within studies and, potentially, between‐study heterogeneity (e.g., a general or generalised linear mixed model). The best approach to take is debated in the literature, and so here we provide clearer guidance for a broad audience. Both approaches are important tools for IPDMA researchers and neither are a panacea. If most studies in the IPDMA are small (few participants or events), a one‐stage approach is recommended due to using a more exact likelihood. However, in other situations, researchers can choose either approach, carefully following best practice. Some previous claims recommending to always use a one‐stage approach are misleading, and the two‐stage approach will often suffice for most researchers. When differences do arise between the two approaches, often it is caused by researchers using different modelling assumptions or estimation methods, rather than using one or two stages per se. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
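As a hypothetical illustration of the two-stage approach described above (invented IPD; a common-effect inverse-variance second stage, not a full IPDMA workflow):

```python
import math
import random
import statistics

random.seed(3)

# Hypothetical IPD from 5 trials: (treatment indicator, outcome) per participant.
true_effect = 1.0
studies = []
for _ in range(5):
    n = random.randint(40, 120)
    data = []
    for i in range(n):
        trt = i % 2
        data.append((trt, random.gauss(true_effect * trt, 2.0)))
    studies.append(data)

# Stage 1: estimate the treatment effect (difference in means) and its
# standard error separately within each study.
stage1 = []
for data in studies:
    t = [y for trt, y in data if trt == 1]
    c = [y for trt, y in data if trt == 0]
    est = statistics.fmean(t) - statistics.fmean(c)
    se = math.sqrt(statistics.variance(t) / len(t) + statistics.variance(c) / len(c))
    stage1.append((est, se))

# Stage 2: common-effect (inverse-variance) meta-analysis of the aggregates.
weights = [1 / se**2 for _, se in stage1]
pooled = sum(w * est for w, (est, _) in zip(weights, stage1)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
print(round(pooled, 3), round(pooled_se, 3))
```

A one-stage analysis would instead fit a single mixed model to all rows of `studies` at once; as the abstract notes, with studies this large the two approaches typically agree.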
8. Handling misclassified stratification variables in the analysis of randomised trials with continuous outcomes.
- Author
- Yelland, Lisa N., Louise, Jennie, Kahan, Brennan C., Morris, Tim P., Lee, Katherine J., and Sullivan, Thomas R.
- Subjects
- TREATMENT effectiveness, SUBGROUP analysis (Experimental design)
- Abstract
Many trials use stratified randomisation, where participants are randomised within strata defined by one or more baseline covariates. While it is important to adjust for stratification variables in the analysis, the appropriate method of adjustment is unclear when stratification variables are affected by misclassification and hence some participants are randomised in the incorrect stratum. We conducted a simulation study to compare methods of adjusting for stratification variables affected by misclassification in the analysis of continuous outcomes when all or only some stratification errors are discovered, and when the treatment effect or treatment‐by‐covariate interaction effect is of interest. The data were analysed using linear regression with no adjustment, adjustment for the strata used to perform the randomisation (randomisation strata), adjustment for the strata if all errors are corrected (true strata), and adjustment for the strata after some errors are discovered and corrected (updated strata). The unadjusted model performed poorly in all settings. Adjusting for the true strata was optimal, while the relative performance of adjusting for the randomisation strata or the updated strata varied depending on the setting. As the true strata are unlikely to be known with certainty in practice, we recommend using the updated strata for adjustment and performing subgroup analyses, provided the discovery of errors is unlikely to depend on treatment group, as expected in blinded trials. Greater transparency is needed in the reporting of stratification errors and how they were addressed in the analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
9. Estimands for factorial trials.
- Author
- Kahan, Brennan C., Morris, Tim P., Goulão, Beatriz, and Carpenter, James
- Abstract
Factorial trials offer an efficient method to evaluate multiple interventions in a single trial; however, the use of additional treatments can obscure research objectives, leading to inappropriate analytical methods and interpretation of results. We define a set of estimands for factorial trials, and describe a framework for applying these estimands, with the aim of clarifying trial objectives and ensuring appropriate primary and sensitivity analyses are chosen. This framework is intended for use in factorial trials where the intent is to conduct "two‐trials‐in‐one" (ie, to separately evaluate the effects of treatments A and B), and comprises four steps: (i) specifying how additional treatment(s) (eg, treatment B) will be handled in the estimand, and how intercurrent events affecting the additional treatment(s) will be handled; (ii) designating the appropriate factorial estimator as the primary analysis strategy; (iii) evaluating the interaction to assess the plausibility of the assumptions underpinning the factorial estimator; and (iv) performing a sensitivity analysis using an appropriate multiarm estimator to evaluate to what extent departures from the underlying assumption of no interaction may affect results. We show that adjustment for other factors is necessary for noncollapsible effect measures (such as odds ratio), and through a trial re‐analysis we find that failure to consider the estimand could lead to inappropriate interpretation of results. We conclude that careful use of the estimands framework clarifies research objectives and reduces the risk of misinterpretation of trial results, and should become a standard part of both the protocol and reporting of factorial trials. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
10. Editorial for the special collection "Towards neutral comparison studies in methodological research".
- Author
- Boulesteix, Anne‐Laure, Baillie, Mark, Edelmann, Dominic, Held, Leonhard, Morris, Tim P., and Sauerbrei, Willi
- Published
- 2024
- Full Text
- View/download PDF
11. A comparison of methods for analyzing a binary composite endpoint with partially observed components in randomized controlled trials.
- Author
- Pham, Tra My, White, Ian R., Kahan, Brennan C., Morris, Tim P., Stanworth, Simon J., and Forbes, Gordon
- Subjects
- RANDOMIZED controlled trials, MISSING data (Statistics), DIRECTLY observed therapy
- Abstract
Composite endpoints are commonly used to define primary outcomes in randomized controlled trials. A participant may be classified as meeting the endpoint if they experience an event in one or several components (eg, a favorable outcome based on a composite of being alive and attaining negative culture results in trials assessing tuberculosis treatments). Partially observed components that are not missing simultaneously complicate the analysis of the composite endpoint. An intuitive strategy frequently used in practice for handling missing values in the components is to derive the values of the composite endpoint from observed components when possible, and exclude from analysis participants whose composite endpoint cannot be derived. Alternatively, complete record analysis (CRA) (excluding participants with any missing components) or multiple imputation (MI) can be used. We compare a set of methods for analyzing a composite endpoint with partially observed components mathematically and by simulation, and apply these methods in a reanalysis of a published trial (TOPPS). We show that the derived composite endpoint can be missing not at random even when the components are missing completely at random. Consequently, the treatment effect estimated from the derived endpoint is biased while CRA results without the derived endpoint are valid. Missing at random mechanisms require MI of the components. We conclude that, although superficially attractive, deriving the composite endpoint from observed components should generally be avoided. Despite the potential risk of imputation model mis‐specification, MI of missing components is the preferred approach in this study setting. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
12. Sensitivity analysis for clinical trials with missing continuous outcome data using controlled multiple imputation: A practical guide.
- Author
- Cro, Suzie, Morris, Tim P., Kenward, Michael G., and Carpenter, James R.
- Subjects
- SENSITIVITY analysis, CLINICAL trials, EXPECTED returns, REFERENCE values, DATA analysis, STATISTICS, RESEARCH, RESEARCH methodology, MEDICAL cooperation, EVALUATION research, COMPARATIVE studies, RESEARCH funding
- Abstract
Missing data due to loss to follow-up or intercurrent events are unintended, but unfortunately inevitable in clinical trials. Since the true values of missing data are never known, it is necessary to assess the impact of untestable and unavoidable assumptions about any unobserved data in sensitivity analysis. This tutorial provides an overview of controlled multiple imputation (MI) techniques and a practical guide to their use for sensitivity analysis of trials with missing continuous outcome data. These include δ- and reference-based MI procedures. In δ-based imputation, an offset term, δ, is typically added to the expected value of the missing data to assess the impact of unobserved participants having a worse or better response than those observed. Reference-based imputation draws imputed values with some reference to observed data in other groups of the trial, typically in other treatment arms. We illustrate the accessibility of these methods using data from a pediatric eczema trial and a chronic headache trial and provide Stata code to facilitate adoption. We discuss issues surrounding the choice of δ in δ-based sensitivity analysis. We also review the debate on variance estimation within reference-based analysis and justify the use of Rubin's variance estimator in this setting since, as we elaborate on further within, it provides information-anchored inference. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
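A crude, hypothetical sketch of the δ-based idea: add an offset δ to values imputed for unobserved participants and watch the estimate shift by roughly δ times the missing fraction. This is a stand-in for proper MI with Rubin's rules, not the tutorial's Stata implementation:

```python
import random
import statistics

random.seed(4)

# Hypothetical single-arm sketch: 80 observed outcomes, 20 missing.
observed = [random.gauss(10, 2) for _ in range(80)]
n_missing = 20

def completed_mean(delta, n_imp=50):
    # Impute missing outcomes by resampling observed values (a crude stand-in
    # for a proper imputation model), add the offset delta to each imputed
    # value, and average the completed-data mean over n_imp imputed datasets.
    means = []
    for _ in range(n_imp):
        imputed = [random.choice(observed) + delta for _ in range(n_missing)]
        means.append(statistics.fmean(observed + imputed))
    return statistics.fmean(means)

base = completed_mean(delta=0)
shifted = completed_mean(delta=-2)  # assume unobserved respond 2 units worse
print(round(base, 2), round(shifted, 2))
```

With 20% missing, δ = -2 moves the estimate down by about 0.4, illustrating how a range of δ values maps out sensitivity to departures from the base-case assumption.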
13. One-stage individual participant data meta-analysis models for continuous and binary outcomes: Comparison of treatment coding options and estimation methods.
- Author
- Riley, Richard D., Legha, Amardeep, Jackson, Dan, Morris, Tim P., Ensor, Joie, Snell, Kym I.E., White, Ian R., and Burke, Danielle L.
- Subjects
- TREATMENT effectiveness, DATA modeling, BINARY codes, CONFIDENCE intervals, META-analysis
- Abstract
A one-stage individual participant data (IPD) meta-analysis synthesizes IPD from multiple studies using a general or generalized linear mixed model. This produces summary results (eg, about treatment effect) in a single step, whilst accounting for clustering of participants within studies (via a stratified study intercept, or random study intercepts) and between-study heterogeneity (via random treatment effects). We use simulation to evaluate the performance of restricted maximum likelihood (REML) and maximum likelihood (ML) estimation of one-stage IPD meta-analysis models for synthesizing randomized trials with continuous or binary outcomes. Three key findings are identified. First, for ML or REML estimation of stratified intercept or random intercepts models, a t-distribution based approach generally improves coverage of confidence intervals for the summary treatment effect, compared with a z-based approach. Second, when using ML estimation of a one-stage model with a stratified intercept, the treatment variable should be coded using "study-specific centering" (ie, 1/0 minus the study-specific proportion of participants in the treatment group), as this reduces the bias in the between-study variance estimate (compared with 1/0 and other coding options). Third, REML estimation reduces downward bias in between-study variance estimates compared with ML estimation, and does not depend on the treatment variable coding; for binary outcomes, this requires REML estimation of the pseudo-likelihood, although this may not be stable in some situations (eg, when data are sparse). Two applied examples are used to illustrate the findings. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
14. Using simulation studies to evaluate statistical methods.
- Author
-
Morris, Tim P., White, Ian R., and Crowther, Michael J.
- Abstract
Simulation studies are computer experiments that involve creating data by pseudo-random sampling. A key strength of simulation studies is the ability to understand the behavior of statistical methods because some "truth" (usually some parameter/s of interest) is known from the process of generating the data. This allows us to consider properties of methods, such as bias. While widely used, simulation studies are often poorly designed, analyzed, and reported. This tutorial outlines the rationale for using simulation studies and offers guidance for design, execution, analysis, reporting, and presentation. In particular, this tutorial provides a structured approach for planning and reporting simulation studies, which involves defining aims, data-generating mechanisms, estimands, methods, and performance measures ("ADEMP"); coherent terminology for simulation studies; guidance on coding simulation studies; a critical discussion of key performance measures and their estimation; guidance on structuring tabular and graphical presentation of results; and new graphical presentations. With a view to describing recent practice, we review 100 articles taken from Volume 34 of Statistics in Medicine, which included at least one simulation study and identify areas for improvement. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
15. Population-calibrated multiple imputation for a binary/categorical covariate in categorical regression models.
- Author
-
Pham, Tra My, Carpenter, James R, Morris, Tim P, Wood, Angela M, and Petersen, Irene
- Abstract
Multiple imputation (MI) has become popular for analyses with missing data in medical research. The standard implementation of MI is based on the assumption of data being missing at random (MAR). However, for missing data generated by missing not at random mechanisms, MI performed assuming MAR might not be satisfactory. For an incomplete variable in a given data set, its corresponding population marginal distribution might also be available in an external data source. We show how this information can be readily utilised in the imputation model to calibrate inference to the population by incorporating an appropriately calculated offset termed the "calibrated-δ adjustment." We describe the derivation of this offset from the population distribution of the incomplete variable and show how, in applications, it can be used to closely (and often exactly) match the post-imputation distribution to the population level. Through analytic and simulation studies, we show that our proposed calibrated-δ adjustment MI method can give the same inference as standard MI when data are MAR, and can produce more accurate inference under two general missing not at random missingness mechanisms. The method is used to impute missing ethnicity data in a type 2 diabetes prevalence case study using UK primary care electronic health records, where it results in scientifically relevant changes in inference for non-White ethnic groups compared with standard MI. Calibrated-δ adjustment MI represents a pragmatic approach for utilising available population-level information in a sensitivity analysis to explore potential departures from the MAR assumption. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
16. Individual participant data meta-analysis of continuous outcomes: A comparison of approaches for specifying and estimating one-stage models.
- Author
-
Legha, Amardeep, Riley, Richard D., Ensor, Joie, Snell, Kym I.E., Morris, Tim P., and Burke, Danielle L.
- Abstract
One-stage individual participant data meta-analysis models should account for within-trial clustering, but it is currently debated how to do this. For continuous outcomes modeled using a linear regression framework, two competing approaches are a stratified intercept or a random intercept. The stratified approach involves estimating a separate intercept term for each trial, whereas the random intercept approach assumes that trial intercepts are drawn from a normal distribution. Here, through an extensive simulation study for continuous outcomes, we evaluate the impact of using the stratified and random intercept approaches on statistical properties of the summary treatment effect estimate. Further aims are to compare (i) competing estimation options for the one-stage models, including maximum likelihood and restricted maximum likelihood, and (ii) competing options for deriving confidence intervals (CI) for the summary treatment effect, including the standard normal-based 95% CI, and more conservative approaches of Kenward-Roger and Satterthwaite, which inflate CIs to account for uncertainty in variance estimates. The findings reveal that, for an individual participant data meta-analysis of randomized trials with a 1:1 treatment:control allocation ratio and heterogeneity in the treatment effect, (i) bias and coverage of the summary treatment effect estimate are very similar when using stratified or random intercept models with restricted maximum likelihood, and thus either approach could be taken in practice, (ii) CIs are generally best derived using either a Kenward-Roger or Satterthwaite correction, although occasionally overly conservative, and (iii) if maximum likelihood is required, a random intercept performs better than a stratified intercept model. An illustrative example is provided. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
17. Multiple imputation in Cox regression when there are time-varying effects of covariates.
- Author
-
Keogh, Ruth H. and Morris, Tim P.
- Abstract
In Cox regression, it is important to test the proportional hazards assumption and sometimes of interest in itself to study time-varying effects (TVEs) of covariates. TVEs can be investigated with log hazard ratios modelled as a function of time. Missing data on covariates are common and multiple imputation is a popular approach to handling this to avoid the potential bias and efficiency loss resulting from a "complete-case" analysis. Two multiple imputation methods have been proposed for when the substantive model is a Cox proportional hazards regression: an approximate method (Imputing missing covariate values for the Cox model in Statistics in Medicine (2009) by White and Royston) and a substantive-model-compatible method (Multiple imputation of covariates by fully conditional specification: accommodating the substantive model in Statistical Methods in Medical Research (2015) by Bartlett et al). At present, neither accommodates TVEs of covariates. We extend them to do so for a general form for the TVEs and give specific details for TVEs modelled using restricted cubic splines. Simulation studies assess the performance of the methods under several underlying shapes for TVEs. Our proposed methods give approximately unbiased TVE estimates for binary covariates with missing data, but for continuous covariates, the substantive-model-compatible method performs better. The methods also give approximately correct type I errors in the test for proportional hazards when there is no TVE and gain power to detect TVEs relative to complete-case analysis. Ignoring TVEs at the imputation stage results in biased TVE estimates, incorrect type I errors, and substantial loss of power in detecting TVEs. We also propose a multivariable TVE model selection algorithm. The methods are illustrated using data from the Rotterdam Breast Cancer Study. R code is provided. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
18. Meta-analysis of Gaussian individual patient data: Two-stage or not two-stage?
- Author
-
Morris, Tim P., Fisher, David J., Kenward, Michael G., and Carpenter, James R.
- Abstract
Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
19. Combining fractional polynomial model building with multiple imputation.
- Author
-
Morris, Tim P., White, Ian R., Carpenter, James R., Stanworth, Simon J., and Royston, Patrick
- Subjects
- *
COMPUTER simulation , *MULTIVARIATE analysis , *PROBABILITY theory , *PROGNOSIS , *REGRESSION analysis , *ACQUISITION of data , *STATISTICAL models - Abstract
Multivariable fractional polynomial (MFP) models are commonly used in medical research. The datasets in which MFP models are applied often contain covariates with missing values. To handle the missing values, we describe methods for combining multiple imputation with MFP modelling, considering in turn three issues: first, how to impute so that the imputation model does not favour certain fractional polynomial (FP) models over others; second, how to estimate the FP exponents in multiply imputed data; and third, how to choose between models of differing complexity. Two imputation methods are outlined for different settings. For model selection, methods based on Wald-type statistics and weighted likelihood-ratio tests are proposed and evaluated in simulation studies. The Wald-based method is very slightly better at estimating FP exponents. Type I error rates are very similar for both methods, although slightly less well controlled than analysis of complete records; however, there is potential for substantial gains in power over the analysis of complete records. We illustrate the two methods in a dataset from five trauma registries for which a prognostic model has previously been published, contrasting the selected models with that obtained by analysing the complete records only. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
20. Multiple imputation for an incomplete covariate that is a ratio.
- Author
-
Morris, Tim P., White, Ian R., Royston, Patrick, Seaman, Shaun R., and Wood, Angela M.
- Abstract
We are concerned with multiple imputation of the ratio of two variables, which is to be used as a covariate in a regression analysis. If the numerator and denominator are not missing simultaneously, it seems sensible to make use of the observed variable in the imputation model. One such strategy is to impute missing values for the numerator and denominator, or the log-transformed numerator and denominator, and then calculate the ratio of interest; we call this 'passive' imputation. Alternatively, missing ratio values might be imputed directly, with or without the numerator and/or the denominator in the imputation model; we call this 'active' imputation. In two motivating datasets, one involving body mass index as a covariate and the other involving the ratio of total to high-density lipoprotein cholesterol, we assess the sensitivity of results to the choice of imputation model and, as an alternative, explore fully Bayesian joint models for the outcome and incomplete ratio. Fully Bayesian approaches using Win bugs were unusable in both datasets because of computational problems. In our first dataset, multiple imputation results are similar regardless of the imputation model; in the second, results are sensitive to the choice of imputation model. Sensitivity depends strongly on the coefficient of variation of the ratio's denominator. A simulation study demonstrates that passive imputation without transformation is risky because it can lead to downward bias when the coefficient of variation of the ratio's denominator is larger than about 0.1. Active imputation or passive imputation after log-transformation is preferable. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
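The core hazard with passive imputation on the raw scale can be sketched in a few lines of standard-library Python (an illustrative Jensen's-inequality demonstration, not the paper's simulation): forming a ratio from components filled in at central values systematically understates the mean of the true ratios when the denominator's coefficient of variation is large.

```python
import random

random.seed(1)

# Denominator with a sizeable coefficient of variation (well above 0.1);
# numerator held constant so the true ratio is simply 1/y.
y = [random.lognormvariate(0.0, 0.5) for _ in range(50_000)]

mean_y = sum(y) / len(y)
cv_y = (sum((yi - mean_y) ** 2 for yi in y) / (len(y) - 1)) ** 0.5 / mean_y

mean_ratio = sum(1.0 / yi for yi in y) / len(y)   # target: E[1/Y]
passive_ratio = 1.0 / mean_y                      # ratio built from a central imputed value

# Jensen's inequality: E[1/Y] > 1/E[Y], so the passively constructed
# ratio is biased downward relative to the mean of the true ratios.
```

Imputing on the log scale avoids this because the back-transformed ratio respects the multiplicative structure of the variable.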
21. Analysis of multicentre trials with continuous outcomes: when and how should we account for centre effects?
- Author
-
Kahan, Brennan C and Morris, Tim P
- Abstract
In multicentre trials, randomisation is often carried out using permuted blocks stratified by centre. It has previously been shown that stratification variables used in the randomisation process should be adjusted for in the analysis to obtain correct inference. For continuous outcomes, the two primary methods of accounting for centres are fixed-effects and random-effects models. We discuss the differences in interpretation between these two models and the implications each poses for analysis. We then perform a large simulation study comparing the performance of these analysis methods in a variety of situations. In total, we assessed 378 scenarios. We found that random centre effects performed as well as or better than fixed-effects models in all scenarios. Random centre effects models led to increases in power and precision when the number of patients per centre was small (e.g. 10 patients or less) and, in some scenarios, when there was an imbalance between treatments within centres, either due to the randomisation method or to the distribution of patients across centres. With small sample sizes, random-effects models maintained nominal coverage rates when a degree-of-freedom (DF) correction was used. We assessed the robustness of random-effects models when assumptions regarding the distribution of the centre effects were incorrect and found this had no impact on results. We conclude that random-effects models offer many advantages over fixed-effects models in certain situations and should be used more often in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
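Why centre must be accounted for at all can be shown with a small deterministic example (illustrative only, not the paper's simulation): when treatment allocation is imbalanced across centres that differ in outcome level, a naive pooled comparison is confounded, while a fixed-centre-effects analysis, here implemented via within-centre demeaning (equivalent to including centre indicators), recovers the true effect.

```python
# Two centres differing in baseline outcome; true treatment effect = 1.0.
# Centre A: 8 treated, 2 control; centre B: 2 treated, 8 control.
data = []  # rows of (centre, treatment, outcome), outcomes noise-free for clarity
for centre, baseline, n_treat, n_ctrl in [("A", 10.0, 8, 2), ("B", 0.0, 2, 8)]:
    data += [(centre, 1, baseline + 1.0)] * n_treat
    data += [(centre, 0, baseline)] * n_ctrl

def slope(pairs):
    """Least-squares slope of outcome on treatment indicator."""
    n = len(pairs)
    mt = sum(t for t, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    num = sum((t - mt) * (y - my) for t, y in pairs)
    den = sum((t - mt) ** 2 for t, _ in pairs)
    return num / den

# Naive pooled analysis ignoring centre: confounded by centre-level imbalance.
naive = slope([(t, y) for _, t, y in data])

# Fixed centre effects via within-centre demeaning of treatment and outcome.
demeaned = []
for c in {row[0] for row in data}:
    rows = [(t, y) for cc, t, y in data if cc == c]
    mt = sum(t for t, _ in rows) / len(rows)
    my = sum(y for _, y in rows) / len(rows)
    demeaned += [(t - mt, y - my) for t, y in rows]
adjusted = slope(demeaned)
```

The naive estimate here is 7.0 against a true effect of 1.0; a random-effects model would shrink centre estimates rather than fit them exactly, which is where the paper's power and precision comparisons come in.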
22. Improper analysis of trials randomised using stratified blocks or minimisation.
- Author
-
Kahan, Brennan C. and Morris, Tim P.
- Abstract
Many clinical trials restrict randomisation using stratified blocks or minimisation to balance prognostic factors across treatment groups. It is widely acknowledged in the statistical literature that the subsequent analysis should reflect the design of the study, and any stratification or minimisation variables should be adjusted for in the analysis. However, a review of recent general medical literature showed only 14 of 41 eligible studies reported adjusting their primary analysis for stratification or minimisation variables. We show that balancing treatment groups using stratification leads to correlation between the treatment groups. If this correlation is ignored and an unadjusted analysis is performed, standard errors for the treatment effect will be biased upwards, resulting in 95% confidence intervals that are too wide, type I error rates that are too low and a reduction in power. Conversely, an adjusted analysis will give valid inference. We explore the extent of this issue using simulation for continuous, binary and time-to-event outcomes where treatment is allocated using stratified block randomisation or minimisation. Copyright © 2011 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
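The mechanism the abstract describes, model-based standard errors that are too large when stratified randomisation is ignored, can be reproduced with a small standard-library simulation (an illustrative sketch under simplified assumptions, not the paper's simulation code): strata that strongly predict the outcome, balanced allocation within each stratum, and a null treatment effect.

```python
import math
import random

random.seed(42)

def one_trial(n_per_cell=10, stratum_effect=5.0):
    """One trial: 2 prognostic strata, stratified blocks force 50:50 within each."""
    arms = {0: [], 1: []}
    for stratum in (0, 1):
        for arm in (0, 1):
            for _ in range(n_per_cell):
                # Null treatment effect; outcome driven by stratum plus noise.
                arms[arm].append(stratum * stratum_effect + random.gauss(0, 1))
    y0, y1 = arms[0], arms[1]
    n0, n1 = len(y0), len(y1)
    m0, m1 = sum(y0) / n0, sum(y1) / n1
    # Unadjusted two-sample (pooled-variance) standard error.
    s2 = (sum((y - m0) ** 2 for y in y0) + sum((y - m1) ** 2 for y in y1)) / (n0 + n1 - 2)
    return m1 - m0, math.sqrt(s2 * (1 / n0 + 1 / n1))

diffs, ses = [], []
for _ in range(2000):
    d, se = one_trial()
    diffs.append(d)
    ses.append(se)

mean_d = sum(diffs) / len(diffs)
empirical_sd = (sum((d - mean_d) ** 2 for d in diffs) / (len(diffs) - 1)) ** 0.5
avg_model_se = sum(ses) / len(ses)

# Stratification cancels the stratum effects from the estimate, so its true
# variability (empirical_sd) is far below the unadjusted model-based SE,
# which still absorbs between-stratum variance into its residual term.
```

The gap between `avg_model_se` and `empirical_sd` is exactly what produces over-wide confidence intervals and deflated type I error rates in the unadjusted analysis.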
23. Redressing the balance: Covariate adjustment in randomised trials.
- Author
-
Kahan, Brennan C. and Morris, Tim P.
- Subjects
- *
FALSE positive error , *STATISTICAL accuracy , *BODY mass index - Abstract
In randomised controlled trials, information on patient characteristics (covariates) is collected before randomisation of participants. Covariates can be used in the randomisation process, for example using stratified randomisation. Some of the covariates included were age, pre-pregnancy BMI and parity.1 [Extracted from the article]
- Published
- 2021
- Full Text
- View/download PDF
24. A note regarding 'random effects' - authors' response.
- Author
-
Kahan, Brennan C. and Morris, Tim P.
- Published
- 2014
- Full Text
- View/download PDF
25. The Consequences of Randomizing Schools Rather Than Children.
- Author
-
Kahan, Brennan C. and Morris, Tim P.
- Subjects
- *
DRUG therapy for asthma , *EXPERIMENTAL design , *SCHOOL nursing - Abstract
A letter to the editor is presented in response to the article "A Randomized Controlled Trial of a Public Health Nurse Delivered Asthma Program to Elementary Schools" by L. Cicutto, T. To and S. Murphy, which appeared in a 2013 issue of the journal.
- Published
- 2014
- Full Text
- View/download PDF