14 results for "Borm GF"
Search Results
2. Small studies are more heterogeneous than large ones: a meta-meta-analysis.
- Author
- IntHout J, Ioannidis JP, Borm GF, and Goeman JJ
- Subjects
- Bayes Theorem, Humans, Models, Theoretical, Clinical Trials as Topic, Epidemiologic Methods, Research Design
- Abstract
Objectives: Between-study heterogeneity plays an important role in random-effects models for meta-analysis. Most clinical trials are small, and small trials are often associated with larger effect sizes. We empirically evaluated whether there is also a relationship between trial size and heterogeneity (τ). Study Design and Setting: We selected the first meta-analysis per intervention review of the Cochrane Database of Systematic Reviews Issues 2009-2013 with a dichotomous (n = 2,009) or continuous (n = 1,254) outcome. The association between estimated τ and trial size was evaluated across meta-analyses using regression and within meta-analyses using a Bayesian approach. Small trials were predefined as those having standard errors (SEs) over 0.2 standardized effects. Results: Most meta-analyses were based on few (median 4) trials. Within the same meta-analysis, the small-study τS² was larger than the large-study τL² [average ratio 2.11; 95% credible interval (1.05, 3.87) for dichotomous and 3.11 (2.00, 4.78) for continuous meta-analyses]. The imprecision of τS was larger than that of τL: median SE 0.39 vs. 0.20 for dichotomous and 0.22 vs. 0.13 for continuous small-study and large-study meta-analyses. Conclusion: Heterogeneity between small studies is larger than between larger studies. The large imprecision with which τ is estimated in a typical meta-analysis of small studies is a further reason for concern, and sensitivity analyses are recommended. (Copyright © 2015 Elsevier Inc. All rights reserved.)
- Published
- 2015
- Full Text
- View/download PDF
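The small-trial cutoff above (SEs over 0.2 standardized effects) can be made concrete. A minimal sketch, assuming the usual large-sample standard error of a standardized mean difference; the function names are ours, not the paper's:

```python
import math

def smd_se(n1: int, n2: int, d: float = 0.0) -> float:
    """Approximate SE of a standardized mean difference (Cohen's d form)."""
    return math.sqrt(1 / n1 + 1 / n2 + d**2 / (2 * (n1 + n2)))

def is_small_trial(n1: int, n2: int, d: float = 0.0, cutoff: float = 0.2) -> bool:
    """Small trial per the paper's predefined criterion: SE > 0.2 standardized effects."""
    return smd_se(n1, n2, d) > cutoff

# With equal arms and a negligible effect, SE ~ sqrt(4/N), so trials under
# roughly 100 subjects in total fall on the "small" side of the 0.2 cutoff.
print(is_small_trial(40, 40))   # True  (SE ~ 0.224)
print(is_small_trial(60, 60))   # False (SE ~ 0.183)
```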
3. Studies with group treatments required special power calculations, allocation methods, and statistical analyses.
- Author
- Faes MC, Reelick MF, Perry M, Olde Rikkert MG, and Borm GF
- Subjects
- Algorithms, Clinical Trials as Topic, Humans, Prognosis, Random Allocation, Randomized Controlled Trials as Topic, Research Design, Group Structure, Psychotherapy, Group
- Abstract
Objective: In some trials, the intervention is delivered to individuals in groups, for example, groups that exercise together. The group structure of such trials has to be taken into consideration in the analysis and has an impact on the power of the trial. Our aim was to provide optimal methods for the design and analysis of such trials. Study Design and Setting: We described various treatment allocation methods and presented a new allocation algorithm: optimal batchwise minimization (OBM). We carried out a simulation study to evaluate the performance of unrestricted randomization, stratification, permuted block randomization, deterministic minimization, and OBM. Furthermore, we described appropriate analysis methods and derived a formula for the required sample size. Results: Stratification, deterministic minimization, and OBM had considerably less risk of imbalance than unrestricted randomization and permuted block randomization. Furthermore, OBM led to unpredictable treatment allocation. The sample size calculation and the analysis of the study must be based on a multilevel model that takes the group structure of the trial into account. Conclusion: Trials evaluating interventions that are carried out in subsequent groups require adapted treatment allocation, power calculation, and analysis methods. From the perspective of obtaining overall balance, minimization is the method of choice. When the number of prognostic factors is low, stratification is an excellent alternative. OBM leads to better balance within the batches, but it is more complicated; it is probably most worthwhile in trials with many prognostic factors. From the perspective of predictability, a treatment allocation method such as OBM, which allocates several subjects at the same time, is superior to other methods because it leads to the lowest possible predictability. (Copyright © 2012 Elsevier Inc. All rights reserved.)
- Published
- 2012
- Full Text
- View/download PDF
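The abstract states that the sample size must be based on a multilevel model but does not reproduce the derived formula. As a rough stand-in, the textbook design-effect correction 1 + (m − 1)·ICC, which is an assumption here and not necessarily the paper's exact expression, shows how group size m and intraclass correlation inflate the required sample size:

```python
import math

def group_treatment_n(n_individual: int, group_size: int, icc: float) -> int:
    """Inflate an individually randomized sample size by the design effect
    1 + (m - 1) * ICC to account for clustering within treatment groups."""
    deff = 1 + (group_size - 1) * icc
    return math.ceil(n_individual * deff)

# E.g., 128 subjects per arm for an individually delivered treatment becomes
# ~186 per arm when the treatment is delivered in groups of 10 with ICC = 0.05.
print(group_treatment_n(128, 10, 0.05))  # 186
```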
4. Studywise minimization: a treatment allocation method that improves balance among treatment groups and makes allocation unpredictable.
- Author
- Perry M, Faes M, Reelick MF, Olde Rikkert MG, and Borm GF
- Subjects
- Algorithms, Female, Humans, Male, Prognosis, Random Allocation, Selection Bias, Patient Selection, Randomized Controlled Trials as Topic methods
- Abstract
Objectives: In randomized controlled trials with many potential prognostic factors, serious imbalance among treatment groups regarding these factors can occur. Minimization methods can improve balance but increase the possibility of selection bias. We described and evaluated the performance of a new method of treatment allocation, called studywise minimization, that can avoid imbalance by chance and reduce selection bias. Study Design and Setting: The studywise minimization algorithm consists of three steps: (1) calculate the imbalance for all possible allocations, (2) list all allocations with minimum imbalance, and (3) randomly select one of the allocations with minimum imbalance. We carried out a simulation study to compare the performance of studywise minimization with three other allocation methods: randomization, biased-coin minimization, and deterministic minimization. Performance was measured by calculating maximal and average imbalance as a percentage of the group size. Results: Independent of trial size and number of prognostic factors, the risk of serious imbalance was highest in randomization and absent in studywise minimization. The largest differences among the allocation methods regarding the risk of imbalance were found in small trials. Conclusion: Studywise minimization is particularly useful in small trials, where it eliminates the risk of serious imbalance without introducing selection bias. (Copyright © 2010 Elsevier Inc. All rights reserved.)
- Published
- 2010
- Full Text
- View/download PDF
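The three-step algorithm above translates directly into code. A brute-force sketch, feasible only for illustration-sized trials; the imbalance measure used here (summed absolute per-level headcount differences) is one plausible choice, since the abstract reports imbalance as a percentage of group size rather than prescribing a metric:

```python
import random
from itertools import combinations

def studywise_minimization(factors, seed=None):
    """Allocate all subjects of a trial at once (two arms, 50:50 split).

    factors: one tuple per subject with that subject's prognostic-factor levels.
    Returns a list of 0/1 treatment labels, one per subject.
    """
    rng = random.Random(seed)
    n = len(factors)
    n_factors = len(factors[0])
    levels = [sorted({s[f] for s in factors}) for f in range(n_factors)]
    best_allocs, best_score = [], None
    # Step 1: compute the imbalance of every possible 50:50 allocation.
    for arm_a in combinations(range(n), n // 2):
        in_a = set(arm_a)
        score = 0
        for f in range(n_factors):
            for lev in levels[f]:
                a = sum(1 for i in in_a if factors[i][f] == lev)
                b = sum(1 for i in range(n) if i not in in_a and factors[i][f] == lev)
                score += abs(a - b)  # per-level headcount difference between arms
        # Step 2: keep the list of allocations attaining the minimum imbalance.
        if best_score is None or score < best_score:
            best_allocs, best_score = [in_a], score
        elif score == best_score:
            best_allocs.append(in_a)
    # Step 3: randomly select one minimum-imbalance allocation.
    chosen = rng.choice(best_allocs)
    return [1 if i in chosen else 0 for i in range(n)]

# Six subjects, two binary prognostic factors (e.g., sex and age group):
subjects = [(0, 0), (0, 1), (1, 0), (1, 1), (0, 0), (1, 1)]
print(studywise_minimization(subjects, seed=1))  # one minimum-imbalance split
```

The enumeration in step 1 is exponential in the number of subjects, so a practical implementation would need pruning; the sketch only mirrors the algorithm as stated.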
5. A simple method for calculating power based on a prior trial.
- Author
- Borm GF, Bloem BR, Munneke M, and Teerenstra S
- Subjects
- Analysis of Variance, Confidence Intervals, Controlled Clinical Trials as Topic, Data Interpretation, Statistical, Humans, Logistic Models, Sample Size, Clinical Trials as Topic statistics & numerical data, Probability, Research Design standards
- Abstract
Objective: When an investigator wants to base the power of a planned clinical trial on the outcome of another trial, the latter study may not have been reported in sufficient detail to allow this. For example, when the outcome is a change from baseline, the power calculation requires the standard deviation of the difference, and it frequently happens that only the standard deviations of the baseline and the follow-up measurements are reported. Also, when a complex analysis or an analysis with covariates is planned, the power calculation may be difficult or impossible. The objective was to develop a method to determine the power of a trial based on minimal information from a previous (reference) trial. Study Design and Setting: We investigated the power calculation for a range of statistical methods, including the t-test, analysis of covariance, analysis of variance, linear regression, logistic regression, Poisson regression, the Wilcoxon test, and the logrank test. Results: We developed a method to calculate the power of a trial based solely on the P-value or the confidence interval of the outcome of the reference study. Conclusion: A power calculation based on an earlier similar trial requires only its P-value.
- Published
- 2010
- Full Text
- View/download PDF
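For a simple two-group comparison, the core of the approach reduces to one identity: the reference trial's two-sided P-value implies a z-statistic, and if the planned trial has the same design and the true effect equals the observed one, its expected z-statistic is the same up to a square-root-of-sample-size-ratio factor. A sketch of that case only (the paper also covers ANCOVA, ANOVA, regression models, the Wilcoxon and logrank tests, and confidence-interval input); the function name and size_ratio parameter are ours:

```python
from scipy.stats import norm

def power_from_pvalue(p_ref: float, alpha: float = 0.05, size_ratio: float = 1.0) -> float:
    """Approximate power of a planned trial from a reference trial's two-sided
    P-value, assuming the same design and true effect; size_ratio is the
    planned-to-reference sample size ratio."""
    z_ref = norm.ppf(1 - p_ref / 2)      # z-statistic implied by the P-value
    z_crit = norm.ppf(1 - alpha / 2)     # two-sided critical value
    return norm.cdf(z_ref * size_ratio**0.5 - z_crit)

# A reference trial with P = 0.01 implies ~73% power for an identical replication.
print(round(power_from_pvalue(0.01), 2))  # 0.73
```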
6. Updating meta-analyses leads to larger type I errors than publication bias.
- Author
- Borm GF and Donders AR
- Subjects
- Data Interpretation, Statistical, Humans, Randomized Controlled Trials as Topic, Treatment Outcome, Meta-Analysis as Topic, Publication Bias
- Abstract
Objective: To estimate the extent to which the practice of periodically updating meta-analyses causes inflation of the type I error and then to compare the estimate with the inflation caused by publication bias. We also present a simple method to adjust for the inflation associated with updating meta-analyses. Study Design and Setting: Simulations were used to estimate the error rates. Results: In general, updating meta-analyses caused 2- to 5-fold inflation of the type I error rates, which exceeded the inflation caused by publication bias. As a rule of thumb, the results of a meta-analysis are robust up to 5, 10, 15, or 22 updates, if the P-value multiplied by 4, 6, 8, or 10 remains below the desired significance level. Conclusion: Meta-analyses are likely to be updated until a clear conclusion is reached. Therefore, it is important to take the inflation of the error rate into account to interpret the results correctly.
- Published
- 2009
- Full Text
- View/download PDF
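The rule of thumb above lends itself to a small helper that applies the tabulated multipliers; the function name is ours:

```python
def updated_meta_pvalue_ok(p: float, n_updates: int, alpha: float = 0.05) -> bool:
    """Apply the paper's rule of thumb: a meta-analysis updated up to
    5/10/15/22 times is robust if P times 4/6/8/10 stays below alpha."""
    for max_updates, factor in ((5, 4), (10, 6), (15, 8), (22, 10)):
        if n_updates <= max_updates:
            return p * factor < alpha
    raise ValueError("rule of thumb tabulated only up to 22 updates")

print(updated_meta_pvalue_ok(0.004, 7))   # True:  0.004 * 6 = 0.024 < 0.05
print(updated_meta_pvalue_ok(0.010, 12))  # False: 0.010 * 8 = 0.080 >= 0.05
```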
7. The evidence provided by a single trial is less reliable than its statistical analysis suggests.
- Author
- Borm GF, Lemmers O, Fransen J, and Donders R
- Subjects
- Antirheumatic Agents therapeutic use, Arthritis, Rheumatoid drug therapy, Data Interpretation, Statistical, Electromagnetic Fields adverse effects, Humans, Leukemia, Radiation-Induced epidemiology, Leukemia, Radiation-Induced etiology, Outcome Assessment, Health Care methods, Patient Selection, Research Design, Clinical Trials as Topic standards, Evidence-Based Medicine standards
- Abstract
Objective: To investigate whether a single trial can provide sufficiently robust evidence to warrant clinical implementation of its results. Trial-specific factors, such as subject selection, study design, and execution strategy, have an impact on the outcome of trials. In multiple trials, they may lead to heterogeneity that can be taken into account in the (random effects) meta-analysis. Single trials lack this method of estimating the impact of such factors, and this affects the credibility of the results. Study Design and Setting: To indicate how much the precision of the results of a single trial might be overestimated, we calculated the ratio of the widths of the confidence intervals when heterogeneity was taken into account and when it was not. Results: The ratios of the widths of the confidence intervals with and without between-study variability were 1.15, 1.41, and 2.00, when the heterogeneity I² values were 0.25, 0.50, and 0.75, respectively. Conclusion: The results of a single trial should be interpreted with caution. When it is difficult to predict or determine how trial-specific factors influence the results, the best way to evaluate the performance of a treatment is to use multiple, possibly smaller, trials.
- Published
- 2009
- Full Text
- View/download PDF
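The reported ratios follow from a variance-inflation argument: with I² = τ²/(τ² + σ²), acknowledging between-study variability multiplies the total variance by 1/(1 − I²) and the confidence interval width by its square root. A sketch reproducing the abstract's figures:

```python
import math

def ci_width_ratio(i_squared: float) -> float:
    """Ratio of CI widths with vs. without between-study variability.
    With I^2 = tau^2 / (tau^2 + sigma^2), the total variance is inflated
    by 1 / (1 - I^2), so the width ratio is its square root."""
    return 1 / math.sqrt(1 - i_squared)

# Reproduces the abstract's figures: 1.15, 1.41, 2.00.
for i2 in (0.25, 0.50, 0.75):
    print(round(ci_width_ratio(i2), 2))
```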
8. Publication bias was not a good reason to discourage trials with low power.
- Author
- Borm GF, den Heijer M, and Zielhuis GA
- Subjects
- Bias, Epidemiologic Studies, Ethics, Humans, Reproducibility of Results, Clinical Trials as Topic statistics & numerical data, Meta-Analysis as Topic, Publication Bias statistics & numerical data
- Abstract
Objective: The objective was to investigate whether it is justified to discourage trials with less than 80% power. Trials with low power are unlikely to produce conclusive results, but their findings can be used by pooling them in a meta-analysis. However, such an analysis may be biased, because trials with low power are likely to have a nonsignificant result and are less likely to be published than trials with a statistically significant outcome. Study Design and Setting: We simulated several series of studies with varying degrees of publication bias and then calculated the "real" one-sided type I error and the bias of meta-analyses with a "nominal" error rate (significance level) of 2.5%. Results: In single trials, in which heterogeneity was set at zero, low, and high, the error rates were 2.3%, 4.7%, and 16.5%, respectively. In multiple trials with 80%-90% power and a publication rate of 90% when the results were nonsignificant, the error rates could be as high as 5.1%. When the power was 50% and the publication rate of nonsignificant results was 60%, the error rates did not exceed 5.3%, whereas the bias was at most 15% of the difference used in the power calculation. Conclusion: The impact of publication bias does not warrant the exclusion of trials with 50% power.
- Published
- 2009
- Full Text
- View/download PDF
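A toy version of the kind of simulation described, for null trials of equal size pooled by a fixed-effect meta-analysis; the publication rule and parameters are illustrative, and the paper's setup (power levels, heterogeneity, bias in the estimate) is richer:

```python
import numpy as np

rng = np.random.default_rng(42)

def meta_type1_error(n_trials=5, pub_rate_nonsig=0.6, n_sims=20_000):
    """Monte Carlo estimate of the one-sided type I error (nominal 2.5%) of a
    fixed-effect meta-analysis of equally sized null trials, when trials with
    nonsignificant results are published with probability pub_rate_nonsig."""
    rejections = 0
    for _ in range(n_sims):
        z = rng.standard_normal(n_trials)          # true effect = 0
        published = (z > 1.96) | (rng.random(n_trials) < pub_rate_nonsig)
        if not published.any():
            continue
        # Pooled z of k equally precise trials is sum(z) / sqrt(k).
        pooled_z = z[published].sum() / np.sqrt(published.sum())
        rejections += pooled_z > 1.96
    return rejections / n_sims

print(meta_type1_error())  # noticeably above the nominal 0.025
```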
9. Pseudo cluster randomization performed well when used in practice.
- Author
- Melis RJ, Teerenstra S, Rikkert MG, and Borm GF
- Subjects
- Aged, Aged, 80 and over, Attitude of Health Personnel, Clinical Competence, Female, Health Services Research methods, Health Services for the Aged organization & administration, Home Care Services organization & administration, Humans, Male, Outcome Assessment, Health Care methods, Patient Selection, Quality of Life, Research Design, Selection Bias, Primary Health Care organization & administration, Randomized Controlled Trials as Topic methods
- Abstract
Objective: In the Dutch EASYcare Study, pseudo cluster randomization (PCR) randomized clinicians into two groups (H and L), with a high or a low proportion of each clinician's patients allocated to the intervention arm accordingly. We used PCR because cluster randomization risked selection bias and individual randomization risked contamination. We evaluated the performance of PCR. Study Design and Setting: Clinicians were asked about treatment arm preferences, recruitment behavior, possible contaminating behavior, and what they thought the allocation ratio was. We compared patients' baseline characteristics and clinicians' recruitment rates. Results: The groups were comparable at baseline. Clinicians favored the intervention arm (Visual Analogue Scale 14.5 [SD 15.6]; 0-100; 0 = strongly favoring intervention arm, 100 = strongly favoring usual care arm), and 58% said they would have recruited fewer patients had every participant been allocated to the control group. Sixty-five percent of clinicians used intervention elements in control patients. Sixty-seven percent of clinicians estimated that a 50:50 allocation ratio was used. Conclusion: The assumptions underlying PCR largely applied in this study. PCR performed satisfactorily, without signs of unblinding or selection bias.
- Published
- 2008
- Full Text
- View/download PDF
10. Objective and perspective determine the choice of composite endpoint.
- Author
- Borm GF, Teerenstra S, and Zielhuis GA
- Subjects
- Choice Behavior, Humans, Statistics as Topic, Clinical Trials as Topic methods, Goals, Research Design
- Abstract
The most important consideration in the choice of study design and endpoint is that these two features match and represent the objective and the perspective of the trial as closely as possible. The mechanism of the treatment may also be helpful, but arguments based on the mechanism, on the structure or levels of the variables, or on practical considerations such as (statistical) efficiency must always be secondary considerations.
- Published
- 2008
- Full Text
- View/download PDF
11. A simple sample size formula for analysis of covariance in randomized clinical trials.
- Author
- Borm GF, Fransen J, and Lemmens WA
- Subjects
- Analysis of Variance, Antirheumatic Agents therapeutic use, Arthritis, Rheumatoid drug therapy, Drug Therapy, Combination, Humans, Isoxazoles therapeutic use, Leflunomide, Research Design, Sulfasalazine therapeutic use, Treatment Outcome, Randomized Controlled Trials as Topic methods, Sample Size
- Abstract
Objective: Randomized clinical trials that compare two treatments on a continuous outcome can be analyzed using analysis of covariance (ANCOVA) or a t-test approach. We present a method for the sample size calculation when ANCOVA is used. Study Design and Setting: We derived an approximate sample size formula. Simulations were used to verify the accuracy of the formula and to improve the approximation for small trials. The sample size calculations are illustrated in a clinical trial in rheumatoid arthritis. Results: If the correlation between the outcome measured at baseline and at follow-up is ρ, ANCOVA comparing groups of (1 − ρ²)n subjects has the same power as a t-test comparing groups of n subjects. When ANCOVA is used instead of the t-test on the same data, the precision of the treatment estimate is increased, and the length of the confidence interval is reduced by a factor √(1 − ρ²). Conclusion: ANCOVA may considerably reduce the number of patients required for a trial.
- Published
- 2007
- Full Text
- View/download PDF
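The abstract's (1 − ρ²) factor converts any standard two-sample size into an ANCOVA one. A sketch using the normal approximation for the t-test sample size; the paper's small-trial refinement from its simulations is not included, and the function name is ours:

```python
import math
from scipy.stats import norm

def ancova_n_per_group(delta: float, sd: float, rho: float,
                       alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size for ANCOVA: the usual two-sample size
    multiplied by (1 - rho^2), rho being the baseline/follow-up correlation."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_ttest = 2 * (z * sd / delta) ** 2    # normal-approximation t-test size
    return math.ceil((1 - rho ** 2) * n_ttest)

# Detecting 0.5 SD with rho = 0.6: ~41 per group for ANCOVA vs. ~63 for a t-test.
print(ancova_n_per_group(delta=0.5, sd=1.0, rho=0.6))  # 41
```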
12. A generalized concept of power helped to choose optimal endpoints in clinical trials.
- Author
- Borm GF, van der Wilt GJ, Kremer JA, and Zielhuis GA
- Subjects
- Clinical Protocols, Confidence Intervals, Data Interpretation, Statistical, Fertilization in Vitro, Humans, Models, Statistical, Probability, Quality of Life, Research Design, Sample Size, Benchmarking, Clinical Trials as Topic
- Abstract
Objectives: A clinical trial may have multiple objectives. Sometimes the results for several parameters may need to be significant or meet certain other criteria. In such cases, it is important to evaluate the probability that all these objectives will be met, rather than the probability that each will be met. The purpose of this article is to introduce a definition of power that is tailored to handle this situation and that is helpful for the design of such trials. Study Design and Setting: We introduce a generalized concept of power. It can handle complex situations, for example, in which there is a logical combination of partial objectives. These may be formulated not only in terms of statistical tests and confidence intervals, but also in nonstatistical terms, such as "selecting the optimal dose." Results: The power of a trial was calculated for various objectives and combinations of objectives. Conclusion: The generalized concept of power may lead to power calculations that closely match the objectives of the trial and contribute to choosing more efficient endpoints and designs.
- Published
- 2007
- Full Text
- View/download PDF
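When the partial objectives are statistical tests, the generalized power can be estimated by simulating the joint distribution of the test statistics. A sketch for the simplest case of two correlated normal endpoints that must both reach two-sided significance; the effect sizes, correlation, and function names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def generalized_power(n_per_group=100, effects=(0.4, 0.3), corr=0.5, n_sims=20_000):
    """Probability that BOTH endpoints reach two-sided significance at 5%,
    for two correlated normal endpoints (a simple instance of power defined
    over a combination of objectives)."""
    cov = np.array([[1.0, corr], [corr, 1.0]])
    se = np.sqrt(2 / n_per_group)                 # SE of each group difference
    crit = 1.96 * se
    diffs = rng.multivariate_normal(effects, cov * se ** 2, size=n_sims)
    # One-sided exceedance of the two-sided critical value approximates
    # two-sided power when the true effects are positive.
    return np.mean((diffs > crit).all(axis=1))

print(generalized_power())  # joint power, below either endpoint's marginal power
```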
13. Pseudo cluster randomization dealt with selection bias and contamination in clinical trials.
- Author
- Teerenstra S, Melis RJ, Peer PG, and Borm GF
- Subjects
- Cluster Analysis, Humans, Patient Selection, Randomized Controlled Trials as Topic standards, Research Design, Randomized Controlled Trials as Topic methods, Selection Bias
- Abstract
Background and Objectives: When contamination is present, randomization on a patient level leads to dilution of the treatment effect. The usual solution is to randomize on a cluster level, but this comes at the cost of efficiency and, more importantly, may introduce selection bias. Furthermore, it may slow down recruitment in the clusters that are randomized to the "less interesting" treatment. We discuss an alternative randomization procedure to address these problems. Methods: Pseudo cluster randomization is a two-stage randomization procedure that balances between individual randomization and cluster randomization. For common scenarios, the design factors needed to calculate the appropriate sample size are tabulated. Results: A pseudo cluster randomized design can reduce selection bias and contamination while maintaining good efficiency and possibly improving enrollment. To support a well-informed choice of randomization procedure, we discuss the advantages of each method and provide a decision flow chart. Conclusion: When contamination is thought to be substantial in an individually randomized setting and a cluster randomized design would suffer from selection bias and/or slow recruitment, pseudo cluster randomization can be considered.
- Published
- 2006
- Full Text
- View/download PDF
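A sketch of the two-stage procedure for a trial with clinicians as clusters; the 80:20 majority fraction and the even split of clusters are illustrative assumptions, as the abstract does not fix these ratios:

```python
import random

def pseudo_cluster_randomize(clusters, ratio_high=0.8, seed=None):
    """Two-stage allocation: first randomize clusters (clinicians) to a
    'high' or 'low' group, then randomize patients within each cluster to
    intervention vs. control in an 80:20 or 20:80 ratio, respectively.

    clusters: dict mapping cluster id -> list of patient ids.
    Returns a dict mapping patient id -> 'intervention' or 'control'.
    """
    rng = random.Random(seed)
    ids = list(clusters)
    rng.shuffle(ids)
    high = set(ids[: len(ids) // 2])   # stage 1: half the clusters to group H
    allocation = {}
    for cid, patients in clusters.items():
        p = ratio_high if cid in high else 1 - ratio_high
        for pat in patients:           # stage 2: per-patient randomization
            allocation[pat] = "intervention" if rng.random() < p else "control"
    return allocation

example = {"clinA": ["p1", "p2", "p3"], "clinB": ["p4", "p5"],
           "clinC": ["p6", "p7"], "clinD": ["p8"]}
print(pseudo_cluster_randomize(example, seed=3))
```

Because every clinician treats patients from both arms, contamination and selection bias are diluted relative to full cluster randomization, while the unequal within-cluster ratios retain most of the efficiency of individual randomization.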
14. An investigation of clinical studies suggests those with multiple objectives should have at least 90% power for each endpoint.
- Author
- Borm GF, Houben RM, Welsing PM, and Zielhuis GA
- Subjects
- Anti-Inflammatory Agents, Non-Steroidal therapeutic use, Antirheumatic Agents therapeutic use, Arthritis, Rheumatoid drug therapy, Clinical Protocols, Data Interpretation, Statistical, Goals, Humans, Models, Statistical, Research Design, Sample Size, Treatment Outcome, Analysis of Variance, Clinical Trials as Topic
- Abstract
Background and Objectives: Many clinical studies have more than one objective, either formally or informally, but this is not usually taken into account in the determination of the sample size. We investigated the overall power of a study, that is, the probability that all the objectives will be met. Methods: We calculated the overall power in the case that the study has two primary outcome variables and in the case that one outcome variable is evaluated on two subsets, in particular, the per-protocol group and the intention-to-treat group. Results: A power of 80% for each of the two endpoints leads to poor power for the endpoints combined. However, a power of 90% better preserves the overall power. The power of the per-protocol analysis can be higher or lower than the power of the intention-to-treat analysis. Conclusion: Power should be calculated for all endpoints combined, and it should be at least 90% for each primary endpoint. If the sample size for the intention-to-treat analysis is determined by adding a percentage of "nonevaluable subjects" to the sample size required for the per-protocol analysis, this may lead to an underpowered study.
- Published
- 2006
- Full Text
- View/download PDF
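The headline result follows from elementary probability: for independent endpoints, the overall power is the product of the per-endpoint powers (positive correlation between the test statistics raises it). A minimal sketch:

```python
import math

def overall_power(*powers: float) -> float:
    """Probability that every endpoint meets its objective, assuming the
    endpoints are independent; positive correlation raises the joint power."""
    return math.prod(powers)

print(round(overall_power(0.80, 0.80), 2))  # 0.64 -- two 80%-powered endpoints
print(round(overall_power(0.90, 0.90), 2))  # 0.81 -- 90% each preserves it better
```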