562 results for "Altman, Douglas G."
Search Results
2. A longitudinal assessment of trial protocols approved by research ethics committees: The Adherence to SPIrit REcommendations in the UK (ASPIRE-UK) study.
- Author
-
Speich, Benjamin, Odutayo, Ayodele, Peckham, Nicholas, Ooms, Alexander, Stokes, Jamie R., Saccilotto, Ramon, Gryaznov, Dmitry, von Niederhäusern, Belinda, Copsey, Bethan, Altman, Douglas G., Briel, Matthias, and Hopewell, Sally
- Abstract
Background: To assess the quality of reporting of RCT protocols approved by UK research ethics committees before and after the publication of the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) guideline. Methods: We had access to RCT study protocols that received ethical approval in the UK in 2012 (n=103) and 2016 (n=108). From those, we assessed the adherence to the 33 SPIRIT items (i.e. a total of 64 components of the 33 SPIRIT items). We descriptively analysed the adherence to SPIRIT guidelines as the proportion of adequately reported items (median and interquartile range [IQR]) and stratified the results by year of approval and sponsor. Results: The proportion of reported SPIRIT items increased from a median of 64.9% (IQR, 57.6-69.2%) in 2012 to a median of 72.5% (IQR, 65.3-78.3%) in 2016. Industry-sponsored RCTs reported more SPIRIT items in 2012 (median 67.4%; IQR, 64.1-69.4%) compared to non-industry-sponsored trials (median 59.8%; IQR, 46.5-67.7%). This gap between industry- and non-industry-sponsored trials increased in 2016 (industry-sponsored: median 75.6%; IQR, 71.2-79.0% vs non-industry-sponsored: median 65.3%; IQR, 51.6-76.3%). Conclusions: Adherence to SPIRIT guidelines improved in the UK from 2012 to 2016 but remains at a modest level, especially for non-industry-sponsored RCTs. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
3. A CHecklist for statistical Assessment of Medical Papers (the CHAMP statement): explanation and elaboration.
- Author
-
Mansournia, Mohammad Ali, Collins, Gary S., Nielsen, Rasmus Oestergaard, Nazemipour, Maryam, Jewell, Nicholas P., Altman, Douglas G., and Campbell, Michael J.
- Subjects
MEDICAL statistics, SPORTS statistics, SPORTS sciences, MEDICAL research, MEDICAL sciences
- Abstract
Misuse of statistics in medical and sports science research is common and may lead to detrimental consequences to healthcare. Many authors, editors and peer reviewers of medical papers will not have expert knowledge of statistics or may be unconvinced about the importance of applying correct statistics in medical research. Although there are guidelines on reporting statistics in medical papers, a checklist on the more general and commonly seen aspects of statistics to assess when peer-reviewing an article is needed. In this article, we propose a CHecklist for statistical Assessment of Medical Papers (CHAMP) comprising 30 items related to the design and conduct, data analysis, reporting and presentation, and interpretation of a research paper. While CHAMP is primarily aimed at editors and peer reviewers during the statistical assessment of a medical paper, we believe it will serve as a useful reference to improve authors' and readers' practice in their use of statistics in medical research. We strongly encourage editors and peer reviewers to consult CHAMP when assessing manuscripts for potential publication. Authors also may apply CHAMP to ensure the validity of their statistical approach and reporting of medical research, and readers may consider using CHAMP to enhance their statistical assessment of a paper. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
4. Transparent Reporting of Multivariable Prediction Models in Journal and Conference Abstracts: TRIPOD for Abstracts.
- Author
-
Heus, Pauline, Reitsma, Johannes B., Collins, Gary S., Damen, Johanna A.A.G., Scholten, Rob J.P.M., Altman, Douglas G., Moons, Karel G.M., and Hooft, Lotty
- Abstract
Clear and informative reporting in titles and abstracts is essential to help readers and reviewers identify potentially relevant studies and decide whether to read the full text. Although the TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) statement provides general recommendations for reporting titles and abstracts, more detailed guidance seems to be desirable. The authors present TRIPOD for Abstracts, a checklist and corresponding guidance for reporting prediction model studies in abstracts. A list of 32 potentially relevant items was the starting point for a modified Delphi procedure involving 110 experts, of whom 71 (65%) participated in the web-based survey. After 2 Delphi rounds, the experts agreed on 21 items as being essential to report in abstracts of prediction model studies. This number was reduced by merging some of the items. In a third round, participants provided feedback on a draft version of TRIPOD for Abstracts. The final checklist contains 12 items and applies to journal and conference abstracts that describe the development or external validation of a diagnostic or prognostic prediction model, or the added value of predictors to an existing model, regardless of the clinical domain or statistical approach used. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
5. The adaptive designs CONSORT extension (ACE) statement: a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design.
- Author
-
Dimairo, Munyaradzi, Pallmann, Philip, Wason, James, Todd, Susan, Jaki, Thomas, Julious, Steven A., Mander, Adrian P., Weir, Christopher J., Koenig, Franz, Walton, Marc K., Nicholl, Jon P., Coates, Elizabeth, Biggs, Katie, Hamasaki, Toshimitsu, Proschan, Michael A., Scott, John A., Ando, Yuki, Hind, Daniel, Altman, Douglas G., and on behalf of the ACE Consensus Group
- Subjects
CRIME & the press, KNOWLEDGE transfer, PRIVATE sector, PUBLIC sector
- Abstract
Adaptive designs (ADs) allow pre-planned changes to an ongoing trial without compromising the validity of conclusions, and it is essential to distinguish pre-planned from unplanned changes that may also occur. The reporting of ADs in randomised trials is inconsistent and needs improving. Incompletely reported AD randomised trials are difficult to reproduce and are hard to interpret and synthesise. This consequently hampers their ability to inform practice as well as future research and contributes to research waste. Better transparency and adequate reporting will enable the potential benefits of ADs to be realised. This extension to the Consolidated Standards Of Reporting Trials (CONSORT) 2010 statement was developed to enhance the reporting of randomised AD clinical trials. We developed an Adaptive designs CONSORT Extension (ACE) guideline through a two-stage Delphi process with input from multidisciplinary key stakeholders in clinical trials research in the public and private sectors from 21 countries, followed by a consensus meeting. Members of the CONSORT Group were involved during the development process. The paper presents the ACE checklists for AD randomised trial reports and abstracts, as well as an explanation with examples to aid the application of the guideline. The ACE checklist comprises seven new items, nine modified items, six unchanged items for which additional explanatory text clarifies further considerations for ADs, and 20 unchanged items not requiring further explanatory text. The ACE abstract checklist has one new item, one modified item, one unchanged item with additional explanatory text for ADs, and 15 unchanged items not requiring further explanatory text. The intention is to enhance transparency and improve the reporting of AD randomised trials, improving the interpretability of their results and the reproducibility of their methods, results and inference. We also hope indirectly to facilitate the much-needed knowledge transfer of innovative trial designs to maximise their potential benefits. In order to encourage its wide dissemination, this article is freely accessible on the BMJ and Trials journal websites. "To maximise the benefit to society, you need to not just do research but do it well" (Douglas G. Altman). [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
6. The Adaptive designs CONSORT Extension (ACE) statement: a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design.
- Author
-
Dimairo, Munyaradzi, Pallmann, Philip, Wason, James, Todd, Susan, Jaki, Thomas, Julious, Steven A., Mander, Adrian P., Weir, Christopher J., Koenig, Franz, Walton, Marc K., Nicholl, Jon P., Coates, Elizabeth, Biggs, Katie, Hamasaki, Toshimitsu, Proschan, Michael A., Scott, John A., Ando, Yuki, Hind, Daniel, and Altman, Douglas G.
- Published
- 2020
- Full Text
- View/download PDF
7. Design, analysis and reporting of multi-arm trials and strategies to address multiple testing.
- Author
-
Odutayo, Ayodele, Gryaznov, Dmitry, Copsey, Bethan, Monk, Paul, Speich, Benjamin, Roberts, Corran, Vadher, Karan, Dutton, Peter, Briel, Matthias, Hopewell, Sally, Altman, Douglas G., and the ASPIRE study group
- Subjects
FALSE positive error, CRIME & the press, BONFERRONI correction, ERROR rates, DECISION making, RESEARCH ethics, EXPERIMENTAL design, STATISTICS, CLINICAL trials, LONGITUDINAL method
- Abstract
Background: It is unclear how multiple treatment comparisons are managed in the analysis of multi-arm trials, particularly with regard to reducing type I (false positive) and type II (false negative) errors. Methods: We conducted a cohort study of clinical-trial protocols that were approved by research ethics committees in the UK, Switzerland, Germany and Canada in 2012. We examined the use of multiple-testing procedures to control the overall type I error rate. We created a decision tool to determine the need for multiple-testing procedures. We compared the result of the decision tool to the analysis plan in the protocol. We also compared the pre-specified analysis plans in trial protocols to their publications. Results: Sixty-four protocols for multi-arm trials were identified, of which 50 involved multiple testing. Nine of 50 trials (18%) used a single-step multiple-testing procedure such as a Bonferroni correction and 17 (38%) used an ordered sequence of primary comparisons to control the overall type I error. Based on our decision tool, 45 of 50 protocols (90%) required use of a multiple-testing procedure, but only 28 of the 45 (62%) accounted for multiplicity in their analysis or provided a rationale if no multiple-testing procedure was used. We identified 32 protocol-publication pairs, of which 8 planned a global-comparison test and 20 planned a multiple-testing procedure in their trial protocol. However, four of these eight trials (50%) did not use the global-comparison test. Likewise, 3 of the 20 trials (15%) did not perform the multiple-testing procedure in the publication. The sample size of our study was small and we did not have access to statistical-analysis plans for the included trials. Conclusions: Strategies to reduce type I and type II errors are inconsistently employed in multi-arm trials. Important analytical differences exist between planned analyses in clinical-trial protocols and subsequent publications, which may suggest selective reporting of analyses. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
8. Survival and disease characteristics of de novo versus recurrent metastatic breast cancer in a cohort of young patients.
- Author
-
McKenzie, Hayley S., Maishman, Tom, Simmonds, Peter, Durcan, Lorraine, POSH Steering Group, Altman, Douglas G., Jones, Louise, Evans, Gareth, Thompson, Alastair M., Pharaoh, Paul, Hanby, Andrew, Lakhani, Sunil, Eeles, Ros, Gilbert, Fiona J., Hamed, Hisham, Hodgson, Shirley, Eccles, Diana, and Copson, Ellen
- Abstract
Background: It is not clear how the pathology, presentation and outcome for patients who present with de novo metastatic breast cancer (dnMBC) compare with those who later develop distant metastases. DnMBC is uncommon in younger patients. We describe these differences within a cohort of young patients in the United Kingdom. Methods: Women aged 40 years or younger with a first invasive breast cancer were recruited to the prospective POSH national cohort study. Baseline clinicopathological data were collected, with annual follow-up. Overall survival (OS) and post-distant relapse-free survival (PDRS) were assessed using Kaplan-Meier curves. Results: In total, 862 patients were diagnosed with metastatic disease. DnMBC prevalence was 2.6% (76/2977). Of those with initially localised disease, 27.1% (786/2901) subsequently developed a distant recurrence. Median follow-up was 11.00 years (95% CI 10.79-11.59). Patients who developed metastatic disease within 12 months had worse OS than dnMBC patients (HR 2.64; 1.84-3.77). For PDRS, dnMBC was better than all groups, including those who relapsed after 5 years. Of dnMBC patients, 1.3% had a gBRCA1 mutation and 11.8% a gBRCA2 mutation. Conclusions: Young women with dnMBC have better PDRS than those who develop relapsed metastatic breast cancer. A gBRCA2 mutation was overrepresented in dnMBC. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
9. Design choices for observational studies of the effect of exposure on disease incidence.
- Author
-
Gail, Mitchell H., Altman, Douglas G., Cadarette, Suzanne M., Collins, Gary, Evans, Stephen J. W., Sekula, Peggy, Williamson, Elizabeth, and Woodward, Mark
- Abstract
The purpose of this paper is to help readers choose an appropriate observational study design for measuring an association between an exposure and disease incidence. We discuss cohort studies, sub-samples from cohorts (case-cohort and nested case-control designs), and population-based or hospital-based case-control studies. Appropriate study design is the foundation of a scientifically valid observational study. Mistakes in design are often irremediable. Key steps are understanding the scientific aims of the study and what is required to achieve them. Some designs will not yield the information required to realise the aims. The choice of design also depends on the availability of source populations and resources. Choosing an appropriate design requires balancing the pros and cons of various designs in view of study aims and practical constraints. We compare various cohort and case-control designs to estimate the effect of an exposure on disease incidence and mention how certain design features can reduce threats to study validity. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
10. Childhood obesity intervention studies: A narrative review and guide for investigators, authors, editors, reviewers, journalists, and readers to guard against exaggerated effectiveness claims.
- Author
-
Brown, Andrew W., Altman, Douglas G., Baranowski, Tom, Bland, J. Martin, Dawson, John A., Dhurandhar, Nikhil V., Dowla, Shima, Fontaine, Kevin R., Gelman, Andrew, Heymsfield, Steven B., Jayawardene, Wasantha, Keith, Scott W., Kyle, Theodore K., Loken, Eric, Oakes, J. Michael, Stevens, June, Thomas, Diana M., and Allison, David B.
- Subjects
CHILDHOOD obesity, STATISTICAL hypothesis testing, CHILDREN in literature, TECHNICAL reports, JOURNALISTS
- Abstract
Summary: Being able to draw accurate conclusions from childhood obesity trials is important to make advances in reversing the obesity epidemic. However, obesity research sometimes is not conducted or reported to appropriate scientific standards. To constructively draw attention to this issue, we present 10 errors that are commonly committed, illustrate each error with examples from the childhood obesity literature, and follow with suggestions on how to avoid these errors. These errors are as follows: using self‐reported outcomes and teaching to the test; foregoing control groups and risking regression to the mean creating differences over time; changing the goal posts; ignoring clustering in studies that randomize groups of children; following the forking paths, subsetting, p‐hacking, and data dredging; basing conclusions on tests for significant differences from baseline; equating "no statistically significant difference" with "equally effective"; ignoring intervention study results in favor of observational analyses; using one‐sided testing for statistical significance; and stating that effects are clinically significant even though they are not statistically significant. We hope that compiling these errors in one article will serve as the beginning of a checklist to support fidelity in conducting, analyzing, and reporting childhood obesity research. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
11. Statistical methodology for constructing gestational age-related charts using cross-sectional and longitudinal data: The INTERGROWTH-21st project as a case study.
- Author
-
Ohuma, Eric O., Altman, Douglas G., and International Fetal and Newborn Growth Consortium for the 21st Century (INTERGROWTH-21st Project)
- Subjects
STATISTICAL hypothesis testing, BIG data, GESTATIONAL age, STATISTICAL models, PERCENTILES
- Abstract
Most studies aiming to construct reference or standard charts use a cross-sectional design, collecting one measurement per participant. Reference or standard charts can also be constructed using a longitudinal design, collecting multiple measurements per participant. The choice of appropriate statistical methodology is important as inaccurate centiles resulting from inferior methods can lead to incorrect judgements about fetal or newborn size, resulting in suboptimal clinical care. Reference or standard centiles should ideally provide the best fit to the data, change smoothly with age (eg, gestational age), use as simple a statistical model as possible without compromising model fit, and allow the computation of Z-scores from centiles to simplify assessment of individuals and enable comparison with different populations. Significance testing and goodness-of-fit statistics are usually used to discriminate between models. However, these methods tend not to be useful when examining large data sets as very small differences are statistically significant even if the models are indistinguishable on actual centile plots. Choosing the best model from amongst many is therefore not trivial. Model choice should not be based on statistical considerations (or tests) alone as sometimes the best model may not necessarily offer the best fit to the raw data across gestational age. In this paper, we describe the most commonly applied methodologies available for the construction of age-specific reference or standard centiles for cross-sectional and longitudinal data: Fractional polynomial regression, LMS, LMST, LMSP, and multilevel regression methods. For illustration, we used data from the INTERGROWTH-21st Project, ie, newborn weight (cross-sectional) and fetal head circumference (longitudinal) data as examples. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
12. Design and other methodological considerations for the construction of human fetal and neonatal size and growth charts.
- Author
-
Ohuma, Eric O., Altman, Douglas G., and International Fetal and Newborn Growth Consortium for the 21st Century (INTERGROWTH-21st Project)
- Subjects
LONGITUDINAL method, HUMAN growth, CROSS-sectional method, QUALITY control, GESTATIONAL age
- Abstract
This paper discusses the features of study design and methodological considerations for constructing reference centile charts for attained size, growth, and velocity charts with a focus on human growth charts used during pregnancy. Recent systematic reviews of pregnancy dating, fetal size, and newborn size charts showed that many studies aimed at constructing charts are still conducted poorly. Important design features such as inclusion and exclusion criteria, ultrasound quality control measures, sample size determination, anthropometric evaluation, gestational age estimation, assessment of outliers, and chart presentation are seldom well addressed, considered, or reported. Many of these charts are in clinical use today and directly affect the identification of at-risk newborns that require treatment and nutritional strategies. This paper therefore reiterates some of the concepts previously identified as important for growth studies, focusing on considerations and concepts related to study design, sample size, and methodological considerations with an aim of obtaining valid reference or standard centile charts. We discuss some of the key issues and provide more details and practical examples based on our experiences from the INTERGROWTH-21st Project. We discuss the statistical methodology and analyses for cross-sectional studies and longitudinal studies in a separate article in this issue. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
13. Sample size for binary logistic prediction models: Beyond events per variable criteria.
- Author
-
van Smeden, Maarten, Moons, Karel G.M., de Groot, Joris A.H., Collins, Gary S., Altman, Douglas G., Eijkemans, Marinus J.C., and Reitsma, Johannes B.
- Subjects
PREDICTION models, LOGISTIC regression analysis, ERROR probability, RECEIVER operating characteristic curves, TECHNICAL specifications, SAMPLE size (Statistics), COMPUTER simulation, EXPERIMENTAL design, RESEARCH, RESEARCH methodology, EVALUATION research, MEDICAL cooperation, COMPARATIVE studies, STATISTICAL models
- Abstract
Binary logistic regression is one of the most frequently applied statistical approaches for developing clinical prediction models. Developers of such models often rely on an Events Per Variable criterion (EPV), notably EPV ≥10, to determine the minimal sample size required and the maximum number of candidate predictors that can be examined. We present an extensive simulation study in which we studied the influence of EPV, events fraction, number of candidate predictors, the correlations and distributions of candidate predictor variables, area under the ROC curve, and predictor effects on out-of-sample predictive performance of prediction models. The out-of-sample performance (calibration, discrimination and probability prediction error) of developed prediction models was studied before and after regression shrinkage and variable selection. The results indicate that EPV does not have a strong relation with metrics of predictive performance, and is not an appropriate criterion for (binary) prediction model development studies. We show that out-of-sample predictive performance can better be approximated by considering the number of predictors, the total sample size and the events fraction. We propose that the development of new sample size criteria for prediction models should be based on these three parameters, and provide suggestions for improving sample size determination. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
14. CONSORT 2010 statement: extension to randomised crossover trials.
- Author
-
Dwan, Kerry, Li, Tianjing, Altman, Douglas G., and Elbourne, Diana
- Published
- 2019
- Full Text
- View/download PDF
15. Can we be certain that storage duration of transfused red blood cells does not affect patient outcomes?
- Author
-
Trivella, Marialena, Stanworth, Simon J., Brunskill, Susan, Dutton, Peter, and Altman, Douglas G.
- Published
- 2019
- Full Text
- View/download PDF
16. Reporting of Multi-Arm Parallel-Group Randomized Trials: Extension of the CONSORT 2010 Statement.
- Author
-
Juszczak, Edmund, Altman, Douglas G., Hopewell, Sally, and Schulz, Kenneth
- Subjects
PUBLISHING, CLINICAL trials, NEWSLETTERS, SYSTEMATIC reviews, STANDARDS
- Abstract
Importance: The quality of reporting of randomized clinical trials is suboptimal. In an era in which the need for greater research transparency is paramount, inadequate reporting hinders assessment of the reliability and validity of trial findings. The Consolidated Standards of Reporting Trials (CONSORT) 2010 Statement was developed to improve the reporting of randomized clinical trials, but the primary focus was on parallel-group trials with 2 groups. Multi-arm trials that use a parallel-group design (comparing treatments by concurrently randomizing participants to one of the treatment groups, usually with equal probability) but have 3 or more groups are relatively common. The quality of reporting of multi-arm trials varies substantially, making judgments and interpretation difficult. While the majority of the elements of the CONSORT 2010 Statement apply equally to multi-arm trials, some elements need adaptation, and, in some cases, additional issues need to be clarified. Objective: To present an extension to the CONSORT 2010 Statement for reporting multi-arm trials to facilitate the reporting of such trials. Design: A guideline writing group, which included all authors, formed following the CONSORT group meeting in 2014. The authors met in person and by teleconference bimonthly between 2014 and 2018 to develop and revise the checklist and the accompanying text, with additional discussions by email. A draft manuscript was circulated to the wider CONSORT group of 36 individuals, plus 5 other selected individuals known for their specialist knowledge in clinical trials, for review. Extensive feedback was received from 14 individuals and, after detailed consideration of their comments, a final revised version of the extension was prepared. Findings: This CONSORT extension for multi-arm trials expands on 10 items of the CONSORT 2010 checklist and provides examples of good reporting and a rationale for the importance of each extension item. Key recommendations are that multi-arm trials should be identified as such and require clear objectives and hypotheses referring to all of the treatment groups. Primary treatment comparisons should be identified, and authors should report the planned and unplanned comparisons resulting from multiple groups completely and transparently. If statistical adjustments for multiplicity are applied, the rationale and method used should be described. Conclusions and Relevance: This extension of the CONSORT 2010 Statement provides specific guidance for the reporting of multi-arm parallel-group randomized clinical trials and should help provide greater transparency and accuracy in the reporting of such trials. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
17. Uniformity in measuring adherence to reporting guidelines: the example of TRIPOD for assessing completeness of reporting of prediction model studies.
- Author
-
Heus, Pauline, Damen, Johanna A. A. G., Pajouheshnia, Romin, Scholten, Rob J. P. M., Reitsma, Johannes B., Collins, Gary S., Altman, Douglas G., Moons, Karel G. M., and Hooft, Lotty
- Abstract
To promote uniformity in measuring adherence to the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement, a reporting guideline for diagnostic and prognostic prediction model studies, and thereby facilitate comparability of future studies assessing its impact, we transformed the original 22 TRIPOD items into an adherence assessment form and defined adherence scoring rules. TRIPOD specific challenges encountered were the existence of different types of prediction model studies and possible combinations of these within publications. More general issues included dealing with multiple reporting elements, reference to information in another publication, and non-applicability of items. We recommend our adherence assessment form to be used by anyone (eg, researchers, reviewers, editors) evaluating adherence to TRIPOD, to make these assessments comparable. In general, when developing a form to assess adherence to a reporting guideline, we recommend formulating specific adherence elements (if needed multiple per reporting guideline item) using unambiguous wording and the consideration of issues of applicability in advance. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
18. COSMOS-E: Guidance on conducting systematic reviews and meta-analyses of observational studies of etiology.
- Author
-
Dekkers, Olaf M., Vandenbroucke, Jan P., Cevallos, Myriam, Renehan, Andrew G., Altman, Douglas G., and Egger, Matthias
- Abstract
Background: To our knowledge, no publication providing overarching guidance on the conduct of systematic reviews of observational studies of etiology exists. Methods and Findings: Conducting Systematic Reviews and Meta-Analyses of Observational Studies of Etiology (COSMOS-E) provides guidance on all steps in systematic reviews of observational studies of etiology, from shaping the research question and defining exposure and outcomes to assessing the risk of bias and statistical analysis. The writing group included researchers experienced in meta-analyses and observational studies of etiology. Standard peer review was performed. While the structure of systematic reviews of observational studies on etiology may be similar to that for systematic reviews of randomised controlled trials, there are specific tasks within each component that differ. Examples include assessment of confounding, selection bias, and information bias. In systematic reviews of observational studies of etiology, combining studies in meta-analysis may lead to more precise estimates, but such greater precision does not automatically remedy potential bias. Thorough exploration of sources of heterogeneity is key when assessing the validity of estimates and causality. Conclusion: As many reviews of observational studies on etiology are being performed, this document may provide researchers with guidance on how to conduct and analyse such reviews. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
19. A guide to systematic review and meta-analysis of prognostic factor studies.
- Author
-
Riley, Richard D., Moons, Karel G. M., Snell, Kym I. E., Ensor, Joie, Hooft, Lotty, Altman, Douglas G., Hayden, Jill, Collins, Gary S., and Debray, Thomas P. A.
- Published
- 2019
- Full Text
- View/download PDF
21. Completeness of reporting of randomised controlled trials including people with transient ischaemic attack or stroke: A systematic review.
- Author
-
Wilson, Blair, Burnett, Peter, Moher, David, Altman, Douglas G., and Al-Shahi Salman, Rustam
- Published
- 2018
- Full Text
- View/download PDF
22. Development process of a consensus-driven CONSORT extension for randomised trials using an adaptive design.
- Author
-
Dimairo, Munyaradzi, Coates, Elizabeth, Julious, Steven A., Biggs, Katie, Nicholl, Jon, Hind, Daniel, Hamasaki, Toshimitsu, Proschan, Michael A., Scott, John A., Ando, Yuki, Altman, Douglas G., Pallmann, Philip, Todd, Susan, Jaki, Thomas, Mander, Adrian P., Wason, James, Weir, Christopher J., Koenig, Franz, and Walton, Marc K.
- Subjects
CLINICAL trials, RANDOMIZED controlled trials, DECISION making, RESEARCH ethics, LITERATURE reviews, CLINICAL trial registries
- Abstract
Background: Adequate reporting of adaptive designs (ADs) maximises their potential benefits in the conduct of clinical trials. Transparent reporting can help address some obstacles and concerns relating to the use of ADs. Currently, there are deficiencies in the reporting of AD trials. To overcome this, we have developed a consensus-driven extension to the CONSORT statement for randomised trials using an AD. This paper describes the processes and methods used to develop this extension rather than a detailed explanation of the guideline. Methods: We developed the guideline in seven overlapping stages: 1) building on prior research to inform the need for a guideline; 2) a scoping literature review to inform future stages; 3) drafting the first checklist version involving an External Expert Panel; 4) a two-round Delphi process involving international, multidisciplinary, and cross-sector key stakeholders; 5) a consensus meeting to advise which reporting items to retain through voting, and to discuss the structure of what to include in the supporting explanation and elaboration (E&E) document; 6) refining and finalising the checklist; and 7) writing up and disseminating the E&E document. The CONSORT Executive Group oversaw the entire development process. Results: Delphi survey response rates were 94/143 (66%), 114/156 (73%), and 79/143 (55%) in rounds 1, 2, and across both rounds, respectively. Twenty-seven delegates from Europe, the USA, and Asia attended the consensus meeting. The main checklist has seven new and nine modified items, and six unchanged items with expanded E&E text to clarify further considerations for ADs. The abstract checklist has one new and one modified item, together with an unchanged item with expanded E&E text. The E&E document will describe the scope of the guideline, the definition of an AD, and some types of ADs and trial adaptations, and will explain each reporting item in detail, including case studies. Conclusions: We hope that making transparent the development processes, methods, and all supporting information that aided decision-making will enhance the acceptability and quick uptake of the guideline. This will also help other groups when developing similar CONSORT extensions. The guideline is applicable to all randomised trials with an AD and contains minimum reporting requirements. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
23. Overinterpretation and misreporting of prognostic factor studies in oncology: a systematic review.
- Author
-
Kempf, Emmanuelle, de Beyer, Jennifer A., Cook, Jonathan, Holmes, Jane, Mohammed, Seid, Nguyên, Tri-Long, Simera, Iveta, Trivella, Marialena, Altman, Douglas G., Hopewell, Sally, Moons, Karel G. M., Porcher, Raphael, Reitsma, Johannes B., Sauerbrei, Willi, and Collins, Gary S.
- Abstract
Background: Cancer prognostic biomarkers have shown disappointing clinical applicability. The objective of this study was to classify and estimate how study results are overinterpreted and misreported in prognostic factor studies in oncology.Methods: This systematic review focused on 17 oncology journals with an impact factor above 7. PubMed was searched for primary clinical studies published in 2015, evaluating prognostic factors. We developed a classification system, focusing on three domains: misleading reporting (selective, incomplete reporting, misreporting), misleading interpretation (unreliable statistical analysis, spin) and misleading extrapolation of the results (claiming irrelevant clinical applicability, ignoring uncertainty).Results: Our search identified 10,844 articles. The 98 studies included investigated a median of two prognostic factors (Q1-Q3, 1-7). The prognostic factors' effects were selectively and incompletely reported in 35/98 and 24/98 full texts, respectively. Twenty-nine articles used linguistic spin in the form of strong statements. Linguistic spin rejecting non-significant results was found in 34 full-text results and 15 abstract results sections. One in five articles had discussion and/or abstract conclusions that were inconsistent with the study findings. Sixteen reports had discrepancies between their full-text and abstract conclusions.Conclusions: Our study provides evidence of frequent overinterpretation of findings of prognostic factor assessment in high-impact medical oncology journals. [ABSTRACT FROM AUTHOR]- Published
- 2018
- Full Text
- View/download PDF
24. Reporting Recommendations for Tumor Marker Prognostic Studies (REMARK): An Abridged Explanation and Elaboration.
- Author
-
Sauerbrei, Willi, Taube, Sheila E, McShane, Lisa M, Cavenagh, Margaret M, and Altman, Douglas G
- Subjects
TUMOR markers ,CANCER ,BIOMARKERS ,TUMORS ,TUMOR antigens ,TUMOR diagnosis ,COMPARATIVE studies ,EXPERIMENTAL design ,RESEARCH methodology ,MEDICAL cooperation ,MEDICAL protocols ,MEDICAL research ,PROGNOSIS ,PUBLISHING ,RESEARCH ,EVALUATION research ,STANDARDS - Abstract
The Reporting Recommendations for Tumor Marker Prognostic Studies (REMARK) were developed to address widespread deficiencies in the reporting of such studies. The REMARK checklist consists of 20 items to report for published tumor marker prognostic studies. A detailed paper was published explaining the rationale behind checklist items, providing positive examples and giving empirical evidence of the quality of reporting. REMARK provides a comprehensive overview to educate on good reporting and provide a valuable reference for the many issues to consider when designing, conducting, and analyzing tumor marker studies and prognostic studies in medicine in general. Despite support for REMARK from major cancer journals, prognostic factor research studies remain poorly reported. To encourage dissemination and uptake of REMARK, we have produced this considerably abridged version of the detailed explanatory manuscript, which may also serve as a brief guide to key issues for investigators planning tumor marker prognostic studies. To summarize the current situation, more recent papers investigating the quality of reporting and related reporting guidelines are cited, but otherwise the literature is not updated. Another important impetus for this paper is that it serves as a basis for literal translations into other languages. Translations will help to bring key information to a larger audience world-wide. Many more details can be found in the original paper. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
25. Reporting guidelines for oncology research: helping to maximise the impact of your research.
- Author
-
MacCarthy, Angela, Kirtley, Shona, de Beyer, Jennifer A, Altman, Douglas G, and Simera, Iveta
- Subjects
EXPERIMENTAL design ,MEDICAL protocols ,MEDICAL research ,ONCOLOGY ,REPORT writing ,STANDARDS - Abstract
Many reports of health research omit important information needed to assess their methodological robustness and clinical relevance. Without clear and complete reporting, it is not possible to identify flaws or biases, reproduce successful interventions, or use the findings in systematic reviews or meta-analyses. The EQUATOR Network (http://www.equator-network.org/) promotes responsible reporting and the use of reporting guidelines to improve the accuracy, completeness, and transparency of health research. EQUATOR supports researchers by providing online resources and training. EQUATOR Oncology, a project funded by Cancer Research UK, aims to support cancer researchers reporting their research through the provision of online resources. In this article, our objective is to highlight reporting issues related to oncology research publications and to introduce reporting guidelines that are designed to aid high-quality reporting. We describe generic reporting guidelines for the main study types, and explain how these guidelines should and should not be used. We also describe 37 oncology-specific reporting guidelines, covering different clinical areas (e.g., haematology or urology) and sections of the report (e.g., methods or study characteristics); most of these are little-used. We also provide some background information on EQUATOR Oncology, which focuses on addressing the reporting needs of the oncology research community. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
26. Safety of Perioperative Aprotinin Administration During Isolated Coronary Artery Bypass Graft Surgery: Insights From the ART (Arterial Revascularization Trial).
- Author
-
Benedetto, Umberto, Altman, Douglas G., Gerry, Stephen, Gray, Alastair, Lees, Belinda, Angelini, Gianni D., Flather, Marcus, Taggart, David P., the ART (Arterial Revascularization Trial) Investigators, and ART (Arterial Revascularization Trial) Investigators
- Published
- 2018
- Full Text
- View/download PDF
27. Choosing important health outcomes for comparative effectiveness research: An updated systematic review and involvement of low and middle income countries.
- Author
-
Davis, Katherine, Gorst, Sarah L., Harman, Nicola, Smith, Valerie, Gargon, Elizabeth, Altman, Douglas G., Blazeby, Jane M., Clarke, Mike, Tunis, Sean, and Williamson, Paula R.
- Subjects
PUBLIC health ,COMPARATIVE studies ,MIDDLE-income countries - Abstract
Background: Core outcome sets (COS) comprise a minimum set of outcomes that should be measured and reported in all trials for a specific health condition. The COMET (Core Outcome Measures in Effectiveness Trials) Initiative maintains an up-to-date, publicly accessible online database of published and ongoing COS. An annual systematic review update is an important part of this process. Methods: This review employed the same, multifaceted approach that was used in the original review and the previous two updates. This approach has identified studies that sought to determine which outcomes/domains to measure in clinical trials of a specific condition. This update includes an analysis of the inclusion of participants from low and middle income countries (LMICs), as identified by the OECD, in these COS. Results: Eighteen publications, relating to 15 new studies describing the development of 15 COS, were eligible for inclusion in the review. Results show an increase in the use of mixed methods, including Delphi surveys. Clinical experts remain the most common stakeholder group involved. Overall, only 16% of the 259 COS studies published up to the end of 2016 have included participants from LMICs. Conclusion: This review highlights opportunities for greater public participation in COS development and the involvement of stakeholders from a wider range of geographical settings, in particular LMICs. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
28. Influence of peer review on the reporting of primary outcome(s) and statistical analyses of randomised trials.
- Author
-
Hopewell, Sally, Witt, Claudia M., Linde, Klaus, Icke, Katja, Adedire, Olubusola, Kirtley, Shona, and Altman, Douglas G.
- Subjects
CLINICAL trials ,HEALTH outcome assessment ,STATISTICS ,INTERNET surveys ,MEDICAL research - Abstract
Background: Selective reporting of outcomes in clinical trials is a serious problem. We aimed to investigate the influence of the peer review process within biomedical journals on reporting of primary outcome(s) and statistical analyses within reports of randomised trials.Methods: Each month, PubMed (May 2014 to April 2015) was searched to identify primary reports of randomised trials published in six high-impact general and 12 high-impact specialty journals. The corresponding author of each trial was invited to complete an online survey asking authors about changes made to their manuscript as part of the peer review process. Our main aims were to assess: (1) the nature and extent of changes as part of the peer review process, in relation to reporting of the primary outcome(s) and/or primary statistical analysis; (2) how often authors followed these requests; and (3) whether this was related to specific journal or trial characteristics.Results: Of 893 corresponding authors who were invited to take part in the online survey, 258 (29%) responded. The majority of trials were multicentre (n = 191; 74%); median sample size 325 (IQR 138 to 1010). The primary outcome was clearly defined in 92% (n = 238), of which the direction of treatment effect was statistically significant in 49%. The majority responded (1-10 Likert scale) they were satisfied with the overall handling (mean 8.6, SD 1.5) and quality of peer review (mean 8.5, SD 1.5) of their manuscript. Only 3% (n = 8) said that the editor or peer reviewers had asked them to change or clarify the trial's primary outcome. However, 27% (n = 69) reported they were asked to change or clarify the statistical analysis of the primary outcome; most had fulfilled the request, the main motivation being to improve the statistical methods (n = 38; 55%) or avoid rejection (n = 30; 44%). 
Overall, there was little association between authors being asked to make this change and the type of journal, intervention, significance of the primary outcome, or funding source. Thirty-six percent (n = 94) of authors had been asked to include additional analyses that had not been included in the original manuscript; in 77% (n = 72) these were not pre-specified in the protocol. Twenty-three percent (n = 60) had been asked to modify their overall conclusion, usually (n = 53; 88%) to provide a more cautious conclusion.Conclusion: Overall, most changes, as a result of the peer review process, resulted in improvements to the published manuscript; there was little evidence of a negative impact in terms of post hoc changes of the primary outcome. However, some suggested changes might be considered inappropriate, such as unplanned additional analyses, and should be discouraged. [ABSTRACT FROM AUTHOR]- Published
- 2018
- Full Text
- View/download PDF
29. Beta-blockers for heart failure with reduced, mid-range, and preserved ejection fraction: an individual patient-level analysis of double-blind randomized trials.
- Author
-
Cleland, John G F, Bunting, Karina V, Flather, Marcus D, Altman, Douglas G, Holmes, Jane, Coats, Andrew J S, Manzano, Luis, McMurray, John J V, Ruschitzka, Frank, and Veldhuisen, Dirk J van
- Abstract
Aims: Recent guidelines recommend that patients with heart failure and left ventricular ejection fraction (LVEF) 40-49% should be managed similarly to those with LVEF ≥50%. We investigated the effect of beta-blockers according to LVEF in double-blind, randomized, placebo-controlled trials. Methods and results: Individual patient data meta-analysis of 11 trials, stratified by baseline LVEF and heart rhythm (Clinicaltrials.gov: NCT0083244; PROSPERO: CRD42014010012). Primary outcomes were all-cause mortality and cardiovascular death over 1.3 years median follow-up, with an intention-to-treat analysis. For 14 262 patients in sinus rhythm, median LVEF was 27% (interquartile range 21-33%), including 575 patients with LVEF 40-49% and 244 with LVEF ≥50%. Beta-blockers reduced all-cause and cardiovascular mortality compared to placebo in sinus rhythm, an effect that was consistent across LVEF strata, except for those in the small subgroup with LVEF ≥50%. For LVEF 40-49%, death occurred in 21/292 [7.2%] randomized to beta-blockers compared to 35/283 [12.4%] with placebo; adjusted hazard ratio (HR) 0.59 [95% confidence interval (CI) 0.34-1.03]. Cardiovascular death occurred in 13/292 [4.5%] with beta-blockers and 26/283 [9.2%] with placebo; adjusted HR 0.48 (95% CI 0.24-0.97). Over a median of 1.0 years following randomization (n = 4601), LVEF increased with beta-blockers in all groups in sinus rhythm except LVEF ≥50%. For patients in atrial fibrillation at baseline (n = 3050), beta-blockers increased LVEF when < 50% at baseline, but did not improve prognosis. Conclusion: Beta-blockers improve LVEF and prognosis for patients with heart failure in sinus rhythm with a reduced LVEF. The data are most robust for LVEF < 40%, but similar benefit was observed in the subgroup of patients with LVEF 40-49%. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
30. Guidelines for the Content of Statistical Analysis Plans in Clinical Trials.
- Author
-
Gamble, Carrol, Krishan, Ashma, Stocken, Deborah, Lewis, Steff, Juszczak, Edmund, Doré, Caroline, Williamson, Paula R., Altman, Douglas G., Montgomery, Alan, Lim, Pilar, Berlin, Jesse, Senn, Stephen, Day, Simon, Barbachano, Yolanda, and Loder, Elizabeth
- Subjects
GUIDELINES ,CONTENT analysis ,CLINICAL trials ,REPRODUCIBLE research ,STATISTICIANS ,RESEARCH & society ,PERIODICAL editors ,STATISTICAL standards ,COMPARATIVE studies ,DELPHI method ,RESEARCH methodology ,MEDICAL cooperation ,RESEARCH ,STATISTICS ,DATA analysis ,EVALUATION research - Abstract
Importance: While guidance on statistical principles for clinical trials exists, there is an absence of guidance covering the required content of statistical analysis plans (SAPs) to support transparency and reproducibility.Objective: To develop recommendations for a minimum set of items that should be addressed in SAPs for clinical trials, developed with input from statisticians, previous guideline authors, journal editors, regulators, and funders.Design: Funders and regulators (n = 39) of randomized trials were contacted and the literature was searched to identify existing guidance; a survey of current practice was conducted across the network of UK Clinical Research Collaboration-registered trial units (n = 46, 1 unit had 2 responders) and a Delphi survey (n = 73 invited participants) was conducted to establish consensus on SAPs. The Delphi survey was sent to statisticians in trial units who completed the survey of current practice (n = 46), CONSORT (Consolidated Standards of Reporting Trials) and SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) guideline authors (n = 16), pharmaceutical industry statisticians (n = 3), journal editors (n = 9), and regulators (n = 2) (3 participants were included in 2 groups each), culminating in a consensus meeting attended by experts (N = 12) with representatives from each group. The guidance subsequently underwent critical review by statisticians from the surveyed trial units and members of the expert panel of the consensus meeting (N = 51), followed by piloting of the guidance document in the SAPs of 5 trials.Findings: No existing guidance was identified. The registered trials unit survey (46 responses) highlighted diversity in current practice and confirmed support for developing guidance. The Delphi survey (54 of 73, 74% participants completing both rounds) reached consensus on 42% (n = 46) of 110 items. 
The expert panel (N = 12) agreed that 63 items should be included in the guidance, with an additional 17 items identified as important but may be referenced elsewhere. Following critical review and piloting, some overlapping items were combined, leaving 55 items.Conclusions and Relevance: Recommendations are provided for a minimum set of items that should be addressed and included in SAPs for clinical trials. Trial registration, protocols, and statistical analysis plans are critically important in ensuring appropriate reporting of clinical trials. [ABSTRACT FROM AUTHOR]- Published
- 2017
- Full Text
- View/download PDF
31. Core Outcome Set-STAndards for Development: The COS-STAD recommendations.
- Author
-
Kirkham, Jamie J., Davis, Katherine, Altman, Douglas G., Blazeby, Jane M., Clarke, Mike, Tunis, Sean, and Williamson, Paula R.
- Subjects
RESEARCH methodology ,RESEARCH ,MEDICAL scientists ,MEDICAL research ,STANDARDS ,BIOLOGICAL assay ,DELPHI method ,EXPERIMENTAL design ,HEALTH outcome assessment ,RESEARCH funding - Abstract
Background: The use of core outcome sets (COS) ensures that researchers measure and report those outcomes that are most likely to be relevant to users of their research. Several hundred COS projects have been systematically identified to date, but there has been no formal quality assessment of these studies. The Core Outcome Set-STAndards for Development (COS-STAD) project aimed to identify minimum standards for the design of a COS study agreed upon by an international group, while other specific guidance exists for the final reporting of COS development studies (Core Outcome Set-STAndards for Reporting [COS-STAR]).Methods and Findings: An international group of experienced COS developers, methodologists, journal editors, potential users of COS (clinical trialists, systematic reviewers, and clinical guideline developers), and patient representatives produced the COS-STAD recommendations to help improve the quality of COS development and support the assessment of whether a COS had been developed using a reasonable approach. An open survey of experts generated an initial list of items, which was refined by a 2-round Delphi survey involving nearly 250 participants representing key stakeholder groups. Participants assigned importance ratings for each item using a 1-9 scale. Consensus that an item should be included in the set of minimum standards was defined as at least 70% of the voting participants from each stakeholder group providing a score between 7 and 9. The Delphi survey was followed by a consensus discussion with the study management group representing multiple stakeholder groups. COS-STAD contains 11 minimum standards that are the minimum design recommendations for all COS development projects. 
The recommendations focus on 3 key domains: the scope, the stakeholders, and the consensus process.Conclusions: The COS-STAD project has established 11 minimum standards to be followed by COS developers when planning their projects and by users when deciding whether a COS has been developed using reasonable methods. [ABSTRACT FROM AUTHOR]- Published
- 2017
- Full Text
- View/download PDF
32. Interventions to improve adherence to reporting guidelines in health research: a scoping review protocol.
- Author
-
Blanco, David, Kirkham, Jamie J., Altman, Douglas G., Moher, David, Boutron, Isabelle, and Cobo, Erik
- Abstract
Introduction There is evidence that the use of some reporting guidelines, such as the Consolidated Standards for Reporting Trials, is associated with improved completeness of reporting in health research. However, the current levels of adherence to reporting guidelines are suboptimal. Over the last few years, several actions aiming to improve compliance with reporting guidelines have been taken and proposed. We will conduct a scoping review of interventions to improve adherence to reporting guidelines in health research that have been evaluated or suggested, in order to inform future interventions. Methods and analysis Our review will follow the Joanna Briggs Institute scoping review methods manual. We will search for relevant studies in MEDLINE, EMBASE and Cochrane Library databases. Moreover, we will carry out lateral searches from the reference lists of the included studies, as well as from the lists of articles citing the included ones. One reviewer will screen the full list, which will be randomly split into two halves and independently screened by the other two reviewers. Two reviewers will perform data extraction independently. Discrepancies will be resolved through discussion. In addition, this search strategy will be supplemented by a grey literature search. The interventions found will be classified as assessed or suggested, as well as according to different criteria, in relation to their target (journal policies, journal editors, authors, reviewers, funders, ethical boards or others) or the research stage at which they are performed (design, conducting, reporting or peer review). Descriptive statistical analysis will be performed. Ethics and dissemination A paper summarising the findings from this review will be published in a peer-reviewed journal. This scoping review will contribute to a better understanding and a broader perspective on how the problem of adhering better to reporting guidelines has been tackled so far. 
This could be a major first step towards developing future strategies to improve compliance with reporting guidelines in health research. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
33. Impact of dual antiplatelet therapy after coronary artery bypass surgery on 1-year outcomes in the Arterial Revascularization Trial.
- Author
-
Benedetto, Umberto, Altman, Douglas G., Gerry, Stephen, Gray, Alastair, Lees, Belinda, Flather, Marcus, and Taggart, David P.
- Subjects
CORONARY artery bypass ,REVASCULARIZATION (Surgery) ,NERVE grafting ,ASPIRIN ,MYOCARDIAL infarction diagnosis ,PHYSIOLOGY - Abstract
OBJECTIVES: There is still little evidence to support routine dual antiplatelet therapy (DAPT) with P2Y12 antagonists following coronary artery bypass grafting (CABG). The Arterial Revascularization Trial (ART) was designed to compare 10-year survival after bilateral versus single internal thoracic artery grafting. We aimed to get insights into the effect of DAPT (with clopidogrel) following CABG on 1-year outcomes by performing a post hoc ART analysis. METHODS: Among patients enrolled in the ART (n = 3102), 609 (21%) and 2308 (79%) were discharged on DAPT or aspirin alone, respectively. The primary end-point was the incidence of major adverse cerebrovascular and cardiac events (MACCE) at 1 year including cardiac death, myocardial infarction, cerebrovascular accident and reintervention; safety end-point was bleeding requiring hospitalization. Propensity score (PS) matching was used to create comparable groups. RESULTS: Among 609 PS-matched pairs, MACCE occurred in 34 (5.6%) and 34 (5.6%) in the DAPT and aspirin alone groups, respectively, with no significant difference between the 2 groups [hazard ratio (HR) 0.97, 95% confidence interval (CI) 0.59-1.59; P = 0.90]. Only 188 (31%) subjects completed 1 year of DAPT, and in this subgroup, MACCE rate was 5.8% (HR 1.11, 95% CI 0.53-2.30; P = 0.78). In the overall sample, bleeding rate was higher in the DAPT group (2.3% vs 1.1%; P = 0.02), although this difference was no longer significant after matching (2.3% vs 1.8%; P = 0.54). CONCLUSIONS: Based on these findings, when compared with aspirin alone, DAPT with clopidogrel prescribed at discharge was not associated with a significant reduction of adverse cardiac and cerebrovascular events at 1 year following CABG. [ABSTRACT FROM AUTHOR]- Published
- 2017
- Full Text
- View/download PDF
34. Associations Between Adding a Radial Artery Graft to Single and Bilateral Internal Thoracic Artery Grafts and Outcomes: Insights From the Arterial Revascularization Trial.
- Author
-
Taggart, David P., Altman, Douglas G., Flather, Marcus, Gerry, Stephen, Gray, Alastair, Lees, Belinda, Benedetto, Umberto, and ART (Arterial Revascularization Trial) Investigators
- Published
- 2017
- Full Text
- View/download PDF
35. Systematic review adherence to methodological or reporting quality.
- Author
-
Pussegoda, Kusala, Turner, Lucy, Garritty, Chantelle, Mayhew, Alain, Skidmore, Becky, Stevens, Adrienne, Boutron, Isabelle, Sarkis-Onofre, Rafael, Bjerre, Lise M., Hróbjartsson, Asbjørn, Altman, Douglas G., and Moher, David
- Subjects
META-analysis ,GUIDELINES ,ACCURACY in journalism - Abstract
Background: Guidelines for assessing methodological and reporting quality of systematic reviews (SRs) were developed to contribute to implementing evidence-based health care and the reduction of research waste. As SRs assessing a cohort of SRs are becoming more prevalent in the literature, and with the increased uptake of SR evidence for decision-making, the methodological quality and standard of reporting of SRs are of interest. The objective of this study is to evaluate SR adherence to the Quality of Reporting of Meta-analyses (QUOROM) and PRISMA reporting guidelines and the A Measurement Tool to Assess Systematic Reviews (AMSTAR) and Overview Quality Assessment Questionnaire (OQAQ) quality assessment tools as evaluated in methodological overviews. Methods: The Cochrane Library, MEDLINE®, and EMBASE® databases were searched from January 1990 to October 2014. Title and abstract screening and full-text screening were conducted independently by two reviewers. Reports assessing the quality or reporting of a cohort of SRs of interventions using PRISMA, QUOROM, OQAQ, or AMSTAR were included. All results are reported as frequencies and percentages of reports and SRs respectively. Results: Of the 20,765 independent records retrieved from electronic searching, 1189 reports were reviewed for eligibility at full text, of which 56 reports (5371 SRs in total) evaluating the PRISMA, QUOROM, AMSTAR, and/or OQAQ tools were included. Notable items include the following: of the SRs using PRISMA, over 85% (1532/1741) provided a rationale for the review and less than 6% (102/1741) provided protocol information. For reports using QUOROM, only 9% (40/449) of SRs provided a trial flow diagram. However, 90% (402/449) described the explicit clinical problem and review rationale in the introduction section. Of reports using AMSTAR, 30% (534/1794) used duplicate study selection and data extraction. Conversely, 80% (1439/1794) of SRs provided study characteristics of included studies. 
In terms of OQAQ, 37% (499/1367) of the SRs assessed risk of bias (validity) in the included studies, while 80% (1112/1387) reported the criteria for study selection. Conclusions: Although reporting guidelines and quality assessment tools exist, reporting and methodological quality of SRs are inconsistent. Mechanisms to improve adherence to established reporting guidelines and methodological assessment tools are needed to improve the quality of SRs. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
36. CONSORT Extension for Chinese Herbal Medicine Formulas 2017: Recommendations, Explanation, and Elaboration (Traditional Chinese Version).
- Author
-
Cheng, Chung-Wah, Wu, Tai-Xiang, Shang, Hong-Cai, Li, You-Ping, Altman, Douglas G, Moher, David, Bian, Zhao-Xiang, and CONSORT-CHM Formulas 2017 Group
- Subjects
PUBLISHING ,CLINICAL trials ,EXPERIMENTAL design ,HERBAL medicine ,CHINESE medicine ,QUALITY control ,STANDARDS - Abstract
Editors' Note: This article is the traditional Chinese version of the CONSORT Extension for Chinese Herbal Medicine Formulas 2017: Recommendations, Explanation, and Elaboration. (Cheng C, Wu T, Shang H, Li, Y, Altman D, Moher D; CONSORT-CHM Formulas 2017 Group. CONSORT Extension for Chinese Herbal Medicine Formulas 2017: Recommendations, Explanation, and Elaboration. Ann Intern Med. 2017;167:112-21. [Epub 27 June 2017]. doi:10.7326/M16-2977). [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
37. CONSORT Extension for Chinese Herbal Medicine Formulas 2017: Recommendations, Explanation, and Elaboration (Simplified Chinese Version).
- Author
-
Cheng, Chung-Wah, Wu, Tai-Xiang, Shang, Hong-Cai, Li, You-Ping, Altman, Douglas G, Moher, David, Bian, Zhao-Xiang, and CONSORT-CHM Formulas 2017 Group
- Abstract
Editors' Note: This article is the simplified Chinese version of the CONSORT Extension for Chinese Herbal Medicine Formulas 2017: Recommendations, Explanation, and Elaboration. (Cheng C, Wu T, Shang H, Li, Y, Altman D, Moher D; CONSORT-CHM Formulas 2017 Group. CONSORT Extension for Chinese Herbal Medicine Formulas 2017: Recommendations, Explanation, and Elaboration. Ann Intern Med. 2017;167:112-21. [Epub 27 June 2017]. doi:10.7326/M16-2977). [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
38. CONSORT Extension for Chinese Herbal Medicine Formulas 2017: Recommendations, Explanation, and Elaboration.
- Author
-
Chung-wah Cheng, Tai-xiang Wu, Hong-cai Shang, You-ping Li, Altman, Douglas G., Moher, David, Zhao-xiang Bian, Cheng, Chung-Wah, Wu, Tai-Xiang, Shang, Hong-Cai, Li, You-Ping, Bian, Zhao-Xiang, and CONSORT-CHM Formulas 2017 Group
- Abstract
Copyright of Annals of Internal Medicine is the property of American College of Physicians and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2017
- Full Text
- View/download PDF
39. Enhancing the usability of systematic reviews by improving the consideration and description of interventions.
- Author
-
Hoffmann, Tammy C., Oxman, Andrew D., Ioannidis, John P. A., Moher, David, Lasserson, Toby J., Tovey, David I., Stein, Ken, Sutcliffe, Katy, Ravaud, Philippe, Altman, Douglas G., Perera, Rafael, and Glasziou, Paul
- Published
- 2017
- Full Text
- View/download PDF
40. CONSORT Statement for Randomized Trials of Nonpharmacologic Treatments: A 2017 Update and a CONSORT Extension for Nonpharmacologic Trial Abstracts.
- Author
-
Boutron, Isabelle, Altman, Douglas G., Moher, David, Schulz, Kenneth F., and Ravaud, Philippe
- Subjects
CLINICAL trials ,RANDOMIZED controlled trials ,CLINICAL trial registries ,MEDICAL research ,RESEARCH bias - Abstract
Incomplete and inadequate reporting is an avoidable waste that reduces the usefulness of research. The CONSORT (Consolidated Standards of Reporting Trials) Statement is an evidence-based reporting guideline that aims to improve research transparency and reduce waste. In 2008, the CONSORT Group developed an extension to the original statement that addressed methodological issues specific to trials of nonpharmacologic treatments (NPTs), such as surgery, rehabilitation, or psychotherapy. This article describes an update of that extension and presents an extension for reporting abstracts of NPT trials. To develop these materials, the authors reviewed pertinent literature published up to July 2016; surveyed authors of NPT trials; and conducted a consensus meeting with editors, trialists, and methodologists. Changes to the CONSORT Statement extension for NPT trials include wording modifications to improve readers' understanding and the addition of 3 new items. These items address whether and how adherence of participants to interventions is assessed or enhanced, description of attempts to limit bias if blinding is not possible, and specification of the delay between randomization and initiation of the intervention. The CONSORT extension for abstracts of NPT trials includes 2 new items that were not specified in the original CONSORT Statement for abstracts. The first addresses reporting of eligibility criteria for centers where the intervention is performed and for care providers. The second addresses reporting of important changes to the intervention versus what was planned. Both the updated CONSORT extension for NPT trials and the CONSORT extension for NPT trial abstracts should help authors, editors, and peer reviewers improve the transparency of NPT trial reports. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
41. CONSORT 2010 statement: extension checklist for reporting within person randomised trials.
- Author
-
Pandis, Nikolaos, Chung, Bryan, Scherer, Roberta W., Elbourne, Diana, and Altman, Douglas G.
- Published
- 2017
- Full Text
- View/download PDF
42. Identifying approaches for assessing methodological and reporting quality of systematic reviews: a descriptive study.
- Author
-
Pussegoda, Kusala, Turner, Lucy, Garritty, Chantelle, Mayhew, Alain, Skidmore, Becky, Stevens, Adrienne, Boutron, Isabelle, Sarkis-Onofre, Rafael, Bjerre, Lise M., Hróbjartsson, Asbjørn, Altman, Douglas G., and Moher, David
- Subjects
EVIDENCE-based medicine ,SYSTEMATIC reviews ,DESCRIPTIVE statistics - Abstract
Background: The methodological quality and completeness of reporting of systematic reviews (SRs) is fundamental to optimal implementation of evidence-based health care and the reduction of research waste. Methods exist to appraise SRs, yet little is known about how they are used in SRs or where there are potential gaps in research best-practice guidance materials. The aims of this study were to identify reports assessing the methodological quality (MQ) and/or reporting quality (RQ) of a cohort of SRs and to assess their number, general characteristics, and approaches to 'quality' assessment over time. Methods: The Cochrane Library, MEDLINE®, and EMBASE® were searched from January 1990 to October 16, 2014, for reports assessing MQ and/or RQ of SRs. Title, abstract, and full-text screening of all reports were conducted independently by two reviewers. Reports assessing the MQ and/or RQ of a cohort of ten or more SRs of interventions were included. All results are reported as frequencies and percentages of reports. Results: Of 20,765 unique records retrieved, 1189 proceeded to full-text review, of which 76 reports were included. Eight previously published approaches to assessing MQ, or reporting guidelines used as a proxy to assess RQ, were used in 80% (61/76) of identified reports. These comprised two reporting guidelines (PRISMA and QUOROM), five quality assessment tools (AMSTAR, R-AMSTAR, OQAQ, Mulrow, and Sacks), and the GRADE criteria. In 24% (18/76) of reports, authors developed their own criteria. PRISMA, OQAQ, and AMSTAR were the most commonly used published tools to assess MQ or RQ. Published tools were used in conjunction with other approaches in 29% (22/76) of reports, with 36% (8/22) assessing adherence to both PRISMA and AMSTAR criteria and 26% (6/22) using QUOROM and OQAQ. Conclusions: The methods used to assess the quality of SRs are diverse, and none has become universally accepted. The most commonly used quality assessment tools are AMSTAR, OQAQ, and PRISMA. As new tools and guidelines are developed to improve both the MQ and RQ of SRs, authors of methodological studies are encouraged to give thoughtful consideration to the use of appropriate tools to assess quality and reporting. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
43. The COMET Handbook: version 1.0.
- Author
-
Williamson, Paula R., Altman, Douglas G., Bagley, Heather, Barnes, Karen L., Blazeby, Jane M., Brookes, Sara T., Clarke, Mike, Gargon, Elizabeth, Gorst, Sarah, Harman, Nicola, Kirkham, Jamie J., McNair, Angus, Prinsen, Cecilia A. C., Schmitt, Jochen, Terwee, Caroline B., and Young, Bridget
- Subjects
CLINICAL trials ,DECISION making ,LEGAL status of stakeholders ,HEALTH care industry ,PATIENTS' attitudes - Abstract
The selection of appropriate outcomes is crucial when designing clinical trials in order to compare the effects of different interventions directly. For the findings to influence policy and practice, the outcomes need to be relevant and important to key stakeholders including patients and the public, health care professionals and others making decisions about health care. It is now widely acknowledged that insufficient attention has been paid to the choice of outcomes measured in clinical trials. Researchers are increasingly addressing this issue through the development and use of a core outcome set, an agreed standardised collection of outcomes which should be measured and reported, as a minimum, in all trials for a specific clinical area. Accumulating work in this area has identified the need for guidance on the development, implementation, evaluation and updating of core outcome sets. This Handbook, developed by the COMET Initiative, brings together current thinking and methodological research regarding those issues. We recommend a four-step process to develop a core outcome set. The aim is to update the contents of the Handbook as further research is identified. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
44. Did the reporting of prognostic studies of tumour markers improve since the introduction of REMARK guideline? A comparison of reporting in published articles.
- Author
-
Sekula, Peggy, Mallett, Susan, Altman, Douglas G., and Sauerbrei, Willi
- Subjects
TUMOR markers ,PROGNOSTIC tests ,PROGNOSIS ,DIFFERENTIAL diagnosis ,EARLY diagnosis ,MOLECULAR biology ,MEDICAL protocols ,GENETICS - Abstract
Although biomarkers are perceived as highly relevant for future clinical practice, few reach clinical utility, and poor reporting of studies is among the major reasons. To aid improvement, reporting guidelines such as REMARK for tumour marker prognostic (TMP) studies were introduced several years ago. The aims of this project were to assess whether the reporting quality of TMP studies has improved relative to a previously conducted study assessing the reporting quality of TMP studies (the PRE-study), and whether articles citing REMARK (the citing group) are better reported than articles not citing REMARK (the not-citing group). For the POST-study, recent articles citing and not citing REMARK (53 each) were identified in selected journals through a systematic literature search and evaluated in the same way as in the PRE-study. Ten of the 20 items of the REMARK checklist were evaluated and used to define an overall score of reporting quality. The observed overall scores were 53.4% (range: 10%-90%) for the PRE-study, 57.7% (range: 20%-100%) for the not-citing group, and 58.1% (range: 30%-100%) for the citing group of the POST-study. While there is no difference between the two groups of the POST-study, the POST-study shows a slight but not meaningful improvement in reporting relative to the PRE-study. Not all articles in the citing group cited REMARK appropriately. Irrespective of whether REMARK was cited, the overall score was slightly higher for articles published in journals requesting adherence to REMARK than for those published in journals not requesting it: 59.9% versus 51.9%, respectively. Several years after the introduction of REMARK, many key items of TMP studies are still very poorly reported. A combined effort is needed from authors, editors, reviewers, and methodologists to improve the current situation. Good reporting is not just nice to have but essential for any research to be useful. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
45. Evolution of poor reporting and inadequate methods over time in 20 920 randomised controlled trials included in Cochrane reviews: research on research study.
- Author
-
Dechartres, Agnes, Trinquart, Ludovic, Atal, Ignacio, Moher, David, Dickersin, Kay, Boutron, Isabelle, Perrodeau, Elodie, Altman, Douglas G., and Ravaud, Philippe
- Published
- 2017
- Full Text
- View/download PDF
46. Association between trial registration and positive study findings: cross sectional study (Epidemiological Study of Randomized Trials--ESORT).
- Author
-
Odutayo, Ayodele, Emdin, Connor A., Hsiao, Allan J., Shakir, Mubeen, Copsey, Bethan, Dutton, Susan, Chiocchia, Virginia, Schlussel, Michael, Dutton, Peter, Roberts, Corran, Altman, Douglas G., and Hopewell, Sally
- Published
- 2017
- Full Text
- View/download PDF
47. Harms of outcome switching in reports of randomised trials: CONSORT perspective.
- Author
-
Altman, Douglas G., Moher, David, and Schulz, Kenneth F.
- Published
- 2017
- Full Text
- View/download PDF
48. Review and publication of protocol submissions to Trials - what have we learned in 10 years?
- Author
-
Li, Tianjing, Boutron, Isabelle, Al-Shahi Salman, Rustam, Cobo, Erik, Flemyng, Ella, Grimshaw, Jeremy M., and Altman, Douglas G.
- Subjects
MASS media ,NEWSLETTERS ,PUBLISHING - Abstract
Trials has 10 years of experience in providing open access publication of protocols for randomised controlled trials. In this editorial, the senior editors and editors-in-chief of Trials discuss editorial issues regarding managing trial protocol submissions, including the content and format of the protocol, timing of submission, approaches to tracking protocol amendments, and the purpose of peer reviewing a protocol submission. With the clarification and guidance provided, we hope we can make the process of publishing trial protocols more efficient and useful to trial investigators and readers. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
49. Risk and treatment effect heterogeneity: re-analysis of individual participant data from 32 large clinical trials.
- Author
-
Kent, David M., Nelson, Jason, Dahabreh, Issa J., Rothwell, Peter M., Altman, Douglas G., and Hayward, Rodney A.
- Subjects
DRUG side effects ,PATIENT-centered care ,PROPORTIONAL hazards models ,LOGISTIC regression analysis ,KIDNEY diseases ,THERAPEUTICS ,STATISTICS ,CLINICAL trials ,RISK assessment ,RESEARCH funding ,DATA analysis - Abstract
Background: Risk of the outcome is a mathematical determinant of the absolute treatment benefit of an intervention, yet this can vary substantially within a trial population, complicating the interpretation of trial results. Methods: We developed risk models using Cox or logistic regression on a set of large publicly available randomized controlled trials (RCTs). We evaluated risk heterogeneity using the extreme quartile risk ratio (EQRR, the ratio of the outcome rate in the highest risk quartile to that in the lowest) and skewness using the median to mean risk ratio (MMRR, the ratio of the risk in the median risk patient to the average). We also examined heterogeneity of treatment effects (HTE) across risk strata. Results: We describe 39 analyses using data from 32 large trials, with event rates across studies ranging from 3% to 63% (median = 15%, 25th-75th percentile = 9-29%). C-statistics of risk models ranged from 0.59 to 0.89 (median = 0.70, 25th-75th percentile = 0.65-0.71). The EQRR ranged from 1.8 to 50.7 (median = 4.3, 25th-75th percentile = 3.0-6.1). The MMRR ranged from 0.4 to 1.0 (median = 0.86, 25th-75th percentile = 0.80-0.92). EQRRs were predictably higher and MMRRs predictably lower as the c-statistic increased or the overall outcome incidence decreased. Among 18 comparisons with a significant overall treatment effect, there was a significant interaction between treatment and baseline risk on the proportional scale in only one. The difference in the absolute risk reduction between extreme risk quartiles ranged from -3.2% to 28.3% (median = 5.1%; 25th-75th percentile = 0.3-10.9%). Conclusions: There is typically substantial variation in outcome risk in clinical trials, commonly leading to clinically significant differences in absolute treatment effects. Most patients have outcome risks lower than the trial average reflected in the summary result. Risk-stratified trial analyses are feasible and may be clinically informative, particularly when the outcome is predictable and uncommon. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
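The two heterogeneity metrics defined in the abstract of item 49 (EQRR and MMRR) are simple arithmetic on per-patient risks. A minimal illustrative sketch, not the authors' actual analysis code: it assumes a plain list of model-predicted risks and approximates each quartile's outcome rate by the mean predicted risk within that quartile.

```python
import statistics

def risk_heterogeneity(predicted_risks):
    """Illustrative EQRR and MMRR, per the definitions in the abstract.

    EQRR: ratio of the outcome rate in the highest-risk quartile to that
    in the lowest-risk quartile (approximated here by mean predicted risk
    within each extreme quartile).
    MMRR: ratio of the median patient's risk to the average risk.
    """
    risks = sorted(predicted_risks)
    q = len(risks) // 4  # number of patients in each extreme quartile
    eqrr = statistics.mean(risks[-q:]) / statistics.mean(risks[:q])
    mmrr = statistics.median(risks) / statistics.mean(risks)
    return eqrr, mmrr

# A right-skewed risk distribution: most patients sit below the mean risk,
# so MMRR < 1, consistent with the 0.4-1.0 range reported in the abstract.
eqrr, mmrr = risk_heterogeneity([0.05, 0.1, 0.1, 0.2, 0.2, 0.3, 0.4, 0.8])
```

In this hypothetical example the highest-risk quartile carries eight times the risk of the lowest, which is within the 1.8-50.7 EQRR range the trial re-analyses reported.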
50. Impact of a web-based tool (WebCONSORT) to improve the reporting of randomised trials: results of a randomised controlled trial.
- Author
-
Hopewell, Sally, Boutron, Isabelle, Altman, Douglas G., Barbour, Ginny, Moher, David, Montori, Victor, Schriger, David, Cook, Jonathan, Gerry, Stephen, Omar, Omar, Dutton, Peter, Roberts, Corran, Frangou, Eleni, Clifton, Lei, Chiocchia, Virginia, Rombach, Ines, Wartolowska, Karolina, and Ravaud, Philippe
- Subjects
RANDOMIZED controlled trials ,MEDICAL periodicals ,MEDICAL publishing ,MANUSCRIPTS ,CLINICAL trials - Abstract
Background: The CONSORT Statement is an evidence-informed guideline for reporting randomised controlled trials. A number of extensions have been developed that specify additional information to report for more complex trials. The aim of this study was to evaluate the impact of a simple web-based tool (WebCONSORT, which incorporates a number of different CONSORT extensions) on the completeness of reporting of randomised trials published in biomedical journals. Methods: We conducted a parallel group randomised trial. Journals which endorsed the CONSORT Statement (i.e. referred to it in the Instructions to Authors) but did not actively implement it (i.e. require authors to submit a completed CONSORT checklist) were invited to participate. Authors of randomised trials were requested by the editor to use the web-based tool at the manuscript revision stage. Authors registering to use the tool were randomised (centralised, computer generated) to WebCONSORT or control. In the WebCONSORT group, they had access to a tool allowing them to combine the different CONSORT extensions relevant to their trial and generate a customised checklist and flow diagram that they had to submit to the editor. In the control group, authors had access only to a CONSORT flow diagram generator. Authors, journal editors, and outcome assessors were blinded to the allocation. The primary outcome was the proportion of CONSORT items (main and extensions) reported in each article post revision. Results: A total of 46 journals actively recruited authors into the trial (25 March 2013 to 22 September 2015); 324 author manuscripts were randomised (WebCONSORT n = 166; control n = 158), of which 197 were reports of randomised trials (WebCONSORT n = 94; control n = 103). Over a third (39%; n = 127) of registered manuscripts were excluded from the analysis, mainly because the reported study was not a randomised trial. Of those included in the analysis, the most commonly selected CONSORT extensions were non-pharmacologic (n = 43; n = 50), pragmatic (n = 20; n = 16), and cluster (n = 10; n = 9). In a quarter of manuscripts, authors either selected the wrong extension or failed to select the right extension when registering their manuscript on the WebCONSORT study site. Overall, there was no important difference between WebCONSORT (mean score 0.51) and control (0.47) in the proportion of CONSORT and CONSORT extension items reported for a given study (mean difference, 0.04; 95% CI -0.02 to 0.10). Conclusions: This study failed to show a beneficial effect of a customised web-based CONSORT checklist in helping authors prepare more complete trial reports. However, the exclusion of a large number of inappropriately registered manuscripts meant we had less precision than anticipated to detect a difference. Better education is needed, earlier in the publication process, for both authors and journal editorial staff on when and how to implement CONSORT and, in particular, CONSORT-related extensions. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF