11 results for "P. Hardwicke"
Search Results
2. Expanding the Data Ark: an attempt to make the data from highly cited social science papers publicly available
- Author
- Coby Dulitzki, Steven Michael Crane, Tom E. Hardwicke, and John P. A. Ioannidis
- Subjects
- metaresearch, reproducibility, open science, data transparency, open data, social science, Science
- Abstract
Access to scientific data can enable independent reuse and verification; however, most data are not available and become increasingly irrecoverable over time. This study aimed to retrieve and preserve important datasets from 160 of the most highly-cited social science articles published in 2008–2013 and 2015–2018. We asked authors if they would share data in a public repository—the Data Ark—or provide reasons if data could not be shared. Of the 160 articles, data for 117 (73%, 95% CI [67%–80%]) were not available and data for 7 (4%, 95% CI [0%–12%]) were available with restrictions. Data for 36 (22%, 95% CI [16%–30%]) articles were available in unrestricted form: 29 of these datasets were already available and 7 datasets were made available in the Data Ark. Most authors did not respond to our data requests and a minority shared reasons for not sharing, such as legal or ethical constraints. These findings highlight an unresolved need to preserve important scientific datasets and increase their accessibility to the scientific community.
- Published
- 2024
- Full Text
- View/download PDF
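The proportions in this record are reported with binomial 95% confidence intervals (117/160, 73%, CI [67%–80%], and so on), as are similar prevalence figures in several records below. The abstract does not name the interval estimator, so the following is a minimal sketch using the common Wilson score method; its endpoints may differ by a point or two from the published intervals.

```python
# Wilson score interval for a binomial proportion -- one common way to obtain
# 95% CIs like those quoted in the abstract above. The authors' actual
# estimator is not stated, so this is an illustrative reconstruction.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for successes/n (z = 1.96 gives a 95% CI)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# The three availability categories reported for the 160 articles:
for label, k in [("not available", 117), ("restricted", 7), ("unrestricted", 36)]:
    lo, hi = wilson_ci(k, 160)
    print(f"{label}: {k}/160 = {k/160:.0%}, 95% CI [{lo:.0%}, {hi:.0%}]")
```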
3. Reducing bias in secondary data analysis via an Explore and Confirm Analysis Workflow (ECAW): a proposal and survey of observational researchers
- Author
- Robert T. Thibault, Marton Kovacs, Tom E. Hardwicke, Alexandra Sarafoglou, John P. A. Ioannidis, and Marcus R. Munafò
- Subjects
- blind data analysis, preregistration, ALSPAC, meta-research, open science, Explore and Confirm Analysis Workflow (ECAW), Science
- Abstract
Background. Although preregistration can reduce researcher bias and increase transparency in primary research settings, it is less applicable to secondary data analysis. An alternative method that affords additional protection from researcher bias, which cannot be gained from conventional forms of preregistration alone, is an Explore and Confirm Analysis Workflow (ECAW). In this workflow, a data management organization initially provides access to only a subset of their dataset to researchers who request it. The researchers then prepare an analysis script based on the subset of data, upload the analysis script to a registry, and then receive access to the full dataset. ECAWs aim to achieve similar goals to preregistration, but make access to the full dataset contingent on compliance. The present survey aimed to garner information from the research community where ECAWs could be applied—employing the Avon Longitudinal Study of Parents and Children (ALSPAC) as a case example. Methods. We emailed a web-based survey to researchers who had previously applied for access to ALSPAC's transgenerational observational dataset. Results. We received 103 responses, for a 9% response rate. The results suggest that—at least among our sample of respondents—ECAWs hold the potential to serve their intended purpose and appear relatively acceptable. For example, only 10% of respondents disagreed that ALSPAC should run a study on ECAWs (versus 55% who agreed). However, as many as 26% of respondents agreed that they would be less willing to use ALSPAC data if they were required to use an ECAW (versus 45% who disagreed). Conclusion. Our data and findings provide information for organizations and individuals interested in implementing ECAWs and related interventions. Preregistration: https://osf.io/g2fw5. Deviations from the preregistration are outlined in electronic supplementary material A.
- Published
- 2023
- Full Text
- View/download PDF
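The ECAW described in this record is, at heart, a gated-access protocol: the researcher sees only a subset, registers an analysis script, and only then receives the full dataset. The sketch below makes that gating concrete; every class and method name is hypothetical (ALSPAC exposes no such API), so read it as an illustration of the workflow rather than an implementation.

```python
# Illustrative sketch of the four ECAW steps from the abstract above. All
# names are hypothetical; only the gating logic mirrors the described workflow.
from dataclasses import dataclass, field

@dataclass
class EcawGatekeeper:
    registry: dict[str, str] = field(default_factory=dict)  # researcher -> registered script

    def request_subset(self, researcher: str) -> str:
        # Step 1: the data organization releases only a subset of the dataset.
        return f"subset-for-{researcher}"

    def register_script(self, researcher: str, script: str) -> None:
        # Steps 2-3: an analysis script written against the subset is deposited
        # (time-stamped) in a registry before full access is granted.
        self.registry[researcher] = script

    def request_full_dataset(self, researcher: str) -> str:
        # Step 4: full-dataset access is contingent on a registered script.
        if researcher not in self.registry:
            raise PermissionError("register an analysis script first")
        return "full-dataset"

gate = EcawGatekeeper()
subset = gate.request_subset("researcher-1")      # explore on the subset...
gate.register_script("researcher-1", "analysis script contents")
full = gate.request_full_dataset("researcher-1")  # ...then confirm on the full data
```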
4. Post-publication critique at top-ranked journals across scientific disciplines: a cross-sectional assessment of policies and practice
- Author
- Tom E. Hardwicke, Robert T. Thibault, Jessica E. Kosie, Loukia Tzavella, Theiss Bendixen, Sarah A. Handcock, Vivian E. Köneke, and John P. A. Ioannidis
- Subjects
- peer review, post-publication critique, letter to the editor, meta-research, journal policy, scientific criticism, Science
- Abstract
Journals exert considerable control over letters, commentaries and online comments that criticize prior research (post-publication critique). We assessed policies (Study One) and practice (Study Two) related to post-publication critique at 15 top-ranked journals in each of 22 scientific disciplines (N = 330 journals). Two hundred and seven (63%) journals accepted post-publication critique and often imposed limits on length (median 1000, interquartile range (IQR) 500–1200 words) and time-to-submit (median 12, IQR 4–26 weeks). The most restrictive limits were 175 words and two weeks; some policies imposed no limits. Of 2066 randomly sampled research articles published in 2018 by journals accepting post-publication critique, 39 (1.9%, 95% confidence interval [1.4, 2.6]) were linked to at least one post-publication critique (there were 58 post-publication critiques in total). Of the 58 post-publication critiques, 44 received an author reply, of which 41 asserted that original conclusions were unchanged. Clinical Medicine had the most active culture of post-publication critique: all journals accepted post-publication critique and published the most post-publication critique overall, but also imposed the strictest limits on length (median 400, IQR 400–550 words) and time-to-submit (median 4, IQR 4–6 weeks). Our findings suggest that top-ranked academic journals often pose serious barriers to the cultivation, documentation and dissemination of post-publication critique.
- Published
- 2022
- Full Text
- View/download PDF
5. Analytic reproducibility in articles receiving open data badges at the journal Psychological Science: an observational study
- Author
- Tom E. Hardwicke, Manuel Bohn, Kyle MacDonald, Emily Hembacher, Michèle B. Nuijten, Benjamin N. Peloquin, Benjamin E. deMayo, Bria Long, Erica J. Yoon, and Michael C. Frank
- Subjects
- open data, open badges, reproducibility, open science, meta-research, journal policy, Science
- Abstract
For any scientific report, repeating the original analyses upon the original data should yield the original outcomes. We evaluated analytic reproducibility in 25 Psychological Science articles awarded open data badges between 2014 and 2015. Initially, 16 (64%, 95% confidence interval [43, 81]) articles contained at least one ‘major numerical discrepancy’ (>10% difference), prompting us to request input from original authors. Ultimately, target values were reproducible without author involvement for 9 (36% [20, 59]) articles; reproducible with author involvement for 6 (24% [8, 47]) articles; not fully reproducible with no substantive author response for 3 (12% [0, 35]) articles; and not fully reproducible despite author involvement for 7 (28% [12, 51]) articles. Overall, 37 major numerical discrepancies remained out of 789 checked values (5% [3, 6]), but original conclusions did not appear affected. Non-reproducibility was primarily caused by unclear reporting of analytic procedures. These results highlight that open data alone is not sufficient to ensure analytic reproducibility.
- Published
- 2021
- Full Text
- View/download PDF
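This record defines a 'major numerical discrepancy' as a greater-than-10% difference between a reported value and its reproduced counterpart. A minimal sketch of that check follows; the authors' actual comparison code is not given in the abstract, so details such as the handling of reported zeros are assumptions.

```python
# Relative-difference check for 'major numerical discrepancies' (>10%),
# following the definition in the abstract above. Zero handling is assumed.

def is_major_discrepancy(reported: float, reproduced: float, tol: float = 0.10) -> bool:
    """True if reproduced differs from reported by more than tol, relatively."""
    if reported == 0:
        return reproduced != 0  # assumption: any deviation from a reported 0 counts
    return abs(reproduced - reported) / abs(reported) > tol

# Hypothetical (reported, reproduced) value pairs:
checked = [(0.73, 0.731), (12.4, 14.1), (0.05, 0.049)]
major = sum(is_major_discrepancy(r, q) for r, q in checked)
print(f"{major} of {len(checked)} checked values show a major discrepancy")
```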
6. An empirical assessment of transparency and reproducibility-related research practices in the social sciences (2014–2017)
- Author
- Tom E. Hardwicke, Joshua D. Wallach, Mallory C. Kidwell, Theiss Bendixen, Sophia Crüwell, and John P. A. Ioannidis
- Subjects
- transparency, reproducibility, meta-research, social sciences, open science, Science
- Abstract
Serious concerns about research quality have catalysed a number of reform initiatives intended to improve transparency and reproducibility and thus facilitate self-correction, increase efficiency and enhance research credibility. Meta-research has evaluated the merits of some individual initiatives; however, this may not capture broader trends reflecting the cumulative contribution of these efforts. In this study, we manually examined a random sample of 250 articles in order to estimate the prevalence of a range of transparency and reproducibility-related indicators in the social sciences literature published between 2014 and 2017. Few articles indicated availability of materials (16/151, 11% [95% confidence interval, 7% to 16%]), protocols (0/156, 0% [0% to 1%]), raw data (11/156, 7% [2% to 13%]) or analysis scripts (2/156, 1% [0% to 3%]), and no studies were pre-registered (0/156, 0% [0% to 1%]). Some articles explicitly disclosed funding sources (or lack thereof; 74/236, 31% [25% to 37%]) and some declared no conflicts of interest (36/236, 15% [11% to 20%]). Replication studies were rare (2/156, 1% [0% to 3%]). Few studies were included in evidence synthesis via systematic review (17/151, 11% [7% to 16%]) or meta-analysis (2/151, 1% [0% to 3%]). Less than half the articles were publicly available (101/250, 40% [34% to 47%]). Minimal adoption of transparency and reproducibility-related research practices could be undermining the credibility and efficiency of social science research. The present study establishes a baseline that can be revisited in the future to assess progress.
- Published
- 2020
- Full Text
- View/download PDF
7. How often do leading biomedical journals use statistical experts to evaluate statistical methods? The results of a survey.
- Author
- Tom E. Hardwicke and Steven N. Goodman
- Subjects
- Medicine, Science
- Abstract
Scientific claims in biomedical research are typically derived from statistical analyses. However, misuse or misunderstanding of statistical procedures and results permeate the biomedical literature, affecting the validity of those claims. One approach journals have taken to address this issue is to enlist expert statistical reviewers. How many journals do this, how statistical review is incorporated, and how its value is perceived by editors is of interest. Here we report an expanded version of a survey conducted more than 20 years ago by Goodman and colleagues (1998) with the intention of characterizing contemporary statistical review policies at leading biomedical journals. We received eligible responses from 107 of 364 (28%) journals surveyed, across 57 fields, mostly from editors in chief. 34% (36/107) rarely or never used specialized statistical review, 34% (36/107) used it for 10–50% of their articles, and 23% used it for all articles. These numbers have changed little since 1998 in spite of dramatically increased concern about research validity. The vast majority of editors regarded statistical review as having substantial incremental value beyond regular peer review and expressed comparatively little concern about the potential increase in reviewing time, cost, and difficulty identifying suitable statistical reviewers. Improved statistical education of researchers and different ways of employing statistical expertise are needed. Several proposals are discussed.
- Published
- 2020
- Full Text
- View/download PDF
8. From symptom to cancer diagnosis: Perspectives of patients and family members in Alberta, Canada.
- Author
- Anna Pujadas Botey, Paula J. Robson, Adam M. Hardwicke-Brown, Dorothy M. Rodehutskors, Barbara M. O'Neill, and Douglas A. Stewart
- Subjects
- Medicine, Science
- Abstract
Background. Significant intervals from the identification of suspicious symptoms to a definitive diagnosis of cancer are common. Streamlining pathways to diagnosis may increase survival, quality of life post-treatment, and patient experience. Discussions of pathways to diagnosis from the perspective of patients and family members are crucial to advancing cancer diagnosis. Aim. To examine the perspectives of a group of patients with cancer and family members in Alberta, Canada, on factors associated with timelines to diagnosis and overall experience. Methods. A qualitative approach was used. In-depth, semi-structured interviews with patients with cancer (n = 18) and patient relatives (n = 5) were conducted and subjected to a thematic analysis. Findings. Participants struggled emotionally in the diagnostic period. Relevant to their experience were potentially avoidable delays, concerns about health status, and a misunderstood investigation process. Participants emphasized the importance of their active involvement in the care process, and had unmet supportive care needs. Conclusion. Psychosocial supports available to potential cancer patients and their families are minimal, and may be important for improved experiences before diagnosis. Access to other patients' lived experiences with the diagnostic process and with cancer, and an enhanced supportive role for family doctors, might help improve experiences for patients and families in the interval before receiving a diagnosis of cancer, which may have a significant impact on wellbeing.
- Published
- 2020
- Full Text
- View/download PDF
10. Populating the Data Ark: An attempt to retrieve, preserve, and liberate data from the most highly-cited psychology and psychiatry articles.
- Author
- Tom E. Hardwicke and John P. A. Ioannidis
- Subjects
- Medicine, Science
- Abstract
The vast majority of scientific articles published to date have not been accompanied by concomitant publication of the underlying research data upon which they are based. This state of affairs precludes the routine re-use and re-analysis of research data, undermining the efficiency of the scientific enterprise, and compromising the credibility of claims that cannot be independently verified. It may be especially important to make data available for the most influential studies that have provided a foundation for subsequent research and theory development. Therefore, we launched an initiative, the Data Ark, to examine whether we could retrospectively enhance the preservation and accessibility of important scientific data. Here we report the outcome of our efforts to retrieve, preserve, and liberate data from 111 of the most highly-cited articles published in psychology and psychiatry in 2006–2011 (n = 48) and 2014–2016 (n = 63). Most data sets were not made available (76/111, 68%, 95% CI [60, 77]), some were only made available with restrictions (20/111, 18%, 95% CI [10, 27]), and few were made available in a completely unrestricted form (15/111, 14%, 95% CI [5, 22]). Where extant data sharing systems were in place, they usually (17/22, 77%, 95% CI [54, 91]) did not allow unrestricted access. Authors reported several barriers to data sharing, including issues related to data ownership and ethical concerns. The Data Ark initiative could help preserve and liberate important scientific data, surface barriers to data sharing, and advance community discussions on data stewardship.
- Published
- 2018
- Full Text
- View/download PDF
11. Data availability, reusability, and analytic reproducibility: evaluating the impact of a mandatory open data policy at the journal Cognition
- Author
- Tom E. Hardwicke, Maya B. Mathur, Kyle MacDonald, Gustav Nilsonne, George C. Banks, Mallory C. Kidwell, Alicia Hofelich Mohr, Elizabeth Clayton, Erica J. Yoon, Michael Henry Tessler, Richie L. Lenne, Sara Altman, Bria Long, and Michael C. Frank
- Subjects
- open data, reproducibility, open science, meta-science, interrupted time series, journal policy, Science
- Abstract
Access to data is a critical feature of an efficient, progressive and ultimately self-correcting scientific ecosystem. But the extent to which in-principle benefits of data sharing are realized in practice is unclear. Crucially, it is largely unknown whether published findings can be reproduced by repeating reported analyses upon shared data (‘analytic reproducibility’). To investigate this, we conducted an observational evaluation of a mandatory open data policy introduced at the journal Cognition. Interrupted time-series analyses indicated a substantial post-policy increase in data availability statements (104/417, 25% pre-policy to 136/174, 78% post-policy), although not all data appeared reusable (23/104, 22% pre-policy to 85/136, 62% post-policy). For 35 of the articles determined to have reusable data, we attempted to reproduce 1324 target values. Ultimately, 64 values could not be reproduced within a 10% margin of error. For 22 articles all target values were reproduced, but 11 of these required author assistance. For 13 articles at least one value could not be reproduced despite author assistance. Importantly, there were no clear indications that original conclusions were seriously impacted. Mandatory open data policies can increase the frequency and quality of data sharing. However, suboptimal data curation, unclear analysis specification and reporting errors can impede analytic reproducibility, undermining the utility of data sharing and the credibility of scientific findings.
- Published
- 2018
- Full Text
- View/download PDF
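The interrupted time-series analysis mentioned in this record is commonly implemented as a segmented regression that lets both the level and the slope of the outcome change at the intervention date. The sketch below shows that generic setup on simulated data; the variable names and numbers are illustrative assumptions, not the authors' model or data.

```python
# Generic segmented-regression sketch of an interrupted time series:
# 'policy' captures the immediate level change at the policy date and
# 'months_after' captures the change in slope. Simulated data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = np.arange(48)                    # four years of monthly bins
policy = (months >= 24).astype(int)       # 1 once the open data policy is in force
months_after = np.where(policy == 1, months - 24, 0)

# Simulated monthly proportion of articles with a data availability statement.
rate = 0.25 + 0.001 * months + 0.40 * policy + 0.005 * months_after
rate = np.clip(rate + rng.normal(0, 0.03, months.size), 0, 1)

df = pd.DataFrame({"rate": rate, "months": months,
                   "policy": policy, "months_after": months_after})

fit = smf.ols("rate ~ months + policy + months_after", data=df).fit()
print(fit.params)  # level change ('policy') and slope change ('months_after')
```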