4,110 results
Search Results
2. Semi-automated assessment of the risk of bias due to missing evidence in network meta-analysis: a guidance paper for the ROB-MEN web-application
- Author
-
Chiocchia, Virginia, Holloway, Alexander, and Salanti, Georgia
- Published
- 2023
- Full Text
- View/download PDF
3. Correction: Impact of sampling and data collection methods on maternity survey response: a randomised controlled trial of paper and push-to-web surveys and a concurrent social media survey
- Author
-
Harrison, Siân, Alderdice, Fiona, and Quigley, Maria A.
- Published
- 2023
- Full Text
- View/download PDF
4. Correction: Impact of sampling and data collection methods on maternity survey response: a randomised controlled trial of paper and push‑to‑web surveys and a concurrent social media survey.
- Author
-
Harrison S, Alderdice F, and Quigley MA
- Published
- 2024
- Full Text
- View/download PDF
5. Going web or staying paper? The use of web-surveys among older people
- Author
-
Kelfve, Susanne, Kivi, Marie, Johansson, Boo, and Lindwall, Magnus
- Published
- 2020
- Full Text
- View/download PDF
6. Agreement between the Schedule for the Evaluation of Individual Quality of Life-Direct Weighting (SEIQoL-DW) interview and a paper-administered adaption
- Author
-
Burckhardt, Marion, Fleischer, Steffen, and Berg, Almuth
- Published
- 2020
- Full Text
- View/download PDF
7. A review of the use of propensity score diagnostics in papers published in high-ranking medical journals
- Author
-
Granger, Emily, Watkins, Tim, Sergeant, Jamie C., and Lunt, Mark
- Published
- 2020
- Full Text
- View/download PDF
8. GPT-4 performance on querying scientific publications: reproducibility, accuracy, and impact of an instruction sheet.
- Author
-
Tao, Kaiming, Osman, Zachary A., Tzou, Philip L., Rhee, Soo-Yon, Ahluwalia, Vineet, and Shafer, Robert W.
- Subjects
GENERATIVE pre-trained transformers, LANGUAGE models, ANTI-HIV agents
- Abstract
Background: Large language models (LLMs) that can efficiently screen and identify studies meeting specific criteria would streamline literature reviews. Additionally, those capable of extracting data from publications would enhance knowledge discovery by reducing the burden on human reviewers. Methods: We created an automated pipeline utilizing the OpenAI GPT-4 32K API version "2023-05-15" to evaluate the accuracy of the LLM GPT-4 responses to queries about published papers on HIV drug resistance (HIVDR) with and without an instruction sheet. The instruction sheet contained specialized knowledge designed to assist a person trying to answer questions about an HIVDR paper. We designed 60 questions pertaining to HIVDR and created markdown versions of 60 published HIVDR papers in PubMed. We presented the 60 papers to GPT-4 in four configurations: (1) all 60 questions simultaneously; (2) all 60 questions simultaneously with the instruction sheet; (3) each of the 60 questions individually; and (4) each of the 60 questions individually with the instruction sheet. Results: GPT-4 achieved a mean accuracy of 86.9%, 24.0% higher than when the answers to papers were permuted. The overall recall and precision were 72.5% and 87.4%, respectively. The standard deviation of three replicates for the 60 questions ranged from 0 to 5.3% with a median of 1.2%. The instruction sheet did not significantly increase GPT-4's accuracy, recall, or precision. GPT-4 was more likely to provide false positive answers when the 60 questions were submitted individually compared to when they were submitted together. Conclusions: GPT-4 reproducibly answered 3600 questions about 60 papers on HIVDR with moderately high accuracy, recall, and precision. The instruction sheet's failure to improve these metrics suggests that more sophisticated approaches are necessary. 
Either enhanced prompt engineering or finetuning an open-source model could further improve an LLM's ability to answer questions about highly specialized HIVDR papers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
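The accuracy, recall, and precision figures in the abstract above imply a per-question grading step. The sketch below shows one way such metrics might be tallied; the rubric (an answer counts as a true positive when the gold standard expects one, a false positive when it does not) and the demo data are assumptions for illustration, not the paper's published protocol.

```python
# Hypothetical grading rubric for per-question LLM answers, graded
# against a gold standard. None means "no answer given/expected".

def grade(answers):
    """answers: list of (model_answer, gold_answer) pairs."""
    tp = fp = fn = tn = correct = 0
    for model, gold in answers:
        if model is not None and gold is not None:
            tp += 1
            correct += int(model == gold)   # answered where an answer exists
        elif model is not None:
            fp += 1                         # answered where none exists
        elif gold is not None:
            fn += 1                         # missed an expected answer
        else:
            tn += 1
            correct += 1                    # correctly abstained
    return {
        "accuracy": correct / len(answers),
        "recall": tp / (tp + fn),
        "precision": tp / (tp + fp),
    }

demo = [("A", "A"), ("B", "B"), ("C", "D"), ("E", None), (None, "F"), (None, None)]
print(grade(demo))
```

Note that under this rubric recall and precision measure whether the model answered at all, while accuracy additionally requires the answer's content to match, mirroring the separation of the three metrics in the abstract.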
9. An experimental comparison of web-push vs. paper-only survey procedures for conducting an in-depth health survey of military spouses.
- Author
-
McMaster HS, LeardMann CA, Speigle S, and Dillman DA
- Subjects
- Adolescent, Adult, Cohort Studies, Data Collection methods, Female, Humans, Male, Paper, Postal Service, Spouses, Young Adult, Health Surveys methods, Internet, Military Personnel
- Abstract
Background: Previous research has found that a "web-push" approach to data collection, which involves contacting people by mail to request an Internet survey response while withholding a paper response option until later in the contact process, consistently achieves lower response rates than a "paper-only" approach, whereby all respondents are contacted and requested to respond by mail. Method: An experiment was designed, as part of the Millennium Cohort Family Study, to compare response rates, sample representativeness, and cost between a web-push and a paper-only approach; each approach comprised 3 stages of mail contacts. The invited sample (n = 4,935) consisted of spouses married to U.S. Service members, who had been serving in the military between 2 and 5 years as of October 2011. Results: The web-push methodology produced a significantly higher response rate, 32.8% compared to 27.8%. Each of the 3 stages of postal contact significantly contributed to response for both treatments, with 87.1% of the web-push responses received over the Internet. The per-respondent cost of the paper-only treatment was almost 40% higher than that of the web-push treatment group. Analyses revealed no meaningfully significant differences between treatment groups in representation. Conclusion: These results provide evidence that a web-push methodology is more effective and less expensive than a paper-only approach among young military spouses, perhaps due to their heavy reliance on the internet, and we suggest that this approach may be more effective with the general population as they become more uniformly internet savvy.
- Published
- 2017
- Full Text
- View/download PDF
10. Documenting patients' and providers' preferences when proposing a randomized controlled trial: a qualitative exploration.
- Author
-
Oberoi, Devesh, Kwok, Cynthia, Li, Yong, Railton, Cindy, Horsman, Susan, Reynolds, Kathleen, Joy, Anil A., King, Karen Marie, Lupichuk, Sasha Michelle, Speca, Michael, Culos-Reed, Nicole, Carlson, Linda E., and Giese-Davis, Janine
- Subjects
RANDOMIZED controlled trials, ELECTRONIC paper, BREAST cancer, ADVERSE childhood experiences, THEMATIC analysis
- Abstract
Background: With advances in cancer diagnosis and treatment, women with early-stage breast cancer (ESBC) are living longer, increasing the number of patients receiving post-treatment follow-up care. Best-practice survivorship models recommend transitioning ESBC patients from oncology-provider (OP) care to community-based care. While developing materials for a future randomized controlled trial (RCT) to test the feasibility of a nurse-led Telephone Survivorship Clinic (TSC) for a smooth transition of ESBC survivors to follow-up care, we explored patients' and OPs' reactions to several of our proposed methods. Methods: We used a qualitative study design with thematic analysis and a two-pronged approach. We interviewed OPs, seeking feedback on ways to recruit their ESBC patients for the trial, and ESBC patients, seeking input on a questionnaire package assessing outcomes and processes in the trial. Results: OPs identified facilitators and barriers and offered suggestions for study design and recruitment process improvement. Facilitators included the novelty and utility of the study and the simplicity of its methods; barriers included lack of coordination between treating and discharging clinicians, time constraints, language barriers, motivation, and using a paper-based referral letter. OPs suggested using a combination of electronic and paper referral letters and supporting clinicians to help with recruitment. Patient advisors reported satisfaction with the content and length of the assessment package. However, they questioned the relevance of some questions (childhood trauma) while adding questions about trust in physicians and proximity to primary-care providers. Conclusions: OPs and patient advisors rated our methods for the proposed trial highly for their simplicity and relevance, then suggested changes. These findings document processes that could be effective for cancer-patient recruitment in survivorship clinical trials. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
11. Agreement between the Schedule for the Evaluation of Individual Quality of Life-Direct Weighting (SEIQoL-DW) interview and a paper-administered adaption
- Author
-
Almuth Berg, Steffen Fleischer, and Marion Burckhardt
- Subjects
Quality of life, Male, Schedule, Psychometrics, Epidemiology, Health Informatics, Clinical trials, Germany, Surveys and Questionnaires, Statistics, Humans, Mathematics, Aged, Data Collection, Cardiac surgery, Crossover study, Patient reported outcome measures, Confidence interval, Humanities, Weighting, Female, SEIQoL-DW, Research Article
- Abstract
Background The Schedule for the Evaluation of Individual Quality of Life-Direct Weighting (SEIQoL-DW) is a prevalent face-to-face interview method for measuring quality of life by integrating respondent-generated dimensions. To apply this method in clinical trials, a paper-administered alternative would be of interest. Therefore, our study aimed to analyze the agreement between the SEIQoL-DW and a paper questionnaire version (SEIQoL-PF/G). Methods In a crossover design, both measures were completed in a random sequence. 104 patients at a heart surgery hospital in Germany were randomly assigned to receive either the SEIQoL-DW or the SEIQoL-PF/G as the first measurement in the sequence. Patients were approached on their earliest stable day after surgery. The average time between both measurements was 1 day (mean 1.3; SD 0.8). Agreement regarding the indices, ratings, and weightings of nominated life areas (cues) was explored using Bland-Altman plots with 95% limits of agreement (LoA). Agreement of the SEIQoL indices was defined as acceptable if the LoA did not exceed a threshold of 10 scale points. Data from n = 99 patients were included in the agreement analysis. Results Both measures led to similarly nominated cues. The most frequently nominated cues were “physical health” and “family”. In the Bland-Altman plot, the indices showed a mean of differences of 2 points (95% CI, − 1 to 6). The upper LoA showed a difference of 36 points (95% CI, 30 to 42), and the lower LoA showed a difference of − 31 points (95% CI, − 37 to − 26). Thus, the LoAs and confidence intervals exceeded the predefined threshold. The Bland-Altman plots for the cue levels and cue weights showed similar results. The SEIQoL-PF/G version showed a tendency for equal weighting of cues, while the weighting procedure of the SEIQoL-DW led to greater variability. Conclusions For cardiac surgery patients, use of the current version of the SEIQoL-PF/G as a substitute for the SEIQoL-DW is not recommended. 
The current questionnaire weighting method seems to be unable to distinguish weighting for different cues. Therefore, the further design of a weighting method without interviewer support as a paper-administered measure of individual quality of life is desirable.
- Published
- 2020
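The agreement analysis described above rests on Bland-Altman 95% limits of agreement: the mean of the paired differences plus or minus 1.96 times their standard deviation. A minimal sketch of that calculation follows; the paired index scores are invented for illustration, not the study's data.

```python
# Bland-Altman bias and 95% limits of agreement (LoA) for paired scores.
import statistics

def bland_altman_limits(scores_a, scores_b):
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)              # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

interview = [70, 65, 80, 55, 60]              # hypothetical SEIQoL-DW indices
paper =     [68, 66, 75, 57, 59]              # hypothetical SEIQoL-PF/G indices
bias, lower, upper = bland_altman_limits(interview, paper)
print(f"bias {bias:.1f}, LoA [{lower:.1f}, {upper:.1f}]")
```

In the study's terms, the resulting limits would then be checked against the predefined 10-scale-point acceptability threshold.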
12. A systematic review of simulation studies which compare existing statistical methods to account for non-compliance in randomised controlled trials.
- Author
-
Abell, Lucy, Maher, Francesca, Jennings, Angus C, and Gray, Laura J
- Subjects
RANDOMIZED controlled trials, NONCOMPLIANCE, DATA extraction, DATA integrity, RESEARCH personnel
- Abstract
Introduction: Non-compliance is a common challenge for researchers and may reduce the power of an intention-to-treat analysis. Whilst a per protocol approach attempts to deal with this issue, it can result in biased estimates. Several methods to resolve this issue have been identified in previous reviews, but there is limited evidence supporting their use. This review aimed to identify simulation studies which compare such methods, assess the extent to which certain methods have been investigated and determine their performance under various scenarios. Methods: A systematic search of several electronic databases including MEDLINE and Scopus was carried out from conception to 30th November 2022. Included papers were published in a peer-reviewed journal, readily available in the English language and focused on comparing relevant methods in a superiority randomised controlled trial under a simulation study. Articles were screened using these criteria and a predetermined extraction form used to identify relevant information. A quality assessment appraised the risk of bias in individual studies. Extracted data was synthesised using tables, figures and a narrative summary. Both screening and data extraction were performed by two independent reviewers with disagreements resolved by consensus. Results: Of 2325 papers identified, 267 full texts were screened and 17 studies finally included. Twelve methods were identified across papers. Instrumental variable methods were commonly considered, but many authors found them to be biased in some settings. Non-compliance was generally assumed to be all-or-nothing and only occurring in the intervention group, although some methods considered it as time-varying. Simulation studies commonly varied the level and type of non-compliance and factors such as effect size and strength of confounding. The quality of papers was generally good, although some lacked detail and justification. 
Therefore, their conclusions were deemed to be less reliable. Conclusions: It is common for papers to consider instrumental variable methods but more studies are needed that consider G-methods and compare a wide range of methods in realistic scenarios. It is difficult to make conclusions about the best method to deal with non-compliance due to a limited body of evidence and the difficulty in combining results from independent simulation studies. PROSPERO registration number: CRD42022370910. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
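The abstract above contrasts per-protocol analyses with instrumental-variable methods under all-or-nothing non-compliance occurring only in the intervention group. The toy simulation below (all parameters invented) illustrates why: the per-protocol contrast is confounded by an unmeasured frailty variable that drives compliance, while the simple Wald/CACE estimator, ITT effect divided by compliance rate, recovers the true treatment effect.

```python
# Toy RCT with all-or-nothing non-compliance in the intervention arm only.
import math
import random

random.seed(42)
N = 100_000
TRUE_EFFECT = 2.0

y_ctrl, y_arm, y_received, received = [], [], [], []
for _ in range(N):
    frailty = random.gauss(0, 1)                 # unmeasured confounder
    if random.random() < 0.5:                    # randomised to intervention
        # sicker (higher-frailty) patients comply less often
        complies = random.random() < 1 / (1 + math.exp(frailty))
        y = frailty + TRUE_EFFECT * complies + random.gauss(0, 1)
        y_arm.append(y)
        received.append(complies)
        if complies:
            y_received.append(y)
    else:                                        # control arm, no access
        y_ctrl.append(frailty + random.gauss(0, 1))

mean = lambda xs: sum(xs) / len(xs)
itt = mean(y_arm) - mean(y_ctrl)                 # intention-to-treat contrast
compliance = mean([float(c) for c in received])
cace = itt / compliance                          # instrumental-variable (Wald)
per_protocol = mean(y_received) - mean(y_ctrl)   # biased: compliers are healthier
print(f"ITT {itt:.2f}  IV/CACE {cace:.2f}  per-protocol {per_protocol:.2f}")
```

Randomisation balances frailty across arms, so the ITT contrast is unconfounded and the Wald ratio lands near the true effect of 2.0; restricting to compliers does not.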
13. Mobile electronic versus paper case report forms in clinical trials: a randomized controlled trial.
- Author
-
Fleischmann R, Decker AM, Kraft A, Mai K, and Schmidt S
- Subjects
- Adult, Data Accuracy, Female, Humans, Male, Middle Aged, Quality Improvement, Weight Loss, Electronic Health Records
- Abstract
Background: Regulations, study design complexity and amounts of collected and shared data in clinical trials render efficient data handling procedures inevitable. Recent research suggests that electronic data capture can be key in this context, but evidence is insufficient. This randomized controlled parallel group study tested the hypothesis that time efficiency is superior when electronic (eCRF) instead of paper case report forms (pCRF) are used for data collection. We additionally investigated predictors of time saving effects and data integrity. Methods: This study was conducted on top of a clinical weight loss trial performed at a clinical research facility over six months. All study nurses and patients participating in the clinical trial were eligible to participate and randomly allocated to enter cross-sectional data obtained during routine visits either through pCRF or eCRF. A balanced randomization list was generated before enrolment commenced. 90 and 30 records were gathered for the time that 27 patients and 2 study nurses required to report 2025 and 2037 field values, respectively. The primary hypothesis, that eCRF use is faster than pCRF use, was tested by a two-tailed t-test. Analysis of variance and covariance were used to evaluate predictors of entry performance. Data integrity was evaluated by descriptive statistics. Results: All randomized patients were included in the study (eCRF group n = 13, pCRF group n = 14). eCRF data collection, as compared to pCRF, was associated with significant time savings across all conditions (8.29 ± 5.15 min vs. 10.54 ± 6.98 min, p = .047). This effect was not dependent on participant type, i.e. patients or study nurses (F(1,112) = .15, p = .699), CRF length (F(2,112) = .49, p = .609) or patient age (Beta = .09, p = .534). An additional 5.16 ± 2.83 min per CRF were saved with eCRFs due to data transcription redundancy when patients answered questionnaires directly in eCRFs. Data integrity was superior in the eCRF condition (0 versus 3 data entry errors). Conclusions: This is the first study to prove in direct comparison that using eCRFs instead of pCRFs increases time efficiency of data collection in clinical trials, irrespective of item quantity or patient age, and improves data quality. Trial Registration: Clinical Trials NCT02649907.
- Published
- 2017
- Full Text
- View/download PDF
14. Data visualisation approaches for component network meta-analysis: visualising the data structure.
- Author
-
Freeman, Suzanne C., Saeedi, Elnaz, Ordóñez-Mena, José M., Nevill, Clareece R., Hartmann-Boyce, Jamie, Caldwell, Deborah M., Welton, Nicky J., Cooper, Nicola J., and Sutton, Alex J.
- Subjects
DATA structures, RANDOMIZED controlled trials
- Abstract
Background: Health and social care interventions are often complex and can be decomposed into multiple components. Multicomponent interventions are often evaluated in randomised controlled trials. Across trials, interventions often have components in common which are given alongside other components which differ across trials. Multicomponent interventions can be synthesised using component NMA (CNMA). CNMA is limited by the structure of the available evidence, but it is not always straightforward to visualise such complex evidence networks. The aim of this paper is to develop tools to visualise the structure of complex evidence networks to support CNMA. Methods: We performed a citation review of two key CNMA methods papers to identify existing published CNMA analyses and reviewed how they graphically represent intervention complexity and comparisons across trials. Building on identified shortcomings of existing visualisation approaches, we propose three approaches to standardise visualising the data structure and/or availability of data: CNMA-UpSet plot, CNMA heat map, CNMA-circle plot. We use a motivating example to illustrate these plots. Results: We identified 34 articles reporting CNMAs. A network diagram was the most common plot type used to visualise the data structure for CNMA (26/34 papers), but was unable to express the complex data structures and large number of components and potential combinations of components associated with CNMA. Therefore, we focused visualisation development around representing the data structure of a CNMA more completely. The CNMA-UpSet plot presents arm-level data and is suitable for networks with large numbers of components or combinations of components. Heat maps can be utilised to inform decisions about which pairwise interactions to consider for inclusion in a CNMA model. 
The CNMA-circle plot visualises the combinations of components which differ between trial arms and offers flexibility in presenting additional information such as the number of patients experiencing the outcome of interest in each arm. Conclusions: As CNMA becomes more widely used for the evaluation of multicomponent interventions, the novel CNMA-specific visualisations presented in this paper, which improve on the limitations of existing visualisations, will be important to aid understanding of the complex data structure and facilitate interpretation of the CNMA results. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. Features of databases that supported searching for rapid evidence synthesis during COVID-19: implications for future public health emergencies.
- Author
-
Hagerman, Leah, Clark, Emily C., Neil-Sztramko, Sarah E., Colangeli, Taylor, and Dobbins, Maureen
- Subjects
COVID-19 pandemic, LEGAL evidence, KNOWLEDGE management, EMERGENCY contraceptives, PUBLIC health, DATABASES
- Abstract
Background: As evidence related to the COVID-19 pandemic surged, databases, platforms, and repositories evolved with features and functions to assist users in promptly finding the most relevant evidence. In response, research synthesis teams adopted novel searching strategies to sift through the vast amount of evidence to synthesize and disseminate the most up-to-date evidence. This paper explores the key database features that facilitated systematic searching for rapid evidence synthesis during the COVID-19 pandemic to inform knowledge management infrastructure during future global health emergencies. Methods: This paper outlines the features and functions of previously existing and newly created evidence sources routinely searched as part of the NCCMT's Rapid Evidence Service methods, including databases, platforms, and repositories. Specific functions of each evidence source were assessed as they pertain to searching in the context of a public health emergency, including the topics of indexed citations, the level of evidence of indexed citations, and specific usability features of each evidence source. Results: Thirteen evidence sources were assessed, of which four were newly created and nine were either pre-existing or adapted from previously existing resources. Evidence sources varied in topics indexed, level of evidence indexed, and specific searching functions. Conclusion: This paper offers insights into which features enabled systematic searching for the completion of rapid reviews to inform decision makers within 5–10 days. These findings provide guidance for knowledge management strategies and evidence infrastructures during future public health emergencies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. A methodological systematic review of what's wrong with meta-ethnography reporting.
- Author
-
France, Emma F., Ring, Nicola, Thomas, Rebecca, Noyes, Jane, Maxwell, Margaret, and Jepson, Ruth
- Subjects
META-analysis, SYSTEMATIC reviews, ETHNOLOGY, PUBLIC health research, MEDICAL research, CONCEPTUAL models
- Abstract
Background Syntheses of qualitative studies can inform health policy, services and our understanding of patient experience. Meta-ethnography is a systematic seven-phase interpretive qualitative synthesis approach well-suited to producing new theories and conceptual models. However, there are concerns about the quality of meta-ethnography reporting, particularly the analysis and synthesis processes. Our aim was to investigate the application and reporting of methods in recent meta-ethnography journal papers, focusing on the analysis and synthesis process and output. Methods Methodological systematic review of health-related meta-ethnography journal papers published from 2012-2013. We searched six electronic databases, Google Scholar and Zetoc for papers using key terms including 'meta-ethnography.' Two authors independently screened papers by title and abstract with 100% agreement. We identified 32 relevant papers. Three authors independently extracted data and all authors analysed the application and reporting of methods using content analysis. Results Meta-ethnography was applied in diverse ways, sometimes inappropriately. In 13% of papers the approach did not suit the research aim. In 66% of papers reviewers did not follow the principles of meta-ethnography. The analytical and synthesis processes were poorly reported overall. In only 31% of papers reviewers clearly described how they analysed conceptual data from primary studies (phase 5, 'translation' of studies) and in only one paper (3%) reviewers explicitly described how they conducted the analytic synthesis process (phase 6). In 38% of papers we could not ascertain if reviewers had achieved any new interpretation of primary studies. In over 30% of papers seminal methodological texts which could have informed methods were not cited. Conclusions We believe this is the first in-depth methodological systematic review of meta-ethnography conduct and reporting. Meta-ethnography is an evolving approach. 
Current reporting of methods, analysis and synthesis lacks clarity and comprehensiveness. This is a major barrier to use of meta-ethnography findings that could contribute significantly to the evidence base because it makes judging their rigour and credibility difficult. To realise the high potential value of meta-ethnography for enhancing health care and understanding patient experience requires reporting that clearly conveys the methodology, analysis and findings. Tailored meta-ethnography reporting guidelines, developed through expert consensus, could improve reporting. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
17. survextrap: a package for flexible and transparent survival extrapolation.
- Author
-
Jackson, Christopher H.
- Subjects
FLEXIBLE packaging, EXTRAPOLATION, MEDICAL registries, TECHNOLOGY assessment, MEDICAL technology
- Abstract
Background: Health policy decisions are often informed by estimates of long-term survival based primarily on short-term data. A range of methods are available to include longer-term information, but there has previously been no comprehensive and accessible tool for implementing these. Results: This paper introduces a novel model and software package for parametric survival modelling of individual-level, right-censored data, optionally combined with summary survival data on one or more time periods. It could be used to estimate long-term survival based on short-term data from a clinical trial, combined with longer-term disease registry or population data, or elicited judgements. All data sources are represented jointly in a Bayesian model. The hazard is modelled as an M-spline function, which can represent potential changes in the hazard trajectory at any time. Through Bayesian estimation, the model automatically adapts to fit the available data, and acknowledges uncertainty where the data are weak. Therefore long-term estimates are only confident if there are strong long-term data, and inferences do not rely on extrapolating parametric functions learned from short-term data. The effects of treatment or other explanatory variables can be estimated through proportional hazards or with a flexible non-proportional hazards model. Some commonly-used mechanisms for survival can also be assumed: cure models, additive hazards models with known background mortality, and models where the effect of a treatment wanes over time. All of these features are provided for the first time in an R package, survextrap, in which models can be fitted using standard R survival modelling syntax. This paper explains the model, and demonstrates the use of the package to fit a range of models to common forms of survival data used in health technology assessments. Conclusions: This paper has provided a tool that makes comprehensive and principled methods for survival extrapolation easily usable. 
[ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
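survextrap is an R package; as a language-neutral illustration of the relationship it builds on, the sketch below recovers survival from an arbitrary hazard via S(t) = exp(-∫₀ᵗ h(u) du), using a hazard whose trajectory changes after trial follow-up ends. The piecewise hazard values are invented, and this is not the package's M-spline model or API.

```python
# Survival from a hazard function by numerical integration of the
# cumulative hazard (trapezoid rule), then exponentiation.
import math

def survival(hazard, t, steps=10_000):
    dt = t / steps
    cum, prev = 0.0, hazard(0.0)
    for i in range(1, steps + 1):
        cur = hazard(i * dt)
        cum += 0.5 * (prev + cur) * dt   # trapezoid rule
        prev = cur
    return math.exp(-cum)

# hypothetical hazard: 0.2 events/year during 2 years of trial data,
# then 0.1 events/year long term (e.g. informed by registry data)
h = lambda t: 0.2 if t < 2.0 else 0.1
print(round(survival(h, 5.0), 3))
```

The point mirrored from the abstract is that long-term survival estimates hinge entirely on what the hazard is assumed or learned to do beyond the observed data.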
18. Response is increased using postal rather than electronic questionnaires – new results from an updated Cochrane Systematic Review.
- Author
-
Edwards, Phil and Perkins, Chloe
- Subjects
RANDOMIZED controlled trials, SELF-evaluation, INTERNET access, POLAR effects (Chemistry), RESEARCH personnel
- Abstract
Background: A decade ago, paper questionnaires were more common in epidemiology than those administered online, but increasing Internet access may have changed this. Researchers planning to use a self-administered questionnaire should know whether response rates to questionnaires administered electronically differ from those of questionnaires administered by post. We analysed trials included in a recently updated Cochrane Review to answer this question. Methods: We exported data from randomised controlled trials included in three comparisons in the Cochrane Review that had evaluated hypotheses relevant to our research objective and imported them into Stata for a series of meta-analyses not conducted in the Cochrane review. We pooled odds ratios for response using random effects meta-analyses. We explored causes of heterogeneity among study results using subgroups. We assessed evidence for reporting bias using Harbord's modified test for small-study effects. Results: Twenty-seven trials (66,118 participants) evaluated the effect on response of an electronic questionnaire compared with postal. Results were heterogeneous (I-squared = 98%). There was evidence for biased (greater) effect estimates in studies at high risk of bias; a synthesis of studies at low risk of bias indicates that response was increased (OR = 1.43; 95% CI 1.08–1.89) using postal questionnaires. Ten trials (39,523 participants) evaluated the effect of providing a choice of mode (postal or electronic) compared to an electronic questionnaire only. Response was increased with a choice of mode (OR = 1.63; 95% CI 1.18–2.26). Eight trials (20,909 participants) evaluated the effect of a choice of mode (electronic or postal) compared to a postal questionnaire only. There was no evidence for an effect on response of a choice of mode compared with postal only (OR = 0.94; 95% CI 0.86–1.02). Conclusions: Postal questionnaires should be used in preference to, or offered in addition to, electronic modes. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
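The review above pools odds ratios with random-effects meta-analysis. A minimal sketch of the standard DerSimonian-Laird estimator on the log-OR scale follows; the per-trial inputs are invented, not the review's data.

```python
# DerSimonian-Laird random-effects pooling of log odds ratios.
import math

def pool_random_effects(log_ors, variances):
    w = [1.0 / v for v in variances]                              # inverse-variance weights
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))   # Cochran's Q
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_ors) - 1)) / c)                 # between-trial variance
    w_re = [1.0 / (v + tau2) for v in variances]                  # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return (math.exp(pooled),
            (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)))

or_hat, (lo, hi) = pool_random_effects([0.2, 0.5, 0.35, 0.6], [0.05, 0.08, 0.04, 0.1])
print(f"pooled OR {or_hat:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

When the trials are homogeneous, tau-squared collapses to zero and the estimator reduces to the fixed-effect inverse-variance pool, which is a useful sanity check.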
19. Recruiting people with selected citizenships for the health interview survey GEDA Fokus throughout Germany: evaluation of recruitment efforts and recommendations for future research.
- Author
-
Koschollek, Carmen, Gaertner, Beate, Geerlings, Julia, Kuhnert, Ronny, Mauz, Elvira, and Hövener, Claudia
- Subjects
PUBLIC opinion polls, HEALTH behavior, TELEPHONE interviewing, MENTAL depression, POPULATION health
- Abstract
Background: Germany is the second most common country of immigration after the US. However, people with their own or a familial history of migration are not represented proportionately to the population within public health monitoring and reporting. To bridge this data gap and enable differentiated analyses on migration and health, we conducted the health interview survey GEDA Fokus among adults with Croatian, Italian, Polish, Syrian, or Turkish citizenship living throughout Germany. The aim of this paper is to evaluate the effects of recruitment efforts regarding participation and sample composition. Methods: Data collection for this cross-sectional and multilingual survey took place between 11/2021 and 5/2022 utilizing a sequential mixed-mode design, including self-administered web- and paper-based questionnaires as well as face-to-face and telephone interviews. The gross sample (n = 33436; age range 18–79 years) was randomly drawn from the residents' registers in 120 primary sampling units based on citizenship. Outcome rates according to the American Association for Public Opinion Research, the sample composition throughout the multistage recruitment process, utilization of survey modes, and questionnaire languages are presented. Results: Overall, 6038 persons participated, which corresponded to a response rate of 18.4% (range: 13.8% for Turkish citizenship to 23.9% for Syrian citizenship). Home visits accounted for the largest single increase in response. During recruitment, more female and older participants, as well as participants with lower levels of education and income, took part in the survey. People with physical health problems and less favourable health behaviour more often took part in the survey at a later stage, while participants with symptoms of depression or anxiety more often participated early. Utilization of survey modes and questionnaire languages differed by sociodemographic and migration-related characteristics, e.g. 
participants aged 50 years and above more often used paper- than web-based questionnaires and those with a shorter duration of residence more often used a translated questionnaire. Conclusion: Multiple contact attempts, including home visits and different survey languages, as well as offering different modes of survey administration, increased response rates and most likely reduced non-response bias. In order to adequately represent and include the diversifying population in public health monitoring, national public health institutes should tailor survey designs to meet the needs of different population groups considered hard to survey to enable their survey participation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Addressing researcher degrees of freedom through minP adjustment.
- Author
-
Mandl, Maximilian M., Becker-Pennrich, Andrea S., Hinske, Ludwig C., Hoffmann, Sabine, and Boulesteix, Anne-Laure
- Subjects
RESEARCH personnel ,DEGREES of freedom ,FALSE positive error ,BONFERRONI correction ,SURGICAL complications - Abstract
When different researchers study the same research question using the same dataset, they may obtain different and potentially even conflicting results. This is because there is often substantial flexibility in researchers' analytical choices, an issue also referred to as "researcher degrees of freedom". Combined with selective reporting of the smallest p-value or largest effect, researcher degrees of freedom may lead to an increased rate of false positive and overoptimistic results. In this paper, we address this issue by formalizing the multiplicity of analysis strategies as a multiple testing problem. As the test statistics of different analysis strategies are usually highly dependent, a naive approach such as the Bonferroni correction is inappropriate because it leads to an unacceptable loss of power. Instead, we propose using the "minP" adjustment method, which takes potential test dependencies into account and approximates the underlying null distribution of the minimal p-value through a permutation-based procedure. This procedure is known to achieve more power than simpler approaches while ensuring weak control of the family-wise error rate. We illustrate our approach for addressing researcher degrees of freedom by applying it to a study on the impact of perioperative paO2 on post-operative complications after neurosurgery. A total of 48 analysis strategies are considered and adjusted using the minP procedure. This approach allows one to selectively report the result of the analysis strategy yielding the most convincing evidence, while controlling the type 1 error rate, and thus the risk of publishing false positive results that may not be replicable. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
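The permutation-based minP adjustment described in entry 20 can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes two hypothetical analysis strategies (absolute difference in means and in medians) applied to the same two-group dataset, and adjusts the smallest observed p-value against the permutation null distribution of the minimal p-value (Westfall–Young style), which preserves the dependence between strategies because every strategy is evaluated on the same shuffles.

```python
import bisect
import random
import statistics

def minp_adjust(x, y, n_perm=1000, seed=7):
    """minP adjustment over two hypothetical analysis strategies.

    Returns (raw permutation p-values per strategy, minP-adjusted
    p-value for the smallest raw p-value).
    """
    rng = random.Random(seed)
    strategies = [
        lambda a, b: abs(statistics.mean(a) - statistics.mean(b)),
        lambda a, b: abs(statistics.median(a) - statistics.median(b)),
    ]
    obs = [f(x, y) for f in strategies]
    pooled = list(x) + list(y)
    n = len(x)
    # Permutation distribution of each strategy's statistic (same
    # shuffles for all strategies -> dependence is preserved).
    perm = [[] for _ in strategies]
    for _ in range(n_perm):
        rng.shuffle(pooled)
        for k, f in enumerate(strategies):
            perm[k].append(f(pooled[:n], pooled[n:]))
    sorted_perm = [sorted(p) for p in perm]

    def pval(k, stat):
        # proportion of permuted statistics >= stat (add-one smoothing)
        ge = n_perm - bisect.bisect_left(sorted_perm[k], stat)
        return (1 + ge) / (n_perm + 1)

    raw = [pval(k, obs[k]) for k in range(len(strategies))]
    # Null distribution of the minimal p-value across strategies.
    min_null = [min(pval(k, perm[k][b]) for k in range(len(strategies)))
                for b in range(n_perm)]
    adjusted = (1 + sum(m <= min(raw) for m in min_null)) / (n_perm + 1)
    return raw, adjusted
```

By construction the adjusted p-value is never smaller than the minimal raw p-value, which is exactly the protection against selectively reporting the most favourable strategy.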
21. Survey response over 15 years of follow-up in the Millennium Cohort Study.
- Author
-
Kolaja, Claire A., Belding, Jennifer N., Boparai, Satbir K., Castañeda, Sheila F., Geronimo-Hara, Toni Rose, Powell, Teresa M., Tu, Xin M., Walstrom, Jennifer L., Sheppard, Beverly D., and Rull, Rudolph P.
- Subjects
COHORT analysis ,GENERALIZED estimating equations ,HEALTH behavior ,MILITARY medicine ,DEPLOYMENT (Military strategy) - Abstract
Background: Patterns of survey response and the characteristics associated with response over time in longitudinal studies are important to discern for the development of tailored retention efforts aimed at minimizing response bias. The Millennium Cohort Study, the largest and longest running cohort study of military personnel and veterans, is designed to examine the long-term health effects of military service and experiences and thus relies on continued participant survey responses over time. Here, we describe the response rates for follow-up survey data collected over 15 years and identify characteristics associated with follow-up survey response and mode of response (paper vs. web). Method: Patterns of follow-up survey response and response mode (web, paper, none) were examined among eligible participants (n=198,833), who were initially recruited in four panels from 2001 to 2013 in the Millennium Cohort Study, for a follow-up period of 3–15 years (2004–2016). Military and sociodemographic factors (i.e., enrollment panel, sex, birth year, race and ethnicity, educational attainment, marital status, service component, service branch, pay grade, military occupation, length of service, and time deployed), life experiences and health-related factors (i.e., military deployment/combat experience, life stressors, mental health, physical health, and unhealthy behaviors) were used to examine follow-up response and survey mode over time in multivariable generalized estimating equation models. Results: Overall, an average response rate of 60% was observed across all follow-up waves. Factors associated with follow-up survey response over time included increased educational attainment, married status, female sex, older age, military deployment (regardless of combat experience), and higher number of life stressors, mental health issues, and physical health diagnoses. 
Conclusion: Despite the challenges associated with collecting multiple waves of follow-up survey data from members of the U.S. military during and after service, the Millennium Cohort Study has maintained a relatively robust response rate over time. The incorporation of tailored messages and outreach to those groups least likely to respond over time may improve retention and thereby increase the representativeness and generalizability of collected survey data. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
22. A literature review of applied adaptive design methodology within the field of oncology in randomised controlled trials and a proposed extension to the CONSORT guidelines.
- Author
-
Mistry, Pankaj, Dunn, Janet A., and Marshall, Andrea
- Subjects
CANCER patients ,FOSTER home care ,MEDICAL care ,CLINICAL trials ,PUBLIC health ,EXPERIMENTAL design ,MEDICAL protocols ,ONCOLOGY ,SYSTEMATIC reviews ,STANDARDS - Abstract
Background: The application of adaptive design methodology within a clinical trial setting is becoming increasingly popular. However, the application of these methods within trials is often not reported as an adaptive design, making it more difficult to capture the emerging use of these designs. Within this review, we aim to understand how adaptive design methodology is being reported, whether these methods are explicitly stated as an 'adaptive design' or whether this has to be inferred, and to identify whether these methods are applied prospectively or concurrently. Methods: Three databases (Embase, Ovid and PubMed) were chosen to conduct the literature search. The inclusion criteria for the review were phase II, phase III and phase II/III randomised controlled trials within the field of oncology that published trial results in 2015. A variety of search terms related to adaptive designs were used. Results: A total of 734 results were identified; after screening, 54 were eligible. Adaptive designs were more commonly applied in phase III confirmatory trials. The majority of the papers performed an interim analysis, which included some form of stopping criteria. Additionally, only two papers explicitly stated the term 'adaptive design', and therefore for most of the papers it had to be inferred that adaptive methods were applied. Sixty-five applications of adaptive design methods were identified, of which the most common was an adaptation using group sequential methods. Conclusions: This review indicated that the reporting of adaptive design methodology within clinical trials needs improving. The proposed extension to the current CONSORT 2010 guidelines could help capture adaptive design methods and provide an essential aid to those involved with clinical trials. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
23. Standardized approach to extract candidate outcomes from literature for a standard outcome set: a case- and simulation study.
- Author
-
Veen, KM, Joseph, A, Sossi, F, Jaber, P Blancarte, Lansac, E, Das-Gupta, E, Aktaa, S, and Takkenberg, JJM
- Subjects
HEART valve diseases ,HEALTH outcome assessment ,MEDICAL care ,DATA visualization ,WORLD health - Abstract
Aims: Standard outcome sets enable the value-based evaluation of health care delivery. Whereas the attainment of expert opinion has been structured using methods such as the modified-Delphi process, standardized guidelines for the extraction of candidate outcomes from literature are lacking. As such, we aimed to describe an approach to obtain a comprehensive list of candidate outcomes for potential inclusion in standard outcome sets. Methods: This study describes an iterative saturation approach, using randomly selected batches from a systematic literature search to develop a long list of candidate outcomes to evaluate healthcare. This approach can be preceded by an optional benchmark review of relevant registries and Clinical Practice Guidelines, and by data visualization techniques (e.g. a WordCloud), to potentially decrease the number of iterations. The development of the International Consortium for Health Outcomes Measurement (ICHOM) heart valve disease set is used to illustrate the approach. Batch cutoff choices of the iterative saturation approach were validated using data from 1000 simulated cases. Results: Simulation showed that on average 98% (range 92–100%) saturation is reached using a 100-article batch initially, with 25 articles in the subsequent batches. On average, 4.7 repeating rounds (range 1–9) of 25 new articles were necessary to achieve saturation if no outcomes were first identified from a benchmark review or a data visualization. Conclusion: In this paper a standardized approach is proposed to identify relevant candidate outcomes for a standard outcome set. This approach creates a balance between comprehensiveness and feasibility in conducting literature reviews for the identification of candidate outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
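The iterative saturation approach in entry 23 reduces to a simple loop: extract an initial batch of articles, then keep adding fixed-size batches until a batch contributes no new outcomes. A minimal sketch, with hypothetical article data represented as sets of outcome labels and the paper's batch cutoffs (100 initial, 25 subsequent) as defaults:

```python
def iterative_saturation(articles, first_batch=100, batch=25):
    """Collect candidate outcomes until one batch adds nothing new.

    `articles` is a (pre-randomised) list of sets of outcome labels.
    Returns the long list of outcomes found and the number of
    follow-up batches processed after the initial one.
    """
    found = set()
    for art in articles[:first_batch]:
        found |= art
    i, rounds = first_batch, 0
    while i < len(articles):
        chunk = articles[i:i + batch]
        new = set().union(*chunk) - found
        i += batch
        rounds += 1
        if not new:          # saturation reached
            break
        found |= new
    return found, rounds
```

In the paper's simulation, the question is how many follow-up rounds this loop typically needs before `new` is empty; the reported average was 4.7 rounds without a benchmark review.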
24. Hidden analyses: a review of reporting practice and recommendations for more transparent reporting of initial data analyses.
- Author
-
Huebner, Marianne, Vach, Werner, le Cessie, Saskia, Schmidt, Carsten Oliver, Lusa, Lara, and Cook, Dianne, on behalf of the Topic Group "Initial Data Analysis" of the STRATOS Initiative (STRengthening Analytical Thinking for Observational Studies, http://www.stratos-initiative.org)
- Subjects
DATA analysis ,DATA scrubbing ,DATA distribution ,DATA integrity ,STATISTICS - Abstract
Background: In the data pipeline from the data collection process to the planned statistical analyses, initial data analysis (IDA) typically takes place between the end of data collection and the start of the analyses that address the research questions; it does not itself touch the research questions. A systematic process for IDA and clear reporting of the findings would help to understand the potential shortcomings of a dataset, such as missing values, subgroups with small sample sizes, or shortcomings in the collection process, and to evaluate the impact of these shortcomings on the research results. Clear reporting of findings is also relevant when making datasets available to other researchers. Initial data analyses can provide valuable insights into the suitability of a data set for a future research study. Our aim was to describe the practice of reporting of initial data analyses in observational studies in five highly ranked medical journals, with a focus on data cleaning, screening, and reporting of findings which led to a potential change in the analysis plan. Methods: This review was carried out using systematic search strategies with eligibility criteria for articles to be reviewed. A total of 25 papers about observational studies were selected from five medical journals published in 2018. Each paper was reviewed by two reviewers, and IDA statements were further discussed by all authors. The consensus was reported. Results: IDA statements were reported in the methods, results, discussion, and supplement of papers. Ten out of 25 papers (40%) included a statement about data cleaning. Data screening statements were included in all articles, and 18 (72%) indicated the methods used to describe them. Item missingness was reported in 11 papers (44%), unit missingness in 15 papers (60%). Eleven papers (44%) mentioned some changes in the analysis plan.
Reported changes referred to missing data treatment, unexpected values, population heterogeneity, and aspects related to variable distributions or data properties. Conclusion: Reporting of initial data analyses was sparse, and statements on IDA were located throughout the research articles. There is a lack of systematic reporting of IDA. We conclude the article with recommendations on how to overcome shortcomings in the practice of IDA reporting in observational studies. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
25. The conduct and reporting of qualitative evidence syntheses in health and social care guidelines: a content analysis.
- Author
-
Carmona, Chris, Baxter, Susan, and Carroll, Christopher
- Subjects
FERRANS & Powers Quality of Life Index ,SOCIAL participation ,SOCIAL support ,EVIDENCE-based medicine ,NATIONAL health services ,QUALITATIVE research ,PSYCHOLOGICAL tests ,PSYCHOLOGICAL adaptation - Abstract
Background: This paper is part of a broader investigation into the ways in which health and social care guideline producers are using qualitative evidence syntheses (QESs) alongside more established methods of guideline development such as systematic reviews and meta-analyses of quantitative data. This study is a content analysis of QESs produced over a 5-year period by a leading provider of guidelines for the National Health Service in the UK (the National Institute for Health and Care Excellence) to explore how closely they match a reporting framework for QES. Methods: Guidelines published or updated between Jan 2015 and Dec 2019 were identified via searches of the National Institute for Health and Care Excellence (NICE) website. These guidelines were searched to identify any QES conducted during the development of the guideline. Data relating to the compliance of these syntheses against a reporting framework for QES (ENTREQ) were extracted and compiled, and descriptive statistics used to provide an analysis of QES conduct, reporting and use by this major international guideline producer. Results: QES contributed, in part, to 54 out of a total of 192 guidelines over the five-year period. Although methods for producing and reporting QES have changed substantially over the past decade, this study found that there has been little change in the number or quality of NICE QESs over time. The largest predictor of quality was the centre or team which undertook the synthesis.
Analysis indicated that elements of review methods which were similar to those used in quantitative systematic reviews tended to be carried out well and mostly matched the criteria in the reporting framework, but review methods which were more specific to a QES tended to be carried out less well, with fewer examples of criteria in the reporting framework being achieved. Conclusion: The study suggests that the use, conduct and reporting of optimal QES methods require development: over time, the quality of QES reporting, both overall and by specific centres, has not improved in spite of clearer reporting frameworks and important methodological developments. Further staff training in QES methods may be helpful for reviewers who are more familiar with conventional forms of systematic review if the highest standards of QES are to be achieved. There seems potential for greater use of evidence from qualitative research during guideline development. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
26. School-level intra-cluster correlation coefficients and autocorrelations for children's accelerometer-measured physical activity in England by age and gender.
- Author
-
Salway, Ruth, Jago, Russell, de Vocht, Frank, House, Danielle, Porter, Alice, Walker, Robert, Kipping, Ruth, Owen, Christopher G., Hudda, Mohammed T., Northstone, Kate, van Sluijs, Esther, Atkin, Andrew, Ekelund, Ulf, Esliger, Dale, Hansen, Bjorge H., and Sherar, Lauren
- Subjects
HIGH-income countries ,PHYSICAL activity ,SCHOOL size ,AGE groups ,PRIMARY schools - Abstract
Background: Randomised, cluster-based study designs in schools are commonly used to evaluate children's physical activity interventions. Sample size estimation relies on accurate estimation of the intra-cluster correlation coefficient (ICC), but published estimates, especially using accelerometry-measured physical activity, are few and vary depending on physical activity outcome and participant age. Less commonly used cluster-based designs, such as stepped wedge designs, also need to account for correlations over time, e.g. cluster autocorrelation (CAC) and individual autocorrelation (IAC), but no estimates are currently available. This paper estimates the school-level ICC, CAC and IAC for children's accelerometer-measured physical activity outcomes in England by age group and gender, to inform the design of future school-based cluster trials. Methods: Data were pooled from seven large English datasets of accelerometer-measured physical activity collected between 2002 and 2018 (> 13,500 pupils, 540 primary and secondary schools). Linear mixed effect models estimated ICCs for weekday and whole week for minutes spent in moderate-to-vigorous physical activity (MVPA) and being sedentary for different age groups, stratified by gender. The CAC (1,252 schools) and IAC (34,923 pupils) were estimated by length of follow-up from pooled longitudinal data. Results: School-level ICCs for weekday MVPA were higher in primary schools (from 0.07 (95% CI: 0.05, 0.10) to 0.08 (95% CI: 0.06, 0.11)) than in secondary schools (from 0.04 (95% CI: 0.03, 0.07) to (95% CI: 0.04, 0.10)). Girls' ICCs were similar for primary and secondary schools, but boys' were lower in secondary. For all ages combined, the CAC was 0.60 (95% CI: 0.44–0.72) and the IAC was 0.46 (95% CI: 0.42–0.49), irrespective of follow-up time. Estimates were higher for MVPA vs sedentary time, and for weekdays vs the whole week. Conclusions: Adequately powered studies are important to evidence effective physical activity strategies.
Our estimates of the ICC, CAC and IAC may be used to plan future school-based physical activity evaluations and were fairly consistent across a range of ages and settings, suggesting that results may be applied to other high income countries with similar school physical activity provision. It is important to use estimates appropriate to the study design, and that match the intended study population as closely as possible. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
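The school-level ICC in entry 26 is the share of outcome variance attributable to clusters (schools). Under a one-way random-effects model it can be estimated from the ANOVA mean squares; the sketch below uses that simple moment estimator for equal-sized clusters, not the covariate-adjusted mixed-model machinery of the paper, and the simulated "schools" and the target ICC of 0.08 are illustrative assumptions.

```python
import random
import statistics

def anova_icc(clusters):
    """Moment estimator of the ICC for equal-sized clusters:
    ICC = (MSB - MSW) / (MSB + (m - 1) * MSW),
    where MSB/MSW are the between- and within-cluster mean squares."""
    k = len(clusters)          # number of clusters
    m = len(clusters[0])       # observations per cluster
    grand = statistics.mean(v for c in clusters for v in c)
    means = [statistics.mean(c) for c in clusters]
    msb = m * sum((mu - grand) ** 2 for mu in means) / (k - 1)
    msw = sum((v - mu) ** 2
              for c, mu in zip(clusters, means) for v in c) / (k * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw)

# Simulate schools with a known ICC: total variance 1, of which a
# fraction icc_true comes from a shared school-level effect.
rng = random.Random(42)
icc_true = 0.08
clusters = []
for _ in range(400):                               # 400 schools
    school = rng.gauss(0, icc_true ** 0.5)         # between-school sd
    clusters.append([school + rng.gauss(0, (1 - icc_true) ** 0.5)
                     for _ in range(30)])          # 30 pupils each
```

With 400 clusters of 30 the estimate recovers the generating ICC closely, which is the sense in which pooled datasets like those in the paper give usable planning values.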
27. Development of a diagnostic predictive model for determining child stunting in Malawi: a comparative analysis of variable selection approaches.
- Author
-
Mkungudza, Jonathan, Twabi, Halima S., and Manda, Samuel O. M.
- Subjects
LITERATURE reviews ,INDEPENDENT variables ,DISEASE risk factors ,STUNTED growth ,MULTIPLE birth ,DEMOGRAPHIC surveys - Abstract
Background: Childhood stunting is a major indicator of child malnutrition and a focus area of the Global Nutrition Targets for 2025 and the Sustainable Development Goals. Risk factors for childhood stunting are well studied and well known and could be used in a risk prediction model for assessing whether a child is stunted or not. However, the selection of child stunting predictor variables is a critical step in the development and performance of any such prediction model. This paper compares the performance of child stunting diagnostic predictive models based on predictor variables selected using a set of variable selection methods. Methods: Firstly, we conducted a subjective review of the literature to identify determinants of child stunting in Sub-Saharan Africa. Secondly, a multivariate logistic regression model of child stunting was fitted using the identified predictors on stunting data among children aged 0–59 months in the Malawi Demographic Health Survey (MDHS 2015–16) data. Thirdly, several reduced multivariable logistic regression models were fitted depending on the predictor variables selected using seven variable selection algorithms, namely backward, forward, stepwise, random forest, Least Absolute Shrinkage and Selection Operator (LASSO), and judgmental. Lastly, for each reduced model, a diagnostic predictive model for the childhood stunting risk score, defined as the child propensity score based on derived coefficients, was calculated for each child. The prediction risk models were assessed using discrimination measures, including the area under the receiver operating characteristic curve (AUROC), sensitivity and specificity. Results: The review identified 68 predictor variables of child stunting, of which 27 were available in the MDHS 2015–16 data. The common risk factors selected by all the variable selection models include household wealth index, age of the child, household size, type of birth (singleton/multiple births), and birth weight.
The best cut-off point on the child stunting risk prediction model was 0.37, based on risk factors determined by the judgmental variable selection method. The model's accuracy was estimated with an AUROC value of 64% (95% CI: 60%-67%) in the test data. For children residing in urban areas, the corresponding AUROC was 67% (95% CI: 58–76%), compared with 63% (95% CI: 59–67%) for those in rural areas. Conclusion: The derived child stunting diagnostic prediction model could be useful as a first screening tool to identify children more likely to be stunted. The identified children could then receive necessary nutritional interventions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
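The AUROC reported in entry 27 has a simple probabilistic reading: the chance that a randomly chosen stunted child receives a higher risk score than a randomly chosen non-stunted child. The sketch below is the standard Mann–Whitney formulation of that quantity (illustrative only, with hypothetical scores; not the authors' pipeline):

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney statistic: the probability that a
    positive case outscores a negative case, counting ties as 1/2."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

On this reading, the paper's AUROC of 64% means the risk score ranks a stunted child above a non-stunted child about 64% of the time, which is why it is positioned as a first screening tool rather than a definitive diagnostic.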
28. A simple and effective method for simulating nested exchangeable correlated binary data for longitudinal cluster randomised trials.
- Author
-
Bowden, Rhys A., Kasza, Jessica, and Forbes, Andrew B.
- Subjects
MULTILEVEL models ,RANDOM variables ,LONGITUDINAL method ,CROSSOVER trials ,STATISTICS - Abstract
Background: Simulation is an important tool for assessing the performance of statistical methods for the analysis of data and for the planning of studies. While methods are available for the simulation of correlated binary random variables, all have significant practical limitations for simulating outcomes from longitudinal cluster randomised trial designs, such as the cluster randomised crossover and the stepped wedge trial designs. For these trial designs, as the number of observations in each cluster increases, these methods either become computationally infeasible or their range of allowable correlations rapidly shrinks to zero. Methods: In this paper, we present a simple method for simulating binary random variables with a specified vector of prevalences and correlation matrix. This method allows for the outcome prevalence to change due to treatment or over time, and for a 'nested exchangeable' correlation structure, in which observations in the same cluster are more highly correlated if they are measured in the same time period than in different time periods, and where different individuals are measured in each time period. This means that our method is also applicable to more general hierarchical clustered data contexts, such as students within classrooms within schools. The method is demonstrated by simulating 1000 datasets with parameters matching those derived from data from a cluster randomised crossover trial assessing two variants of stress ulcer prophylaxis. Results: Our method is orders of magnitude faster than the best-known general simulation method while also allowing a much wider range of correlations than alternative methods. An implementation of our method is available in the R package NestBin. Conclusions: This simulation method is the first to allow for practical and efficient simulation of large datasets of binary outcomes with the commonly used nested exchangeable correlation structure.
This will allow for much more effective testing of designs and inference methods for longitudinal cluster randomised trials with binary outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
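To make the nested exchangeable structure of entry 28 concrete: the sketch below is *not* the NestBin algorithm, but a simpler classical mixture construction (in the spirit of Lunn and Davies) for the special case of a common prevalence p. Each observation copies a cluster-level draw with probability √ρ_between, a period-level draw with probability √(ρ_within − ρ_between), and is otherwise an independent Bernoulli(p); a short calculation shows two observations then correlate at ρ_within inside a period and ρ_between across periods. It requires ρ_between ≤ ρ_within and √ρ_between + √(ρ_within − ρ_between) ≤ 1, which illustrates the restricted correlation ranges the paper's method is designed to overcome.

```python
import random

def nested_exchangeable_binary(p, rho_within, rho_between,
                               n_clusters, n_periods, n_per_period,
                               seed=0):
    """Binary outcomes with a nested exchangeable correlation structure
    (common prevalence p), via a shared-latent-variable mixture."""
    rng = random.Random(seed)
    pi_c = rho_between ** 0.5                      # copy cluster draw
    pi_p = (rho_within - rho_between) ** 0.5       # copy period draw
    assert pi_c + pi_p <= 1, "correlations outside the feasible range"
    data = []
    for _ in range(n_clusters):
        z_cluster = int(rng.random() < p)
        cluster = []
        for _ in range(n_periods):
            z_period = int(rng.random() < p)
            period = []
            for _ in range(n_per_period):
                u = rng.random()
                if u < pi_c:
                    period.append(z_cluster)
                elif u < pi_c + pi_p:
                    period.append(z_period)
                else:
                    period.append(int(rng.random() < p))
            cluster.append(period)
        data.append(cluster)
    return data                # data[cluster][period][individual]
```

Unlike NestBin, this construction cannot vary the prevalence across periods or treatment arms, which is precisely the extra flexibility the paper's method provides.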
29. Bayesian sequential monitoring strategies for trials of digestive cancer therapeutics.
- Author
-
Mulier, Guillaume, Lin, Ruitao, Aparicio, Thomas, and Biard, Lucie
- Subjects
TOXICITY testing ,DRUG development ,ANTINEOPLASTIC agents ,WORK design - Abstract
Background: New therapeutics in oncology have presented challenges to existing paradigms and trial designs in all phases of drug development. As a motivating example, we considered an ongoing phase II trial planned to evaluate the combination of a MET inhibitor and an anti-PD-L1 immunotherapy to treat advanced oesogastric carcinoma. The objective of the paper was to exemplify the planning of an adaptive phase II trial with novel anti-cancer agents, including prolonged observation windows and joint sequential evaluation of efficacy and toxicity. Methods: We considered various candidate designs and computed decision rules assuming correlations between efficacy and toxicity. Simulations were conducted to evaluate the operating characteristics of all designs. Results: Design approaches allowing continuous accrual, such as the time-to-event Bayesian Optimal Phase II design (TOP), showed good operating characteristics while ensuring a reduced trial duration. All designs were sensitive to the specification of the correlation between efficacy and toxicity during planning, but TOP can take that correlation into account more easily. Conclusions: While specifying design working hypotheses requires caution, Bayesian approaches such as the TOP design had desirable operating characteristics and allowed the incorporation of concomitant information, such as toxicity data from observations in another relevant patient population (e.g., defined by mutational status). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Methods, strategies, and incentives to increase response to mental health surveys among adolescents: a systematic review.
- Author
-
Bidonde, Julia, Meneses-Echavez, Jose F., Hafstad, Elisabet, Brunborg, Geir Scott, and Bang, Lasse
- Subjects
MENTAL health surveys ,HIGH-income countries ,TEENAGERS ,MONETARY incentives ,MENTAL health - Abstract
Background: This systematic review aimed to identify effective methods to increase adolescents' response to surveys about mental health and substance use, to improve the quality of survey information. Methods: We followed a protocol and searched for studies that compared different survey delivery modes to adolescents. Eligible studies reported response rates, mental health score variation per survey mode and participant variations in mental health scores. We searched CENTRAL, PsycINFO, MEDLINE and Scopus in May 2022, and conducted citation searches in June 2022. Two reviewers independently undertook study selection, data extraction, and risk of bias assessments. Following the assessment of heterogeneity, some studies were pooled using meta-analysis. Results: Fifteen studies were identified, reporting six comparisons related to survey methods and strategies. Results indicate that response rates do not differ between survey modes (e.g., web versus paper-and-pencil) delivered in classroom settings. However, web surveys may yield higher response rates outside classroom settings. The largest effects on response rates were achieved using unconditional monetary incentives and obtaining passive parental consent. Survey mode influenced mental health scores in certain comparisons. Conclusions: Despite the mixed quality of the studies, the low volume of evidence for some comparisons, and the restriction to studies from high-income countries, several effective methods and strategies to improve adolescents' response rates to mental health surveys were identified. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
31. Spontaneously generated online patient experience data - how and why is it being used in health research: an umbrella scoping review.
- Author
-
Walsh, Julia, Dwumfour, Christine, Cave, Jonathan, and Griffiths, Frances
- Abstract
Purpose: Social media has led to fundamental changes in the way that people look for and share health related information. There is increasing interest in using this spontaneously generated online patient experience (SGOPE) data as a data source for health research. The aim was to summarise the state of the art regarding how and why SGOPE data has been used in health research. We determined the sites and platforms used as data sources, the purposes of the studies, the tools and methods being used, and any identified research gaps. Methods: A scoping umbrella review was conducted of review papers from 2015 to Jan 2021 that studied the use of SGOPE data for health research. Using keyword searches, we identified 1759 papers, from which we included 58 relevant studies in our review. Results: Data were used from many individual general or health-specific platforms, although Twitter was the most widely used data source. The most frequent purposes were surveillance-based: tracking infectious disease, adverse event identification and mental health triaging. Despite developments in machine learning, the reviews included many small qualitative studies. Most NLP used supervised methods for sentiment analysis and classification. The methods are at a very early stage, need development, and are often not explained, and disciplinary differences persist (accuracy refinements versus application). There is little evidence of any work that either compares the results of both approaches on the same data set or brings the ideas together. Conclusion: Tools, methods, and techniques are still at an early stage of development, but strong consensus exists that this data source will become very important to patient centred health research. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
32. Classifying information-sharing methods.
- Author
-
Nikolaidis, Georgios F., Woods, Beth, Palmer, Stephen, and Soares, Marta O.
- Subjects
STATISTICAL decision making ,TECHNOLOGY assessment ,ADULTS ,MEDICAL technology ,TREATMENT effectiveness - Abstract
Background: Sparse relative effectiveness evidence is a frequent problem in Health Technology Assessment (HTA). Where evidence directly pertaining to the decision problem is sparse, it may be feasible to expand the evidence-base to include studies that relate to the decision problem only indirectly: for instance, when there is no evidence on a comparator, evidence on other treatments of the same molecular class could be used; similarly, a decision on children may borrow strength from evidence on adults. Usually, in HTA, such indirect evidence is either included by ignoring any differences ('lumping') or not included at all ('splitting'). However, a range of more sophisticated methods exists, primarily in the biostatistics literature. The objective of this study is to identify and classify the breadth of the available information-sharing methods. Methods: Forwards and backwards citation-mining techniques were used on a set of seminal papers on the topic of information-sharing. Papers were included if they specified (network) meta-analytic methods for combining information from distinct populations, interventions, outcomes or study-designs. Results: Overall, 89 papers were included. A plethora of evidence synthesis methods have been used for information-sharing. Most papers (n=79) described methods that shared information on relative treatment effects. Amongst these, there was a strong emphasis on methods for information-sharing across multiple outcomes (n=42) and treatments (n=25), with fewer papers focusing on study-designs (n=23) or populations (n=8). We categorise and discuss the methods under four 'core' relationships of information-sharing: functional, exchangeability-based, prior-based and multivariate relationships, and explain the assumptions made within each of these core approaches. Conclusions: This study highlights the range of information-sharing methods available. These methods often impose more moderate assumptions than lumping or splitting.
Hence, the degree of information-sharing that they impose could potentially be considered more appropriate. Our identification of four 'core' methods of information-sharing allows for an improved understanding of the assumptions underpinning the different methods. Further research is required to understand how the methods differ in terms of the strength of sharing they impose and the implications of this for health care decisions. [ABSTRACT FROM AUTHOR]- Published
- 2021
- Full Text
- View/download PDF
33. Measures of fragmentation of rest activity patterns: mathematical properties and interpretability based on accelerometer real life data.
- Author
-
Danilevicz, Ian Meneghel, van Hees, Vincent Theodoor, van der Heide, Frank C. T., Jacob, Louis, Landré, Benjamin, Benadjaoud, Mohamed Amine, and Sabia, Séverine
- Subjects
PATTERNS (Mathematics) ,MATHEMATICAL proofs ,MAXIMUM likelihood statistics ,BODY mass index ,ACCELEROMETERS - Abstract
Accelerometers, devices that measure body movements, have become valuable tools for studying the fragmentation of rest-activity patterns, a core circadian rhythm dimension, using metrics such as inter-daily stability (IS), intradaily variability (IV), transition probability (TP), and the self-similarity parameter (named α). However, their use remains mainly empirical. Therefore, we investigated the mathematical properties and interpretability of rest-activity fragmentation metrics by providing mathematical proofs for the ranges of IS and IV, proposing maximum likelihood and Bayesian estimators for TP, introducing the activity balance index (ABI) metric, a transformation of α, and describing distributions of these metrics in a real-life setting. Analysis of accelerometer data from 2,859 individuals (age 60-83 years, 21.1% women) from the Whitehall II cohort (UK) shows modest correlations between the metrics, except for ABI and α. Sociodemographic (age, sex, education, employment status) and clinical (body mass index (BMI), number of morbidities) factors were associated with these metrics, with differences observed according to the metric. For example, a difference of 5 units in BMI was associated with all metrics (differences ranging from -0.261 (95% CI -0.302, -0.220) to 0.228 (0.18, 0.268) for standardised TP rest to activity during the awake period and TP activity to rest during the awake period, respectively). These results reinforce the value of these rest-activity fragmentation metrics in epidemiological and clinical studies examining their role in health. This paper expands on a set of methods that have previously demonstrated empirical value, improves the theoretical foundation for these methods, and evaluates their empirical use in a large dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
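The IS and IV metrics in the entry above have conventional nonparametric definitions in the actigraphy literature (IV compares successive-difference variance to overall variance; IS compares the average 24-h profile's variance to overall variance). A minimal sketch of those conventional formulas, not the authors' implementation, assuming hourly activity counts:

```python
def intradaily_variability(x):
    """IV: mean squared successive difference over overall variance.
    Low for a smooth 24-h rhythm; near 2 for white noise."""
    n = len(x)
    mean = sum(x) / n
    ss_total = sum((v - mean) ** 2 for v in x)
    ss_diff = sum((x[i] - x[i - 1]) ** 2 for i in range(1, n))
    return (n * ss_diff) / ((n - 1) * ss_total)

def interdaily_stability(x, period=24):
    """IS: variance of the average 24-h profile over overall variance.
    0 = no day-to-day regularity, 1 = a perfectly stable rhythm.
    Assumes hourly samples and len(x) a multiple of `period`."""
    n = len(x)
    days = n // period
    mean = sum(x) / n
    hourly = [sum(x[h::period]) / days for h in range(period)]
    num = n * sum((m - mean) ** 2 for m in hourly)
    den = period * sum((v - mean) ** 2 for v in x)
    return num / den

# A two-day record repeating the same hourly profile gives IS == 1.
day = [0.0] * 8 + [5.0] * 14 + [1.0] * 2   # 24 hypothetical hourly counts
x = day * 2
iv, is_ = intradaily_variability(x), interdaily_stability(x)  # is_ == 1.0
```

The paper's contribution concerns the provable ranges and estimators for such metrics; the formulas above are only the standard starting point.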
34. Bayesian modeling of spatially differentiated multivariate enamel defects of the children's primary maxillary central incisor teeth.
- Author
-
Keller, Everette P., Lawson, Andrew B., Wagner, Carol L., and Reed, Susan G.
- Subjects
DEVELOPMENTAL defects of enamel ,DENTITION ,INCISORS ,DENTAL caries ,DENTAL enamel - Abstract
Background: The analysis of dental caries has been a major focus of recent work on modeling dental defect data. While a dental caries focus is of major importance in dental research, the examination of developmental defects, which could also contribute at an early stage of dental caries formation, is also of potential interest. This paper proposes a set of methods that address the appearance of different combinations of defects across different tooth regions. In our modeling we assess the linkages between tooth region development and both the type of defect and associations with etiological predictors of the defects that could be influential at different times during tooth crown development. Methods: We develop different hierarchical model formulations under the Bayesian paradigm to assess exposures during primary maxillary central incisor (PMCI) tooth development and PMCI defects. We evaluate the Bayesian hierarchical models under various simulation scenarios to compare their performance with both simulated dental defect data and real data from a motivating application. Results: The proposed model provides inference on identifying a subset of etiological predictors of an individual defect, accounting for the correlation between tooth regions, and on identifying a subset of etiological predictors for the joint effect of defects. Furthermore, the model provides inference on the correlation between the regions of the teeth as well as between the joint effects of the developmental enamel defects and dental caries. Simulation results show that the proposed model consistently yields stable inferences in identifying etiological biomarkers associated with the outcome of localized developmental enamel defects and dental caries under varying simulation scenarios, as indicated by small mean square error (MSE) when comparing the simulation results to real application results. 
Conclusion: We evaluate the proposed model under varying simulation scenarios to develop a model for multivariate dental defects and dental caries, assuming a flexible covariance structure that can handle regional and joint effects. The proposed model sheds new light on methods for capturing inclusive predictors in different multivariate joint models under the same covariance structure and provides a natural extension to a nested hierarchical model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Spatial-temporal Bayesian accelerated failure time models for survival endpoints with applications to prostate cancer registry data.
- Author
-
Wang, Ming, Li, Zheng, Lu, Jun, Zhang, Lijun, Li, Yimei, and Zhang, Liangliang
- Subjects
SKIN cancer ,PROSTATE cancer ,PROPORTIONAL hazards models ,MARKOV chain Monte Carlo ,LARGE space structures (Astronautics) ,FLEXIBLE structures - Abstract
Prostate cancer is the most common cancer after non-melanoma skin cancer and the second leading cause of cancer deaths in US men. Its incidence and mortality rates vary substantially across geographical regions and over time, with large disparities by race and geographic region (e.g., Appalachia), among others. The widely used Cox proportional hazards model is usually not applicable in such scenarios owing to the violation of the proportional hazards assumption. In this paper, we fit Bayesian accelerated failure time models for the analysis of prostate cancer survival and take dependent spatial structures and temporal information into account by incorporating random effects with multivariate conditional autoregressive priors. In particular, we relax the proportional hazards assumption, consider flexible frailty structures in space and time, and also explore strategies for handling the temporal variable. Parameter estimation and inference are based on a Markov chain Monte Carlo technique under a Bayesian framework. The deviance information criterion is used to check goodness of fit and to select the best candidate model. Extensive simulations are performed to examine and compare the performance of the models in different contexts. Finally, we illustrate our approach by using the 2004-2014 Pennsylvania Prostate Cancer Registry data to explore spatial-temporal heterogeneity in overall survival and identify significant risk factors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
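The accelerated failure time (AFT) model in the entry above replaces the Cox model's hazard ratio with a direct multiplicative effect on survival time, which is why it survives violations of proportional hazards. A minimal illustration on simulated data, without the paper's censoring handling or spatial-temporal frailties (all values here are assumptions for the sketch):

```python
import numpy as np

# Simulate a log-normal AFT: log(T) = b0 + b1 * x + sigma * eps.
# b1 = 0.3 means the x=1 group survives about exp(0.3) ≈ 1.35 times longer,
# a "time ratio" interpretation that needs no proportional hazards assumption.
rng = np.random.default_rng(1)
n = 20_000
x = rng.binomial(1, 0.5, size=n)                  # e.g. a region indicator
log_t = 1.0 + 0.3 * x + 0.5 * rng.normal(size=n)  # uncensored log survival times

# With no censoring, the log-normal AFT reduces to OLS on log(T).
X = np.column_stack([np.ones(n), x])
b0_hat, b1_hat = np.linalg.lstsq(X, log_t, rcond=None)[0]
time_ratio = np.exp(b1_hat)                       # acceleration factor, ≈ 1.35
```

Censoring and the paper's spatial CAR priors require a full likelihood or MCMC treatment; this sketch shows only the core time-ratio parameterisation.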
36. An analysis of current practices in undertaking literature reviews in nursing: findings from a focused mapping review and synthesis.
- Author
-
Aveyard, Helen and Bradbury-Jones, Caroline
- Subjects
LITERATURE reviews ,META-analysis ,NURSING research ,RESEARCH methodology ,TERMS & phrases - Abstract
Background: In this paper we discuss the emergence of many different methods for doing a literature review. Referring back to the early days, when there were essentially two types of review, a Cochrane systematic review and a narrative review, we identify how the term systematic review is now widely used to describe a variety of review types and how the number of available methods for doing a literature review has increased dramatically. This led us to undertake a review of current practice among those doing a literature review and of the terms used to describe them. Method: We undertook a focused mapping review and synthesis. Literature reviews, defined as papers with the terms review or synthesis in the title, published in five nursing journals between January 2017 and June 2018, were identified. We recorded the type of review and how these were undertaken. Results: We identified more than 35 terms used to describe a literature review. Some terms reflected established methods for doing a review, whilst others could not be traced to established methods and/or the description of the method in the paper was limited. We also found inconsistency in how the terms were used. Conclusion: We have identified a proliferation of terms used to describe doing a literature review, although it is not clear how many distinct methods are being used. Our review indicates a move from an era when the term narrative review was used to describe all 'non-Cochrane' reviews; to a time of expansion, when alternative systematic approaches were developed to enhance the rigour of such narrative reviews; to the current situation, in which these approaches have proliferated to the extent that the academic discipline of doing a literature review has become muddled and confusing. We argue that an 'era of consolidation' is needed, in which those undertaking reviews are explicit about the method used and ensure that their processes can be traced back to a well described, original primary source. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
37. Infrastructure challenges to doing health research "where populations with the most disease live" in Covid times-a response to Rai et al. (2021).
- Author
-
MacLellan, Jennifer, Turnbull, Joanne, and Pope, Catherine
- Subjects
PUBLIC health research ,COVID-19 ,RANDOMIZED controlled trials ,MEDICAL research ,RESEARCH teams - Abstract
Background: The failure of randomised controlled trials to adequately reflect areas of highest health need has been repeatedly highlighted. This has implications for the validity and generalisability of findings, for equity and efficiency, but also for research capacity-building. Rai et al. (BMC Med Res Methodol 21:80, 2021) recently argued that the poor alignment between UK clinical research activity (specifically multi-centre RCTs) and local prevalence of disease was, in part, the outcome of behaviour and decision-making by Chief Investigators involved in trial research. They argued that a shift in research culture was needed. Following our recent multi-site mixed-methods evaluative study of NHS 111 online, we identify some of the additional structural barriers to delivering health research "where populations with the most disease live", accounting for the Covid-19 disruption to processes and delivery. Methods: The NHS 111 study used a mixed-method research design, including interviews with healthcare staff and stakeholders within the primary, urgent and emergency health care system, and a survey of users and potential users of the NHS 111 online service. This paper draws on data collated by the research team during site identification and selection, as we followed an action research cycle of planning, action, observation and reflection. The process results were discussed among the authors and grouped into the two themes presented. Results: We approached 22 primary and secondary care sites across England, successfully recruiting half of these. Time from initial approach to first participant recruitment in successful sites ranged from one to ten months. This paper describes frontline bureaucratic barriers to research delivery and recruitment in the local Clinical Research Network system and in secondary care sites carrying large research portfolios, alongside the adaptive practices of research practitioners that mitigate these. Conclusions: This paper augments the recommendations of Rai et al., describing delays encountered during the Covid-19 pandemic, and suggesting that in addition to cultural change, it may be important to dismantle infrastructural barriers and improve support to research teams so they can conduct health research "where populations with the most disease live". [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
38. Mediation analysis methods used in observational research: a scoping review and recommendations.
- Author
-
Rijnhart, Judith J. M., Lamp, Sophia J., Valente, Matthew J., MacKinnon, David P., Twisk, Jos W. R., and Heymans, Martijn W.
- Subjects
INTEGRATED software ,COMPUTER software development ,SCIENTIFIC observation ,SENSITIVITY analysis - Abstract
Background: Mediation analysis methodology underwent many advancements throughout the years, with the most recent and important advancement being the development of causal mediation analysis based on the counterfactual framework. However, a previous review showed that for experimental studies the uptake of causal mediation analysis remains low. The aim of this paper is to review the methodological characteristics of mediation analyses performed in observational epidemiologic studies published between 2015 and 2019 and to provide recommendations for the application of mediation analysis in future studies. Methods: We searched the MEDLINE and EMBASE databases for observational epidemiologic studies published between 2015 and 2019 in which mediation analysis was applied as one of the primary analysis methods. Information was extracted on the characteristics of the mediation model and the applied mediation analysis method. Results: We included 174 studies, most of which applied traditional mediation analysis methods (n = 123, 70.7%). Causal mediation analysis was not often used to analyze more complicated mediation models, such as multiple mediator models. Most studies adjusted their analyses for measured confounders, but did not perform sensitivity analyses for unmeasured confounders and did not assess the presence of an exposure-mediator interaction. Conclusions: To ensure a causal interpretation of the effect estimates in the mediation model, we recommend that researchers use causal mediation analysis and assess the plausibility of the causal assumptions. The uptake of causal mediation analysis can be enhanced through tutorial papers that demonstrate the application of causal mediation analysis, and through the development of software packages that facilitate the causal mediation analysis of relatively complicated mediation models. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
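The entry above contrasts "traditional" mediation analysis with the newer causal (counterfactual) framework. The traditional product-of-coefficients estimator it refers to can be sketched with two OLS regressions on simulated data (the coefficients below are illustrative, not from the review):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
x = rng.normal(size=n)                        # exposure
m = 0.5 * x + rng.normal(size=n)              # mediator (true a-path = 0.5)
y = 0.3 * x + 0.4 * m + rng.normal(size=n)    # outcome (direct = 0.3, b-path = 0.4)

def ols(y, *cols):
    """OLS coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(m, x)[1]               # exposure -> mediator path
direct, b = ols(y, x, m)[1:]   # exposure and mediator paths, mutually adjusted
indirect = a * b               # product-of-coefficients estimate (true value 0.2)
total = ols(y, x)[1]
# For linear OLS models this decomposition is exact: total == direct + indirect.
```

Under the causal framework this equals the natural indirect effect only given linearity, no exposure-mediator interaction, and no unmeasured confounding, which is exactly the set of assumptions the review recommends assessing.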
39. Evaluating complex interventions in context: systematic, meta-narrative review of case study approaches.
- Author
-
Paparini, Sara, Papoutsi, Chrysanthi, Murdoch, Jamie, Green, Judith, Petticrew, Mark, Greenhalgh, Trisha, and Shaw, Sara E.
- Subjects
PUBLIC health research ,RESEARCH methodology ,NUMBER theory ,RESEARCH ,MEDICAL care ,MEDICAL cooperation ,EVALUATION research ,COMPARATIVE studies ,RESEARCH funding - Abstract
Background: There is a growing need for methods that acknowledge and successfully capture the dynamic interaction between context and implementation of complex interventions. Case study research has the potential to provide such understanding, enabling in-depth investigation of the particularities of phenomena. However, there is limited guidance on how and when to best use different case study research approaches when evaluating complex interventions. This study aimed to review and synthesise the literature on case study research across relevant disciplines, and to determine its relevance to the study of contextual influences on complex interventions in health systems and public health research. Methods: Systematic meta-narrative review of the literature comprising (i) a scoping review of seminal texts (n = 60) on case study methodology and on context, complexity and interventions, (ii) detailed review of empirical literature on case study, context and complex interventions (n = 71), and (iii) identifying and reviewing 'hybrid papers' (n = 8) focused on the merits and challenges of case study in the evaluation of complex interventions. Results: We identified four broad (and to some extent overlapping) research traditions, all using case study in a slightly different way and with different goals: 1) developing and testing complex interventions in healthcare; 2) analysing change in organisations; 3) undertaking realist evaluations; 4) studying complex change naturalistically. Each tradition conceptualised context differently: respectively, as the backdrop to, or factors impacting on, the intervention; as sets of interacting conditions and relationships; as circumstances triggering intervention mechanisms; and as socially structured practices. Overall, these traditions drew on a small number of case study methodologists and disciplines. Few studies problematised the nature and boundaries of 'the case' and 'context' or considered the implications of such conceptualisations for methods and knowledge production. Conclusions: Case study research on complex interventions in healthcare draws on a number of different research traditions, each with different epistemological and methodological preferences. The approach used, and its consequences for the knowledge produced, often remain implicit. This has implications for how researchers, practitioners and decision makers understand, implement and evaluate complex interventions in different settings. Deeper engagement with case study research as a methodology is strongly recommended. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
40. Simulation analysis of an adjusted gravity model for hospital admissions robust to incomplete data.
- Author
-
Latruwe, Timo, Van der Wee, Marlies, Vanleenhove, Pieter, Michielsen, Kwinten, Verbrugge, Sofie, and Colle, Didier
- Subjects
HOSPITAL admission & discharge - Abstract
Background: Gravity models are often hard to apply in practice due to their data-hungry nature. Standard implementations require that data on each variable be available for each supply node. Since these model types are often applied in a competitive context, data availability for specific variables is commonly limited to a subset of supply nodes. Methods: This paper introduces a methodology that accommodates the use of variables for which data availability is incomplete, developed for a health care context but more broadly applicable. The study uses simulated data to evaluate the performance of the proposed methodology against the conventional approach of dropping variables from the model. Results: It is shown that the proposed methodology improves overall model accuracy compared to dropping variables from the model, and that accuracy is considerably improved within the subset of supply nodes for which data is available, even when that availability is sparse. Conclusion: The proposed methodology is a viable approach to improving the performance of gravity models in a competitive health care context where data availability is limited, especially where the supply nodes with complete data are the most relevant for the practitioner. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
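The entry above adjusts gravity models for incomplete covariate data; that adjustment is not reproduced here. As background, the standard Huff-type gravity allocation such models build on can be sketched with hypothetical attractiveness and distance values (hospital choice probability proportional to attractiveness over distance raised to a decay power):

```python
def huff_shares(attractiveness, distances, beta=2.0):
    """Choice probabilities for one demand zone: share_j ∝ A_j / d_j**beta."""
    w = [a / d ** beta for a, d in zip(attractiveness, distances)]
    total = sum(w)
    return [v / total for v in w]

# Hypothetical inputs: 3 hospitals, 2 demand zones.
attract = [100.0, 60.0, 30.0]            # e.g. bed counts as attractiveness
zones = {
    "A": ([2.0, 5.0, 9.0], 10_000),      # (distances in km, population)
    "B": ([8.0, 3.0, 2.0], 4_000),
}

admissions = [0.0, 0.0, 0.0]             # expected admissions per hospital
for distances, population in zones.values():
    for j, share in enumerate(huff_shares(attract, distances)):
        admissions[j] += population * share   # allocate the zone's demand
```

The paper's problem arises when a component of `attract` is observed for only some hospitals; its contribution is a way to keep such variables in the model rather than dropping them.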
41. Introducing the participant-generated experience and satisfaction (PaGES) index: a novel, longitudinal mixed-methods evaluation tool.
- Author
-
Symon, Andrew, Lightly, Kate, Howard, Rachel, Mundle, Shuchita, Faragher, Brian, Hanley, Molly, Durocher, Jill, Winikoff, Beverly, and Weeks, Andrew
- Subjects
PARTICIPANT-researcher relationships ,SATISFACTION ,INDUCED labor (Obstetrics) ,RANDOMIZED controlled trials ,RESEARCH assistants ,RESEARCH personnel - Abstract
Background: Patient-Reported Outcome or Experience Measures (PROMs/PREMs) are routinely used in clinical studies to assess participants' views and experiences of trial interventions and related quality of life. Purely quantitative approaches lack the detail and flexibility needed to understand the real-world impact of study interventions on participants, according to their own priorities. Conversely, purely qualitative assessments are time consuming and usually restricted to a small, possibly unrepresentative, sub-sample. This paper reports a pilot study, within a randomised controlled trial of induction of labour, of the feasibility and acceptability of the Participant-Generated Experience and Satisfaction (PaGES) Index, a new mixed qualitative/quantitative PREM tool. Methods: The single-sheet PaGES Index was completed by hypertensive pregnant women in two hospitals in Nagpur, India before and after taking part in the 'Misoprostol or Oxytocin for Labour Induction' (MOLI) randomised controlled trial. Participants recorded the aspects of the impending birth they considered most important, and then ranked them. After the birth, participants completed the PaGES Index again, this time also scoring their satisfaction with each item. Forms were completed on paper in the local language or in English, supported by Research Assistants. Following translation (when needed), responses were uploaded to a REDCap database, coded in Excel and analysed thematically. A formal qualitative evaluation (qMOLI) was also conducted to obtain stakeholder perspectives on the PaGES Index and the wider trial. Semi-structured interviews were conducted with participants, and focus groups with researchers and clinicians. Data were managed using NVivo 12 software and analysed using the framework approach. Results: Participants and researchers found the PaGES Index easy to complete and administer; mothers valued the opportunity to speak about their experience. 
Qualitative analysis of the initial 68 PaGES Index responses identified areas of commonality and difference among participants, and also when comparing antenatal and postnatal responses. Theme citations and associated comment scores were fairly stable before and after the birth. The qMOLI phase, comprising 53 one-to-one interviews with participants and eight focus groups involving 83 researchers and clinicians, provided support that the PaGES Index was an acceptable and even helpful means of capturing participant perspectives. Conclusions: Subjective participant experiences are an important aspect of clinical trials. The PaGES Index was found to be a feasible and acceptable measure that unites qualitative research's explanatory power with the comparative power of quantitative designs. It also offers the opportunity to conduct a before-and-after evaluation, allowing researchers to examine the expectations and actual experiences of all clinical trial participants, not just a small sub-sample. This study also shows that, with appropriate research assistant input, the PaGES Index can be used in different languages by participants with varying literacy levels. Trial registration: ClinicalTrials.gov (21/11/2018) (NCT03749902). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
42. The feasibility of web surveys for obtaining patient-reported outcomes from cancer survivors: a randomized experiment comparing survey modes and brochure enclosures.
- Author
-
Millar, Morgan M., Elena, Joanne W., Gallicchio, Lisa, Edwards, Sandra L., Carter, Marjorie E., Herget, Kimberly A., and Sweeney, Carol
- Subjects
INTERNET surveys ,CANCER treatment ,SURVEYS ,DEMOGRAPHIC characteristics ,CANCER survivors ,TUMOR treatment ,PILOT projects ,RESEARCH ,PATIENT participation ,INTERNET ,RESEARCH methodology ,ACQUISITION of data ,EVALUATION research ,MEDICAL cooperation ,COMPARATIVE studies ,RESEARCH funding ,PAMPHLETS - Abstract
Background: Central cancer registries are often used to survey population-based samples of cancer survivors. These surveys are typically administered via paper or telephone. In most populations, web surveys obtain much lower response rates than paper surveys. This study assessed the feasibility of web surveys for collecting patient-reported outcomes via a central cancer registry. Methods: Potential participants were sampled from Utah Cancer Registry records. Sample members were randomly assigned to receive a web or paper survey, and then randomized to either receive or not receive an informative brochure describing the cancer registry. We calculated adjusted risk ratios with 95% confidence intervals to compare response likelihood and the demographic profile of respondents across study arms. Results: The web survey response rate (43.2%) was lower than the paper survey (50.4%), but this difference was not statistically significant (adjusted risk ratio = 0.88, 95% confidence interval = 0.72, 1.07). The brochure also did not significantly influence the proportion responding (adjusted risk ratio = 1.03, 95% confidence interval = 0.85, 1.25). There were few differences in the demographic profiles of respondents across the survey modes. Older age increased the likelihood of response to a paper questionnaire but not a web questionnaire. Conclusions: Web surveys of cancer survivors are feasible and do not significantly reduce response rates, but providing a paper response option may be advisable, particularly when surveying older individuals. Further examination of the varying effects of brochure enclosures across different survey modes is warranted. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
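The entry above compares response rates via risk ratios with 95% confidence intervals. An unadjusted version of that comparison is straightforward to compute; the counts below are hypothetical, chosen to mirror the reported 43.2% vs 50.4% rates (the paper's estimates were adjusted, so they will not match exactly):

```python
import math

def risk_ratio(e1, n1, e2, n2, z=1.96):
    """Unadjusted risk ratio with a Wald 95% CI computed on the log scale."""
    rr = (e1 / n1) / (e2 / n2)
    se = math.sqrt(1 / e1 - 1 / n1 + 1 / e2 - 1 / n2)
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

# Hypothetical counts mirroring the reported rates (43.2% web vs 50.4% paper):
rr, lo, hi = risk_ratio(108, 250, 126, 250)
# rr ≈ 0.857; here the CI spans 1, i.e. no statistically significant difference
```

The standard error formula is the usual delta-method result for a log risk ratio; adjusted risk ratios as in the study would instead come from a regression model (e.g. modified Poisson).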
43. Measuring the burden of treatment for chronic disease: implications of a scoping review of the literature.
- Author
-
Sav, Adem, Salehi, Asiyeh, Mair, Frances S., and McMillan, Sara S.
- Subjects
CHRONIC diseases ,MEDLINE ,DATABASES ,PUBLISHED articles ,PUBLICATIONS ,CHRONIC disease treatment ,ECONOMIC aspects of diseases ,SYSTEMATIC reviews - Abstract
Background: Although there has been growing research on the burden of treatment, the current state of evidence on measuring this concept is unknown. This scoping review aimed to provide an overview of the current state of knowledge, as well as clear recommendations for future research, within the context of chronic disease. Methods: Four health-based databases, Scopus, CINAHL, Medline, and PsycINFO, were comprehensively searched for peer-reviewed articles published between 2000 and 2016. Titles and abstracts were independently read by two authors. All discrepancies between the authors were resolved by a third author. Data were extracted using a standardized proforma, and a comparison analysis was used to explore the key treatment burden measures and categorize them into three groups. Results: Database searching identified 1458 potential papers. After removal of duplications and of irrelevant articles by title, 1102 abstracts remained. An additional 22 papers were added via snowball searching. In the end, 101 full papers were included in the review. A large number of the studies involved quantitative measures and conceptualizations of treatment burden (n = 64; 63.4%), and were conducted in North America (n = 49; 48.5%). There was significant variation in how the treatment burden experienced by those with chronic disease was operationalized and measured. Conclusion: Despite significant work, there is still much ground to cover to comprehensively measure treatment burden for chronic disease. Future work should include a greater qualitative focus, more research with cultural and minority populations, a larger emphasis on longitudinal studies, and consideration of the potential effects of "identity" on treatment burden. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
44. An investigation of the constancy of effect in Cochrane systematic reviews in context with the assumptions for noninferiority trials.
- Author
-
Duro, Enass M., Julious, Steven A., and Ren, Shijie
- Abstract
When designing a noninferiority (NI) study, one of the most important steps is to set the NI limit. The NI limit is the acceptable loss of efficacy for a new investigative treatment compared to an active control treatment, often standard care. The limit should be a value so small that the loss of efficacy is clinically negligible. One approach sets the NI limit such that an effect over placebo can be shown through an indirect comparison with historical placebo-controlled trials in which the active control treatment was compared to placebo. In this context, the setting of the NI limit depends on three assumptions: assay sensitivity, bias minimisation, and the constancy assumption. The last of these assumes that the effect of the active control over placebo is constant over time. This paper aims to assess the constancy assumption in placebo-controlled trials.
Methods: 236 Cochrane reviews of placebo-controlled trials published in 2015-2016 were collected and used to assess the relation between the placebo, the active treatment, and the standardised mean difference (SMD) over time (year of publication). Results: The analysis showed that both the size of the study and the treatment effect were associated with year of publication. The three main variables that affect the estimate of any future trial are the estimate from the meta-analysis of trials conducted prior to the trial, the year difference within the meta-analysis, and the year the trial was conducted. The regression analysis showed that an increase of one unit in the point estimate of the historical meta-analysis would lead to an increase in the predicted estimate of a future trial on the SMD scale of 0.88. This result suggests that final trial results are 12% smaller than those from the meta-analysis of trials up to that point. Conclusion: The results of this study indicate that the assumption of constancy of the treatment difference between the active control and placebo can be questioned. It is therefore important to consider the effect of time when estimating the treatment response if indirect comparisons are used as the basis of an NI limit. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
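The entry above bears on how NI margins are derived from historical active-vs-placebo evidence. A common fixed-margin style calculation takes a conservative bound of the historical effect and preserves a fraction of it; the sketch below additionally applies the paper's 0.88 shrinkage as an illustrative non-constancy discount. The function and all numbers are assumptions for illustration, not the paper's method:

```python
def ni_margin(effect, se, preserve=0.5, discount=0.88, z=1.96):
    """Fixed-margin style NI limit from a historical meta-analysis of the
    active control vs placebo (effect > 0 favours the active control):
    take the conservative CI bound, discount it for possible non-constancy,
    and allow the new treatment to lose at most (1 - preserve) of it."""
    lower = effect - z * se          # conservative bound of the historical effect
    discounted = discount * lower    # shrink, e.g. by the paper's 0.88 over time
    return (1 - preserve) * discounted

# Hypothetical: historical SMD of 0.40 (SE 0.05), preserving half the effect.
margin = ni_margin(0.40, 0.05)       # ≈ 0.133 on the SMD scale
```

Without the discount step the margin would be larger, which is the practical risk the paper identifies: treating the historical effect as constant can make the NI limit too generous.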
45. Development of a factorial survey for use in an international study examining clinicians' likelihood to support the decision to initiate invasive long-term ventilation for a child (the TechChild study).
- Author
-
Quirke, Mary Brigid, Alexander, Denise, Masterson, Kate, Greene, Jo, Walsh, Cathal, Leroy, Piet, Berry, Jay, Polikoff, Lee, and Brenner, Maria
- Abstract
Background: The decision to initiate invasive long-term ventilation for a child with complex medical needs can be extremely challenging. TechChild is a research programme that aims to explore the liminal space between initial consideration of such technology dependence and the final decision. This paper presents a best practice example of the development of a unique use of the factorial survey method to identify the main influencing factors in this critical juncture in a child's care.Methods: We developed a within-subjects design factorial survey. In phase 1 (design) we defined the survey goal (dependent variable, mode and sample). We defined and constructed the factors and factor levels (independent variables) using previous qualitative research and existing scientific literature. We further refined these factors based on expert feedback from expert clinicians and a statistician. In phase two (pretesting), we subjected the survey tool to several iterations (cognitive interviewing, face validity testing, statistical review, usability testing). In phase three (piloting) testing focused on feasibility testing with members of the target population (n = 18). Ethical approval was obtained from the then host institution's Health Sciences Ethics Committee.Results: Initial refinement of factors was guided by literature and interviews with clinicians and grouped into four broad categories: Clinical, Child and Family, Organisational, and Professional characteristics. Extensive iterative consultations with clinical and statistical experts, including analysis of cognitive interviews, identified best practice in terms of appropriate: inclusion and order of clinical content; cognitive load and number of factors; as well as language used to suit an international audience. The pilot study confirmed feasibility of the survey. 
The final survey comprised a 43-item online tool including two age-based sets of clinical vignettes, eight of which were randomly presented to each participant from a total vignette population of 480. Conclusions: This paper clearly explains the processes involved in the development of a factorial survey for the online environment that is internationally appropriate, relevant, and useful to research an increasingly important subject in modern healthcare. This paper provides a framework for researchers to apply a factorial survey approach in wider health research, making this underutilised approach more accessible to a wider audience. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
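The random presentation of eight vignettes from a pool of 480, as described in the abstract above, amounts to simple sampling without replacement. A minimal sketch follows; function and parameter names are hypothetical, and the real survey additionally stratified vignettes into two age-based sets, which this toy version omits.

```python
import random

def assign_vignettes(pool_size=480, per_participant=8, seed=None):
    """Draw one participant's vignette subset without replacement.

    Illustrative only: the published survey also split vignettes into
    two age-based sets before random presentation.
    """
    rng = random.Random(seed)
    return rng.sample(range(pool_size), per_participant)
```

Sampling without replacement guarantees no participant sees the same vignette twice, while each draw leaves the pool-level presentation probabilities uniform.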
46. The difference between concealment and blinding in clinical trials and why both are important. A reply to Garg and Mickenautsch, BMC Medical Research Methodology (2022) 22:17.
- Author
- Lyden, Patrick, Broderick, Joseph, Grotta, James, Kwiatkowski, Thomas, Levine, Steven, Frankel, Michael, Haley, E. Clarke, and Tilley, Barbara
- Subjects
ISCHEMIC stroke, CLINICAL trials, MEDICAL research, RESEARCH methodology, THROMBOLYTIC therapy - Abstract
The letter responds to a paper that claims there was a risk of selection bias in the NINDS rt-PA for Acute Ischemic Stroke Study. The authors of the letter argue that the paper contains factual errors and misconceptions about the study. They clarify that there were errors in the randomization process, but these errors did not contribute to bias in the treatment effect. The authors also explain the difference between concealment and blinding in clinical trials and refute the suggestion that treatment assignment was manipulated. They emphasize the benefits of thrombolytic therapy for acute ischemic stroke and urge readers to review the literature on the topic. [Extracted from the article]
- Published
- 2023
- Full Text
- View/download PDF
47. Flexible Bayesian semiparametric mixed-effects model for skewed longitudinal data.
- Author
- Ferede, Melkamu M., Dagne, Getachew A., Mwalili, Samuel M., Bilchut, Workagegnehu H., Engida, Habtamu A., and Karanja, Simon M.
- Subjects
PANEL analysis, RANDOM effects model, CHRONIC kidney failure, GAUSSIAN distribution, GLOMERULAR filtration rate - Abstract
Background: In clinical trials and epidemiological research, mixed-effects models are commonly used to examine population-level and subject-specific trajectories of biomarkers over time. Despite their increasing popularity and application, the specification of these models necessitates a great deal of care when analysing longitudinal data with non-linear patterns and asymmetry. Parametric (linear) mixed-effect models may not capture these complexities flexibly and adequately. Additionally, assuming a Gaussian distribution for random effects and/or model errors may be overly restrictive, as it lacks robustness against deviations from symmetry. Methods: This paper presents a semiparametric mixed-effects model with flexible distributions for complex longitudinal data in the Bayesian paradigm. The non-linear time effect on the longitudinal response was modelled using a spline approach. The multivariate skew-t, a more flexible distribution, is utilized to relax the normality assumptions associated with both random effects and model errors. Results: Simulation studies were conducted to assess the effectiveness of the proposed methods in various model settings. We then applied these models to chronic kidney disease (CKD) data and assessed the relationship between covariates and estimated glomerular filtration rate (eGFR). First, we compared the proposed semiparametric partially linear mixed-effect (SPPLM) model with the fully parametric one (FPLM), and the results indicated that the SPPLM model outperformed the FPLM model. We then further compared four different SPPLM models, each assuming different distributions for the random effects and model errors. The model with a skew-t distribution exhibited a superior fit to the CKD data compared to the Gaussian model.
The findings from the application revealed that hypertension, diabetes, and follow-up time had a substantial association with kidney function, specifically leading to a decrease in GFR estimates. Conclusions: The application and simulation studies have demonstrated that our work has made a significant contribution towards a more robust and adaptable methodology for modeling intricate longitudinal data. We achieved this by proposing a semiparametric Bayesian modeling approach with a spline smoothing function and a skew-t distribution. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
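The skew-t distribution the abstract relies on generalizes the Student-t with a skewness parameter. A minimal univariate sketch of the Azzalini-type skew-t density is below (the paper uses a multivariate version inside a Bayesian mixed-effects model; this standalone density is only meant to show how the skewness parameter perturbs the symmetric t).

```python
import numpy as np
from scipy import stats

def skew_t_pdf(x, alpha=0.0, nu=5.0):
    """Azzalini-type skew-t density with location 0 and scale 1.

    alpha controls skewness (alpha = 0 recovers the symmetric Student-t);
    nu is the degrees of freedom governing tail heaviness.
    """
    x = np.asarray(x, dtype=float)
    w = alpha * x * np.sqrt((nu + 1.0) / (nu + x ** 2))
    return 2.0 * stats.t.pdf(x, df=nu) * stats.t.cdf(w, df=nu + 1.0)
```

With alpha = 0 the cdf factor equals 1/2 and the symmetric t density is recovered, which is why the skew-t can relax a normality (or symmetry) assumption without abandoning it as a special case.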
48. A Bayesian Bernoulli-Exponential joint model for binary longitudinal outcomes and informative time with applications to bladder cancer recurrence data.
- Author
- Oduro, Michael Safo
- Subjects
CANCER relapse, BLADDER cancer, PANEL analysis, DISEASE relapse, SAMPLE size (Statistics) - Abstract
Background: A variety of methods exist for the analysis of longitudinal data, many of which are characterized by the assumption of fixed visit time points for study individuals. This, however, is not always a tenable assumption. Phenomena that alter subject visit patterns, such as adverse events due to the investigative treatment administered, travel, or other emergencies, may result in unbalanced data and varying individual visit time points. Visit times can be considered informative, because subsequent or current subject outcomes can change or be adapted due to previous subject outcomes. Methods: In this paper, a Bayesian Bernoulli-Exponential model for jointly analyzing binary outcomes and exponentially distributed informative visit times is developed. Via statistical simulations, the influence of controlled variations in visit patterns, prior and sample size schemes on model performance is assessed. As an application example, the proposed model is applied to bladder cancer recurrence data. Results and conclusions: Results from the simulation analysis indicated that the Bayesian Bernoulli-Exponential joint model converged to stationarity and performed relatively better for small to medium sample size scenarios with less varying time sequences, regardless of the choice of prior. In larger samples, the model performed better for less varying time sequences. This model's application to the bladder cancer data showed a statistically significant effect of prior tumor recurrence on the probability of subsequent recurrences. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
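A toy data-generating sketch of the joint structure the abstract describes — binary outcomes paired with exponential gap times whose rate depends on the previous outcome, making visit times informative — might look as follows. Parameter names and the specific dependence form are illustrative assumptions, not the paper's exact model.

```python
import random

def simulate_subject(n_visits=10, base_rate=1.0, rate_shift=0.5,
                     p_event=0.3, seed=None):
    """Simulate one subject: Bernoulli outcomes at visit times with
    exponentially distributed gaps whose rate depends on the previous
    outcome (an informative visit-time mechanism).

    Illustrative only: base_rate, rate_shift and the linear rate
    dependence are invented for this sketch.
    """
    rng = random.Random(seed)
    times, outcomes = [], []
    t, prev = 0.0, 0
    for _ in range(n_visits):
        rate = base_rate + rate_shift * prev  # previous outcome shifts the visit rate
        t += rng.expovariate(rate)            # exponential gap to the next visit
        y = 1 if rng.random() < p_event else 0
        times.append(t)
        outcomes.append(y)
        prev = y
    return times, outcomes
```

Because the gap-time rate changes with the last outcome, ignoring the visit process when modelling the binary outcomes would discard information, which is the motivation for modelling the two jointly.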
49. Selecting a randomization method for a multi-center clinical trial with stochastic recruitment considerations.
- Author
- Sverdlov, Oleksandr, Ryeznik, Yevgen, Anisimov, Volodymyr, Kuznetsova, Olga M., Knight, Ruth, Carter, Kerstine, Drescher, Sonja, and Zhao, Wenle
- Subjects
PATIENT selection, CLINICAL trials, MONTE Carlo method, RANDOMIZED controlled trials, DYNAMIC balance (Mechanics) - Abstract
Background: The design of a multi-center randomized controlled trial (RCT) involves multiple considerations, such as the choice of the sample size, the number of centers and their geographic location, the strategy for recruitment of study participants, amongst others. There are plenty of methods to sequentially randomize patients in a multi-center RCT, with or without considering stratification factors. The goal of this paper is to perform a systematic assessment of such randomization methods for a multi-center 1:1 RCT assuming a competitive policy for the patient recruitment process. Methods: We considered a Poisson-gamma model for the patient recruitment process with a uniform distribution of center activation times. We investigated 16 randomization methods (4 unstratified, 4 region-stratified, 4 center-stratified, 3 dynamic balancing randomization (DBR), and a complete randomization design) to sequentially randomize n = 500 patients. Statistical properties of the recruitment process and the randomization procedures were assessed using Monte Carlo simulations. The operating characteristics included time to complete recruitment, number of centers that recruited a given number of patients, several measures of treatment imbalance and estimation efficiency under a linear model for the response, the expected proportions of correct guesses under two different guessing strategies, and the expected proportion of deterministic assignments in the allocation sequence. Results: Maximum tolerated imbalance (MTI) randomization methods such as big stick design, Ehrenfest urn design, and block urn design result in a better balance–randomness tradeoff than the conventional permuted block design (PBD) with or without stratification. Unstratified randomization, region-stratified randomization, and center-stratified randomization provide control of imbalance at a chosen level (trial, region, or center) but may fail to achieve balance at the other two levels. 
By contrast, DBR does a very good job controlling imbalance at all 3 levels while maintaining the randomized nature of treatment allocation. Adding more centers into the study helps accelerate the recruitment process but at the expense of increasing the number of centers that recruit very few (or no) patients—which may increase center-level imbalances for center-stratified and DBR procedures. Increasing the block size or the MTI threshold(s) may help obtain designs with improved randomness–balance tradeoff. Conclusions: The choice of a randomization method is an important component of planning a multi-center RCT. Dynamic balancing randomization with carefully chosen MTI thresholds could be a very good strategy for trials with the competitive policy for patient recruitment. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
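The big stick design named in the abstract is one of the simplest maximum-tolerated-imbalance (MTI) procedures: toss a fair coin unless the A-minus-B imbalance has reached the MTI threshold, in which case the next assignment deterministically goes to the lagging arm. A minimal 1:1 two-arm sketch (function name and default MTI value are illustrative):

```python
import random

def big_stick_sequence(n, mti=3, seed=None):
    """Generate a 1:1 allocation sequence under the big stick design.

    A fair coin is used while |#A - #B| < mti; once the imbalance hits
    the MTI boundary, the next patient is forced to the lagging arm.
    """
    rng = random.Random(seed)
    seq, imbalance = [], 0  # imbalance = (#A - #B)
    for _ in range(n):
        if imbalance >= mti:
            arm = 'B'           # forced assignment at the boundary
        elif imbalance <= -mti:
            arm = 'A'
        else:
            arm = 'A' if rng.random() < 0.5 else 'B'
        imbalance += 1 if arm == 'A' else -1
        seq.append(arm)
    return seq
```

Raising the MTI threshold reduces the share of deterministic (guessable) assignments at the cost of allowing larger interim imbalance, which is exactly the balance-randomness tradeoff the abstract evaluates.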
50. Smoothed quantile residual life regression analysis with application to the Korea HIV/AIDS cohort study.
- Author
- Kim, Soo Min, Choi, Yunsu, Kang, Sangwook, and HIV/AIDS cohort study, Korea
- Subjects
REGRESSION analysis, AIDS, HIV, CD4 lymphocyte count, QUANTILE regression - Abstract
Background: The residual life of a patient with human immunodeficiency virus (HIV) is of major interest to patients and their physicians. While existing analyses of HIV patient survival focus mostly on data collected at baseline, residual life analysis allows for dynamic analysis based on additional data collected over a period of time. As survival times typically exhibit a right-skewed distribution, the median provides a more useful summary of the underlying distribution than the mean. In this paper, we propose an efficient inference procedure that fits a semiparametric quantile regression model assessing the effect of longitudinal biomarkers on the residual life of HIV patients until the development of dyslipidemia, a disease becoming more prevalent among those with HIV. Methods: For estimation of model parameters, we propose an induced smoothing method that smooths nonsmooth estimating functions based on check functions. For variance estimation, we propose an efficient resampling-based estimator. The proposed estimators are theoretically justified. Simulation studies are used to evaluate their finite sample performances, including their prediction accuracy. We analyze the Korea HIV/AIDS cohort study data to examine the effects of CD4 (cluster of differentiation 4) cell count on the residual life of HIV patients to the onset of dyslipidemia. Results: The proposed estimator is shown to be consistent and asymptotically normally distributed. Under various simulation settings, our estimates are approximately unbiased. Their variance estimates are close to the empirical variances, and their computational efficiency is superior to that of the nonsmooth counterparts. Two measures of prediction performance indicate that our method adequately reflects the dynamic character of longitudinal biomarkers and residual life.
The analysis of the Korea HIV/AIDS cohort study data shows that CD4 cell count is positively associated with residual life to the onset of dyslipidemia but the effect is not statistically significant. Conclusions: Our method enables direct prediction of residual lifetimes with a dynamic feature that accommodates data accumulated at different times. Our estimator significantly improves computational efficiency in variance estimation compared to the existing nonsmooth estimator. Analysis of the HIV/AIDS cohort study data reveals dynamic effects of CD4 cell count on the residual life to the onset of dyslipidemia. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
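The induced smoothing idea in the abstract — replacing the non-differentiable indicator in a check-function-based estimating equation with a normal CDF — can be illustrated on the simplest case, a sample quantile. This is a toy version under that simplification; the paper applies the device to a semiparametric residual-life regression model, and the bandwidth value here is an arbitrary choice.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def smoothed_quantile_score(theta, data, tau, h=0.1):
    """Induced-smoothed estimating function for the tau-th sample quantile.

    The nonsmooth score sum(tau - I(y <= theta)) has its indicator
    replaced by Phi((theta - y) / h); as h -> 0 the smoothed score
    converges back to the nonsmooth one, but for h > 0 it is
    differentiable in theta, which eases root-finding and variance
    estimation.
    """
    return sum(tau - normal_cdf((theta - y) / h) for y in data)
```

The estimator is the root of this score in theta; because the smoothed score is monotone and differentiable, standard derivative-based solvers apply, which is the source of the computational efficiency gains the abstract reports.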