2,180 results for "Epidemiologic Methods"
Search Results
2. The effect of disease misclassification on the ability to detect a gene-environment interaction: implications of the specificity of case definitions for research on Gulf War illness
- Author
-
Robert W. Haley, Jill A. Dever, Gerald Kramer, and John F. Teiber
- Subjects
Persian Gulf syndrome, Epidemiologic methods, Research design, Sensitivity and specificity, Statistical power, Environmental exposure, Medicine (General), R5-920 - Abstract
Abstract Background Since 1997, research on Gulf War illness (GWI) has predominantly used 3 case definitions—the original Research definition, the CDC definition, and modifications of the Kansas definition—but they have not been compared against an objective standard. Methods All 3 case definitions were measured in the U.S. Military Health Survey by a computer-assisted telephone interview in a random sample (n = 6,497) of the 1991 deployed U.S. military force. The interview asked whether participants had heard nerve agent alarms during the conflict. A random subsample (n = 1,698) provided DNA for genotyping the PON1 Q192R polymorphism. Results The CDC and the Modified Kansas definition without exclusions were satisfied by 41.7% and 39.0% of the deployed force, respectively, and were highly overlapping. The Research definition, a subset of the others, was satisfied by 13.6%. The majority of veterans meeting CDC and Modified Kansas endorsed fewer and milder symptoms, whereas those meeting Research endorsed more symptoms of greater severity. The group meeting Research was more highly enriched with the PON1 192R risk allele than those meeting CDC and Modified Kansas, and Research had twice the power to detect the previously described gene-environment interaction between hearing alarms and RR homozygosity (adjusted relative excess risk due to interaction [aRERI] = 7.69; 95% CI 2.71–19.13) compared with CDC (aRERI = 2.92; 95% CI 0.96–6.38), Modified Kansas without exclusions (aRERI = 3.84; 95% CI 1.30–8.52), or Modified Kansas with exclusions (aRERI = 3.42; 95% CI 1.20–7.56). The lower power of CDC and Modified Kansas relative to Research was due to greater false-positive disease misclassification from lower diagnostic specificity. Conclusions The original Research case definition had greater statistical power to detect a genetic predisposition to GWI.
Its greater specificity favors its use in hypothesis-driven research, whereas the greater sensitivity of the others favors their use in clinical screening for the application of future diagnostic biomarkers and in clinical care.
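The abstract's key interaction measure, the relative excess risk due to interaction (RERI), has a simple closed form. A minimal sketch in Python, using illustrative relative risks rather than the study's data:

```python
def reri(rr_both: float, rr_gene_only: float, rr_env_only: float) -> float:
    """Relative excess risk due to interaction on the additive scale.

    RERI = RR11 - RR10 - RR01 + 1, where RR11 is the relative risk with
    both factors present and RR10/RR01 the risks with one factor alone,
    all relative to the doubly unexposed group. RERI > 0 suggests
    super-additive (synergistic) interaction.
    """
    return rr_both - rr_gene_only - rr_env_only + 1.0


# Hypothetical relative risks, not taken from the study:
print(reri(8.0, 2.0, 1.5))  # 8.0 - 2.0 - 1.5 + 1 = 5.5
```

Confidence intervals for RERI (as reported in the abstract) require additional machinery, e.g. the delta method or bootstrapping over the fitted model.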
- Published
- 2023
- Full Text
- View/download PDF
3. Exposing additional authors who suppress evidence about radiation-induced thyroid cancer in children: a Comment adding to Tsuda et al.’s response to Schüz et al. (2023)
- Author
-
Colin L. Soskolne
- Subjects
Bias, Epidemiologic methods, Evidence-based public health, Fukushima, Nuclear fallout, Professional ethics, Industrial medicine. Industrial hygiene, RC963-969, Public aspects of medicine, RA1-1270 - Abstract
Abstract Background The need to call out and expose authors for their persistence in improperly using epidemiology has been previously noted. Tsuda et al. have done well to expose Schüz et al.'s arguments/assertions in their recent publication in Environmental Health. In this Comment, I point out that the arguments/assertions of Cléro et al. also warrant being called out: in their recent response to an article by Tsuda et al., they reiterated the conclusions and recommendations derived from their European project, which were published in Environment International in 2021. Tsuda et al. had critiqued the Cléro et al. 2021 publication in their 2022 review article. However, in their response to it, Cléro et al. deflected by not addressing any of the key points that Tsuda et al. had made in their review regarding the aftermath of the Chernobyl and Fukushima nuclear accidents. In this Comment, I critique Cléro et al.'s inadequate response. Publication of this Comment will help in rooting out the improper use of epidemiology in the formulation of public health policy and thereby reduce the influence of misinformation on both science and public policy. My critique of Cléro et al. is not dissimilar from Tsuda et al.'s critique of Schüz et al.: inasmuch as Schüz et al. should withdraw their work, so should Cléro et al.'s article be retracted. Main body The response by Cléro et al. consists of four paragraphs. First was their assertion that the purpose of the SHAMISEN project was to make recommendations based on scientific evidence and that it was not a systematic review of all related articles. I point out that the Cléro et al. recommendations were not based on objective scientific evidence, but on biased studies. In the second paragraph, Cléro et al. reaffirmed the SHAMISEN Consortium report, which claimed that the overdiagnosis observed in non-exposed adults was applicable to children because children are mirrors of adults.
However, the authors of that report withheld statements about secondary examinations in Fukushima that provided evidence against overdiagnosis. In the third paragraph, Cléro et al. provided an explanation regarding their disclosure of conflicting interests that was contrary to professional norms for transparency and thus was unacceptable. Finally, their insistence that the Tsuda et al. study was an ecological study susceptible to "the ecological fallacy" indicated their lack of epidemiological knowledge about ecological studies. Ironically, many of the papers cited by Cléro et al. regarding overdiagnosis were, in fact, ecological studies. Conclusion Cléro et al. and the SHAMISEN Consortium should withdraw their recommendation "not to launch a mass thyroid cancer screening after a nuclear accident, but rather to make it available (with appropriate information counselling) to those who request it." Their recommendation is based on biased evidence and would cause confusion regarding public health measures following a nuclear accident. Those authors should, in my assessment, acquaint themselves with modern epidemiology and evidence-based public health. As Tsuda et al. recommended of Schüz et al., Cléro et al. ought also to retract their article.
- Published
- 2023
- Full Text
- View/download PDF
6. Quest markup for developing FAIR questionnaire modules for epidemiologic studies
- Author
-
Daniel E. Russ, Nicole M. Gerlanc, Brian Shen, Bhaumik Patel, Amy Berrington de González, Neal D. Freedman, Julie M. Cusack, Mia M. Gaudet, Montserrat García-Closas, and Jonas S. Almeida
- Subjects
Surveys and questionnaires, Data collection, Data commons, Data science, Epidemiologic methods, Computer applications to medicine. Medical informatics, R858-859.7 - Abstract
Abstract Background Online questionnaires are commonly used to collect information from participants in epidemiological studies. This requires building questionnaires in machine-readable formats that can be delivered to study participants using web-based technologies such as progressive web applications. However, the paucity of open-source markup standards with support for complex logic makes collaborative development of web-based questionnaire modules difficult. This often prevents interoperability and reusability of questionnaire modules across epidemiological studies. Results We developed Quest, an open-source markup language for presenting questionnaire content and logic, within a real-time renderer that enables the user to test logic (e.g., skip patterns) and view the structure of data collection. We provide the Quest markup language, an in-browser markup rendering tool, a questionnaire development tool, and an example web application that embeds the renderer, developed for The Connect for Cancer Prevention Study. Conclusion A markup language can specify both the content and logic of a questionnaire as plain text. Questionnaire markup such as Quest can become a standard format for storing questionnaires or sharing them across the web. Quest is a step toward generating FAIR data in epidemiological studies by facilitating reusability of questionnaires and data interoperability using open-source tools.
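The general idea of machine-readable questions with skip logic, which the abstract describes, can be sketched in a few lines. This is an illustrative toy representation only; the structure and field names below are hypothetical and are not the actual Quest markup syntax:

```python
# Hypothetical questionnaire representation: each question carries its
# text, optional skip rules mapping an answer to the next question id,
# and a default next question (None ends the questionnaire).
QUESTIONS = {
    "q1": {"text": "Do you currently smoke?", "skip": {"no": "q3"}, "next": "q2"},
    "q2": {"text": "How many cigarettes per day?", "next": "q3"},
    "q3": {"text": "What is your age?", "next": None},
}


def next_question(current_id, answer):
    """Return the id of the next question, honoring skip patterns."""
    q = QUESTIONS[current_id]
    return q.get("skip", {}).get(answer, q["next"])


print(next_question("q1", "no"))   # skips the smoking-amount question: q3
print(next_question("q1", "yes"))  # falls through to q2
```

A real-time renderer such as the one described can walk this structure interactively, letting questionnaire authors verify skip patterns before deployment.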
- Published
- 2023
- Full Text
- View/download PDF
7. Implementation of web-based respondent driven sampling in epidemiological studies
- Author
-
Pedro Ferrer-Rosende, María Feijoo-Cid, María Isabel Fernández-Cano, Sergio Salas-Nicás, Valeria Stuardo-Ávila, and Albert Navarro-Giné
- Subjects
Epidemiologic methods, Hard-to-reach populations, Web based sampling, Respondent-driven sampling, WebRDS, Medicine (General), R5-920 - Abstract
Abstract Background Respondent-driven sampling (RDS) is a peer chain-recruitment method for populations that lack a sampling frame or are hard to reach. Although RDS is usually done face-to-face, the online version (WebRDS) has drawn a lot of attention because of its many potential benefits; despite this, to date there is no clear framework for its implementation. This article aims to provide guidance for researchers who want to recruit through WebRDS. Methods Description of the development phase: guidance is provided on the formative research, the design of the questionnaire, the implementation of the coupon system using free software, and the diffusion plan, using as an example a web-based cross-sectional study conducted in Spain between April and June 2022 describing the working conditions and health status of homecare workers for dependent people. Results The application of the survey: we discuss monitoring strategies throughout the recruitment process and potential problems along with proposed solutions. Conclusions Under certain conditions, it is possible to obtain a sample with recruitment performance similar to that of other RDS studies without monetary incentives and using free-access software, considerably reducing costs and allowing the method's use to be extended to other research groups.
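The coupon system at the heart of RDS can be sketched briefly: each respondent receives a fixed number of unique codes to pass to peers, and redemptions record who recruited whom so recruitment waves can be reconstructed. A hypothetical sketch (the study implemented its coupon system with existing free survey software, not code like this):

```python
import secrets

coupons = {}       # coupon code -> id of the participant who issued it
recruiter_of = {}  # participant id -> recruiter id (absent for seeds)


def issue_coupons(participant_id, n=3):
    """Give a participant n single-use coupon codes to hand to peers."""
    codes = [secrets.token_hex(4) for _ in range(n)]
    for code in codes:
        coupons[code] = participant_id
    return codes


def redeem(code, new_participant_id):
    """Record that a new participant entered the study via a coupon."""
    recruiter_of[new_participant_id] = coupons.pop(code)  # single use


def wave(participant_id):
    """Recruitment wave: 0 for seeds, recruiter's wave + 1 otherwise."""
    r = recruiter_of.get(participant_id)
    return 0 if r is None else wave(r) + 1


# A seed recruits one peer, who recruits another:
c1 = issue_coupons("seed")[0]
redeem(c1, "p1")
c2 = issue_coupons("p1")[0]
redeem(c2, "p2")
print(wave("p2"))  # 2
```

Tracking the recruiter chain is what allows RDS estimators to adjust for the non-random, peer-driven sampling process.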
- Published
- 2023
- Full Text
- View/download PDF
8. Real-world data emulating randomized controlled trials of non-vitamin K antagonist oral anticoagulants in patients with venous thromboembolism
- Author
-
Dongwon Yoon, Han Eol Jeong, Sohee Park, Seng Chan You, Soo-Mee Bang, and Ju-Young Shin
- Subjects
Epidemiologic methods, Clinical trials, Anticoagulants, Factor Xa inhibitors, Venous thromboembolism, Medicine - Abstract
Abstract Background Emulating randomized controlled trials (RCTs) with real-world evidence (RWE) studies would benefit future clinical and regulatory decision-making by balancing the limitations of RCTs. We aimed to evaluate whether the findings from RWE studies can support regulatory decisions derived from RCTs of non-vitamin K antagonist oral anticoagulants (NOACs) in patients with venous thromboembolism (VTE). Methods Five landmark trials (AMPLIFY, RE-COVER II, Hokusai-VTE, EINSTEIN-DVT, and EINSTEIN-PE) of NOACs were emulated using the South Korean nationwide claims database (January 2012 to August 2020). We applied an active-comparator, new-user design to include patients who initiated oral anticoagulants within 28 days of their VTE diagnoses. The prespecified eligibility criteria, exposure (each NOAC: apixaban, rivaroxaban, dabigatran, and edoxaban), comparator (conventional therapy, defined as subcutaneous heparin followed by warfarin), and the definition of outcomes from the RCTs were emulated as closely as possible in each separate emulation cohort. The primary outcome was identical to each trial's and was defined as recurrent VTE or VTE-related death. The safety outcome was major bleeding. Propensity score matching was conducted to balance 69 covariates between the exposure groups. Effect estimates for outcomes were estimated using the Mantel–Haenszel method and the Cox proportional hazards model and subsequently compared with the corresponding RCT estimates. Results Compared to the trial populations, the real-world study populations were older (range: 63–69 years [RWE] vs. 54–59 years [RCT]), included more females (55–60.5% vs. 39–48.3%), and had a higher prevalence of active cancer (4.2–15.4% vs. 2.5–9.5%).
The emulated estimates for effectiveness outcomes showed superior effectiveness of NOACs (AMPLIFY: relative risk 0.81, 95% confidence interval 0.70–0.94; RE-COVER II: hazard ratio [HR] 0.60, 0.37–0.96; Hokusai-VTE: 0.49, 0.31–0.78; EINSTEIN-DVT: 0.54, 0.33–0.89; EINSTEIN-PE: 0.50, 0.34–0.74), in contrast with the trials, which showed non-inferiority. For safety outcomes, all emulations except AMPLIFY and EINSTEIN-DVT yielded results consistent with their corresponding RCTs. Conclusions This study demonstrated the feasibility of complementing RCTs with RWE studies using claims data in patients with VTE. Future studies should consider the different demographic characteristics of RCT and RWE populations.
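The propensity score matching step described above can be illustrated with a toy 1:1 greedy nearest-neighbor matcher on precomputed scores. This is a generic sketch of the technique, not the study's actual procedure (which balanced 69 covariates from claims data):

```python
def nn_match(treated, controls, caliper=0.05):
    """Greedy 1:1 nearest-neighbor matching on propensity scores.

    treated, controls: dicts mapping subject id -> propensity score.
    Returns (treated_id, control_id) pairs whose score difference is
    within the caliper; each control is used at most once.
    """
    available = dict(controls)
    pairs = []
    # Matching in order of treated score is one common greedy strategy.
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_ps))
        if abs(available[c_id] - t_ps) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]  # without replacement
    return pairs


treated = {"t1": 0.30, "t2": 0.70}
controls = {"c1": 0.32, "c2": 0.68, "c3": 0.95}
print(nn_match(treated, controls))  # [('t1', 'c1'), ('t2', 'c2')]
```

In practice the scores come from a fitted model (e.g. logistic regression of treatment on the covariates), and balance is checked after matching, typically with standardized mean differences.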
- Published
- 2023
- Full Text
- View/download PDF
9. Mistaken information can lead only to misguided conclusions and policies: a commentary regarding Schüz et al.’s response
- Author
-
Toshihide Tsuda, Yumiko Miyano, and Eiji Yamamoto
- Subjects
Epidemiologic methods, Thyroid cancer, Fukushima, Evidence, Nuclear accident, Radiation health effects, Industrial medicine. Industrial hygiene, RC963-969, Public aspects of medicine, RA1-1270 - Abstract
Abstract Background After reviewing selected scientific evidence, Schüz et al. made two recommendations in the 2018 International Agency for Research on Cancer (IARC) Technical Publication No. 46. Their first recommendation was against population thyroid screening after a nuclear accident, and the second was that consideration be given to offering a long-term thyroid monitoring program for higher-risk individuals (100–500 mGy or more radiation) after a nuclear accident. However, their review of the scientific evidence was inadequate and misrepresented the information from both Chernobyl and Fukushima. We wrote a review article published in Environmental Health in 2022 using the "Toolkit for detecting misused epidemiological methods." Schüz et al. critiqued our 2022 review article in 2023; their critique, based also on their 2018 IARC Technical Publication No. 46, was so fraught with problems that we developed this response. Main body Schüz et al. suggest that hundreds of thyroid cancer cases in children and adolescents, detected through population thyroid examinations using ultrasound echo and conducted since October 2011 in Fukushima, were not caused by the 2011 Fukushima Daiichi Nuclear Power Plant accident. Schüz et al. compared thyroid cancers in Fukushima directly with those in Chernobyl after April 1986 and listed up to five reasons to deny a causal relationship between radiation and thyroid cancers in Fukushima; however, we dismiss those reasons based on the available evidence. No new scientific evidence was presented in their response to our commentary, in which we pointed out that misinformation and biased scientific evidence had formed the basis of their arguments. Their published article provided erroneous information on Fukushima. The article implied overdiagnosis in adults and suggested that overdiagnosis would apply to current Fukushima cases.
The IARC report did not validate the secondary confirmatory examination in the program, which obscures the fact that overdiagnosis may not have occurred as much in Fukushima. The report consequently precluded the provision of important information and measures. Conclusion Information provided in the IARC Technical Publication No. 46 was based on selected scientific evidence, resulting in both public and policy-maker confusion regarding past and present nuclear accidents, especially in Japan. It should be withdrawn.
- Published
- 2023
- Full Text
- View/download PDF
10. A Statistical Definition of Epidemic Waves
- Author
-
Levente Kriston
- Subjects
COVID-19, SARS-CoV-2, epidemiologic methods, Bayesian analysis, computational biology, Internal medicine, RC31-1245 - Abstract
The timely identification of expected surges of cases during infectious disease epidemics is essential for allocating resources and preparing interventions. Failing to detect critical phases in time may lead to delayed implementation of interventions and have serious consequences. This study describes a simple way to evaluate whether an epidemic wave is likely to be present based solely on daily new case count data. The proposed measure compares two models that assume exponential or linear dynamics, respectively. The most important assumption of this approach is that epidemic waves are characterized by exponential rather than linear growth in the daily number of new cases. Technically, the coefficients of determination of two regression analyses are used to approximate a Bayes factor, which quantifies the support for the exponential over the linear model and can be used for epidemic wave detection. The trajectory of the coronavirus epidemic in three countries is analyzed and discussed for illustration. The proposed measure detects epidemic waves at an early stage that are otherwise visible only by inspecting the development of case count data retrospectively. Major limitations include missing evidence on generalizability and on performance compared with other methods. Nevertheless, the outlined approach may inform public health decision-making and serve as a starting point for scientific discussions on epidemic waves.
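The core computation, comparing a linear and an exponential (log-linear) fit to daily case counts via their coefficients of determination, can be sketched with plain least squares. The BIC-based Bayes-factor approximation used below is a standard one; the paper's exact formula may differ in detail:

```python
import math


def r_squared(xs, ys):
    """Coefficient of determination of a simple linear regression y ~ x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)


def wave_bayes_factor(counts):
    """Approximate Bayes factor for exponential over linear growth.

    Fits y ~ t (linear) and log(y) ~ t (exponential) and converts the
    two R-squared values into a BIC-based Bayes factor; the parameter
    penalties cancel because both models have two parameters.
    """
    ts = list(range(len(counts)))
    r2_lin = r_squared(ts, counts)
    r2_exp = r_squared(ts, [math.log(c) for c in counts])
    n = len(counts)
    # Guard against a perfect fit making 1 - R^2 exactly zero.
    ratio = max(1e-12, 1 - r2_lin) / max(1e-12, 1 - r2_exp)
    return math.exp((n / 2) * math.log(ratio))


# Roughly exponential daily case counts: BF >> 1 supports a wave.
print(wave_bayes_factor([10, 13, 18, 24, 33, 44, 60]) > 1)  # True
```

A Bayes factor well above 1 favors the exponential model (a wave may be starting); values near or below 1 favor linear or flat dynamics.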
- Published
- 2023
- Full Text
- View/download PDF
13. SAS and R code for probabilistic quantitative bias analysis for misclassified binary variables and binary unmeasured confounders.
- Author
-
Fox, Matthew P, MacLehose, Richard F, and Lash, Timothy L
- Subjects
- *
STATISTICAL bias, *QUANTITATIVE research, *CONFOUNDING variables, *CONFIDENCE intervals - Abstract
Systematic error from selection bias, uncontrolled confounding, and misclassification is ubiquitous in epidemiologic research but is rarely quantified using quantitative bias analysis (QBA). This gap may in part be due to the lack of readily modifiable software to implement these methods. Our objective is to provide computing code that can be tailored to an analyst's dataset. We briefly describe the methods for implementing QBA for misclassification and uncontrolled confounding and present the reader with example code showing how such bias analyses, using both summary-level data and individual record-level data, can be implemented in both SAS and R. Our examples show how adjustment for uncontrolled confounding and misclassification can be implemented. The resulting bias-adjusted point estimates can then be compared to conventional results to see the impact of the bias in terms of its direction and magnitude. Further, we show how 95% simulation intervals can be generated and compared to conventional 95% confidence intervals to see the impact of the bias on uncertainty. Having easy-to-implement code that users can apply to their own datasets will, we hope, spur more frequent use of these methods and prevent poor inferences drawn from studies that do not quantify the impact of systematic error on their results. [ABSTRACT FROM AUTHOR]
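The article's code is in SAS and R; as a rough Python analogue of the simplest summary-level correction, a case count distorted by nondifferential misclassification with known sensitivity and specificity can be back-calculated as follows (illustrative numbers only, not the article's examples):

```python
def adjust_for_misclassification(observed_cases, n, sensitivity, specificity):
    """Back-calculate a true case count from an observed, misclassified one.

    Under nondifferential misclassification,
        observed = true * Se + (n - true) * (1 - Sp),
    which inverts to
        true = (observed - n * (1 - Sp)) / (Se + Sp - 1).
    """
    return (observed_cases - n * (1 - specificity)) / (sensitivity + specificity - 1)


# 100 exposed subjects with 30 true cases, Se = 0.9, Sp = 0.95:
# observed = 30 * 0.9 + 70 * 0.05 = 30.5, and the correction recovers 30.
a1 = adjust_for_misclassification(30.5, 100, 0.9, 0.95)
print(round(a1, 6))  # 30.0

# Corrected counts in both exposure groups yield a bias-adjusted risk ratio:
a0 = adjust_for_misclassification(15.5, 100, 0.9, 0.95)
print((a1 / 100) / (a0 / 100))
```

Probabilistic QBA, as described in the article, repeats this kind of correction many times with Se and Sp drawn from prior distributions, producing a simulation interval rather than a single adjusted estimate.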
- Published
- 2023
- Full Text
- View/download PDF
14. Update and Novel Validation of a Pregnancy Physical Activity Questionnaire.
- Author
-
Chasan-Taber, Lisa, Park, Susan, Marcotte, Robert T, Staudenmayer, John, Strath, Scott, and Freedson, Patty
- Subjects
- *
RESEARCH, *STATISTICS, *RESEARCH methodology evaluation, *SELF-evaluation, *RESEARCH methodology, *ACCELEROMETERS, *COMPARATIVE studies, *ACCELEROMETRY, *PHYSICAL activity, *QUESTIONNAIRES, *EXERCISE intensity, *DESCRIPTIVE statistics, *RESEARCH funding, *STATISTICAL correlation, *DATA analysis, *LONGITUDINAL method, *PREGNANCY, RESEARCH evaluation - Abstract
The aim of this study was to update and validate the Pregnancy Physical Activity Questionnaire (PPAQ), using novel and innovative accelerometer and wearable camera measures in a free-living setting, to improve the measurement performance of this method for self-reporting physical activity. A prospective cohort of 50 eligible pregnant women were enrolled in early pregnancy (mean = 14.9 weeks' gestation). In early, middle, and late pregnancy, participants completed the updated PPAQ and, for 7 days, wore an accelerometer (GT3X-BT; ActiGraph, Pensacola, Florida) on the nondominant wrist and a wearable camera (Autographer; OMG Life (defunct)). At the end of the 7-day period, participants repeated the PPAQ. Spearman correlations between the PPAQ and accelerometer data ranged from 0.37 to 0.44 for total activity, 0.17 to 0.53 for moderate- to vigorous-intensity activity, 0.19 to 0.42 for light-intensity activity, and 0.23 to 0.45 for sedentary behavior. Spearman correlations between the PPAQ and wearable camera data ranged from 0.52 to 0.70 for sports/exercise and from 0.26 to 0.30 for transportation activity. Reproducibility scores ranged from 0.70 to 0.92 for moderate- to vigorous-intensity activity and from 0.79 to 0.91 for sports/exercise, and were comparable across other domains of physical activity. The PPAQ is a reliable instrument and a valid measure of a broad range of physical activities during pregnancy. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. Metabolite Stability in Archived Neonatal Dried Blood Spots Used for Epidemiologic Research.
- Author
-
He, Di, Yan, Qi, Uppal, Karan, Walker, Douglas I, Jones, Dean P, Ritz, Beate, and Heck, Julia E
- Subjects
- *
STATISTICAL reliability, *METABOLOMICS, *NUTRITION, *LIQUID chromatography, *BLOOD collection, *HEALTH status indicators, *NICOTINE, *MASS spectrometry, *COTININE, *RESEARCH funding, *COLLECTION & preservation of biological specimens, *EPIDEMIOLOGICAL research, *METABOLITES, *CHILDREN - Abstract
Epidemiologic studies of low-frequency exposures or outcomes using metabolomics analyses of neonatal dried blood spots (DBS) often require assembly of samples with substantial differences in duration of storage. Independent assessment of stability of metabolites in archived DBS will enable improved design and interpretation of epidemiologic research utilizing DBS. Neonatal DBS routinely collected and stored as part of the California Genetic Disease Screening Program between 1983 and 2011 were used. The study population included 899 children without cancer before age 6 years, born in California. High-resolution metabolomics with liquid-chromatography mass spectrometry was performed, and the relative ion intensities of common metabolites and selected xenobiotic metabolites of nicotine (cotinine and hydroxycotinine) were evaluated. In total, we detected 26,235 mass spectral features across 2 separate chromatography methods (C18 hydrophobic reversed-phase chromatography and hydrophilic-interaction liquid chromatography). For most of the 39 metabolites related to nutrition and health status, we found no statistically significant annual trends across the years of storage. Nicotine metabolites were captured in the DBS with relatively stable intensities. This study supports the usefulness of DBS stored long-term for epidemiologic studies of the metabolome. -Omics-based information gained from DBS may also provide a valuable tool for assessing prenatal environmental exposures in child health research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. Translating Predictive Analytics for Public Health Practice: A Case Study of Overdose Prevention in Rhode Island.
- Author
-
Allen, Bennett, Neill, Daniel B, Schell, Robert C, Ahern, Jennifer, Hallowell, Benjamin D, Krieger, Maxwell, Jent, Victoria A, Goedel, William C, Cartus, Abigail R, Yedinak, Jesse L, Pratty, Claire, Marshall, Brandon D L, and Cerdá, Magdalena
- Subjects
- *
CLINICAL decision support systems , *DRUG overdose , *PUBLIC health , *MACHINE learning , *DESCRIPTIVE statistics , *CASE studies , *PREDICTION models , *HEALTH equity , *DATA analytics , *POPULATION health , *HEALTH promotion - Abstract
Prior applications of machine learning to population health have relied on conventional model assessment criteria, limiting the utility of models as decision support tools for public health practitioners. To facilitate practitioners' use of machine learning as a decision support tool for area-level intervention, we developed and applied 4 practice-based predictive model evaluation criteria (implementation capacity, preventive potential, health equity, and jurisdictional practicalities). We used a case study of overdose prevention in Rhode Island to illustrate how these criteria could inform public health practice and health equity promotion. We used Rhode Island overdose mortality records from January 2016–June 2020 (n = 1,408) and neighborhood-level US Census data. We employed 2 disparate machine learning models, Gaussian process and random forest, to illustrate the comparative utility of our criteria to guide interventions. Our models predicted 7.5%–36.4% of overdose deaths during the test period, illustrating the preventive potential of overdose interventions assuming 5%–20% statewide implementation capacities for neighborhood-level resource deployment. We describe the health equity implications of use of predictive modeling to guide interventions along the lines of urbanicity, racial/ethnic composition, and poverty. We then discuss considerations to complement predictive model evaluation criteria and inform the prevention and mitigation of spatially dynamic public health problems across the breadth of practice. This article is part of a Special Collection on Mental Health. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
17. Predicting Seasonal Influenza Hospitalizations Using an Ensemble Super Learner: A Simulation Study.
- Author
-
Gantenberg, Jason R, McConeghy, Kevin W, Howe, Chanelle J, Steingrimsson, Jon, Aalst, Robertus van, Chit, Ayman, and Zullo, Andrew R
- Subjects
- *
COMPUTER simulation , *PUBLIC health surveillance , *MACHINE learning , *PUBLIC health , *SEASONS , *HOSPITAL care , *INFLUENZA , *EPIDEMICS , *RESEARCH funding , *ALGORITHMS - Abstract
Accurate forecasts can inform response to outbreaks. Most efforts in influenza forecasting have focused on predicting influenza-like activity, with fewer on influenza-related hospitalizations. We conducted a simulation study to evaluate a super learner's predictions of 3 seasonal measures of influenza hospitalizations in the United States: peak hospitalization rate, peak hospitalization week, and cumulative hospitalization rate. We trained an ensemble machine learning algorithm on 15,000 simulated hospitalization curves and generated weekly predictions. We compared the performance of the ensemble (weighted combination of predictions from multiple prediction algorithms), the best-performing individual prediction algorithm, and a naive prediction (median of a simulated outcome distribution). Ensemble predictions performed similarly to the naive predictions early in the season but consistently improved as the season progressed for all prediction targets. The best-performing prediction algorithm in each week typically had similar predictive accuracy compared with the ensemble, but the specific prediction algorithm selected varied by week. An ensemble super learner improved predictions of influenza-related hospitalizations, relative to a naive prediction. Future work should examine the super learner's performance using additional empirical data on influenza-related predictors (e.g. influenza-like illness). The algorithm should also be tailored to produce prospective probabilistic forecasts of selected prediction targets. [ABSTRACT FROM AUTHOR]
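In miniature, the ensemble idea above reduces to choosing a convex combination of candidate predictions that minimizes error on training curves, then comparing it with a naive median benchmark. The Python sketch below illustrates this with two candidates and a simple grid search; the function names are hypothetical, and a real super learner optimizes cross-validated weights over many algorithms rather than this toy search:

```python
def naive_prediction(simulated_outcomes):
    """Median of a simulated outcome distribution (the naive benchmark)."""
    s = sorted(simulated_outcomes)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def ensemble_weights(candidate_preds, truth):
    """Grid-search a convex weight for two candidate algorithms that
    minimizes mean squared error on training curves (a toy stand-in
    for the cross-validated optimization a real super learner performs)."""
    best_w, best_mse = 0.0, float("inf")
    for k in range(101):
        w = k / 100
        mse = sum(
            (w * a + (1 - w) * b - t) ** 2
            for (a, b), t in zip(candidate_preds, truth)
        ) / len(truth)
        if mse < best_mse:
            best_w, best_mse = w, mse
    return best_w
```

With more candidates the grid search becomes a constrained least-squares problem, but the principle of weighting learners by held-out performance is the same.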
- Published
- 2023
- Full Text
- View/download PDF
18. Real-world data emulating randomized controlled trials of non-vitamin K antagonist oral anticoagulants in patients with venous thromboembolism.
- Author
-
Yoon, Dongwon, Jeong, Han Eol, Park, Sohee, You, Seng Chan, Bang, Soo-Mee, and Shin, Ju-Young
- Subjects
- *
ORAL medication , *THROMBOEMBOLISM , *PROPORTIONAL hazards models , *PROPENSITY score matching , *DISEASE relapse - Abstract
Background: Emulating randomized controlled trials (RCTs) by real-world evidence (RWE) studies would benefit future clinical and regulatory decision-making by balancing the limitations of RCTs. We aimed to evaluate whether the findings from RWE studies can support regulatory decisions derived from RCTs of non-vitamin K antagonist oral anticoagulants (NOACs) in patients with venous thromboembolism (VTE). Methods: Five landmark trials (AMPLIFY, RE-COVER II, Hokusai-VTE, EINSTEIN-DVT, and EINSTEIN-PE) of NOACs were emulated using the South Korean nationwide claims database (January 2012 to August 2020). We applied an active comparator and new-user design to include patients who initiated oral anticoagulants within 28 days of their VTE diagnosis. The prespecified eligibility criteria, exposure (each NOAC, such as apixaban, rivaroxaban, dabigatran, and edoxaban), comparator (conventional therapy, defined as subcutaneous heparin followed by warfarin), and the definition of outcomes from RCTs were emulated as closely as possible in each separate emulation cohort. The primary outcome, identical to that of each trial, was defined as recurrent VTE or VTE-related death. The safety outcome was major bleeding. Propensity score matching was conducted to balance 69 covariates between the exposure groups. Effect estimates for outcomes were obtained using the Mantel–Haenszel method and Cox proportional hazards models and subsequently compared with the corresponding RCT estimates. Results: Compared to trial populations, real-world study populations were older (range: 63–69 years [RWE] vs. 54–59 years [RCT]), included more females (55–60.5% vs. 39–48.3%), and had a higher prevalence of active cancer (4.2–15.4% vs. 2.5–9.5%). 
The emulated estimates for effectiveness outcomes showed superior effectiveness of NOAC (AMPLIFY: relative risk 0.81, 95% confidence interval 0.70–0.94; RE-COVER II: hazard ratio [HR] 0.60, 0.37–0.96; Hokusai-VTE: 0.49, 0.31–0.78; EINSTEIN-DVT: 0.54, 0.33–0.89; EINSTEIN-PE: 0.50, 0.34–0.74), when contrasted with trials that showed non-inferiority. For safety outcomes, all emulations except for AMPLIFY and EINSTEIN-DVT yielded results consistent with their corresponding RCTs. Conclusions: This study revealed the feasibility of complementing RCTs with RWE studies by using claims data in patients with VTE. Future studies to consider the different demographic characteristics between RCT and RWE populations are needed. [ABSTRACT FROM AUTHOR]
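The propensity score matching step described in the abstract can be illustrated with a minimal greedy 1:1 nearest-neighbor matcher within a caliper. This is a generic sketch with hypothetical names, not the authors' procedure (which balanced 69 covariates):

```python
def greedy_match(treated_ps, control_ps, caliper=0.05):
    """Greedy 1:1 nearest-neighbor propensity score matching.
    Each treated unit takes the closest still-available control
    whose score lies within the caliper; returns (treated, control)
    index pairs. Unmatched treated units are dropped."""
    pairs = []
    available = list(range(len(control_ps)))
    for i, p in enumerate(treated_ps):
        if not available:
            break
        j = min(available, key=lambda k: abs(control_ps[k] - p))
        if abs(control_ps[j] - p) <= caliper:
            pairs.append((i, j))
            available.remove(j)
    return pairs
```

Greedy matching is order-dependent; production analyses often prefer optimal matching or matching on the logit of the propensity score, but the caliper logic is the same.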
- Published
- 2023
- Full Text
- View/download PDF
19. Challenges in Obtaining Valid Causal Effect Estimates With Machine Learning Algorithms.
- Author
-
Naimi, Ashley I, Mishler, Alan E, and Kennedy, Edward H
- Subjects
- *
STATISTICS , *MATHEMATICAL statistics , *NONPARAMETRIC statistics , *CONFOUNDING variables , *PARAMETERS (Statistics) , *MACHINE learning , *SIMULATION methods in education , *ATTRIBUTION (Social psychology) , *STATISTICAL models , *DATA analysis , *ALGORITHMS - Abstract
Unlike parametric regression, machine learning (ML) methods do not generally require precise knowledge of the true data-generating mechanisms. As such, numerous authors have advocated for ML methods to estimate causal effects. Unfortunately, ML algorithms can perform worse than parametric regression. We demonstrate the performance of ML-based singly and doubly robust estimators. We used 100 Monte Carlo samples with sample sizes of 200, 1,200, and 5,000 to investigate bias and confidence-interval coverage under several scenarios. In a simple confounding scenario, confounders were related to the treatment and the outcome via parametric models. In a complex confounding scenario, the simple confounders were transformed to induce complicated nonlinear relationships. In the simple scenario, when ML algorithms were used, doubly robust estimators were superior to singly robust estimators. In the complex scenario, singly robust estimators with ML algorithms were at least as biased as estimators using misspecified parametric models. Doubly robust estimators were less biased, but coverage was well below nominal. The combined use of sample splitting, inclusion of confounder interactions, reliance on a richly specified ML algorithm, and use of doubly robust estimators was the only explored approach that yielded negligible bias and nominal coverage. Our results suggest that ML-based singly robust methods should be avoided. [ABSTRACT FROM AUTHOR]
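The doubly robust construction evaluated here is typified by the augmented inverse probability weighting (AIPW) estimator of the average treatment effect, which remains consistent if either the propensity model or the outcome models are correctly specified. A minimal sketch, assuming the nuisance estimates (propensity scores and outcome predictions) are supplied by previously fitted models:

```python
def aipw_ate(y, a, e_hat, m1_hat, m0_hat):
    """Doubly robust (AIPW) estimate of the average treatment effect.
    y: outcomes; a: binary treatment; e_hat: estimated propensity scores
    P(A=1|X); m1_hat/m0_hat: predicted outcomes under treatment/control.
    The weighted residual terms correct the outcome-model predictions."""
    n = len(y)
    total = 0.0
    for yi, ai, ei, m1, m0 in zip(y, a, e_hat, m1_hat, m0_hat):
        total += (m1 - m0
                  + ai * (yi - m1) / ei
                  - (1 - ai) * (yi - m0) / (1 - ei))
    return total / n
```

When the outcome models fit perfectly, the residual corrections vanish and the estimate reduces to the mean of the predicted contrasts; when the propensity model is correct instead, the weighted residuals absorb the outcome-model bias.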
- Published
- 2023
- Full Text
- View/download PDF
20. Mistaken information can lead only to misguided conclusions and policies: a commentary regarding Schüz et al.'s response.
- Author
-
Tsuda, Toshihide, Miyano, Yumiko, and Yamamoto, Eiji
- Subjects
- *
RADIOACTIVE fallout , *NUCLEAR power plant accidents , *NUCLEAR accidents , *THYROID cancer , *ENVIRONMENTAL health , *CHILDHOOD cancer - Abstract
Background: After reviewing selected scientific evidence, Schüz et al. made two recommendations in the 2018 International Agency for Research on Cancer (IARC) Technical Publication No. 46. Their first recommendation was against population thyroid screening after a nuclear accident, and the second was that consideration be given to offering a long-term thyroid monitoring program for higher-risk individuals (100–500 mGy or more radiation) after a nuclear accident. However, their review of the scientific evidence was inadequate and misrepresented the information from both Chernobyl and Fukushima. We wrote a review article published in Environmental Health in 2022 using the "Toolkit for detecting misused epidemiological methods." Schüz et al. critiqued our 2022 review article in 2023; their critique, based also on their 2018 IARC Technical Publication No. 46, was so fraught with problems that we developed this response. Main body: Schüz et al. suggest that hundreds of thyroid cancer cases in children and adolescents, detected through population thyroid examinations using ultrasound echo and conducted since October 2011 in Fukushima, were not caused by the 2011 Fukushima Daiichi Nuclear Power Plant accident. Schüz et al. compared thyroid cancers in Fukushima directly with those in Chernobyl after April 1986 and listed up to five reasons to deny a causal relationship between radiation and thyroid cancers in Fukushima; however, we dismiss those reasons based on the available evidence. No new scientific evidence was presented in their response to our commentary, in which we pointed out that misinformation and biased scientific evidence had formed the basis of their arguments. Their published article provided erroneous information on Fukushima. The article implied overdiagnosis in adults and suggested that overdiagnosis would apply to current Fukushima cases. 
The IARC report did not validate the secondary confirmatory examination in the program, which obscures the fact that overdiagnosis may not have occurred as much in Fukushima. The report consequently precluded the provision of important information and measures. Conclusion: Information provided in the IARC Technical Publication No. 46 was based on selected scientific evidence, resulting in both public and policy-maker confusion regarding past and present nuclear accidents, especially in Japan. It should be withdrawn. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
21. A Statistical Definition of Epidemic Waves.
- Author
-
Kriston, Levente
- Subjects
EPIDEMICS ,COVID-19 pandemic ,COMMUNICABLE diseases ,REGRESSION analysis ,LEAD time (Supply chain management) - Abstract
The timely identification of expected surges of cases during infectious disease epidemics is essential for allocating resources and preparing interventions. Failing to detect critical phases in time may lead to delayed implementation of interventions and have serious consequences. This study describes a simple way to evaluate whether an epidemic wave is likely to be present based solely on daily new case count data. The proposed measure compares two models that assume exponential or linear dynamics, respectively. The most important assumption of this approach is that epidemic waves are characterized rather by exponential than linear growth in the daily number of new cases. Technically, the coefficient of determination of two regression analyses is used to approximate a Bayes factor, which quantifies the support for the exponential over the linear model and can be used for epidemic wave detection. The trajectory of the coronavirus epidemic in three countries is analyzed and discussed for illustration. The proposed measure detects epidemic waves at an early stage, which are otherwise visible only by inspecting the development of case count data retrospectively. Major limitations include missing evidence on generalizability and performance compared to other methods. Nevertheless, the outlined approach may inform public health decision-making and serve as a starting point for scientific discussions on epidemic waves. [ABSTRACT FROM AUTHOR]
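The model comparison described above can be sketched directly: fit a linear and a log-linear (exponential) regression to a window of daily counts, compute both models' coefficients of determination on the original scale, and convert the ratio of unexplained variances into an approximate Bayes factor. The Python sketch below is an illustration of the idea under the simplifying assumption that both models have equal complexity; it is not the paper's exact estimator, and the function names are hypothetical:

```python
import math

def ols_r2(x, y):
    """Simple linear regression of y on x; returns (intercept, slope, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

def wave_support(cases):
    """Approximate Bayes factor favoring exponential over linear growth
    in a window of daily new-case counts (all counts must be positive).
    Values well above 1 suggest an epidemic wave is present."""
    t = list(range(len(cases)))
    n = len(cases)
    # Linear model: cases ~ a + b*t
    _, _, r2_lin = ols_r2(t, cases)
    # Exponential model: log(cases) ~ a + b*t,
    # with R^2 computed on the original count scale
    a, b, _ = ols_r2(t, [math.log(c) for c in cases])
    pred = [math.exp(a + b * ti) for ti in t]
    my = sum(cases) / n
    ss_res = sum((c - p) ** 2 for c, p in zip(cases, pred))
    ss_tot = sum((c - my) ** 2 for c in cases)
    r2_exp = 1 - ss_res / ss_tot
    # With equal model complexity, a BIC-style Bayes factor
    # approximation reduces to a ratio of unexplained variances
    return ((1 - r2_lin) / (1 - r2_exp)) ** (n / 2)
```

On a window of genuinely exponential counts this returns a large value, and on linear counts a value below 1, which is the qualitative behavior the proposed measure relies on.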
- Published
- 2023
- Full Text
- View/download PDF
22. Maternal infections and medications in pregnancy: how does self-report compare to medical records in childhood cancer case–control studies?
- Author
-
Bonaventure, Audrey, Kane, Eleanor, Simpson, Jill, and Roman, Eve
- Subjects
- *
CHILDHOOD cancer , *CASE-control method , *MEDICAL records , *DRUGS , *PREGNANCY - Abstract
Background Studies examining the potential impact of mothers' health during pregnancy on the health of their offspring often rely on self-reported information gathered several years later. To assess the validity of this approach, we analysed data from a national case–control study of childhood cancer (diagnosed <15 years) that collected health information from both interviews and medical records. Methods Mothers' interview reports of infections and medications in pregnancy were compared with primary care records. Taking clinical diagnoses and prescriptions as the reference, sensitivity and specificity of maternal recall along with kappa coefficients of agreement were calculated. Differences in the odds ratios estimated using logistic regression for each information source were assessed using the proportional change in the odds ratio (OR). Results Mothers of 1624 cases and 2524 controls were interviewed ∼6 years (range 0–18 years) after their child's birth. Most drugs and infections were underreported; in general practitioner records, antibiotic prescriptions were nearly three times higher and infections >40% higher. Sensitivity, which decreased with increasing time since pregnancy, was ⩽40% for most infections and all drugs except 'anti-epileptics and barbiturates' (sensitivity 80% among controls). ORs associated with individual drug/disease categories that were based on self-reported data varied from 26% lower to 26% higher than those based on medical records; reporting differences between mothers of cases and controls were not systematically in the same direction. Conclusions The findings highlight the scale of under-reporting and poor validity of questionnaire-based studies conducted several years after pregnancy. Future research using prospectively collected data should be encouraged to minimize measurement errors. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
23. The Environmental Influences on Child Health Outcomes (ECHO)-Wide Cohort.
- Author
-
Knapp, Emily A, Kress, Amii M, Parker, Corette B, Page, Grier P, McArthur, Kristen, Gachigi, Kennedy K, Alshawabkeh, Akram N, Aschner, Judy L, Bastain, Theresa M, Breton, Carrie V, Bendixsen, Casper G, Brennan, Patricia A, Bush, Nicole R, Buss, Claudia, Camargo, Carlos A, Catellier, Diane, Cordero, José F, Croen, Lisa, Dabelea, Dana, and Deoni, Sean
- Subjects
- *
ASTHMA risk factors , *AUTISM risk factors , *RISK factors of attention-deficit hyperactivity disorder , *AIR pollution , *PREMATURE infants , *COVID-19 , *CHILD development , *CHILDHOOD obesity , *INTERVIEWING , *ACQUISITION of data , *MENTAL health , *DIET , *COGNITION , *GESTATIONAL age , *ENVIRONMENTAL health , *SURVEYS , *NEURAL development , *PREGNANCY outcomes , *SLEEP , *RISK assessment , *CHILDREN'S health , *QUESTIONNAIRES , *DESCRIPTIVE statistics , *MEDICAL records , *SOCIAL classes , *GENOMICS , *BIRTH weight , *RESEARCH funding , *LONGITUDINAL method , *PARENTS , *NEIGHBORHOOD characteristics , *ENVIRONMENTAL exposure , *MOTOR ability , *EPIDEMIOLOGICAL research - Abstract
The Environmental Influences on Child Health Outcomes (ECHO)-Wide Cohort Study (EWC), a collaborative research design comprising 69 cohorts in 31 consortia, was funded by the National Institutes of Health (NIH) in 2016 to improve children's health in the United States. The EWC harmonizes extant data and collects new data using a standardized protocol, the ECHO-Wide Cohort Data Collection Protocol (EWCP). EWCP visits occur at least once per life stage, but the frequency and timing of the visits vary across cohorts. As of March 4, 2022, the EWC cohorts contributed data from 60,553 children and consented 29,622 children for new EWCP data and biospecimen collection. The median (interquartile range) age of EWCP-enrolled children was 7.5 years (3.7–11.1). Surveys, interviews, standardized examinations, laboratory analyses, and medical record abstraction are used to obtain information in 5 main outcome areas: pre-, peri-, and postnatal outcomes; neurodevelopment; obesity; airways; and positive health. Exposures include factors at the level of place (e.g. air pollution, neighborhood socioeconomic status), family (e.g. parental mental health), and individuals (e.g. diet, genomics). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. A comprehensive framework to estimate the frequency, duration, and risk factors for diagnostic delays using bootstrapping-based simulation methods
- Author
-
Aaron C Miller, Joseph E Cavanaugh, Alan T Arakkal, Scott H Koeneman, and Philip M Polgreen
- Subjects
Diagnosis ,Diagnostic errors ,Delayed diagnosis ,Epidemiologic methods ,Tuberculosis ,Acute myocardial infarction ,Computer applications to medicine. Medical informatics ,R858-859.7 - Abstract
Abstract Background The incidence of diagnostic delays is unknown for many diseases and specific healthcare settings. Many existing methods to identify diagnostic delays are resource intensive or difficult to apply to different diseases or settings. Administrative and other real-world data sources may offer the ability to better identify and study diagnostic delays for a range of diseases. Methods We propose a comprehensive framework to estimate the frequency of missed diagnostic opportunities for a given disease using real-world longitudinal data sources. We provide a conceptual model of the disease-diagnostic, data-generating process. We then propose a bootstrapping method to estimate measures of the frequency of missed diagnostic opportunities and duration of delays. This approach identifies diagnostic opportunities based on signs and symptoms occurring prior to an initial diagnosis, while accounting for expected patterns of healthcare that may appear as coincidental symptoms. Three different bootstrapping algorithms are described along with estimation procedures to implement the resampling. Finally, we apply our approach to the diseases of tuberculosis, acute myocardial infarction, and stroke to estimate the frequency and duration of diagnostic delays for these diseases. Results Using the IBM MarketScan Research databases from 2001 to 2017, we identified 2,073 cases of tuberculosis, 359,625 cases of AMI, and 367,768 cases of stroke. Depending on the simulation approach that was used, we estimated that 6.9–8.3% of patients with stroke, 16.0-21.3% of patients with AMI and 63.9–82.3% of patients with tuberculosis experienced a missed diagnostic opportunity. Similarly, we estimated that, on average, diagnostic delays lasted 6.7–7.6 days for stroke, 6.7–8.2 days for AMI, and 34.3–44.5 days for tuberculosis. 
Estimates for each of these measures were consistent with prior literature; however, specific estimates varied across the different simulation algorithms considered. Conclusions Our approach can be easily applied to study diagnostic delays using longitudinal administrative data sources. Moreover, this general approach can be customized to fit a range of diseases to account for the specific clinical characteristics of a given disease. We summarize how the choice of simulation algorithm may impact the resulting estimates and provide guidance on the statistical considerations for applying our approach to future studies.
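As a much-reduced illustration of the resampling idea, the sketch below counts above-baseline symptom visits in each patient's pre-diagnosis window and bootstraps over patients to interval-estimate the mean excess. The function names and the excess-visit proxy are hypothetical simplifications of the paper's framework, which models the disease-diagnostic data-generating process in far more detail:

```python
import random

def excess_visits(pre_counts, baseline_rate):
    """Excess (above-expected) symptom visits in a patient's
    pre-diagnosis window, a crude proxy for missed diagnostic
    opportunities; expected counts come from a baseline visit rate."""
    expected = baseline_rate * len(pre_counts)
    return max(sum(pre_counts) - expected, 0.0)

def bootstrap_missed_opportunities(patients, baseline_rate, n_boot=2000, seed=0):
    """Bootstrap the mean number of excess pre-diagnosis visits per patient.
    `patients` is a list of per-day symptom-visit counts in the window
    before each patient's index diagnosis. Returns (point, lo, hi),
    where (lo, hi) is a percentile 95% interval."""
    rng = random.Random(seed)
    point = sum(excess_visits(p, baseline_rate) for p in patients) / len(patients)
    stats = []
    for _ in range(n_boot):
        # Resample patients with replacement and recompute the mean excess
        sample = [patients[rng.randrange(len(patients))] for _ in patients]
        stats.append(sum(excess_visits(p, baseline_rate) for p in sample) / len(sample))
    stats.sort()
    return point, stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]
```

Resampling at the patient level respects the clustering of visits within patients, which is why the paper's algorithms also operate on whole patient histories rather than individual visits.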
- Published
- 2023
- Full Text
- View/download PDF
25. Sources of variation in estimates of Duchenne and Becker muscular dystrophy prevalence in the United States
- Author
-
Nedra Whitehead, Stephen W. Erickson, Bo Cai, Suzanne McDermott, Holly Peay, James F. Howard, Lijing Ouyang, and the Muscular Dystrophy Surveillance, Tracking and Research Network
- Subjects
Epidemiology ,Public health surveillance ,Epidemiological monitoring ,Epidemiologic methods ,Muscular dystrophy Duchenne ,Muscular dystrophy Becker ,Medicine - Abstract
Abstract Background Direct estimates of rare disease prevalence from public health surveillance may only be available in a few catchment areas. Understanding variation among observed prevalence can inform estimates of prevalence in other locations. The Muscular Dystrophy Surveillance, Tracking, and Research Network (MD STARnet) conducts population-based surveillance of major muscular dystrophies in selected areas of the United States. We identified sources of variation in prevalence estimates of Duchenne and Becker muscular dystrophy (DBMD) within MD STARnet from published literature and a survey of MD STARnet investigators, then developed a logic model of the relationships between the sources of variation and estimated prevalence. Results The 17 identified sources of variability fell into four categories: (1) inherent in surveillance systems, (2) particular to rare diseases, (3) particular to medical-records-based surveillance, and (4) resulting from extrapolation. For the sources of uncertainty measured by MD STARnet, we estimated each source’s contribution to the total variance in DBMD prevalence. Based on the logic model we fit a multivariable Poisson regression model to 96 age–site–race/ethnicity strata. Age accounted for 74% of the variation between strata, surveillance site for 6%, race/ethnicity for 3%, and 17% remained unexplained. Conclusion Variation in estimates derived from a non-random sample of states or counties may not be explained by demographic differences alone. Applying these estimates to other populations requires caution.
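The attribution of between-stratum variation to individual factors can be illustrated with a one-way ANOVA-style decomposition; the paper's multivariable Poisson model generalizes this idea across age, site, and race/ethnicity simultaneously. A toy sketch with a hypothetical function name:

```python
def variance_explained(rates, labels):
    """Share of between-stratum variance in `rates` explained by
    grouping strata on `labels` (between-group sum of squares over
    total sum of squares, i.e. a one-way ANOVA-style R^2)."""
    n = len(rates)
    grand = sum(rates) / n
    ss_tot = sum((r - grand) ** 2 for r in rates)
    groups = {}
    for r, l in zip(rates, labels):
        groups.setdefault(l, []).append(r)
    ss_between = sum(
        len(g) * ((sum(g) / len(g)) - grand) ** 2 for g in groups.values()
    )
    return ss_between / ss_tot
```

Applied in turn to age, site, and race/ethnicity labels, this kind of decomposition yields proportions analogous to the 74%/6%/3% split reported above, with the remainder unexplained.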
- Published
- 2023
- Full Text
- View/download PDF
26. Inverse Probability Weights for Quasicontinuous Ordinal Exposures With a Binary Outcome: Method Comparison and Case Study.
- Author
-
Sack, Daniel E, Shepherd, Bryan E, Audet, Carolyn M, Schacht, Caroline De, and Samuels, Lauren R
- Subjects
- *
HIV infections , *NONPARAMETRIC statistics , *SCIENTIFIC observation , *SIMULATION methods in education , *ATTRIBUTION (Social psychology) , *CASE studies , *PUERPERIUM , *RESEARCH bias , *LOGISTIC regression analysis , *EPIDEMIOLOGICAL research , *CONTRACEPTIVE drugs - Abstract
Inverse probability weighting (IPW), a well-established method of controlling for confounding in observational studies with binary exposures, has been extended to analyses with continuous exposures. Methods developed for continuous exposures may not apply when the exposure is quasicontinuous because of irregular exposure distributions that violate key assumptions. We used simulations and cluster-randomized clinical trial data to assess 4 approaches developed for continuous exposures—ordinary least squares (OLS), covariate balancing generalized propensity scores (CBGPS), nonparametric covariate balancing generalized propensity scores (npCBGPS), and quantile binning (QB)—and a novel method, a cumulative probability model (CPM), in quasicontinuous exposure settings. We compared IPW stability, covariate balance, bias, mean squared error, and standard error estimation across 3,000 simulations with 6 different quasicontinuous exposures, varying in skewness and granularity. In general, CBGPS and npCBGPS resulted in excellent covariate balance, and npCBGPS was the least biased but the most variable. The QB and CPM approaches had the lowest mean squared error, particularly with marginally skewed exposures. We then successfully applied the IPW approaches, together with missing-data techniques, to assess how session attendance (out of a possible 15) in a partners-based clustered intervention among pregnant couples living with human immunodeficiency virus in Mozambique (2017–2022) influenced postpartum contraceptive uptake. [ABSTRACT FROM AUTHOR]
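The quantile-binning (QB) approach lends itself to a compact illustration: bin the quasicontinuous exposure into quantiles, then form stabilized weights as the ratio of the marginal bin probability to the covariate-conditional bin probability. The sketch below estimates both probabilities as empirical frequencies over a single categorical covariate; the function names are hypothetical, and a real analysis would model the conditional probabilities rather than tabulate them:

```python
from collections import Counter

def quantile_bins(values, n_bins):
    """Assign each value to a quantile bin (0..n_bins-1)."""
    sorted_v = sorted(values)
    cuts = [sorted_v[int(len(values) * (i + 1) / n_bins) - 1]
            for i in range(n_bins - 1)]
    return [sum(v > c for c in cuts) for v in values]

def stabilized_qb_weights(exposure, covariate, n_bins=4):
    """Stabilized inverse probability weights via quantile binning:
    w_i = P(bin_i) / P(bin_i | L_i), with both probabilities estimated
    as empirical frequencies over a single categorical covariate L."""
    bins = quantile_bins(exposure, n_bins)
    n = len(exposure)
    marg = Counter(bins)                      # marginal bin counts
    joint = Counter(zip(bins, covariate))     # (bin, covariate) counts
    lmarg = Counter(covariate)                # covariate-level counts
    return [
        (marg[b] / n) / (joint[(b, l)] / lmarg[l])
        for b, l in zip(bins, covariate)
    ]
```

A useful sanity check on stabilized weights is that they average to 1 in the sample, which holds here whenever every observed (bin, covariate) combination has support.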
- Published
- 2023
- Full Text
- View/download PDF
27. Invited Commentary: Modern Epidemiology Confronts COVID-19—Reflections From Psychiatric Epidemiology.
- Author
-
Martínez-Alés, Gonzalo and Keyes, Katherine
- Subjects
- *
PUBLIC health surveillance , *DISEASE progression , *PRACTICAL politics , *PUBLIC health , *MENTAL health , *DECISION making , *COVID-19 pandemic , *MENTAL health services , *PSYCHIATRIC treatment - Abstract
Dimitris et al. (Am J Epidemiol. 2022;191(6):980–986) outline how the coronavirus disease 2019 (COVID-19) pandemic has, with mixed results, put epidemiology under the spotlight. While epidemiologic theory and methods have been critical in many successes, the ongoing global death toll from severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and the sometimes chaotic public messaging underscore that epidemiology as a field has room for improvement. Here, we use examples from psychiatric epidemiologic studies conducted during the COVID-19 era to reflect on errors driven by overlooking specific major methodological advances of modern epidemiology. We focus on: 1) use of nonrepresentative sampling in online surveys, which limits the potential knowledge to be gained from descriptive studies and amplifies collider stratification bias in causal studies; and 2) failure to acknowledge multiple versions of exposures (e.g. lockdown, school closure) and differences in prevalence of effect measure modifiers across contexts, which causes violations of the consistency assumption and lack of effect transportability. We finish by highlighting: 1) the heterogeneity of psychiatric epidemiologic results during the pandemic across place and sociodemographic groups and over time; 2) the importance of following the foundational advancements of modern epidemiology even in emergency settings; and 3) the need to limit the role of political agendas in cherry-picking and reporting epidemiologic evidence. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
28. Longitudinal SARS-CoV-2 Nucleocapsid Antibody Kinetics, Seroreversion, and Implications for Seroepidemiologic Studies
- Author
-
Loesche, Michael, Karlson, Elizabeth W., Talabi, Opeyemi, Zhou, Guohai, Boutin, Natalie, Atchley, Rachel, Loevinsohn, Gideon, Chang, Jun Bai Park, Hasdianda, Mohammad A., Okenla, Adetoun, Sampson, Elizabeth, Schram, Haley, Magsipoc, Karen, Goodman, Kirsten, Donahue, Lauren, MacGowan, Maureen, Novack, Lewis A., Jarolim, Petr, Baden, Lindsey R., and Nilles, Eric J.
- Subjects
Epidemiologic methods ,Immune system -- Testing ,Viral proteins -- Measurement -- Physiological aspects ,Health - Abstract
Estimating the incidence of infections caused by SARS-CoV-2 that are frequently asymptomatic is challenging when using routine passive surveillance methods. Antibodies can provide a record of previous infection, whether symptomatic [...]
- Published
- 2022
- Full Text
- View/download PDF
29. State-level metabolic comorbidity prevalence and control among adults age 50-plus with diabetes: estimates from electronic health records and survey data in five states
- Author
-
Russell Mardon, Joanne Campione, Jennifer Nooney, Lori Merrill, Maurice Johnson, David Marker, Frank Jenkins, Sharon Saydah, Deborah Rolka, Xuanping Zhang, Sundar Shrestha, and Edward Gregg
- Subjects
Diabetes mellitus ,Electronic health records ,Epidemiologic methods ,High cholesterol ,Hypertension ,Health and Retirement Study ,Computer applications to medicine. Medical informatics ,R858-859.7 ,Public aspects of medicine ,RA1-1270 - Abstract
Abstract Background Although treatment and control of diabetes can prevent complications and reduce morbidity, few data sources exist at the state level for surveillance of diabetes comorbidities and control. Surveys and electronic health records (EHRs) offer different strengths and weaknesses for surveillance of diabetes and major metabolic comorbidities. Data from self-report surveys suffer from cognitive and recall biases, and generally cannot be used for surveillance of undiagnosed cases. EHR data are becoming more readily available, but pose particular challenges for population estimation since patients are not randomly selected, not everyone has the relevant biomarker measurements, and those included tend to cluster geographically. Methods We analyzed data from the National Health and Nutritional Examination Survey, the Health and Retirement Study, and EHR data from the DARTNet Institute to create state-level adjusted estimates of the prevalence and control of diabetes, and the prevalence and control of hypertension and high cholesterol in the diabetes population, age 50 and over for five states: Alabama, California, Florida, Louisiana, and Massachusetts. Results The estimates from the two surveys generally aligned well. The EHR data were consistent with the surveys for many measures, but yielded consistently lower estimates of undiagnosed diabetes prevalence, and identified somewhat fewer comorbidities in most states. Conclusions Despite these limitations, EHRs may be a promising source for diabetes surveillance and assessment of control as the datasets are large and created during the routine delivery of health care. Trial Registration: Not applicable.
- Published
- 2022
- Full Text
- View/download PDF
30. Trends in mortality in septic patients according to the different organ failure during 15 years
- Author
-
Carolina Lorencio Cárdenas, Juan Carlos Yébenes, Emili Vela, Montserrat Clèries, Josep Mª Sirvent, Cristina Fuster-Bertolín, Clara Reina, Alejandro Rodríguez, Juan Carlos Ruiz-Rodríguez, Josep Trenado, and Elisabeth Esteban Torné
- Subjects
Sepsis ,Sepsis syndrome ,Septic shock ,Sepsis mortality ,Epidemiologic methods ,Sepsis epidemiology ,Medical emergencies. Critical care. Intensive care. First aid ,RC86-88.9 - Abstract
Abstract Background The incidence of sepsis can be estimated at between 250 and 500 cases/100,000 people per year, and sepsis is responsible for up to 6% of total hospital admissions. Identified as one of the most relevant global health problems, sepsis is the condition that generates the highest costs in the healthcare system. Important changes in the management of septic patients have been introduced in recent years; however, there is no information about how changes in the management of sepsis-associated organ failure have contributed to reducing mortality. Methods A retrospective analysis was conducted of hospital discharge records from the Minimum Basic Data Set Acute-Care Hospitals (CMBD-HA, in Catalan) for the Catalan Health System (CatSalut). CMBD-HA is a mandatory population-based register of admissions to all public and private acute-care hospitals in Catalonia. Sepsis was defined by the presence of infection and at least one organ dysfunction. Patients hospitalized with sepsis were identified according to the ICD-9-CM (2005 to 2017) and ICD-10-CM (2018 and 2019) codes used to identify acute organ dysfunction and infectious processes. Results Of 11,916,974 discharges from all acute-care hospitals during the study period (2005–2019), 296,554 had sepsis (2.49%). The mean annual sepsis incidence in the population was 264.1 per 100,000 inhabitants/year, and it increased every year, going from 144.5 in 2005 to 410.1 in 2019. Multiorgan failure was present in 21.9% of cases and bacteremia in 26.3%. Renal failure was the most frequent organ failure (56.8%), followed by cardiovascular failure (24.2%). Hospital mortality during the study period was 19.5% and decreased continuously from 25.7% in 2005 to 17.9% in 2019 (p
- Published
- 2022
- Full Text
- View/download PDF
31. Regional differences and temporal trend analysis of Hepatitis B in Brazil
- Author
-
Giuliano Grandi, Luis Fernandez Lopez, and Marcelo Nascimento Burattini
- Subjects
Hepatitis B virus ,Hepatitis B ,Epidemiologic methods ,Epidemiological monitoring ,Public aspects of medicine ,RA1-1270 - Abstract
Abstract Background The disease burden related to chronic HBV infection is increasing worldwide. Monitoring the occurrence of Hepatitis B is difficult due to intrinsic characteristics of the infection; nonetheless, analyzing this information improves strategic planning towards reducing the burden related to chronic infection. Along these lines, this study aims to analyze the national and regional epidemiology of Hepatitis B and its temporal trends based on Brazilian reported cases. Methods Data obtained from the Brazilian National Notifiable Disease Reporting System (SINAN) from 2007 to 2018 were classified by infection status with an original classification algorithm, had their temporal trends analyzed by a Joinpoint regression model, and were correlated with gender, age and region. Results Of the 487,180 hepatitis B cases notified to SINAN, 97.65% had their infection status correctly classified by the new algorithm. Hepatitis B detection rates and gender and age distributions differed among Brazilian regions. Overall, detection rates remained stable from 2007 to 2018, achieving their maximal value (56.1 cases per 100,000 inhabitants) in the North region. However, there were different temporal trends for different hepatitis B statuses and ages. Women's mean age at notification was always lower than men's, and the difference was larger in the Central-West, North and Northeast regions. Conclusion Hepatitis B affects different populations heterogeneously throughout the Brazilian territory. The differences shown in its temporal trends and its regional, gender and age-related distribution help the planning and evaluation of control measures in Brazil.
- Published
- 2022
- Full Text
- View/download PDF
32. Optimizing the implementation of a participant-collected, mail-based SARS-CoV-2 serological survey in university-affiliated populations: lessons learned and practical guidance
- Author
-
Estee Y. Cramer, Teah Snyder, Johanna Ravenhurst, and Andrew A. Lover
- Subjects
Serosurveys ,Serology ,SARS-CoV-2 ,Epidemiologic methods ,field epidemiology ,Public aspects of medicine ,RA1-1270 - Abstract
Abstract The rapid spread of SARS-CoV-2 is largely driven by pre-symptomatic or mildly symptomatic individuals transmitting the virus. Serological tests to identify antibodies against SARS-CoV-2 are important tools to characterize subclinical infection exposure. During the summer of 2020, a mail-based serological survey with self-collected dried blood spot (DBS) samples was implemented among university affiliates and their household members in Massachusetts, USA. Described are challenges faced and novel procedures used during the implementation of this study to assess the prevalence of SARS-CoV-2 antibodies amid the pandemic. Important challenges included user-friendly remote and contact-minimized participant recruitment, limited availability of some commodities and laboratory capacity, a potentially biased sample population, and policy changes impacting the distribution of clinical results to study participants. Methods and lessons learned to surmount these challenges are presented to inform design and implementation of similar sero-studies. This study design highlights the feasibility and acceptability of self-collected bio-samples and has broad applicability for other serological surveys for a range of pathogens. Key lessons relate to DBS sampling, supply requirements, the logistics of packing and shipping packages, data linkages to enrolled household members, and the utility of having an on-call nurse available for participant concerns during sample collection. Future research might consider additional recruitment techniques such as conducting studies during academic semesters when recruiting in a university setting, partnerships with supply and shipping specialists, and using a stratified sampling approach to minimize potential biases in recruitment.
- Published
- 2022
- Full Text
- View/download PDF
33. A comprehensive framework to estimate the frequency, duration, and risk factors for diagnostic delays using bootstrapping-based simulation methods.
- Author
-
Miller, Aaron C, Cavanaugh, Joseph E, Arakkal, Alan T, Koeneman, Scott H, and Polgreen, Philip M
- Subjects
- *
DELAYED diagnosis , *TUBERCULOSIS , *MYOCARDIAL infarction , *SYMPTOMS , *STROKE , *PANEL analysis - Abstract
Background: The incidence of diagnostic delays is unknown for many diseases and specific healthcare settings. Many existing methods to identify diagnostic delays are resource intensive or difficult to apply to different diseases or settings. Administrative and other real-world data sources may offer the ability to better identify and study diagnostic delays for a range of diseases. Methods: We propose a comprehensive framework to estimate the frequency of missed diagnostic opportunities for a given disease using real-world longitudinal data sources. We provide a conceptual model of the disease-diagnostic, data-generating process. We then propose a bootstrapping method to estimate measures of the frequency of missed diagnostic opportunities and duration of delays. This approach identifies diagnostic opportunities based on signs and symptoms occurring prior to an initial diagnosis, while accounting for expected patterns of healthcare that may appear as coincidental symptoms. Three different bootstrapping algorithms are described along with estimation procedures to implement the resampling. Finally, we apply our approach to the diseases of tuberculosis, acute myocardial infarction, and stroke to estimate the frequency and duration of diagnostic delays for these diseases. Results: Using the IBM MarketScan Research databases from 2001 to 2017, we identified 2,073 cases of tuberculosis, 359,625 cases of AMI, and 367,768 cases of stroke. Depending on the simulation approach that was used, we estimated that 6.9–8.3% of patients with stroke, 16.0–21.3% of patients with AMI, and 63.9–82.3% of patients with tuberculosis experienced a missed diagnostic opportunity. Similarly, we estimated that, on average, diagnostic delays lasted 6.7–7.6 days for stroke, 6.7–8.2 days for AMI, and 34.3–44.5 days for tuberculosis. Estimates for each of these measures were consistent with prior literature; however, specific estimates varied across the different simulation algorithms considered.
Conclusions: Our approach can be easily applied to study diagnostic delays using longitudinal administrative data sources. Moreover, this general approach can be customized to fit a range of diseases to account for specific clinical characteristics of a given disease. We summarize how the choice of simulation algorithm may impact the resulting estimates and provide guidance on the statistical considerations for applying our approach to future studies. [ABSTRACT FROM AUTHOR]
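The resampling idea can be illustrated with a minimal, hypothetical sketch — not the authors' three algorithms, and with made-up counts: flag pre-diagnosis visits that involved disease-related symptoms, subtract the number expected from an assumed background rate, and bootstrap the excess to get an interval:

```python
import random

def bootstrap_excess_visits(observed, expected_rate, n_boot=2000, seed=7):
    """Estimate excess (potentially missed-opportunity) symptom visits
    above an expected background rate, with a percentile interval.

    observed: list of 0/1 flags, one per pre-diagnosis patient-visit,
              marking whether the visit involved disease-related symptoms.
    expected_rate: assumed background probability of a coincidental
                   symptom visit (in practice estimated from control data).
    """
    rng = random.Random(seed)
    n = len(observed)
    estimates = []
    for _ in range(n_boot):
        # Resample visits with replacement and recompute the excess
        resample = [observed[rng.randrange(n)] for _ in range(n)]
        estimates.append(max(sum(resample) - expected_rate * n, 0.0))
    estimates.sort()
    point = max(sum(observed) - expected_rate * n, 0.0)
    return point, (estimates[int(0.025 * n_boot)], estimates[int(0.975 * n_boot)])

# Hypothetical data: 200 pre-diagnosis visits, 60 with symptoms,
# against an assumed 10% background symptom-visit rate.
visits = [1] * 60 + [0] * 140
point, ci = bootstrap_excess_visits(visits, expected_rate=0.10)
print(point)  # 40.0 excess visits suggesting missed opportunities
```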
- Published
- 2023
- Full Text
- View/download PDF
34. How to estimate heritability: a guide for genetic epidemiologists.
- Author
-
Barry, Ciarrah-Jane S, Walker, Venexia M, Cheesman, Rosa, Smith, George Davey, Morris, Tim T, and Davies, Neil M
- Subjects
- *
HERITABILITY , *GENETIC epidemiology , *GENETIC variation , *EPIDEMIOLOGISTS , *LINKAGE disequilibrium , *PHENOTYPIC plasticity - Abstract
Background: Traditionally, heritability has been estimated using family-based methods such as twin studies. Advancements in molecular genomics have facilitated the development of methods that use large samples of (unrelated or related) genotyped individuals. Methods: Here, we provide an overview of common methods applied in genetic epidemiology to estimate heritability, i.e. the proportion of phenotypic variation explained by genetic variation. We provide a guide to key genetic concepts required to understand heritability estimation methods from family-based designs (twin and family studies), genomic designs based on unrelated individuals [linkage disequilibrium score regression, genomic relatedness restricted maximum-likelihood (GREML) estimation] and family-based genomic designs (sibling regression, GREML-kinship, trio-genome-wide complex trait analysis, maternal-genome-wide complex trait analysis, relatedness disequilibrium regression). Results: We describe how heritability is estimated for each method and the assumptions underlying its estimation, and discuss the implications when these assumptions are not met. We further discuss the benefits and limitations of estimating heritability within samples of unrelated individuals compared with samples of related individuals. Conclusions: Overall, this article is intended to help the reader determine the circumstances when each method would be appropriate and why. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
35. Comparing the Accuracy of Diagnostic Tests When Disease Is Characterized by an Ordinal Scale.
- Author
-
Obuchowski, Nancy A
- Subjects
- *
CORONARY artery stenosis , *ARTIFICIAL intelligence , *SEVERITY of illness index , *COMPARATIVE studies , *CORONARY angiography , *COMPUTER-aided diagnosis , *SENSITIVITY & specificity (Statistics) , *DIAGNOSTIC errors , *CARDIOVASCULAR disease diagnosis , *ALGORITHMS , *PROBABILITY theory , *EVALUATION - Abstract
In diagnostic medicine, the true disease status of a patient is often represented on an ordinal scale—for example, cancer stage (0, I, II, III, or IV) or coronary artery disease severity measured using the Coronary Artery Disease Reporting and Data System (CAD-RADS) scale (none, minimal, mild, moderate, severe, or occluded). With advances in quantitation of diagnostic images and in artificial intelligence (AI), both supervised and unsupervised algorithms are being developed to help physicians correctly grade disease. Most of the diagnostic accuracy literature deals with binary disease status (disease present or absent); however, tests diagnosing ordinal-scaled diseases should not be reduced to a binary status just to simplify diagnostic accuracy testing. In this paper, we propose different characterizations of ordinal-scale accuracy for different clinical use scenarios, along with methods for comparing tests. In the simplest scenario, just the proportion of correct grades is considered; other scenarios address the magnitude and direction of misgrading; and at the other extreme, a weighted accuracy measure with weights based on the relative costs of different types of misgrading is presented. The various scenarios are illustrated using a coronary artery disease example where the accuracy of AI algorithms in providing patients with the correct CAD-RADS grade is assessed. [ABSTRACT FROM AUTHOR]
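The simplest scenario (proportion of correct grades) and the weighted scenario can be sketched as follows; the grades and the weight function here are invented for illustration and are not the paper's proposed weights:

```python
def proportion_correct(truth, graded):
    """Simplest scenario: fraction of cases given exactly the right grade."""
    return sum(t == g for t, g in zip(truth, graded)) / len(truth)

def weighted_accuracy(truth, graded, weight):
    """weight(t, g) in [0, 1]: 1 for a correct grade, lower as the cost
    of the misgrade grows, so near misses earn partial credit."""
    return sum(weight(t, g) for t, g in zip(truth, graded)) / len(truth)

def w(t, g):
    # Illustrative weights: penalize under-grading (missing severity)
    # more steeply than over-grading.
    if t == g:
        return 1.0
    if g > t:
        return max(0.0, 1.0 - 0.5 * (g - t))
    return max(0.0, 1.0 - 0.75 * (t - g))

# Hypothetical CAD-RADS-style grades 0-5 for six patients
truth  = [0, 1, 2, 3, 4, 5]
graded = [0, 1, 3, 3, 3, 5]
print(round(proportion_correct(truth, graded), 3))    # 0.667
print(round(weighted_accuracy(truth, graded, w), 3))  # 0.792, credit for near misses
```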
- Published
- 2023
- Full Text
- View/download PDF
36. Sources of variation in estimates of Duchenne and Becker muscular dystrophy prevalence in the United States.
- Author
-
Whitehead, Nedra, Erickson, Stephen W., Cai, Bo, McDermott, Suzanne, Peay, Holly, Howard, James F., and Ouyang, Lijing
- Subjects
- *
BECKER muscular dystrophy , *DUCHENNE muscular dystrophy , *PUBLIC health surveillance , *MUSCULAR dystrophy , *DEMOGRAPHIC characteristics - Abstract
Background: Direct estimates of rare disease prevalence from public health surveillance may only be available in a few catchment areas. Understanding variation among observed prevalence can inform estimates of prevalence in other locations. The Muscular Dystrophy Surveillance, Tracking, and Research Network (MD STARnet) conducts population-based surveillance of major muscular dystrophies in selected areas of the United States. We identified sources of variation in prevalence estimates of Duchenne and Becker muscular dystrophy (DBMD) within MD STARnet from published literature and a survey of MD STARnet investigators, then developed a logic model of the relationships between the sources of variation and estimated prevalence. Results: The 17 identified sources of variability fell into four categories: (1) inherent in surveillance systems, (2) particular to rare diseases, (3) particular to medical-records-based surveillance, and (4) resulting from extrapolation. For the sources of uncertainty measured by MD STARnet, we estimated each source's contribution to the total variance in DBMD prevalence. Based on the logic model we fit a multivariable Poisson regression model to 96 age–site–race/ethnicity strata. Age accounted for 74% of the variation between strata, surveillance site for 6%, race/ethnicity for 3%, and 17% remained unexplained. Conclusion: Variation in estimates derived from a non-random sample of states or counties may not be explained by demographic differences alone. Applying these estimates to other populations requires caution. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
37. Explaining declining hip fracture rates in Norway: a population-based modelling studyResearch in context
- Author
-
Helena Kames Kjeldgaard, Kristin Holvik, Bo Abrahamsen, Grethe S. Tell, Haakon E. Meyer, and Martin O'Flaherty
- Subjects
Hip fractures ,Osteoporosis ,Public health ,Epidemiology ,Epidemiologic methods ,Public aspects of medicine ,RA1-1270 - Abstract
Summary: Background: Although age-standardised hip fracture incidence has declined in many countries during recent decades, the number of fractures is forecast to increase as the population ages. Understanding the drivers behind this decline is essential to inform policy for targeted preventive measures. We aimed to quantify how much of this decline could be explained by temporal trends in major risk factors and osteoporosis treatment. Methods: We developed a new modelling approach, Hip-IMPACT, based on the validated IMPACT coronary heart disease models. The model applied sex- and age-stratified hip fracture numbers and prevalence of pharmacologic treatments and risk/preventive factors in 1999 and 2019, and best available evidence for independent relative risks of hip fracture associated with each treatment and risk/preventive factor. Findings: Hip-IMPACT explained 91% (2500/2756) of the declining hip fracture rates during 1999–2019. Two-thirds of the total decline was attributed to changes in risk/preventive factors and one-fifth to osteoporosis medication. Increased prevalence of total hip replacements explained 474/2756 (17%), increased body mass index 698/2756 (25%), and increased physical activity 434/2756 (16%). Reduced smoking explained 293/2756 (11%), and reduced benzodiazepine use explained 366/2756 (13%). Increased uptake of alendronate, zoledronic acid, and denosumab explained 307/2756 (11%), 104/2756 (4%) and 161/2756 (6%), respectively. The explained decline was partially offset by increased prevalence of type 2 diabetes and users of glucocorticoids, z-drugs, and opioids. Interpretation: Two-thirds of the decline in hip fractures from 1999 to 2019 was attributed to reductions in major risk factors and approximately one-fifth to osteoporosis medication. Funding: The Research Council of Norway.
- Published
- 2023
- Full Text
- View/download PDF
38. Environmental sampling for SARS-CoV-2 in long term care facilities: lessons from a pilot study [version 2; peer review: 2 approved]
- Author
-
Matthew Hickman, Nicola Yaxley, Allan Bennett, Ginny Moore, Nicola Love, Matthew Donati, Derren R Ready, Roberto Vivancos, and Rachel Kwiatkowska
- Subjects
infection control ,infectious disease transmission ,environmental exposure ,fomites ,disease outbreaks ,long-term care ,epidemiologic methods ,eng ,Medicine ,Science - Abstract
Background: The SARS-CoV-2 pandemic has highlighted the risk of infection in long-term care facilities (LTCF) and the vulnerability of residents to severe outcomes. Environmental surveillance may help detect pathogens early and inform Infection Prevention and Control (IPC) measures in these settings. Methods: Upon notification of SARS-CoV-2 outbreaks, LTCF within a local authority in South West England were approached to take part in this pilot study. Investigators visited to swab common touch-points and elevated ‘non-touch’ surfaces (>1.5m above ground level) and samples were analysed for presence of SARS-CoV-2 genetic material (RNA). Data were collected regarding LTCF infrastructure, staff behaviours, clinical and epidemiological risk factors for infection (staff and residents), and IPC measures. Criteria for success were: recruitment of three LTCF; detection of SARS-CoV-2 RNA; variation in proportion of SARS-CoV-2 positive surfaces by sampling zone; and collection of clinical and epidemiological data for context. Results: Three LTCFs were recruited, ranging in size and resident demographics. Outbreaks lasted 63, 50 and 30 days with resident attack rates of 53%, 40% and 8%, respectively. The proportion of sample sites on which SARS-CoV-2 was detected was highest in rooms occupied by infected residents and varied elsewhere in the LTCF, with low levels in a facility implementing enhanced IPC measures. The heterogeneity of settings and difficulty obtaining data made it unfeasible to assess association between environmental contamination and infection. A greater proportion of elevated surfaces tested positive for SARS-CoV-2 RNA than common touch-points. Conclusions: SARS-CoV-2 RNA can be detected in a variety of LTCF outbreak settings, both on common-touch items and in elevated sites out of reach. This suggests that further work is justified, to assess feasibility and utility of environmental sampling for infection surveillance in LTCF.
- Published
- 2023
- Full Text
- View/download PDF
39. Prevalence and Risk Factors of Bruxism in a Selected Population of Iranian Children
- Author
-
Fatemeh Jahanimoghadam, Mahsa Tohidimoghadam, Hamidreza Poureslami, and Maryam Sharifi
- Subjects
Parasomnias ,Sleep Bruxism ,Pediatric Dentistry ,Epidemiologic Methods ,Dentistry ,RK1-715 - Abstract
Objective: To investigate the prevalence of bruxism in Iranian children aged 6 to 12 years. Material and Methods: This cross-sectional study was conducted on 600 schoolchildren aged 6-12 years. The questionnaire consisted of two sections: the first section included demographic information, while the second evaluated the occurrence of bruxism. Kruskal-Wallis, Chi-Square, Fisher and Multinomial logistic regression were used. A level of p<0.05 was considered statistically significant.
- Published
- 2023
40. Alcances de la aleatorización mendeliana para el control de confusores no observables en epidemiología [Scope of Mendelian randomization for controlling unobservable confounders in epidemiology]
- Author
-
Mónica Ancira-Moreno, Natalia Smith, and Héctor Lamadrid-Figueroa
- Subjects
Mendelian randomization ,Epidemiologic methods ,Bias ,Confounding ,Genetics ,Instrumental variables ,Public aspects of medicine ,RA1-1270 - Abstract
Abstract: Mendelian randomization is an epidemiologic method proposed to control for spurious associations in observational studies. These associations are commonly caused by confounding from social, environmental, and behavioral factors, which can be difficult to measure. Mendelian randomization is based on the selection of genetic variants that are used as instrumental variables and that influence exposure patterns or are associated with an intermediate phenotype of the disease. The present work aims to discuss how to select appropriate genetic variants as instrumental variables and to present methodological tools to deal with the limitations of this epidemiological method. The use of instrumental variables for modifiable exposures has the potential to mitigate the effects of common limitations, such as confounding, when robust genetic variants are chosen as instrumental variables.
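The instrumental-variable logic can be sketched with a Wald-ratio estimator on simulated data; the variant effect sizes, confounder, and sample size below are all invented for illustration:

```python
import random

def wald_ratio_mr(g, x, y):
    """Wald-ratio Mendelian randomization estimate:
    (variant-outcome association) / (variant-exposure association),
    here via simple covariances. Valid only under the IV assumptions
    (relevance, independence from confounders, exclusion restriction)."""
    def cov(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)
    return cov(g, y) / cov(g, x)

# Simulate: an unmeasured confounder u biases the naive X->Y estimate,
# but the variant g affects Y only through X (true causal effect = 0.5).
n = 50_000
rng = random.Random(1)
g = [rng.choice([0, 1, 2]) for _ in range(n)]        # allele count
u = [rng.gauss(0, 1) for _ in range(n)]              # unmeasured confounder
x = [0.4 * gi + ui + rng.gauss(0, 1) for gi, ui in zip(g, u)]
y = [0.5 * xi + ui + rng.gauss(0, 1) for xi, ui in zip(x, u)]
print(wald_ratio_mr(g, x, y))  # close to the true effect of 0.5
```

A naive regression of y on x in this simulation would be inflated well above 0.5 by the confounder, which is exactly the spurious association the instrument is meant to bypass.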
- Published
- 2022
- Full Text
- View/download PDF
41. Impact of COVID-19 on the All of Us Research Program.
- Author
-
Hedden, Sarra L, McClain, James, Mandich, Allison, Baskir, Rubin, Caulder, Mark S, Denny, Joshua C, Hamlet, Michelle R J, Das, Irene Prabhu, Ford, Nicole McNeil, Lopez-Class, Maria, Elmi, Ahmed, Wallace, Roshedah, Linkie, Amantha, and Garriock, Holly A
- Subjects
- *
CONTENT mining , *SOCIAL isolation , *WORKFLOW , *COVID-19 pandemic , *MEDICAL research - Abstract
The All of Us Research Program, a health and genetics epidemiologic data collection program, has been substantially affected by the coronavirus disease 2019 (COVID-19) pandemic. Although the program is highly digital in nature, certain aspects of the data collection require in-person interaction between staff and participants. Before the pandemic, the program was enrolling approximately 12,500 participants per month at more than 400 clinical sites. In March 2020, because of the pandemic, all in-person activity at program sites and by engagement partners was paused to develop processes and procedures for in-person activities that incorporated strict safety protocols. In addition, the program adopted new data collection methodologies to reduce the need for in-person activities. Through February 2022, a total of 224 clinical sites had reactivated in-person activity, and all enrollment and engagement partners have adopted new data collection methods that can be used remotely. As the COVID-19 pandemic persists, the program continues to require safety procedures for in-person activity and continues to generate and pilot methodologies that reduce risk and make it easier for participants to provide information. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
42. Prevalence and Risk Factors of Bruxism in a Selected Population of Iranian Children.
- Author
-
Jahanimoghadam, Fatemeh, Tohidimoghadam, Mahsa, Poureslami, Hamidreza, and Sharifi, Maryam
- Subjects
BRUXISM ,ORAL habits ,IRANIANS ,SLEEP bruxism ,MOUTH breathing ,BIRTH order ,PREMATURE labor - Abstract
Objective: To investigate the prevalence of bruxism in Iranian children aged 6 to 12 years. Material and Methods: This cross-sectional study was conducted on 600 schoolchildren aged 6-12 years. The questionnaire consisted of two sections: the first section included demographic information, while the second evaluated the occurrence of bruxism. Kruskal-Wallis, Chi-Square, Fisher and Multinomial logistic regression were used. A level of p<0.05 was considered statistically significant. Results: 698 questionnaires were distributed, of which 600 were returned. According to Multinomial logistic regression, awake bruxism was associated significantly with the following variables: age, birth order, recurrent headache, gastrointestinal disease, nasal obstruction, neurological disorder, easy child crying, sleep disorders, talking in a dream and snoring and jaw disorder. Sleep bruxism was associated significantly with age, premature birth, allergy, gastrointestinal disease, drooling, mouth breathing, nasal obstruction, oral habit, nail biting, sleep disorder, jaw disorders, and family history. Conclusion: Pre-birth and post-birth factors play an important role in the prevalence of bruxism in society. It is possible to prevent complications of bruxism by informing parents and making a timely diagnosis. Parents should be aware of this occurrence to reduce possible related factors to teeth and the masticatory system. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
43. A practitioner's guide to geospatial analysis in a neuroimaging context.
- Author
-
Wisch, Julie K., Babulal, Ganesh M, Petersen, Kalen, Millar, Peter R., Shacham, Enbal, Scroggins, Stephen, Boerwinkle, Anna H., Flores, Shaney, Keefe, Sarah, Gordon, Brian A., Morris, John C., and Ances, Beau M.
- Subjects
MAGNETIC resonance imaging ,AMERICAN Community Survey ,BRAIN imaging ,BRAIN anatomy - Abstract
Introduction: Health disparities arise from biological‐environmental interactions. Neuroimaging cohorts are reaching sufficiently large sample sizes such that analyses could evaluate how the environment affects the brain. We present a practical guide for applying geospatial methods to a neuroimaging cohort. Methods: We estimated brain age gap (BAG) from structural magnetic resonance imaging (MRI) from 239 city‐dwelling participants in St. Louis, Missouri. We compared these participants to population‐level estimates from the American Community Survey (ACS). We used geospatial analysis to identify neighborhoods associated with patterns of altered brain structure. We also evaluated the relationship between Area Deprivation Index (ADI) and BAG. Results: We identify areas in St. Louis, Missouri that were significantly associated with higher BAG from a spatially representative cohort. We provide replication code. Conclusion: We observe a relationship between neighborhoods and brain health, which suggests that neighborhood‐based interventions could be appropriate. We encourage other studies to geocode participant information to evaluate biological‐environmental interaction. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
44. State-level metabolic comorbidity prevalence and control among adults age 50-plus with diabetes: estimates from electronic health records and survey data in five states.
- Author
-
Mardon, Russell, Campione, Joanne, Nooney, Jennifer, Merrill, Lori, Johnson Jr., Maurice, Marker, David, Jenkins, Frank, Saydah, Sharon, Rolka, Deborah, Zhang, Xuanping, Shrestha, Sundar, and Gregg, Edward
- Subjects
- *
HYPERTENSION epidemiology , *PUBLIC health surveillance , *CONFIDENCE intervals , *METABOLIC disorders , *SURVEYS , *RESEARCH funding , *ELECTRONIC health records , *COMORBIDITY , *EPIDEMIOLOGICAL research , *CHOLESTEROL , *DISEASE complications , *MIDDLE age - Abstract
Background: Although treatment and control of diabetes can prevent complications and reduce morbidity, few data sources exist at the state level for surveillance of diabetes comorbidities and control. Surveys and electronic health records (EHRs) offer different strengths and weaknesses for surveillance of diabetes and major metabolic comorbidities. Data from self-report surveys suffer from cognitive and recall biases, and generally cannot be used for surveillance of undiagnosed cases. EHR data are becoming more readily available, but pose particular challenges for population estimation since patients are not randomly selected, not everyone has the relevant biomarker measurements, and those included tend to cluster geographically. Methods: We analyzed data from the National Health and Nutritional Examination Survey, the Health and Retirement Study, and EHR data from the DARTNet Institute to create state-level adjusted estimates of the prevalence and control of diabetes, and the prevalence and control of hypertension and high cholesterol in the diabetes population, age 50 and over for five states: Alabama, California, Florida, Louisiana, and Massachusetts. Results: The estimates from the two surveys generally aligned well. The EHR data were consistent with the surveys for many measures, but yielded consistently lower estimates of undiagnosed diabetes prevalence, and identified somewhat fewer comorbidities in most states. Conclusions: Despite these limitations, EHRs may be a promising source for diabetes surveillance and assessment of control as the datasets are large and created during the routine delivery of health care. Trial Registration: Not applicable. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
45. Recommendations for Using Causal Diagrams to Study Racial Health Disparities.
- Author
-
Howe, Chanelle J, Bailey, Zinzi D, Raifman, Julia R, and Jackson, John W
- Subjects
- *
RACISM , *EVALUATION of medical care , *HUMAN services programs , *HEALTH equity , *POPULATION health , *CAUSAL models - Abstract
There have been calls for race to be denounced as a biological variable and for a greater focus on racism, instead of solely race, when studying racial health disparities in the United States. These calls are grounded in extensive scholarship and the rationale that race is not a biological variable, but instead socially constructed, and that structural/institutional racism is a root cause of race-related health disparities. However, there remains a lack of clear guidance for how best to incorporate these assertions about race and racism into tools, such as causal diagrams, that are commonly used by epidemiologists to study population health. We provide clear recommendations for using causal diagrams to study racial health disparities that were informed by these calls. These recommendations consider a health disparity to be a difference in a health outcome that is related to social, environmental, or economic disadvantage. We present simplified causal diagrams to illustrate how to implement our recommendations. These diagrams can be modified based on the health outcome and hypotheses, or for other group-based differences in health also rooted in disadvantage (e.g. gender). Implementing our recommendations may lead to the publication of more rigorous and informative studies of racial health disparities. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
46. Bespoke Instrumental Variable Approach to Correction for Exposure Measurement Error.
- Author
-
Richardson, David B, Keil, Alexander P, Edwards, Jessie K, Cole, Stephen R, and Tchetgen, Eric J Tchetgen
- Subjects
- *
MATHEMATICAL variables , *RESEARCH bias , *MEASUREMENT errors , *PROBABILITY theory - Abstract
A covariate-adjusted estimate of an exposure-outcome association may be biased if the exposure variable suffers measurement error. We propose an approach to correct for exposure measurement error in a covariate-adjusted estimate of the association between a continuous exposure variable and outcome of interest. Our proposed approach requires data for a reference population in which the exposure was a priori set to some known level (e.g. 0, and is therefore unexposed); however, our approach does not require an exposure validation study or replicate measures of exposure, which are typically needed when addressing bias due to exposure measurement error. A key condition for this method, which we refer to as "partial population exchangeability," requires that the association between a measured covariate and outcome in the reference population equals the association between that covariate and outcome in the target population in the absence of exposure. We illustrate the approach using simulations and an example. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
47. Comparison Groups Matter in Traumatic Brain Injury Research: An Example with Dementia.
- Author
-
Albrecht, Jennifer S., Gardner, Raquel C., Wiebe, Douglas, Bahorik, Amber, Xia, Feng, and Yaffe, Kristine
- Subjects
- *
BRAIN injuries , *DEMENTIA , *WOUNDS & injuries , *ALZHEIMER'S disease , *VETERANS' health - Abstract
The association between traumatic brain injury (TBI) and risk for Alzheimer disease and related dementias (ADRD) has been investigated in multiple studies, yet reported effect sizes have varied widely. Large differences in comorbid and demographic characteristics between individuals with and without TBI could result in spurious associations between TBI and poor outcomes, even when control for confounding is attempted. Yet, inadvertent control for post-TBI exposures (e.g., psychological and physical trauma) could result in an underestimate of the effect of TBI. Choice of the unexposed or comparison group is critical to estimating total associated risk. The objective of this study was to highlight how selection of the comparison group impacts estimates of the effect of TBI on risk for ADRD. Using data on Veterans aged ≥55 years obtained from the Veterans Health Administration (VA) for years 1999–2019, we compared risk of ADRD between Veterans with incident TBI (n = 9440) and (1) the general population of Veterans who receive care at the VA (All VA) (n = 119,003); (2) Veterans who received care at a VA emergency department (VA ED) (n = 111,342); and (3) Veterans who received care at a VA ED for non-TBI trauma (VA ED NTT) (n = 65,710). In inverse probability of treatment weighted models, TBI was associated with increased risk of ADRD compared with All VA (hazard ratio [HR] 1.94; 95% confidence interval [CI] 1.84, 2.04), VA ED (HR 1.42; 95% CI 1.35, 1.50), and VA ED NTT (HR 1.12; 95% CI 1.06, 1.18). The estimated effect of TBI on incident ADRD was strongly impacted by choice of the comparison group. [ABSTRACT FROM AUTHOR]
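The inverse probability of treatment weighting used above can be sketched as follows; this is a hypothetical simulation (not the VA analysis), with an invented confounder, propensities, and sample size, showing how weighting balances a covariate across exposure groups:

```python
import random

def iptw_weights(treated, propensity):
    """Inverse-probability-of-treatment weights: 1/p for the treated,
    1/(1-p) for the untreated, where p is the propensity score."""
    return [1 / p if t else 1 / (1 - p) for t, p in zip(treated, propensity)]

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Simulate a binary confounder c that makes exposure more likely;
# weighting by the true propensity balances c across groups.
n = 50_000
rng = random.Random(3)
c = [rng.random() < 0.5 for _ in range(n)]
p = [0.7 if ci else 0.2 for ci in c]      # true propensity given c
t = [rng.random() < pi for pi in p]       # exposure assignment
w = iptw_weights(t, p)

treated_c = weighted_mean([ci for ci, ti in zip(c, t) if ti],
                          [wi for wi, ti in zip(w, t) if ti])
untreated_c = weighted_mean([ci for ci, ti in zip(c, t) if not ti],
                            [wi for wi, ti in zip(w, t) if not ti])
print(treated_c, untreated_c)  # both close to the population mean of 0.5
```

Without the weights, the exposed group's mean of c would sit far above 0.5, which is the kind of imbalance between TBI and comparison groups the study highlights.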
- Published
- 2022
- Full Text
- View/download PDF
48. Immortal time bias for life-long conditions in retrospective observational studies using electronic health records
- Author
-
Freya Tyrer, Krishnan Bhaskaran, and Mark J. Rutherford
- Subjects
Bias (epidemiology), Epidemiologic methods, Immortal time bias, Electronic health records, Life expectancy, Observational studies, Medicine (General), R5-920 - Abstract
Abstract Background Immortal time bias is common in observational studies but is typically described for pharmacoepidemiology studies where there is a delay between cohort entry and treatment initiation. Methods This study used the Clinical Practice Research Datalink (CPRD) and linked national mortality data in England from 2000 to 2019 to investigate immortal time bias for a specific life-long condition, intellectual disability. Life expectancy (Chiang’s abridged life table approach) was compared for 33,867 exposed and 980,586 unexposed individuals aged 10+ years using five methods: (1) treating immortal time as observation time; (2) excluding time before date of first exposure diagnosis; (3) matching cohort entry to first exposure diagnosis; (4) excluding time before proxy date of inputting first exposure diagnosis (by the physician); and (5) treating exposure as a time-dependent measure. Results When not considered in the design or analysis (Method 1), immortal time bias led to disproportionately high life expectancy for the exposed population during the first calendar period (additional years expected to live: 2000–2004: 65.6 [95% CI: 63.6, 67.6]) compared to the later calendar periods (2005–2009: 59.9 [58.8, 60.9]; 2010–2014: 58.0 [57.1, 58.9]; 2015–2019: 58.2 [56.8, 59.7]). Date of entry of diagnosis (Method 4) was unreliable in this CPRD cohort. The remaining methods (Methods 2, 3, and 5) appeared to solve the main theoretical problem, but residual bias may have remained. Conclusions We conclude that immortal time bias is a significant issue for studies of life-long conditions that use electronic health record data and requires careful consideration of how clinical diagnoses are entered onto electronic health record systems.
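Of the five methods listed, treating exposure as time-dependent (Method 5) amounts to splitting each person's follow-up at the first recorded diagnosis date, so that pre-diagnosis person-time is attributed to the unexposed state rather than being credited, "immortally", to the exposed group. A minimal sketch with times as plain numbers; the `split_follow_up` helper is illustrative, not from the paper:

```python
def split_follow_up(entry, exit, diagnosis):
    """Method 5 (time-dependent exposure): attribute person-time before
    the first recorded diagnosis to the unexposed state. `diagnosis` is
    None if no diagnosis was ever recorded during follow-up."""
    if diagnosis is None or diagnosis >= exit:
        return {"unexposed": exit - entry, "exposed": 0}
    if diagnosis <= entry:
        return {"unexposed": 0, "exposed": exit - entry}
    return {"unexposed": diagnosis - entry, "exposed": exit - diagnosis}
```

By contrast, Method 1 (treating immortal time as observation time) would assign the whole `entry`-to-`exit` interval to the exposed group, inflating exposed survival exactly as the abstract's early-period life expectancy figures show.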
- Published
- 2022
- Full Text
- View/download PDF
49. Trends in the Annual Consultation Incidence and Prevalence of Low Back Pain and Osteoarthritis in England from 2000 to 2019: Comparative Estimates from Two Clinical Practice Databases
- Author
-
Yu D, Missen M, Jordan KP, Edwards JJ, Bailey J, Wilkie R, Fitzpatrick J, Ali N, Niblett P, and Peat G
- Subjects
musculoskeletal, electronic health records, epidemiologic methods, osteoarthritis, low back pain, primary care, Infectious and parasitic diseases, RC109-216 - Abstract
Dahai Yu,1 Matthew Missen,1 Kelvin P Jordan,1 John J Edwards,1 James Bailey,1 Ross Wilkie,1 Justine Fitzpatrick,2 Nuzhat Ali,2 Paul Niblett,2 George Peat1 (1Primary Care Centre Versus Arthritis, School of Medicine, Keele University, Keele, Staffordshire, UK; 2Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK). Correspondence: Dahai Yu, Email d.yu@keele.ac.uk. Purpose: To compare estimates of annual person-consulting incidence and prevalence of low back pain (LBP) and osteoarthritis for two national English electronic health record databases (Clinical Practice Research Datalink (CPRD) Aurum and CPRD GOLD). Patients and Methods: Retrospective, population-based, longitudinal cohort study. LBP and osteoarthritis cases were defined using established codelists in people aged ≥15 and ≥45 years, respectively. Incident cases were new recorded cases in a given calendar year with no relevant consultation in the previous 3 years (denominator = exact person-time in the same calendar year for the at-risk population). Prevalent cases were individuals with ≥1 consultation for the condition of interest recorded in a given calendar year, irrespective of prior consultations for the same condition (denominator = all patients with complete registration history in the previous 3 years). We estimated age-sex standardised incidence and annual (12-month period) prevalence for both conditions in 2000–2019, overall, and by sex, age group, and region. Results: Standardised incidence and prevalence of LBP from Aurum were lower than those from GOLD until 2014, after which estimates were similar. Both databases showed recent declines in incidence and prevalence of LBP: declines began earlier in GOLD (after 2012–2014) than Aurum (after 2014–2015).
Standardised incidence (after 2011) and prevalence of osteoarthritis (after 2003) were higher in Aurum than GOLD and showed different trends: incidence and prevalence were stable or increasing in Aurum, decreasing in GOLD. Stratified estimates in CPRD Aurum suggested consistently higher occurrence among women, older age groups, and those living in the north of England. Conclusion: Comparative analyses of two English databases produced conflicting estimates and trends for two common musculoskeletal conditions. Aurum estimates appeared more consistent with external sources and may be useful for monitoring population musculoskeletal health and healthcare demand, but they remain sensitive to analytic decisions and data quality. Keywords: musculoskeletal, electronic health records, epidemiologic methods, osteoarthritis, low back pain, primary care
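The incidence and prevalence definitions in the abstract reduce to simple rate calculations once numerators and denominators are assembled. A sketch with hypothetical counts; the per-10,000 reporting scale is an assumption for illustration, not taken from the paper:

```python
def annual_incidence(new_cases, person_years, scale=10_000):
    """Annual consultation incidence: new recorded cases in a calendar
    year (no relevant consultation in the previous 3 years) divided by
    exact person-time of the at-risk population in that year."""
    return scale * new_cases / person_years

def annual_prevalence(consulters, registered, scale=10_000):
    """Annual (12-month) consultation prevalence: patients with >=1
    consultation for the condition in the year, over all patients with
    complete registration history in the previous 3 years."""
    return scale * consulters / registered
```

The key design point the definitions encode is the 3-year washout: it makes "incident" mean newly consulting, which is why changes in recording behaviour between Aurum and GOLD can shift the estimates.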
- Published
- 2022
50. Examining qualitative and quantitative features of verbal fluency tasks to investigate the mental lexicon in postpartum women: A neuropsychological approach of executive functions applied to language
- Author
-
Barral Paula Eugenia, Miranda Agustín Ramiro, Cortez Mariela Valentina, Scotta Ana Veronica, and Soria Elio Andrés
- Subjects
postpartum period, cognition, epidemiologic methods, parity, women’s health, Oral communication. Speech, P95-95.6, Psychology, BF1-990 - Abstract
During the postpartum period, women experience neurobiological and psychosocial variations that impact language functioning. Word production in verbal fluency tasks (VFTs) is a cognitive indicator of associative (semantic categorization and phonological analysis) and executive (inhibitory control and cognitive flexibility) processes. In addition, linguistic analysis allows production strategies (e.g., orthographic strategies and the use of rhymes) to be characterized, with multivariate statistics facilitating cluster identification of the most common words. Considering these approaches, this study aimed to optimize semantic and phonological VFT analysis for the identification of postpartum women’s mental lexicon using quantitative and qualitative scores. These outcomes were evaluated together with sociodemographic and reproductive data of 100 postpartum women (from Argentina). Description of the mental lexicon was statistically improved and showed that multiparous women clustered words more concisely than primiparous women, with more correct words and better organizational strategies. In sum, female reproductive history improved VFT outcomes. The current results also show that factor analysis can optimize the neuropsychological study of language structuring.
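Quantitative VFT scoring of the kind described (correct-word counts plus clustering and switching as executive indices) can be sketched as follows. The word-to-cluster mapping and the scoring rules here are illustrative assumptions, not the study's actual protocol:

```python
def vft_scores(words, cluster_of):
    """Score a verbal fluency transcript: count unique valid words
    ("correct") and transitions between clusters ("switches", a common
    index of cognitive flexibility). `cluster_of` maps each admissible
    word to a semantic or phonemic cluster label (assumed given)."""
    seen, correct, switches, prev = set(), 0, 0, None
    for w in words:
        if w not in cluster_of or w in seen:
            continue  # intrusions and perseverations are not scored
        seen.add(w)
        correct += 1
        cluster = cluster_of[w]
        if prev is not None and cluster != prev:
            switches += 1
        prev = cluster
    return {"correct": correct, "switches": switches}
```

Scores like these per participant could then feed the multivariate step the abstract mentions (e.g., factor analysis over quantitative and qualitative indices).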
- Published
- 2022
- Full Text
- View/download PDF