28 results for "Gustafson, P."
Search Results
2. Use of instrumental variables in the analysis of generalized linear models in the presence of unmeasured confounding with applications to epidemiological research.
- Author
- Johnston, K. M., Gustafson, P., Levy, A. R., and Grootendorst, P.
- Abstract
A major, often unstated, concern of researchers carrying out epidemiological studies of medical therapy is the potential impact on validity if estimates of treatment are biased due to unmeasured confounders. One technique for obtaining consistent estimates of treatment effects in the presence of unmeasured confounders is instrumental variables analysis (IVA). This technique has been well developed in the econometrics literature and is being increasingly used in epidemiological studies. However, the approach to IVA that is most commonly used in such studies is based on linear models, while many epidemiological applications make use of non-linear models, specifically generalized linear models (GLMs) such as logistic or Poisson regression. Here we present a simple method for applying IVA within the class of GLMs using the generalized method of moments approach. We explore some of the theoretical properties of the method and illustrate its use within both a simulation example and an epidemiological study where unmeasured confounding is suspected to be present. We estimate the effects of beta-blocker therapy on one-year all-cause mortality after an incident hospitalization for heart failure, in the absence of data describing disease severity, which is believed to be a confounder. Copyright © 2007 John Wiley & Sons, Ltd.
- Published
- 2008
- Full Text
- View/download PDF
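The generalized-method-of-moments approach to IVA within a GLM, as summarized in the abstract above, can be sketched on simulated data. The data-generating process, instrument strength, and coefficient values below are invented for illustration and are not taken from the paper:

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import expit

rng = np.random.default_rng(0)
n = 20_000

# Illustrative data-generating process: an unmeasured confounder U affects
# both treatment X and outcome Y, while the instrument Z affects X only.
z = rng.binomial(1, np.full(n, 0.5))
u = rng.normal(0.0, 1.0, n)
x = rng.binomial(1, expit(-0.5 + 1.5 * z + u))
y = rng.binomial(1, expit(-1.0 + 0.7 * x + u))

def moments(beta):
    # GMM moment conditions E[g(Z) * (Y - expit(b0 + b1*X))] = 0,
    # with instrument functions g(Z) = (1, Z); just-identified case.
    resid = y - expit(beta[0] + beta[1] * x)
    return [resid.mean(), (z * resid).mean()]

beta_hat = fsolve(moments, x0=[0.0, 0.0])
print("IV-GLM log-odds estimates (intercept, treatment):", beta_hat)
```

Here the naive logistic regression of `y` on `x` would be confounded by `u`; solving the instrument-based moment equations instead avoids conditioning on the unmeasured confounder.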
3. A Bayesian analysis of bivariate survival data from a multicentre cancer clinical trial.
- Author
- Gustafson, Paul
- Published
- 1995
- Full Text
- View/download PDF
4. Incorporating partial adherence into the principal stratification analysis framework.
- Author
- Sanders E, Gustafson P, and Karim ME
- Subjects
- Bias, Computer Simulation, Humans, Monte Carlo Method, Randomized Controlled Trials as Topic, Research Design
- Abstract
Participants in pragmatic clinical trials often partially adhere to treatment. However, to simplify the analysis, most studies dichotomize adherence (supposing that subjects received either full or no treatment), which can introduce biases in the results. For example, the popular approach of principal stratification is based on the concept that the population can be separated into strata based on how they will react to treatment assignment, but this framework does not include strata in which a partially adhering participant would belong. We expanded the principal stratification framework to allow partial adherers to have their own principal stratum and treatment level. The expanded approach is feasible in pragmatic settings. We have designed a Monte Carlo posterior sampling method to obtain the relevant parameter estimates. Simulations were completed under a range of settings where participants partially adhered to treatment, including a hypothetical setting from a published simulation trial on the topic of partial adherence. The inference method is additionally applied to data from a real randomized clinical trial that features partial adherence. Comparison of the simulation results indicated that our method is superior in most cases to the biased estimators obtained through standard principal stratification. Simulation results further suggest that our proposed method may lead to increased accuracy of inference in settings where study participants only partially adhere to assigned treatment., (© 2021 John Wiley & Sons Ltd.)
- Published
- 2021
- Full Text
- View/download PDF
5. STRATOS guidance document on measurement error and misclassification of variables in observational epidemiology: Part 2-More complex methods of adjustment and advanced topics.
- Author
- Shaw PA, Gustafson P, Carroll RJ, Deffner V, Dodd KW, Keogh RH, Kipnis V, Tooze JA, Wallace MP, Küchenhoff H, and Freedman LS
- Subjects
- Bias, Humans, Bayes Theorem
- Abstract
We continue our review of issues related to measurement error and misclassification in epidemiology. We further describe methods of adjusting for biased estimation caused by measurement error in continuous covariates, covering likelihood methods, Bayesian methods, moment reconstruction, moment-adjusted imputation, and multiple imputation. We then describe which methods can also be used with misclassification of categorical covariates. Methods of adjusting estimation of distributions of continuous variables for measurement error are then reviewed. Illustrative examples are provided throughout these sections. We provide lists of available software for implementing these methods and also provide the code for implementing our examples in the Supporting Information. Next, we present several advanced topics, including data subject to both classical and Berkson error, modeling continuous exposures with measurement error, and categorical exposures with misclassification in the same model, variable selection when some of the variables are measured with error, adjusting analyses or design for error in an outcome variable, and categorizing continuous variables measured with error. Finally, we provide some advice for the often met situations where variables are known to be measured with substantial error, but there is only an external reference standard or partial (or no) information about the type or magnitude of the error., (Published 2020. This article is a U.S. Government work and is in the public domain in the USA.)
- Published
- 2020
- Full Text
- View/download PDF
6. STRATOS guidance document on measurement error and misclassification of variables in observational epidemiology: Part 1-Basic theory and simple methods of adjustment.
- Author
- Keogh RH, Shaw PA, Gustafson P, Carroll RJ, Deffner V, Dodd KW, Küchenhoff H, Tooze JA, Wallace MP, Kipnis V, and Freedman LS
- Subjects
- Bias, Calibration, Causality, Computer Simulation, Humans, Models, Statistical, Research Design
- Abstract
Measurement error and misclassification of variables frequently occur in epidemiology and involve variables important to public health. Their presence can impact strongly on results of statistical analyses involving such variables. However, investigators commonly fail to pay attention to biases resulting from such mismeasurement. We provide, in two parts, an overview of the types of error that occur, their impacts on analytic results, and statistical methods to mitigate the biases that they cause. In this first part, we review different types of measurement error and misclassification, emphasizing the classical, linear, and Berkson models, and the concepts of nondifferential and differential error. We describe the impacts of these types of error in covariates and in outcome variables on various analyses, including estimation and testing in regression models and estimating distributions. We outline types of ancillary studies required to provide information about such errors and discuss the implications of covariate measurement error for study design. Methods for ascertaining sample size requirements are outlined, both for ancillary studies designed to provide information about measurement error and for main studies where the exposure of interest is measured with error. We describe two of the simpler methods, regression calibration and simulation extrapolation (SIMEX), that adjust for bias in regression coefficients caused by measurement error in continuous covariates, and illustrate their use through examples drawn from the Observing Protein and Energy Nutrition (OPEN) dietary validation study. Finally, we review software available for implementing these methods. The second part of the article deals with more advanced topics., (Published 2020. This article is a U.S. Government work and is in the public domain in the USA.)
- Published
- 2020
- Full Text
- View/download PDF
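The SIMEX procedure mentioned in the abstract above can be illustrated on simulated data: progressively inflate the measurement error, track the attenuated regression slope, and extrapolate back to the error-free case. The error variance, sample size, and quadratic extrapolant below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
sigma_u = 0.5                 # measurement-error SD, assumed known

x = rng.normal(0.0, 1.0, n)   # true covariate (unobservable in practice)
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)
w = x + rng.normal(0.0, sigma_u, n)   # error-prone measurement of x

def ols_slope(xv, yv):
    return np.cov(xv, yv)[0, 1] / np.var(xv, ddof=1)

# Simulation step: inflate the measurement-error variance by (1 + lam),
# averaging over replicates to smooth the added noise.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = [np.mean([ols_slope(w + rng.normal(0.0, np.sqrt(lam) * sigma_u, n), y)
                   for _ in range(50)])
          for lam in lambdas]

# Extrapolation step: fit a quadratic in lam and evaluate at lam = -1,
# i.e. the hypothetical error-free measurement.
simex_slope = np.polyval(np.polyfit(lambdas, slopes, deg=2), -1.0)
print("naive slope:", slopes[0], "SIMEX-corrected slope:", simex_slope)
```

The naive slope at `lam = 0` is attenuated toward zero by the measurement error; the extrapolated SIMEX estimate moves back toward the true slope of 2.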
7. A threshold-free summary index for quantifying the capacity of covariates to yield efficient treatment rules.
- Author
- Sadatsafavi M, Mansournia MA, and Gustafson P
- Subjects
- Causality, Humans, Research Design
- Abstract
When data on treatment assignment, outcomes, and covariates from a randomized trial are available, a question of interest is to what extent covariates can be used to optimize treatment decisions. Statistical hypothesis testing of covariate-by-treatment interaction is ill-suited for this purpose. The application of decision theory results in treatment rules that compare the expected benefit of treatment given the patient's covariates against a treatment threshold. However, determining the treatment threshold is often context-specific, and any given threshold might seem arbitrary when the overall capacity towards predicting treatment benefit is of concern. We propose the Concentration of Benefit index (Cb), a threshold-free metric that quantifies the combined performance of covariates towards finding individuals who will benefit the most from treatment. The construct of the proposed index is comparing expected treatment outcomes with and without knowledge of covariates when one of two randomly selected patients is to be treated. We show that the resulting index can also be expressed in terms of the integrated efficiency of individualized treatment decision over the entire range of treatment thresholds. We propose parametric and semiparametric estimators, the latter being suitable for out-of-sample validation and correction for optimism. We used data from a clinical trial to demonstrate the calculations in a step-by-step fashion. The proposed index has intuitive and theoretically sound interpretation and can be estimated with relative ease for a wide class of regression models. Beyond the conceptual developments, various aspects of estimation and inference for such a metric need to be pursued in future research. R code that implements the method for a variety of regression models is provided at (https://github.com/msadatsafavi/txBenefit)., (© 2020 John Wiley & Sons, Ltd.)
- Published
- 2020
- Full Text
- View/download PDF
8. Reconciling randomized trial evidence on proximal versus distal outcomes, with application to trials of influenza vaccination for healthcare workers.
- Author
- Gustafson R, Gustafson P, and Daly P
- Subjects
- Computer Simulation, Health Personnel, Humans, Influenza Vaccines, Endpoint Determination methods, Probability, Randomized Controlled Trials as Topic methods
- Abstract
When synthesizing the body of evidence concerning a clinical intervention, impacts on both proximal and distal outcome variables may be relevant. Assessments will be more defensible if results concerning a proximal outcome align with those concerning a corresponding distal outcome. We present a method to assess the coherence of empirical clinical trial results with biologic and mathematical first principles in situations where the intervention can only plausibly impact the distal outcome indirectly via the proximal outcome. The method comprises a probabilistic sensitivity analysis, where plausible ranges for key parameters are specified, resulting in a constellation of plausible pairs of estimated intervention effects, for the proximal and distal outcomes, respectively. Both outcome misclassification and sampling variability are reflected in the method. We apply our methodology in the context of cluster randomized trials to evaluate the impacts of vaccinating healthcare workers on the health of elderly patients, where the proximal outcome is suspected influenza and the distal outcome is death. However, there is scope to apply the method for other interventions in other disease areas., (© 2019 John Wiley & Sons, Ltd.)
- Published
- 2019
- Full Text
- View/download PDF
9. Adjusting for differential misclassification in matched case-control studies utilizing health administrative data.
- Author
- Högg T, Zhao Y, Gustafson P, Petkau J, Fisk J, Marrie RA, and Tremlett H
- Subjects
- Case-Control Studies, Clinical Coding, Databases, Factual, Hospitals, Humans, Models, Statistical, Bayes Theorem, Bias, Data Accuracy, Diagnostic Errors
- Abstract
In epidemiological studies of secondary data sources, lack of accurate disease classifications often requires investigators to rely on diagnostic codes generated by physicians or hospital systems to identify case and control groups, resulting in a less-than-perfect assessment of the disease under investigation. Moreover, because of differences in coding practices by physicians, it is hard to determine the factors that affect the chance of an incorrectly assigned disease status. What results is a dilemma where assumptions of non-differential misclassification are questionable but, at the same time, necessary to proceed with statistical analyses. This paper develops an approach to adjust exposure-disease association estimates for disease misclassification, without the need of simplifying non-differentiality assumptions, or prior information about a complicated classification mechanism. We propose to leverage rich temporal information on disease-specific healthcare utilization to estimate each participant's probability of being a true case and to use these estimates as weights in a Bayesian analysis of matched case-control data. The approach is applied to data from a recent observational study into the early symptoms of multiple sclerosis (MS), where MS cases were identified from Canadian health administrative databases and matched to population controls that are assumed to be correctly classified. A comparison of our results with those from non-differentially adjusted analyses reveals conflicting inferences and highlights that ill-suited assumptions of non-differential misclassification can exacerbate biases in association estimates., (© 2019 John Wiley & Sons, Ltd.)
- Published
- 2019
- Full Text
- View/download PDF
10. Bayesian inference for unidirectional misclassification of a binary response trait.
- Author
- Xia M and Gustafson P
- Subjects
- Computer Simulation, Humans, Models, Statistical, Poisson Distribution, Bayes Theorem, Bias, Regression Analysis
- Abstract
When assessing association between a binary trait and some covariates, the binary response may be subject to unidirectional misclassification. Unidirectional misclassification can occur when revealing a particular level of the trait is associated with a type of cost, such as a social desirability or financial cost. The feasibility of addressing misclassification is commonly obscured by model identification issues. The current paper attempts to study the efficacy of inference when the binary response variable is subject to unidirectional misclassification. From a theoretical perspective, we demonstrate that the key model parameters possess identifiability, except for the case with a single binary covariate. From a practical standpoint, the logistic model with quantitative covariates can be weakly identified, in the sense that the Fisher information matrix may be near singular. This can make learning some parameters difficult under certain parameter settings, even with quite large samples. In other cases, the stronger identification enables the model to provide more effective adjustment for unidirectional misclassification. An extension to the Poisson approximation of the binomial model reveals the identifiability of the Poisson and zero-inflated Poisson models. For fully identified models, the proposed method adjusts for misclassification based on learning from data. For binary models where there is difficulty in identification, the method is useful for sensitivity analyses on the potential impact from unidirectional misclassification., (Copyright © 2017 John Wiley & Sons, Ltd.)
- Published
- 2018
- Full Text
- View/download PDF
11. Bayesian analysis of pair-matched case-control studies subject to outcome misclassification.
- Author
- Högg T, Petkau J, Zhao Y, Gustafson P, Wijnands JM, and Tremlett H
- Subjects
- British Columbia, Comorbidity, Computer Simulation, Data Interpretation, Statistical, Humans, Multiple Sclerosis complications, Odds Ratio, Bayes Theorem, Bias, Case-Control Studies, Databases, Factual
- Abstract
We examine the impact of nondifferential outcome misclassification on odds ratios estimated from pair-matched case-control studies and propose a Bayesian model to adjust these estimates for misclassification bias. The model relies on access to a validation subgroup with confirmed outcome status for all case-control pairs as well as prior knowledge about the positive and negative predictive value of the classification mechanism. We illustrate the model's performance on simulated data and apply it to a database study examining the presence of ten morbidities in the prodromal phase of multiple sclerosis., (Copyright © 2017 John Wiley & Sons, Ltd.)
- Published
- 2017
- Full Text
- View/download PDF
12. A comparison of Bayesian and Monte Carlo sensitivity analysis for unmeasured confounding.
- Author
- McCandless LC and Gustafson P
- Subjects
- Bias, Biostatistics, Causality, Computer Simulation, Confounding Factors, Epidemiologic, Databases, Factual statistics & numerical data, Heart Failure drug therapy, Heart Failure mortality, Humans, Models, Statistical, Sensitivity and Specificity, Bayes Theorem, Monte Carlo Method, Observational Studies as Topic statistics & numerical data
- Abstract
Bias from unmeasured confounding is a persistent concern in observational studies, and sensitivity analysis has been proposed as a solution. In recent years, probabilistic sensitivity analysis using either Monte Carlo sensitivity analysis (MCSA) or Bayesian sensitivity analysis (BSA) has emerged as a practical analytic strategy when there are multiple bias parameter inputs. BSA uses Bayes theorem to formally combine evidence from the prior distribution and the data. In contrast, MCSA samples bias parameters directly from the prior distribution. Intuitively, one would think that BSA and MCSA ought to give similar results. Both methods use similar models and the same (prior) probability distributions for the bias parameters. In this paper, we illustrate the surprising finding that BSA and MCSA can give very different results. Specifically, we demonstrate that MCSA can give inaccurate uncertainty assessments (e.g. 95% intervals) that do not reflect the data's influence on uncertainty about unmeasured confounding. Using a data example from epidemiology and simulation studies, we show that certain combinations of data and prior distributions can result in dramatic prior-to-posterior changes in uncertainty about the bias parameters. This occurs because the application of Bayes theorem in a non-identifiable model can sometimes rule out certain patterns of unmeasured confounding that are not compatible with the data. Consequently, the MCSA approach may give 95% intervals that are either too wide or too narrow and that do not have 95% frequentist coverage probability. Based on our findings, we recommend that analysts use BSA for probabilistic sensitivity analysis., (Copyright © 2017 John Wiley & Sons, Ltd.)
- Published
- 2017
- Full Text
- View/download PDF
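The MCSA half of the comparison in the abstract above is simple to sketch: draw bias parameters from their priors and propagate each draw through a bias formula for a binary unmeasured confounder. All numbers below are invented for illustration, and the resulting interval reflects only prior uncertainty about the bias parameters, not the data's influence on them, which is exactly the gap between MCSA and BSA that the paper examines:

```python
import numpy as np

rng = np.random.default_rng(2)
n_draws = 100_000

# Observed (confounded) odds ratio from a hypothetical study.
or_obs = 1.80

# Priors on the bias parameters: prevalence of a binary unmeasured
# confounder among exposed/unexposed, and its odds ratio with the outcome.
p1 = rng.uniform(0.4, 0.7, n_draws)                # P(U=1 | exposed)
p0 = rng.uniform(0.2, 0.5, n_draws)                # P(U=1 | unexposed)
or_cu = rng.lognormal(np.log(2.0), 0.2, n_draws)   # confounder-outcome OR

# Simple bias factor for a binary confounder (approximate, risk-ratio form),
# then divide it out of the observed estimate for each prior draw.
bias = (p1 * (or_cu - 1) + 1) / (p0 * (or_cu - 1) + 1)
or_adj = or_obs / bias

lo, med, hi = np.percentile(or_adj, [2.5, 50, 97.5])
print(f"MCSA-adjusted OR: {med:.2f} (95% simulation interval {lo:.2f}-{hi:.2f})")
```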
13. Bayesian adjustment for the misclassification in both dependent and independent variables with application to a breast cancer study.
- Author
- Liu J, Gustafson P, and Huo D
- Subjects
- Female, Humans, Logistic Models, Sensitivity and Specificity, Treatment Failure, Bayes Theorem, Breast Neoplasms therapy
- Abstract
In this paper, we propose a Bayesian method to address misclassification errors in both independent and dependent variables. Our work is motivated by a study of women who have experienced new breast cancers on two separate occasions. We call both cancers primary, because the second is usually not considered as the result of a metastasis spreading from the first. Hormone receptors (HRs) are important in breast cancer biology, and it is well recognized that the measurement of HR status is subject to errors. This discordance in HR status for two primary breast cancers is of concern and might be an important reason for treatment failure. To sort out the information on true concordance rate from the observed concordance rate, we consider a logistic regression model for the association between the HR status of the two cancers and introduce the misclassification parameters (i.e., sensitivity and specificity) accounting for the misclassification in HR status. The prior distribution for sensitivity and specificity is based on how HR status is actually assessed in laboratory procedures. To account for the nonlinear effect of one error-free covariate, we introduce the B-spline terms in the logistic regression model. Our findings indicate that the true concordance rate of HR status between two primary cancers is greater than the observed value., (Copyright © 2016 John Wiley & Sons, Ltd.)
- Published
- 2016
- Full Text
- View/download PDF
14. A comparison of Bayesian hierarchical modeling with group-based exposure assessment in occupational epidemiology.
- Author
- Xing L, Burstyn I, Richardson DB, and Gustafson P
- Subjects
- Cohort Studies, Computer Simulation, Humans, Leukemia mortality, Male, Radiation, Bayes Theorem, Data Interpretation, Statistical, Models, Statistical, Occupational Exposure adverse effects
- Abstract
We build a Bayesian hierarchical model for relating disease to a potentially harmful exposure, by using data from studies in occupational epidemiology, and compare our method with the traditional group-based exposure assessment method through simulation studies, a real data application, and theoretical calculation. We focus on cohort studies where a logistic disease model is appropriate and where group means can be treated as fixed effects. The results show a variety of advantages of the fully Bayesian approach and provide recommendations on situations where the traditional group-based exposure assessment method may not be suitable to use., (Copyright © 2013 John Wiley & Sons, Ltd.)
- Published
- 2013
- Full Text
- View/download PDF
15. A Bayesian method for estimating prevalence in the presence of a hidden sub-population.
- Author
- Xia M and Gustafson P
- Subjects
- Adolescent, Adult, Arthritis epidemiology, Asthma epidemiology, Computer Simulation, Female, Humans, Male, Markov Chains, Monte Carlo Method, Neoplasms epidemiology, Prevalence, Sexually Transmitted Diseases epidemiology, Young Adult, Bayes Theorem, Models, Genetic, Models, Statistical, Quantitative Trait, Heritable
- Abstract
When estimating the prevalence of a binary trait in a population, the presence of a hidden sub-population that cannot be sampled will lead to nonidentifiability and potentially biased estimation. We propose a Bayesian model of trait prevalence for a weighted sample from the non-hidden portion of the population, by modeling the relationship between prevalence and sampling probability. We studied the behavior of the posterior distribution on population prevalence, with the large-sample limits of posterior distributions obtained in simple analytical forms that give intuitively expected properties. We performed MCMC simulations on finite samples to evaluate the effectiveness of statistical learning. We applied the model and the results to two illustrative datasets arising from weighted sampling. Our work confirms that sensible results can be obtained using Bayesian analysis, despite the nonidentifiability in this situation., (Copyright © 2012 John Wiley & Sons, Ltd.)
- Published
- 2012
- Full Text
- View/download PDF
16. Hierarchical priors for bias parameters in Bayesian sensitivity analysis for unmeasured confounding.
- Author
- McCandless LC, Gustafson P, Levy AR, and Richardson S
- Subjects
- Adrenergic beta-Antagonists therapeutic use, Angiotensin-Converting Enzyme Inhibitors therapeutic use, Cardiotonic Agents therapeutic use, Digoxin therapeutic use, Diuretics therapeutic use, Female, Heart Failure drug therapy, Heart Failure mortality, Humans, Hydroxymethylglutaryl-CoA Reductase Inhibitors therapeutic use, Male, Pharmacoepidemiology statistics & numerical data, Regression Analysis, Bayes Theorem, Bias, Confounding Factors, Epidemiologic, Data Interpretation, Statistical
- Abstract
Recent years have witnessed new innovation in Bayesian techniques to adjust for unmeasured confounding. A challenge with existing methods is that the user is often required to elicit prior distributions for high-dimensional parameters that model competing bias scenarios. This can render the methods unwieldy. In this paper, we propose a novel methodology to adjust for unmeasured confounding that derives default priors for bias parameters for observational studies with binary covariates. The confounding effects of measured and unmeasured variables are treated as exchangeable within a Bayesian framework. We model the joint distribution of covariates by using a log-linear model with pairwise interaction terms. Hierarchical priors constrain the magnitude and direction of bias parameters. An appealing property of the method is that the conditional distribution of the unmeasured confounder follows a logistic model, giving a simple equivalence with previously proposed methods. We apply the method in a data example from pharmacoepidemiology and explore the impact of different priors for bias parameters on the analysis results., (Copyright © 2011 John Wiley & Sons, Ltd.)
- Published
- 2012
- Full Text
- View/download PDF
17. Bayesian inference of gene-environment interaction from incomplete data: what happens when information on environment is disjoint from data on gene and disease?
- Author
- Gustafson P and Burstyn I
- Subjects
- Arylamine N-Acetyltransferase genetics, Biostatistics, Data Interpretation, Statistical, Genetic Predisposition to Disease, Genotype, Humans, Models, Statistical, Random Allocation, Retrospective Studies, Risk Factors, Smoking adverse effects, Urinary Bladder Neoplasms etiology, Urinary Bladder Neoplasms genetics, Bayes Theorem, Environmental Exposure adverse effects, Models, Genetic
- Abstract
Inference in gene-environment studies can sometimes exploit the assumption of mendelian randomization that genotype and environmental exposure are independent in the population under study. Moreover, in some such problems it is reasonable to assume that the disease risk for subjects without environmental exposure will not vary with genotype. When both assumptions can be invoked, we consider the prospects for inferring the dependence of disease risk on genotype and environmental exposure (and particularly the extent of any gene-environment interaction), without detailed data on environmental exposure. The data structure envisioned involves data on disease and genotype jointly, but only external information about the distribution of the environmental exposure in the population. This is relevant as for many environmental exposures individual-level measurements are costly and/or highly error-prone. Working in the setting where all relevant variables are binary, we examine the extent to which such data are informative about the interaction, via determination of the large-sample limit of the posterior distribution. The ideas are illustrated using data from a case-control study for bladder cancer involving smoking behaviour and the NAT2 genotype., (Copyright © 2011 John Wiley & Sons, Ltd.)
- Published
- 2011
- Full Text
- View/download PDF
18. Bayesian adjustment for exposure misclassification in case-control studies.
- Author
- Chu R, Gustafson P, and Le N
- Subjects
- Anti-Bacterial Agents adverse effects, Biostatistics, Computer Simulation, Female, Herpesvirus 2, Human pathogenicity, Humans, Infant, Logistic Models, Markov Chains, Models, Statistical, Monte Carlo Method, Odds Ratio, Pregnancy, Prenatal Exposure Delayed Effects, Sample Size, Sudden Infant Death etiology, Uterine Cervical Neoplasms etiology, Bayes Theorem, Case-Control Studies
- Abstract
Poor measurement of explanatory variables occurs frequently in observational studies. Error-prone observations may lead to biased estimation and loss of power in detecting the impact of explanatory variables on the response. We consider misclassified binary exposure in the context of case-control studies, assuming the availability of validation data to inform the magnitude of the misclassification. A Bayesian adjustment to correct the misclassification is investigated. Simulation studies show that the Bayesian method can have advantages over non-Bayesian counterparts, particularly in the face of a rare exposure, small validation sample sizes, and uncertainty about whether exposure misclassification is differential or non-differential. The method is illustrated via application to several real studies., (2010 John Wiley & Sons, Ltd.)
- Published
- 2010
- Full Text
- View/download PDF
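The core of the adjustment described in the abstract above can be approximated by a simpler probabilistic correction: draw sensitivity and specificity from Beta posteriors implied by validation data, invert the misclassification equation for the true exposure prevalence in cases and controls, and recompute the odds ratio. The counts below are hypothetical, the correction assumes nondifferential misclassification, and this sketch is far cruder than the paper's full Bayesian model:

```python
import numpy as np

rng = np.random.default_rng(3)
n_draws = 50_000

# Observed (misclassified) exposure counts in a hypothetical case-control study.
exp_cases, n_cases = 120, 400
exp_ctrls, n_ctrls = 80, 400

# Hypothetical validation data on the exposure measurement: Beta posteriors
# for sensitivity and specificity under uniform priors.
se = rng.beta(1 + 45, 1 + 5, n_draws)   # 45/50 true exposed classified exposed
sp = rng.beta(1 + 47, 1 + 3, n_draws)   # 47/50 true unexposed classified unexposed

def correct(p_obs, se, sp):
    # Invert p_obs = se*p + (1 - sp)*(1 - p) for the true prevalence p.
    return (p_obs + sp - 1) / (se + sp - 1)

p_case = correct(exp_cases / n_cases, se, sp)
p_ctrl = correct(exp_ctrls / n_ctrls, se, sp)

# Keep only draws that yield valid probabilities, then recompute the OR.
ok = (p_case > 0) & (p_case < 1) & (p_ctrl > 0) & (p_ctrl < 1)
or_adj = (p_case[ok] / (1 - p_case[ok])) / (p_ctrl[ok] / (1 - p_ctrl[ok]))
print("median misclassification-adjusted OR:", np.median(or_adj))
```

Because nondifferential misclassification attenuates the association, the adjusted odds ratio is typically farther from the null than the observed one.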
19. Bayesian analysis of a matched case-control study with expert prior information on both the misclassification of exposure and the exposure-disease association.
- Author
- Liu J, Gustafson P, Cherry N, and Burstyn I
- Subjects
- Allergens immunology, Asthma chemically induced, Female, Humans, Male, Bayes Theorem, Case-Control Studies, Models, Statistical, Occupational Exposure
- Abstract
We propose a Bayesian adjustment for the misclassification of a binary exposure variable in a matched case-control study. The method admits a priori knowledge about both the misclassification parameters and the exposure-disease association. The standard Dirichlet prior distribution for a multinomial model is extended to allow separation of prior assertions about the exposure-disease association from assertions about other parameters. The method is applied to a study of occupational risk factors for new-onset adult asthma.
- Published
- 2009
- Full Text
- View/download PDF
20. Bayesian adjustment for covariate measurement errors: a flexible parametric approach.
- Author
- Hossain S and Gustafson P
- Subjects
- Algorithms, Bias, Cholesterol blood, Computer Simulation, Coronary Disease blood, Coronary Disease etiology, Humans, Likelihood Functions, Normal Distribution, Regression Analysis, Risk Factors, Sensitivity and Specificity, Bayes Theorem, Linear Models, Markov Chains, Monte Carlo Method, Statistical Distributions
- Abstract
In most epidemiological investigations, the study units are people, the outcome variable (or the response) is a health-related event, and the explanatory variables are usually environmental and/or socio-demographic factors. The fundamental task in such investigations is to quantify the association between the explanatory variables (covariates/exposures) and the outcome variable through a suitable regression model. The accuracy of such quantification depends on how precisely the relevant covariates are measured. In many instances, we cannot measure some of the covariates accurately. Rather, we can measure noisy (mismeasured) versions of them. In statistical terminology, mismeasurement in continuous covariates is known as measurement errors or errors-in-variables. Regression analyses based on mismeasured covariates lead to biased inference about the true underlying response-covariate associations. In this paper, we suggest a flexible parametric approach for avoiding this bias when estimating the response-covariate relationship through a logistic regression model. More specifically, we consider the flexible generalized skew-normal and the flexible generalized skew-t distributions for modeling the unobserved true exposure. For inference and computational purposes, we use Bayesian Markov chain Monte Carlo techniques. We investigate the performance of the proposed flexible parametric approach in comparison with a common flexible parametric approach through extensive simulation studies. We also compare the proposed method with the competing flexible parametric method on a real-life data set. Though emphasis is put on the logistic regression model, the proposed method is unified and is applicable to the other generalized linear models, and to other types of non-linear regression models as well., ((c) 2009 John Wiley & Sons, Ltd.)
- Published
- 2009
- Full Text
- View/download PDF
21. Bayesian propensity score analysis for observational data.
- Author
-
McCandless LC, Gustafson P, and Austin PC
- Subjects
- Aged, Bias, Confidence Intervals, Confounding Factors, Epidemiologic, Female, Humans, Male, Markov Chains, Middle Aged, Monte Carlo Method, Observation, Ontario epidemiology, Bayes Theorem, Hydroxymethylglutaryl-CoA Reductase Inhibitors administration & dosage, Myocardial Infarction mortality, Myocardial Infarction prevention & control
- Abstract
In the analysis of observational data, stratifying patients on the estimated propensity scores reduces confounding from measured variables. Confidence intervals for the treatment effect are typically calculated without acknowledging uncertainty in the estimated propensity scores, and intuitively this may yield inferences that are falsely precise. In this paper, we describe a Bayesian method that models the propensity score as a latent variable. We consider observational studies with a dichotomous treatment, dichotomous outcome, and measured confounders where the log odds ratio is the measure of effect. Markov chain Monte Carlo is used for posterior simulation. We study the impact of modelling uncertainty in the propensity scores in a case study investigating the effect of statin therapy on mortality in Ontario patients discharged from hospital following acute myocardial infarction. Our analysis reveals that the Bayesian credible interval for the treatment effect is 10 per cent wider than that from a conventional propensity score analysis. Using simulations, we show that a weak association between treatment and confounders increases uncertainty in the estimated propensity scores. Bayesian interval estimates for the treatment effect are wider on average, though there is little improvement in coverage probability. A novel feature of the proposed method is that it fits models for the treatment and outcome simultaneously rather than one at a time. The method uses the outcome variable to inform the fit of the propensity model. We explore the performance of the estimated propensity scores using cross-validation., (Copyright © 2008 John Wiley & Sons, Ltd.)
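For contrast with the Bayesian treatment described above, here is a sketch of the conventional stratified analysis it extends, with the confounder itself standing in for an estimated propensity score (a simplification; the paper fits a propensity model). The simulated true treatment effect is null, so stratification should pull the crude odds ratio back toward 1:

```python
import math
import random

def expit(z):
    return 1.0 / (1.0 + math.exp(-z))

def simulate(n=50000, seed=7):
    """Confounder C drives both treatment and outcome; the true effect is null."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        c = rng.gauss(0, 1)
        t = 1 if rng.random() < expit(c) else 0
        y = 1 if rng.random() < expit(c) else 0   # outcome ignores treatment
        rows.append((c, t, y))
    return rows

def two_by_two(rows):
    a = sum(1 for _, t, y in rows if t == 1 and y == 1)
    b = sum(1 for _, t, y in rows if t == 1 and y == 0)
    c = sum(1 for _, t, y in rows if t == 0 and y == 1)
    d = sum(1 for _, t, y in rows if t == 0 and y == 0)
    return a, b, c, d

def crude_or(rows):
    a, b, c, d = two_by_two(rows)
    return a * d / (b * c)

def mh_or(rows, n_strata=10):
    """Mantel-Haenszel OR across strata of the score driving treatment."""
    rows = sorted(rows, key=lambda r: r[0])
    size = len(rows) // n_strata
    num = den = 0.0
    for s in range(n_strata):
        a, b, c, d = two_by_two(rows[s * size:(s + 1) * size])
        num += a * d / size
        den += b * c / size
    return num / den

rows = simulate()
crude, stratified = crude_or(rows), mh_or(rows)   # confounded vs de-confounded
```

The crude odds ratio is inflated well above 1 by confounding, while the stratified Mantel-Haenszel estimate sits near the true null.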
- Published
- 2009
- Full Text
- View/download PDF
22. A Bayesian multilevel model for estimating the diet/disease relationship in a multicenter study with exposures measured with error: the EPIC study.
- Author
-
Ferrari P, Carroll RJ, Gustafson P, and Riboli E
- Subjects
- Biometry, Breast Neoplasms epidemiology, Dietary Fats administration & dosage, Dietary Fats adverse effects, Energy Intake, Epidemiologic Methods, Europe epidemiology, Female, Humans, Markov Chains, Multicenter Studies as Topic statistics & numerical data, Prospective Studies, Risk Factors, Bayes Theorem, Breast Neoplasms etiology, Diet adverse effects, Models, Statistical
- Abstract
In a multicenter study, the overall relationship between diet and cancer risk can be broken down into: (a) within-center relationships, which reflect the relationships at the individual level in each of the centers, and (b) a between-center relationship, which captures the association between exposure and disease risk at the aggregate level. In this work, we propose the use of a Bayesian multilevel model that takes into account the within- and between-center levels of evidence, using information at the individual and aggregate level. Correction for measurement error is performed in order to correct for systematic between-center measurement error in dietary exposure, and for attenuation biases in relative risk estimates within centers. The estimation of the parameters is carried out in a Bayesian framework using Gibbs sampling. The model entails a measurement, an exposure, and a disease component. Within the European Prospective Investigation into Cancer and Nutrition (EPIC) the association between lipid intake, assessed through dietary questionnaire and 24-hour dietary recall, and breast cancer incidence was evaluated. This analysis involved 21 534 women and 334 incident breast cancer cases from the EPIC calibration study. In this study, total energy intake was positively associated with breast cancer incidence at the aggregate level, whereas no effect was observed for fat. At the individual level, height was positively related to breast cancer incidence, whereas a weaker association was observed for fat. The use of multilevel models, which constitute a very powerful approach to estimating individual versus aggregate levels of evidence, should be considered in multicenter studies.
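The within-/between-center decomposition the abstract describes can be sketched without the measurement-error machinery: regress center-mean outcomes on center-mean exposures for the between-center (aggregate) slope, and pooled within-center deviations for the individual-level slope. All values below are simulated and illustrative:

```python
import random
import statistics

def ols_slope(x, y):
    xbar, ybar = statistics.fmean(x), statistics.fmean(y)
    num = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    return num / sum((xi - xbar) ** 2 for xi in x)

def simulate_centers(n_centers=30, n_per=500,
                     beta_within=0.3, beta_between=1.0, seed=3):
    rng = random.Random(seed)
    xs, ys, ctr = [], [], []
    for k in range(n_centers):
        mu_k = rng.gauss(0, 1)             # center-level mean exposure
        for _ in range(n_per):
            x = mu_k + rng.gauss(0, 1)     # individual exposure
            # individual deviations and center means carry different slopes
            y = beta_within * (x - mu_k) + beta_between * mu_k + rng.gauss(0, 1)
            xs.append(x); ys.append(y); ctr.append(k)
    return xs, ys, ctr, n_centers

xs, ys, ctr, K = simulate_centers()
mx = [statistics.fmean([x for x, c in zip(xs, ctr) if c == k]) for k in range(K)]
my = [statistics.fmean([y for y, c in zip(ys, ctr) if c == k]) for k in range(K)]
between = ols_slope(mx, my)                # aggregate-level association
within = ols_slope([x - mx[c] for x, c in zip(xs, ctr)],
                   [y - my[c] for y, c in zip(ys, ctr)])  # individual-level
```

With the parameters above, the two slopes recover distinct aggregate and individual associations, mirroring why a multicenter analysis must keep the two levels of evidence separate.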
- Published
- 2008
- Full Text
- View/download PDF
23. Regression B-spline smoothing in Bayesian disease mapping: with an application to patient safety surveillance.
- Author
-
MacNab YC and Gustafson P
- Subjects
- Aged, British Columbia epidemiology, Humans, Iatrogenic Disease epidemiology, Linear Models, Male, Markov Chains, Safety, Wounds and Injuries epidemiology, Bayes Theorem, Population Surveillance, Regression Analysis
- Abstract
In the context of Bayesian disease mapping, recent literature presents generalized linear mixed models that engender spatial smoothing. The methods assume spatially varying random effects as a route to partially pooling data and 'borrowing strength' in small-area estimation. When spatiotemporal disease rates are available for sequential risk mapping of several time periods, the 'smoothing' issue may be explored by considering spatial smoothing, temporal smoothing and spatiotemporal interaction. In this paper, these considerations are motivated and explored through development of a Bayesian semiparametric disease mapping model framework which facilitates temporal smoothing of rates and relative risks via regression B-splines with mixed-effect representation of coefficients. Specifically, we develop spatial priors such as multivariate Gaussian Markov random fields and non-spatial priors such as unstructured multivariate Gaussian distributions and illustrate how time trends in small-area relative risks may be explored by splines which vary in either a spatially structured or unstructured manner. In particular, we show that with suitable prior specifications for the random effects ensemble, small-area relative risk trends may be fit by 'spatially varying' or randomly varying B-splines. A recently developed Bayesian hierarchical model selection criterion, the deviance information criterion, is used to assess the trade-off between goodness-of-fit and smoothness and to select the number of knots. The methodological development aims to provide reliable information about the patterns (both over space and time) of disease risks and to quantify uncertainty. The study offers a disease and health outcome surveillance methodology for flexible and efficient exploration and assessment of emerging risk trends and clustering. The methods are motivated and illustrated through a Bayesian analysis of adverse medical events (also known as iatrogenic injuries) among hospitalized elderly patients in British Columbia, Canada., (Copyright © 2007 John Wiley & Sons, Ltd.)
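The regression B-splines used for temporal smoothing can be evaluated with the standard Cox-de Boor recursion; the sketch below (pure Python, none of the paper's spatial priors) builds a clamped cubic basis and checks its partition-of-unity property:

```python
def bspline_basis(i, k, x, t):
    """Cox-de Boor recursion: value at x of the i-th degree-k B-spline on knots t."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + k] > t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, x, t)
    if t[i + k + 1] > t[i + 1]:
        right = ((t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1])
                 * bspline_basis(i + 1, k - 1, x, t))
    return left + right

degree = 3
knots = [0, 0, 0, 0, 1, 2, 3, 3, 3, 3]       # clamped cubic basis on [0, 3]
n_basis = len(knots) - degree - 1            # 6 basis functions
totals = [sum(bspline_basis(i, degree, x, knots) for i in range(n_basis))
          for x in (0.0, 0.5, 1.7, 2.9)]     # partition of unity on the interior
```

A smooth time trend is then a linear combination of these basis functions, with the number of interior knots selected (in the paper) by the deviance information criterion.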
- Published
- 2007
- Full Text
- View/download PDF
24. Bayesian sensitivity analysis for unmeasured confounding in observational studies.
- Author
-
McCandless LC, Gustafson P, and Levy A
- Subjects
- Adrenergic beta-Antagonists therapeutic use, Aged, Aged, 80 and over, Bias, British Columbia, Cardiac Output, Low drug therapy, Female, Humans, Male, Sensitivity and Specificity, Treatment Outcome, Bayes Theorem, Confounding Factors, Epidemiologic
- Abstract
We consider Bayesian sensitivity analysis for unmeasured confounding in observational studies where the association between a binary exposure, binary response, measured confounders and a single binary unmeasured confounder can be formulated using logistic regression models. A model for unmeasured confounding is presented along with a family of prior distributions that model beliefs about a possible unmeasured confounder. Simulation from the posterior distribution is accomplished using Markov chain Monte Carlo. Because the model for unmeasured confounding is not identifiable, standard large-sample theory for Bayesian analysis is not applicable. Consequently, the impact of different choices of prior distributions on the coverage probability of credible intervals is unknown. Using simulations, we investigate the coverage probability when averaged with respect to various distributions over the parameter space. The results indicate that credible intervals will have approximately nominal coverage probability, on average, when the prior distribution used for sensitivity analysis approximates the sampling distribution of model parameters in a hypothetical sequence of observational studies. We motivate the method in a study of the effectiveness of beta blocker therapy for treatment of heart failure., (Copyright 2006 John Wiley & Sons, Ltd.)
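A simpler non-Bayesian cousin of this analysis is Monte Carlo external adjustment: sample bias parameters from distributions playing the role of the priors described above, and divide the observed effect by the implied bias factor. The sketch below uses the Schlesselman-type bias factor for a risk ratio with purely illustrative parameter ranges (it is not the paper's joint posterior simulation):

```python
import math
import random

def mc_sensitivity(rr_obs=1.6, n_draws=20000, seed=11):
    """Monte Carlo external adjustment for a binary unmeasured confounder U:
    divide the observed risk ratio by a sampled Schlesselman-type bias factor."""
    rng = random.Random(seed)
    adjusted = []
    for _ in range(n_draws):
        gamma = math.exp(rng.uniform(0.0, math.log(3.0)))  # confounder-outcome RR in (1, 3)
        p1 = rng.uniform(0.3, 0.6)    # P(U=1 | exposed)    -- illustrative range
        p0 = rng.uniform(0.1, 0.3)    # P(U=1 | unexposed)  -- illustrative range
        bias = (p1 * (gamma - 1) + 1) / (p0 * (gamma - 1) + 1)
        adjusted.append(rr_obs / bias)
    adjusted.sort()
    return (adjusted[n_draws // 2],            # median
            adjusted[int(0.025 * n_draws)],    # 2.5th percentile
            adjusted[int(0.975 * n_draws)])    # 97.5th percentile

med, lo, hi = mc_sensitivity()   # the whole interval sits below the observed 1.6
```

Because every sampled scenario has the confounder more common among the exposed and harmful, every draw pulls the observed effect toward the null, yielding a distribution of bias-corrected estimates.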
- Published
- 2007
- Full Text
- View/download PDF
25. An innovative application of Bayesian disease mapping methods to patient safety research: a Canadian adverse medical event study.
- Author
-
MacNab YC, Kmetic A, Gustafson P, and Sheps S
- Subjects
- Adolescent, Adult, British Columbia epidemiology, Child, Child, Preschool, Female, Humans, Infant, Male, Bayes Theorem, Data Interpretation, Statistical, Iatrogenic Disease epidemiology, Models, Statistical, Patient Care adverse effects
- Abstract
Recently developed disease mapping and ecological regression methods have become important techniques in studies of disease epidemiology and in health services research. This increase in importance is partially a result of the development of Bayesian statistical methodologies that make it possible to study associations between health problems and risk factors at an aggregate (i.e. areal) level while taking into account such matters as unmeasured confounding and spatial relationships. In this paper we present a demonstration of the joint use of empirical Bayes (EB) and full Bayesian (FB) inferential techniques in a small area study of adverse medical events (also known as 'iatrogenic injury') in British Columbia, Canada. In particular, we illustrate a unified Bayesian hierarchical spatial modelling framework that enables simultaneous examinations of potential associations between adverse medical event occurrence and regional characteristics, age effects, residual variation and spatial autocorrelation. We propose an analytic strategy for complementary use of EB and FB inferential techniques for risk assessment and model selection, presenting an EB-FB combined approach that draws on the strengths of each method while minimizing inherent weaknesses. The work was motivated by the need to explore relatively efficient ways to analyse regional variations of health services outcomes and resource utilization when a considerable amount of statistical modelling and inference are required.
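The EB side of the EB-FB strategy can be illustrated with the textbook Poisson-gamma smoother for area-level relative risks (a non-spatial simplification of the models discussed above); the prior parameters below are set by a crude moment match and the counts are illustrative:

```python
import statistics

def eb_poisson_gamma(counts, expected):
    """Empirical Bayes smoothing of area-level SMRs under a Poisson-gamma model.
    If relative risks follow Gamma(a, b), the posterior mean for area i is
    (y_i + a) / (E_i + b): a compromise between the raw SMR and the prior mean."""
    smr = [y / e for y, e in zip(counts, expected)]
    m = statistics.fmean(smr)
    v = statistics.pvariance(smr)   # crude moment match; the raw SMR variance
    b = m / v                       # also contains Poisson noise, so the
    a = m * b                       # resulting shrinkage is conservative
    return [(y + a) / (e + b) for y, e in zip(counts, expected)]

counts = [5, 12, 3, 30, 8]              # observed adverse events per area (illustrative)
expected = [8.0, 10.0, 6.0, 22.0, 9.0]  # expected counts from standardization
smoothed = eb_poisson_gamma(counts, expected)
# each smoothed rate lies between its raw SMR and the overall mean SMR
```

This is the 'borrowing strength' idea in its simplest form: small areas with unstable raw rates are pulled toward the overall mean, with the amount of pooling governed by the estimated prior.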
- Published
- 2006
- Full Text
- View/download PDF
26. Curious phenomena in Bayesian adjustment for exposure misclassification.
- Author
-
Gustafson P and Greenland S
- Subjects
- Anti-Bacterial Agents therapeutic use, Computer Simulation, Female, Humans, Infant, Pregnancy, Sudden Infant Death etiology, Bayes Theorem, Bias, Case-Control Studies, Data Interpretation, Statistical
- Abstract
Many epidemiologic investigations involve some discussion of exposure misclassification, but rarely is there an attempt to adjust for misclassification formally in the statistical analysis. Rather, investigators tend to rely on intuition to comment qualitatively on how misclassification might impact their findings. We point out several ways in which intuition might fail, in the context of unmatched case-control analysis with non-differential exposure misclassification. In particular, we focus on how intuition can conflict with the results of a Bayesian analysis that accounts for the various uncertainties at hand. First, the Bayesian adjustment for misclassification can weaken the evidence about the direction of an exposure-disease association. Second, admitting uncertainty about the misclassification parameters can lead to narrower interval estimates concerning the association. We focus on the simple setting of unmatched case-control analysis with binary exposure and without adjustment for confounders, though much of our discussion should be relevant more generally., (Copyright 2005 John Wiley & Sons, Ltd.)
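The formal adjustment contrasted with intuition here generalizes the deterministic Rogan-Gladen correction for non-differential misclassification, sketched below with hypothetical sensitivity and specificity values; the Bayesian analysis instead places priors on these parameters rather than treating them as known:

```python
def corrected_prevalence(p_obs, se, sp):
    """Rogan-Gladen correction: invert P(obs+) = Se*p + (1-Sp)*(1-p)."""
    return (p_obs + sp - 1) / (se + sp - 1)

def odds(p):
    return p / (1 - p)

# hypothetical true exposure prevalences in cases and controls
p_case, p_ctrl = 0.40, 0.25
true_or = odds(p_case) / odds(p_ctrl)               # = 2.0
# non-differential misclassification with Se = 0.8, Sp = 0.9
se, sp = 0.8, 0.9
obs_case = se * p_case + (1 - sp) * (1 - p_case)    # observed prevalence, cases
obs_ctrl = se * p_ctrl + (1 - sp) * (1 - p_ctrl)    # observed prevalence, controls
naive_or = odds(obs_case) / odds(obs_ctrl)          # attenuated toward 1
adj_or = (odds(corrected_prevalence(obs_case, se, sp))
          / odds(corrected_prevalence(obs_ctrl, se, sp)))   # recovers true_or
```

With Se and Sp known exactly, the correction recovers the true odds ratio; the 'curious phenomena' of the paper arise once uncertainty about Se and Sp is admitted.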
- Published
- 2006
- Full Text
- View/download PDF
27. Extending logistic regression to model diffuse interactions.
- Author
-
Gustafson P, Kazi AM, and Levy AR
- Subjects
- Analysis of Variance, Bayes Theorem, Heart Diseases etiology, Humans, Models, Statistical, Sensitivity and Specificity, South Africa, Logistic Models
- Abstract
In an observational study focussed on association between a health outcome and numerous explanatory variables, the question of interactions can be problematic. Commonly, logistic regression of the outcome on the explanatory variables might be employed. Such modelling often includes an attempt to select some pairwise product interaction terms, from amongst the many such possible pairs. For several reasons, however, this can be unsatisfying. Here we consider a different approach based on a parsimonious extension of a logistic regression model without interaction terms. This extension permits an overall synergism or antagonism in how the explanatory variables combine to associate with the outcome, without any attempt to identify specific variables which give rise to interactive behaviour. We call this diffuse interaction. We elucidate some simple properties of the diffuse interaction model, and give an example of its application to epidemiological data. We also consider asymptotic behaviour in a restricted case of the model, to gain some insight into how well this kind of interaction can be detected from data., (Copyright © 2005 John Wiley & Sons, Ltd.)
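The abstract does not give the model's functional form, so purely as a hypothetical illustration: one single-parameter way to let all covariates combine synergistically or antagonistically, without any pairwise terms, is a power combination of non-negative contributions inside the logit:

```python
def diffuse_predictor(betas, xs, lam):
    """Hypothetical one-parameter 'diffuse' predictor: combine non-negative
    contributions v_j = beta_j * x_j as (sum v_j**lam)**(1/lam).
    lam = 1 recovers the ordinary additive predictor; a single lam != 1
    makes all covariates jointly synergistic or antagonistic."""
    vs = [b * x for b, x in zip(betas, xs)]
    assert all(v >= 0 for v in vs), "contributions must be non-negative"
    return sum(v ** lam for v in vs) ** (1.0 / lam)

betas, xs = [0.5, 1.0, 0.25], [1.0, 0.8, 2.0]   # illustrative values
additive = diffuse_predictor(betas, xs, 1.0)    # 0.5 + 0.8 + 0.5 = 1.8
damped = diffuse_predictor(betas, xs, 2.0)      # pulled below the additive value
```

The appeal of such a parameterization is parsimony: one extra parameter captures an overall departure from additivity, instead of one coefficient per covariate pair.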
- Published
- 2005
- Full Text
- View/download PDF
28. The utility of prior information and stratification for parameter estimation with two screening tests but no gold standard.
- Author
-
Gustafson P
- Subjects
- Bayes Theorem, Data Interpretation, Statistical, Epidemiologic Methods, Humans, Markov Chains, Models, Statistical, Monte Carlo Method, Sensitivity and Specificity, Biometry
- Abstract
When a gold standard screening or diagnostic test is not routinely available, it is common to apply two different imperfect tests to subjects from a study population. There is a considerable literature on estimating relevant parameters from the resultant data. In the situation that test sensitivities and specificities are unknown, several inferential strategies have been proposed. One suggestion is to use rough knowledge about the unknown test characteristics as prior information in a Bayesian analysis. Another suggestion is to obtain the statistical advantage of an identified model by splitting the population into two strata with differing disease prevalences. There is some division of opinion in the epidemiological literature on the relative merits of these two approaches. This article aims to shed light on the issue by applying some recently developed theory on the performance of Bayesian inference in non-identified statistical models., (Copyright 2004 John Wiley & Sons, Ltd.)
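The non-identifiability at the heart of this problem is easy to exhibit: under conditional independence, two tests with no gold standard admit 'label-switched' parameter sets that produce exactly the same observed cell probabilities, so data alone cannot distinguish them. A sketch with illustrative parameter values:

```python
def cell_probs(p, se1, sp1, se2, sp2):
    """Joint distribution of two conditionally independent imperfect tests,
    given prevalence p and the tests' sensitivities/specificities."""
    cells = {}
    for t1 in (0, 1):
        for t2 in (0, 1):
            given_pos = (se1 if t1 else 1 - se1) * (se2 if t2 else 1 - se2)
            given_neg = ((1 - sp1) if t1 else sp1) * ((1 - sp2) if t2 else sp2)
            cells[(t1, t2)] = p * given_pos + (1 - p) * given_neg
    return cells

a = cell_probs(0.30, 0.90, 0.95, 0.85, 0.80)
# label-switched parameters: swap the roles of diseased and non-diseased
# (p -> 1-p, Se -> 1-Sp, Sp -> 1-Se)
b = cell_probs(0.70, 0.05, 0.10, 0.20, 0.15)
# a and b are indistinguishable from the observed test results alone
```

With five parameters but only three degrees of freedom in the observed 2x2 table, prior information (or stratification into populations with differing prevalences) is what breaks ties like this one.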
- Published
- 2005
- Full Text
- View/download PDF