30 results for "Dean CB"
Search Results
2. A comparison of classification algorithms for the identification of smoke plumes from satellite images
- Author
- Wan V, Braun WJ, Dean CB, and Henderson S
- Published
- 2010
3. Comparing variational Bayes with Markov chain Monte Carlo for Bayesian computation in neuroimaging.
- Author
- Nathoo FS, Lesperance ML, Lawson AB, and Dean CB
- Subjects
- BRAIN imaging, NEURAL circuitry, BAYES' estimation, MARKOV chain Monte Carlo, INVERSE functions, ELECTROENCEPHALOGRAPHY
- Abstract
In this article, we consider methods for Bayesian computation within the context of brain imaging studies. In such studies, the complexity of the resulting data often necessitates the use of sophisticated statistical models; however, the large size of these data can pose significant challenges for model fitting. We focus specifically on the neuroelectromagnetic inverse problem in electroencephalography, which involves estimating the neural activity within the brain from electrode-level data measured across the scalp. The relationship between the observed scalp-level data and the unobserved neural activity can be represented through an underdetermined dynamic linear model, and we discuss Bayesian computation for such models, where parameters represent the unknown neural sources of interest. We review the inverse problem and discuss variational approximations for fitting hierarchical models in this context. While variational methods have been widely adopted for model fitting in neuroimaging, they have received very little attention in the statistical literature, where Markov chain Monte Carlo is often used. We derive variational approximations for fitting two models: a simple distributed source model and a more complex spatiotemporal mixture model. We compare the approximations to Markov chain Monte Carlo using both synthetic data and a real electroencephalography dataset examining the evoked response related to face perception. The computational advantages of the variational method are demonstrated and the accuracy of the resulting approximations is clarified.
- Published
- 2013
4. Neonatal intensive care unit characteristics affect the incidence of severe intraventricular hemorrhage.
- Author
- Synnes AR, MacNab YC, Qiu Z, Ohlsson A, Gustafson P, Dean CB, Lee SK, and Canadian Neonatal Network
- Abstract
OBJECTIVES: The incidence of intraventricular hemorrhage (IVH), adjusted for known risk factors, varies across neonatal intensive care units (NICUs). The effect of NICU characteristics on this variation is unknown. The objective was to assess IVH attributable risks at both the patient and NICU levels. STUDY DESIGN: Subjects were <33 weeks' gestation and <4 days old on admission in the Canadian Neonatal Network database (all infants admitted in 1996-97 to 17 NICUs). The variation in severe IVH rates was analyzed using Bayesian hierarchical modeling for patient-level and NICU-level factors. RESULTS: Of 3772 eligible subjects, the overall crude incidence rate of grade 3-4 IVH was 8.3% (NICU range 2.0-20.5%). Male gender, extreme preterm birth, low Apgar score, vaginal birth, outborn birth, and high admission severity of illness accounted for 30% of the severe IVH rate variation; admission-day therapy-related variables (treatment of acidosis and hypotension) accounted for an additional 14%. NICU characteristics, independent of patient-level risk factors, accounted for 31% of the variation. NICUs with high patient volume and a high neonatologist/staff ratio had lower rates of severe IVH. CONCLUSIONS: The incidence of severe IVH is affected by NICU characteristics, suggesting important new strategies to reduce this adverse outcome.
- Published
- 2006
5. An exploration of the relationship between wastewater viral signals and COVID-19 hospitalizations in Ottawa, Canada.
- Author
- Peng KK, Renouf EM, Dean CB, Hu XJ, Delatolla R, and Manuel DG
- Abstract
Monitoring of viral signal in wastewater is considered a useful tool for tracking the burden of COVID-19, especially during times of limited testing availability. Studies have shown that COVID-19 hospitalizations are highly correlated with wastewater viral signals and that increases in wastewater viral signals can provide an early warning of increasing hospital admissions. The association is likely nonlinear and time-varying. This project employs a distributed lag nonlinear model (DLNM) (Gasparrini et al., 2010) to study the nonlinear, delayed exposure-response association between COVID-19 hospitalizations and SARS-CoV-2 wastewater viral signals using relevant data from Ottawa, Canada. We consider up to a 15-day time lag from the average of SARS-CoV N1 and N2 gene concentrations to COVID-19 hospitalizations. The expected reduction in hospitalization is adjusted for vaccination efforts. A correlation analysis of the data verifies that COVID-19 hospitalizations are highly correlated with wastewater viral signals, with a time-varying relationship. Our DLNM-based analysis yields a reasonable estimate of COVID-19 hospitalizations and enhances our understanding of the association of COVID-19 hospitalizations with wastewater viral signals.
- Published
- 2023
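The full DLNM machinery is beyond a short sketch, but the lag idea it builds on can be illustrated with a simple lagged-correlation scan on synthetic series (a sketch only: the series, the 5-day lead, and all parameter values here are illustrative assumptions, not the Ottawa data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic series: wastewater viral signal leads hospitalizations by 5 days.
days = 200
signal = np.convolve(rng.gamma(2.0, 1.0, size=days), np.ones(7) / 7, mode="same")
true_lag = 5
hosp = np.roll(signal, true_lag) * 3.0 + rng.normal(0, 0.1, size=days)
hosp[:true_lag] = hosp[true_lag]          # pad the rolled-in edge

def lagged_corr(x, y, lag):
    """Correlation of x[t] with y[t + lag]."""
    return np.corrcoef(x[: len(x) - lag], y[lag:])[0, 1]

# Scan candidate lags 0..15 (the abstract's 15-day window).
corrs = [lagged_corr(signal, hosp, k) for k in range(16)]
best = int(np.argmax(corrs))              # recovers the 5-day lead here
```

A DLNM generalizes this by smoothing over both the lag and the exposure-response dimensions rather than scanning one lag at a time.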
6. A discrete-time susceptible-infectious-recovered-susceptible model for the analysis of influenza data.
- Author
- Bucyibaruta G, Dean CB, and Torabi M
- Abstract
We develop a discrete-time compartmental model to describe the spread of seasonal influenza virus. As the time and disease state variables are assumed to be discrete, this model is considered a discrete-time, stochastic, Susceptible-Infectious-Recovered-Susceptible (DT-SIRS) model, where weekly counts of disease are assumed to follow a Poisson distribution. We allow the disease transmission rate to vary over time, and the disease can only be reintroduced after extinction if there is contact with infected individuals from other host populations. To capture the variability of influenza activity from one season to the next, we define the seasonality with a 4-week period effect that may change over years. We examine three different transmission rates and compare their performance to that of existing approaches. Even though there is limited information for susceptible and recovered individuals, we demonstrate that the simple models for transmission rates effectively capture the behaviour of the disease dynamics. We use a Bayesian approach for inference. The framework is applied in an analysis of the temporal spread of influenza in the province of Manitoba, Canada, 2012-2015.
- Published
- 2023
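The DT-SIRS structure described above, with a Poisson observation layer on weekly counts, can be sketched in a few lines of simulation (a minimal sketch; the rates and population size are illustrative assumptions, not the authors' fitted values, and the time-varying transmission and reintroduction mechanisms are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumptions, not estimates from the paper)
N = 100_000        # host population size
beta = 0.4         # weekly transmission rate (made constant here for brevity)
gamma = 0.3        # weekly recovery rate (I -> R)
delta = 0.02       # weekly rate of waning immunity (R -> S)
weeks = 52

S, I, R = N - 10, 10, 0
observed = []      # weekly case counts: Poisson draws around new infections
for t in range(weeks):
    new_inf = beta * S * I / N        # expected new infections this week
    new_rec = gamma * I               # expected recoveries
    new_sus = delta * R               # expected loss of immunity
    S = max(S - new_inf + new_sus, 0.0)
    I = max(I + new_inf - new_rec, 0.0)
    R = max(R + new_rec - new_sus, 0.0)
    observed.append(rng.poisson(new_inf))  # Poisson observation layer
```

The paper's model additionally lets the transmission rate vary with a 4-week seasonal effect and treats the compartments stochastically; the sketch keeps only the discrete-time bookkeeping and the Poisson count layer.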
7. Assessing dependence between frequency and severity through shared random effects.
- Author
- Becker DG, Woolford DG, and Dean CB
- Subjects
- Human Activities, Humans, Weather, Fires, Lightning, Wildfires
- Abstract
Research on the occurrence and the final size of wildland fires typically models these two events as two separate processes. In this work, we develop and apply a compound process framework for jointly modelling the frequency and the severity of wildland fires. Separate modelling structures for the frequency and the size of fires are linked through a shared random effect. This allows us to fit an appropriate model for frequency and an appropriate model for size of fires while still having a method to estimate the direction and strength of the relationship (e.g., whether days with more fires are associated with days with large fires). The joint estimation of this random effect shares information between the models without assuming a causal structure. We explore spatial and temporal autocorrelation of the random effects to identify additional variation not explained by the inclusion of weather-related covariates. The dependence between frequency and size of lightning-caused fires is found to be negative, indicating that an increase in the number of expected fires is associated with a decrease in the expected size of those fires, possibly due to the rainy conditions necessary for an increase in lightning. Person-caused fires were found to be positively dependent, possibly due to dry weather increasing both human activity and the amount of dry fuel. To test for independence, we perform a power study and find that simply checking whether zero is in the credible interval of the posterior of the linking parameter is as powerful as more complicated tests.
- Published
- 2022
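The shared-random-effect linkage described above can be mimicked with a small simulation (a sketch of the general idea only; the distributions, the linking parameter, and all other values are our own assumptions, not the paper's fitted model):

```python
import numpy as np

rng = np.random.default_rng(11)

# Each "day" gets a latent random effect b; it enters the fire-count model
# directly and, scaled by a linking parameter, the fire-size model. The
# sign of `link` controls the direction of frequency-severity dependence.
n_days = 2000
link = -0.5                       # negative: more fires <-> smaller fires
b = rng.normal(0.0, 0.6, size=n_days)

counts = rng.poisson(np.exp(0.5 + b))        # daily fire counts
mean_log_size = 1.0 + link * b               # shared effect, scaled by link
sizes = rng.lognormal(mean_log_size, 0.4)    # daily mean fire size

# Empirical check: the induced count-size correlation takes the sign of `link`.
corr = np.corrcoef(counts, sizes)[0, 1]
```

Inference in the paper is Bayesian, with a credible interval for the linking parameter serving as the independence check; the sketch shows only how a single shared effect induces dependence between otherwise separate models.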
8. Spatial statistical tools for genome-wide mutation cluster detection under a microarray probe sampling system.
- Author
- Luo B, Edge AK, Tolg C, Turley EA, Dean CB, Hill KA, and Kulperger RJ
- Subjects
- Algorithms, Animals, Chromosomes, Mammalian genetics, Cluster Analysis, Computer Simulation, Genetic Variation, Genotype, Mice, Polymorphism, Single Nucleotide genetics, DNA Probes metabolism, Genome, Mutation genetics, Oligonucleotide Array Sequence Analysis, Statistics as Topic
- Abstract
Mutation cluster analysis is critical for understanding certain mutational mechanisms relevant to genetic disease, diversity, and evolution. Yet, whole genome sequencing for detection of mutation clusters is cost-prohibitive for most organisms and population surveys. Single nucleotide polymorphism (SNP) genotyping arrays, like the Mouse Diversity Genotyping Array, offer an alternative low-cost screen for mutations at hundreds of thousands of loci across the genome, using experimental designs that permit capture of de novo mutations in any tissue. Formal statistical tools for genome-wide detection of mutation clusters under a microarray probe sampling system are yet to be established. A challenge in the development of statistical methods is that microarray detection of mutation clusters is constrained to the SNP loci captured by probes on the array. This paper develops a Monte Carlo framework for cluster testing and assesses test statistics for capturing potential deviations from spatial randomness which are motivated by, and incorporate, the array design. While null distributions of the test statistics are established under spatial randomness via the homogeneous Poisson process, power performance of the test statistics is evaluated under postulated types of Neyman-Scott clustering processes through Monte Carlo simulation. A new statistic is developed and recommended as a screening tool for mutation cluster detection. The statistic is demonstrated to be excellent in terms of its robustness and power performance, and useful for cluster analysis in settings of missing data. The test statistic can also be generalized to any one-dimensional system where every site is observed, such as DNA sequencing data. The paper illustrates how informal graphical tools for detecting clusters may be misleading. The statistic is used for finding clusters of putative SNP differences in a mixture of different mouse genetic backgrounds and clusters of de novo SNP differences arising between tissues with development and carcinogenesis.
- Published
- 2018
9. Joint modeling of zero-inflated panel count and severity outcomes.
- Author
- Juarez-Colunga E, Silva GL, and Dean CB
- Subjects
- Female, Hormone Replacement Therapy, Humans, Markov Chains, Monte Carlo Method, Treatment Outcome, Longitudinal Studies, Models, Statistical, Severity of Illness Index
- Abstract
Panel counts are often encountered in longitudinal studies, such as diary studies, where individuals are followed over time and the number of events occurring in time intervals, or panels, is recorded. This article develops methods for situations where, in addition to the counts of events, a mark, denoting a measure of severity of the events, is recorded. In many situations there is an association between the panel counts and their marks. This is the case for our motivating application, which studies the effect of two hormone therapy treatments in reducing the counts and severities of vasomotor symptoms in women after hysterectomy/ovariectomy. We model the event counts and their severities jointly through the use of shared random effects. We also compare, through simulation, the power of testing for the treatment effect based on such joint modeling and on an alternative scoring approach, which is commonly employed. The scoring approach analyzes the compound outcome of counts times weighted severity. We discuss this approach and quantify challenges which may arise in isolating the treatment effect when such a scoring approach is used. We also show that the power of detecting a treatment effect is higher when using the joint model than when using the scoring approach. Inference is performed via Markov chain Monte Carlo methods.
- Published
- 2017
10. Classification of Large-Scale Remote Sensing Images for Automatic Identification of Health Hazards: Smoke Detection Using an Autologistic Regression Classifier.
- Author
- Wolters MA and Dean CB
- Abstract
Remote sensing images from Earth-orbiting satellites are a potentially rich data source for monitoring and cataloguing atmospheric health hazards that cover large geographic regions. A method is proposed for classifying such images into hazard and nonhazard regions using the autologistic regression model, which may be viewed as a spatial extension of logistic regression. The method includes a novel and simple approach to parameter estimation that makes it well suited to handling the large and high-dimensional datasets arising from satellite-borne instruments. The methodology is demonstrated on both simulated images and a real application to the identification of forest fire smoke.
- Published
- 2017
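The autologistic model adds a neighbor-label term to the usual logistic predictor. A minimal labelling sketch using iterated conditional modes follows (ICM is one simple way to assign labels under such a model, not the estimation approach of the paper; the grid size, covariate, and coefficients are all assumed values for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 20x20 "image": one covariate per pixel plus a spatial smoothing term.
H = W = 20
x = rng.normal(size=(H, W))          # pixel-level covariate (e.g. a band value)
beta0, beta1, eta = -0.5, 1.5, 0.8   # intercept, covariate effect, spatial effect

def neighbor_sum(y, i, j):
    """Sum of labels over the 4-nearest neighbors (edge pixels have fewer)."""
    s = 0
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < H and 0 <= nj < W:
            s += y[ni, nj]
    return s

# Iterated conditional modes: start from the plain logistic rule, then
# repeatedly relabel each pixel using covariate + current neighbor labels,
# so labels are pulled toward agreement with their neighbors.
y = (beta0 + beta1 * x > 0).astype(int)
for _ in range(5):
    for i in range(H):
        for j in range(W):
            logit = beta0 + beta1 * x[i, j] + eta * neighbor_sum(y, i, j)
            y[i, j] = int(logit > 0)
```

The spatial term is what distinguishes this from ordinary logistic regression: a pixel with an ambiguous covariate value inherits the label of its surroundings, which is the smoothing behaviour wanted for contiguous smoke plumes.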
11. A multi-state model for the analysis of changes in cognitive scores over a fixed time interval.
- Author
- Mitnitski AB, Fallah N, Dean CB, and Rockwood K
- Subjects
- Aged, Aging psychology, Canada, Cognition Disorders mortality, Female, Humans, Linear Models, Longitudinal Studies, Male, Multivariate Analysis, Poisson Distribution, Cognition, Cognition Disorders diagnosis, Cognition Disorders psychology, Neuropsychological Tests
- Abstract
In this article, we present the novel approach of using a multi-state model to describe longitudinal changes in cognitive test scores. Scores are modelled according to a truncated Poisson distribution, conditional on survival to a fixed endpoint, with the Poisson mean dependent upon the baseline score and covariates. The model provides a unified treatment of the distribution of cognitive scores, taking into account baseline scores and survival. It offers a simple framework for the simultaneous estimation of the effect of covariates modulating these distributions, over different baseline scores. A distinguishing feature is that this approach permits estimation of the probabilities of transitions in different directions: improvements, declines and death. The basic model is characterised by four parameters, two of which represent cognitive transitions in survivors, both for individuals with no cognitive errors at baseline and for those with non-zero errors, within the range of test scores. The two other parameters represent corresponding likelihoods of death. The model is applied to an analysis of data from the Canadian Study of Health and Aging (1991-2001) to identify the risk of death, and of changes in cognitive function as assessed by errors in the Modified Mini-Mental State Examination. The model performance is compared with more conventional approaches, such as multivariate linear and polytomous regressions. This model can also be readily applied to a wide variety of other cognitive test scores and phenomena which change with age.
- Published
- 2014
12. Efficient panel designs for longitudinal recurrent event studies recording panel counts.
- Author
- Juarez-Colunga E, Dean CB, and Balshaw R
- Subjects
- Humans, Poisson Distribution, Recurrence, Time Factors, Clinical Trials as Topic standards, Models, Statistical, Research Design standards, Treatment Outcome
- Abstract
Many clinical trials are designed to study outcome measures recorded as the number of events occurring during specific intervals, called panel data. In such data, the intervals are specified by a planned set of follow-up times. As the collection of panel data results in a partial loss of information relative to a record of the actual event times, it is important to gain a thorough understanding of the impact of panel study designs on the efficiency of the estimates of treatment effects and covariates. This understanding can then be used as a base from which to formulate appropriate designs by layering in other concerns, e.g. clinical constraints, or other practical considerations. We compare the efficiency of the analysis of panel data with respect to the analysis of data recorded precisely as times of recurrences, and articulate conditions for efficient panel designs where the focus is on estimation of a treatment effect when adjusting for other covariates. We build from the efficiency comparisons to optimize the design of panel follow-up times. We model the recurrent intensity through the common proportional intensity framework, with the treatment effect modeled flexibly as piecewise constant over panels, or groups of panels. We provide some important considerations for the design of efficient panel studies, and illustrate the methods through analysis of designs of studies of adenomas.
- Published
- 2014
13. The consequences of proportional hazards based model selection.
- Author
- Campbell H and Dean CB
- Subjects
- Bias, Biostatistics, Clinical Trials as Topic statistics & numerical data, Computer Simulation, Humans, Survival Analysis, Proportional Hazards Models
- Abstract
For testing the efficacy of a treatment in a clinical trial with survival data, the Cox proportional hazards (PH) model is the well-accepted, conventional tool. When using this model, one typically proceeds by confirming that the required PH assumption holds true. If the PH assumption fails to hold, there are many options available, proposed as alternatives to the Cox PH model. An important question which arises is whether the potential bias introduced by this sequential model-fitting procedure merits concern and, if so, what are effective mechanisms for correction. We investigate by means of a simulation study and draw attention to the considerable drawbacks, with regard to power, of a simple resampling technique, the permutation adjustment, a natural recourse for addressing such challenges. We also consider a recently proposed two-stage testing strategy (2008) for ameliorating these effects.
- Published
- 2014
14. Who is stressed? Comparing cortisol levels between individuals.
- Author
- Nepomnaschy PA, Lee TC, Zeng L, and Dean CB
- Subjects
- Adolescent, Adult, Female, Guatemala, Humans, Hydrocortisone metabolism, Immunoassay methods, Indians, Central American, Reference Values, Statistics, Nonparametric, Stress, Physiological, Young Adult, Hydrocortisone urine, Luminescent Measurements methods
- Abstract
Cortisol is the most commonly used biomarker to compare physiological stress between individuals. Its use, however, is frequently inappropriate. Basal cortisol production varies markedly between individuals, yet in naturalistic studies that variation is often ignored, potentially leading to important biases., Objectives: Identify appropriate analytical tools to compare cortisol across individuals and outline simple simulation procedures for determining the number of measurements required to apply those methods., Methods: We evaluate and compare three alternative methods (raw values, Z-scores, and sample percentiles) for ranking individuals according to their cortisol levels. We apply each of these methods to first morning urinary cortisol data collected thrice weekly from 14 cycling Mayan Kaqchiquel women. We also outline a simple simulation to estimate appropriate sample sizes., Results: Cortisol values varied substantially across women (ranges: means: 1.9-2.7; medians: 1.9-2.8; SD: 0.26-0.49), as did their individual distributions. Cortisol values within women were uncorrelated. The accuracy of the rankings obtained using the Z-scores and sample percentiles was similar, and both were superior to those obtained using the cross-sectional cortisol values. Given the interindividual variation observed in our population, 10-15 cortisol measurements per participant provide an acceptable degree of accuracy for across-women comparisons., Conclusions: The use of single raw cortisol values is inadequate for comparing physiological stress levels across individuals. If the distributions of individuals' cortisol values are approximately normal, then the standardized ranking method is most appropriate; otherwise, the sample percentile method is advised. These methods may be applied to compare stress levels across individuals in other populations and species.
- Published
- 2012
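The comparison of ranking methods described above can be mimicked on simulated data (a sketch under our own assumptions about the within- and between-woman variation; the paper's actual procedures may differ in detail):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate repeated cortisol measurements for women whose true basal levels
# differ (all distributional values here are illustrative assumptions).
n_women, n_samples = 14, 12
true_level = rng.normal(2.3, 0.3, size=n_women)          # per-woman basal mean
data = rng.normal(true_level[:, None], 0.35,             # within-woman noise
                  size=(n_women, n_samples))

# Method 1: rank by a single raw measurement (what the paper argues against).
rank_single = np.argsort(np.argsort(data[:, 0]))

# Method 2: rank by mean within-sample Z-score across repeated measurements.
z = (data - data.mean(axis=0)) / data.std(axis=0)        # standardize each day
rank_z = np.argsort(np.argsort(z.mean(axis=1)))

# Method 3: rank by mean within-sample percentile.
pct = np.argsort(np.argsort(data, axis=0), axis=0) / (n_women - 1)
rank_pct = np.argsort(np.argsort(pct.mean(axis=1)))

# Agreement with the true ordering, via correlation of ranks.
true_rank = np.argsort(np.argsort(true_level))
corr_single = np.corrcoef(rank_single, true_rank)[0, 1]
corr_z = np.corrcoef(rank_z, true_rank)[0, 1]
```

Averaging over repeated measurements shrinks the within-woman noise, which is why the Z-score and percentile rankings tend to track the true ordering more closely than a single raw value.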
15. Short-term cancer mortality projections: a comparative study of prediction methods.
- Author
- Lee TC, Dean CB, and Semenciw R
- Subjects
- Canada epidemiology, Female, Forecasting methods, Humans, Incidence, Male, Models, Statistical, Neoplasms mortality
- Abstract
This paper provides a systematic comparison of cancer mortality and incidence projection methods used at major national health agencies. These methods include Poisson regression using an age-period-cohort model as well as a simple log-linear trend, a joinpoint technique, which accounts for sharp changes, autoregressive time series and state-space models. We assess and compare the reliability of these projection methods by using Canadian cancer mortality data for 12 cancer sites at both the national and regional levels. Cancer sites were chosen to provide a wide range of mortality frequencies. We explore specific techniques for small case counts and for overall national-level projections based on regional-level data. No single method is omnibus in terms of superior performance across a wide range of cancer sites and for all sizes of populations. However, the procedures based on age-period-cohort models used by the Association of the Nordic Cancer Registries tend to provide better performance than the other methods considered. The exception is when case counts are small, where the average of the observed counts over the recent 5-year period yields better predictions.
- Published
- 2011
16. A comparison of classification algorithms for the identification of smoke plumes from satellite images.
- Author
- Wan V, Braun W, Dean C, and Henderson S
- Subjects
- British Columbia, Data Interpretation, Statistical, Databases, Factual, Environmental Exposure, Fires, Humans, Models, Statistical, Spacecraft, Trees, Algorithms, Particulate Matter adverse effects, Smoke
- Abstract
Obtaining accurate measures of exposure to forest fire smoke is important for the assessment of health risk. Estimating exposure from air quality monitors is challenging because of the sparseness of the monitoring networks in remote areas. However, satellite imagery offers a novel and data-rich tool that provides visual information on smoke plumes. We discuss statistical techniques for obtaining estimates of forest fire smoke plumes by applying classification algorithms to data from satellite imagery, in order to develop automated processes for identifying exposure. The aim is to identify whether such methods may offer a high-resolution approach that provides a reliable estimate of smoke and a more thorough capture of the spatial distribution of smoke from fires than is currently available.
- Published
- 2011
17. Clustered mixed nonhomogeneous Poisson process spline models for the analysis of recurrent event panel data.
- Author
- Nielsen JD and Dean CB
- Subjects
- Animals, Cluster Analysis, Data Interpretation, Statistical, Female, Humans, Likelihood Functions, Longitudinal Studies, Male, Moths drug effects, Moths physiology, Poisson Distribution, Sex Attractants pharmacology, Sex Attractants physiology, Biometry methods, Models, Statistical
- Abstract
A flexible semiparametric model for analyzing longitudinal panel count data arising from mixtures is presented. Panel count data refers here to count data on recurrent events collected as the number of events that have occurred within specific follow-up periods. The model assumes that the counts for each subject are generated by mixtures of nonhomogeneous Poisson processes with smooth intensity functions modeled with penalized splines. Time-dependent covariate effects are also incorporated into the process intensity using splines. Discrete mixtures of these nonhomogeneous Poisson process spline models extract functional information from underlying clusters representing hidden subpopulations. The motivating application is an experiment to test the effectiveness of pheromones in disrupting the mating pattern of the cherry bark tortrix moth. Mature moths arise from hidden, but distinct, subpopulations and monitoring the subpopulation responses was of interest. Within-cluster random effects are used to account for correlation structures and heterogeneity common to this type of data. An estimating equation approach to inference requiring only low moment assumptions is developed and the finite sample properties of the proposed estimating functions are investigated empirically by simulation.
- Published
- 2008
18. Hierarchical Bayesian spatiotemporal analysis of revascularization odds using smoothing splines.
- Author
- Silva GL, Dean CB, Niyonsenga T, and Vanasse A
- Subjects
- Acute Coronary Syndrome surgery, Female, Humans, Male, Markov Chains, Monte Carlo Method, Myocardial Revascularization, Small-Area Analysis, Bayes Theorem, Data Interpretation, Statistical, Models, Cardiovascular, Models, Statistical
- Abstract
Hierarchical Bayesian models are proposed for over-dispersed longitudinal spatially correlated binomial data. This class of models accounts for correlation among regions by using random effects and allows a flexible modelling of spatiotemporal odds by using smoothing splines. The aim is (i) to develop models which will identify temporal trends of odds and produce smoothed maps including regional effects, (ii) to specify Markov chain Monte Carlo (MCMC) inference for fitting such models, (iii) to study the sensitivity of such Bayesian binomial spline spatiotemporal analyses to prior assumptions, and (iv) to compare mechanisms for assessing goodness of fit. An analysis of regional variation for revascularization odds of patients hospitalized for acute coronary syndrome in Quebec motivates and illustrates the methods developed.
- Published
- 2008
19. Spatial multistate transitional models for longitudinal event data.
- Author
- Nathoo FS and Dean CB
- Subjects
- Computer Simulation, Markov Chains, Algorithms, Biometry methods, Data Interpretation, Statistical, Longitudinal Studies, Models, Biological, Models, Statistical
- Abstract
Follow-up medical studies often collect longitudinal data on patients. Multistate transitional models are useful for analysis in such studies where at any point in time, individuals may be said to occupy one of a discrete set of states and interest centers on the transition process between states. For example, states may refer to the number of recurrences of an event, or the stage of a disease. We develop a hierarchical modeling framework for the analysis of such longitudinal data when the processes corresponding to different subjects may be correlated spatially over a region. Continuous-time Markov chains incorporating spatially correlated random effects are introduced. Here, joint modeling of both spatial dependence as well as dependence between different transition rates is required and a multivariate spatial approach is employed. A proportional intensities frailty model is developed where baseline intensity functions are modeled using parametric Weibull forms, piecewise-exponential formulations, and flexible representations based on cubic B-splines. The methodology is developed within the context of a study examining invasive cardiac procedures in Quebec. We consider patients admitted for acute coronary syndrome throughout the 139 local health units of the province and examine readmission and mortality rates over a 4-year period.
- Published
- 2008
20. Generalized linear mixed models: a review and some extensions.
- Author
- Dean CB and Nielsen JD
- Subjects
- Epidemiologic Methods, Humans, Longitudinal Studies, Software, Linear Models
- Abstract
Breslow and Clayton (J Am Stat Assoc 88:9-25,1993) was, and still is, a highly influential paper mobilizing the use of generalized linear mixed models in epidemiology and a wide variety of fields. An important aspect is the feasibility in implementation through the ready availability of related software in SAS (SAS Institute, PROC GLIMMIX, SAS Institute Inc., URL http://www.sas.com , 2007), S-plus (Insightful Corporation, S-PLUS 8, Insightful Corporation, Seattle, WA, URL http://www.insightful.com , 2007), and R (R Development Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria, URL http://www.R-project.org , 2006) for example, facilitating its broad usage. This paper reviews background to generalized linear mixed models and the inferential techniques which have been developed for them. To provide the reader with a flavor of the utility and wide applicability of this fundamental methodology we consider a few extensions including additive models, models for zero-heavy data, and models accommodating latent clusters.
- Published
- 2007
21. A mixed mover-stayer model for spatiotemporal two-state processes.
- Author
- Nathoo F and Dean CB
- Subjects
- Computer Simulation, Humans, Monte Carlo Method, Recurrence, Risk Factors, Algorithms, Chronic Disease epidemiology, Communicable Diseases epidemiology, Data Interpretation, Statistical, Models, Biological, Models, Statistical, Risk Assessment methods
- Abstract
Studies of recurring infection or chronic disease often collect longitudinal data on the disease status of subjects. Two-state transitional models are useful for analysis in such studies where, at any point in time, an individual may be said to occupy either a diseased or disease-free state and interest centers on the transition process between states. Here, two additional features are present. The data are spatially arranged and it is important to account for spatial correlation in the transitional processes corresponding to different subjects. In addition there are subgroups of individuals with different mechanisms of transitions. These subgroups are not known a priori and hence group membership must be estimated. Covariates modulating transitions are included in a logistic additive framework. Inference for the resulting mixture spatial Markov regression model is not straightforward. We develop here a Monte Carlo expectation maximization algorithm for maximum likelihood estimation and a Markov chain Monte Carlo sampling scheme for summarizing the posterior distribution in a Bayesian analysis. The methodology is applied to a study of recurrent weevil infestation in British Columbia forests.
- Published
- 2007
- Full Text
- View/download PDF
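The two-state transition structure at the core of this model can be illustrated with a simplified, non-spatial sketch. All quantities below (subject count, horizon, transition matrix) are hypothetical, and the paper's mover-stayer mixture and spatial correlation are omitted; this shows only the basic maximum-likelihood estimate of a two-state transition matrix from panel data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-state chain: 0 = disease-free, 1 = diseased.
P_true = np.array([[0.9, 0.1],
                   [0.4, 0.6]])          # rows sum to 1

# Simulate panel observations for many subjects ("movers" only; a
# mover-stayer model would add a latent class that never transitions).
n_subj, n_times = 500, 20
states = np.zeros((n_subj, n_times), dtype=int)
for t in range(1, n_times):
    p1 = P_true[states[:, t - 1], 1]     # P(next state = 1 | current)
    states[:, t] = (rng.random(n_subj) < p1).astype(int)

# Maximum-likelihood estimate: normalized transition counts.
counts = np.zeros((2, 2))
for a in range(2):
    for b in range(2):
        counts[a, b] = np.sum((states[:, :-1] == a) & (states[:, 1:] == b))
P_hat = counts / counts.sum(axis=1, keepdims=True)
print(np.round(P_hat, 2))
```

The paper's contribution lies in what this sketch leaves out: spatially correlated transition processes, covariate effects in a logistic additive form, and estimation of the latent stayer class via Monte Carlo EM or MCMC.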
22. Medroxyprogesterone and conjugated oestrogen are equivalent for hot flushes: a 1-year randomized double-blind trial following premenopausal ovariectomy.
- Author
-
Prior JC, Nielsen JD, Hitchcock CL, Williams LA, Vigna YM, and Dean CB
- Subjects
- Adult, Dose-Response Relationship, Drug, Double-Blind Method, Female, Humans, Linear Models, Middle Aged, Ovariectomy, Premenopause, Contraceptives, Oral, Synthetic therapeutic use, Estrogens, Conjugated (USP) therapeutic use, Hot Flashes prevention & control, Medroxyprogesterone therapeutic use
- Abstract
Oestrogen therapy is the gold standard treatment for hot flushes/night sweats, but it and oestrogen/progestin are not suitable for all women. MPA (medroxyprogesterone acetate) reduces hot flushes, but its effectiveness compared with oestrogen is unknown. In the present study, oral oestrogen [CEE (conjugated equine oestrogen)] and MPA were compared for their effects on hot flushes in a planned analysis of a secondary outcome for a 1-year randomized double-blind parallel group controlled trial in an urban academic medical centre. Participants were healthy menstruating women prior to hysterectomy/ovariectomy for benign disease. A total of 41 women {age, 45 (5) years [value is mean (S.D.)]} were enrolled; 38 women were included in this analysis of daily identical capsules containing CEE (0.6 mg/day) or MPA (10 mg/day). Demographic variables did not differ at baseline. Daily data provided the number of night and day flushes compared by group. The vasomotor symptom day-to-day intensity change was assessed by therapy assignment. Hot flushes/night sweats were well controlled in both groups; one occurred on average every third day and every fourth night. Mean/day daytime occurrences were 0.363 and 0.187 with CEE and MPA respectively, but were not significantly different (P=0.156). Night sweats also did not differ significantly (P=0.766). Therapies were statistically equivalent (within one event/24 h) in the control of vasomotor symptoms. Day-to-day hot flush intensity decreased with MPA and tended to remain stable with CEE (P<0.001). In conclusion, this analysis demonstrates that MPA and CEE are equivalent and effective in the control of the number of hot flushes/night sweats immediately following premenopausal ovariectomy.
- Published
- 2007
- Full Text
- View/download PDF
23. Modeling the contribution of speeding and impaired driving to insurance claim counts and costs when contributing factors are unknown.
- Author
-
Zheng YY, Cooper PJ, and Dean CB
- Subjects
- Accidents, Traffic economics, British Columbia epidemiology, Computer Simulation, Costs and Cost Analysis, Female, Humans, Insurance Claim Review, Insurance, Liability statistics & numerical data, Logistic Models, Male, Police, Risk-Taking, Substance-Related Disorders economics, Acceleration adverse effects, Accidents, Traffic statistics & numerical data, Insurance, Liability economics, Models, Econometric, Substance-Related Disorders complications
- Abstract
Problem: There are no specific indicators for distinguishing insurance claims related to speeding and impaired driving in the information warehouse at the Insurance Corporation of British Columbia. Contributing factors are only recorded for that part of the claim data that is also reported by the police. Most published statistics on crashes that are related to alcohol or speeding are based on police-reported data, but this represents only a fraction of all incidents. Method: This paper proposes surrogate models to estimate the counts and the average costs associated with speeding and impaired driving to insurance claims when contributing factors are unknown. Using police-reported data, classification rules and logistic regression models are developed to form such estimates. One approach applies classification rules to categorize insurance claims into those related to speeding, impaired driving, and other factors. The counts and the costs of insurance claims for each of these strata and overall are then estimated. A second method models the probability that an insurance claim is related to speeding or impaired driving using logistic regression and uses this to estimate the overall counts and the average costs of the claims. The two methods are compared and evaluated using simulation studies. Results: The logistic regression model was found to be superior to the classification model for predicting insurance claim counts by category, but less efficient at predicting average claim costs. Impact: Having estimates of counts and costs of insurance claims related to impaired driving or speeding for all reported crash events provides a more accurate basis for policy-makers to plan changes and benefits of road safety programs.
- Published
- 2007
- Full Text
- View/download PDF
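The second method described above, summing fitted probabilities from a logistic regression to estimate category counts, can be sketched as follows. The features, coefficients, and sample size are all invented, and the logistic fit uses plain iteratively reweighted least squares rather than the authors' software:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical setup: for police-reported claims we know whether speeding
# contributed (z = 1); x are crash characteristics available for all claims.
n = 5000
x = rng.normal(size=(n, 2))
X = np.column_stack([np.ones(n), x])
beta_true = np.array([-1.0, 0.8, -0.5])
p = 1.0 / (1.0 + np.exp(-X @ beta_true))
z = (rng.random(n) < p).astype(float)

# Fit logistic regression by iteratively reweighted least squares (IRLS).
beta = np.zeros(3)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
    W = mu * (1.0 - mu)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (z - mu))

# For claims with unknown contributing factors, the expected number of
# speeding-related claims is the sum of fitted probabilities.
mu_hat = 1.0 / (1.0 + np.exp(-(X @ beta)))
print(f"actual speeding claims: {z.sum():.0f}, estimated: {mu_hat.sum():.1f}")
```

In practice the model would be trained on the police-reported subset and the fitted probabilities summed over claims whose contributing factors are unrecorded; the sketch scores the same sample only for brevity.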
24. A semiparametric model for the analysis of recurrent-event panel data.
- Author
-
Balshaw RF and Dean CB
- Subjects
- Animals, Biometry, Data Interpretation, Statistical, Female, Humans, Likelihood Functions, Male, Models, Biological, Moths drug effects, Moths physiology, Neoplasm Recurrence, Local drug therapy, Sex Attractants pharmacology, Urinary Bladder Neoplasms drug therapy, Models, Statistical
- Abstract
In many longitudinal studies, interest focuses on the occurrence rate of some phenomenon for the subjects in the study. When the phenomenon is nonterminating and possibly recurring, the result is a recurrent-event data set. Examples include epileptic seizures and recurrent cancers. When the recurring event is detectable only by an expensive or invasive examination, only the number of events occurring between follow-up times may be available. This article presents a semiparametric model for such data, based on a multiplicative intensity model paired with a fully flexible nonparametric baseline intensity function. A random subject-specific effect is included in the intensity model to account for the overdispersion frequently displayed in count data. Estimators are determined from quasi-likelihood estimating functions. Because only first- and second-moment assumptions are required for quasi-likelihood, the method is more robust than those based on the specification of a full parametric likelihood. Consistency of the estimators depends only on the assumption of the proportional intensity model. The semiparametric estimators are shown to be highly efficient compared with the usual parametric estimators. As with semiparametric methods in survival analysis, the method provides useful diagnostics for specific parametric models, including a quasi-score statistic for testing specific baseline intensity functions. The techniques are used to analyze cancer recurrences and a pheromone-based mating disruption experiment in moths. A simulation study confirms that, for many practical situations, the estimators possess appropriate small-sample characteristics.
- Published
- 2002
- Full Text
- View/download PDF
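The moment assumptions that drive the quasi-likelihood approach here, a mean mu together with a variance inflated to mu + alpha*mu^2 by a subject-specific random effect, can be illustrated with simple moment estimators. All values below are hypothetical and the estimators are cruder than the paper's estimating functions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical panel counts: a subject-specific random effect nu_i scales a
# common event rate, producing overdispersed counts. Only the first two
# moments are specified, in the quasi-likelihood spirit of the paper.
n, mu_true, alpha_true = 4000, 5.0, 0.3
nu = rng.gamma(1 / alpha_true, alpha_true, size=n)   # E[nu]=1, Var[nu]=alpha
y = rng.poisson(nu * mu_true)

# Moment estimators from E[y] = mu and Var[y] = mu + alpha * mu^2.
mu_hat = y.mean()
alpha_hat = (y.var(ddof=1) - mu_hat) / mu_hat ** 2
print(f"mu = {mu_hat:.2f}, alpha = {alpha_hat:.2f}")
```

Because only these two moments are assumed, the inference is robust to misspecification of the full count distribution, which is the point the abstract makes about quasi-likelihood.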
25. Spatio-temporal modelling of rates for the construction of disease maps.
- Author
-
MacNab YC and Dean CB
- Subjects
- British Columbia epidemiology, Humans, Infant, Infant Mortality, Infant, Newborn, Linear Models, Poisson Distribution, Rural Population, Urban Population, Epidemiologic Methods, Models, Statistical
- Abstract
There have been significant developments in disease mapping in the past few decades. The continual development of statistical methodology in this area is responsible for the growing popularity of disease mapping because of its potential usefulness in regional health planning, disease surveillance and intervention, and allocating health funding. Here we review the area of disease mapping where relative risks pertain to an event such as incidence or mortality over space and time. In particular we briefly discuss the use of generalized additive mixed models, an additive extension of generalized linear mixed models, for spatio-temporal analysis of disease rates. To illustrate the procedures, we present an in-depth analysis of infant mortality data in the province of British Columbia, Canada. The goals of the analysis are to produce more reliable small-area estimates of mortality rates, assess spatial patterns over time, and examine risk trends at both global (provincial) and local (local health area) levels. (Copyright 2002 John Wiley & Sons, Ltd.)
- Published
- 2002
- Full Text
- View/download PDF
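A standard building block behind "more reliable small-area estimates" is shrinkage of raw standardized mortality ratios toward an overall level. The sketch below uses a gamma-Poisson (empirical Bayes) model with an invented, known prior, rather than the paper's generalized additive mixed model, to show why shrinkage helps:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical small-area data: observed deaths O_i ~ Poisson(E_i * theta_i),
# relative risks theta_i drawn from a Gamma(a, rate b) prior with mean 1.
n = 300
E = rng.uniform(2, 40, n)          # expected counts per area
a, b = 10.0, 10.0                  # hypothetical known prior
theta = rng.gamma(a, 1 / b, n)
O = rng.poisson(E * theta)

smr = O / E                        # raw standardized mortality ratio
eb = (O + a) / (E + b)             # posterior mean under the Gamma prior

# Shrinkage pulls unstable estimates from small-E areas toward the mean.
err_raw = np.mean((smr - theta) ** 2)
err_eb = np.mean((eb - theta) ** 2)
print(f"MSE raw SMR: {err_raw:.4f}  MSE shrunk: {err_eb:.4f}")
```

In real applications the prior is estimated from the data (empirical Bayes) or given hyperpriors (fully Bayes), and spatial structure replaces the independent prior used here.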
26. Autoregressive spatial smoothing and temporal spline smoothing for mapping rates.
- Author
-
MacNab YC and Dean CB
- Subjects
- British Columbia epidemiology, Humans, Infant, Infant Mortality, Infant, Newborn, Models, Statistical, Biometry, Regression Analysis
- Abstract
This article proposes generalized additive mixed models for the analysis of geographic and temporal variability of mortality rates. This class of models accommodates random spatial effects and fixed and random temporal components. Spatiotemporal models that use autoregressive local smoothing across the spatial dimension and B-spline smoothing over the temporal dimension are developed. The objective is the identification of temporal trends and the production of a series of smoothed maps from which spatial patterns of mortality risks can be monitored over time. Regions with consistently high rate estimates may be followed for further investigation. The methodology is illustrated by analysis of British Columbia infant mortality data.
- Published
- 2001
- Full Text
- View/download PDF
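Temporal B-spline smoothing of the kind described can be sketched with SciPy's classic smoothing-spline routines. The years, the underlying trend, the noise level, and the smoothing parameter s are all invented, and the spatial (autoregressive) part of the paper's model is omitted:

```python
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(9)

# Hypothetical yearly rates: a smooth declining trend observed with noise.
years = np.arange(1981, 2001, dtype=float)
trend = 10.0 - 0.2 * (years - 1981)
rates = trend + rng.normal(0, 0.5, years.size)

# Cubic smoothing spline; s bounds the residual sum of squares, trading
# fidelity to the data against smoothness of the fitted curve.
tck = splrep(years, rates, s=30.0)
smoothed = splev(years, tck)
print(np.round(smoothed, 2))
```

Larger s yields smoother fits (here, essentially a single cubic over the whole period); smaller s lets the spline track year-to-year noise, so s plays the role of the smoothing parameter chosen by the model-fitting procedure in the paper.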
27. The use of mixture models for identifying high risks in disease mapping.
- Author
-
Militino AF, Ugarte MD, and Dean CB
- Subjects
- Adult, Algorithms, Bayes Theorem, Breast Neoplasms epidemiology, British Columbia epidemiology, Computer Simulation, Female, Humans, Infant, Infant Mortality, Infant, Newborn, Italy epidemiology, Lip Neoplasms epidemiology, Poisson Distribution, Scotland epidemiology, Epidemiologic Methods, Models, Biological
- Abstract
Conventional approaches for estimating risks in disease mapping or mortality studies are based on Poisson inference. Frequently, overdispersion is present and this extra variability is modelled by introducing random effects. In this paper we compare two computationally simple approaches for incorporating random effects: one based on a non-parametric mixture model assuming that the population arises from a discrete mixture of Poisson distributions, and the second using a Poisson-normal mixture model which allows for spatial autocorrelation. The comparison is focused on how well each of these methods identify the regions which have high risks. Such identification is important because policy makers may wish to target regions associated with such extreme risks for financial assistance while epidemiologists may wish to target such regions for further study. The Poisson-normal mixture model is presented from both a frequentist, or empirical Bayes, and a fully Bayesian point of view. We compare results obtained with the parametric and non-parametric models specifically in terms of detecting extreme mortality risks, using infant mortality data of British Columbia, Canada, for the period 1981-1985, breast cancer data from Sardinia, for the period 1983-1987, and Scottish lip cancer data for 1975-1980. However, we also investigate the performance of these models in a simulation study. The key finding is that discrete mixture models seem to be able to locate regions which experience high risks; normal mixture models also work well in this regard, and perform substantially better when spatial autocorrelation is present. (Copyright 2001 John Wiley & Sons, Ltd.)
- Published
- 2001
- Full Text
- View/download PDF
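A minimal version of the discrete (non-parametric) mixture idea is an EM fit of a two-component Poisson mixture, where the high-rate component plays the role of the "high-risk" regions. Everything below (rates, mixing weight, sample size) is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate region-level counts: most regions at a baseline rate, a
# minority at an elevated ("high-risk") rate.
n = 1000
lam_true = np.array([2.0, 8.0])
pi_true = 0.8                              # weight of the low-risk component
comp = (rng.random(n) >= pi_true).astype(int)
y = rng.poisson(lam_true[comp])

# log Poisson pmf via a precomputed log-factorial table.
logfact = np.concatenate([[0.0], np.cumsum(np.log(np.arange(1, y.max() + 1)))])
def log_pmf(y, lam):
    return y * np.log(lam) - lam - logfact[y]

# EM algorithm for the mixture parameters.
pi, lam = 0.5, np.array([1.0, 5.0])
for _ in range(200):
    # E-step: posterior probability each count is from the low-risk component.
    a = np.log(pi) + log_pmf(y, lam[0])
    b = np.log1p(-pi) + log_pmf(y, lam[1])
    r = 1.0 / (1.0 + np.exp(b - a))
    # M-step: update the weight and the component rates.
    pi = r.mean()
    lam = np.array([np.sum(r * y) / np.sum(r),
                    np.sum((1 - r) * y) / np.sum(1 - r)])

print(f"pi = {pi:.2f}, rates = {np.round(lam, 2)}")
```

The posterior probabilities r are exactly what the paper uses to flag regions: a region with small r is assigned with high probability to the elevated-risk component.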
28. Simultaneous modelling of operative mortality and long-term survival after coronary artery bypass surgery.
- Author
-
Ghahramani M, Dean CB, and Spinelli JJ
- Subjects
- Age Factors, British Columbia, Epidemiologic Methods, Female, Humans, Likelihood Functions, Male, Poisson Distribution, Risk Factors, Sex Factors, Coronary Artery Bypass mortality, Models, Statistical, Survival Analysis
- Abstract
Typical analyses of lifetime data treat the time to death or failure as the response variable and use a variety of modelling strategies, such as proportional hazards or fully parametric models, to investigate the relationship between the response and covariates. In certain circumstances it may be more natural to view the distribution of the response variable as consisting of two or more parts since the survival curve appears segmented. This article addresses such a scenario and we propose a model for simultaneously investigating the effects of covariates over the two segments. The model is an analogue of that proposed by Lambert for zero-inflated Poisson regression. The application is central to the model development and is concerned with survival after coronary artery bypass surgery. Here operative mortality, defined as death within 30 days after surgery, and long-term mortality, are viewed as distinct outcomes. For the application considered, the survivor function displays much steeper descent during the first 30 days after surgery, that is, for operative mortality, than after this period. An investigation of the effects of covariates on operative and long-term mortality after coronary artery bypass surgery illustrates the usefulness of the proposed model. (Copyright 2001 John Wiley & Sons, Ltd.)
- Published
- 2001
- Full Text
- View/download PDF
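The two-part structure, a probability of operative (30-day) death plus a separate long-term survival distribution for those who clear the operative period, can be sketched with invented numbers. Censoring, covariates, and the paper's joint likelihood are all omitted here:

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical two-part survival model in the spirit of the paper:
# operative mortality (death within 30 days, probability p) and, beyond
# 30 days, exponential long-term survival with rate lam.
n = 20000
p_true, lam_true = 0.05, 1.0 / 3000.0
operative = rng.random(n) < p_true
t = np.where(operative,
             rng.uniform(0, 30, n),                   # death in first 30 days
             30 + rng.exponential(1 / lam_true, n))   # long-term survival

# With complete data the two parts separate cleanly:
p_hat = np.mean(t <= 30)                              # operative mortality
lam_hat = 1.0 / np.mean(t[t > 30] - 30)               # long-term rate (MLE)
print(f"p = {p_hat:.3f}, lambda = {lam_hat:.2e}")
```

The paper's contribution is to let covariates act on both parts simultaneously, in analogy with Lambert's zero-inflated Poisson regression, rather than fitting the segments separately as above.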
29. Detecting interaction between random region and fixed age effects in disease mapping.
- Author
-
Dean CB, Ugarte MD, and Militino AF
- Subjects
- Age Factors, British Columbia epidemiology, Humans, Mortality, Poisson Distribution, Biometry, Disease, Models, Statistical
- Abstract
The purpose of this article is to draw attention to the possible need for inclusion of interaction effects between regions and age groups in mapping studies. We propose a simple model for including such an interaction in order to develop a test for its significance. The assumption of an absence of such interaction effects is a helpful simplifying one: the measure of relative risk related to a particular region becomes easily and neatly summarized. Indeed, such a test seems warranted because it is anticipated that the simple model in common use, which ignores such interaction, may at times be adequate. The test proposed is a score test and hence only requires fitting the simpler model. We illustrate our approaches using mortality data from British Columbia, Canada, over the 5-year period 1985-1989. For these data, the interaction effect between age groups and regions is quite large and significant.
- Published
- 2001
- Full Text
- View/download PDF
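Although the paper develops a score test, the null hypothesis is easy to state in a toy log-linear setting: no region-by-age interaction means the expected counts factor into row and column effects. The sketch below uses a deviance (likelihood-ratio) statistic instead of the paper's score statistic, with invented counts and no random region effects:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)

# Hypothetical deaths cross-classified by region (rows) and age group
# (columns). Under "no interaction" the Poisson log-linear model is
# additive, and expected counts take the row-total * column-total / total form.
counts = rng.poisson(np.outer([50, 80, 30], [1.0, 2.0, 4.0]))  # no true interaction

rows = counts.sum(axis=1, keepdims=True)
cols = counts.sum(axis=0, keepdims=True)
expected = rows * cols / counts.sum()

# Deviance statistic for interaction, compared with chi-square on
# (R-1)(A-1) degrees of freedom. A score test, as in the paper, likewise
# needs only the simpler additive fit.
G2 = 2.0 * np.sum(counts * np.log(counts / expected))
df = (counts.shape[0] - 1) * (counts.shape[1] - 1)
p_value = chi2.sf(G2, df)
print(f"G2 = {G2:.2f}, df = {df}, p = {p_value:.3f}")
```

The appeal the abstract highlights carries over: both the score test and this deviance screen only require fitting (here, tabulating) the no-interaction model.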
30. Parametric bootstrap and penalized quasi-likelihood inference in conditional autoregressive models.
- Author
-
MacNab YC and Dean CB
- Subjects
- Algorithms, British Columbia epidemiology, Humans, Infant Mortality, Infant, Newborn, Likelihood Functions, Maps as Topic, Regression Analysis, Models, Statistical, Small-Area Analysis
- Abstract
This paper discusses a variety of conditional autoregressive (CAR) models for mapping disease rates, beyond the usual first-order intrinsic CAR model. We illustrate the utility and scope of such models for handling different types of data structures. To encourage their routine use for map production at statistical and health agencies, a simple algorithm for fitting such models is presented. This is derived from penalized quasi-likelihood (PQL) inference which uses an analogue of best linear unbiased estimation for the regional risk ratios and restricted maximum likelihood for the variance components. We offer the practitioner here the use of the parametric bootstrap for inference. It is more reliable than standard maximum likelihood asymptotics for inference purposes since relevant hypotheses for the mapping of rates lie on the boundary of the parameter space. We illustrate the parametric bootstrap test of the practically relevant and important simplifying hypothesis that there is no spatial autocorrelation. Although the parametric bootstrap requires computational effort, it is straightforward to implement and offers a wealth of information relating to the estimators and their properties. The proposed methodology is illustrated by analysing infant mortality in the province of British Columbia in Canada. (Copyright 2000 John Wiley & Sons, Ltd.)
- Published
- 2000
- Full Text
- View/download PDF
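The parametric bootstrap logic is generic: fit the null model, simulate fresh datasets from the fitted null, and compare the observed statistic with the simulated ones. A toy version for a boundary hypothesis (no extra-Poisson variation, i.e. a variance component equal to zero, where chi-square asymptotics are unreliable) is sketched below with hypothetical data and a much simpler statistic than the CAR setting requires:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical observed counts; H0 is the plain Poisson model.
y = rng.poisson(4.0, size=100)

def dispersion(y):
    # Pearson-type overdispersion statistic.
    return np.sum((y - y.mean()) ** 2) / y.mean()

T_obs = dispersion(y)
lam_hat = y.mean()                        # fit of the null model

# Parametric bootstrap: simulate the null distribution of the statistic
# from the fitted null model and compare.
B = 2000
T_boot = np.array([dispersion(rng.poisson(lam_hat, size=y.size))
                   for _ in range(B)])
p_value = np.mean(T_boot >= T_obs)
print(f"bootstrap p-value: {p_value:.3f}")
```

In the paper the same recipe is applied with PQL fits of CAR models inside the loop, which is where the computational effort mentioned in the abstract arises.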