24 results on '"Ditlevsen, Susanne Dalager"'
Search Results
2. Reduction of Controls in Preclinical Clamp Studies using a Nonlinear Mixed-Effects Model
- Author
- Ditlevsen, Susanne Dalager, Andersen, Søren, and Nielsen, Emilie Prang
- Abstract
This thesis examines how many control animals are really needed if we make use of historical information from past studies. First, a nonlinear logistic curve for the dose-response relationship is presented and used as the foundation for the different modelling approaches. It is then applied to two chosen historical studies, focusing on the relative potency of the test drug under investigation (an insulin analogue) compared to the control drug, human insulin. Here we include data only from the study under investigation to clarify how the studies are normally analysed, the so-called common way, whereupon we continue by building a mixed-effects model for human insulin alone, investigating thoroughly which fixed and random effects, as well as which transformations, to include. Next, we find that the results do not change dramatically when the analogues are incorporated into the model, and for the two analogues analysed in depth the common and mixed-effects analyses yield similar estimates of the relative potency. However, the standard errors decrease noticeably compared to the benchmark results from the common method, and these results are challenged further in a simulation experiment. This experiment suggests that, by including historical information through the proposed mixed-effects model, we are able to remove at least 50% of the control rats in each of the closely examined studies and still obtain the same level of uncertainty on the relative potency as in the common analysis. We then find that this is in agreement with the theoretical foundation presented, from which we calculate an explicit reduction in the number of control rats of 61.8% and 52.5%, respectively, for two of the studies. Finally, we discuss how to incorporate the past information in the form of the mixed-effects model, suggesting both a meta-analytic and a Bayesian approach.
The above conclusions are foun
- Published
- 2018
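The dose-response setup described in the abstract above can be illustrated with a minimal sketch. All parameter values below are invented for illustration, and the four-parameter logistic used here is one common parametrisation, not necessarily the one used in the thesis:

```python
import numpy as np

def logistic_response(dose, bottom, top, log_ed50, slope):
    """Four-parameter logistic dose-response curve (one common parametrisation)."""
    return bottom + (top - bottom) / (1.0 + np.exp(slope * (np.log(dose) - log_ed50)))

# Hypothetical parallel curves for control (human insulin) and test drug (analogue):
# same bottom/top/slope, shifted horizontally on the log-dose axis.
log_ed50_control = np.log(1.0)  # control ED50 = 1 dose unit (invented)
log_ed50_test = np.log(2.0)     # analogue ED50 = 2 dose units (invented)

# Under parallel curves, relative potency is the dose ratio giving equal responses,
# i.e. exp(log ED50 of control minus log ED50 of test); here the analogue needs
# twice the dose of human insulin for the same effect.
relative_potency = np.exp(log_ed50_control - log_ed50_test)
```

The point of the mixed-effects extension in the thesis is that the control-curve parameters are then informed by historical studies, shrinking the standard error of this potency ratio.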
4. Describing the sound producing behaviour of narwhals based on spatial and temporal covariates: Independence vs. dependence
- Author
- Ditlevsen, Susanne Dalager, and Søltoft-Jensen, Aleksander
- Published
- 2018
5. Statistical Inference in Functional Networks: a Statistical Approach with Application in Neuroscience
- Author
- Ditlevsen, Susanne Dalager, and Zhang, Fayi
- Abstract
The brain is a complex network of connected components whose interactions evolve dynamically to perform specific functions cooperatively. Building the functional network can help us understand how the brain works and carries out certain tasks. Many methods can help build such a network; in this project I introduce a statistically principled approach. The analysis is based on data recorded from 64 sensors during a repeated-behaviour task. I first simulate EEG data with different signal-to-noise ratios (SNRs) and correlation ratios, and show that the method successfully identifies functional networks and edge densities of confidence for these data. I then employ a principled technique to establish functional networks based on predetermined regions of interest using canonical correlation, and analyse the associated dynamic functional networks. Finally, I apply these methods to the real data and build functional networks for the visual cortex and for the whole brain, and analyse how the functional networks differ across stimuli.
- Published
- 2017
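A much simplified stand-in for the network construction the abstract describes is to threshold a correlation matrix of simulated signals and read off the edge density. The sensor count, threshold and seed below are invented, and canonical correlation is not used in this toy version:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 5 "sensors", 200 time points; the first three share a common source.
n_sensors, n_times = 5, 200
source = rng.standard_normal(n_times)
signals = rng.standard_normal((n_sensors, n_times))
signals[:3] += 2.0 * source  # induce strong correlation among sensors 0, 1, 2

# Functional network: threshold the absolute correlation matrix (threshold invented).
corr = np.corrcoef(signals)
adjacency = (np.abs(corr) > 0.5) & ~np.eye(n_sensors, dtype=bool)

# Edge density: fraction of possible undirected edges present in the network.
n_edges = int(adjacency.sum()) // 2
edge_density = n_edges / (n_sensors * (n_sensors - 1) / 2)
```

The statistically principled part of the thesis replaces the arbitrary threshold with a procedure controlling the uncertainty of the declared edges.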
7. Inference in Stochastic Differential Equations With Random Effects
- Author
- Ditlevsen, Susanne Dalager, Thygesen, Uffe H., and Thorsen, Nicklas Myrthue
- Published
- 2017
8. Flexible testing of treatment equivalence in a survival setting
- Author
- Ditlevsen, Susanne Dalager, Scheike, Thomas Harder, Pipper, Christian Bressen, and Furberg, Julie Kjærulff
- Abstract
Equivalence testing is performed to assess whether there is a negligible difference, in terms of clinical relevance, between the effects of two treatments. This thesis focuses on equivalence tests for time-to-event data as seen in survival analysis. The traditional approaches to equivalence testing for survival data utilize properties of the Cox model, whose advantage is that the treatment effect is summarized in a one-dimensional parameter, namely the hazard ratio. The traditional approaches directly use the asymptotic behaviour of the maximum partial likelihood estimator of the regression coefficient derived from the Cox model; this coefficient is linked directly to the hazard ratio through the exponential function. However, other summary measures derived from the distributions of the two treatment populations could be relevant and perhaps easier to interpret. This thesis therefore considers equivalence testing on alternative scales and with other measures summarizing time-to-event data; examples include a restricted survival probability scale and a restricted mean scale. Moreover, it will not always be appropriate to assume an underlying Cox model, as the proportional hazards assumption can be violated, for example when a treatment effect fades over time. This can be remedied by focusing on non-parametric approaches to modelling survival data instead, and the thesis develops more flexible equivalence tests based on the Aalen model. If equivalence tests are performed under a Cox model whose assumptions are violated, the error rates of the tests suffer gravely; we avoid this by using tests derived under the flexible Aalen model. As an extension of the developed theory and methods, the thesis considers how to adjust for potential confounders when performing equivalence tests in observatio
- Published
- 2017
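The equivalence-testing idea on the hazard-ratio scale can be sketched with the standard two one-sided tests (TOST) decision rule. The estimate, standard error and margin below are hypothetical, and this is the conventional Cox-based benchmark rather than the Aalen-based tests the thesis develops:

```python
import math

def tost_equivalent(log_hr_hat, se, delta, z=1.6448536269514722):
    """Two one-sided tests at level alpha = 0.05: declare equivalence when the
    90% confidence interval for log(HR) lies entirely inside (-delta, delta)."""
    lower = log_hr_hat - z * se
    upper = log_hr_hat + z * se
    return (-delta < lower) and (upper < delta)

# Hypothetical study: estimated hazard ratio 1.05 with SE(log HR) = 0.08, and an
# equivalence margin of HR in (0.8, 1.25), symmetric on the log scale (all invented).
equivalent = tost_equivalent(math.log(1.05), 0.08, math.log(1.25))
```

With a larger standard error the same point estimate fails the test, which is the sense in which equivalence testing rewards precision rather than merely a non-significant difference.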
9. Metapopulation models for infectious diseases with applications to the Copenhagen cholera epidemic
- Author
- Ditlevsen, Susanne Dalager, Gerds, Thomas Alexander, and Sørensen, Anne Lyngholm
- Abstract
Mathematical modelling of infectious diseases is about describing the transitions of individuals between mutually exclusive infection states. In an analysis of the spread of an infectious disease, the transition from the susceptible state to the infected state is the most important transition and the focus of modelling. Data on infectious diseases are of a non-experimental, observational nature, which means that the statistical framework for testing a specific hypothesis is usually motivated by the characteristics of the infectious disease, the population at risk and the geographical situation. This thesis concerns the statistical modelling of the cholera epidemic in Copenhagen in 1853. To understand the transitions from susceptible to infected we develop a metapopulation framework. The models are based on a general SIR (Susceptible-Infected-Removed) model whose dynamics are described in a counting-process framework. The metapopulations considered are based on a set of distinct quarters of Copenhagen in 1853, and the models aim to distinguish the transmission of cholera within the quarters from possible interactions with other quarters. The structure of interaction is based on quarters sharing a border and on the direction of the water flow within the city. The data available for the statistical inference are weekly counts of the newly infected at quarter level during the 15 weeks of the epidemic. We introduce internal and external autoregressive covariates, thereby adding a temporal dimension to the models, and focus on models in which we can test the effect of water contamination. In a first simple model we describe the transmission of cholera within the quarters by a single parameter assumed to be the same for all quarters. We also describe the effect of transmission between quarters sharing a border by a single parameter in order to estimate the effect of water contamination.
- Published
- 2017
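A minimal stochastic SIR step of the chain-binomial type can illustrate the susceptible-to-infected transition the abstract centres on. This single-quarter sketch with invented rates omits the between-quarter coupling and the autoregressive covariates:

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_step(S, I, R, beta, gamma, rng):
    """One week of a chain-binomial stochastic SIR update for a single quarter."""
    N = S + I + R
    p_inf = 1.0 - np.exp(-beta * I / N)  # susceptible -> infected this week
    p_rem = 1.0 - np.exp(-gamma)         # infected -> removed this week
    new_inf = rng.binomial(S, p_inf)
    new_rem = rng.binomial(I, p_rem)
    return S - new_inf, I + new_inf - new_rem, R + new_rem

# Invented single-quarter population and rates, run over the 15 weeks of the epidemic.
S, I, R = 990, 10, 0
N0 = S + I + R
weekly_cases = []
for _ in range(15):
    S_new, I_new, R_new = sir_step(S, I, R, beta=0.9, gamma=0.5, rng=rng)
    weekly_cases.append(S - S_new)  # newly infected, matching the weekly-count data
    S, I, R = S_new, I_new, R_new
```

In the metapopulation version, the infection probability for a quarter would additionally depend on the infected counts of bordering and upstream quarters.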
13. Patient Reported Outcomes in Exercise Oncology: Analysis using Longitudinal PROs in Latent Variable Models
- Author
- Ditlevsen, Susanne Dalager, Christensen, Karl Bang, and Sparre, Gry
- Published
- 2016
14. Cluster Analysis and Longitudinal Latent Variable Models: a Case Study for Patients with Acute Leukaemia
- Author
- Ditlevsen, Susanne Dalager, Ekstrøm, Claus Thorn, Christensen, Karl Bang, and Buchardt, Ann-Sophie
- Abstract
We consider data from a randomised controlled trial at Copenhagen University Hospital that aims to determine whether patients with acute leukaemia can benefit from a structured and supervised counselling and exercise programme. Intervention and control groups are followed over 12 weeks to evaluate the effect of the exercise interventions, and the M.D. Anderson Symptom Inventory (MDASI) is administered once a week. Based on methods such as hierarchical clustering and Mokken scaling we examine whether the symptom responses may be clustered, and, given different such clusterings, we form models of varying statistical quality. The longitudinal nature of the data gives rise to different dependence structures, which are assessed. Thus, we compare results from the ordinal regression model, which assumes that responses are independent; from the generalised linear mixed-effects model, which can be quite sensitive to the variance structure specification; and from generalised estimating equations, which are used to estimate the parameters of a generalised linear model with a possibly unknown correlation structure between responses. To examine whether the patients benefit from the programme we analyse the data with a primary hypothesis about the contrast between the intervention and control groups, and we test the effect of additional covariates at hand. As well as composing models based on a priori determined clusters, we endeavour to uncover clusters from the modelling: modelling interactions between time and item allows for simultaneous estimation of effects and identification of clusters.
- Published
- 2016
17. Vaccines, hospitalization and mortality in urban Guinea-Bissau: a statistical analysis
- Author
- Ditlevsen, Susanne Dalager, Ravn, Henrik, and Larsen, Marie Torstholm
- Published
- 2015
19. Parameter Estimation Using a Synthetic Likelihood: An Extended Stochastic FitzHugh-Nagumo Model
- Author
- Ditlevsen, Susanne Dalager, and Maltesen, Asbjørn Thomas
- Abstract
This master's thesis is about parameter estimation. We apply the method introduced in the article by Wood, S. N., Statistical inference for noisy nonlinear ecological dynamic systems, to an extended stochastic FitzHugh-Nagumo model introduced in the paper by Jensen, A. C., et al., Markov chain Monte Carlo approach to parameter estimation in the FitzHugh-Nagumo model. First we study the model, looking at features of both the deterministic and the stochastic version and considering examples of the two. In the next part of the thesis we study the estimation procedure described in Wood's article. To this end we utilize the log synthetic likelihood, which we create using statistics with very specific properties; we realize that the likelihood resembles that of a normal distribution if the statistics are approximately normally distributed. Choosing the statistics is an iterative process based fundamentally on trial and error. After studying the likelihood we introduce the Metropolis-Hastings method, an accept-reject algorithm. We establish the theoretical framework from which the method is developed and study the actual algorithm that we use to simulate data. Finally we combine the model and the method as we estimate the parameters of the model using three modifications of the method and data simulated from the model. The results indicate that using Wood's method is not a feasible way to estimate the parameters of the FitzHugh-Nagumo model. This conclusion was reached after having simulated almost a third of a trillion normally distributed variables over a period of 512 hours (21 days and 8 hours).
- Published
- 2015
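The synthetic-likelihood construction from Wood's article can be sketched generically: simulate replicates at a parameter value, summarise them with chosen statistics, and evaluate the observed summary under a multivariate normal fitted to the simulated summaries. The toy AR(1)-style simulator and summary statistics below are invented stand-ins for the FitzHugh-Nagumo model:

```python
import numpy as np

rng = np.random.default_rng(2)

def synthetic_loglik(theta, s_obs, simulate, summarize, n_rep=200):
    """Wood-style synthetic log-likelihood: simulate n_rep data sets at theta,
    summarise each, and evaluate s_obs under a multivariate normal fitted to
    the simulated summaries (small ridge added for numerical stability)."""
    stats = np.array([summarize(simulate(theta)) for _ in range(n_rep)])
    mu = stats.mean(axis=0)
    cov = np.cov(stats, rowvar=False) + 1e-9 * np.eye(stats.shape[1])
    diff = s_obs - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff)
                   + logdet + len(mu) * np.log(2.0 * np.pi))

# Toy stand-in for the FitzHugh-Nagumo simulator: an AR(1)-type process.
def simulate(theta, n=300):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = theta * x[t - 1] + rng.standard_normal()
    return x

def summarize(x):
    # Invented summary statistics: mean, variance, lag-1 autocorrelation.
    return np.array([x.mean(), x.var(), np.corrcoef(x[:-1], x[1:])[0, 1]])

s_obs = summarize(simulate(0.7))
ll_near = synthetic_loglik(0.7, s_obs, simulate, summarize)
ll_far = synthetic_loglik(0.1, s_obs, simulate, summarize)
```

Embedding this log-likelihood in a Metropolis-Hastings accept-reject step gives the full estimation procedure; the thesis's negative conclusion stems from the enormous simulation cost of each such evaluation for the FitzHugh-Nagumo model.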
20. Maintenance Therapy of Childhood ALL: Longitudinal Profiles of Blood Counts and Their Association
- Author
- Ditlevsen, Susanne Dalager, Rosthøj, Susanne, and Jensen, Katrine Lykke
- Abstract
Acute lymphoblastic leukaemia is the most common cancer among children aged 1 to 15 years. The treatment consists of three parts, of which the maintenance therapy (the last part) is the longest. During maintenance therapy, doses of oral chemotherapy are adjusted in response to frequent measurements of the white blood count (WBC), absolute neutrophil count (ANC) and absolute lymphocyte count (ALC). Currently there exist no general guidelines for how the doses of the drugs are adjusted optimally. In the Nordic countries doses are adjusted according to WBC; however, WBC is not associated with the risk of relapse, whereas ANC has turned out to be. Owing to this finding it is currently discussed whether ANC should be used for dose adjustment instead of WBC. In this thesis we use a large data set to examine the longitudinal profiles of WBC, ANC and ALC during maintenance therapy. The purpose is to see whether these blood counts share some of the same features and to determine their association. In the context of linear mixed models, we consider univariate models for each outcome to examine each of the profiles. To investigate the correlation and to compare profiles, the univariate mixed models are combined into a multivariate model. With several outcomes, the parameters of the multivariate model cannot be estimated because of the large number of random effects; instead a pairwise modelling strategy based on pseudo-likelihoods is implemented. We find some differences between subgroups with respect to their longitudinal profiles. Furthermore, the profiles of the blood counts share some, but not all, features. In particular we find that the processes do not run in parallel: ANC is constant during maintenance therapy, whereas WBC and ALC decrease. Moreover, we find that WBC and ANC are highly correlated, with a correlation of approximately 0.8, the correlation between WBC
- Published
- 2015
21. Logistic regression analysis with missing values
- Author
- Ditlevsen, Susanne Dalager, Gerds, Thomas Alexander, and Starkopf, Liis
- Abstract
The overall scope of the thesis is to estimate logistic regression parameters in the presence of missing covariate data. We consider data that are missing at random. There is a well-developed asymptotic theory which suggests using observed-data likelihood-based inference and weighted full-data estimating equations, where the weights are equal to the inverse of the probability of being observed. First, we describe the estimating equations of both methods. We show that for the maximum likelihood method we need to specify the conditional distribution of the partially observed covariates given the fully observed covariates in order to obtain consistent estimates. Similarly, distributional requirements have to be satisfied for the inverse probability weighted method, but in this case the distribution that needs to be correctly specified is the probability of missingness. We also introduce an augmented inverse probability weighted estimator, which is robust to misspecification in the sense that it only requires one of the two distributions mentioned to be correctly specified. We apply the methods to a real data set and carry out a small simulation study to compare the properties of the estimators.
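As a rough illustration of the weighting idea described in the abstract, the following is a minimal sketch (not the thesis's actual analysis): a logistic model with one covariate missing at random, fitted on complete cases with inverse-probability-of-observation weights. The data, coefficients and missingness model are all hypothetical, and for simplicity the true observation probabilities are used as weights rather than estimated ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Fully observed covariate Z and partially observed covariate X (hypothetical)
Z = rng.normal(size=n)
X = 0.5 * Z + rng.normal(size=n)
# Outcome from a logistic model with hypothetical true coefficients (-0.5, 1.0, 0.5)
eta = -0.5 + 1.0 * X + 0.5 * Z
Y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

# X is missing at random: the probability of observing X depends
# only on the fully observed quantities (Z, Y)
p_obs = 1 / (1 + np.exp(-(1.0 + 0.5 * Z - 0.5 * Y)))
R = rng.binomial(1, p_obs)  # R = 1 means X is observed

def weighted_logistic(design, y, w, n_iter=25):
    """Solve the weighted logistic score equations by Newton-Raphson."""
    beta = np.zeros(design.shape[1])
    for _ in range(n_iter):
        mu = 1 / (1 + np.exp(-design @ beta))
        grad = design.T @ (w * (y - mu))
        hess = (design * (w * mu * (1 - mu))[:, None]).T @ design
        beta = beta + np.linalg.solve(hess, grad)
    return beta

# Complete cases only, weighted by the inverse probability of being observed
cc = R == 1
design = np.column_stack([np.ones(cc.sum()), X[cc], Z[cc]])
beta_ipw = weighted_logistic(design, Y[cc], 1 / p_obs[cc])
print(beta_ipw)  # should be close to the true (-0.5, 1.0, 0.5)
```

An unweighted complete-case fit would be biased here because missingness depends on the outcome; the weights restore consistency, which is the point of the IPW estimating equations.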
- Published
- 2015
22. Parameter Estimation Using a Synthetic Likelihood: An Extended Stochastic FitzHugh-Nagumo Model
- Author
-
Ditlevsen, Susanne Dalager, and Maltesen, Asbjørn Thomas
- Abstract
This master's thesis is about parameter estimation. We apply the method introduced in the article by Wood, S. N.; Statistical inference for noisy nonlinear ecological dynamic systems to an extended stochastic FitzHugh-Nagumo model, which is introduced in the paper by Jensen, A. C. et al.; Markov chain Monte Carlo approach to parameter estimation in the FitzHugh-Nagumo model. First we study the model, look at features of both the deterministic and the stochastic version, and consider examples of the two. In the next part of the thesis we study the estimation procedure described in the article by Wood. To this end we use the log synthetic likelihood, which we construct from statistics with very specific properties; we observe that the likelihood resembles that of a normal distribution if the statistics are approximately normally distributed. Choosing the statistics is an iterative process based fundamentally on trial and error. After studying the likelihood we introduce the Metropolis-Hastings method, an accept-reject algorithm. We establish the theoretical framework from which the method is developed and study the actual algorithm that we use to simulate data. Finally we combine the model and the method as we estimate the parameters of the model using three modifications of the method and data simulated from the model. The results indicate that using the Wood method is not a feasible way to estimate the parameters in the FitzHugh-Nagumo model. This conclusion was reached after having simulated almost a third of a trillion normally distributed variables over a period of 512 hours (21 days and 8 hours).
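As a toy illustration of the synthetic-likelihood-plus-Metropolis-Hastings recipe described in the abstract (not the FitzHugh-Nagumo model itself), the sketch below estimates the single parameter of an AR(1) process: simulate replicates at a candidate parameter, fit a Gaussian to their summary statistics, and use the resulting log synthetic likelihood inside a random-walk accept/reject step. The model, statistics and all numerical values are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, N=1, T=200):
    """Toy stand-in for the stochastic model: N replicate AR(1) paths."""
    x = np.zeros((N, T))
    eps = rng.normal(size=(N, T))
    for t in range(1, T):
        x[:, t] = theta * x[:, t - 1] + eps[:, t]
    return x

def stats(x):
    """Summary statistics per path (variance and lag-1 autocorrelation),
    chosen so they are roughly normal across replicates."""
    v = x.var(axis=1)
    xc = x - x.mean(axis=1, keepdims=True)
    ac = (xc[:, :-1] * xc[:, 1:]).mean(axis=1) / v
    return np.column_stack([v, ac])

def log_synth_lik(theta, s_obs, N=100):
    """Wood-style log synthetic likelihood: fit a Gaussian to the
    statistics of N simulated replicates and evaluate s_obs under it."""
    S = stats(simulate(theta, N=N))
    mu, cov = S.mean(axis=0), np.cov(S.T) + 1e-8 * np.eye(2)
    d = s_obs - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet)

# "Observed" statistics from a hypothetical true parameter of 0.6
s_obs = stats(simulate(0.6))[0]

# Random-walk Metropolis-Hastings on the parameter
theta, ll = 0.0, log_synth_lik(0.0, s_obs)
chain = []
for _ in range(400):
    prop = theta + 0.1 * rng.normal()
    if abs(prop) < 1:  # flat prior on the stationary region
        ll_prop = log_synth_lik(prop, s_obs)
        if np.log(rng.random()) < ll_prop - ll:
            theta, ll = prop, ll_prop
    chain.append(theta)

post_mean = np.mean(chain[200:])
print(post_mean)  # should settle in the vicinity of 0.6
```

Because the synthetic likelihood is re-simulated at every proposal, the acceptance step is noisy; that noise, multiplied by the cost of simulating each replicate, is what makes the approach so expensive for the FitzHugh-Nagumo model in the thesis.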
- Published
- 2015
23. Maintenance Therapy of Childhood ALL: Longitudinal Profiles of Blood Counts and Their Association
- Author
-
Ditlevsen, Susanne Dalager, Rosthøj, Susanne, and Jensen, Katrine Lykke
- Abstract
Acute lymphoblastic leukemia is the most common cancer among children aged 1 to 15 years. The treatment consists of three parts, of which maintenance therapy (the last part) is the longest. During maintenance therapy, doses of oral chemotherapy are adjusted in response to frequent measurements of the white blood cell count (WBC), the absolute neutrophil count (ANC) and the absolute lymphocyte count (ALC). Currently no general guidelines exist for how to adjust the doses optimally. In the Nordic countries, doses are adjusted according to WBC. However, WBC is not associated with the risk of relapse, whereas ANC has turned out to be; because of this finding, it is currently debated whether ANC should be used for dose adjustment instead of WBC. In this thesis, we use a large data set to examine the longitudinal profiles of WBC, ANC and ALC during maintenance therapy. The purpose is to see whether these blood counts share some of the same features and to determine their association. In the context of linear mixed models, we consider univariate models for each outcome to examine each of the profiles. To investigate the correlation and to compare profiles, the univariate mixed models are combined in a multivariate model. With several outcomes, the parameters of the multivariate model cannot be estimated directly because of the large number of random effects; instead, a pairwise modelling strategy based on pseudo-likelihoods is implemented. We find some differences between subgroups with respect to their longitudinal profiles. Furthermore, the profiles of the blood counts share some, but not all, features. In particular, we find that the processes do not run in parallel: ANC is constant during maintenance therapy, whereas WBC and ALC decrease. Moreover, we find that WBC and ANC are highly correlated, with a correlation of approximately 0.8, the correlation between WBC
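To make the pairwise idea concrete, here is a minimal method-of-moments sketch (not the REML/pseudo-likelihood machinery of the thesis): two outcomes, each following a balanced random-intercept model, with the correlation of the subject-level intercepts recovered from the univariate variance components and the covariance of the subject means. The simulated data and the true correlation of 0.8 are hypothetical, loosely echoing the WBC/ANC correlation reported in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_obs = 200, 10

# Hypothetical simulation: two outcomes per subject (think WBC and ANC),
# each a random-intercept model; the subject-level intercepts are
# correlated with true correlation 0.8.
true_corr = 0.8
cov_b = np.array([[1.0, true_corr], [true_corr, 1.0]])
b = rng.multivariate_normal([0.0, 0.0], cov_b, size=n_subj)  # random intercepts
y1 = b[:, [0]] + 0.5 * rng.normal(size=(n_subj, n_obs))  # outcome 1
y2 = b[:, [1]] + 0.5 * rng.normal(size=(n_subj, n_obs))  # outcome 2

def variance_components(y):
    """One-way ANOVA (method-of-moments) estimates for a balanced
    random-intercept model: between-subject and residual variance."""
    subj_means = y.mean(axis=1)
    sigma2_e = ((y - subj_means[:, None]) ** 2).sum() / (y.size - len(y))
    sigma2_b = subj_means.var(ddof=1) - sigma2_e / y.shape[1]
    return sigma2_b, sigma2_e

# Univariate fits, one per outcome
s2b1, s2e1 = variance_components(y1)
s2b2, s2e2 = variance_components(y2)

# Pairwise step: the covariance of the subject means estimates the
# covariance of the random intercepts (the noise terms are independent)
cov_means = np.cov(y1.mean(axis=1), y2.mean(axis=1))[0, 1]
corr_hat = cov_means / np.sqrt(s2b1 * s2b2)
print(round(corr_hat, 2))  # should be close to the true 0.8
```

The same pattern, fitting each pair of outcomes jointly and combining the shared parameters, is what the pairwise pseudo-likelihood strategy scales up to many outcomes and unbalanced longitudinal data.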
- Published
- 2015
24. Logistic regression analysis with missing values
- Author
-
Ditlevsen, Susanne Dalager, Gerds, Thomas Alexander, and Starkopf, Liis
- Abstract
The overall scope of the thesis is to estimate logistic regression parameters in the presence of missing covariate data. We consider data that are missing at random. There is a well-developed asymptotic theory which suggests using observed-data likelihood-based inference and weighted full-data estimating equations, where the weights are equal to the inverse of the probability of being observed. First, we describe the estimating equations of both methods. We show that for the maximum likelihood method we need to specify the conditional distribution of the partially observed covariates given the fully observed covariates in order to obtain consistent estimates. Similarly, distributional requirements have to be satisfied for the inverse probability weighted method, but in this case the distribution that needs to be correctly specified is the probability of missingness. We also introduce an augmented inverse probability weighted estimator, which is robust to misspecification in the sense that it only requires one of the two distributions mentioned to be correctly specified. We apply the methods to a real data set and carry out a small simulation study to compare the properties of the estimators.
- Published
- 2015