11 results for "Khue-Dung Dang"
Search Results
2. The Block-Poisson Estimator for Optimally Tuned Exact Subsampling MCMC.
- Author
- Matias Quiroz, Minh-Ngoc Tran, Mattias Villani, Robert Kohn, and Khue-Dung Dang
- Published
- 2021
3. A dose-response analysis of the effects of prenatal alcohol exposure on cognitive development.
- Author
- Jacobson, Joseph L., Akkaya-Hocagil, Tugba, Jacobson, Sandra W., Coles, Claire D., Richardson, Gale A., Olson, Heather Carmichael, Day, Nancy L., Carter, R. Colin, Dodge, Neil C., Dang, Khue-Dung, Cook, Richard J., and Ryan, Louise M.
- Subjects
READING, INFANT development, PRENATAL exposure delayed effects, COGNITIVE testing, SECONDARY analysis, MATHEMATICS, RESEARCH funding, MULTIPLE regression analysis, PROBABILITY theory, CHILD health services, EXECUTIVE function, PARENTING, LEARNING, DESCRIPTIVE statistics, DOSE-response relationship in biochemistry, FETAL alcohol syndrome, ACHIEVEMENT tests, ALCOHOL drinking, SUBSTANCE abuse in pregnancy, ALCOHOLISM, FACTOR analysis, MOTHERHOOD, PSYCHOLOGICAL tests, NONPARAMETRIC statistics, PREGNANCY
- Abstract
Background: Most studies of the effects of prenatal alcohol exposure (PAE) on cognitive function have assumed that the dose-response curve is linear. However, data from a few animal and human studies suggest that there may be an inflection point in the dose-response curve above which PAE effects are markedly stronger and that there may be differences associated with pattern of exposure, assessed in terms of alcohol dose per drinking occasion and drinking frequency. Methods: We performed second-order confirmatory factor analysis on data obtained at school age, adolescence, and early adulthood from 2227 participants in six US longitudinal cohorts to derive a composite measure of cognitive function. Regression models were constructed to examine effects of PAE on cognitive function, adjusted for propensity scores. Analyses based on a single predictor (absolute alcohol (AA)/day) were compared with analyses based on two predictors (dose/occasion and drinking frequency), using (1) linear models and (2) nonparametric general additive models (GAM) that allow for both linear and nonlinear effects. Results: The single-predictor GAM model showed virtually no nonlinearity in the effect of AA/day on cognitive function. However, the two-predictor GAM model revealed differential effects of maternal drinking pattern. Among offspring of infrequent drinkers, PAE effects on cognitive function were markedly stronger in those whose mothers drank more than ~3 drinks/occasion, and the effect of dose/occasion was strongest among the very frequent drinkers. Frequency of drinking did not appear to alter the PAE effect on cognitive function among participants born to mothers who limited their drinking to ~1 drink/occasion or less. Conclusions: These findings suggest that linear models based on total AA/day are appropriate for assessing whether PAE affects a given cognitive outcome. 
However, examination of alcohol dose/occasion and drinking frequency is needed to fully characterize the impact of different levels of alcohol intake on cognitive impairment.
- Published
- 2024
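The modeling strategy this abstract describes, comparing a single linear dose effect against a model that allows the slope to change above a threshold, can be illustrated on synthetic data. Everything below (the ~3-drink threshold, the coefficients, the noise level) is invented for illustration and is not the study's data or code; a hinge regression stands in for the smooth terms a GAM would estimate.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic data only: outcome declines gently with dose, then much faster
# above a hypothetical threshold of 3 drinks per occasion.
dose = rng.uniform(0.0, 8.0, size=1000)
outcome = (-0.1 * dose - 0.5 * np.maximum(dose - 3.0, 0.0)
           + rng.normal(0.0, 0.3, size=1000))

# Linear model: a single slope in dose (the "total AA/day" style analysis).
X_lin = np.column_stack([np.ones_like(dose), dose])
beta_lin, *_ = np.linalg.lstsq(X_lin, outcome, rcond=None)
rss_lin = ((outcome - X_lin @ beta_lin) ** 2).sum()

# Hinge ("broken-stick") model: an extra slope above the threshold, the
# simplest stand-in for the smooth nonlinear terms a GAM would estimate.
X_hinge = np.column_stack(
    [np.ones_like(dose), dose, np.maximum(dose - 3.0, 0.0)])
beta_hinge, *_ = np.linalg.lstsq(X_hinge, outcome, rcond=None)
rss_hinge = ((outcome - X_hinge @ beta_hinge) ** 2).sum()

print(rss_lin, rss_hinge)  # the hinge model fits the threshold pattern better
```

The linear fit averages the two regimes into one slope, while the hinge fit recovers the steeper decline above the threshold, mirroring the abstract's point that a two-regime pattern is invisible to a purely linear analysis.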
4. Subsampling sequential Monte Carlo for static Bayesian models.
- Author
- David Gunawan, Khue-Dung Dang, Matias Quiroz, Robert Kohn, and Minh-Ngoc Tran
- Published
- 2020
5. Hamiltonian Monte Carlo with Energy Conserving Subsampling.
- Author
- Khue-Dung Dang, Matias Quiroz, Robert Kohn, Minh-Ngoc Tran, and Mattias Villani
- Published
- 2019
6. Bayesian outcome selection modeling
- Author
- Khue-Dung Dang, Louise M. Ryan, Richard J. Cook, Tugba Akkaya Hocagil, Sandra W. Jacobson, and Joseph L. Jacobson
- Subjects
Statistics and Probability; Statistics, Probability and Uncertainty
- Published
- 2023
7. Subsampling sequential Monte Carlo for static Bayesian models
- Author
- Khue-Dung Dang, Minh-Ngoc Tran, David Gunawan, Robert Kohn, and Matias Quiroz
- Subjects
Statistics and Probability; Markov kernel; Computer science; Bayesian probability; Posterior probability; Bayesian inference; Hybrid Monte Carlo; Theoretical Computer Science; Computational Theory and Mathematics; Resampling; Kernel (statistics); Statistics, Probability and Uncertainty; Particle filter; Algorithm
- Abstract
We show how to speed up sequential Monte Carlo (SMC) for Bayesian inference in large data problems by data subsampling. SMC sequentially updates a cloud of particles through a sequence of distributions, beginning with a distribution that is easy to sample from such as the prior and ending with the posterior distribution. Each update of the particle cloud consists of three steps: reweighting, resampling, and moving. In the move step, each particle is moved using a Markov kernel; this is typically the most computationally expensive part, particularly when the dataset is large. It is crucial to have an efficient move step to ensure particle diversity. Our article makes two important contributions. First, in order to speed up the SMC computation, we use an approximately unbiased and efficient annealed likelihood estimator based on data subsampling. The subsampling approach is more memory efficient than the corresponding full data SMC, which is an advantage for parallel computation. Second, we use a Metropolis within Gibbs kernel with two conditional updates. A Hamiltonian Monte Carlo update makes distant moves for the model parameters, and a block pseudo-marginal proposal is used for the particles corresponding to the auxiliary variables for the data subsampling. We demonstrate both the usefulness and limitations of the methodology for estimating four generalized linear models and a generalized additive model with large datasets.
- Published
- 2020
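The reweight/resample/move recursion described in this abstract can be sketched for a toy model (normal data, normal prior) with temperature annealing from prior to posterior. The model, schedule, and random-walk move step below are illustrative assumptions, not the paper's algorithm, which uses data subsampling and Hamiltonian moves in the move step.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(2.0, 1.0, size=100)  # data; the prior on theta is N(0, 9)

def log_like(theta):
    # Vectorized log-likelihood over an array of particles.
    return -0.5 * ((y[None, :] - theta[:, None]) ** 2).sum(axis=1)

n = 1000
particles = rng.normal(0.0, 3.0, size=n)  # initial cloud drawn from the prior
temps = np.linspace(0.0, 1.0, 21)         # annealing schedule: prior -> posterior

for t_prev, t in zip(temps[:-1], temps[1:]):
    # 1. Reweight: the incremental weight is the tempered likelihood ratio.
    logw = (t - t_prev) * log_like(particles)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # 2. Resample (multinomial) to equalize the weights.
    particles = particles[rng.choice(n, size=n, p=w)]
    # 3. Move: one random-walk Metropolis step targeting the tempered posterior,
    #    restoring particle diversity after resampling.
    def log_post(th):
        return t * log_like(th) - 0.5 * th ** 2 / 9.0
    prop = particles + rng.normal(0.0, 0.3, size=n)
    accept = np.log(rng.uniform(size=n)) < log_post(prop) - log_post(particles)
    particles = np.where(accept, prop, particles)

print(particles.mean())  # close to the analytic posterior mean
```

The move step dominates the cost because it evaluates the likelihood on all the data for every particle, which is exactly why the paper replaces it with a subsampled estimator.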
8. Fitting Structural Equation Models via Variational Approximations
- Author
- Khue-Dung Dang and Luca Maestrini
- Subjects
Statistics - Methodology (stat.ME); Statistics - Computation (stat.CO); Sociology and Political Science; Modeling and Simulation; General Decision Sciences; General Economics, Econometrics and Finance
- Abstract
Structural equation models are commonly used to capture the relationship between sets of observed and unobservable variables. Traditionally these models are fitted using frequentist approaches, but recently researchers and practitioners have developed increasing interest in Bayesian inference. In Bayesian settings, inference for these models is typically performed via Markov chain Monte Carlo methods, which may be computationally intensive for models with a large number of manifest variables or complex structures. Variational approximations can be a fast alternative; however, they have not been adequately explored for this class of models. We develop a mean field variational Bayes approach for fitting elemental structural equation models and demonstrate how the bootstrap can considerably improve the quality of the variational approximation. We show that this variational approximation method can provide reliable inference while being significantly faster than Markov chain Monte Carlo.
- Published
- 2021
9. Subsampling MCMC - an Introduction for the Survey Statistician
- Author
- Matias Quiroz, Robert Kohn, Minh-Ngoc Tran, Khue-Dung Dang, and Mattias Villani
- Subjects
Statistics and Probability; Computer science; Inference; Survey sampling; Machine learning; Estimator; Sampling (statistics); Markov chain Monte Carlo; Bayesian statistics; Scalability; Statistics, Probability and Uncertainty; Statistician
- Abstract
The rapid development of computing power and efficient Markov chain Monte Carlo (MCMC) simulation algorithms have revolutionized Bayesian statistics, making it a highly practical inference method in applied work. However, MCMC algorithms tend to be computationally demanding and are particularly slow for large datasets. Data subsampling has recently been suggested as a way to make MCMC methods scalable to very large datasets, utilizing efficient sampling schemes and estimators from the survey sampling literature. These developments tend to be unknown to many survey statisticians, who traditionally work with non-Bayesian methods and rarely use MCMC. Our article explains the idea of data subsampling in MCMC by reviewing one strand of work, Subsampling MCMC, a so-called pseudo-marginal MCMC approach to speeding up MCMC through data subsampling. The review is written for a survey statistician without previous knowledge of MCMC methods, since our aim is to motivate survey sampling experts to contribute to the growing Subsampling MCMC literature.
- Published
- 2018
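The pseudo-marginal mechanism this review describes, running Metropolis-Hastings on an unbiased estimate of the likelihood and recycling the current estimate between iterations, can be sketched in a few lines. The noisy estimator below is a toy stand-in (mean-corrected Gaussian noise on the log-likelihood, which makes the implied likelihood estimate exactly unbiased) rather than a genuine subsampling estimator, and the model and tuning values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(1.5, 1.0, size=200)  # y_i ~ N(theta, 1), flat prior on theta

def log_like(theta):
    return -0.5 * np.sum((y - theta) ** 2)

def noisy_log_like(theta, sigma2=0.25):
    # exp(noisy_log_like) is an unbiased estimator of the likelihood because
    # the Gaussian noise is mean-corrected by -sigma2 / 2.
    return log_like(theta) + rng.normal(-0.5 * sigma2, np.sqrt(sigma2))

def pm_mh(n_iter=5000, step=0.1):
    theta, ll = 0.0, noisy_log_like(0.0)
    draws = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + rng.normal(0.0, step)
        ll_prop = noisy_log_like(prop)
        # The current estimate ll is recycled, never recomputed: this is what
        # makes the chain target the exact posterior despite the noise.
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        draws[i] = theta
    return draws

draws = pm_mh()
print(draws[1000:].mean())  # close to the posterior mean, i.e. near ybar
```

Despite never evaluating the exact likelihood inside the loop, the chain's stationary distribution is the exact posterior, which is the "exact approximate" property that makes subsampled likelihood estimators usable in MCMC.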
10. The block-Poisson estimator for optimally tuned exact subsampling MCMC
- Author
- Matias Quiroz, Khue-Dung Dang, Mattias Villani, Minh-Ngoc Tran, and Robert Kohn
- Subjects
Statistics and Probability; Statistics - Machine Learning (stat.ML); Statistics - Computation (stat.CO); Statistics - Methodology (stat.ME); Poisson distribution; Control variates; Bayesian inference; Estimator; Markov chain Monte Carlo; Discrete Mathematics and Combinatorics; Statistics, Probability and Uncertainty; Algorithm
- Abstract
Speeding up Markov Chain Monte Carlo (MCMC) for datasets with many observations by data subsampling has recently received considerable attention. A pseudo-marginal MCMC method is proposed that estimates the likelihood by data subsampling using a block-Poisson estimator. The estimator is a product of Poisson estimators, allowing us to update a single block of subsample indicators in each MCMC iteration so that a desired correlation is achieved between the logs of successive likelihood estimates. This is important since pseudo-marginal MCMC with positively correlated likelihood estimates can use substantially smaller subsamples without adversely affecting the sampling efficiency. The block-Poisson estimator is unbiased but not necessarily positive, so the algorithm runs the MCMC on the absolute value of the likelihood estimator and uses an importance sampling correction to obtain consistent estimates of the posterior mean of any function of the parameters. Our article derives guidelines to select the optimal tuning parameters for our method and shows that it compares very favourably to regular MCMC without subsampling, and to two other recently proposed exact subsampling approaches in the literature.
- Published
- 2016
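The single-block Poisson estimator underlying the paper's construction can be sketched in a few lines. Here A_hat is a toy unbiased estimator of a known constant A, standing in for the subsample-based log-likelihood estimator, and the tuning constants a and lam are chosen arbitrarily; the sketch also exhibits the "unbiased but not necessarily positive" behaviour the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(2)

A = 1.3  # the quantity whose exponential we want; only noisy estimates are used
def A_hat():
    # Toy unbiased estimator of A, standing in for a subsample-based
    # estimator of the log-likelihood.
    return A + rng.normal(0.0, 0.5)

def poisson_estimator(a=1.0, lam=2.0):
    # Draw J ~ Poisson(lam); the product over J independent shifted estimates
    # satisfies E[estimator] = exp(A) exactly, for any a and lam > 0.
    J = rng.poisson(lam)
    return np.exp(a + lam) * np.prod([(A_hat() - a) / lam for _ in range(J)])

draws = np.array([poisson_estimator() for _ in range(100000)])
print(draws.mean())        # converges to exp(1.3)
print((draws < 0).mean())  # unbiased, but a fraction of draws is negative
```

The negative draws are why the paper runs MCMC on the absolute value of the estimator and applies an importance sampling sign correction; the block version replaces the independent factors with blocks that are only partially refreshed per iteration, inducing the desired correlation between successive estimates.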
11. Efficient Hamiltonian Monte Carlo for large data sets by data subsampling.
- Author
- Doan Khue Dung Dang
- Subjects
BIG data, MARKOV chain Monte Carlo
- Published
- 2019
Discovery Service for Jio Institute Digital Library