6 results for "Ajay Jasra"
Search Results
2. Unbiased Inference for Discretely Observed Hidden Markov Model Diffusions
- Author
- Jordan Franks, Neil K. Chada, Ajay Jasra, Kody J. H. Law, and Matti Vihola
- Subjects
Statistics and probability; Bayesian inference; hidden Markov models; diffusions; discretisation; inference; sequential Monte Carlo; Markov chain Monte Carlo; importance sampling; multilevel Monte Carlo; particle filters; MSC 65C05 (primary), 60H35, 65C35, 65C40 (secondary)
We develop a Bayesian inference method for diffusions observed discretely and with noise, which is free of discretisation bias. Unlike existing unbiased inference methods, our method does not rely on exact simulation techniques. Instead, it uses standard time-discretised approximations of diffusions, such as the Euler–Maruyama scheme. Our approach is based on particle marginal Metropolis–Hastings, a particle filter, randomised multilevel Monte Carlo, and an importance-sampling-type correction of approximate Markov chain Monte Carlo. The resulting estimator leads to inference without time-discretisation bias as the number of Markov chain iterations increases. We give convergence results and recommend allocations for algorithm inputs. Our method admits straightforward parallelisation and can be computationally efficient. The user-friendly approach is illustrated on three examples, where the underlying diffusion is an Ornstein–Uhlenbeck process, a geometric Brownian motion, and a 2D non-reversible Langevin equation. (33 pages, 5 figures.)
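The debiasing idea in this abstract, randomised multilevel Monte Carlo over Euler–Maruyama discretisation levels, can be sketched on the simplest example the paper mentions, an Ornstein–Uhlenbeck process. The code below is a generic single-term randomised-MLMC estimator of the mean of the diffusion at time T, not the authors' full particle-MCMC algorithm; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ornstein–Uhlenbeck SDE dX_t = -theta*X_t dt + sigma dW_t;
# the parameters below are illustrative choices, not from the paper.
theta, sigma, x0, T = 1.0, 0.5, 1.0, 1.0

def euler_coupled(level):
    """Euler–Maruyama endpoint at discretisation level `level` (step T/2^level),
    coupled to level-1 by driving both paths with the same Brownian increments."""
    n = 2 ** level
    h = T / n
    dW = rng.normal(0.0, np.sqrt(h), n)
    xf = x0
    for k in range(n):
        xf += -theta * xf * h + sigma * dW[k]
    if level == 0:
        return xf, 0.0                      # no coarser level below 0
    xc = x0
    for k in range(n // 2):                 # coarse path sums paired fine increments
        xc += -theta * xc * (2 * h) + sigma * (dW[2 * k] + dW[2 * k + 1])
    return xf, xc

def single_term_estimate():
    """One draw of the single-term randomised-MLMC estimator: sample a level L
    from a geometric law and return the coupled difference divided by P(L)."""
    p = 1.0 - 2.0 ** -1.5                   # decay between variance-decay and cost-growth rates
    L = rng.geometric(p) - 1                # L = 0, 1, 2, ... with P(L=l) = p*(1-p)**l
    xf, xc = euler_coupled(L)
    return (xf - xc) / (p * (1.0 - p) ** L)

est = np.mean([single_term_estimate() for _ in range(20000)])
print(est, x0 * np.exp(-theta * T))         # estimate vs exact OU mean
```

Sampling the level at random and dividing by its probability makes the estimator unbiased for the limit of the discretised means, so no time-discretisation bias remains; the geometric decay rate is chosen so that both the expected cost and the variance stay finite.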
- Published
- 2021
3. Multilevel Sequential Monte Carlo with Dimension-Independent Likelihood-Informed Proposals
- Author
- Youssef M. Marzouk, Yan Zhou, Alexandros Beskos, Ajay Jasra, and Kody J. H. Law
- Subjects
Statistics and probability; Bayesian inverse problems; mathematical optimization; Monte Carlo methods; sequential Monte Carlo; hybrid Monte Carlo; uncertainty quantification; multilevel Monte Carlo; particle filters
In this article we develop a new sequential Monte Carlo method for multilevel Monte Carlo estimation. In particular, the method can be used to estimate expectations with respect to a target probability distribution over an infinite-dimensional and noncompact space, as produced, for example, by a Bayesian inverse problem with a Gaussian random field prior. Under suitable assumptions the MLSMC method has the optimal $\mathcal{O}(\varepsilon^{-2})$ bound on the cost to obtain a mean-square error of $\mathcal{O}(\varepsilon^2)$. The algorithm is accelerated by dimension-independent likelihood-informed proposals [T. Cui, K. J. Law, and Y. M. Marzouk, J. Comput. Phys., 304 (2016), pp. 109–137] designed for Gaussian priors, leveraging a novel variation which uses empirical covariance information in lieu of Hessian information, hence eliminating the requirement for gradient evaluations. The efficiency of the algorithm is illustrated on two examples: (i) inversion of noisy pressure measurements in a PDE model of Darcy flow to recover the posterior distribution of the permeability field and (ii) inversion of noisy measurements of the solution of an SDE to recover the posterior path measure.
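The "empirical covariance information in lieu of Hessian information" idea can be illustrated with a generic Gaussian autoregressive (pCN-style) proposal built from a particle cloud. This is a loose sketch on assumed toy data, not the paper's exact likelihood-informed construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_proposal(particles, beta=0.5):
    """Build a Gaussian autoregressive proposal whose covariance is the
    empirical covariance of the current particle cloud, standing in for
    Hessian/likelihood information (a generic sketch, not the paper's
    exact construction)."""
    m = particles.mean(axis=0)
    C = np.cov(particles, rowvar=False) + 1e-8 * np.eye(particles.shape[1])
    L = np.linalg.cholesky(C)
    def propose(x):
        xi = rng.standard_normal(x.shape)
        # AR(1) move: leaves N(m, C) invariant for any beta in (0, 1]
        return m + np.sqrt(1.0 - beta ** 2) * (x - m) + beta * (L @ xi)
    return propose

# assumed toy cloud: samples from N([1, 2], diag(0.2, 0.5))
cloud = rng.standard_normal((5000, 2)) * np.sqrt([0.2, 0.5]) + [1.0, 2.0]
propose = empirical_proposal(cloud, beta=0.3)
print(propose(cloud[0]))
```

Because the proposal is autoregressive towards the empirical mean with the empirical covariance as its noise covariance, it adapts to the scale of each coordinate without any gradient evaluations, which is the design point the abstract highlights.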
- Published
- 2018
4. Particle Filtering for Stochastic Navier–Stokes Signal Observed with Linear Additive Noise
- Author
- Alexandros Beskos, Francesc Pons Llopis, Ajay Jasra, and Nikolas Kantas
- Subjects
Data assimilation; particle filters; Markov chain Monte Carlo; preconditioned Crank–Nicolson MCMC; stochastic Navier–Stokes equations; stochastic filtering; inverse problems; numerical and computational mathematics
We consider a non-linear filtering problem, whereby the signal obeys the stochastic Navier–Stokes equations and is observed through a linear mapping with additive noise. The setup is relevant to data assimilation for numerical weather prediction and climate modelling, where similar models are used for unknown ocean or wind velocities. We present a particle filtering methodology that uses likelihood-informed importance proposals, adaptive tempering, and a small number of appropriate Markov chain Monte Carlo steps. We provide a detailed design for each of these steps and show in our numerical examples that they are all crucial in terms of achieving good performance and efficiency.
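The adaptive tempering step mentioned in the abstract is commonly implemented by choosing the next temperature so that the effective sample size (ESS) of the incremental weights hits a target. A minimal sketch follows, using synthetic per-particle log-likelihood values (an assumption, not the paper's Navier–Stokes model):

```python
import numpy as np

def next_temperature(loglik, phi_old, ess_target, tol=1e-6):
    """Find the next tempering exponent phi in (phi_old, 1] by bisection so the
    effective sample size of weights w_i proportional to
    exp((phi - phi_old) * loglik_i) hits ess_target."""
    def ess(phi):
        lw = (phi - phi_old) * loglik
        lw = lw - lw.max()             # stabilise before exponentiating
        w = np.exp(lw)
        w = w / w.sum()
        return 1.0 / np.sum(w ** 2)
    if ess(1.0) >= ess_target:         # can jump straight to the full posterior
        return 1.0
    lo, hi = phi_old, 1.0              # invariant: ess(lo) >= target > ess(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ess(mid) >= ess_target:
            lo = mid
        else:
            hi = mid
    return lo

rng = np.random.default_rng(3)
ll = rng.normal(-50.0, 5.0, size=1000)     # synthetic per-particle log-likelihoods
phi = next_temperature(ll, phi_old=0.0, ess_target=500.0)
print(phi)
```

Keeping the ESS at a fixed fraction of the particle count is what makes the tempering schedule adapt to how informative the observation is, instead of using a hand-tuned ladder of temperatures.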
- Published
- 2018
5. On Large Lag Smoothing for Hidden Markov Models
- Author
- Jeremie Houssineau, Ajay Jasra, and Sumeetpal S. Singh
- Subjects
Numerical analysis; hidden Markov models; smoothing; lag; optimal transport; multilevel Monte Carlo
In this article we consider the smoothing problem for hidden Markov models. Given a hidden Markov chain $\{X_n\}_{n\geq 0}$ and observations $\{Y_n\}_{n\geq 0}$, our objective is to compute $\mathbb{E}[\varphi(X_0,\dots,X_k)|y_{0},\dots,y_n]$ for some real-valued, integrable functional $\varphi$, for fixed $k \ll n$, and for some realization $(y_0,\dots,y_n)$ of $(Y_0,\dots,Y_n)$. We introduce a novel application of the multilevel Monte Carlo method with a coupling based on the Knothe–Rosenblatt rearrangement. We prove that this method can approximate the aforementioned quantity with a mean square error (MSE) of $\mathcal{O}(\epsilon^2)$ for arbitrary $\epsilon>0$ at a cost of $\mathcal{O}(\epsilon^{-2})$. This is in contrast to a direct Monte Carlo method, which requires a cost of $\mathcal{O}(n\epsilon^{-2})$ for the same MSE. The approach we suggest is, in general, not possible to implement exactly, so the optimal transport methodology of [A. Spantini, D. Bigoni, and Y. Marzouk, J. Mach. Learn. Res., 19 (2018), pp. 2639–2709; M. Parno, T. Moselhy, and Y. Marzouk, SIAM/ASA J. Uncertain. Quantif., 4 (2016), pp. 1160–1190] is used, which directly approximates our strategy. We show that our theoretical improvements are achieved, even under approximation, in several numerical examples.
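In one dimension the Knothe–Rosenblatt rearrangement reduces to the quantile (inverse-CDF) coupling, which is enough to see why such couplings help multilevel Monte Carlo: the coupled difference between two nearby laws has far smaller variance than an independent difference. A toy illustration with two exponential laws (an assumed stand-in, not the paper's smoothing distributions):

```python
import numpy as np

rng = np.random.default_rng(7)

def exp_quantile(u, rate):
    """Inverse CDF of Exp(rate); in one dimension the Knothe–Rosenblatt map
    is exactly this quantile transform."""
    return -np.log1p(-u) / rate

n = 100_000
u = rng.random(n)
x = exp_quantile(u, 1.0)                   # "coarse" law
y = exp_quantile(u, 1.1)                   # "fine" law, driven by the SAME uniforms
y_ind = exp_quantile(rng.random(n), 1.1)   # independent draws for comparison

# coupled differences concentrate near zero; independent ones do not,
# which is what makes the multilevel telescoping sum cheap to estimate
print(np.var(x - y), np.var(x - y_ind))
```

The multilevel estimator only ever averages such differences between adjacent approximation levels, so the tighter the coupling, the fewer samples each level needs.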
- Published
- 2019
6. Sequential Monte Carlo Methods for High-Dimensional Inverse Problems: A Case Study for the Navier–Stokes Equations
- Author
- Ajay Jasra, Nikolas Kantas, and Alexandros Beskos
- Subjects
Statistics and probability; sequential Monte Carlo methods; data assimilation; inverse problems; initial value problems; partial differential equations; Navier–Stokes equations; numerical analysis; particle filters
We consider the inverse problem of estimating the initial condition of a partial differential equation which is observed only through noisy measurements at discrete time intervals. In particular, we focus on the case where Eulerian measurements are obtained from the time- and space-evolving vector field, whose evolution obeys the two-dimensional Navier–Stokes equations defined on a torus. This context is particularly relevant to numerical weather forecasting and data assimilation. We adopt a Bayesian formulation resulting from a particular regularization that ensures the problem is well posed. In the context of Monte Carlo based inference, it is a challenging task to obtain samples from the resulting high-dimensional posterior on the initial condition. In real data assimilation applications it is common for computational methods to invoke heuristics and Gaussian approximations; the resulting inferences are biased and not well justified in the presence of non-linear dynamics and observations. On the other hand, Monte Carlo methods can be used to assimilate data in a principled manner, but are often perceived as inefficient in this context due to the high dimensionality of the problem. In this work we propose a generic sequential Monte Carlo (SMC) sampling approach for high-dimensional inverse problems that overcomes these difficulties. The method builds upon Markov chain Monte Carlo (MCMC) techniques, which are currently considered as benchmarks for evaluating data assimilation algorithms used in practice. In our numerical examples, the proposed SMC approach achieves the same accuracy as MCMC but in a much more efficient manner. (31 pages, 14 figures.)
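A minimal SMC sampler in the spirit of the abstract, tempering from the prior to the posterior with resampling and an MCMC rejuvenation move, can be sketched on a toy one-dimensional Gaussian problem (all choices below are illustrative assumptions, far from the Navier–Stokes setting):

```python
import numpy as np

rng = np.random.default_rng(11)

# toy conjugate model: prior N(0, 1), one observation y = 1 with noise sd 0.5
y, obs_sd = 1.0, 0.5
def loglik(x):
    return -0.5 * ((y - x) / obs_sd) ** 2

n = 2000
x = rng.standard_normal(n)                 # particles start from the prior
phis = np.linspace(0.0, 1.0, 11)           # fixed tempering ladder for simplicity
for phi_prev, phi in zip(phis[:-1], phis[1:]):
    lw = (phi - phi_prev) * loglik(x)      # incremental importance weights
    w = np.exp(lw - lw.max())
    w = w / w.sum()
    idx = rng.choice(n, size=n, p=w)       # multinomial resampling
    x = x[idx]
    # one random-walk Metropolis move targeting prior * likelihood^phi
    prop = x + 0.5 * rng.standard_normal(n)
    log_acc = (phi * loglik(prop) - 0.5 * prop ** 2) \
        - (phi * loglik(x) - 0.5 * x ** 2)
    accept = np.log(rng.random(n)) < log_acc
    x = np.where(accept, prop, x)

# analytic posterior for this conjugate toy model: mean 0.8, variance 0.2
print(x.mean(), x.var())
```

The reweight/resample/move cycle is the generic SMC sampler structure; the paper's contribution is making each ingredient work when the state is a high-dimensional discretised PDE field rather than a scalar.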
- Published
- 2014
Discovery Service for Jio Institute Digital Library