70 results on "Michael Ludkovski"
Search Results
2. Adaptive batching for Gaussian process surrogates with application in noisy level set estimation
- Author
-
Xiong Lyu and Michael Ludkovski
- Subjects
Mathematical optimization, Operations research, Computer science, Design of experiments, Mathematical finance, Computer Science Applications, Metamodeling, Sequential analysis, Stochastic simulation, Overhead (computing), Heuristics, Gaussian process, Information Systems - Abstract
We develop adaptive replicated designs for Gaussian process metamodels of stochastic experiments. Adaptive batching is a natural extension of sequential design heuristics, with the benefit of replication growing as response features are learned, inputs concentrate, and the metamodeling overhead rises. Motivated by the problem of learning the level set of the mean simulator response, we develop five novel schemes: Multi-Level Batching (MLB), Ratchet Batching (RB), Adaptive Batched Stepwise Uncertainty Reduction (ABSUR), Adaptive Design with Stepwise Allocation (ADSA), and Deterministic Design with Stepwise Allocation (DDSA). Our algorithms determine the sequential design inputs and the respective numbers of replicates either simultaneously (MLB, RB, and ABSUR) or sequentially (ADSA and DDSA). Illustrations using synthetic examples and an application in quantitative finance (Bermudan option pricing via Regression Monte Carlo) show that adaptive batching brings significant computational speed-ups with minimal loss of modeling fidelity.
- Published
- 2021
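As a rough companion to the adaptive batching entry above, the Python sketch below grows the replicate count per new design point across rounds while a Gaussian process surrogate targets a zero level set. The simulator, kernel, acquisition score and batching schedule are invented stand-ins for illustration, not the paper's MLB/RB/ABSUR/ADSA/DDSA algorithms.

```python
# Hypothetical toy example: sequential level-set design with a growing replicate budget.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
threshold, noise_sd = 0.0, 0.5

def noisy_sim(x, n_rep):
    """Stochastic simulator: average of n_rep noisy evaluations at input x."""
    return float(np.mean(np.sin(6 * x) + noise_sd * rng.standard_normal(n_rep)))

X_cand = np.linspace(0, 1, 201)[:, None]              # candidate inputs
X, reps = [0.1, 0.5, 0.9], [5, 5, 5]                  # initial design with 5 replicates each
y = [noisy_sim(x, r) for x, r in zip(X, reps)]

for round_ in range(10):
    # noise variance of an averaged response shrinks with its replicate count
    gp = GaussianProcessRegressor(RBF(0.2), alpha=noise_sd**2 / np.array(reps), normalize_y=True)
    gp.fit(np.array(X)[:, None], np.array(y))
    mu, sd = gp.predict(X_cand, return_std=True)
    x_new = float(X_cand[np.argmax(sd - np.abs(mu - threshold)), 0])  # crude contour-uncertainty score
    n_rep = 5 * (1 + round_ // 3)                      # batch size ratchets up as the design matures
    X.append(x_new); reps.append(n_rep); y.append(noisy_sim(x_new, n_rep))

print("final design size:", len(X), "| total simulator calls:", sum(reps))
```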
3. Multi-output Gaussian processes for multi-population longevity modelling
- Author
-
Nhan Huynh and Michael Ludkovski
- Subjects
Statistics and Probability, Economics and Econometrics, Covariance function, Stochastic modelling, Computer science, Bayesian probability, Covariance, Covariate, Econometrics, Leverage (statistics), Statistics, Probability and Uncertainty, Uncertainty quantification, Gaussian process - Abstract
We investigate joint modelling of longevity trends using the spatial statistical framework of Gaussian process (GP) regression. Our analysis is motivated by the Human Mortality Database (HMD) that provides unified raw mortality tables for nearly 40 countries. Yet few stochastic models exist for handling more than two populations at a time. To bridge this gap, we leverage a spatial covariance framework from machine learning that treats populations as distinct levels of a factor covariate, explicitly capturing the cross-population dependence. The proposed multi-output GP models straightforwardly scale up to a dozen populations and moreover intrinsically generate coherent joint longevity scenarios. In our numerous case studies, we investigate predictive gains from aggregating mortality experience across nations and genders, including by borrowing the most recently available “foreign” data. We show that in our approach, information fusion leads to more precise (and statistically more credible) forecasts. We implement our models in R, as well as a Bayesian version in Stan that provides further uncertainty quantification regarding the estimated mortality covariance structure. All examples utilise public HMD datasets.
- Published
- 2021
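To make the factor-covariate idea above concrete, here is a bare-bones intrinsic-coregionalization sketch in Python: two populations share an RBF kernel over calendar year and are coupled through a cross-population matrix B. All values (B, the trend, the noise level) are toy assumptions rather than fitted HMD quantities, and the actual models also take age as an input.

```python
# Toy multi-output GP: populations enter as a factor covariate coupled by a matrix B.
import numpy as np

def kernel(t1, i1, t2, i2, B, ls=8.0):
    """k((t, i), (s, j)) = B[i, j] * exp(-(t - s)^2 / (2 * ls^2))."""
    return B[i1, i2] * np.exp(-0.5 * ((t1 - t2) / ls) ** 2)

B = np.array([[1.0, 0.8], [0.8, 1.0]])                 # assumed cross-population covariance
years = np.arange(1990, 2020)
T = np.concatenate([years, years]).astype(float)       # calendar-year inputs
I = np.repeat([0, 1], len(years))                      # population factor (0 or 1)
trend = lambda t, i: -4.0 - 0.015 * (t - 1990) + 0.02 * i
y = trend(T, I) + 0.01 * np.random.default_rng(1).standard_normal(T.size)   # toy log-mortality data

n = T.size
K = np.array([[kernel(T[a], I[a], T[b], I[b], B) for b in range(n)] for a in range(n)])
weights = np.linalg.solve(K + 1e-3 * np.eye(n), y)     # observation noise / jitter on the diagonal

t_star = 2025.0                                        # coherent joint forecast for both populations
k_star = np.array([[kernel(t_star, j, T[b], I[b], B) for b in range(n)] for j in (0, 1)])
print("2025 log-mortality forecasts (pop 0, pop 1):", k_star @ weights)
```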
4. A Machine Learning Approach to Adaptive Robust Utility Maximization and Hedging
- Author
-
Michael Ludkovski and Tao Chen
- Subjects
Numerical Analysis, Mathematical optimization, Computer science, Applied Mathematics, Utility maximization, Computational finance, Optimization and control, Volatility (finance), Portfolio optimization, Robust control, Finance - Abstract
We investigate the adaptive robust control framework for portfolio optimization and loss-based hedging under drift and volatility uncertainty. Adaptive robust problems offer many advantages but require handling a double optimization problem (infimum over market measures, supremum over the control) at each instance. Moreover, the underlying Bellman equations are intrinsically multi-dimensional. We propose a novel machine learning approach that solves for the local saddle-point at a chosen set of inputs and then uses a nonparametric (Gaussian process) regression to obtain a functional representation of the value function. Our algorithm resembles control randomization and regression Monte Carlo techniques but also brings multiple innovations, including adaptive experimental design, separate surrogates for optimal control and the local worst-case measure, and computational speed-ups for the sup-inf optimization. Thanks to the new scheme we are able to consider settings that have been previously computationally intractable and provide several new financial insights about learning and optimal trading under unknown market parameters. In particular, we demonstrate the financial advantages of the adaptive robust framework compared to adaptive and static robust alternatives.
- Published
- 2021
5. Probabilistic bisection with spatial metamodels
- Author
-
Sergio Rodriguez and Michael Ludkovski
- Subjects
Operations Research, Polynomial, Information Systems and Management, General Computer Science, Computer science, Bayesian probability, Management Science and Operations Research, Logistic regression, Industrial and Manufacturing Engineering, Oracle, Simulation metamodeling, Leverage (statistics), Uncertainty quantification, Gaussian process, Probabilistic logic, Sampling (statistics), Spline (mathematics), Stochastic root-finding, Valuation of options, Modeling and Simulation, Root-finding algorithm, Algorithm, Simulation - Abstract
The Probabilistic Bisection Algorithm (PBA) performs root finding based on knowledge acquired from noisy oracle responses. We consider the generalized PBA setting (G-PBA) where the statistical distribution of the oracle is unknown and location-dependent, so that model inference and Bayesian knowledge updating must be performed simultaneously. To this end, we propose to leverage the spatial structure of a typical oracle by constructing a statistical surrogate for the underlying logistic regression step. We investigate several non-parametric surrogates, including Binomial Gaussian Processes (B-GP), Polynomial, Kernel, and Spline Logistic Regression. In parallel, we develop sampling policies that adaptively balance learning the oracle distribution and learning the root. One of our proposals mimics active learning with B-GPs and provides a novel look-ahead predictive variance formula. The resulting gains of our Spatial PBA algorithm relative to earlier G-PBA models are illustrated with synthetic examples and a challenging stochastic root finding problem from Bermudan option pricing.
- Published
- 2020
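The Bayesian knowledge-state update at the heart of G-PBA fits in a few lines. In the sketch below the oracle's location-dependent accuracy p(x) is a fixed toy function; in the paper p(x) is unknown and is itself estimated with a spatial surrogate (e.g., a Binomial GP), which is exactly the part the surrogates above supply.

```python
# Toy probabilistic bisection with a location-dependent oracle accuracy p(x).
import numpy as np

rng = np.random.default_rng(2)
grid = np.linspace(0, 1, 1001)
posterior = np.full(grid.size, 1.0 / grid.size)        # knowledge state for the root location
root = 0.37                                            # hidden root of the mean response
p = lambda x: 0.6 + 0.3 * abs(x - root)                # oracle correctness probability (assumed known here)

for _ in range(200):
    xq = grid[np.searchsorted(np.cumsum(posterior), 0.5)]     # query at the posterior median
    correct = rng.random() < p(xq)
    says_right = (root > xq) if correct else (root <= xq)     # noisy sign response at xq
    if says_right:                                            # oracle claims the root lies to the right
        weights = np.where(grid > xq, p(xq), 1 - p(xq))
    else:
        weights = np.where(grid <= xq, p(xq), 1 - p(xq))
    posterior *= weights
    posterior /= posterior.sum()

print("posterior mean of the root location:", round(float(np.sum(grid * posterior)), 4))
```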
6. Information directed sampling for stochastic root finding.
- Author
-
Sergio Rodriguez and Michael Ludkovski
- Published
- 2015
7. Generalized Probabilistic Bisection for Stochastic Root Finding
- Author
-
Michael Ludkovski and Sergio Rodriguez
- Subjects
Operations research, Computer science, Bayesian probability, Sampling (statistics), Bayesian inference, Computer Science Applications, Sampling distribution, Frequentist inference, Modeling and Simulation, Bisection method, Algorithm, Thompson sampling, Quantile - Abstract
We consider numerical schemes for root finding of noisy responses through generalizing the Probabilistic Bisection Algorithm (PBA) to the more practical context where the sampling distribution is unknown and location dependent. As in standard PBA, we rely on a knowledge state for the approximate posterior of the root location. To implement the corresponding Bayesian updating, we also carry out inference of oracle accuracy, namely learning the probability of the correct response. To this end we utilize batched querying in combination with a variety of frequentist and Bayesian estimators based on majority vote, as well as the underlying functional responses, if available. For guiding sampling selection we investigate both entropy-directed sampling and quantile sampling. Our numerical experiments show that these strategies perform quite differently; in particular, we demonstrate the efficiency of randomized quantile sampling, which is reminiscent of Thompson sampling. Our work is motivated by the root-finding subroutine in pricing of Bermudan financial derivatives, illustrated in the last section of the article.
- Published
- 2020
8. Evaluating Gaussian process metamodels and sequential designs for noisy level set estimation
- Author
-
Mickaël Binois, Xiong Lyu, and Michael Ludkovski
- Subjects
Statistics and Probability, Machine learning, Computer science, Theoretical Computer Science, Sequential updating formulas, Gaussian process, Uncertainty reduction, Stochastic contour-finding, Level set estimation, Student-t process, Design of experiments, Statistics, Simulation noise, Computational Theory and Mathematics, Statistics, Probability and Uncertainty, Algorithm - Abstract
We consider the problem of learning the level set for which a noisy black-box function exceeds a given threshold. To efficiently reconstruct the level set, we investigate Gaussian process (GP) metamodels. Our focus is on strongly stochastic samplers, in particular with heavy-tailed simulation noise and low signal-to-noise ratio. To guard against noise misspecification, we assess the performance of three variants: (i) GPs with Student-$t$ observations; (ii) Student-$t$ processes (TPs); and (iii) classification GPs modeling the sign of the response. In conjunction with these metamodels, we analyze several acquisition functions for guiding the sequential experimental designs, extending existing stepwise uncertainty reduction criteria to the stochastic contour-finding context. This also motivates our development of (approximate) updating formulas to efficiently compute such acquisition functions. Our schemes are benchmarked by using a variety of synthetic experiments in 1--6 dimensions. We also consider an application of level set estimation for determining the optimal exercise policy of Bermudan options in finance.
- Published
- 2021
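As a small stand-in for variant (iii) above, the sketch below fits a Gaussian process classifier to the sign of (response - threshold) under heavy-tailed noise and then targets the input whose exceedance probability is closest to 1/2. The acquisition rule is a generic ambiguity heuristic, not the paper's stepwise uncertainty reduction criteria, and the test function and noise level are invented.

```python
# Toy classification-GP approach to noisy level-set estimation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)
threshold = 0.0
latent = lambda x: np.sin(6 * x)                               # unobserved mean response
X = rng.uniform(0, 1, size=(60, 1))
# heavy-tailed (Student-t) simulation noise; only the sign relative to the threshold is modeled
y_sign = (latent(X[:, 0]) + 0.8 * rng.standard_t(df=3, size=60) > threshold).astype(int)

clf = GaussianProcessClassifier(kernel=1.0 * RBF(0.2)).fit(X, y_sign)
X_cand = np.linspace(0, 1, 201)[:, None]
p_exceed = clf.predict_proba(X_cand)[:, 1]                     # estimated exceedance probability

x_next = float(X_cand[np.argmin(np.abs(p_exceed - 0.5)), 0])   # most ambiguous candidate = near the contour
print("next design point near the estimated contour:", round(x_next, 3))
print("candidates classified as exceeding the threshold:", int((p_exceed > 0.5).sum()), "of 201")
```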
9. Simulation methods for stochastic storage problems: a statistical learning perspective
- Author
-
Aditya Maheshwari and Michael Ludkovski
- Subjects
Economics and Econometrics, Mathematical optimization, Computer science, Monte Carlo method, Computational finance, Kriging, Optimization and control, Valuation, Emulation, Modular design, Optimal control, Grid, General Energy, Modeling and Simulation - Abstract
We consider solution of stochastic storage problems through regression Monte Carlo (RMC) methods. Taking a statistical learning perspective, we develop the dynamic emulation algorithm (DEA) that unifies the different existing approaches in a single modular template. We then investigate the two central aspects of regression architecture and experimental design that constitute DEA. For the regression piece, we discuss various non-parametric approaches, in particular introducing the use of Gaussian process regression in the context of stochastic storage. For simulation design, we compare the performance of traditional design (grid discretization) against space-filling and several adaptive alternatives. The overall DEA template is illustrated with multiple examples drawing from natural gas storage valuation and optimal control of a back-up generator in a microgrid.
- Published
- 2019
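For readers new to regression Monte Carlo, the sketch below shows the shape of one backward pass for a caricature storage problem: inventory is discretized, mean-reverting price paths are simulated, and continuation values are regressed on price with a cubic polynomial. It follows the template only in outline; the payoff, dynamics, action set and regression choice are simplified stand-ins, not the paper's DEA with Gaussian process regression.

```python
# Toy regression-Monte-Carlo backward pass for a storage problem with discretized inventory.
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_steps, dt = 5000, 20, 1.0 / 20
P = np.empty((n_paths, n_steps + 1))
P[:, 0] = 3.0 + 0.1 * rng.standard_normal(n_paths)       # small dispersion so regressions are well-posed
for t in range(n_steps):                                  # mean-reverting price paths
    P[:, t + 1] = P[:, t] + 2.0 * (3.0 - P[:, t]) * dt + 0.5 * np.sqrt(dt) * rng.standard_normal(n_paths)

inv_levels = np.array([0.0, 0.5, 1.0])                    # discretized inventory
actions = {0.0: [0.0, 0.5], 0.5: [-0.5, 0.0, 0.5], 1.0: [-0.5, 0.0]}   # feasible injections (+) / withdrawals (-)
V = np.zeros((n_paths, len(inv_levels)))                  # terminal condition: leftover gas worth 0

for t in range(n_steps - 1, -1, -1):
    # continuation value at each inventory level, regressed on the current price (cubic polynomial)
    C = np.column_stack([np.polyval(np.polyfit(P[:, t], V[:, j], 3), P[:, t])
                         for j in range(len(inv_levels))])
    V_new = np.empty_like(V)
    for j, inv in enumerate(inv_levels):
        vals = [(-a * P[:, t]) + C[:, np.searchsorted(inv_levels, inv + a)] for a in actions[float(inv)]]
        V_new[:, j] = np.max(np.column_stack(vals), axis=1)   # best action: buy (a>0), sell (a<0) or wait
    V = V_new

print("estimated value of starting empty:", round(float(V[:, 0].mean()), 3))
```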
10. An Impulse-Regime Switching Game Model of Vertical Competition
- Author
-
Luciano Campi, René Aïd, Michael Ludkovski, and Liangchen Li
- Subjects
Statistics and Probability, Economics and Econometrics, Impulse controls, Diversification (finance), Optimal switching, Vertical integration, Nash equilibrium, Competition (economics), Differential game, Economics, Econometrics, Quasi-variational inequalities, Optimization and control, Downstream (petroleum industry), Upstream (petroleum industry), Numerical and Computational Mathematics, Applied Mathematics, Mathematical finance, Computer Science Applications, Commodity markets, Computational Mathematics, Stochastic differential games - Abstract
We study a new kind of non-zero-sum stochastic differential game with mixed impulse/switching controls, motivated by strategic competition in commodity markets. A representative upstream firm produces a commodity that is used by a representative downstream firm to produce a final consumption good. Both firms can influence the price of the commodity. By shutting down or increasing generation capacities, the upstream firm influences the price with impulses. By switching (or not) to a substitute, the downstream firm influences the drift of the commodity price process. We study the resulting impulse--regime switching game between the two firms, focusing on explicit threshold-type equilibria. Remarkably, this class of games naturally gives rise to multiple Nash equilibria, which we obtain via a verification based approach. We exhibit three types of equilibria depending on the ultimate number of switches by the downstream firm (zero, one or an infinite number of switches). We illustrate the diversification effect provided by vertical integration in the specific case of the crude oil market. Our analysis shows that the diversification gains strongly depend on the pass-through from the crude price to the gasoline price.
- Published
- 2021
11. GAUSSIAN PROCESS MODELS FOR MORTALITY RATES AND IMPROVEMENT FACTORS
- Author
-
James Risk, Howard Zail, and Michael Ludkovski
- Subjects
Economics and Econometrics, Computer science, Bayesian probability, Nonparametric statistics, Residual, Stability (probability), Regression, Kriging, Accounting, Statistics, Econometrics, Spatial dependence, Gaussian process, Finance, Smoothing - Abstract
We develop a Gaussian process (GP) framework for modeling mortality rates and mortality improvement factors. GP regression is a nonparametric, data-driven approach for determining the spatial dependence in mortality rates and jointly smoothing raw rates across dimensions, such as calendar year and age. The GP model quantifies uncertainty associated with smoothed historical experience and generates full stochastic trajectories for out-of-sample forecasts. Our framework is well suited for updating projections when newly available data arrives, and for dealing with “edge” issues where credibility is lower. We present a detailed analysis of GP model performance for US mortality experience based on the CDC (Center for Disease Control) datasets. We investigate the interaction between mean and residual modeling, Bayesian and non-Bayesian GP methodologies, accuracy of in-sample and out-of-sample forecasting, and stability of model parameters. We also document the general decline, along with strong age-dependency, in mortality improvement factors over the past few years, contrasting our findings with the Society of Actuaries (SOA) MP-2014 and -2015 models that do not fully reflect these recent trends.
- Published
- 2018
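A miniature version of the smoothing-and-forecasting workflow described above, using scikit-learn; the synthetic Gompertz-like surface is a stand-in for the CDC data and the kernel choice is illustrative only.

```python
# Toy GP smoothing of raw log-mortality over an (age, year) grid, with an out-of-sample forecast.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
ages, years = np.arange(60, 86), np.arange(2000, 2016)
A, Y = np.meshgrid(ages, years, indexing="ij")
X = np.column_stack([A.ravel(), Y.ravel()]).astype(float)
true_log_mx = -9.5 + 0.09 * (X[:, 0] - 60) - 0.01 * (X[:, 1] - 2000)   # Gompertz-like surface with improvement
y = true_log_mx + 0.05 * rng.standard_normal(len(X))                   # noisy "raw" rates

kernel = RBF(length_scale=[10.0, 5.0]) + WhiteKernel(0.01)             # separate length-scales for age and year
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

mu, sd = gp.predict(np.array([[70.0, 2020.0]]), return_std=True)       # forecast age 70 in 2020
print(f"log m(70, 2020) = {mu[0]:.3f} +/- {2 * sd[0]:.3f}")
```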
12. Sequential Design and Spatial Modeling for Portfolio Tail Risk Measurement
- Author
-
Jimmy Risk and Michael Ludkovski
- Subjects
Numerical Analysis, Applied Mathematics, Kriging, Sequential analysis, Econometrics, Economics, Capital requirement, Portfolio, Tail risk, Portfolio optimization, Finance - Abstract
We consider calculation of capital requirements when the underlying economic scenarios are determined by simulatable risk factors. In the respective nested simulation framework, the goal is to esti...
- Published
- 2018
13. Quickest detection in the Wiener disorder problem with post-change uncertainty
- Author
-
Heng Yang, Olympia Hadjiliadis, and Michael Ludkovski
- Subjects
Statistics and Probability, Mathematical optimization, Asymptotically optimal algorithm, Modeling and Simulation, Probability distribution, CUSUM, Stochastic optimization, Change detection, Mathematics - Abstract
We consider the problem of quickest detection of an abrupt change when there is uncertainty about the post-change distribution. In particular, we examine this problem in the continuous-time Wiener model where the drift of observations changes from zero to a random drift with a prescribed discrete distribution. We set up the problem as a stochastic optimization in which the objective is to minimize a measure of detection delay subject to a constraint on frequency of false alarms. We design a novel composite stopping rule and prove that it is asymptotically optimal of third order under a weighted Lorden’s criterion for detection delay. We also develop the strategy to identify the post-change drift and analyze the conditional identification error asymptotically. Our composite rules are based on CUSUM stopping times, as well as their reaction periods, namely the times between the last reset of the CUSUM statistic process and the CUSUM alarm. The established results shed new light on the performance of...
- Published
- 2017
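The discrete-time sketch below shows the CUSUM ingredients the composite rule is built from: the statistic for one candidate post-change drift, its alarm time, and the reaction period since the last reset. Threshold and drift values are arbitrary toy choices; the paper itself works in continuous time with a weighted family of such statistics.

```python
# Toy CUSUM statistic, alarm time, and reaction period for one candidate post-change drift.
import numpy as np

rng = np.random.default_rng(6)
n, change_pt, drift = 600, 300, 0.4
x = rng.standard_normal(n)
x[change_pt:] += drift                                   # increments gain a drift after the change

h, cusum, last_reset, alarm = 8.0, 0.0, 0, None
for t, xt in enumerate(x):
    # CUSUM recursion with the log-likelihood-ratio increment for N(0,1) vs N(drift,1)
    cusum = max(0.0, cusum + drift * xt - 0.5 * drift ** 2)
    if cusum == 0.0:
        last_reset = t                                   # statistic just reset: start of a reaction period
    if cusum >= h:
        alarm = t
        break
if alarm is None:
    alarm = n - 1                                        # no alarm within the horizon (unlikely here)

print(f"change at t={change_pt}, alarm at t={alarm}, last reset at t={last_reset}, "
      f"reaction period = {alarm - last_reset}")
```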
14. Sequential Design for Ranking Response Surfaces
- Author
-
Ruimeng Hu and Michael Ludkovski
- Subjects
Statistics and Probability, Sequential uncertainty reduction, Computer science, Bayesian probability, Response surface modeling, Machine learning, Computational finance, Synthetic data, Kriging, Discrete Mathematics and Combinatorics, Stochastic control, Numerical and Computational Mathematics, Applied Mathematics, Statistics, Sampling (statistics), Expected improvement, Sequential design, Ranking, Sequential analysis, Stochastic kriging, Modeling and Simulation, Statistics, Probability and Uncertainty, Heuristics, Algorithm - Abstract
We propose and analyze sequential design methods for the problem of ranking several response surfaces. Namely, given $L \ge 2$ response surfaces over a continuous input space $\cal X$, the aim is to efficiently find the index of the minimal response across the entire $\cal X$. The response surfaces are not known and have to be noisily sampled one-at-a-time. This setting is motivated by stochastic control applications and requires joint experimental design both in space and response-index dimensions. To generate sequential design heuristics we investigate stepwise uncertainty reduction approaches, as well as sampling based on posterior classification complexity. We also make connections between our continuous-input formulation and the discrete framework of pure regret in multi-armed bandits. To model the response surfaces we utilize kriging surrogates. Several numerical examples using both synthetic data and an epidemics control problem are provided to illustrate our approach and the efficacy of respective adaptive designs., 26 pages, 7 figures (updated several sections and figures)
- Published
- 2017
15. Statistical Learning for Probability-Constrained Stochastic Optimal Control
- Author
-
Jan Palczewski, Alessandro Balata, Aditya Maheshwari, and Michael Ludkovski
- Subjects
Mathematical optimization, Information Systems and Management, General Computer Science, Computer science, Monte Carlo method, Computational finance, Management Science and Operations Research, Logistic regression, Industrial and Manufacturing Engineering, Kriging, Control theory, Optimization and control, Stochastic control, Probabilistic logic, Quantile regression, Support vector machine, Dynamic programming, Modeling and Simulation - Abstract
We investigate Monte Carlo based algorithms for solving stochastic control problems with probabilistic constraints. Our motivation comes from microgrid management, where the controller tries to optimally dispatch a diesel generator while maintaining low probability of blackouts. The key question we investigate concerns empirical simulation procedures for learning the admissible control set that is specified implicitly through a probability constraint on the system state. We propose a variety of relevant statistical tools including logistic regression, Gaussian process regression, quantile regression and support vector machines, which we then incorporate into an overall Regression Monte Carlo (RMC) framework for approximate dynamic programming. Our results indicate that using logistic or Gaussian process regression to estimate the admissibility probability outperforms the other options. Our algorithms offer an efficient and reliable extension of RMC to probability-constrained control. We illustrate our findings with two case studies for the microgrid problem.
- Published
- 2019
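A single-period sketch of the admissible-set learning step: simulate outcomes for (state, control) pairs, label whether the constraint held, and fit a logistic regression for the admissibility probability. The microgrid quantities below are toy stand-ins, and the full method embeds this step inside a dynamic-programming loop.

```python
# Toy estimate of the admissibility probability P(no blackout | state, control) via logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 4000
inventory = rng.uniform(0, 10, n)                        # battery charge (state)
dispatch = rng.uniform(0, 5, n)                          # candidate diesel dispatch (control)
net_demand = rng.normal(3, 1.5, n)                       # stochastic residual demand
ok = (inventory + dispatch - net_demand >= 0).astype(int)   # 1 = constraint satisfied (no blackout)

clf = LogisticRegression().fit(np.column_stack([inventory, dispatch]), ok)

# estimated admissible controls at inventory level 2: P(no blackout) >= 0.95
u_grid = np.linspace(0, 5, 51)
p_ok = clf.predict_proba(np.column_stack([np.full_like(u_grid, 2.0), u_grid]))[:, 1]
admissible = u_grid[p_ok >= 0.95]
print("smallest admissible dispatch at inventory = 2:",
      round(float(admissible[0]), 2) if admissible.size else "none on this grid")
```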
16. Technology ladders and R&D in dynamic Cournot markets
- Author
-
Ronnie Sircar and Michael Ludkovski
- Subjects
Economics and Econometrics, Control and Optimization, Markov chain, Comparative statics, Applied Mathematics, Cournot competition, Investment (macroeconomics), Microeconomics, Competition (economics), Market structure, Nash equilibrium, Dominance (economics), Economics, Perfect competition, Production (economics), Monopoly - Abstract
We explore optimal investment in Research and Development activities among producers in a competitive market. R&D effort is costly and results in discrete technological advances that gradually lower production costs. The aggregate cost profile is thus expressed as a stochastic multi-dimensional counting process, individually controlled by the players. Our model combines features of patent racing with dynamic market structure, capturing the interplay between the immediate competition in terms of production rates and the long-term competition in R&D. Using a Cournot model of competition with substitutable goods (e.g. markets for different energy commodities) we analyze the resulting Markov Nash equilibrium which reduces to analysis of a sequence of the one-step static games arising between R&D successes. Several numerical examples and extensive analysis of the emerging comparative statics are presented.
- Published
- 2016
17. Statistical emulators for pricing and hedging longevity risk products
- Author
-
Jimmy Risk and Michael Ludkovski
- Subjects
Statistics and Probability, Economics and Econometrics, Longevity risk, Stochastic process, Life annuity, Expected value, Econometrics, Economics, Statistics, Probability and Uncertainty, Basis risk, Gaussian process - Abstract
We propose the use of statistical emulators for the purpose of analyzing mortality-linked contracts in stochastic mortality models. Such models typically require (nested) evaluation of expected values of nonlinear functionals of multi-dimensional stochastic processes. Except in the simplest cases, no closed-form expressions are available, necessitating numerical approximation. To complement various analytic approximations, we advocate the use of modern statistical tools from machine learning to generate a flexible, non-parametric surrogate for the true mappings. This method allows performance guarantees regarding approximation accuracy and removes the need for nested simulation. We illustrate our approach with case studies involving (i) a Lee–Carter model with mortality shocks; (ii) index-based static hedging with longevity basis risk; (iii) a Cairns–Blake–Dowd stochastic survival probability model; (iv) variable annuities under stochastic interest rate and mortality.
- Published
- 2016
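The emulation idea in miniature: run the expensive inner simulation only at a small design of outer scenarios, fit a GP surrogate to the resulting values, and then predict at many new scenarios with no further inner loops. The "annuity" payoff and all parameters below are invented for illustration.

```python
# Toy emulator replacing nested simulation for a longevity-linked value.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(8)

def inner_value(mort_shock, n_inner=2000):
    """Inner Monte Carlo estimate of a toy 30-year annuity value given a mortality-shock scenario."""
    surv = np.exp(-(0.02 + mort_shock) * np.arange(1, 31))             # survival curve
    noise = 1 + 0.05 * rng.standard_normal((n_inner, 30))              # idiosyncratic inner noise
    return float(np.mean((surv * noise) @ (1.03 ** -np.arange(1, 31))))  # discounted expected payments

design = np.linspace(-0.01, 0.03, 12)[:, None]                         # 12 training scenarios only
values = np.array([inner_value(m) for m in design[:, 0]])

gp = GaussianProcessRegressor(RBF(0.01) + WhiteKernel(1e-4), normalize_y=True).fit(design, values)
scenarios = rng.normal(0.01, 0.008, size=(10_000, 1))                  # many outer scenarios, no inner loops
pred, sd = gp.predict(scenarios, return_std=True)
print("emulated mean annuity value:", round(float(pred.mean()), 3),
      "| max emulator sd:", round(float(sd.max()), 4))
```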
18. Testing Alternative Regression Frameworks for Predictive Modeling of Health Care Costs
- Author
-
Michael Loginov, Michael Ludkovski, and Ian Duncan
- Subjects
Statistics and Probability, Economics and Econometrics, Actuarial science, Computer science, Statistical model, Risk adjustment, Regression, Covariate, Health care, Econometrics, Health insurance, Revenue, Statistics, Probability and Uncertainty, Health care economics and organizations, Quantile - Abstract
Predictive models of health care costs have become mainstream in much health care actuarial work. The Affordable Care Act requires the use of predictive modeling-based risk-adjuster models to transfer revenue between different health exchange participants. Although the predictive accuracy of these models has been investigated in a number of studies, the accuracy and use of models for applications other than risk adjustment have not been the subject of much investigation. We investigate predictive modeling of future health care costs using several statistical techniques. Our analysis was performed based on a dataset of 30,000 insureds containing claims information from two contiguous years. The dataset contains more than 100 covariates for each insured, including detailed breakdown of past costs and causes encoded via coexisting condition flags. We discuss statistical models for the relationship between next-year costs and medical and cost information to predict the mean and quantiles of future cost, ranki...
- Published
- 2016
19. The effect of rate design on power distribution reliability considering adoption of distributed energy resources
- Author
-
Michael Ludkovski, Miguel Heleno, and Aditya Maheshwari
- Subjects
Economics, Computer science, Management, Monitoring, Policy and Law, DER planning, Engineering, Affordable and Clean Energy, Microgrids, Reliability (statistics), Rate design, Energy, Mechanical Engineering, Photovoltaic system, Building and Construction, Reliability, Grid, Power (physics), Reliability engineering, General Energy, Distributed generation, Electricity - Abstract
Electricity rates are a main driver for adoption of Distributed Energy Resources (DERs) by private consumers. In turn, DERs are a major component of the reliability of energy access in the long run. Defining reliability indices in a paradigm where energy is generated both behind and in front of the meter is part of an ongoing discussion about the future role of utilities and system operators with many regulatory implications. This paper contributes to that discussion by analyzing the effect of rate design on the long term reliability indices of power distribution. A methodology to quantify this effect is proposed and a case study involving photovoltaic (PV) and storage technology adoption in California is presented. Several numerical simulations illustrate how electricity rates affect the grid reliability by altering dispatch and adoption of the DERs. We further document that the impact of rate design on reliability can be very different from the perspective of the utility versus that of the consumers. Our model affirms the positive connection between investments in DERs and the grid reliability and provides an additional tool to policy-makers for improving the reliability of the grid in the long term.
- Published
- 2020
20. Dynamic Contagion in a Banking System with Births and Defaults
- Author
-
Michael Ludkovski, Andrey Sarantsev, and Tomoyuki Ichiba
- Subjects
Stability (probability), Econometrics, Economics, Brownian motion, Mathematical finance, Probability, Scaling limit, Jump, Default, General Economics, Econometrics and Finance, Finance - Abstract
We consider a dynamic model of interconnected banks. New banks can emerge, and existing banks can default, creating a birth-and-death setup. Microscopically, banks evolve as independent geometric Brownian motions. Systemic effects are captured through default contagion: as one bank defaults, reserves of other banks are reduced by a random proportion. After examining the long-term stability of this system, we investigate mean-field limits as the number of banks tends to infinity. Our main results concern the measure-valued scaling limit which is governed by a McKean-Vlasov jump-diffusion. The default impact creates a mean-field drift, while the births and defaults introduce jump terms tied to the current distribution of the process. Individual dynamics in the limit are described by the propagation of chaos phenomenon. In certain cases, we explicitly characterize the limiting average reserves.
- Published
- 2018
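A bare-bones simulation of the finite-bank system described above: reserves follow independent geometric Brownian motions, each default knocks a random proportion off every surviving bank, and new banks arrive at a constant rate. All parameter values are illustrative; the paper's results concern the behavior of such systems as the number of banks grows.

```python
# Toy birth-and-death banking system with default contagion.
import numpy as np

rng = np.random.default_rng(9)
dt, T = 0.01, 5.0
mu, sigma, barrier, birth_rate = 0.02, 0.3, 0.2, 1.0
reserves = list(rng.uniform(0.5, 2.0, size=20))                  # initial banks

for _ in range(int(T / dt)):
    # independent GBM step for every surviving bank
    reserves = [r * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal())
                for r in reserves]
    n_defaults = sum(r <= barrier for r in reserves)
    reserves = [r for r in reserves if r > barrier]
    for _ in range(n_defaults):                                  # each default hits all survivors
        loss = rng.uniform(0.0, 0.2)                             # random proportional reduction
        reserves = [r * (1 - loss) for r in reserves]
    if rng.random() < birth_rate * dt:                           # a new bank enters
        reserves.append(rng.uniform(0.5, 2.0))

avg = round(float(np.mean(reserves)), 3) if reserves else None
print("banks alive at T:", len(reserves), "| average reserves:", avg)
```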
21. Stochastic Optimal Coordination of Small UAVs for Target Tracking using Regression-based Dynamic Programming
- Author
-
Steven A. P. Quintero, Joao P. Hespanha, and Michael Ludkovski
- Subjects
Engineering, Mathematical optimization, Mechanical Engineering, Covariance, Optimal control, Industrial and Manufacturing Engineering, Robust regression, Dynamic programming, Geolocation, Artificial Intelligence, Control and Systems Engineering, Motion planning, Electrical and Electronic Engineering, Software, Curse of dimensionality - Abstract
We study the problem of optimally coordinating multiple fixed-wing UAVs to perform vision-based target tracking, which entails that the UAVs are tasked with gathering the best joint vision-based measurements of an unpredictable ground target. We utilize an analytic expression for the error covariance associated with the fused measurements of the target's position, and we employ stochastic fourth-order models for all vehicles, thereby incorporating a high degree of realism into the problem formulation. While dynamic programming can generate an optimal control policy that minimizes the expected value of the fused geolocation error covariance over time, it is accompanied by significant computational challenges due to the curse of dimensionality. In order to circumvent this challenge, we present a novel policy generation technique that combines simulation-based policy iteration with a robust regression scheme. The resulting control policy offers a significant advantage over alternative approaches and shows that the optimal control strategy involves coordinating the UAVs' distances to the target rather than their viewing angles, which had been a common practice in target tracking.
- Published
- 2015
22. Capacity Expansion Games with Application to Competition in Power Generation Investments
- Author
-
Michael Ludkovski, René Aïd, and Liangchen Li
- Subjects
Economics and Econometrics, Control and Optimization, Preemption, Non-zero-sum stopping games, Microeconomics, Competition (economics), Capacity expansion, Economics, Duopoly, Timing game, Applied Mathematics, Continuous-time games of timing, Investment (macroeconomics), Power generation investments, Electricity generation, Nash equilibrium, Aggregate supply - Abstract
We consider competitive capacity investment for a duopoly of two distinct producers. The producers are exposed to stochastically fluctuating costs and interact through aggregate supply. Capacity expansion is irreversible and modeled in terms of timing strategies characterized through threshold rules. Because the impact of changing costs on the producers is asymmetric, we are led to a nonzero-sum timing game describing the transitions among the discrete investment stages. Working in a continuous-time diffusion framework, we characterize and analyze the resulting Nash equilibrium and game values. Our analysis quantifies the dynamic competition effects and yields insight into dynamic preemption and over-investment in a general asymmetric setting. A case study considering the impact of fluctuating emission costs on power producers investing in nuclear and coal-fired plants is also presented.
- Published
- 2017
23. Replication or exploration? Sequential design for stochastic simulation experiments
- Author
-
Michael Ludkovski, Robert B. Gramacy, Jiangeng Huang, and Mickaël Binois
- Subjects
Statistics and Probability, Optimal design, Computer science, Surrogate model, Stochastic simulation, Replication (statistics), Gaussian process, Emulation, Applied Mathematics, Computer experiment, Sequential analysis, Modeling and Simulation, Algorithm - Abstract
We investigate the merits of replication, and provide methods for optimal design (including replicates), with the goal of obtaining globally accurate emulation of noisy computer simulation experiments. We first show that replication can be beneficial from both design and computational perspectives, in the context of Gaussian process surrogate modeling. We then develop a lookahead based sequential design scheme that can determine if a new run should be at an existing input location (i.e., replicate) or at a new one (explore). When paired with a newly developed heteroskedastic Gaussian process model, our dynamic design scheme facilitates learning of signal and noise relationships which can vary throughout the input space. We show that it does so efficiently, on both computational and statistical grounds. In addition to illustrative synthetic examples, we demonstrate performance on two challenging real-data simulation experiments, from inventory management and epidemiology.
- Published
- 2017
24. Sequential Bayesian inference in hidden Markov stochastic kinetic models with application to detection and response to seasonal epidemics
- Author
-
Michael Ludkovski and Junjing Lin
- Subjects
Statistics and Probability, Markov chain, Computer science, Markov process, Inference, Statistics theory, Bayesian inference, Theoretical Computer Science, Bayes' theorem, Computational Theory and Mathematics, Statistics, Probability and Uncertainty, Hidden Markov model, Epidemic model, Particle filter, Algorithm - Abstract
We study sequential Bayesian inference in stochastic kinetic models with latent factors. Assuming continuous observation of all the reactions, our focus is on joint inference of the unknown reaction rates and the dynamic latent states, modeled as a hidden Markov factor. Using insights from nonlinear filtering of continuous-time jump Markov processes we develop a novel sequential Monte Carlo algorithm for this purpose. Our approach applies the ideas of particle learning to minimize particle degeneracy and exploit the analytical jump Markov structure. A motivating application of our methods is modeling of seasonal infectious disease outbreaks represented through a compartmental epidemic model. We demonstrate inference in such models with several numerical illustrations and also discuss predictive analysis of epidemic countermeasures using sequential Bayes estimates.
- Published
- 2013
25. Commodities, Energy and Environmental Finance
- Author
-
René Aïd, Michael Ludkovski, and Ronnie Sircar
- Subjects
- Environmental economics--Mathematical models, Energy industries, Game theory, Distribution (Probability theory), Economics--Mathematical models, System theory
- Abstract
This volume is a collection of chapters covering the latest developments in applications of financial mathematics and statistics to topics in energy, commodity financial markets and environmental economics. The research presented is based on the presentations and discussions that took place during the Fields Institute Focus Program on Commodities, Energy and Environmental Finance in August 2013. The authors include applied mathematicians, economists and industry practitioners, providing for a multi-disciplinary spectrum of perspectives on the subject. The volume consists of four sections: Electricity Markets; Real Options; Trading in Commodity Markets; and Oligopolistic Models for Energy Production. Taken together, the chapters give a comprehensive summary of the current state of the art in quantitative analysis of commodities and energy finance. The topics covered include structural models of electricity markets, financialization of commodities, valuation of commodity real options, game-theory analysis of exhaustible resource management and analysis of commodity ETFs. The volume also includes two survey articles that provide a source for new researchers interested in getting into these topics.
- Published
- 2015
26. Practical heteroskedastic Gaussian process modeling for large simulation experiments
- Author
-
Michael Ludkovski, Mickaël Binois, and Robert B. Gramacy
- Subjects
Statistics and Probability, Heteroscedasticity, Mathematical optimization, Computer science, Inference, Discrete Mathematics and Combinatorics, Gaussian process, Replication, Noise, Efficiency, Statistics, Probability and Uncertainty - Abstract
We present a unified view of likelihood-based Gaussian process regression for simulation experiments exhibiting input-dependent noise. Replication plays an important role in that context; however, previous methods leveraging replicates have either ignored the computational savings that come from such design, or have short-cut full likelihood-based inference to remain tractable. Starting with homoskedastic processes, we show how multiple applications of a well-known Woodbury identity facilitate inference for all parameters under the likelihood (without approximation), bypassing the typical full-data sized calculations. We then borrow a latent-variable idea from machine learning to address heteroskedasticity, adapting it to work within the same thrifty inferential framework, thereby simultaneously leveraging the computational and statistical efficiency of designs with replication. The result is an inferential scheme that can be characterized as a single objective function, complete with closed form derivatives, for rapid library-based optimization. Illustrations are provided, including real-world simulation experiments from manufacturing and the management of epidemics.
- Published
- 2016
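The computational point about replicates can be checked numerically: for a homoskedastic GP, conditioning on every replicated observation gives the same posterior mean as conditioning on the unique inputs with replicate-averaged responses and noise variance divided by the replicate counts, which is the structure the Woodbury manipulations exploit. A plain-numpy toy check (kernel and data are arbitrary):

```python
# Replicated-design GP shortcut: full-data and unique-design posterior means coincide.
import numpy as np

rng = np.random.default_rng(10)
k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / 0.3**2)   # squared-exponential kernel

xu = np.array([0.1, 0.4, 0.7, 0.9])                 # unique design points
reps = np.array([5, 2, 8, 3])                       # replicate counts
tau2 = 0.25                                         # noise variance

x_full = np.repeat(xu, reps)
y_full = np.sin(3 * x_full) + np.sqrt(tau2) * rng.standard_normal(x_full.size)
y_bar = np.array([y_full[x_full == u].mean() for u in xu])    # replicate-averaged responses

x_test = np.linspace(0, 1, 7)
# full-data posterior mean: solves an N x N system, N = sum(reps)
m_full = k(x_test, x_full) @ np.linalg.solve(k(x_full, x_full) + tau2 * np.eye(x_full.size), y_full)
# unique-design posterior mean: solves an n x n system, n = len(xu), with noise tau2 / reps
m_uniq = k(x_test, xu) @ np.linalg.solve(k(xu, xu) + np.diag(tau2 / reps), y_bar)
print("max |difference|:", float(np.max(np.abs(m_full - m_uniq))))        # agrees to machine precision
```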
27. Bayesian Quickest Detection in Sensor Arrays
- Author
-
Michael Ludkovski
- Subjects
Statistics and Probability, Dynamic programming, Bayes' theorem, Mathematical optimization, Modeling and Simulation, Monte Carlo method, Bayesian probability, Optimal stopping, Particle filter, Point process, Mathematics - Abstract
We study Bayesian quickest detection problems with sensor arrays. An underlying signal is assumed to gradually propagate through a network of several sensors, triggering a cascade of interdependent change-points. The aim of the decision maker is to centrally fuse all available information to find an optimal detection rule that minimizes Bayes risk. We develop a tractable continuous-time formulation of this problem focusing on the case of sensors collecting point process observations and monitoring the resulting changes in intensity and type of observed events. Our approach uses methods of nonlinear filtering and optimal stopping and lends itself to an efficient numerical scheme that combines particle filtering with a Monte Carlo–based approach to dynamic programming. The developed models and algorithms are illustrated with a range of numerical examples.
- Published
- 2012
28. LIQUIDATION IN LIMIT ORDER BOOKS WITH CONTROLLED INTENSITY
- Author
-
Erhan Bayraktar and Michael Ludkovski
- Subjects
Stochastic control, Economics and Econometrics, Mathematical optimization, Fluid limit, Applied Mathematics, Point process, Accounting, Bellman equation, Order book, Finance, Mathematics - Abstract
We consider a framework for solving optimal liquidation problems in limit order books. In particular, order arrivals are modeled as a point process whose intensity depends on the liquidation price. We set up a stochastic control problem in which the goal is to maximize the expected revenue from liquidating the entire position held. We solve this optimal liquidation problem for power-law and exponential-decay order book models explicitly and discuss several extensions. We also consider the continuous selling (or fluid) limit when the trading units are ever smaller and the intensity is ever larger. This limit provides an analytical approximation to the value function and the optimal solution. Using techniques from viscosity solutions we show that the discrete state problem and its optimal solution converge to the corresponding quantities in the continuous selling limit uniformly on compacts.
- Published
- 2012
29. Finite Horizon Decision Timing with Partially Observable Poisson Processes
- Author
-
Semih Onur Sezer and Michael Ludkovski
- Subjects
Statistics and Probability, Reliability theory, Mathematical optimization, Applied Mathematics, Probability, Bayesian probability, Markov process, Poisson distribution, Optimization and control, Modeling and Simulation, Optimal stopping, Mathematics - Abstract
We study decision timing problems on finite horizon with Poissonian information arrivals. In our model, a decision maker wishes to optimally time her action in order to maximize her expected reward. The reward depends on an unobservable Markovian environment, and information about the environment is collected through a (compound) Poisson observation process. Examples of such systems arise in investment timing, reliability theory, Bayesian regime detection and technology adoption models. We solve the problem by studying an optimal stopping problem for a piecewise-deterministic process which gives the posterior likelihoods of the unobservable environment. Our method lends itself to simple numerical implementation and we present several illustrative numerical examples.
- Published
- 2012
30. Impact of Counterparty Risk on the Reinsurance Market
- Author
-
Michael Ludkovski and Carole Bernard
- Subjects
Statistics and Probability, Reinsurance, Tail value at risk, Economics and Econometrics, Actuarial science, Comparative statics, Default risk, Economics, Statistics, Probability and Uncertainty, Deductible, Expected utility hypothesis, Credit risk - Abstract
We investigate the impact of counterparty risk (from the insurer’s viewpoint) on contract design in the reinsurance market. We study a multiplicative default risk model with partial recovery and where the probability of the reinsurer’s default depends on the loss incurred by the insurer. The reinsurer (reinsurance seller) is assumed to be risk-neutral, while the insurer (reinsurance buyer) is risk-averse and uses either expected utility or a conditional tail expectation risk criterion. We show that generally the reinsurance buyer wishes to overinsure above a deductible level, and that many of the standard comparative statics cease to hold. We also derive the properties of stop-loss insurance in our model and consider the possibility of divergent beliefs about the default probability. Classical results are recovered when default risk is loss-independent or there is zero recovery rate. Results are illustrated with numerical examples.
- Published
- 2012
31. Ex Post Moral Hazard and Bayesian Learning in Insurance
- Author
-
Michael Ludkovski and Virginia R. Young
- Subjects
Insurance fraud, Economics and Econometrics, Information asymmetry, Actuarial science, Ex ante, Moral hazard, Accounting, Insurance policy, Economics, Adverse selection, Finance, Property insurance - Abstract
We study a dynamic insurance market with asymmetric information and ex post moral hazard. In our model, the insurance buyer's risk type is unknown to the insurer; moreover, the buyer has the option of not reporting losses. The insurer sets premia according to the buyer's experience rating, computed via Bayesian estimation based on the buyer's history of reported claims. Accordingly, the buyer has strategic incentive to withhold information about losses. We construct an insurance market information equilibrium model and show that a variety of reporting strategies are possible. The results are illustrated with explicit computations in a two-period risk-neutral case study.
- Published
- 2010
32. GAUSSIAN PROCESS MODELS FOR MORTALITY RATES AND IMPROVEMENT FACTORS – CORRIGENDUM
- Author
-
Howard Zail, Jimmy Risk, and Michael Ludkovski
- Subjects
Economics and Econometrics, Computer science, Mortality rate, Accounting, Statistics, Gaussian process, Finance - Abstract
In Ludkovski, Risk, and Zail (2018), the email address for Jimmy Risk appeared incorrectly. Jimmy Risk's email address should appear as jrisk@cpp.edu. The original article has been corrected to rectify this error.
- Published
- 2018
33. Valuation of energy storage: an optimal switching approach
- Author
- René Carmona and Michael Ludkovski
- Subjects
Stochastic control ,Mathematical optimization ,Monte Carlo method ,Markov process ,Price model ,Energy storage ,symbols.namesake ,Hydroelectricity ,symbols ,Economics ,General Economics, Econometrics and Finance ,Mathematical economics ,Finance ,Valuation (finance) - Abstract
We consider the valuation of energy storage facilities within the framework of stochastic control. Our two main examples are natural gas dome storage and hydroelectric pumped storage. Focusing on the timing flexibility aspect of the problem we construct an optimal switching model with inventory. Thus, the manager has a constrained compound American option on the inter-temporal spread of the commodity prices. Extending the methodology from Carmona and Ludkovski [Appl. Math. Finance, 2008], we then construct a robust numerical scheme based on Monte Carlo regressions. Our simulation method can handle a generic Markovian price model and easily incorporates many operational features and constraints. To overcome the main challenge of the path-dependent storage levels, two numerical approaches are proposed. The resulting scheme is compared with the traditional quasi-variational framework and illustrated with several concrete examples. We also consider related problems of interest, such as supply guarantees and m...
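A minimal sketch of the Monte Carlo regression step underlying such schemes (illustrative one-factor prices and placeholder regime payoffs, not the paper's storage model): continuation values for each operating regime are estimated by regressing simulated future values on a polynomial basis in the current price.

import numpy as np

rng = np.random.default_rng(0)
n_paths, degree = 10_000, 3

# toy one-factor (mean-reverting) price at one backward-induction step
p_now = np.exp(rng.normal(0.0, 0.3, n_paths))
p_next = np.exp(0.7 * np.log(p_now) + rng.normal(0.0, 0.2, n_paths))

# pathwise value-to-go for two operating regimes (placeholders standing in for
# the values computed at the next step of the backward recursion)
v_next = {"inject": 1.0 - p_next, "withdraw": p_next - 0.2}

def continuation(price_now, future_values):
    """Estimate E[V_{t+1} | P_t] by least-squares regression on a polynomial basis."""
    basis = np.vander(price_now, degree + 1)          # columns P^3, P^2, P, 1
    coef, *_ = np.linalg.lstsq(basis, future_values, rcond=None)
    return basis @ coef

q = {regime: continuation(p_now, v) for regime, v in v_next.items()}
# the pathwise switching decision compares the regressed continuation values
best = np.where(q["withdraw"] > q["inject"], "withdraw", "inject")
print(dict(zip(*np.unique(best, return_counts=True))))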
- Published
- 2010
34. Order Flows and Limit Order Book Resiliency on the Meso-Scale
- Author
- Kyle Bechler and Michael Ludkovski
- Subjects
Mathematical optimization ,Quantitative Finance - Trading and Market Microstructure ,050208 finance ,Computer science ,Carry (arithmetic) ,05 social sciences ,01 natural sciences ,Trading and Market Microstructure (q-fin.TR) ,FOS: Economics and business ,Meso scale ,010104 statistics & probability ,Order (exchange) ,0502 economics and business ,0101 mathematics - Abstract
We investigate the behavior of limit order books on the meso-scale, motivated by order execution scheduling algorithms. To do so, we carry out an empirical analysis of the order flows from market and limit order submissions, aggregated from tick-by-tick data via volume-based bucketing, as well as various LOB depth and shape metrics. We document a nonlinear relationship between trade imbalance and price change, which, however, can be converted into a linear link by considering a weighted average of market and limit order flows. We also document a hockey-stick dependence between trade imbalance and one-sided limit order flows, highlighting numerous asymmetric effects between the active and passive sides of the LOB. To address the phenomenological features of price formation, book resilience, and scarce liquidity, we apply a variety of statistical models to test the predictive power of different predictors. We show that on the meso-scale the limit order flows (as well as the relative addition/cancellation rates) carry the most predictive power. Another finding is that the deeper LOB shape, rather than just the book imbalance, is more relevant on this timescale. The empirical results are based on an analysis of six large-tick assets from Nasdaq.
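For concreteness, a hedged sketch of volume-based bucketing of signed trades into a per-bucket trade imbalance (synthetic trade data; the paper additionally aggregates limit order submissions and cancellations, which are not modelled here):

import numpy as np

def bucket_imbalance(signed_sizes, bucket_size):
    """Aggregate signed trade sizes into volume buckets (a bucket closes once
    roughly `bucket_size` shares have traded; trades are not split) and return
    the per-bucket trade imbalance (buy volume - sell volume) / total volume."""
    sizes = np.abs(signed_sizes)
    bucket_id = (np.cumsum(sizes) - 1) // bucket_size
    signed = np.bincount(bucket_id, weights=signed_sizes)
    total = np.bincount(bucket_id, weights=sizes)
    return signed / total

rng = np.random.default_rng(6)
trades = rng.choice([-1, 1], 10_000) * rng.integers(1, 500, size=10_000)  # sign * size
print(np.round(bucket_imbalance(trades, bucket_size=50_000)[:5], 3))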
- Published
- 2017
35. A simulation approach to optimal stopping under partial information
- Author
- Michael Ludkovski
- Subjects
Statistics and Probability ,Stochastic modelling ,Monte Carlo method ,Applied probability ,Regression Monte Carlo ,Modelling and Simulation ,Calculus ,Optimal stopping ,FOS: Mathematics ,State space ,Mathematics - Optimization and Control ,Mathematics ,Stochastic process ,Applied Mathematics ,Probability (math.PR) ,Nonlinear filtering ,Snell envelope ,60G40, 62M20, 65C35 ,Optimization and Control (math.OC) ,Modeling and Simulation ,Probability distribution ,Particle filters ,Particle filter ,Algorithm ,Mathematics - Probability - Abstract
We study the numerical solution of nonlinear partially observed optimal stopping problems. The system state is taken to be a multi-dimensional diffusion and drives the drift of the observation process, which is another multi-dimensional diffusion with correlated noise. Such models where the controller is not fully aware of her environment are of interest in applied probability and financial mathematics. We propose a new approximate numerical algorithm based on the particle filtering and regression Monte Carlo methods. The algorithm maintains a continuous state space and yields an integrated approach to the filtering and control sub-problems. Our approach is entirely simulation-based and therefore allows for a robust implementation with respect to model specification. We carry out the error analysis of our scheme and illustrate with several computational examples. An extension to discretely observed stochastic volatility models is also considered.
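The following generic bootstrap particle filter step illustrates the filtering sub-problem (the Ornstein-Uhlenbeck state dynamic and Gaussian observation likelihood are assumptions for the sketch, not the paper's specification):

import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, dy, dt=0.01, sigma_x=0.2, sigma_y=0.3):
    """One bootstrap-filter step: propagate particles through an illustrative
    Ornstein-Uhlenbeck state dynamic, reweight by the Gaussian likelihood of the
    observation increment dy (drift X*dt, noise sigma_y*sqrt(dt)), then resample."""
    particles = particles - particles * dt + sigma_x * np.sqrt(dt) * rng.standard_normal(particles.shape)
    log_lik = -0.5 * ((dy - particles * dt) / (sigma_y * np.sqrt(dt))) ** 2
    weights = weights * np.exp(log_lik - log_lik.max())
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)   # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.standard_normal(5_000)
weights = np.full(5_000, 1.0 / 5_000)
particles, weights = particle_filter_step(particles, weights, dy=0.004)
print("filtered mean of the hidden state:", round(particles.mean(), 4))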
- Published
- 2009
- Full Text
- View/download PDF
36. Optimal risk sharing under distorted probabilities
- Author
- Virginia R. Young and Michael Ludkovski
- Subjects
Statistics and Probability ,Risk sharing ,Quantitative Finance ,Computer science ,Risk premium ,Pareto optimal allocations ,Deductible ,Applications of Mathematics ,Distortion ,Finance /Banking ,FOS: Mathematics ,Clearing ,Econometrics ,Mathematics - Optimization and Control ,Distortion risk measures ,Transaction cost ,Comonotonicity ,Financial Economics ,Pareto optimal ,Optimization and Control (math.OC) ,Game Theory/Mathematical Methods ,Statistics, Probability and Uncertainty ,Mathematics ,Finance ,Statistics for Business/Economics/Mathematical Finance/Insurance - Abstract
We study optimal risk sharing among $n$ agents endowed with distortion risk measures. Our model includes market frictions that can represent either linear transaction costs or risk premia charged by a clearing house for the agents. Risk sharing under third-party constraints is also considered. We obtain an explicit formula for Pareto optimal allocations. In particular, we find that stop-loss or deductible risk sharing is optimal in the case of two agents and several common distortion functions. This extends a recent result of Jouini et al. (2006) to the problem with unbounded risks and market frictions.
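As background, a short sketch of evaluating a distortion risk measure on an empirical sample (the proportional-hazard distortion is an illustrative choice; the allocation problem itself is not solved here):

import numpy as np

def distortion_risk(losses, g):
    """Empirical distortion (Choquet) risk measure: a weighted sum of the ordered
    losses x_(1) <= ... <= x_(n) with weights g((n-i+1)/n) - g((n-i)/n)."""
    x = np.sort(np.asarray(losses, dtype=float))
    n = len(x)
    tail = np.arange(n, 0, -1) / n                    # (n-i+1)/n for i = 1..n
    weights = g(tail) - g(tail - 1.0 / n)
    return float(np.dot(weights, x))

rng = np.random.default_rng(2)
losses = rng.exponential(1.0, 100_000)
print(round(distortion_risk(losses, lambda u: u), 3))          # identity: plain mean (~1.0)
print(round(distortion_risk(losses, lambda u: u ** 0.5), 3))   # proportional hazard: loaded premium (~2.0)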
- Published
- 2009
37. Sequential tracking of a hidden Markov chain using point process observations
- Author
- Michael Ludkovski and Erhan Bayraktar
- Subjects
Statistics and Probability ,State variable ,Mathematical optimization ,050208 finance ,Markov chain ,Stochastic process ,Applied Mathematics ,05 social sciences ,Optimal switching ,Markov process ,Markov modulated Poisson processes ,01 natural sciences ,Point process ,010104 statistics & probability ,symbols.namesake ,Modeling and Simulation ,Bellman equation ,Modelling and Simulation ,0502 economics and business ,Calculus ,symbols ,Hidden semi-Markov model ,0101 mathematics ,Hidden Markov model ,Mathematics - Abstract
We study finite-horizon optimal switching problems for hidden Markov chain models with point process observations. The controller possesses a finite range of strategies and attempts to track the unobserved state variable using Bayesian updates over the discrete observations. Such a model has applications in economic policy making, staffing under variable demand levels, and generalized Poisson disorder problems. We show regularity of the value function and explicitly characterize an optimal strategy. We also provide an efficient numerical scheme and illustrate our results with several computational examples.
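A hedged sketch of the Bayesian tracking step in a discretized setting (two hidden states observed through binned Poisson counts; the intensities and transition matrix are illustrative assumptions, and the switching control is not shown):

import numpy as np
from scipy.stats import poisson

def track_hidden_chain(pi0, counts, P, lam, dt=1.0):
    """Discrete-time Bayesian tracking of a hidden Markov chain observed through
    binned Poisson counts: predict with the transition matrix, correct with the
    Poisson likelihood of each bin's count, and renormalise."""
    pi, path = np.asarray(pi0, dtype=float), []
    for n in counts:
        pi = pi @ P                              # predict
        pi = pi * poisson.pmf(n, lam * dt)       # correct
        pi = pi / pi.sum()
        path.append(pi.copy())
    return np.array(path)

lam = np.array([1.0, 5.0])                       # arrival intensities in states 0 and 1
P = np.array([[0.98, 0.02], [0.05, 0.95]])       # one-bin transition probabilities
rng = np.random.default_rng(5)
counts = np.concatenate([rng.poisson(1.0, 20), rng.poisson(5.0, 20)])   # regime change at bin 20
posterior = track_hidden_chain([0.9, 0.1], counts, P, lam)
print(np.round(posterior[-1], 3))                # weight shifts to the high-intensity state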
- Published
- 2009
- Full Text
- View/download PDF
38. Relative Hedging of Systematic Mortality Risk
- Author
- Erhan Bayraktar and Michael Ludkovski
- Subjects
Statistics and Probability ,Computer Science::Computer Science and Game Theory ,Economics and Econometrics ,Actuarial science ,Statistics::Applications ,Life annuity ,Mathematics::Optimization and Control ,Model parameters ,Exponential function ,Nonlinear system ,Life insurance ,Econometrics ,Economics ,Portfolio ,Statistics, Probability and Uncertainty ,Valuation (finance) - Abstract
We study indifference valuation mechanisms for mortality-contingent claims under stochastic mortality age structures. Our focus is on capturing the internal cross-hedge between components of an insurer’s portfolio, especially between life annuities and life insurance. We carry out an exhaustive analysis of the dynamic exponential premium principle, which is the representative nonlinear valuation rule in our framework. Using this valuation rule, we derive formulas for the optimal quantity of contracts to sell. Our results are further enhanced by asymptotic expansions that show the relative effects of the model parameters. We also compare the exponential premium principle to other valuation rules. Furthermore, we provide numerical examples to illustrate our approach.
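For orientation, a sketch of the static exponential premium principle pi = (1/alpha) * log E[exp(alpha * X)] estimated by Monte Carlo (the paper studies a dynamic analogue; the Gamma loss model below is an arbitrary illustration):

import numpy as np

def exponential_premium(losses, alpha):
    """Static exponential premium pi = (1/alpha) * log E[exp(alpha * X)],
    estimated by Monte Carlo with a log-sum-exp for numerical stability."""
    a = alpha * np.asarray(losses, dtype=float)
    m = a.max()
    return (m + np.log(np.mean(np.exp(a - m)))) / alpha

rng = np.random.default_rng(3)
claims = rng.gamma(shape=2.0, scale=0.5, size=200_000)   # illustrative loss model, mean 1.0
for alpha in (0.1, 0.5, 1.0):
    print(alpha, round(exponential_premium(claims, alpha), 4))   # premium increases with risk aversion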
- Published
- 2009
39. Pricing Asset Scheduling Flexibility using Optimal Switching
- Author
- Michael Ludkovski and René Carmona
- Subjects
Financial engineering ,Dynamic programming ,Flexibility (engineering) ,Mathematical optimization ,Applied Mathematics ,Convergence (routing) ,Monte Carlo method ,Benchmark (computing) ,Scheduling (production processes) ,Economics ,Optimal stopping ,Finance - Abstract
We study the financial engineering aspects of operational flexibility of energy assets. The current practice relies on a representation that uses strips of European spark‐spread options, ignoring the operational constraints. Instead, we propose a new approach based on a stochastic impulse control framework. The model reduces to a cascade of optimal stopping problems and directly demonstrates that the optimal dispatch policies can be described with the aid of ‘switching boundaries’, similar to the free boundaries of standard American options. Our main contribution is a new method of numerical solution relying on Monte Carlo regressions. The scheme uses dynamic programming to efficiently approximate the optimal dispatch policy along the simulated paths. Convergence analysis is carried out and results are illustrated with a variety of concrete computational examples. We benchmark and compare our scheme with alternative numerical methods.
- Published
- 2008
40. On comonotonicity of Pareto optimal risk sharing
- Author
- Ludger Rüschendorf and Michael Ludkovski
- Subjects
Statistics and Probability ,Pareto optimal ,Probability theory ,Order (exchange) ,Risk aversion ,Comonotonicity ,Statistical dispersion ,Statistics, Probability and Uncertainty ,Measure (mathematics) ,Stochastic ordering ,Mathematical economics ,Mathematics - Abstract
We establish various extensions of the comonotone improvement result of Landsberger and Meilijson [Landsberger, M., Meilijson, I., 1994. Co-monotone allocations, Bickel–Lehmann dispersion and the Arrow–Pratt measure of risk aversion. Annals of Operations Research 52, 97–106] which are of interest for the risk sharing problem. As a consequence we obtain general results of the comonotonicity of Pareto optimal risk allocations using risk measures consistent with the stochastic convex order.
- Published
- 2008
41. Indifference pricing of pure endowments and life annuities under stochastic hazard and interest rates
- Author
- Virginia R. Young and Michael Ludkovski
- Subjects
Statistics and Probability ,Hazard (logic) ,Computer Science::Computer Science and Game Theory ,Economics and Econometrics ,education.field_of_study ,Actuarial science ,Comparative statics ,Stochastic modelling ,media_common.quotation_subject ,Life annuity ,Population ,Indifference price ,Interest rate ,Exponential utility ,Econometrics ,Economics ,Statistics, Probability and Uncertainty ,education ,media_common - Abstract
We study indifference pricing of mortality-contingent claims in a fully stochastic model. We assume both stochastic interest rates and stochastic hazard rates governing the population mortality. In this setting, we compute the indifference price charged by an insurer that uses exponential utility and sells k contingent claims to k independent but homogeneous individuals. Throughout, we focus on the examples of pure endowments and temporary life annuities. We begin with a continuous-time model where we derive the linear PDEs satisfied by the indifference prices and carry out extensive comparative statics. In particular, we show that the price-per-risk grows as more contracts are sold. We then also provide a more flexible discrete-time analog that permits general hazard rate dynamics. In the latter case, we construct a simulation-based algorithm for pricing general mortality-contingent claims and illustrate with a numerical example.
- Published
- 2008
42. Filling the gap between American and Russian options: adjustable regret
- Author
- Savas Dayanik and Michael Ludkovski
- Subjects
Statistics and Probability ,Geometric Brownian motion ,Mathematical optimization ,Conjecture ,Fractional Brownian motion ,Reflected Brownian motion ,Modeling and Simulation ,Bellman equation ,Open problem ,Regret ,Brownian excursion ,Mathematical economics ,Mathematics - Abstract
We study several infinite-horizon optimal multiple-stopping problems for (geometric) Brownian motion. In finance, they naturally interpolate between the American and Russian option formulations in terms of price and reduced regret. In statistics, they are continuous-time examples of best-choice problems with multiple rights. We find explicit formulas for the value functions and completely describe the optimal exercise strategies whenever they exist. We also conjecture a new characterization of the value function for the open problem of the Russian option for arithmetic Brownian motion with drift.
- Published
- 2007
43. Computational Method for Epidemic Detection in Multiple Populations
- Author
- Ekaterina Shatskikh and Michael Ludkovski
- Subjects
Stochastic compartmental models ,Computer science ,ISDS 2014 Conference Abstracts ,Monte Carlo method ,General Earth and Planetary Sciences ,Data mining ,State (computer science) ,Early outbreak detection ,Sequential regression ,computer.software_genre ,computer ,Simulation ,General Environmental Science - Abstract
We propose an epidemic detection algorithm that uses information about the state of infection in multiple populations. Our method is based on a combination of Quickest Detection and Sequential Regression Monte Carlo. As a result, we produce a detection map over the model states indicating where an epidemic is announced.
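A minimal single-population sketch of the quickest-detection ingredient (a Shiryaev posterior recursion with Poisson counts; the rates, change-point prior, and threshold are illustrative assumptions, and the multi-population regression Monte Carlo layer is not shown):

import numpy as np
from scipy.stats import poisson

def shiryaev_update(pi, x, p, lam0, lam1):
    """One Shiryaev posterior update for the probability that the outbreak
    (rate change lam0 -> lam1) has already started, given today's count x and
    a geometric prior with hazard p for the unknown change point."""
    prior = pi + (1.0 - pi) * p
    num = prior * poisson.pmf(x, lam1)
    return num / (num + (1.0 - prior) * poisson.pmf(x, lam0))

rng = np.random.default_rng(4)
counts = np.concatenate([rng.poisson(2.0, 30), rng.poisson(5.0, 10)])   # outbreak starts on day 30
pi = 0.0
for day, x in enumerate(counts):
    pi = shiryaev_update(pi, x, p=0.02, lam0=2.0, lam1=5.0)
    if pi > 0.95:                                # announce at a fixed threshold
        print("epidemic announced on day", day, "with posterior", round(pi, 3))
        break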
- Published
- 2015
44. Game Theoretic Models for Energy Production
- Author
- Ronnie Sircar and Michael Ludkovski
- Subjects
Sequential game ,business.industry ,Cournot competition ,Renewable energy ,Oligopoly ,Microeconomics ,Competition (economics) ,symbols.namesake ,Nash equilibrium ,Differential game ,Economics ,symbols ,Production (economics) ,business - Abstract
We give a selective survey of oligopoly models for energy production which capture to varying degrees issues such as exhaustibility of fossil fuels, development of renewable sources, exploration and new technologies, and changing costs of production. Our main focus is on dynamic Cournot competition with exhaustible resources. We trace the resulting theory of competitive equilibria and discuss some of the major emerging strands, including competition between renewable and exhaustible producers, endogenous market phase transitions, stochastic differential games with controlled jumps, and mean field games.
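As a baseline for such models, a sketch of the one-shot Cournot-Nash equilibrium under linear inverse demand and constant marginal costs (a textbook building block, not one of the dynamic games surveyed):

import numpy as np

def cournot_equilibrium(a, b, costs):
    """Interior Nash equilibrium of a one-shot Cournot game with linear inverse
    demand P(Q) = a - b*Q and constant marginal costs (valid when all resulting
    quantities are positive)."""
    costs = np.asarray(costs, dtype=float)
    n = len(costs)
    Q = (n * a - costs.sum()) / (b * (n + 1))
    q = (a - costs) / b - Q
    return q, a - b * Q

q, price = cournot_equilibrium(a=10.0, b=1.0, costs=[2.0, 4.0])   # e.g. exhaustible vs. green producer
print(np.round(q, 3), round(price, 3))    # lower-cost firm produces more; price exceeds both costs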
- Published
- 2015
45. Dynamic Cournot Models for Production of Exhaustible Commodities Under Stochastic Demand
- Author
- Michael Ludkovski and Xuwei Yang
- Subjects
Relative cost ,Microeconomics ,Economics ,Production (economics) ,Cournot competition ,Proxy (statistics) ,Duopoly ,Shut down ,Low demand - Abstract
We extend the dynamic Cournot model of Ludkovski and Sircar (2011) by considering stochastic demand. We analyze a duopoly between an exhaustible producer and a “green” competitor. Both producers dynamically make decisions regarding their production rates; in addition the exhaustible producer optimizes search for new reserves. The aggregate price earned by the producers switches between high and low demand regimes with exogenously given holding rates. We study how the regime changes and the relative cost of production, which is a proxy for market competitiveness, affect game equilibria, and compare with the case of deterministic demand. A novel feature driven by stochasticity of demand is that production may shut down during low demand to conserve reserves.
- Published
- 2015
46. Testing Alternative Regression Frameworks for Predictive Modeling of Healthcare Costs
- Author
- Michael Ludkovski, Ian Duncan, and Michael Loginov
- Subjects
Multivariate adaptive regression splines ,Ranking ,Lasso (statistics) ,Computer science ,Linear regression ,Covariate ,Decision tree ,Econometrics ,Statistical model ,Random forest - Abstract
Predictive models of healthcare costs have become mainstream in much healthcare actuarial work. The Affordable Care Act requires the use of predictive modeling-based risk-adjuster models to transfer revenue between different health exchange participants. While the predictive accuracy of these models has been investigated in a number of studies, the accuracy and use of models for applications other than risk adjustment have not been the subject of much investigation. We investigate predictive modeling of future healthcare costs using a number of different statistical techniques. Our analysis is based on a dataset of 30,000 insureds containing claims information from two contiguous years. The dataset contains over a hundred covariates for each insured, including a detailed breakdown of past costs and causes encoded via coexisting condition (CC) flags. We discuss statistical models for the relationship between next-year costs and medical and cost information to predict the mean and quantiles of future cost, ranking risks and identifying the most predictive covariates. A comparison of multiple models is presented, including (in addition to the traditional linear regression model underlying risk adjusters) Lasso GLM, multivariate adaptive regression splines, random forests, decision trees, and boosted trees. A detailed performance analysis shows that the traditional regression approach does not perform well and that more accurate models are possible.
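A hedged sketch of this kind of model comparison on synthetic data (the actual claims dataset is not public, and only a subset of the models above is shown to keep the example dependency-free beyond scikit-learn):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# synthetic stand-in for a claims dataset: ~100 covariates, right-skewed cost response
X, y_lin = make_regression(n_samples=30_000, n_features=100, n_informative=20,
                           noise=10.0, random_state=0)
y = np.exp(y_lin / y_lin.std())                     # skewed "next-year cost"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "ols": LinearRegression(),
    "lasso": LassoCV(cv=5),
    "random_forest": RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, round(mean_absolute_error(y_te, model.predict(X_te)), 3))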
- Published
- 2015
47. Game Theoretic Models for Energy Production
- Author
- Michael Ludkovski and Ronnie Sircar
- Published
- 2015
48. Spot convenience yield models for the energy markets
- Author
- René Carmona and Michael Ludkovski
- Published
- 2004
49. Optimal Execution with Dynamic Order Flow Imbalance
- Author
- Michael Ludkovski and Kyle Bechler
- Subjects
Stochastic control ,Numerical Analysis ,Mathematical optimization ,Quantitative Finance - Trading and Market Microstructure ,Applied Mathematics ,Horizon ,Control (management) ,Process (computing) ,Market microstructure ,Trading and Market Microstructure (q-fin.TR) ,FOS: Economics and business ,Flow (mathematics) ,Order (exchange) ,Key (cryptography) ,Economics ,Mathematical economics ,Finance - Abstract
We examine optimal execution models that take into account both market microstructure impact and informational costs. Informational footprint is related to order flow and is represented by the trader's influence on the flow imbalance process, while microstructure influence is captured by instantaneous price impact. We propose a continuous-time stochastic control problem that balances these two costs. Incorporating order flow imbalance leads to the consideration of the current market state and specifically whether one's orders lean with or against the prevailing order flow, key components often ignored by execution models in the literature. In particular, to react to changing order flow, we endogenize the trading horizon $T$. After developing the general indefinite-horizon formulation, we investigate several tractable approximations that sequentially optimize over price impact and over $T$. These approximations, especially a dynamic version based on receding horizon control, are shown to be very accurate and connect to the prevailing Almgren-Chriss framework. We also discuss features of empirical order flow and links between our model and "Optimal Execution Horizon" by Easley et al. (Mathematical Finance, 2013).
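For reference, a sketch of the classical (continuous-time) Almgren-Chriss liquidation schedule that the paper connects to; the order-flow footprint and endogenous horizon discussed above are not captured here:

import numpy as np

def almgren_chriss_schedule(X, T, n_steps, sigma, eta, lam):
    """Remaining inventory on a time grid for the continuous-time Almgren-Chriss
    mean-variance liquidation problem: x(t) = X * sinh(k*(T-t)) / sinh(k*T) with
    k = sqrt(lam * sigma**2 / eta) (risk aversion lam, temporary impact eta)."""
    k = np.sqrt(lam * sigma ** 2 / eta)
    t = np.linspace(0.0, T, n_steps + 1)
    return X * np.sinh(k * (T - t)) / np.sinh(k * T)

holdings = almgren_chriss_schedule(X=1e5, T=1.0, n_steps=10, sigma=0.3, eta=1e-6, lam=1e-6)
trades = -np.diff(holdings)            # shares sold per bucket; more front-loaded for larger lam
print(np.round(trades).astype(int))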
- Published
- 2014
50. New Families of Ideal 2-Level Autocorrelation Ternary Sequences From Second Order DHT
- Author
- Guang Gong and Michael Ludkovski
- Subjects
Discrete mathematics ,Combinatorics ,Sequence ,Complementary sequences ,Applied Mathematics ,Golomb coding ,Autocorrelation ,Discrete Mathematics and Combinatorics ,Ideal (ring theory) ,Ternary operation ,Pseudorandom binary sequence ,Realization (systems) ,Mathematics - Abstract
Following the work of Gong and Golomb on binary sequences, we apply the second-order Decimation-Hadamard Transform (DHT) operator to obtain previously unknown ternary (ideal) two-level autocorrelation sequences. This process is referred to as realization. We obtain new multi-term ternary two-level autocorrelation sequences as realizations of a single 2-term ternary two-level autocorrelation sequence. We conjecture that for n = 2m + 1 odd, there exist m or m − 1 infinite inequivalent families of ternary two-level autocorrelation (AC) sequences, which are given by four constructions. We have verified this for n = 5, 7, 9 and 11. Experimental results are provided.
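A small sketch for checking the ideal two-level autocorrelation property numerically (the ternary m-sequence below is a standard illustrative example, not one of the new families constructed in the paper):

import numpy as np

def periodic_autocorrelation(seq, p=3):
    """Periodic autocorrelation R(tau) = sum_t omega^(s(t+tau) - s(t)) of a p-ary
    sequence over Z_p, where omega is a primitive p-th root of unity. Ideal
    two-level autocorrelation means R(tau) = -1 for every tau != 0."""
    s = np.asarray(seq) % p
    omega = np.exp(2j * np.pi / p)
    return np.array([np.sum(omega ** ((np.roll(s, -tau) - s) % p)) for tau in range(len(s))])

# a ternary m-sequence of period 3**2 - 1 = 8 from the primitive recursion
# s(t+2) = s(t+1) + s(t) mod 3 (an illustrative choice with known two-level AC)
s = [0, 1]
while len(s) < 8:
    s.append((s[-1] + s[-2]) % 3)
R = periodic_autocorrelation(s, p=3)
print(np.round(R.real, 6))    # expect R(0) = 8 and R(tau) = -1 otherwise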
- Published
- 2001