1,708 results for "stochastic optimal control"
Search Results
2. Leveraging More of Biology in Evolutionary Reinforcement Learning
- Author
-
Gašperov, Bruno, Đurasević, Marko, Jakobovic, Domagoj, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Smith, Stephen, editor, Correia, João, editor, and Cintrano, Christian, editor
- Published
- 2024
- Full Text
- View/download PDF
3. Optimal order execution under price impact: a hybrid model.
- Author
-
Di Giacinto, Marina, Tebaldi, Claudio, and Wang, Tai-Ho
- Subjects
- *
PRICES , *LIQUIDATION , *MARKET makers , *RICCATI equation , *STOCHASTIC control theory - Abstract
In this paper we explore optimal liquidation in a market populated by a number of heterogeneous market makers that have limited inventory-carrying and risk-bearing capacity. We derive a reduced form model for the dynamics of their aggregated inventory considering a proper scaling limit. The resulting price impact profile is shown to depend on the characteristics and relative importance of their inventories. The model is flexible enough to reproduce the empirically documented power law behavior of the price impact function. For any choice of the market makers' characteristics, optimal execution within this modeling approach can be recast as a linear-quadratic stochastic control problem. The value function and the associated optimal trading rate can be obtained semi-explicitly subject to solving a differential matrix Riccati equation. Numerical simulations are conducted to illustrate the performance of the resulting optimal liquidation strategy in relation to standard benchmarks. Remarkably, they show that the increase in performance is determined by a substantial reduction of higher-order moment risk. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
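The linear-quadratic reduction described in the abstract above can be illustrated in its simplest scalar form. This is a minimal sketch, not the paper's multi-market-maker model: the weights q, r, the horizon T, and the pure-inventory dynamics x' = u are all illustrative assumptions. The optimal trading rate is linear feedback, u*(t) = -(P(t)/r) x(t), where P solves a backward Riccati equation (here scalar rather than matrix-valued):

```python
import math

# Toy scalar LQ liquidation: inventory dynamics x' = u (trading rate),
# running cost q*x^2 + r*u^2, zero terminal weight.  The Riccati ODE is
#   -P'(t) = q - P(t)^2 / r,   P(T) = 0,
# integrated backward in time with explicit Euler steps.
q, r, T = 1.0, 1.0, 5.0          # illustrative cost weights and horizon
N = 5000
dt = T / N

P = [0.0] * (N + 1)              # P[k] approximates P(k * dt); P[N] = 0
for k in range(N - 1, -1, -1):   # march backward from the terminal time
    P[k] = P[k + 1] + dt * (q - P[k + 1] ** 2 / r)

P0 = P[0]
# Closed form for this scalar case, used as a sanity check:
#   P(t) = sqrt(q r) * tanh(sqrt(q/r) * (T - t))
P_exact = math.sqrt(q * r) * math.tanh(math.sqrt(q / r) * T)
gain0 = P0 / r                   # optimal rate at t=0 is u = -gain0 * x
```

In the paper's setting P is a matrix and the Riccati equation is solved for the full multi-maker state, but the backward-integration structure is the same.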
4. On the maximum principle for relaxed control problems of nonlinear stochastic systems.
- Author
-
Mezerdi, Meriem and Mezerdi, Brahim
- Subjects
- *
STOCHASTIC systems , *BROWNIAN motion , *NONLINEAR equations , *NONLINEAR systems , *STOCHASTIC differential equations , *MARTINGALES (Mathematics) , *STOCHASTIC control theory , *DIFFUSION coefficients - Abstract
We consider optimal control problems for a system governed by a stochastic differential equation driven by a d-dimensional Brownian motion where both the drift and the diffusion coefficient are controlled. It is well known that without additional convexity conditions the strict control problem does not admit an optimal control. To overcome this difficulty, we consider the relaxed model, in which admissible controls are measure-valued processes and the relaxed state process is governed by a stochastic differential equation driven by a continuous orthogonal martingale measure. This relaxed model admits an optimal control that can be approximated by a sequence of strict controls by the so-called chattering lemma. We establish necessary conditions for optimality, in terms of two adjoint processes, extending Peng's maximum principle to relaxed control problems. We show that relaxing the drift and diffusion martingale parts directly, as in deterministic control, does not lead to a true relaxed model, because the resulting controlled dynamics are not continuous in the control variable. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. IMPROVING CONTROL BASED IMPORTANCE SAMPLING STRATEGIES FOR METASTABLE DIFFUSIONS VIA ADAPTED METADYNAMICS.
- Author
-
BORRELL, ENRIC RIBERA, QUER, JANNES, RICHTER, LORENZ, and SCHÜTTE, CHRISTOF
- Subjects
- *
STOCHASTIC control theory , *DYNAMICAL systems , *SAMPLING methods - Abstract
Sampling rare events in metastable dynamical systems is often a computationally expensive task, and one needs to resort to enhanced sampling methods such as importance sampling. Since the problem of finding optimal importance sampling controls can be formulated as a stochastic optimization problem, this brings additional numerical challenges, and the convergence of the corresponding algorithms might suffer from metastability. In this article, we address this issue by combining systematic control approaches with the heuristic adaptive metadynamics method. Crucially, we approximate the importance sampling control by a neural network, which makes the algorithm in principle feasible for high-dimensional applications. We demonstrate numerically on relevant metastable problems that our algorithm is more effective than previous attempts and that only the combination of the two approaches leads to satisfactory convergence, and therefore to efficient sampling, in certain metastable settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
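The core idea behind control-based importance sampling, which the abstract above builds on, can be shown in a deliberately tiny setting. This is a toy sketch, not the paper's neural-network/metadynamics method: the rare event is a Gaussian tail, and the "control" is a constant shift u (the zero-dimensional analogue of a Girsanov drift change), chosen here as u = a.

```python
import math, random

# Estimate the rare-event probability p = P(Z > a), Z ~ N(0,1), by
# sampling from a "controlled" (shifted) distribution Z = Y + u and
# reweighting each sample with the likelihood ratio
#   w(Y) = exp(-u*Y - u^2/2).
# A good constant control for this tail event is simply u = a.
random.seed(0)
a, u, n = 3.0, 3.0, 20000

acc = 0.0
for _ in range(n):
    y = random.gauss(0.0, 1.0)
    z = y + u                                # controlled sample
    w = math.exp(-u * y - 0.5 * u * u)       # importance weight
    if z > a:
        acc += w
p_hat = acc / n

p_true = 0.5 * math.erfc(a / math.sqrt(2.0))  # exact Gaussian tail, ~1.35e-3
```

A naive Monte Carlo estimator would see a hit only about once per 740 samples here; the tilted sampler hits on roughly half the samples and corrects with the weights, which is exactly the variance-reduction mechanism the paper's learned controls generalize to metastable diffusions.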
6. Nonlinear Optimal Control for Stochastic Dynamical Systems.
- Author
-
Lanchares, Manuel and Haddad, Wassim M.
- Subjects
- *
STOCHASTIC control theory , *STOCHASTIC systems , *NONLINEAR dynamical systems , *DYNAMICAL systems , *HAMILTON-Jacobi-Bellman equation , *NONLINEAR systems , *CLOSED loop systems - Abstract
This paper presents a comprehensive framework addressing optimal nonlinear analysis and feedback control synthesis for nonlinear stochastic dynamical systems. The focus lies on establishing connections between stochastic Lyapunov theory and stochastic Hamilton–Jacobi–Bellman theory within a unified perspective. We demonstrate that the closed-loop nonlinear system's asymptotic stability in probability is ensured through a Lyapunov function, identified as the solution to the steady-state form of the stochastic Hamilton–Jacobi–Bellman equation. This dual assurance guarantees both stochastic stability and optimality. Additionally, optimal feedback controllers for affine nonlinear systems are developed using an inverse optimality framework tailored to the stochastic stabilization problem. Furthermore, the paper derives stability margins for optimal and inverse optimal stochastic feedback regulators. Gain, sector, and disk margin guarantees are established for nonlinear stochastic dynamical systems controlled by nonlinear optimal and inverse optimal Hamilton–Jacobi–Bellman controllers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
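The central connection in the abstract above, that a Lyapunov function can be obtained as the solution of the steady-state Hamilton–Jacobi–Bellman equation, is easy to see in a scalar stochastic LQ example. This is an illustrative sketch with made-up coefficients, not the paper's general nonlinear framework: for dX = (aX + bu) dt + σX dW with cost E∫(q x² + r u²) dt, the ansatz V(x) = P x² reduces the steady-state HJB equation to an algebraic equation for P.

```python
import math

# Scalar stochastic LQ with state-multiplicative noise:
#   dX = (a*X + b*u) dt + sigma*X dW,  cost E ∫ (q x^2 + r u^2) dt.
# Plugging V(x) = P x^2 into the steady-state HJB equation gives
#   (b^2/r) P^2 - (2a + sigma^2) P - q = 0,
# with optimal feedback u*(x) = -(b*P/r) * x.
a, b, sigma, q, r = 0.5, 1.0, 0.3, 1.0, 1.0   # illustrative coefficients

A2 = b * b / r
B2 = -(2.0 * a + sigma * sigma)
C2 = -q
P = (-B2 + math.sqrt(B2 * B2 - 4.0 * A2 * C2)) / (2.0 * A2)  # positive root

gain = b * P / r                      # u*(x) = -gain * x
residual = A2 * P * P + B2 * P + C2   # HJB residual, should vanish
# Mean-square stability rate of the closed loop X under u*:
#   d E[X^2]/dt = (2(a - b*gain) + sigma^2) E[X^2]
ms_rate = 2.0 * (a - b * gain) + sigma * sigma
```

Note that ms_rate simplifies to -sqrt(B2² - 4·A2·C2), which is negative whenever q > 0: the value function doubles as a Lyapunov function certifying mean-square stability, mirroring the dual assurance described in the abstract.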
7. Stochastic optimal control of a coupled tri-stable energy harvester under correlated colored noises.
- Author
-
Zhang, Tingting and Jin, Yanfei
- Abstract
In this paper, the stochastic optimal control of a tri-stable energy harvester (TEH) under a standard rectifier circuit driven by correlated colored noises is considered. Guided by physical intuition, the control force is separated directly into a conservative component and a dissipative one. Then, the stationary probability density function (SPDF) and the DC power of the controlled electromechanically coupled TEH can be derived by using stochastic averaging based on energy-dependent frequency. The weighted combination of mean DC power and rectification efficiency is regarded as a performance index to transform the stochastic optimal control problem into an extremum problem of a multivariable function. The stochastic direct optimal control strategy developed in this paper avoids the trouble of dealing with complex differential equations. In contrast with the uncontrolled case, the TEH under direct optimal control achieves significant optimization of harvesting performance. The cross-correlation between the additive and multiplicative colored noises can break the symmetry of the SPDF to induce random transition. The power conversion efficiency of the harvesting system can be optimized by choosing appropriate noise intensities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. The second-order maximum principle for partially observed optimal controls.
- Author
-
Li, Mengzhen and Wu, Zhen
- Subjects
MALLIAVIN calculus ,MAXIMUM principles (Mathematics) ,STOCHASTIC systems ,STOCHASTIC control theory ,STOCHASTIC differential equations - Abstract
In this paper, the stochastic control system under partial observation for singular optimal controls in the classical sense is studied. Both the system and the observation contain noise, and the control variable is allowed to enter all the coefficient terms. Under some additional regularity assumptions, the pointwise second-order maximum principle under partial observation is obtained by Malliavin calculus. Finally, an example is given to demonstrate how to find the optimal control. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Second‐order necessary optimality conditions for discrete‐time stochastic systems.
- Author
-
Song, Teng and Yao, Yong
- Subjects
DISCRETE-time systems ,STOCHASTIC systems ,STOCHASTIC control theory ,STOCHASTIC matrices - Abstract
Summary: This paper deals with the second‐order necessary optimality conditions for discrete‐time stochastic optimal control problems under weakened convexity assumptions. Using a special variation of the control, and by virtue of a new discrete‐time backward stochastic equation, we establish a more general and constructive first‐order necessary optimality condition in the form of a global stochastic maximum principle. Moreover, by introducing a new discrete‐time backward stochastic matrix equation, the second‐order multipoint necessary optimality conditions for singular controls are derived, which cover and improve upon the classical second‐order necessary optimality conditions for discrete‐time stochastic systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. The Role of Longevity-Indexed Bond in Risk Management of Aggregated Defined Benefit Pension Scheme.
- Author
-
Zhang, Xiaoyi, Li, Yanan, and Guo, Junyi
- Subjects
INVESTMENT policy ,INFLATION-indexed bonds ,PENSION trust management ,INTEREST rates ,PENSIONS ,DYNAMIC programming ,STOCHASTIC control theory - Abstract
Defined benefit (DB) pension plans are a primary type of pension schemes with the sponsor assuming most of the risks. Longevity-indexed bonds have been used to hedge or transfer risks in pension plans. Our objective is to study an aggregated DB pension plan's optimal risk management problem focusing on minimizing the solvency risk over a finite time horizon and to investigate the investment strategies in a market, comprising a longevity-indexed bond and a risk-free asset, under stochastic nominal interest rates. Using the dynamic programming technique in the stochastic control problem, we obtain the closed-form optimal investment strategy by solving the corresponding Hamilton–Jacobi–Bellman (HJB) equation. In addition, a comparative analysis indicates that longevity-indexed bonds significantly reduce solvency risk compared to zero-coupon bonds, offering a strategic advantage in pension fund management. Besides the closed-form solution and the comparative study, another novelty of this study is the extension of the actuarial liability (AL) and normal cost (NC) definitions, and the introduction of the risk-neutral valuation of liabilities in the DB pension scheme, taking the mortality rate into account. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Chebyshev wavelet-based method for solving various stochastic optimal control problems and its application in finance
- Author
-
M. Yarahmadi and S. Yaghobipour
- Subjects
stochastic optimal control ,chebyshev wavelets ,expansion ,optimal asset allocation ,Applied mathematics. Quantitative methods ,T57-57.97 - Abstract
In this paper, a computational method based on parameterizing state and control variables is presented for solving Stochastic Optimal Control (SOC) problems. By using Chebyshev wavelets with unknown coefficients, state and control variables are parameterized, and then a stochastic optimal control problem is converted to a stochastic optimization problem. The expected cost functional of the resulting stochastic optimization problem is approximated by sample average approximation, so that the problem can be solved more easily by optimization methods. To facilitate and guarantee convergence of the presented method, a new theorem is proved. Finally, the proposed method is implemented based on a newly designed algorithm for solving one of the well-known problems in mathematical finance, the Merton portfolio allocation problem in finite horizon. The simulation results illustrate the improvement of the constructed portfolio return.
- Published
- 2024
- Full Text
- View/download PDF
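The parameterize-then-optimize pipeline described in the abstract above can be sketched in miniature. This is a toy under stated assumptions, not the paper's method: the wavelet construction is replaced by the first two Chebyshev polynomials (T0 = 1, T1 = s), the dynamics, cost, and all coefficients are illustrative, and the sample average approximation (SAA) objective is minimized by plain gradient descent.

```python
import random

# Open-loop control on [0,1] expanded in Chebyshev polynomials:
#   u(t) = c0*T0(s) + c1*T1(s),  s = 2t - 1,
# for scalar dynamics dx = u dt + sigma dW, x(0) = 0, with cost
#   J(c) = E[(x(1) - 1)^2] + ∫ u(t)^2 dt.
# Since ∫_0^1 T1(2t-1) dt = 0, the terminal state is x_i(1) = c0 + w_i
# with w_i = sigma * W_i(1), and ∫ u^2 dt = c0^2 + c1^2/3.
random.seed(1)
sigma, M = 0.5, 2000
noise = [sigma * random.gauss(0.0, 1.0) for _ in range(M)]  # SAA draws

def saa_grad(c0, c1):
    # gradient of the sample-average objective (1/M) Σ (c0+w_i-1)^2
    # plus the control-energy term c0^2 + c1^2/3
    g0 = sum(2.0 * (c0 + w - 1.0) for w in noise) / M + 2.0 * c0
    g1 = (2.0 / 3.0) * c1
    return g0, g1

c0, c1, lr = 0.0, 0.3, 0.1
for _ in range(500):
    g0, g1 = saa_grad(c0, c1)
    c0, c1 = c0 - lr * g0, c1 - lr * g1

m = sum(noise) / M
c0_opt = (1.0 - m) / 2.0   # closed-form SAA minimizer, for checking
```

The point is the conversion: once the control is a finite coefficient vector and the expectation is a sample average, the SOC problem is an ordinary finite-dimensional optimization problem, exactly as the abstract describes.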
12. A sufficient maximum principle for backward stochastic systems with mixed delays
- Author
-
Heping Ma, Hui Jian, and Yu Shi
- Subjects
stochastic optimal control ,stochastic differential equation with time delay ,noisy memory ,malliavin derivative ,Biotechnology ,TP248.13-248.65 ,Mathematics ,QA1-939 - Abstract
In this paper, we study the problem of optimal control of backward stochastic differential equations with three delays (discrete delay, moving-average delay and noisy memory). We establish the sufficient optimality condition for the stochastic system. We introduce two kinds of time-advanced stochastic differential equations as the adjoint equations, which involve the partial derivatives of the function $ f $ and its Malliavin derivatives. We also show that these two kinds of adjoint equations are equivalent. Finally, as applications, we discuss a linear-quadratic backward stochastic system and give an explicit optimal control. In particular, the stochastic differential equations with time delay are simulated by means of discretization techniques, and the effect of time delay on the optimal control result is explained.
- Published
- 2023
- Full Text
- View/download PDF
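The abstract above mentions simulating delayed stochastic differential equations by discretization. As a minimal illustration, here is an Euler–Maruyama scheme for a scalar forward SDE with a single discrete delay; this is an assumed toy equation with illustrative coefficients, much simpler than the paper's backward system with mixed delays and noisy memory.

```python
import math, random

# Euler-Maruyama for a scalar SDE with discrete delay tau:
#   dX(t) = (a*X(t) + b*X(t - tau)) dt + sigma dW(t),
# with constant history X(t) = 1 for t <= 0.
random.seed(2)
a, b, sigma, tau, T, N = -1.0, 0.5, 0.2, 0.5, 5.0, 1000
dt = T / N
d = int(round(tau / dt))          # delay measured in grid steps

x = [1.0]                          # x[k] approximates X(k * dt)
for k in range(N):
    x_now = x[-1]
    x_lag = x[k - d] if k - d >= 0 else 1.0   # delayed state (or history)
    dw = math.sqrt(dt) * random.gauss(0.0, 1.0)
    x.append(x_now + (a * x_now + b * x_lag) * dt + sigma * dw)
```

The only change relative to a plain Euler–Maruyama scheme is the buffered lookup x[k - d]; with |b| < -a the drift is stable and the path mean-reverts despite the delayed feedback.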
13. Turnpike properties for stochastic linear-quadratic optimal control problems with periodic coefficients.
- Author
-
Sun, Jingrui and Yong, Jiongmin
- Subjects
- *
STOCHASTIC control theory , *RICCATI equation - Abstract
This paper is concerned with the turnpike property for a class of stochastic linear-quadratic (LQ, for short) optimal control problems with periodic coefficients. The stability and stabilizability of the control system are studied, followed by a discussion of the existence and uniqueness of periodic solutions. A deterministic periodic LQ problem is introduced and solved, whose optimal pair, together with a pair of correction processes, serves as the turnpike limit of the stochastic problem. It is shown that the turnpike limit is periodic in the distribution sense. In the special case of constant coefficients, the turnpike limit turns out to have stationary distributions, with the expectation being the solution to a static optimization problem. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Optimal consumption, investment and life-insurance purchase under a stochastically fluctuating economy.
- Author
-
Mousa, A. S., Pinheiro, D., Pinheiro, S., and Pinto, A. A.
- Subjects
- *
STOCHASTIC differential equations , *NONLINEAR differential equations , *STOCHASTIC control theory , *DYNAMIC programming , *UTILITY functions , *STOCHASTIC processes - Abstract
We study the optimal consumption, investment and life-insurance purchase and selection strategies for a wage-earner with an uncertain lifetime who has access to a financial market comprising one risk-free security and one risky asset whose prices evolve according to linear diffusions modulated by a continuous-time stochastic process determined by an additional diffusive nonlinear stochastic differential equation. The process modulating the linear diffusions may be regarded as an indicator describing the state of the economy at a given instant of time. Additionally, we allow the Brownian motions driving each of these equations to be correlated. The life-insurance market under consideration herein consists of a fixed number of providers offering pairwise distinct contracts. We use dynamic programming techniques to characterize the solutions to the problem described above for a general family of utility functions, studying the case of discounted constant relative risk aversion utilities in more detail. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. TURNPIKE PROPERTIES FOR MEAN-FIELD LINEAR-QUADRATIC OPTIMAL CONTROL PROBLEMS.
- Author
-
JINGRUI SUN and JIONGMIN YONG
- Subjects
- *
LINEAR differential equations , *STOCHASTIC differential equations , *STOCHASTIC control theory , *RICCATI equation , *TIME perspective , *FUNCTIONAL differential equations - Abstract
This paper is concerned with an optimal control problem for a mean-field linear stochastic differential equation with a quadratic functional in the infinite time horizon. Under suitable conditions, including stabilizability, the (strong) exponential, integral, and mean-square turnpike properties for the optimal pair are established. The keys are to correctly formulate the corresponding static optimization problem and find the equations determining the correction processes. These reveal the main features of the stochastic problems, which differ significantly from the deterministic version of the theory. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. DISCRETE-TIME APPROXIMATION OF STOCHASTIC OPTIMAL CONTROL WITH PARTIAL OBSERVATION.
- Author
-
YUNZHANG LI, XIAOLU TAN, and SHANJIAN TANG
- Subjects
- *
STOCHASTIC approximation , *STOCHASTIC control theory , *DISCRETE-time systems , *MACHINE learning - Abstract
We consider a class of stochastic optimal control problems with partial observation, and study their approximation by discrete-time control problems. We establish a convergence result by using the weak convergence technique of Kushner and Dupuis [Numerical Methods for Stochastic Control Problems in Continuous Time, Springer, New York], together with the notion of relaxed control rule introduced by El Karoui, Huú Nguyen and Jeanblanc-Picqué [SIAM J. Control Optim., 26 (1988), pp. 1025--1061]. In particular, with a well chosen discrete-time control system, we obtain a first implementable numerical algorithm (with convergence) for the partially observed control problem. Moreover, our discrete-time approximation result would open the door to study convergence of more general numerical approximation methods, such as machine learning based methods. Finally, we illustrate our convergence result by numerical experiments on a partially observed control problem in a linear quadratic setting. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
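The discrete-time, partially observed control problems studied in the abstract above have a classical fully solvable special case: the scalar linear-quadratic-Gaussian (LQG) problem, where the filter and the controller decouple. The sketch below is that textbook special case with illustrative coefficients, not the paper's approximation scheme: a Kalman-filter variance recursion run to its fixed point, plus the LQR Riccati recursion giving the certainty-equivalence feedback gain.

```python
# Scalar discrete-time partially observed LQ (LQG):
#   x_{k+1} = A x_k + B u_k + w_k,  w_k ~ N(0, Q)
#   y_k     = x_k + v_k,            v_k ~ N(0, R)
# cost weights q (state) and r (control); all numbers illustrative.
A, B, Q, R = 0.9, 1.0, 0.04, 0.25
q, r = 1.0, 1.0

# Kalman filter error-variance recursion, iterated to its fixed point.
S = 1.0
for _ in range(200):
    Sp = A * A * S + Q             # predict
    K = Sp / (Sp + R)              # Kalman gain
    S = (1.0 - K) * Sp             # update

# Infinite-horizon LQR Riccati recursion for the control gain.
P = 1.0
for _ in range(200):
    P = q + A * A * P - (A * P * B) ** 2 / (r + B * B * P)
L = A * P * B / (r + B * B * P)    # certainty equivalence: u_k = -L * x_hat_k
```

In this linear-quadratic setting the separation principle makes the discrete-time problem exactly solvable, which is why the paper can use an LQ example to benchmark its convergent discrete-time approximation of the general (nonlinear) partially observed problem.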
17. Robust interplanetary trajectory design under multiple uncertainties via meta-reinforcement learning.
- Author
-
Federici, Lorenzo and Zavoli, Alessandro
- Subjects
- *
RECURRENT neural networks , *REINFORCEMENT learning , *STOCHASTIC control theory , *TRANSFER of training , *HEBBIAN memory - Abstract
This paper focuses on the application of meta-reinforcement learning to the robust design of low-thrust interplanetary trajectories in the presence of multiple uncertainties. A closed-loop control policy is used to optimally steer the spacecraft to a final target state despite the considered perturbations. The control policy is approximated by a deep recurrent neural network, trained by policy-gradient reinforcement learning on a collection of environments featuring mixed sources of uncertainty, namely dynamic uncertainty and control execution errors. The recurrent network is able to build an internal representation of the distribution of environments, thus better adapting the control to the different stochastic scenarios. The results in terms of optimality, constraint handling, and robustness on a fuel-optimal low-thrust transfer between Earth and Mars are compared with those obtained via a traditional reinforcement learning approach based on a feed-forward neural network.
• In interplanetary space missions, the spacecraft trajectory is affected by multiple uncertainties.
• Meta-reinforcement learning can be applied seamlessly to any uncertainty and dynamic model.
• A recurrent neural network is used as a history-dependent closed-loop control policy.
• The network is trained on a low-thrust transfer featuring dynamic uncertainties and control execution errors.
• Meta-reinforcement learning shows improved performance and robustness compared to standard reinforcement learning.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Chebyshev wavelet-based method for solving various stochastic optimal control problems and its application in finance.
- Author
-
Yarahmadi, M. and Yaghobipour, S.
- Subjects
CHEBYSHEV approximation ,STOCHASTIC control theory ,PARAMETERIZATION ,ALGORITHMS ,ASSET allocation ,APPROXIMATION theory - Abstract
In this paper, a computational method based on parameterizing state and control variables is presented for solving Stochastic Optimal Control (SOC) problems. By using Chebyshev wavelets with unknown coefficients, state and control variables are parameterized, and then a stochastic optimal control problem is converted to a stochastic optimization problem. The expected cost functional of the resulting stochastic optimization problem is approximated by sample average approximation, so that the problem can be solved more easily by optimization methods. To facilitate and guarantee convergence of the presented method, a new theorem is proved. Finally, the proposed method is implemented based on a newly designed algorithm for solving one of the well-known problems in mathematical finance, the Merton portfolio allocation problem in finite horizon. The simulation results illustrate the improvement of the constructed portfolio return. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. G-stochastic maximum principle for risk-sensitive control problem and its applications.
- Author
-
Dassa, Meriyam and Chala, Adel
- Subjects
MAXIMUM principles (Mathematics) ,STOCHASTIC control theory ,STOCHASTIC differential equations ,WIENER processes ,ADJOINT differential equations ,APPLIED mathematics ,STOCHASTIC analysis
- Published
- 2023
- Full Text
- View/download PDF
20. Optimal dynamic pricing and production policy for a stochastic inventory system with perishable products and inventory-level-dependent demand.
- Author
-
Luo, Xuxiang and Chu, Yuqing
- Subjects
TIME-based pricing ,STOCHASTIC systems ,STOCHASTIC control theory ,OPTIMAL control theory ,PRICE regulation ,DEMAND function ,HAMILTON-Jacobi-Bellman equation - Abstract
In this paper, we consider a joint dynamic pricing and production policy for a stochastic inventory system with perishable products. The demand is stochastic and dependent on the price and the level of the on-hand inventory. Combining dynamic pricing and production control, we build a stochastic dynamic optimization model that maximizes the total discounted profit. Applying stochastic optimal control theory, we formulate the problem of finding the optimal joint dynamic pricing and production schedule as the problem of solving a Hamilton-Jacobi-Bellman (HJB) equation. By solving the HJB equation, analytical solutions for the optimal pricing and production rate are obtained. In addition, a numerical example is given to illustrate the validity of the theoretical results, and sensitivity analysis on major system parameters is carried out. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
21. Dynamic analysis and optimal control of a stochastic COVID-19 model.
- Author
-
Zhang, Ge, Li, Zhiming, Din, Anwarud, and Chen, Tao
- Subjects
- *
STOCHASTIC control theory , *STOCHASTIC models , *OPTIMAL control theory , *VIRAL transmission , *INFECTIOUS disease transmission - Abstract
In this paper, we construct a stochastic SAIR (Susceptible–Asymptomatic–Infected–Removed) epidemic model to study the dynamics and control strategy of COVID-19. The existence and uniqueness of the global positive solution are obtained by using the Lyapunov method. By defining two new thresholds, we prove necessary conditions for extinction and for the existence of an ergodic stationary distribution, respectively. Through stochastic control theory, the optimal control strategy is obtained. Numerical simulations show the validity of the stationary distribution and optimal control. The parameters of the model are estimated by a set of real COVID-19 data. The sensitivity analysis of all parameters shows that decreasing physical interaction and screening the asymptomatic as swiftly as possible can prevent the wide spread of the virus in communities. Finally, we also display the trend of the epidemic without control strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
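Stochastic epidemic models of the kind described in the abstract above are typically simulated with an Euler–Maruyama scheme. The sketch below is a reduced SIR-type toy, not the paper's SAIR model: the asymptomatic class and the control are omitted, and all parameters are illustrative. The noise is placed on the transmission term so that the total population is conserved by construction.

```python
import math, random

# Euler-Maruyama for a stochastic SIR-type model:
#   dS = -beta*S*I dt - sigma*S*I dW
#   dI =  beta*S*I dt - gamma*I dt + sigma*S*I dW
#   dR =  gamma*I dt
# The +/- transmission terms cancel, so S + I + R stays constant.
random.seed(3)
beta, gamma, sigma = 0.4, 0.1, 0.05
S, I, R = 0.99, 0.01, 0.0
dt, steps = 0.01, 10000            # horizon t = 100

for _ in range(steps):
    dw = math.sqrt(dt) * random.gauss(0.0, 1.0)
    trans = beta * S * I * dt + sigma * S * I * dw   # new infections
    rec = gamma * I * dt                             # recoveries
    S, I, R = S - trans, I + trans - rec, R + rec
```

With beta/gamma = 4 the epidemic runs its course over the horizon; extinction versus persistence in such models is governed by noise-adjusted thresholds of exactly the type the paper derives.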
22. Optimal additional voluntary contribution in DC pension schemes to manage inadequacy risk
- Author
-
Ferreira Morici, Henrique and Vigna, Elena
- Published
- 2024
- Full Text
- View/download PDF
23. An Optimal Sustainable Production Policy for Imperfect Production System with Stochastic Demand, Price, and Machine Failure with FRW Policy Under Carbon Emission
- Author
-
Khedlekar, U. K., Kumar, Lalji, Sharma, Kajal, and Dwivedi, Vinita
- Published
- 2024
- Full Text
- View/download PDF
24. The Role of Longevity-Indexed Bond in Risk Management of Aggregated Defined Benefit Pension Scheme
- Author
-
Xiaoyi Zhang, Yanan Li, and Junyi Guo
- Subjects
longevity-indexed bond ,DB pension plan ,solvency risk management ,asset allocation ,stochastic optimal control ,HJB equation ,Insurance ,HG8011-9999 - Abstract
Defined benefit (DB) pension plans are a primary type of pension schemes with the sponsor assuming most of the risks. Longevity-indexed bonds have been used to hedge or transfer risks in pension plans. Our objective is to study an aggregated DB pension plan's optimal risk management problem focusing on minimizing the solvency risk over a finite time horizon and to investigate the investment strategies in a market, comprising a longevity-indexed bond and a risk-free asset, under stochastic nominal interest rates. Using the dynamic programming technique in the stochastic control problem, we obtain the closed-form optimal investment strategy by solving the corresponding Hamilton–Jacobi–Bellman (HJB) equation. In addition, a comparative analysis indicates that longevity-indexed bonds significantly reduce solvency risk compared to zero-coupon bonds, offering a strategic advantage in pension fund management. Besides the closed-form solution and the comparative study, another novelty of this study is the extension of the actuarial liability (AL) and normal cost (NC) definitions, and the introduction of the risk-neutral valuation of liabilities in the DB pension scheme, taking the mortality rate into account.
- Published
- 2024
- Full Text
- View/download PDF
25. A stochastic goodwill model depending on quality level and advertising.
- Author
-
Meng, Jun, Wang, Ming-hui, Yang, Ben-zhang, and Huang, Nan-jing
- Subjects
- *
STOCHASTIC models , *UTILITY theory , *STOCHASTIC control theory , *WHITE noise , *SUPPLY chains - Abstract
This paper focuses on a stochastic goodwill model involving a single manufacturer and a single retailer in a supply chain, in which both quality level and goodwill level are governed by stochastic differential equations. In both the linear-quadratic Stackelberg game and linear-quadratic cooperative game, explicit expressions are obtained for the optimal quality improvement efforts level of the manufacturer, the optimal advertising efforts level of the retailer as well as the optimal profit of the supply chain. We also derive the expectation and variance values for quality and goodwill levels, respectively. The distribution of incremental profit and subsidy are analysed in the cooperative game via the utility theory. Finally, several numerical experiments are reported to show the influence of decay rates and white noise disturbance of the quality and goodwill levels on the expectation of quality and goodwill levels. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. Optimal control of non-instantaneous impulsive second-order stochastic McKean–Vlasov evolution system with Clarke subdifferential.
- Author
-
Anukiruthika, K., Durga, N., and Muthukumar, P.
- Subjects
- *
DIFFERENTIAL evolution , *DISTRIBUTION (Probability theory) , *BROWNIAN motion , *STOCHASTIC analysis , *STOCHASTIC control theory - Abstract
The optimal control of a non-instantaneous impulsive second-order stochastic McKean–Vlasov evolution system with Clarke subdifferential and mixed fractional Brownian motion is investigated in this article. The deterministic nonlinear second-order controlled partial differential system is enriched with stochastic perturbations, non-instantaneous impulses, and a Clarke subdifferential. In particular, the nonlinearities in the system that rely on the state of the solution are allowed to rely on the corresponding probability distribution of the state. The solvability of the considered system is discussed with the help of stochastic analysis, multivalued analysis, and a multivalued fixed-point theorem. Further, the existence of an optimal control is established with the aid of Balder's theorem. Finally, an example is provided to illustrate the developed theory. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
27. Optimal social welfare policy within financial and life insurance markets.
- Author
-
Hoshiea, M., Mousa, A. S., and Pinto, A. A.
- Subjects
- *
INSURANCE companies , *PUBLIC welfare policy , *LIFE insurance , *FINANCIAL policy , *NONLINEAR differential equations - Abstract
We consider a continuous-time model for an investor whose lifetime is a random variable. We assume the investor has access to the social welfare system, the financial market and the life insurance market. The investor aims to find the optimal strategies that maximize the expected utility obtained from consumption, investing in the financial market, buying life insurance, registering in the social welfare system, the size of his estate in the event of premature death, and the size of his fortune at the time of retirement if he lives that long. We use dynamic programming techniques to derive a second-order nonlinear partial differential equation whose solution is the maximum objective function. We use the special case of discounted constant relative risk aversion utilities to find explicit solutions for the optimal strategies. Finally, we show a numerical solution for the problem under consideration and study some properties of the optimal strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
28. Nonlinear Stochastic Trajectory Optimization for Centroidal Momentum Motion Generation of Legged Robots
- Author
-
Gazar, Ahmad, Khadiv, Majid, Kleff, Sébastien, Del Prete, Andrea, Righetti, Ludovic, Siciliano, Bruno, Series Editor, Khatib, Oussama, Series Editor, Antonelli, Gianluca, Advisory Editor, Fox, Dieter, Advisory Editor, Harada, Kensuke, Advisory Editor, Hsieh, M. Ani, Advisory Editor, Kröger, Torsten, Advisory Editor, Kulic, Dana, Advisory Editor, Park, Jaeheung, Advisory Editor, Billard, Aude, editor, and Asfour, Tamim, editor
- Published
- 2023
- Full Text
- View/download PDF
29. Maximum principle for mean‐field controlled systems driven by a fractional Brownian motion.
- Author
-
Sun, Yifang
- Subjects
BROWNIAN motion ,STOCHASTIC differential equations ,FRACTIONAL differential equations ,STOCHASTIC systems ,STOCHASTIC control theory - Abstract
We study a stochastic control problem of mean-field controlled stochastic differential systems driven by a fractional Brownian motion with Hurst parameter H ∈ (1/2, 1). As a necessary condition for the optimal control we obtain a stochastic maximum principle. The associated adjoint equation is a mean-field backward stochastic differential equation driven by a fractional Brownian motion and a classical Brownian motion. Applying the stochastic maximum principle to a mean-field stochastic linear-quadratic problem, we obtain the optimal control and prove that the necessary condition for the optimality of an admissible control is also sufficient under certain assumptions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
30. Solvability of general fully coupled forward–backward stochastic difference equations with delay and applications.
- Author
-
Song, Teng
- Subjects
STOCHASTIC difference equations ,STOCHASTIC control theory ,MAXIMUM principles (Mathematics) ,DIFFERENCE equations ,STOCHASTIC systems - Abstract
A class of fully coupled forward–backward stochastic difference equations with delay (FBSDDEs) over an infinite horizon is considered in this article. By establishing a non‐homogeneous explicit relation between the forward and backward equations in terms of Riccati‐like difference equations, we derive the unique solution to the FBSDDEs under certain conditions. We then deduce that the FBSDDEs are solvable if and only if the corresponding stochastic delayed system is β‐degree open‐loop mean‐square exponentially stabilizable. Finally, as an application, the FBSDDEs are employed to demonstrate the maximum principle of the stochastic LQ optimal control problem. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
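The Riccati-like difference equations mentioned above can be illustrated in a much simpler setting than the paper's delayed stochastic equations: the scalar discrete-time Riccati recursion for a standard LQ problem, iterated to its fixed point:

```python
# Scalar discrete-time Riccati recursion for a standard LQ problem:
#   x_{k+1} = a x_k + b u_k,  cost = sum_k (q x_k^2 + r u_k^2).
# Illustrative sketch only: the paper treats Riccati-like equations for
# delayed, fully coupled forward-backward *stochastic* difference
# equations, which this deterministic toy case does not capture.

def riccati_fixed_point(a, b, q, r, tol=1e-12, max_iter=10_000):
    """Iterate P <- q + a^2 P - (a b P)^2 / (r + b^2 P) to a fixed point."""
    p = q  # terminal condition P_N = q
    for _ in range(max_iter):
        p_next = q + a**2 * p - (a * b * p) ** 2 / (r + b**2 * p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

p = riccati_fixed_point(a=1.1, b=1.0, q=1.0, r=1.0)
k = 1.1 * p / (1.0 + p)  # stationary feedback gain, u_k = -k x_k
print(p, k)              # closed loop a - b*k is stable (|a - b*k| < 1)
```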
31. Attaining stochastic optimal control over debt ratios in U.S. markets.
- Author
-
Liu, Wei-han
- Subjects
STOCHASTIC control theory ,FINANCIAL markets ,HIDDEN Markov models ,REAL estate sales ,CONSUMER credit - Abstract
We propose a refined dynamic programming model based on a hidden Markov chain formulation and a nonlinear filtering technique to calculate the optimal debt ratio for the public and private sectors under different scenarios. We then conduct an empirical analysis of the U.S. real estate and equity markets between 1991.Q1 and 2020.Q1, comparing them with the theoretical results. The analysis indicates that U.S. households and governments spent more than they could afford. While households reduced their debt ratio during times of economic distress, the public sector raised its debt ratio to stimulate the economy. The policy effect took a long time to accumulate, and the outcome fell short of what was needed to revitalize the economy in time. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
32. Efficient Resource Allocation Contracts to Reduce Adverse Events.
- Author
-
Liang, Yong, Sun, Peng, Tang, Runyu, and Zhang, Chong
- Subjects
RESOURCE allocation ,ELECTRONIC commerce ,MORAL hazard ,STOCHASTIC control theory ,CONTRACTS ,MONETARY incentives - Abstract
In "Efficient Resource Allocation Contracts to Reduce Adverse Events," Liang, Sun, Tang, and Zhang study contract design for online platforms. Motivated by the allocation of online visits to product, service, and content suppliers in the platform economy, we consider a dynamic contract design problem in which a principal (the platform) constantly determines the allocation of a resource (online visits) to multiple agents. Although agents are capable of running the business, they introduce adverse events, the frequency of which depends on each agent's effort level. We study continuous-time dynamic contracts that utilize resource allocation and monetary transfers to induce agents to exert effort and reduce the arrival rate of adverse events. In contrast to the single-agent case, in which efficiency is not achievable, we show that efficient and incentive-compatible contracts, which allocate all resources and induce agents to exert constant effort, generally exist with two or more agents. We devise an iterative algorithm that characterizes and calculates such contracts, and we specify the profit-maximizing contract for the principal. Furthermore, we provide efficient and incentive-compatible dynamic contracts that can be expressed in closed form and are therefore easy to understand and implement in practice. Funding: Y. Liang acknowledges support from the National Key R&D Program of China [Grant 2020AAA0103801] and the National Natural Science Foundation of China [Grant 71872095]. Supplemental Material: The online appendix is available at https://doi.org/10.1287/opre.2022.2322. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
33. Nonlinear Optimal Control for Stochastic Dynamical Systems
- Author
-
Manuel Lanchares and Wassim M. Haddad
- Subjects
Lyapunov theory ,stochastic optimal control ,inverse optimality ,relative stability margins ,Mathematics ,QA1-939 - Abstract
This paper presents a comprehensive framework addressing optimal nonlinear analysis and feedback control synthesis for nonlinear stochastic dynamical systems. The focus lies on establishing connections between stochastic Lyapunov theory and stochastic Hamilton–Jacobi–Bellman theory within a unified perspective. We demonstrate that the closed-loop nonlinear system’s asymptotic stability in probability is ensured through a Lyapunov function, identified as the solution to the steady-state form of the stochastic Hamilton–Jacobi–Bellman equation. This dual assurance guarantees both stochastic stability and optimality. Additionally, optimal feedback controllers for affine nonlinear systems are developed using an inverse optimality framework tailored to the stochastic stabilization problem. Furthermore, the paper derives stability margins for optimal and inverse optimal stochastic feedback regulators. Gain, sector, and disk margin guarantees are established for nonlinear stochastic dynamical systems controlled by nonlinear optimal and inverse optimal Hamilton–Jacobi–Bellman controllers.
- Published
- 2024
- Full Text
- View/download PDF
34. Densely rewarded reinforcement learning for robust low-thrust trajectory optimization.
- Author
-
Hu, Jincheng, Yang, Hongwei, Li, Shuang, and Zhao, Yingjie
- Subjects
- *
TRAJECTORY optimization , *REWARD (Psychology) , *REINFORCEMENT learning , *DETERMINISTIC algorithms , *STOCHASTIC control theory - Abstract
To overcome the time-consuming training caused by sparse reward functions in reinforcement learning, an efficient dense reward framework for robust low-thrust trajectory optimization is proposed. Dense reward functions are designed separately for the deterministic scenario and for the uncertain scenarios considered, which include state, observation, and execution uncertainties. For the uncertain scenarios, the dense reward function is designed to diminish, at each step, the deviation from the isochronous nominal state along the corresponding deterministic optimal trajectory, rendering the reward function no longer sparse and making it suitable for complex multirevolution problems. In addition, a multistage reward function for the terminal constraints of rendezvous missions is designed by incorporating exponential acceleration terms, enabling significant improvement in training efficiency as the terminal errors become small. For the deterministic scenario, a dense reward function is likewise proposed via the introduction of empirical forbidden zones and an exponential term. The effectiveness and efficiency of the proposed method are demonstrated on a simple Earth-Mars mission and a complex Earth-Venus multirevolution mission. The promising results verify the significant effect of the proposed method in speeding up the process of training an initially incapable agent into an 'expert' while guaranteeing or even improving the performance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
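The idea of densifying a sparse terminal reward by penalizing per-step deviation from a nominal state, with an exponentially sharpened terminal bonus, can be sketched generically (the weights and exponential form below are illustrative assumptions, not the paper's tuned reward design):

```python
import math

# Generic dense-reward sketch: penalize deviation from the nominal
# state at each step, plus an exponential terminal bonus that grows
# sharply as the terminal error approaches zero.  The coefficients
# w and k are illustrative assumptions, not the paper's values.

def dense_reward(state, nominal, terminal_error=None, w=1.0, k=10.0):
    deviation = math.dist(state, nominal)
    r = -w * deviation                      # per-step shaping term
    if terminal_error is not None:          # applied only at episode end
        r += math.exp(-k * terminal_error)  # exponential terminal bonus
    return r

print(dense_reward((1.0, 2.0), (1.0, 2.0)))                      # 0.0
print(dense_reward((1.0, 2.0), (1.0, 2.5), terminal_error=0.0))  # 0.5
```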
35. Finite Horizon Optimal Dividend and Reinsurance Problem Driven by a Jump-Diffusion Process with Controlled Jumps.
- Author
-
Guan, Chonghu
- Subjects
- *
VARIATIONAL inequalities (Mathematics) , *JUMP processes , *REINSURANCE , *FUNCTIONAL equations , *DIVIDENDS , *INTEGRO-differential equations - Abstract
In this paper, we discuss an optimal dividend and reinsurance problem for an insurance company facing two types of risks: unstable income and potential losses. The arrival of losses is characterized as a compound Poisson process. We assume that every possible loss can be partly reinsured. The reserve is a combination of a diffusion process and a controllable compound Poisson process. We investigate the optimal dividend and reinsurance strategy by analyzing the corresponding variational inequality on the value function. A significant difference from the existing literature is that the HJB equation in this variational inequality is a partial integro-differential equation with a functional optimization problem appearing in the integral operator. We not only prove the existence of a classical solution to the problem and the continuity, strict monotonicity, and boundedness of the dividend free boundary, but also discuss the properties of the optimal reinsurance policy, including the continuity and monotonicity of the optimal part of each possible loss covered by reinsurance, and the smoothness of the reinsurance free boundary. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
36. Optimal control for a nonlinear stochastic PDE model of cancer growth.
- Author
-
Esmaili, Sakine, Eslahchi, M. R., and Torres, Delfim F. M.
- Abstract
We study an optimal control problem for a stochastic model of tumour growth with drug application. The model consists of three stochastic hyperbolic equations describing the evolution of tumour cells, together with two stochastic parabolic equations describing the diffusion of nutrient and drug concentrations. Since such systems are subject to many uncertainties, we add stochastic terms to the deterministic model to account for random perturbations, and introduce control variables, in accordance with medical practice, to control the concentrations of drug and nutrient. In the optimal control problem, we define stochastic and deterministic cost functions and prove that the problems admit unique optimal controls. To derive necessary conditions for the optimal control variables, we derive the stochastic adjoint equations and prove that the stochastic model of tumour growth and the stochastic adjoint equations have unique solutions. To prove the theoretical results, we use a change of variable that transforms the stochastic model and adjoint equations (a.s.) into deterministic equations, and then employ the techniques used for deterministic equations to prove the existence and uniqueness of the optimal control. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
37. A Stochastic Control Approach for Constrained Stochastic Differential Games with Jumps and Regimes.
- Author
-
Savku, Emel
- Subjects
- *
DIFFERENTIAL games , *STOCHASTIC control theory , *LAGRANGE multiplier , *MARKOV processes , *INSURANCE companies , *BUSINESS insurance - Abstract
We develop an approach for two-player constrained zero-sum and nonzero-sum stochastic differential games, which are modeled by Markov regime-switching jump-diffusion processes. We provide the relations between a usual stochastic optimal control setting and a Lagrangian method. In this context, we prove corresponding theorems for two different types of constraints, which lead to real-valued and stochastic Lagrange multipliers, respectively. We then illustrate our results for a nonzero-sum game problem using the stochastic maximum principle technique. Our application is an example of cooperation between a bank and an insurance company, a popular and well-known type of business agreement called bancassurance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
38. Relationships between the maximum principle and dynamic programming for infinite dimensional stochastic control systems.
- Author
-
Chen, Liangying and Lü, Qi
- Subjects
- *
DYNAMIC programming , *STOCHASTIC systems , *MAXIMUM principles (Mathematics) , *STOCHASTIC control theory , *DISTRIBUTED parameter systems , *EVOLUTION equations , *STOCHASTIC programming - Abstract
The Pontryagin-type maximum principle and Bellman's dynamic programming principle serve as two of the most important tools for solving optimal control problems, and there is a huge literature on the relationship between them. The main purpose of this paper is to investigate the relationship between the Pontryagin-type maximum principle and the dynamic programming principle for control systems governed by stochastic evolution equations in an infinite dimensional space, with the control variables entering both the drift and the diffusion terms. To do so, we first prove the dynamic programming principle for those systems without employing martingale solutions. Then we establish the desired relationships in both the case where the associated value function is smooth and the case where it is nonsmooth. For the nonsmooth case, in particular, by employing the relaxed transposition solution, we discover the connection between the superdifferentials and subdifferentials of the value function and the first-order and second-order adjoint equations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
39. Time‐average stochastic control based on a singular local Lévy model for environmental project planning under habit formation.
- Author
-
Yoshioka, Hidekazu, Tsujimura, Motoh, Hamagami, Kunihiko, and Tomobe, Haruka
- Subjects
- *
STOCHASTIC programming , *STOCHASTIC control theory , *VERTICAL jump , *HABIT , *FINITE differences , *JUMP processes ,ENVIRONMENTAL protection planning - Abstract
This study applies the theory of stochastic control to environmental project planning aimed at counteracting the sediment starvation problem in river environments. The problem can be considered a time-average inventory problem of time-discretely controlling a continuous-time system driven by a non-smooth jump process, under habit formation that disturbs project implementation. The system is modeled so that the sediment storage dynamics are physically consistent with certain experimental results. The habit formation is modeled as simple linear dynamics and serves as a constraint on the replenishment amount of the sediment. We show that the time-average control problem is not necessarily ergodic; consequently, the effective Hamiltonian may be non-constant. Thereafter, ratcheting cases, as extreme cases of irreversible habit formation, are considered, since they are unique, exactly solvable, non-ergodic control problems. The optimality equation associated with a regularized, and hence well-defined, control problem is verified. Furthermore, a finite difference scheme is tested against the exactly solvable case and then applied to more complicated cases. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
40. Automated market makers: mean-variance analysis of LPs payoffs and design of pricing functions
- Author
-
Bergault, Philippe, Bertucci, Louis, Bouba, David, and Guéant, Olivier
- Published
- 2023
- Full Text
- View/download PDF
41. Portfolio optimization for a dynamic life insurance product using stochastic control tools.
- Author
-
سامان وهابی and امیر تیمور پایند
- Subjects
STOCHASTIC control theory ,LIFE insurance ,CONSUMPTION (Economics) ,DIFFERENTIAL calculus ,INSURANCE premiums ,UTILITY functions - Abstract
BACKGROUND AND OBJECTIVES: In this paper, a life insurance product is designed with the help of a stochastic control approach. Such products are defined so that, in exchange for premiums paid at specified times, the insurer undertakes to pay insurance benefits if the insured is alive at the end of the contract. METHODS: This research is an analytical study with a developmental-applicative purpose. The life insurance literature contains various products that differ in the type of benefit payment and the timing of their implementation, for example term life insurance, pure endowment life insurance, and mixed life insurance. Traditional insurance products with fixed benefits are quickly losing their appeal due to inflationary markets. This research focuses on the design of a pure endowment life insurance product that is linked to the investment markets. Stochastic differential calculus models are used to simulate capital market assets. All numerical results in this research were calculated with the help of Matlab and Maple software. FINDINGS: To determine the best investment choice, the optimal investment strategy was calculated, with the help of stochastic optimal control tools, for a person who has a CRRA utility function and buys this product, so that the largest benefits are paid at the end of the contract. The investment in this contract was modeled over a risk-free market, such as a bank account, and a risky market, such as stocks, which exhibits price jumps. The Merton model, a representative of models with finite activity, was used to model the risky asset, and a comparison was made across several mortality functions. CONCLUSION: The main purpose of this article is investment on behalf of the insured who buys this product. 
In the product designed in this article, the insurer undertakes to pay the premiums received, at a guaranteed rate, at the end of the contract. The insured also shares in the investment profit based on a percentage determined at the beginning of each year. The simulations show that the behavior of the optimal consumption rate is the same as in the Merton model, with the effect of price jumps clearly visible in the optimal consumption rate derived in this article. Investment results for several mortality functions are reported in the Numerical Results section. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
42. Learning-based importance sampling via stochastic optimal control for stochastic reaction networks.
- Author
-
Ben Hammouda, Chiheb, Ben Rached, Nadhir, Tempone, Raúl, and Wiechert, Sophia
- Abstract
We explore efficient estimation of statistical quantities, particularly rare event probabilities, for stochastic reaction networks. Consequently, we propose an importance sampling (IS) approach to improve the Monte Carlo (MC) estimator efficiency based on an approximate tau-leap scheme. The crucial step in the IS framework is choosing an appropriate change of probability measure to achieve substantial variance reduction. This task is typically challenging and often requires insights into the underlying problem. Therefore, we propose an automated approach to obtain a highly efficient path-dependent measure change based on an original connection in the stochastic reaction network context between finding optimal IS parameters within a class of probability measures and a stochastic optimal control formulation. Optimal IS parameters are obtained by solving a variance minimization problem. First, we derive an associated dynamic programming equation. Analytically solving this backward equation is challenging, hence we propose an approximate dynamic programming formulation to find near-optimal control parameters. To mitigate the curse of dimensionality, we propose a learning-based method to approximate the value function using a neural network, where the parameters are determined via a stochastic optimization algorithm. Our analysis and numerical experiments verify that the proposed learning-based IS approach substantially reduces MC estimator variance, resulting in a lower computational complexity in the rare event regime, compared with standard tau-leap MC estimators. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
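The variance-reduction effect of a well-chosen change of probability measure, central to the abstract above, can be seen in a textbook setting: estimating a Gaussian tail probability via a mean-shifted proposal (exponential tilting). This is a toy illustration of the measure change, far simpler than the paper's tau-leap paths of reaction networks:

```python
import math
import random

# Estimate p = P(Z > 4), Z ~ N(0, 1), by plain Monte Carlo and by
# importance sampling with a mean-shifted proposal N(4, 1).
# Likelihood ratio: phi(y) / phi(y - a) = exp(a^2/2 - a*y).
# Toy illustration of a change of measure; not the paper's scheme.

random.seed(0)
a, n = 4.0, 100_000

plain = sum(random.gauss(0, 1) > a for _ in range(n)) / n

acc = 0.0
for _ in range(n):
    y = random.gauss(a, 1)                  # sample from the proposal
    if y > a:
        acc += math.exp(a * a / 2 - a * y)  # weight by likelihood ratio
is_est = acc / n

# True value is about 3.17e-5; the IS estimate is far more accurate,
# while plain MC sees only a handful of hits (or none) in 1e5 samples.
print(plain, is_est)
```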
43. Optimal assets allocation and benefit adjustment strategy with longevity risk for target benefit pension plans.
- Author
-
Liu, Zilan, Zhang, Huanying, and He, Lei
- Subjects
PENSIONS ,ASSET allocation ,STOCHASTIC control theory ,PORTFOLIO management (Investments) ,LONGEVITY ,PENSION trusts - Abstract
This paper studies the optimal investment choices and benefit adjustment strategy for a target benefit plan (TBP) with stochastic mortality force. The pension fund is invested in a risk-free asset, a stock, and a longevity-linked asset, as a derivative of the mortality force. Using the stochastic optimal control approach, we obtain closed-form solutions for optimal portfolio choices and benefit adjustment strategy in the financial market with or without the longevity-linked asset, which minimize the combination of benefit gap from the target level and pension plan's discontinuity risk. Numerical analysis is provided to show the effects of parameters on the optimal strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
44. Stochastic optimal and time-optimal control studies for additional food provided prey–predator systems involving Holling type III functional response
- Author
-
Prakash Daliparthi Bhanu and Vamsi Dasu Krishna Kiran
- Subjects
stochastic optimal control ,time-optimal control ,holling type iii response ,biological conservation ,pest management ,37a50 ,60h10 ,60j65 ,60j70 ,Biotechnology ,TP248.13-248.65 ,Physics ,QC1-999 - Abstract
This article consists of a detailed and novel stochastic optimal control analysis of a coupled non-linear dynamical system. The state equations are modelled as an additional food-provided prey–predator system with Holling type III functional response for predator and intra-specific competition among predators. We first discuss the optimal control problem as a Lagrangian problem with a linear quadratic control. Second, we consider an optimal control problem in the time-optimal control setting. We initially establish the existence of optimal controls for both these problems and later characterize these optimal controls using the Stochastic maximum principle. Further numerical simulations are performed based on stochastic forward-backward sweep methods for realizing the theoretical findings. The results obtained in these optimal control problems are discussed in the context of biological conservation and pest management.
- Published
- 2023
- Full Text
- View/download PDF
45. Data-Driven Tube-Based Stochastic Predictive Control
- Author
-
Sebastian Kerz, Johannes Teutsch, Tim Brudigam, Marion Leibold, and Dirk Wollherr
- Subjects
Data-driven control ,predictive control for linear systems ,stochastic optimal control ,uncertain systems ,Control engineering systems. Automatic machinery (General) ,TJ212-225 ,Technology - Abstract
A powerful result from behavioral systems theory known as the fundamental lemma allows for predictive control akin to Model Predictive Control (MPC) for linear time-invariant (LTI) systems with unknown dynamics purely from data. While most data-driven predictive control literature focuses on robustness with respect to measurement noise, only a few works consider exploiting probabilistic information of disturbances for performance-oriented control as in stochastic MPC. This work proposes a novel data-driven stochastic predictive control scheme for chance-constrained LTI systems subject to measurement noise and additive stochastic disturbances. In order to render the otherwise stochastic and intractable optimal control problem deterministic, our approach leverages ideas from tube-based MPC by decomposing the state into a deterministic nominal state driven by inputs and a stochastic error state affected by disturbances. Satisfaction of original chance constraints is guaranteed by tightening nominal constraints probabilistically with respect to additive disturbances and robustly with respect to measurement noise. The resulting data-driven receding horizon optimal control problem is lightweight, recursively feasible, and renders the closed loop input-to-state stable in the presence of both additive disturbances and measurement noise. We demonstrate the effectiveness of the proposed approach in a simulation example.
- Published
- 2023
- Full Text
- View/download PDF
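The probabilistic constraint tightening underlying tube-based stochastic MPC can be sketched for a scalar state: a chance constraint P(z + e ≤ x_max) ≥ 1 − ε, with nominal state z and Gaussian error e, becomes a tightened deterministic constraint on z. This is a generic sketch of the tightening step only, not the paper's data-driven formulation:

```python
from statistics import NormalDist

# Tighten the scalar chance constraint P(z + e <= x_max) >= 1 - eps,
# where e ~ N(0, sigma^2) is the stochastic error state of the tube.
# The nominal state z then obeys z <= x_max - margin deterministically.
# Generic tube-MPC tightening sketch; parameter values are illustrative.

def tightened_bound(x_max: float, sigma: float, eps: float) -> float:
    margin = NormalDist().inv_cdf(1 - eps) * sigma
    return x_max - margin

print(tightened_bound(x_max=1.0, sigma=0.1, eps=0.05))  # ≈ 0.8355
```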
46. An optimal control problem without control costs
- Author
-
Mario Lefebvre
- Subjects
stochastic optimal control ,diffusion processes ,first-passage time ,dynamic programming ,partial differential equation ,Biotechnology ,TP248.13-248.65 ,Mathematics ,QA1-939 - Abstract
A two-dimensional diffusion process is controlled until it enters a given subset of $ \mathbb{R}^2 $. The aim is to find the control that minimizes the expected value of a cost function in which there are no control costs. The optimal control can be expressed in terms of the value function, which gives the smallest value that the expected cost can take. To obtain the value function, one can make use of dynamic programming to find the differential equation it satisfies. This differential equation is a non-linear second-order partial differential equation. We find explicit solutions to this non-linear equation, subject to the appropriate boundary conditions, in important particular cases. The method of similarity solutions is used.
- Published
- 2023
- Full Text
- View/download PDF
47. Stochastic time-optimal control and sensitivity studies for additional food provided prey-predator systems involving Holling type-IV functional response
- Author
-
D. Bhanu Prakash and D. K. K. Vamsi
- Subjects
stochastic optimal control ,time-optimal control ,Holling type-IV response ,biological conservation ,pest management ,Brownian motion ,Applied mathematics. Quantitative methods ,T57-57.97 ,Probabilities. Mathematical statistics ,QA273-280 - Abstract
In this study we consider an additional food provided prey-predator model exhibiting Holling type-IV functional response, incorporating the combined effects of both continuous white noise and discontinuous Lévy noise. We prove the existence and uniqueness of global positive solutions for the proposed model. We perform a stochastic sensitivity analysis for each of the parameters in a chosen range. Later we conduct time-optimal control studies with respect to the quality and quantity of additional food as control variables. Making use of the arrow condition of the sufficient stochastic maximum principle, we characterize the optimal quality and optimal quantity of additional food. We then analyze the sensitivity of these control variables with respect to each of the model parameters. Numerical results are given to illustrate the theoretical findings, with applications in biological conservation and pest management. Finally, we briefly study the influence of the noise on the dynamics of the model.
- Published
- 2023
- Full Text
- View/download PDF
48. Stochastic optimal control with Contingent Convertible Bond in banking industry
- Author
-
Asma Khadimallah and Fathi Abid
- Subjects
contingent convertible bond ,stochastic optimal control ,asset allocation strategy ,bank capital structure ,optimization problem ,power utility ,Finance ,HG1-9999 ,Mathematics ,QA1-939 - Abstract
This paper has potential implications for bank management. We examine a bank capital structure with contingent convertible debt to improve financial stability. This type of debt converts to equity when the bank faces financial difficulties and a conversion trigger occurs. We use a leverage ratio, introduced in Basel III, to trigger conversion instead of traditional capital ratios. We formulate an optimization problem in which a bank chooses an asset allocation strategy to maximize the expected utility of the bank's asset value. Our study presents an application of stochastic optimal control theory to a banking portfolio choice problem. By applying the dynamic programming principle to derive the HJB equation, we define and solve the optimization problem in the power utility case. The numerical results show that the evolution of the optimal asset allocation strategy is strongly affected by the realization of the stochastic variables characterizing the economy. We carry out a sensitivity analysis with respect to risk aversion, time, and volatility. We also find that the optimal asset allocation strategy is relatively sensitive to risk aversion, and that the allocation to CoCo and equity decreases as the investment horizon increases. Finally, the sensitivity analysis highlights the importance of dynamic considerations in optimal asset allocation based on the stochastic characteristics of investment opportunities.
- Published
- 2022
- Full Text
- View/download PDF
49. Efficient state estimation strategies for stochastic optimal control of financial risk problems
- Author
-
Yue Yuin Lim, Sie Long Kek, and Kok Lay Teo
- Subjects
financial risk ,stochastic optimal control ,state estimation ,kalman filter ,control law design ,Finance ,HG1-9999 ,Statistics ,HA1-4737 - Abstract
In this paper, a financial risk model, which is formulated from the risk management process of financial markets, is studied. By considering the presence of Gaussian white noise, the financial risk model is reformulated as a stochastic optimal control problem. On this basis, two efficient computational approaches for state estimation, which are the extended Kalman filter (EKF) and unscented Kalman filter (UKF) approaches, are applied. Later, based on the state estimate given by the EKF and UKF approaches, a linear feedback control policy is designed from the stationary condition. For illustration, some parameter values and the initial conditions of the financial risk model are used for the simulation of the stochastic optimal control problem. From the results, it is noticed that the UKF algorithm provides a better state estimate with a smaller value of the sum of squared errors (SSE) as compared to the SSE given by the EKF algorithm. Thus, the estimated output trajectory has a high accuracy that is close to the real output. Moreover, the control effort assists in estimating the state dynamics at the minimum cost. In conclusion, the efficiency of the computational approaches for optimal control of the financial risk model has been well presented.
- Published
- 2022
- Full Text
- View/download PDF
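A minimal scalar Kalman filter (the linear special case of the EKF/UKF used in the abstract above) illustrates how filtering reduces the sum of squared errors (SSE) relative to raw measurements. The dynamics and noise levels below are illustrative, not the paper's financial risk model:

```python
import random

# Scalar linear system  x_{k+1} = a x_k + w,   y_k = x_k + v,
# with w ~ N(0, q) and v ~ N(0, r).  Standard Kalman recursion;
# parameter values are illustrative assumptions.
random.seed(1)
a, q, r = 0.95, 0.01, 0.25
x, x_hat, p = 1.0, 0.0, 1.0
sse_filter = sse_raw = 0.0

for _ in range(2000):
    # Simulate the true state and a noisy measurement.
    x = a * x + random.gauss(0, q ** 0.5)
    y = x + random.gauss(0, r ** 0.5)
    # Predict.
    x_hat, p = a * x_hat, a * a * p + q
    # Update with the Kalman gain.
    k = p / (p + r)
    x_hat, p = x_hat + k * (y - x_hat), (1 - k) * p
    sse_filter += (x_hat - x) ** 2
    sse_raw += (y - x) ** 2

print(sse_filter < sse_raw)  # the filtered estimate has smaller SSE
```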
50. Optimal Life Insurance and Annuity Demand with Jump Diffusion and Regime Switching
- Author
-
Zhang, Jinhui, Purcal, Sachi, Wei, Jiaqin, and Terzioğlu, M. Kenan, editor
- Published
- 2022
- Full Text
- View/download PDF