
Stabilization of Nonlinear Discrete-Time Systems to Target Measures Using Stochastic Feedback Laws

Authors:
Karthik Elamvazhuthi
Shiba Biswal
Spring Berman
Source:
IEEE Transactions on Automatic Control, 66:1957-1972
Publication Year:
2021
Publisher:
Institute of Electrical and Electronics Engineers (IEEE), 2021.

Abstract

In this article, we address the problem of stabilizing a discrete-time deterministic nonlinear control system to a target invariant measure using time-invariant stochastic feedback laws. This problem can be viewed as an extension of the problem of designing the transition probabilities of a Markov chain so that the process is exponentially stabilized to a target stationary distribution. Alternatively, it can be seen as an extension of the classical control problem of asymptotically stabilizing a discrete-time system to a single point, which corresponds to the Dirac measure in the measure stabilization framework. We assume that the target measure is supported on the entire state space of the system and is absolutely continuous with respect to the Lebesgue measure. Under the condition that the system is locally controllable at every point in the state space within one time step, we show that the associated measure stabilization problem is well-posed. Given this well-posedness result, we then frame an infinite-dimensional convex optimization problem to construct feedback control laws that stabilize the system to the target invariant measure while maximizing the rate of convergence. We validate our optimization approach with numerical simulations of two-dimensional linear and nonlinear discrete-time control systems.
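The Markov-chain interpretation in the abstract has a well-known finite-state analogue: the fastest-mixing reversible Markov chain problem, in which transition probabilities are chosen by convex optimization so that a given target distribution is stationary and the second-largest eigenvalue modulus, which governs the exponential convergence rate, is minimized. The sketch below poses that finite analogue in CVXPY; it is illustrative only and is not the paper's infinite-dimensional formulation. The state count, the target distribution pi, and the solver choice are all assumptions.

```python
import numpy as np
import cvxpy as cp

# Hypothetical finite-state setup: 5 states and an arbitrary target distribution.
n = 5
pi = np.array([0.10, 0.20, 0.30, 0.25, 0.15])
q = np.sqrt(pi)

# Transition matrix to be designed (elementwise nonnegative).
P = cp.Variable((n, n), nonneg=True)

constraints = [
    P @ np.ones(n) == np.ones(n),            # each row is a probability distribution
    np.diag(pi) @ P == (np.diag(pi) @ P).T,  # detailed balance, so pi is stationary
]

# Under detailed balance, S = D^{1/2} P D^{-1/2} with D = diag(pi) is symmetric,
# with leading eigenvector q and eigenvalue 1. Removing that eigenspace and
# minimizing the spectral norm minimizes the second-largest eigenvalue modulus,
# i.e., it maximizes the exponential rate of convergence to pi.
S = np.diag(q) @ P @ np.diag(1.0 / q)
slem = cp.sigma_max(S - np.outer(q, q))

problem = cp.Problem(cp.Minimize(slem), constraints)
problem.solve(solver=cp.SCS)

print("optimized second-largest eigenvalue modulus:", problem.value)
print("designed transition matrix:")
print(np.round(P.value, 3))
```

Restricting to reversible chains keeps the design problem a semidefinite program; the paper's setting, by contrast, works over continuous state spaces with measures absolutely continuous with respect to Lebesgue measure, so this sketch only conveys the flavor of the convex rate-maximization step.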

Details

ISSN:
2334-3303 and 0018-9286
Volume:
66
Database:
OpenAIRE
Journal:
IEEE Transactions on Automatic Control
Accession number:
edsair.doi...........193d397cb8d80dde541c1fac038daf85