
How does a stochastic optimization/approximation algorithm adapt to a randomly evolving optimum/root with jump Markov sample paths.

Authors :
Yin, G.
Ion, C.
Krishnamurthy, V.
Source :
Mathematical Programming. Aug 2009, Vol. 120 Issue 1, p67-99. 33p.
Publication Year :
2009

Abstract

Stochastic optimization/approximation algorithms are widely used to recursively estimate the optimum of a suitable function, or its root, under noisy observations when this optimum or root is a constant or evolves randomly according to slowly time-varying continuous sample paths. In comparison, this paper analyzes the asymptotic properties of stochastic optimization/approximation algorithms for recursively estimating the optimum or root when it evolves rapidly with nonsmooth (jump-changing) sample paths. The resulting problem falls into the category of regime-switching stochastic approximation algorithms with two time scales. Motivated by emerging applications in wireless communications and system identification, we analyze the asymptotic behavior of such algorithms. Our analysis assumes that the noisy observations contain a (nonsmooth) jump process modeled by a discrete-time Markov chain whose transition frequency varies much faster than the adaptation rate of the stochastic optimization algorithm. Using stochastic averaging, we prove convergence of the algorithm. The rate of convergence of the algorithm is obtained via bounds on the estimation errors and diffusion approximations. Remarks on improving the convergence rates through iterate averaging, and on limit mean dynamics represented by differential inclusions, are also presented. [ABSTRACT FROM AUTHOR]
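The setting described in the abstract can be illustrated with a minimal sketch (not the authors' algorithm; all parameter values below are hypothetical): the true root jumps between regimes according to a fast Markov chain, while a slow stochastic approximation iterate tracks it from noisy observations. Because the chain switches much faster than the step size adapts, stochastic averaging suggests the iterate converges to the root averaged under the chain's stationary distribution.

```python
import random

def simulate(num_steps=20000, mu=0.01, seed=0):
    """Two-time-scale tracking sketch: the true root theta_n jumps
    between two regimes via a fast Markov chain; a slow stochastic
    approximation iterate x_n tracks the stationary-average root."""
    rng = random.Random(seed)
    roots = [0.0, 2.0]               # hypothetical root value in each regime
    p_switch = 0.4                   # fast switching: O(1) per step, >> mu
    state = 0                        # current regime of the jump Markov chain
    x = 5.0                          # initial estimate, far from both roots
    for _ in range(num_steps):
        if rng.random() < p_switch:  # regime-switching jump
            state = 1 - state
        theta = roots[state]
        noise = rng.gauss(0.0, 0.5)
        y = (theta - x) + noise      # noisy observation of the tracking error
        x += mu * y                  # slow stochastic approximation update
    return x
```

For this symmetric two-state chain the stationary distribution is uniform, so the averaged limit is the midpoint of the two roots (here 1.0); the iterate settles near it rather than following each individual jump, which is the averaging effect the abstract refers to.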

Details

Language :
English
ISSN :
0025-5610
Volume :
120
Issue :
1
Database :
Academic Search Index
Journal :
Mathematical Programming
Publication Type :
Academic Journal
Accession number :
37371815
Full Text :
https://doi.org/10.1007/s10107-007-0145-1