
Efficient Methods for Non-stationary Online Learning

Authors:
Zhao, Peng
Xie, Yan-Feng
Zhang, Lijun
Zhou, Zhi-Hua
Publication Year:
2023

Abstract

Non-stationary online learning has drawn much attention in recent years. In particular, dynamic regret and adaptive regret have been proposed as two principled performance measures for online convex optimization in non-stationary environments. To optimize them, a two-layer online ensemble is usually deployed to handle the inherent uncertainty of the non-stationarity: a group of base-learners is maintained, and a meta-algorithm is employed to track the best one on the fly. However, the two-layer structure raises concerns about computational complexity -- such methods typically maintain $\mathcal{O}(\log T)$ base-learners simultaneously for a $T$-round online game and thus perform multiple projections onto the feasible domain per round, which becomes the computational bottleneck when the domain is complicated. In this paper, we present efficient methods for optimizing dynamic regret and adaptive regret that reduce the number of projections per round from $\mathcal{O}(\log T)$ to $1$. Moreover, the resulting algorithms require only one gradient query and one function evaluation per round. Our technique hinges on the reduction mechanism developed in parameter-free online learning and requires non-trivial twists on non-stationary online methods. Empirical studies verify our theoretical findings.

Comment: preliminary conference version appeared at NeurIPS 2022; this extended version improves the presentation, further investigates the interval dynamic regret, and adds two applications (online non-stochastic control and online PCA)
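The two-layer recipe the abstract refers to is concrete enough to sketch. Below is a minimal, self-contained Python illustration of the standard baseline structure whose cost the paper reduces: $\mathcal{O}(\log T)$ online-gradient-descent base-learners on a geometric grid of step sizes, combined by a Hedge-style meta-algorithm, with every base-learner projecting its iterate each round. This is a generic textbook construction under assumed choices (a Euclidean-ball domain, linearized losses for the meta-algorithm), not the authors' exact algorithm.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto {x : ||x|| <= radius}.

    This cheap projection stands in for the (possibly expensive)
    projection onto a complicated feasible domain."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

class TwoLayerEnsemble:
    """Baseline two-layer ensemble: O(log T) OGD base-learners tracked by
    a Hedge meta-algorithm. Each base-learner projects per round, which is
    the O(log T)-projections bottleneck the paper removes."""

    def __init__(self, dim, T, radius=1.0):
        self.radius = radius
        # Geometric grid of step sizes -> O(log T) base-learners.
        self.etas = [radius / np.sqrt(T) * (2 ** i)
                     for i in range(int(np.ceil(np.log2(T))) + 1)]
        self.xs = [np.zeros(dim) for _ in self.etas]   # base decisions
        n = len(self.etas)
        self.weights = np.full(n, 1.0 / n)             # meta weights
        self.meta_lr = np.sqrt(np.log(n) / T)          # Hedge learning rate

    def predict(self):
        # Meta-algorithm outputs the weighted combination of base decisions.
        return sum(w * x for w, x in zip(self.weights, self.xs))

    def update(self, grad):
        """One gradient query per round: `grad` is the loss gradient at the
        combined decision, reused as a linear surrogate for both layers."""
        losses = np.array([grad @ x for x in self.xs])
        self.weights *= np.exp(-self.meta_lr * (losses - losses.min()))
        self.weights /= self.weights.sum()
        # Each base-learner takes an OGD step and projects: O(log T)
        # projections per round in total.
        self.xs = [project_ball(x - eta * grad, self.radius)
                   for x, eta in zip(self.xs, self.etas)]

# Usage sketch: quadratic losses tracking a drifting (non-stationary) target.
T, dim = 128, 5
learner = TwoLayerEnsemble(dim=dim, T=T)
rng = np.random.default_rng(0)
for t in range(T):
    x_t = learner.predict()                # decision for round t
    u_t = rng.standard_normal(dim) * 0.1   # toy drifting comparator
    grad = x_t - u_t                       # gradient of f_t(x) = ||x - u_t||^2 / 2
    learner.update(grad)
```

At a high level, the paper's reduction lets the base-learners run on a cheap surrogate domain, where their updates need no costly projection, and projects only the single combined decision onto the true feasible domain, cutting the per-round projections from $\mathcal{O}(\log T)$ to $1$; the sketch above shows only the baseline structure that this reduction accelerates.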

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2309.08911
Document Type:
Working Paper