
Achieving Better Regret against Strategic Adversaries

Authors:
Dinh, Le Cong
Nguyen, Tri-Dung
Zemkoho, Alain
Tran-Thanh, Long
Publication Year:
2023

Abstract

We study online learning problems in which the learner has extra knowledge about the adversary's behaviour, i.e., game-theoretic settings where the opponent typically follows a no-external-regret learning algorithm. Under this assumption, we propose two new online learning algorithms, Accurate Follow the Regularized Leader (AFTRL) and Prod-Best Response (Prod-BR), that aggressively exploit this extra knowledge while retaining the no-regret property in the worst case, where the extra information is inaccurate. Specifically, AFTRL achieves $O(1)$ external regret or $O(1)$ \emph{forward regret} against a no-external-regret adversary, compared with the $O(\sqrt{T})$ \emph{dynamic regret} of Prod-BR. To the best of our knowledge, ours is the first algorithm to consider forward regret and to achieve $O(1)$ regret against strategic adversaries. When playing zero-sum games with Accurate Multiplicative Weights Update (AMWU), a special case of AFTRL, we achieve \emph{last-round convergence} to the Nash equilibrium. We also provide numerical experiments that support our theoretical results. In particular, we demonstrate that our methods achieve significantly better regret bounds and rates of last-round convergence than the state of the art (e.g., Multiplicative Weights Update (MWU) and its optimistic counterpart, OMWU).
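
For readers unfamiliar with the baseline, the following is a minimal sketch of one round of the standard Multiplicative Weights Update (MWU) rule that the abstract compares against. The function name mwu_step and the learning rate eta are illustrative choices, not taken from the paper; the AMWU variant, which additionally exploits knowledge of the adversary's behaviour, is not reproduced here.

import numpy as np

def mwu_step(weights, losses, eta=0.1):
    """One round of the standard Multiplicative Weights Update (MWU).

    weights: current probability distribution over actions (1-D array)
    losses:  loss incurred by each action this round, assumed in [0, 1]
    eta:     learning rate (a hypothetical default; the paper tunes its own)
    """
    # Exponentially down-weight each action in proportion to its loss.
    new_weights = weights * np.exp(-eta * losses)
    # Renormalize so the weights remain a probability distribution.
    return new_weights / new_weights.sum()

# Example: three actions, uniform start, one round of observed losses.
w = np.ones(3) / 3
w = mwu_step(w, np.array([0.2, 0.9, 0.5]))
print(w)  # probability mass shifts toward the low-loss first action

Iterating this update against an arbitrary loss sequence yields the familiar $O(\sqrt{T})$ external regret; the paper's contribution is to improve on this when the adversary is itself a no-regret learner.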

Details

Database:
OAIster
Publication Type:
Electronic Resource
Accession number:
edsoai.on1381601960
Document Type:
Electronic Resource