
Minimax Optimal Quantile and Semi-Adversarial Regret via Root-Logarithmic Regularizers

Authors:
Negrea, Jeffrey
Bilodeau, Blair
Campolongo, Nicolò
Orabona, Francesco
Roy, Daniel M.
Source:
NeurIPS 2021
Publication Year:
2021

Abstract

Quantile (and, more generally, KL) regret bounds, such as those achieved by NormalHedge (Chaudhuri, Freund, and Hsu 2009) and its variants, relax the goal of competing against the best individual expert to only competing against a majority of experts on adversarial data. More recently, the semi-adversarial paradigm (Bilodeau, Negrea, and Roy 2020) provides an alternative relaxation of adversarial online learning by considering data that may be neither fully adversarial nor stochastic (i.i.d.). We achieve the minimax optimal regret in both paradigms using FTRL with separate, novel, root-logarithmic regularizers, both of which can be interpreted as yielding variants of NormalHedge. We extend existing KL regret upper bounds, which hold uniformly over target distributions, to possibly uncountable expert classes with arbitrary priors; provide the first full-information lower bounds for quantile regret on finite expert classes (which are tight); and provide an adaptively minimax optimal algorithm for the semi-adversarial paradigm that adapts to the true, unknown constraint faster, leading to uniformly improved regret bounds over existing methods.

Comment: 30 pages, 2 figures. Jeffrey Negrea and Blair Bilodeau are equal-contribution authors. Updated citations.
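The algorithmic tool the abstract refers to is follow-the-regularized-leader (FTRL) over the probability simplex: at each round the learner plays the distribution that minimizes the cumulative loss plus a regularization term. The sketch below illustrates only this generic FTRL template, not the paper's method: the root-logarithmic regularizers from the paper are not reproduced here, the placeholder regularizer is standard negative entropy (which recovers exponential weights/Hedge), and the use of SciPy's SLSQP solver and the learning-rate parameter eta are implementation assumptions for illustration.

import numpy as np
from scipy.optimize import minimize

def ftrl_weights(cumulative_losses, regularizer, eta=1.0):
    """Return p in the simplex minimizing <p, L_t> + (1/eta) * R(p)."""
    n = len(cumulative_losses)
    objective = lambda p: p @ cumulative_losses + regularizer(p) / eta
    constraints = [{"type": "eq", "fun": lambda p: np.sum(p) - 1.0}]
    bounds = [(1e-12, 1.0)] * n          # keep weights strictly positive
    result = minimize(objective, np.full(n, 1.0 / n), method="SLSQP",
                      bounds=bounds, constraints=constraints)
    return result.x

# Placeholder regularizer: negative Shannon entropy (gives Hedge/exp. weights).
# The paper instead designs novel root-logarithmic regularizers; they are not
# reproduced in this sketch.
neg_entropy = lambda p: np.sum(p * np.log(p))

cum_losses = np.zeros(3)
for t in range(10):
    p = ftrl_weights(cum_losses, neg_entropy, eta=0.5)  # play FTRL distribution
    losses = np.random.rand(3)                          # adversary reveals losses
    cum_losses += losses                                # accumulate for next round

Swapping neg_entropy for a different convex regularizer changes the induced algorithm; the paper's contribution is choosing regularizers that yield minimax optimal quantile and semi-adversarial regret.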

Details

Database:
arXiv
Journal:
NeurIPS 2021
Publication Type:
Report
Accession number:
edsarx.2110.14804
Document Type:
Working Paper