
Generalized Mixability via Entropic Duality

Authors :
Reid, M.D.
Frongillo, R.M.
Williamson, R.C.
Mehta, N.A. (Nishant)
Editors :
Grünwald, P.D. (Peter)
Hazan, E.
Kale, S.
Source :
Journal of Machine Learning Research vol. 40 no. 28, pp. 1-22
Publication Year :
2015

Abstract

Mixability is a property of a loss which characterizes when constant regret is possible in the game of prediction with expert advice. We show that a key property of mixability generalizes, and the $\exp$ and $\log$ operations present in the usual theory are not as special as one might have thought. In doing so we introduce a more general notion of $\Phi$-mixability where $\Phi$ is a general entropy (i.e., any convex function on probabilities). We show how a property shared by the convex dual of any such entropy yields a natural algorithm (the minimizer of a regret bound) which, analogous to the classical Aggregating Algorithm, is guaranteed a constant regret when used with $\Phi$-mixable losses. We characterize which $\Phi$ have non-trivial $\Phi$-mixable losses and relate $\Phi$-mixability and its associated Aggregating Algorithm to potential-based methods, a Blackwell-like condition, mirror descent, and risk measures from finance. We also define a notion of "dominance" between different entropies in terms of bounds they guarantee and conjecture that classical mixability gives optimal bounds, for which we provide some supporting empirical evidence.
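
For context, a minimal sketch of the classical definition the abstract generalizes, in standard prediction-with-expert-advice notation (generic symbols from the literature, not necessarily the paper's own):

% A loss \ell is \eta-mixable (for some \eta > 0) if, for every distribution
% q = (q_1, \dots, q_n) over n experts and every tuple of expert predictions
% a_1, \dots, a_n, there exists a single prediction a^* such that
\[
  \ell(a^{*}, y) \;\le\; -\frac{1}{\eta} \log \sum_{i=1}^{n} q_i \, e^{-\eta\, \ell(a_i, y)}
  \qquad \text{for every outcome } y .
\]
% The right-hand side is a log-sum-exp "mix" of the experts' losses; up to
% sign and scaling conventions, log-sum-exp is the convex dual of the
% (1/\eta)-scaled Shannon entropy. The abstract's \Phi-mixability replaces
% this particular entropy with an arbitrary convex function \Phi on
% probabilities.

The classical Aggregating Algorithm maintains exponential weights over experts and predicts so as to match this mixed loss; as the abstract notes, the generalized algorithm arises analogously from the convex dual of $\Phi$.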

Details

Database :
OAIster
Notes :
application/pdf, English
Publication Type :
Electronic Resource
Accession number :
edsoai.on1251877901