1. Anytime Sequential Halving in Monte-Carlo Tree Search
- Authors
Sagers, Dominic; Winands, Mark H. M.; Soemers, Dennis J. N. J.
- Subjects
Computer Science - Machine Learning; Computer Science - Artificial Intelligence
- Abstract
Monte-Carlo Tree Search (MCTS) typically uses multi-armed bandit (MAB) strategies designed to minimize cumulative regret, such as UCB1, as its selection strategy. However, in the root node of the search tree it is more sensible to minimize simple regret. Previous work has proposed using Sequential Halving as the selection strategy in the root node, as it performs better in theory with respect to simple regret. However, Sequential Halving requires the budget of iterations to be predetermined, which is often impractical. This paper proposes an anytime version of the algorithm, which can be halted at any arbitrary time and still return a satisfactory result, while being designed to approximate the behavior of Sequential Halving. Empirical results on synthetic MAB problems and ten different board games demonstrate that the algorithm's performance is competitive with Sequential Halving and UCB1 (and their analogues in MCTS).
- Comment
Accepted by the Computers and Games 2024 conference
- Published
2024
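For context, standard Sequential Halving (the fixed-budget baseline that the paper's anytime variant approximates) can be sketched as follows on a synthetic Bernoulli bandit. This is a minimal illustrative sketch, not the paper's implementation; the arm means, budget, and helper names are hypothetical.

```python
import math
import random

def sequential_halving(pull, num_arms, budget):
    """Sequential Halving for best-arm identification (simple-regret setting).

    pull(i) returns a stochastic reward for arm i. The total budget is split
    evenly across ceil(log2(num_arms)) rounds; each round pulls every
    surviving arm equally, then discards the worse half by empirical mean.
    Note the budget must be known in advance -- the limitation the paper's
    anytime variant addresses.
    """
    arms = list(range(num_arms))
    rounds = math.ceil(math.log2(num_arms))
    totals = [0.0] * num_arms   # cumulative reward per arm
    counts = [0] * num_arms     # pull count per arm
    for _ in range(rounds):
        pulls_per_arm = max(1, budget // (len(arms) * rounds))
        for i in arms:
            for _ in range(pulls_per_arm):
                totals[i] += pull(i)
                counts[i] += 1
        # Keep the better half (rounded up) by empirical mean.
        arms.sort(key=lambda i: totals[i] / counts[i], reverse=True)
        arms = arms[: max(1, math.ceil(len(arms) / 2))]
    return arms[0]

# Hypothetical synthetic MAB: Bernoulli arms with these success probabilities.
means = [0.3, 0.5, 0.45, 0.7, 0.2, 0.6, 0.55, 0.65]
rng = random.Random(42)
best = sequential_halving(
    lambda i: 1.0 if rng.random() < means[i] else 0.0,
    num_arms=len(means),
    budget=4000,
)
```

In the MCTS use case described in the abstract, `pull(i)` would correspond to running one MCTS iteration through child `i` of the root, with UCB1 still used as the selection strategy deeper in the tree.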