
Single-Agent Optimization Through Policy Iteration Using Monte-Carlo Tree Search

Authors:
Seify, Arta
Buro, Michael
Publication Year:
2020

Abstract

The combination of Monte-Carlo Tree Search (MCTS) and deep reinforcement learning is state-of-the-art in two-player perfect-information games. In this paper, we describe a search algorithm that uses a variant of MCTS enhanced by 1) a novel action-value normalization mechanism for games with potentially unbounded rewards (which is the case in many optimization problems), 2) a virtual loss function that enables effective search parallelization, and 3) a policy network, trained by generations of self-play, that guides the search. We gauge the effectiveness of our method in "SameGame"---a popular single-player test domain. Our experimental results indicate that our method outperforms baseline algorithms on several board sizes. Additionally, it is competitive with state-of-the-art search algorithms on a public set of positions.

Comment: Poster presentation at RL in Games Workshop, AAAI 2020
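To illustrate the first enhancement, the sketch below shows one common way to normalize action values during MCTS selection when rewards are unbounded: track the minimum and maximum values observed so far and rescale mean values into [0, 1] before combining them with the policy-prior exploration term. This is a minimal, hypothetical sketch in the style of min-max normalization used by related MCTS systems; the class and function names are illustrative and not taken from the paper.

```python
import math

class Node:
    """A search-tree node holding visit statistics and a policy prior."""
    def __init__(self, prior=1.0):
        self.prior = prior        # policy-network prior for reaching this node
        self.visit_count = 0
        self.value_sum = 0.0
        self.children = {}        # action -> Node

    def value(self):
        # Mean backed-up value; 0.0 for unvisited nodes.
        return self.value_sum / self.visit_count if self.visit_count else 0.0

class MinMaxStats:
    """Tracks the min/max value seen in the tree to rescale Q into [0, 1]."""
    def __init__(self):
        self.minimum = math.inf
        self.maximum = -math.inf

    def update(self, value):
        self.minimum = min(self.minimum, value)
        self.maximum = max(self.maximum, value)

    def normalize(self, value):
        if self.maximum > self.minimum:
            return (value - self.minimum) / (self.maximum - self.minimum)
        return value  # no spread observed yet; leave the value unscaled

def puct_score(parent, child, stats, c_puct=1.25):
    # Exploitation uses the *normalized* mean value, so unbounded scores
    # (e.g. SameGame rewards) cannot swamp the exploration term.
    q = stats.normalize(child.value()) if child.visit_count else 0.0
    u = c_puct * child.prior * math.sqrt(parent.visit_count) / (1 + child.visit_count)
    return q + u

def select_child(parent, stats):
    """Pick the child maximizing the prior-guided, normalized score."""
    return max(parent.children.items(),
               key=lambda kv: puct_score(parent, kv[1], stats))
```

In a full search loop, `stats.update(...)` would be called with every value backed up through the tree, so the normalization range adapts as larger scores are discovered.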

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2005.11335
Document Type:
Working Paper