
Lifetime policy reuse and the importance of task capacity

Authors:
Bossens, David M.
Sobey, Adam J.
Publication Year:
2021

Abstract

A long-standing challenge in artificial intelligence is lifelong reinforcement learning, where learners are given many tasks in sequence and must transfer knowledge between tasks while avoiding catastrophic forgetting. Policy reuse and other multi-policy reinforcement learning techniques can learn multiple tasks but may generate many policies. This paper presents two novel contributions, namely 1) Lifetime Policy Reuse, a model-agnostic policy reuse algorithm that avoids generating many policies by optimising a fixed number of near-optimal policies through a combination of policy optimisation and adaptive policy selection; and 2) the task capacity, a measure of the maximal number of tasks that a policy can accurately solve. Comparing two state-of-the-art base-learners, the results demonstrate the importance of Lifetime Policy Reuse and task-capacity-based pre-selection on an 18-task partially observable Pacman domain and a Cartpole domain of up to 125 tasks.
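The core idea described in the abstract, maintaining a fixed library of policies and adaptively selecting which one to apply and train on each incoming task, can be illustrated with a minimal sketch. This is not the authors' implementation; the class name, the epsilon-greedy selection rule, and the running-average return estimates are all illustrative assumptions standing in for the paper's adaptive policy selection mechanism.

```python
import random


class LifetimePolicyReuseSketch:
    """Illustrative sketch (not the paper's code): keep a fixed library of
    K policies and, per task, adaptively select which policy to use and
    train based on running estimates of each policy's return on that task."""

    def __init__(self, num_policies, epsilon=0.1, lr=0.1):
        self.num_policies = num_policies  # fixed K: no new policies are ever created
        self.epsilon = epsilon            # exploration rate for policy selection
        self.lr = lr                      # step size for the return estimates
        # value[task][k] = running estimate of policy k's return on that task
        self.value = {}

    def select_policy(self, task):
        """Epsilon-greedy selection over the fixed policy library."""
        estimates = self.value.setdefault(task, [0.0] * self.num_policies)
        if random.random() < self.epsilon:
            return random.randrange(self.num_policies)
        return max(range(self.num_policies), key=lambda k: estimates[k])

    def update(self, task, k, episode_return):
        """After running (and optimising) policy k on the task, fold the
        observed episode return into its estimate for that task."""
        estimates = self.value[task]
        estimates[k] += self.lr * (episode_return - estimates[k])
```

In a full system, `select_policy` would be interleaved with a base-learner's policy-optimisation step on the selected policy; here only the selection bookkeeping is shown.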

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2106.01741
Document Type:
Working Paper
Full Text:
https://doi.org/10.3233/AIC-230040