1. Associative Memory Based Experience Replay for Deep Reinforcement Learning
- Authors
Li, Mengyuan; Kazemi, Arman; Laguna, Ann Franchesca; Hu, X. Sharon
- Subjects
Computer Science - Hardware Architecture; Computer Science - Emerging Technologies; Computer Science - Machine Learning
- Abstract
Experience replay is an essential component in deep reinforcement learning (DRL): it stores past experiences and samples them for the agent to learn from in real time. Recently, prioritized experience replay (PER) has proven powerful and is widely deployed in DRL agents. However, implementing PER on traditional CPU or GPU architectures incurs significant latency overhead due to its frequent and irregular memory accesses. This paper proposes a hardware-software co-design approach: an associative memory (AM) based PER, AMPER, with an AM-friendly priority sampling operation. AMPER replaces the widely used, time-costly tree-traversal-based priority sampling in PER while preserving the learning performance. Further, we design an in-memory computing hardware architecture based on AM to support AMPER by leveraging parallel in-memory search operations. AMPER shows comparable learning performance while achieving a 55x to 270x latency improvement when running on the proposed hardware compared to the state-of-the-art PER running on a GPU.
- Comment
9 pages, 9 figures. Accepted by the 41st International Conference on Computer-Aided Design (ICCAD), 2022, San Diego.
- Published
2022
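For context on what AMPER replaces: conventional PER implementations sample experiences in proportion to their priorities by walking a sum tree from root to leaf, which costs O(log n) data-dependent memory accesses per sample, the irregular access pattern the abstract identifies as the latency bottleneck. Below is a minimal Python sketch of that tree-traversal sampling; the `SumTree` class, its array layout, and the example priorities are illustrative assumptions, not code from the paper.

```python
import random

class SumTree:
    """Minimal sum tree for proportional priority sampling, as used in
    conventional PER. Each leaf holds one experience's priority; internal
    nodes store subtree sums. Capacity is assumed to be a power of two
    for this sketch. Illustrative only, not the paper's implementation."""

    def __init__(self, capacity):
        self.capacity = capacity             # number of leaves (experiences)
        self.tree = [0.0] * (2 * capacity)   # index 1 is the root; 0 unused

    def update(self, idx, priority):
        """Set the priority of leaf `idx` and propagate the change upward."""
        pos = idx + self.capacity
        delta = priority - self.tree[pos]
        while pos >= 1:
            self.tree[pos] += delta
            pos //= 2

    def sample(self):
        """Draw a leaf index with probability proportional to its priority,
        descending root-to-leaf: O(log n) irregular memory accesses."""
        s = random.uniform(0.0, self.tree[1])  # tree[1] holds the total priority
        pos = 1
        while pos < self.capacity:             # stop once pos is a leaf
            left = 2 * pos
            if s <= self.tree[left]:
                pos = left                     # descend into the left subtree
            else:
                s -= self.tree[left]           # skip the left subtree's mass
                pos = left + 1
        return pos - self.capacity

# Example: store four experiences with different priorities and sample one.
tree = SumTree(capacity=4)
for i, p in enumerate([1.0, 0.5, 2.0, 0.1]):
    tree.update(i, p)
print(tree.sample())  # index 2 is drawn most often (highest priority)
```

Every `sample()` call above chases pointers down the tree, and each step depends on the previous one, so the accesses cannot be batched or predicted. AMPER's contribution, per the abstract, is an AM-friendly priority sampling that sidesteps this traversal entirely by using parallel in-memory search.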