Robustness Assessment of Asynchronous Advantage Actor-Critic Based on Dynamic Skewness and Sparseness Computation: A Parallel Computing View.
- Source :
- Journal of Computer Science & Technology (10009000); Oct 2021, Vol. 36 Issue 5, p1002-1021, 20p
- Publication Year :
- 2021
-
Abstract
- Reinforcement learning as autonomous learning is greatly driving artificial intelligence (AI) development to practical applications. Having demonstrated the potential to significantly improve synchronously parallel learning, the parallel computing based asynchronous advantage actor-critic (A3C) opens a new door for reinforcement learning. Unfortunately, the acceleration's influence on A3C robustness has been largely overlooked. In this paper, we perform the first robustness assessment of A3C based on parallel computing. By perceiving the policy's action, we construct a global matrix of action probability deviation and define two novel measures of skewness and sparseness to form an integral robustness measure. Based on such static assessment, we then develop a dynamic robustness assessing algorithm through situational whole-space state sampling of changing episodes. Extensive experiments with different combinations of agent number and learning rate are implemented on an A3C-based pathfinding application, demonstrating that our proposed robustness assessment can effectively measure the robustness of A3C, which can achieve an accuracy of 83.3%. [ABSTRACT FROM AUTHOR]
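- The abstract names the skewness and sparseness measures but does not give their formulas, so the short Python sketch below only illustrates one plausible way to score a states-by-actions matrix of action-probability deviations. Fisher's moment skewness and Hoyer's sparseness are used here as stand-in definitions, and every function name, weight, and the combination rule is an assumption for illustration, not the paper's actual method.

  import numpy as np

  def deviation_matrix(baseline_probs, perturbed_probs):
      # Global matrix of action-probability deviations (states x actions)
      # between a baseline policy and a policy evaluated under perturbation.
      return np.abs(np.asarray(perturbed_probs, dtype=float)
                    - np.asarray(baseline_probs, dtype=float))

  def skewness(d):
      # Fisher's moment coefficient of skewness over all deviations
      # (stand-in for the paper's skewness measure).
      x = d.ravel()
      mu, sigma = x.mean(), x.std()
      return 0.0 if sigma == 0 else float(np.mean(((x - mu) / sigma) ** 3))

  def sparseness(d):
      # Hoyer's sparseness in [0, 1]: 1 if all deviation mass sits in a
      # single entry, 0 if it is spread evenly (stand-in definition).
      x = np.abs(d.ravel())
      n = x.size
      if n < 2:
          return 0.0
      l1, l2 = x.sum(), np.sqrt((x ** 2).sum())
      if l2 == 0:
          return 1.0
      return float((np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1))

  def robustness_score(d, w_skew=0.5, w_sparse=0.5):
      # Toy integral measure: weighted combination of the two statistics;
      # the weights and the linear combination are illustrative assumptions.
      return w_skew * skewness(d) + w_sparse * sparseness(d)

  if __name__ == "__main__":
      rng = np.random.default_rng(0)
      base = rng.dirichlet(np.ones(4), size=10)   # 10 states, 4 actions
      pert = rng.dirichlet(np.ones(4), size=10)   # hypothetical perturbed policy
      print(robustness_score(deviation_matrix(base, pert)))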
- Subjects :
- ARTIFICIAL intelligence
PARALLEL programming
ALGORITHMS
ASYNCHRONOUS learning
Details
- Language :
- English
- ISSN :
- 10009000
- Volume :
- 36
- Issue :
- 5
- Database :
- Complementary Index
- Journal :
- Journal of Computer Science & Technology (10009000)
- Publication Type :
- Academic Journal
- Accession number :
- 153243272
- Full Text :
- https://doi.org/10.1007/s11390-021-1217-z