
A fuzzy Actor–Critic reinforcement learning network

Authors: Xuesong Wang, Yuhu Cheng, Jianqiang Yi
Source: Information Sciences. 177:3764-3781
Publication Year: 2007
Publisher: Elsevier BV, 2007.

Abstract

One of the difficulties encountered in applying reinforcement learning methods to real-world problems is their limited ability to cope with large-scale or continuous spaces. To overcome the curse of dimensionality that arises when continuous state or action spaces are discretized, a new fuzzy Actor-Critic reinforcement learning network (FACRLN) based on a fuzzy radial basis function (FRBF) neural network is proposed. The architecture of FACRLN is realized by a four-layer FRBF neural network that simultaneously approximates both the action value function of the Actor and the state value function of the Critic. The Actor and the Critic networks share the input, rule and normalized layers of the FRBF network, which reduces the learning system's storage requirements and avoids repeated computation of the rule-unit outputs. Moreover, the FRBF network adjusts its structure and parameters adaptively through a novel self-organizing approach, according to the complexity of the task and the progress of learning, which keeps the network economically sized. Experimental studies on cart-pole balancing control illustrate the performance and applicability of the proposed FACRLN.
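
The abstract describes the architecture in enough detail to sketch its structure: a four-layer fuzzy RBF network whose input, rule and normalized layers are shared, with separate linear output heads for the Actor and the Critic. The Python sketch below illustrates that shared-layer design with Gaussian rule units and simple TD(0)-style weight updates. The class name, initialization, learning rates and update rules are illustrative assumptions, not the paper's exact algorithm, and the self-organizing rule insertion/pruning is omitted.

```python
import numpy as np

class FuzzyActorCriticSketch:
    """Sketch of a four-layer fuzzy RBF network shared by an Actor and a Critic.

    Layer 1: input (state variables)
    Layer 2: fuzzy rule units (Gaussian radial basis activations)
    Layer 3: normalization of the rule firing strengths
    Layer 4: two linear heads on the shared features:
             an Actor head (continuous action) and a Critic head (state value V(s)).
    """

    def __init__(self, state_dim, n_rules, seed=0):
        rng = np.random.default_rng(seed)
        # Rule centres and widths (layer-2 parameters); random initialization is an assumption.
        self.centers = rng.uniform(-1.0, 1.0, size=(n_rules, state_dim))
        self.widths = np.full((n_rules, state_dim), 0.5)
        # Consequent weights of the two output heads (layer-4 parameters).
        self.actor_w = np.zeros(n_rules)
        self.critic_w = np.zeros(n_rules)

    def features(self, state):
        """Shared layers 1-3: Gaussian rule activations, then normalization."""
        z = (np.asarray(state, dtype=float) - self.centers) / self.widths
        firing = np.exp(-0.5 * np.sum(z * z, axis=1))
        return firing / (firing.sum() + 1e-12)

    def forward(self, state):
        """One pass through the shared layers feeds both heads."""
        phi = self.features(state)
        action = float(self.actor_w @ phi)   # Actor output
        value = float(self.critic_w @ phi)   # Critic output V(s)
        return action, value, phi

    def td_update(self, phi, td_error, noise, alpha_actor=0.05, alpha_critic=0.10):
        """TD(0)-style updates; the paper's exact rules may differ.

        td_error = r + gamma * V(s') - V(s); `noise` is the exploration
        perturbation that was added to the Actor output when acting.
        """
        self.critic_w += alpha_critic * td_error * phi
        self.actor_w += alpha_actor * td_error * noise * phi


# Illustrative single learning step on a 4-dimensional cart-pole-like state.
if __name__ == "__main__":
    net = FuzzyActorCriticSketch(state_dim=4, n_rules=9)
    gamma = 0.95
    s = np.array([0.0, 0.0, 0.05, 0.0])      # cart position/velocity, pole angle/velocity
    a, v, phi = net.forward(s)
    noise = np.random.default_rng(1).normal(0.0, 0.1)
    r, s_next = -0.01, s + 0.02               # placeholder reward and next state
    _, v_next, _ = net.forward(s_next)
    delta = r + gamma * v_next - v             # temporal-difference error
    net.td_update(phi, delta, noise)
```

Because both heads read the same normalized firing strengths, each state requires only one evaluation of the rule layer, which is the saving in storage and repeated computation that the abstract refers to.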

Details

ISSN: 0020-0255
Volume: 177
Database: OpenAIRE
Journal: Information Sciences
Accession number: edsair.doi...........bb604210b751c9ec6bf00f7cd525a9e4
Full Text: https://doi.org/10.1016/j.ins.2007.03.012