
Low-Cost Multi-Agent Navigation via Reinforcement Learning With Multi-Fidelity Simulator

Authors :
Jiantao Qiu
Chao Yu
Weiling Liu
Tianxiang Yang
Jincheng Yu
Yu Wang
Huazhong Yang
Source :
IEEE Access, Vol. 9, pp. 84773-84782 (2021)
Publication Year :
2021
Publisher :
IEEE, 2021.

Abstract

In recent years, reinforcement learning (RL) has been widely used to solve multi-agent navigation tasks, and high simulator fidelity is critical to narrowing the gap between simulation and real-world tasks. However, high-fidelity simulators have high sampling costs and bottleneck the training of model-free RL algorithms. Hence, we propose a Multi-Fidelity Simulator framework for training Multi-Agent Reinforcement Learning (MFS-MARL), which reduces the total data cost by using samples generated by a low-fidelity simulator. We apply depth-first search on the low-fidelity simulator to obtain locally feasible policies, which serve as expert policies that help the original reinforcement learning algorithm explore. We built a multi-vehicle simulator with variable fidelity levels to test the proposed method and compared it with the vanilla Soft Actor-Critic (SAC) and expert-actor methods. The results show that our method can effectively obtain locally feasible policies and achieves a 23% cost reduction in multi-agent navigation tasks.
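
The mechanism described in the abstract can be made concrete with a small sketch: a depth-first search over a cheap low-fidelity transition model returns a locally feasible action sequence, and its first action occasionally replaces the learner's own exploratory action before the transition is stored for off-policy training (SAC in the paper). The Python below is an illustrative toy only; the names (LowFidelitySim, dfs_expert_actions, explore_step) and the 1-D navigation setup are assumptions for illustration, not the paper's code.

import random
from typing import List, Optional

# Toy 1-D navigation setup (illustrative values, not from the paper).
ACTIONS = [-0.5, 0.0, 0.5, 1.0]   # coarse action set searched by DFS
GOAL, OBSTACLE, RADIUS = 3.0, 2.0, 0.25


class LowFidelitySim:
    """Cheap transition model used only for planning."""
    def step_model(self, x: float, a: float) -> float:
        return x + a

    def collides(self, x: float) -> bool:
        return abs(x - OBSTACLE) < RADIUS

    def is_feasible(self, x: float) -> bool:
        return x >= GOAL


def dfs_expert_actions(sim: LowFidelitySim, x: float,
                       depth: int = 4) -> Optional[List[float]]:
    """Depth-first search for a collision-free, locally feasible action sequence."""
    if depth == 0:
        return [] if sim.is_feasible(x) else None
    for a in ACTIONS:
        nxt = sim.step_model(x, a)
        if sim.collides(nxt):
            continue
        tail = dfs_expert_actions(sim, nxt, depth - 1)
        if tail is not None:
            return [a] + tail
    return None


def explore_step(sim: LowFidelitySim, x: float, expert_prob: float = 0.3) -> float:
    """With some probability follow the DFS expert; otherwise explore randomly
    (the random choice stands in for sampling from the SAC policy)."""
    if random.random() < expert_prob:
        plan = dfs_expert_actions(sim, x)
        if plan:
            return plan[0]
    return random.choice(ACTIONS)


if __name__ == "__main__":
    sim, x = LowFidelitySim(), 0.0
    for _ in range(20):
        a = explore_step(sim, x)
        x = sim.step_model(x, a)   # in real use: step the high-fidelity env,
                                   # store the transition, and run a SAC update
    print(f"final position: {x:.1f} (goal at {GOAL})")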

Details

Language :
English
ISSN :
2169-3536
Volume :
9
Database :
Directory of Open Access Journals
Journal :
IEEE Access
Publication Type :
Academic Journal
Accession number :
edsdoj.00262cce78a64649a8011d404b48d90b
Document Type :
article
Full Text :
https://doi.org/10.1109/ACCESS.2021.3085328