
Traffic signal priority control based on shared experience multi‐agent deep reinforcement learning.

Authors :
Wang, Zhiwen
Yang, Kangkang
Li, Long
Lu, Yanrong
Tao, Yufei
Source :
IET Intelligent Transport Systems (Wiley-Blackwell); Jul2023, Vol. 17 Issue 7, p1363-1379, 17p
Publication Year :
2023

Abstract

Deep Reinforcement Learning (DRL) has demonstrated great potential for Adaptive Traffic Signal Control (ATSC) tasks at single intersections. In the multi‐agent environment of a transportation network, cooperative learning among multiple agents has become an active research topic. Based on a distributed control model, this paper presents a hybrid reward function model for the dynamic density method of intersections, which prioritizes emergency vehicles (EMV) while maximizing the traffic efficiency of social vehicles, and addresses the sparse‐reward problem caused by the ambiguous guidance relationship between the multi‐agent Deep Reinforcement Learning (MDRL) state and the reward function in urban road network scenarios. In addition, building on the multi‐agent A2C (MA2C) algorithm, this paper proposes Shared Experience MA2C (SEMA2C), which shares experience between agents. In a transportation network, each intersection, represented by an agent, has similar task objectives. The SEMA2C algorithm takes the current agent as the main body of self‐learning and uses the principle of importance sampling to learn from the experience data of agents located at adjacent intersections. The experimental results show that the proposed SEMA2C performs well in multi‐agent traffic signal control tasks and outperforms comparable algorithms. [ABSTRACT FROM AUTHOR]
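The core mechanism the abstract describes, learning off‐policy from a neighboring agent's experience via importance sampling, can be illustrated with a minimal sketch. This is not the authors' implementation; the logits, advantage value, and function names below are hypothetical, and the sketch only shows the importance‐ratio correction applied to a single transition:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over action logits
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def importance_ratio(self_logits, neighbor_logits, action):
    """rho = pi_self(a|s) / pi_neighbor(a|s): corrects for the fact that
    the neighbor's transition was sampled under the neighbor's policy,
    not the learning agent's own policy."""
    p_self = softmax(self_logits)[action]
    p_neighbor = softmax(neighbor_logits)[action]
    return p_self / p_neighbor

# Hypothetical action logits for the same observed state under the
# learner's policy and an adjacent intersection's policy.
self_logits = np.array([0.2, 1.0, -0.5])
neighbor_logits = np.array([0.5, 0.5, 0.0])
action = 1        # signal-phase action the neighbor actually took
advantage = 0.8   # hypothetical advantage estimate for that transition

rho = importance_ratio(self_logits, neighbor_logits, action)
# Shared-experience actor term: the neighbor's policy-gradient
# contribution is reweighted by rho before being added to the update.
logp_self = np.log(softmax(self_logits)[action])
shared_actor_loss = -rho * logp_self * advantage
```

In an A2C‐style setup, a term like `shared_actor_loss` would be summed with the agent's on‐policy loss over transitions gathered from adjacent intersections, so that similar agents can reuse each other's experience without biasing the gradient.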

Details

Language :
English
ISSN :
1751-956X
Volume :
17
Issue :
7
Database :
Complementary Index
Journal :
IET Intelligent Transport Systems (Wiley-Blackwell)
Publication Type :
Academic Journal
Accession number :
165048000
Full Text :
https://doi.org/10.1049/itr2.12328