Driver Modeling Through Deep Reinforcement Learning and Behavioral Game Theory
- Source:
- IEEE Transactions on Control Systems Technology
- Publication Year:
- 2022
- Publisher:
- Aperta, 2022.
Abstract
- In this paper, a synergistic combination of deep reinforcement learning and hierarchical game theory is proposed as a modeling framework for behavioral predictions of drivers in highway driving scenarios. The need for a modeling framework that can address multiple human-human and human-automation interactions, where all the agents can be modeled as decision makers simultaneously, is the main motivation behind this work. Such a modeling framework may be utilized for the validation and verification of autonomous vehicles: it is estimated that, for an autonomous vehicle to reach the same safety level as cars with human drivers, millions of miles of driving tests are required. The modeling framework presented in this paper may be used in a high-fidelity traffic simulator consisting of multiple human decision makers to reduce the time and effort spent on testing by allowing safe and quick assessment of self-driving algorithms. To demonstrate the fidelity of the proposed modeling framework, game-theoretical driver models are compared with real human driver behavior patterns extracted from traffic data.
- Comment: 22 pages, 19 figures
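The record gives only the abstract, so the following is a minimal illustrative sketch rather than the authors' implementation. It assumes the "hierarchical game theory" in the abstract refers to level-k-style reasoning: a non-strategic level-0 driver policy is fixed, and each level-k policy is obtained by reinforcement learning against traffic populated with level-(k-1) drivers. Tabular Q-learning on a toy one-dimensional gap model stands in for the paper's deep RL and highway environment; every name, state variable, and reward term below is a hypothetical simplification.

```python
# Illustrative sketch only (assumptions noted above); not the paper's code.
import random
from collections import defaultdict

ACTIONS = ["maintain", "accelerate", "decelerate"]

def level0_policy(gap):
    # Hypothetical non-strategic baseline: open the gap when it gets small.
    return "decelerate" if gap <= 1 else "maintain"

def step(gap, ego_action, other_action):
    # Toy dynamics: two interacting drivers share one discretized gap (0..5).
    # Accelerating closes the gap, decelerating opens it; gap 0 is a collision.
    delta = {"accelerate": -1, "maintain": 0, "decelerate": 1}
    gap = max(0, min(5, gap + delta[ego_action] + delta[other_action]))
    reward = -10 if gap == 0 else (1 if ego_action == "accelerate" else 0)
    return gap, reward

def train_level_k(opponent_policy, episodes=5000, eps=0.1, alpha=0.1, gamma=0.9):
    """Tabular Q-learning best response against a fixed lower-level policy."""
    Q = defaultdict(float)
    for _ in range(episodes):
        gap = random.randint(1, 5)
        for _ in range(20):
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda x: Q[(gap, x)]))
            next_gap, r = step(gap, a, opponent_policy(gap))
            best_next = max(Q[(next_gap, x)] for x in ACTIONS)
            Q[(gap, a)] += alpha * (r + gamma * best_next - Q[(gap, a)])
            gap = next_gap
    return lambda g: max(ACTIONS, key=lambda x: Q[(g, x)])

# Build the hierarchy: level-1 best-responds to level-0, level-2 to level-1, and so on.
policies = [level0_policy]
for k in range(1, 3):
    policies.append(train_level_k(policies[k - 1]))

print({g: policies[2](g) for g in range(6)})  # learned level-2 actions per gap
```

The structural point the sketch tries to capture is that each level is trained against a frozen population of lower-level agents, so every agent is a decision maker rather than a scripted obstacle; replacing the toy gap model and Q-table with a highway simulator and a deep Q-network would bring it closer to what the abstract describes.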
- Subjects:
- FOS: Computer and information sciences
Computer Science - Machine Learning (cs.LG)
Computer Science - Multiagent Systems (cs.MA)
Control and Systems Engineering
Electrical and Electronic Engineering
Computer science
Driver modeling
Reinforcement learning (RL)
Deep learning
Behavioral game theory
Game theory (GT)
Autonomous vehicles (AVs)
Control algorithm
Simulation
Traffic simulator
Details
- Database:
- OpenAIRE
- Journal:
- IEEE Transactions on Control Systems Technology
- Accession number:
- edsair.doi.dedup.....585768a3152cfb47fcf5fc9f710d7100