BAGAIL: Multi-modal imitation learning from imbalanced demonstrations.
- Source :
- Neural networks : the official journal of the International Neural Network Society [Neural Netw] 2024 Jun; Vol. 174, pp. 106251. Date of Electronic Publication: 2024 Mar 19.
- Publication Year :
- 2024
Abstract
- Expert demonstrations in imitation learning often contain different behavioral modes, e.g., driving modes such as driving on the left, keeping the lane, and driving on the right in driving tasks. Although most existing multi-modal imitation learning methods allow learning from demonstrations of multiple modes, they place strict constraints on the data of each mode, generally requiring nearly equal data ratios across modes. Otherwise, these methods tend to fall into mode collapse or learn only the data distribution of the mode with the largest data volume. To address this problem, an algorithm that balances the real-fake loss and the classification loss by modifying the output of the discriminator, referred to as BAlanced Generative Adversarial Imitation Learning (BAGAIL), is proposed. With this modification, the generator is rewarded only for generating realistic trajectories with the correct modes. BAGAIL is therefore able to deal with imbalanced expert demonstrations and learn each mode efficiently. The learning process of BAGAIL is divided into a pre-training stage and an imitation learning stage. During the pre-training stage, BAGAIL initializes the generator parameters by means of conditional Behavioral Cloning, laying the foundation for the direction of parameter optimization. During the imitation learning stage, BAGAIL optimizes the parameters through the adversarial game between the generator and the modified discriminator, so that the resulting policy successfully learns the distribution of the imbalanced expert data. Experiments showed that BAGAIL accurately distinguishes different behavioral modes from imbalanced demonstrations. Moreover, the learning result for each mode is close to the expert standard and more stable than that of other multi-modal imitation learning methods.
Competing Interests: Declaration of competing interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
(Copyright © 2024 Elsevier Ltd. All rights reserved.)
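The abstract only outlines the mechanism, so the following is a minimal sketch of one way the described idea could look in code: a discriminator with a real/fake head and a mode-classification head, whose losses are balanced so the generator is rewarded only for trajectories that are both realistic and consistent with the intended mode. The two-head architecture, the balancing weight `lam`, the network sizes, and the reward form are assumptions for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch of the "balanced" discriminator idea; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BalancedDiscriminator(nn.Module):
    def __init__(self, obs_dim, act_dim, n_modes, hidden=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.real_head = nn.Linear(hidden, 1)        # real vs. generated
        self.mode_head = nn.Linear(hidden, n_modes)  # behavioral mode

    def forward(self, obs, act):
        h = self.body(torch.cat([obs, act], dim=-1))
        return self.real_head(h), self.mode_head(h)

def discriminator_loss(disc, exp_obs, exp_act, exp_mode, gen_obs, gen_act, lam=1.0):
    # Real/fake (adversarial) loss balanced against mode-classification loss;
    # `lam` is a hypothetical balancing coefficient.
    real_logit, real_mode_logits = disc(exp_obs, exp_act)
    fake_logit, _ = disc(gen_obs, gen_act)
    adv = (F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit))
           + F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit)))
    cls = F.cross_entropy(real_mode_logits, exp_mode)  # classify expert modes
    return adv + lam * cls

def policy_reward(disc, obs, act, intended_mode):
    # Reward is high only when a state-action pair looks real AND is classified
    # as the mode the policy was conditioned on.
    with torch.no_grad():
        real_logit, mode_logits = disc(obs, act)
        realness = torch.sigmoid(real_logit).squeeze(-1)
        mode_prob = F.softmax(mode_logits, dim=-1).gather(1, intended_mode.unsqueeze(1)).squeeze(1)
    return torch.log(realness + 1e-8) + torch.log(mode_prob + 1e-8)
```

Under this reading, the pre-training stage mentioned in the abstract would simply fit a mode-conditioned policy to the expert state-action pairs with a supervised (conditional Behavioral Cloning) loss before the adversarial stage above begins.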
- Subjects :
- Algorithms
- Policy
- Reward
- Imitative Behavior
- Learning
Details
- Language :
- English
- ISSN :
- 1879-2782
- Volume :
- 174
- Database :
- MEDLINE
- Journal :
- Neural networks : the official journal of the International Neural Network Society
- Publication Type :
- Academic Journal
- Accession number :
- 38552352
- Full Text :
- https://doi.org/10.1016/j.neunet.2024.106251