Optimizing Constrained Guidance Policy With Minimum Overload Regularization.
- Source :
- IEEE Transactions on Circuits & Systems. Part I: Regular Papers. Jul 2022, Vol. 69, Issue 7, p2994-3005. 12p.
- Publication Year :
- 2022
Abstract
- Using a reinforcement learning (RL) algorithm to optimize a guidance law can address non-idealities in complex environments. However, the optimization is difficult due to the huge state-action space, unstable training, and high demands on expertise. In this paper, the constrained guidance policy of a neural guidance system is optimized using an improved RL algorithm motivated by ideas from traditional model-based guidance methods. A novel optimization objective with minimum overload regularization is developed to directly restrain the guidance policy from generating redundant missile maneuvers. Moreover, a bi-level curriculum learning scheme is designed to facilitate the policy optimization. Experimental results show that the proposed minimum overload regularization significantly reduces the vertical overloads of the missile, and the bi-level curriculum learning further accelerates the optimization of the guidance policy.
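The abstract describes augmenting the RL optimization objective with a penalty on commanded overload. The sketch below is an illustrative interpretation only, not the paper's implementation: it assumes a policy-gradient surrogate loss, a policy that outputs commanded vertical overload at each guidance step, and a hypothetical regularization weight `reg_weight`; the function name `guidance_loss` and all tensor shapes are likewise assumed for illustration.

```python
# Hypothetical sketch of a minimum-overload-regularized RL objective
# (illustration based on the abstract, not the authors' code).
import torch

def guidance_loss(log_probs, advantages, overload_cmds, reg_weight=0.01):
    """log_probs, advantages, overload_cmds: 1-D tensors over one trajectory.

    First term: standard policy-gradient surrogate (maximize expected return).
    Second term: mean squared commanded overload, discouraging redundant
    maneuvers in the spirit of 'minimum overload regularization'.
    reg_weight is an illustrative hyperparameter, not a value from the paper.
    """
    pg_term = -(log_probs * advantages).mean()
    overload_term = reg_weight * overload_cmds.pow(2).mean()
    return pg_term + overload_term

# Usage with dummy trajectory data
T = 64
log_probs = torch.randn(T, requires_grad=True)
advantages = torch.randn(T)
overload_cmds = torch.randn(T, requires_grad=True)
loss = guidance_loss(log_probs, advantages, overload_cmds)
loss.backward()
```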
- Subjects :
- REINFORCEMENT learning
- MATHEMATICAL regularization
- TASK analysis
Details
- Language :
- English
- ISSN :
- 1549-8328
- Volume :
- 69
- Issue :
- 7
- Database :
- Academic Search Index
- Journal :
- IEEE Transactions on Circuits & Systems. Part I: Regular Papers
- Publication Type :
- Periodical
- Accession number :
- 157745397
- Full Text :
- https://doi.org/10.1109/TCSI.2022.3163463