A mixture varying-gain dynamic learning network for solving nonlinear and nonconvex constrained optimization problems
- Author
Jianyong Zhu, Rongxiu Lu, Zhijun Zhang, Zhenmin Zhu, Hui Yang, Guanhua Qiu, and Xianzhi Deng
- Subjects
Mathematical optimization, Optimization problem, Karush–Kuhn–Tucker conditions, Computer science, Cognitive Neuroscience, Optimization and Control, Function (mathematics), Projection (linear algebra), Computer Science Applications, Nonlinear system, Industrial engineering and automation, Artificial Intelligence, Robustness (computer science), Control theory, Convergence (mathematics), Image processing
- Abstract
The nonlinear and nonconvex optimization problem (NNOP) is a challenging problem in control theory and applications. In this paper, a novel mixture varying-gain dynamic learning network (MVG-DLN) is proposed to solve NNOPs with inequality constraints. First, the NNOP is transformed into a set of equations via the Karush–Kuhn–Tucker (KKT) conditions and the projection theorem, from which the neuro-dynamic function is obtained. Second, a time-varying convergence parameter is employed to achieve a faster convergence speed. Third, an integral term is introduced to strengthen robustness. Theoretical analysis proves that the proposed MVG-DLN is globally convergent and robust. Three numerical simulation comparisons between the FT-FP-CDNN and the MVG-DLN substantiate the faster convergence and greater robustness of the MVG-DLN in solving NNOPs.
- Published
- 2021
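The KKT-plus-projection construction summarized in the abstract can be illustrated with a minimal sketch: a projection-type dynamic network with a time-varying gain, integrated by forward Euler. This is not the paper's MVG-DLN (the exact network, gain schedule, and integral robustness term are not reproduced here); the gain schedule `gain(t) = 1 + t` and the convex test problem are assumptions chosen so the answer is easy to verify by hand.

```python
import numpy as np

def solve(x0, lam0, grad_f, g, grad_g, gain, dt=1e-3, steps=5000):
    """Projection-type neuro-dynamics for min f(x) s.t. g(x) <= 0.

    Equilibria satisfy the KKT conditions: at a fixed point,
    lam = max(lam + g(x), 0) enforces complementarity, and
    grad_f(x) + grad_g(x)^T lam = 0 enforces stationarity.
    """
    x, lam = x0.astype(float), lam0.astype(float)
    t = 0.0
    for _ in range(steps):
        # Projection of the multiplier update onto the nonnegative orthant
        lam_proj = np.maximum(lam + g(x), 0.0)
        dx = -(grad_f(x) + grad_g(x).T @ lam_proj)   # stationarity residual
        dlam = -(lam - lam_proj)                     # complementarity residual
        gamma = gain(t)                              # time-varying convergence gain
        x += dt * gamma * dx
        lam += dt * gamma * dlam
        t += dt
    return x, lam

# Convex test problem (chosen for easy verification, not from the paper):
# minimize (x1-2)^2 + (x2-1)^2 subject to x1 + x2 <= 2; optimum x* = (1.5, 0.5).
grad_f = lambda x: 2.0 * (x - np.array([2.0, 1.0]))
g = lambda x: np.array([x[0] + x[1] - 2.0])
grad_g = lambda x: np.array([[1.0, 1.0]])
gain = lambda t: 1.0 + t  # assumed monotonically increasing gain schedule

x_star, lam_star = solve(np.zeros(2), np.zeros(1), grad_f, g, grad_g, gain)
print(np.round(x_star, 3))  # converges to approximately [1.5, 0.5]
```

Because the gain grows with time, the effective integration of the dynamics accelerates as the trajectory approaches the KKT point, which is the intuition behind the faster convergence claimed for varying-gain networks over fixed-gain ones.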