A mixture varying-gain dynamic learning network for solving nonlinear and nonconvex constrained optimization problems
- Source :
- Neurocomputing. 456:232-242
- Publication Year :
- 2021
- Publisher :
- Elsevier BV, 2021.
Abstract
- The nonlinear and nonconvex optimization problem (NNOP) is a challenging problem in control theory and applications. In this paper, a novel mixture varying-gain dynamic learning network (MVG-DLN) is proposed to solve NNOPs with inequality constraints. First, the NNOP is transformed into a set of equations through the Karush–Kuhn–Tucker (KKT) conditions and the projection theorem, from which the neuro-dynamics function is obtained. Second, a time-varying convergence parameter is used to achieve a faster convergence speed. Third, an integral term is added to strengthen robustness. Theoretical analysis proves that the proposed MVG-DLN is globally convergent and robust. Three numerical simulation comparisons between FT-FP-CDNN and MVG-DLN substantiate the faster convergence and greater robustness of MVG-DLN in solving nonlinear and nonconvex optimization problems.
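The general idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's MVG-DLN: it is a generic projection-based dynamic system with a time-varying gain, the integral robustness term is omitted, and the box-constrained convex objective is invented purely for illustration.

```python
import numpy as np

def grad_f(x):
    # Gradient of the illustrative objective f(x) = (x1 - 2)^2 + (x2 + 1)^2
    # (an assumed toy problem, not one from the paper).
    return 2.0 * (x - np.array([2.0, -1.0]))

def project(x, lo=0.0, hi=1.0):
    # Projection onto the assumed box constraint set [lo, hi]^n,
    # standing in for the projection operator from the projection theorem.
    return np.clip(x, lo, hi)

def solve(x0, t_end=10.0, dt=1e-3):
    # Forward-Euler integration of the projection dynamics
    #   x' = -gamma(t) * (x - P(x - grad_f(x))),
    # whose equilibria satisfy the KKT conditions of the constrained problem.
    x = np.asarray(x0, dtype=float)
    t = 0.0
    while t < t_end:
        gamma = 1.0 + t  # varying gain: grows with time to accelerate convergence
        x = x + dt * (-gamma * (x - project(x - grad_f(x))))
        t += dt
    return x

# Starting inside the box, the state converges to the constrained
# minimizer (1, 0) of the toy problem.
x_star = solve([0.5, 0.5])
```

The gain schedule `gamma = 1.0 + t` is one arbitrary choice of increasing function; the paper's design and convergence analysis concern a specific class of time-varying gains and additionally include an integral feedback term for robustness against disturbances.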
- Subjects :
- 0209 industrial biotechnology
Mathematical optimization
Optimization problem
Karush–Kuhn–Tucker conditions
Computer science
Cognitive Neuroscience
Mathematics::Optimization and Control
02 engineering and technology
Function (mathematics)
Projection (linear algebra)
Computer Science Applications
Nonlinear system
020901 industrial engineering & automation
Artificial Intelligence
Robustness (computer science)
Control theory
Convergence (routing)
0202 electrical engineering, electronic engineering, information engineering
020201 artificial intelligence & image processing
Details
- ISSN :
- 0925-2312
- Volume :
- 456
- Database :
- OpenAIRE
- Journal :
- Neurocomputing
- Accession number :
- edsair.doi...........9a1fbedb8684f9e4c9ea2f121cae427c