Search Results (499 results)
2. Interval type-2 fuzzy neural networks with asymmetric MFs based on the twice optimization algorithm for nonlinear system identification.
- Author
-
Liu, Jiapu, Zhao, Taoyan, Cao, Jiangtao, and Li, Ping
- Subjects
- *
FUZZY neural networks , *MEMETICS , *MATHEMATICAL optimization , *INFORMATION filtering systems , *SYSTEM identification , *NONLINEAR systems , *STANDARD deviations - Abstract
This paper proposes a novel twice-optimization algorithm for interval type-2 fuzzy neural networks with asymmetric membership functions (TOIT2FNN-AMF) for nonlinear system identification problems. The proposed TOIT2FNN-AMF uses asymmetric Gaussian interval type-2 membership functions to enhance the network's ability to describe and solve nonlinear and uncertain problems. The twice-optimization algorithm consists of structure learning and parameter learning. First, this paper proposes a multi-strategy adaptive differential evolution (MSADE) algorithm as the first optimization stage, which determines the structure and the initial parameter values of the TOIT2FNN-AMF. It uses the root mean square error (RMSE) of the TOIT2FNN-AMF as the fitness function, searching over candidate structures (numbers of rules); the structure whose RMSE is minimal yields the optimal number of fuzzy rules and the initial parameters. The second optimization stage is a hybrid of the adaptive moment estimation (Adam) algorithm and the recursive least squares (RLS) algorithm. Adam optimizes the antecedent parameters of the TOIT2FNN-AMF rules, maintaining rapid convergence without oscillation during training; RLS optimizes the consequent parameters, so that the network parameters can be updated rapidly. In this way, the problems of having too many parameters to adjust and of excessively slow convergence are addressed. Finally, the proposed TOIT2FNN-AMF is evaluated on nonlinear system identification and chaotic time-series prediction problems.
The simulation results are compared with those of similar methods in the existing literature, demonstrating that the proposed TOIT2FNN-AMF yields a lower RMSE and a simpler network structure than other type-2 fuzzy neural networks (T2FNNs). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
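[Editor's note] The asymmetric interval type-2 Gaussian membership function at the core of the record above can be sketched in a few lines. This is a generic construction, not the paper's exact parameterization: the spread parameter names and the choice of bounding the footprint of uncertainty by two asymmetric Gaussians are assumptions.

```python
import numpy as np

def asym_gauss(x, c, sigma_left, sigma_right):
    """Asymmetric Gaussian: different spreads to the left/right of center c."""
    sigma = np.where(x < c, sigma_left, sigma_right)
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def interval_type2_asym_mf(x, c, sl_lower, sr_lower, sl_upper, sr_upper):
    """Interval type-2 MF: a lower and an upper asymmetric Gaussian bound the
    footprint of uncertainty (upper >= lower when the upper spreads are larger)."""
    return (asym_gauss(x, c, sl_lower, sr_lower),
            asym_gauss(x, c, sl_upper, sr_upper))
```

Choosing the upper spreads no smaller than the lower ones guarantees the upper function dominates the lower everywhere, which is what makes the pair a valid interval type-2 membership function.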
3. Stochastic configuration networks with chaotic maps and hierarchical learning strategy.
- Author
-
Qiao, Jinghui and Chen, Yuxi
- Subjects
- *
LEARNING strategies , *MACHINE learning , *CHAOS theory , *GAUSSIAN distribution , *MATHEMATICAL optimization , *REINFORCEMENT learning - Abstract
Stochastic configuration networks (SCNs) have universal approximation capability and fast modeling properties, and have been successfully employed in large-scale data analytics. Building on SCNs, stochastic configuration networks with block increments (BSC) use a node-block increment mechanism to improve training speed, but increase model complexity. This paper presents a parallel configuration method (PCM), develops an extension of the original BSC based on chaos theory by proposing stochastic configuration networks with chaotic maps (SCNCM), and establishes a hierarchical learning strategy (HLS) to enhance the compactness and construction speed of the model. Firstly, PCM randomly assigns the input weights w and biases b of hidden-layer nodes using uniform and normal distributions; within PCM, an iterative learning algorithm is designed to generate the scope control set and improve configuration efficiency. Secondly, the paper presents two kinds of stochastic configuration networks with chaotic maps, SCNCM-I and SCNCM-II. SCNCM-I adjusts the block size using multiple error values and chaotic maps to improve training speed. Building on SCNCM-I, SCNCM-II adds a node-removal mechanism to enhance compactness. Finally, HLS integrates SCNCM-I, SCNCM-II, and the Harris hawks optimization algorithm (HHO), with the aim of enhancing the training speed and compactness of all three algorithms. Experiments on four benchmark data sets and an industrial application demonstrate the effectiveness of the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
4. MPSC for networked switched systems based on timing-response event-triggering scheme.
- Author
-
Qi, Yiwen, Zhang, Simeng, Yu, Wenke, and Huang, Jie
- Subjects
- *
DENIAL of service attacks , *LINEAR matrix inequalities , *CLOSED loop systems , *WATERMARKS , *PREDICTION models , *MATHEMATICAL optimization - Abstract
This paper studies model predictive security control (MPSC) for networked switched systems under denial-of-service (DoS) attacks. Most existing works only adjust the triggering scheme when the system is under attack. In contrast, this paper proposes a novel timing-response event-triggering scheme (TR-ETS) to reduce the impact of attacks on system performance, which can not only configure system resources adaptively, but also accurately detect attack information and compensate for the attacked data. Specifically, the proposed scheme includes two event-based triggers, which dynamically and jointly regulate communication/computation capability, generate virtual attack sequences, and obtain the number of passive packet losses. Then, based on the triggered states, a class of model predictive controllers is designed to optimize the control action. To handle possibly strong attacks, a security control framework comprising network and local loops is introduced, and a permissible type-switching mechanism (PTM) is used. Under the permissible controllers (i.e., network and local controllers), sufficient conditions for the stability of the closed-loop switched systems are derived. In addition, a model predictive optimization algorithm based on the linear matrix inequality (LMI) technique is presented. Finally, the effectiveness of the proposed method is verified by illustrative examples. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
5. Population based training and federated learning frameworks for hyperparameter optimisation and ML unfairness using Ulimisana Optimisation Algorithm.
- Author
-
Maumela, Tshifhiwa, Nelwamondo, Fulufhelo, and Marwala, Tshilidzi
- Subjects
- *
MATHEMATICAL optimization , *MACHINE learning , *SOCIAL networks , *ARTIFICIAL intelligence - Abstract
This paper introduces the Ulimisana Optimisation Algorithm enabled Population Based Training (PBT-UOA) framework, which allows hyperparameters to be fine-tuned with a population-based meta-heuristic algorithm at the same time as the parameters are being optimised. Models are trained until near-convergence on the updated hyperparameters, and the parameters of the best-performing model are shared to warm-start the other models in the next hyperparameter-tuning iteration. In the PBT-UOA, all models are trained on the same dataset. This framework performed better than the Bayesian Optimisation algorithm. The paper also introduces the Ulimisana Optimisation Algorithm enabled Federated Learning (FL-UOA) framework, an extension of the PBT-UOA. This framework addresses the challenges of scattered datasets and privacy presented by the increase in connected end-devices. The FL-UOA learns on local data in scattered end-devices without sending datasets to a central server. The training datasets on local end-devices are used to evaluate models trained on other end-devices, and the resulting performance metrics are used to update the Social Trust Network (STN) of the FL-UOA framework. The FL-UOA outperformed the classic Federated Learning framework. This STN-updating technique was also tested on Machine Learning (ML) Unfairness to see how well it functioned as a regularisation term, by training different models on subsets containing data representing only specific sensitive groups. Results showed that by updating the hyperparameters while learning the parameters on data scattered across different devices, the FL-UOA takes advantage of diversified learning and reduces ML Unfairness for models trained on group-specific datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
6. Collaborative granular sieving: A deterministic multievolutionary algorithm for multimodal optimization problems.
- Author
-
Dai, Lei, Zhang, Liming, Chen, Zehua, and Ding, Weiping
- Subjects
- *
DETERMINISTIC algorithms , *MATHEMATICAL optimization , *GLOBAL optimization , *EVOLUTIONARY algorithms , *SIEVES - Abstract
Evolutionary algorithms (EAs) that integrate niching techniques are among the most effective methods for multimodal optimization problems. However, most algorithmic contributions are based on empirical performance observations rather than rigorous mathematical convergence support, which makes most existing methods parameter-sensitive. Inspired by a recently proposed deterministic global optimization method, granular sieving (GrS), an extended global optimization method named collaborative GrS (Co-GrS) and a novel deterministic multi-EA design framework are proposed in this paper. The innovations are threefold. (1) Existing EAs are stochastic methods; this paper introduces the principle of deterministic global optimization into EAs for the first time in the literature. (2) A deterministic multi-EA framework is designed and implemented; from the perspective of population evolution, an easy-to-operate survival-of-the-fittest strategy based on mathematical principles is established in Co-GrS. (3) Unlike existing stochastic EAs, where the reproducibility of optimal solutions is achieved only in a statistical sense, Co-GrS involves no random parameters and runs only once, with pre-set fixed parameters, to find all optimal solutions. The experimental results demonstrate the effectiveness and competitiveness of our method compared to 16 state-of-the-art multimodal algorithms on the CEC'2013 benchmark suite. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
7. Proximal policy optimization via enhanced exploration efficiency.
- Author
-
Zhang, Junwei, Zhang, Zhenghao, Han, Shuai, and Lü, Shuai
- Subjects
- *
MACHINE learning , *REINFORCEMENT learning , *MATHEMATICAL optimization - Abstract
The proximal policy optimization (PPO) algorithm is a deep reinforcement learning algorithm with outstanding performance, especially on continuous control tasks, but its performance is still limited by its exploration ability. Focusing on continuous control tasks, this paper analyzes the original Gaussian action-exploration mechanism in the PPO algorithm and clarifies the influence of exploration ability on performance. An exploration enhancement mechanism based on uncertainty estimation is then designed, and applying this theory to PPO yields the proximal policy optimization algorithm with an intrinsic exploration module (IEM-PPO). In the experiments, we evaluate our method on multiple tasks in the MuJoCo physics simulator, and compare the IEM-PPO algorithm with PPO and with PPO with an intrinsic curiosity module (ICM-PPO). The experimental results demonstrate that the IEM-PPO algorithm performs better in terms of sample efficiency and cumulative reward, and exhibits stability and robustness. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
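[Editor's note] The record above does not spell out the intrinsic exploration module, so the following is only a generic illustration of uncertainty-driven intrinsic reward: an ensemble of simple linear next-state predictors whose disagreement is paid out as a bonus added to the extrinsic reward. The class name, the `beta` scale, and the linear-model choice are all assumptions, not IEM-PPO's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

class EnsembleUncertaintyBonus:
    """Intrinsic reward from the disagreement of an ensemble of random linear
    predictors of the next state (a generic uncertainty proxy)."""
    def __init__(self, obs_dim, n_models=5, lr=0.01, beta=0.1):
        self.W = [rng.normal(size=(obs_dim, obs_dim)) * 0.1 for _ in range(n_models)]
        self.lr, self.beta = lr, beta

    def bonus(self, s, s_next):
        preds = [W @ s for W in self.W]
        # disagreement = mean per-dimension variance across the ensemble
        disagreement = np.var(np.stack(preds), axis=0).mean()
        # move each model toward the observed next state (one SGD step)
        for i in range(len(self.W)):
            err = preds[i] - s_next
            self.W[i] = self.W[i] - self.lr * np.outer(err, s)
        return self.beta * disagreement
```

As all predictors converge on frequently seen transitions, the bonus decays, so exploration pressure concentrates on rarely visited states; the agent's training signal would be `r_total = r_ext + module.bonus(s, s_next)`.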
8. Handling dynamic multiobjective optimization problems with variable environmental change via classification prediction and dynamic mutation.
- Author
-
Li, Jianxia, Liu, Ruochen, and Wang, Ruinan
- Subjects
- *
PID controllers , *MATHEMATICAL optimization , *COMPLEX variables , *DYNAMICAL systems , *FORECASTING - Abstract
This paper proposes an adaptive dynamic multiobjective optimization algorithm for handling dynamic multiobjective optimization problems (DMOPs) with variable environmental change types. Most existing DMOPs involve only a single type of environmental change, so we design a set of DMOPs with variable and mixed change types. We then propose an adaptive dynamic multiobjective optimization algorithm (DMOA) focused on change types, which can detect the different types of environmental change. The main purpose of a DMOA is to find the Pareto-optimal set (PS) of each environment; accordingly, the change types of DMOPs fall into two categories: the PS changes over time, or the PS remains constant. After the change type is detected, an adaptive response strategy is activated: if the PS changes over time, a classification prediction (CP) strategy responds to the environmental change; if the PS remains constant, a dynamic mutation (DM) strategy responds instead. The proposed algorithm is extensively studied through comparison with several advanced DMOAs, demonstrating its effectiveness in solving complex DMOPs with variable change types and on the parameter-tuning problem of PID controllers for dynamic systems. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
9. A distributed prescribed-time optimization analysis for multi-agent systems.
- Author
-
Chen, Siyu, Jiang, Haijun, and Yu, Zhiyong
- Subjects
- *
MULTIAGENT systems , *MATHEMATICAL optimization , *COST functions , *STABILITY theory , *LYAPUNOV stability , *MEMETICS - Abstract
This paper considers the distributed prescribed-time optimization problem of multi-agent systems (MASs). Assuming a time-invariant, strongly convex local objective function for each agent, a two-stage distributed prescribed-time optimization algorithm is designed based on the zero-gradient-sum idea. To save system resources, an event-triggered control mechanism is introduced into the algorithm. In the first stage, a distributed prescribed-time event-triggered algorithm is proposed to minimize the local objective function of each agent within the prescribed time interval. In the second stage, the algorithm optimizes the global cost function while keeping the sum of the gradients of all local cost functions at zero. Criteria for achieving consensus and optimization of the MASs are obtained using Lyapunov stability theory and optimization theory. Moreover, it is proved in detail that the two triggering functions do not induce Zeno behavior. A numerical example demonstrates the correctness of the theoretical analysis and the effectiveness of the control algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
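[Editor's note] The zero-gradient-sum (ZGS) idea underlying the record above can be illustrated on scalar quadratic local costs: each agent starts at its own local minimizer (so the gradients sum to zero), and a consensus flow scaled by the inverse local Hessian keeps that sum invariant while driving all states to the global minimizer. A minimal sketch with an assumed ring topology and plain Euler integration; the paper's prescribed-time and event-triggered machinery is omitted.

```python
import numpy as np

# Local costs f_i(x) = a_i * (x - b_i)^2 on a ring of 4 agents.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([0.0, 1.0, 2.0, 3.0])
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # symmetric ring adjacency

x = b.copy()        # start at local minimizers: sum of gradients is exactly 0
dt = 0.05
for _ in range(2000):                        # Euler integration of the ZGS flow
    consensus = A @ x - A.sum(1) * x         # sum_j A_ij * (x_j - x_i)
    x = x + dt * consensus / (2 * a)         # scale by inverse Hessian f_i'' = 2 a_i

# Global minimizer of sum_i f_i: weighted mean of the b_i.
x_star = (a * b).sum() / a.sum()
```

Because the adjacency matrix is symmetric, the weighted sum 2·a·x (and hence the gradient sum) is conserved exactly at every Euler step, so consensus can only be reached at the global minimizer.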
10. Bayesian optimization based dynamic ensemble for time series forecasting.
- Author
-
Du, Liang, Gao, Ruobin, Suganthan, Ponnuthurai Nagaratnam, and Wang, David Z.W.
- Subjects
- *
TIME series analysis , *FORECASTING , *MACHINE learning , *MATHEMATICAL optimization , *SUCCESS - Abstract
• An adaptive forecast combination of highly diversified models is proposed.
• Hyperparameter tuning is conducted through Bayesian optimization.
• The model significantly outperforms classic and recent methods on extensive datasets.
Among various time series (TS) forecasting methods, ensemble forecasting is widely acknowledged as a promising approach that has achieved great success in research and industry. Owing to the high diversification of individual model assumptions, heterogeneous information fusion helps generate effective and robust forecasts in economics, meteorology, and transportation. This paper proposes a Bayesian optimization-based dynamic ensemble (BODE) that overcomes the limitations of single-model methods and provides a dynamic ensemble forecast combination for TS with time-varying underlying patterns. The proposed BODE method combines ten disparate candidate models, including statistical methods, machine learning (ML)-based models, and recent deep neural networks (DNNs). It takes their prediction performance over the recent past into account when adjusting their combination weights, and applies the model-based Bayesian optimization algorithm (BOA) to tune the combination hyperparameters (HPs), endowing the method with higher adaptability and better generalization performance. Besides, the impact of TS data frequency on ensemble forecasting methods is under-researched in the current literature; therefore, four groups of TS datasets with distinct seasonalities are investigated. The empirical results demonstrate that our method robustly achieves better performance, with the main reasons analyzed in a detailed ablation study. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
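[Editor's note] The core "weight models by recent performance" step of a dynamic ensemble can be illustrated with a softmax over negative recent errors. This is a generic combination rule, not BODE's published scheme; the `temperature` parameter merely stands in for the kind of combination hyperparameter that BODE tunes via Bayesian optimization.

```python
import numpy as np

def dynamic_ensemble_forecast(recent_errors, candidate_forecasts, temperature=1.0):
    """Combine candidate forecasts with weights favoring models whose recent
    errors are small (softmax over negative errors). Returns (weights, forecast)."""
    e = np.asarray(recent_errors, dtype=float)
    w = np.exp(-e / temperature)
    w /= w.sum()                      # normalize to a convex combination
    return w, float(w @ np.asarray(candidate_forecasts, dtype=float))
```

Recomputing the errors over a sliding window at every step makes the combination "dynamic": as the underlying pattern shifts, weight migrates to whichever candidate model has tracked it best recently.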
11. Impact of chaotic dynamics on the performance of metaheuristic optimization algorithms: An experimental analysis.
- Author
-
Zelinka, Ivan, Diep, Quoc Bao, Snášel, Václav, Das, Swagatam, Innocenti, Giacomo, Tesi, Alberto, Schoen, Fabio, and Kuznetsov, Nikolay V.
- Subjects
- *
METAHEURISTIC algorithms , *BIOLOGICAL evolution , *EVOLUTIONARY algorithms , *MATHEMATICAL optimization , *DETERMINISTIC algorithms , *PSYCHOLOGICAL feedback - Abstract
[Display omitted]
• Compared with other studies, this paper compares the performance of the oldest, newest, lesser-known and well-known algorithms on deterministic chaos generators in one large, unified study.
• The paper shows that, by precision tuning, the original chaotic series can be converted into short N-periodic time series (PTS), so no randomness in the usual sense remains. These series are then used instead of classic pseudo-random numbers, with positive impact.
• The paper reveals a clearly visible positive impact of PTS on the dynamics of evolutionary algorithms (EAs), visible for almost all algorithms used in this paper, compared with the same EAs driven by classic random generators.
• The paper raises the question of whether standard random (non-chaotic) processes are really necessary for algorithm dynamics, and suggests relations between randomness in EAs and noise in dynamical system control and theory.
• The paper opens, sketches and suggests new ideas and strategies for understanding algorithm dynamics as discrete feedback dynamical systems.
Random mechanisms, including mutation, are an internal part of evolutionary algorithms, which are based on the fundamental ideas of Darwin's theory of evolution as well as Mendel's theory of genetic heritage. In this paper, we debate whether pseudo-random processes are needed for evolutionary algorithms or whether deterministic chaos, which is not a random process, can suitably be used instead. Specifically, we compare the performance of 10 evolutionary algorithms driven by chaotic dynamics against the same algorithms driven by pseudo-random number generators. In this study, the logistic equation is employed to generate periodic sequences of different lengths, which are used in the evolutionary algorithms in place of randomness.
We suggest that, instead of pseudo-random number generators, a specific class of deterministic processes (based on deterministic chaos) can be used to improve the performance of evolutionary algorithms. Finally, based on our findings, we propose new research questions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
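[Editor's note] The central substitution described above is easy to reproduce: iterate the logistic equation and use its orbit in place of uniform pseudo-random draws. A minimal sketch; the seed, burn-in length, and the precision-tuning step that produces the short periodic series are assumed or omitted.

```python
def logistic_sequence(x0=0.31, r=4.0, n=10, burn_in=100):
    """Deterministic chaotic sequence from the logistic map
    x_{k+1} = r * x_k * (1 - x_k); with r = 4 the orbit is chaotic on (0, 1).
    The values can be substituted for uniform pseudo-random numbers in an EA."""
    x = x0
    for _ in range(burn_in):          # discard the initial transient
        x = r * x * (1 - x)
    seq = []
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq
```

Rounding each iterate to a few decimal places, as the paper's precision tuning does, collapses the orbit into a short periodic sequence (PTS), which is what the study feeds to the evolutionary algorithms instead of a pseudo-random number generator.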
12. An image matching optimization algorithm based on pixel shift clustering RANSAC.
- Author
-
Ma, Shuhua, Guo, Peikai, You, Hairong, He, Ping, Li, Guanglin, and Li, Heng
- Subjects
- *
IMAGE registration , *MATHEMATICAL optimization , *PIXELS , *PARALLAX - Abstract
• Pixel shift clustering, a new mismatch-detection method, is proposed.
• Pixel shift models of feature points are established for different motion patterns.
• Mismatches are eliminated by density peaks clustering.
• Tests use indoor, outdoor and KITTI databases, with comparisons against related works.
• The proposed method shows good performance in matching accuracy and robustness.
This paper focuses on improving the accuracy of image matching by eliminating the residual mismatches left in the matching results of standard RANSAC. Based on pixel shift clustering and the RANSAC algorithm, a matching optimization algorithm called pixel shift clustering RANSAC (PSC-RANSAC for short) is proposed. First, the pixel shift model of a space point viewed from two perspectives is established using the parallax principle and the camera projection model. Then, based on the established pixel shift model, the density peaks clustering (DPC) algorithm is used to filter out mismatches and enhance the accuracy of image matching. Comparisons among PSC-RANSAC, standard RANSAC, progressive sample consensus and graph-cut RANSAC show that PSC-RANSAC eliminates the residual mismatches in initial matching results more effectively and robustly. The proposed method provides an effective tool for image matching optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
13. A partition-based convergence framework for population-based optimization algorithms.
- Author
-
Li, Xinxin, Hua, Shuai, Liu, Qunfeng, and Li, Yun
- Subjects
- *
MEMETICS , *MATHEMATICAL optimization , *PARTICLE swarm optimization , *GLOBAL optimization , *GENETIC algorithms , *DIFFERENTIAL evolution - Abstract
[Display omitted]
• A convergence framework is proposed for population-based optimization algorithms.
• DIRECT-style partitioning and population evolution alternate in this framework.
• The global convergence of the framework is proved.
• The framework is applied successfully to three population-based optimization algorithms.
Population-based optimization algorithms, such as the genetic algorithm and particle swarm optimization, have become an important class of algorithms for solving global optimization problems. However, global convergence is often absent for most of them. This paper proposes a partition-based convergence framework for population-based optimization algorithms to address this problem. In the framework, regular partitions of the search space and evolutions of populations alternate: the initial population is generated from a regular partition of the search space; after several generations of evolution, the result is fed back into a new regular partition, from which a new population is generated. This process repeats until a stopping condition is satisfied. Global convergence is guaranteed for the framework. The framework is then applied to particle swarm optimization, differential evolution, and the genetic algorithm. The modified algorithms are globally convergent and perform better than the original versions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. Adaptive multiple selection strategy for multi-objective particle swarm optimization.
- Author
-
Han, Honggui, Zhang, Linlin, Yinga, A., and Qiao, Junfei
- Subjects
- *
PARTICLE swarm optimization , *EVOLUTIONARY computation , *MATHEMATICAL optimization - Abstract
The multiple-swarm approach is a quite successful evolutionary computation framework for multi-objective particle swarm optimization algorithms (MOPSOs) solving multi-objective optimization problems (MOPs). However, the main challenge in using this framework lies in the lack of effective leader selection, which leaves the obtained solutions loosely distributed and far from the true Pareto-optimal front. To overcome this problem, a multi-swarm MOPSO with an adaptive multiple selection strategy (MOPSO-AMS) is investigated in this paper. The proposed MOPSO-AMS is able to guide each swarm with a suitable leader to improve evolutionary performance. The novelties and advantages of MOPSO-AMS include the following three aspects. First, a hierarchical evolutionary-state detection mechanism, based on the distribution and dominance information of non-dominated solutions, is designed to obtain the evolutionary state of the current iteration, so that the requirements of the evolutionary process can be detected. Second, an adaptive multiple selection strategy, using the evolutionary-state information and spatial features of candidate solutions, is developed to select leaders of sub-swarms across the different evolutionary states, so that suitable leaders can be selected to balance convergence and diversity. Third, an adaptive parameter-adjustment mechanism, based on the dominance relationship of each particle, is introduced to further improve the evolutionary performance of MOPSO-AMS. Finally, numerical simulations and a practical application are used to validate the analytical results and demonstrate the significant improvement of MOPSO-AMS. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. Many-objective evolutionary algorithm based on spatial distance and decision vector self-learning.
- Author
-
Yang, Lei, Li, Kangshun, Zeng, Chengzhou, Liang, Shumin, Zhu, Binjie, and Wang, Dongya
- Subjects
- *
EVOLUTIONARY algorithms , *MATHEMATICAL optimization - Abstract
In this paper, a new many-objective evolutionary algorithm (MaOEA), namely the Many-Objective Evolutionary Algorithm Based on Spatial Distance and Decision Vector Self-Learning (DVSLEA), is proposed for many-objective optimization. The core idea of the algorithm is to use spatial distance to influence the disturbance ratio and thereby affect the generation of offspring. To give the algorithm a good distribution, a distribution vector is introduced into the disturbance procedure. Moreover, a self-learning process is incorporated to determine the value of the disturbance ratio. To evaluate the performance of DVSLEA, the DTLZ and WFG test suites with 3, 5, 8, 10, and 15 objectives are adopted. The experimental results indicate that DVSLEA shows superior performance over nine competitive evolutionary algorithms (MOEA/DD, NSGA-III, VaEA, SPEA2, SPEA2-SDE, MOEADAWA, onebyoneEA, PREA, RVEA) on most of the test problems used. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. Satisfaction-aware Task Assignment in Spatial Crowdsourcing.
- Author
-
Xie, Yuan, Wang, Yongheng, Li, Kenli, Zhou, Xu, Liu, Zhao, and Li, Keqin
- Subjects
- *
CROWDSOURCING , *SWARM intelligence , *SATISFACTION , *NASH equilibrium , *PRICES , *MATHEMATICAL optimization , *PROBLEM solving , *MEMETICS , *KEYWORD searching - Abstract
With the ubiquity of GPS-equipped devices, spatial crowdsourcing (SC) technology has been widely utilized in daily life. As a novel computing paradigm, it hires mobile users as workers who physically move to the location of a task and perform it. Task assignment is a fundamental issue in SC. In real life, many complex tasks require different workers, and neither the quality of worker cooperation nor the price satisfaction of users should be ignored. Hence, this paper examines a satisfaction-aware task assignment (SATA) problem with the goal of maximizing overall user satisfaction, where user satisfaction integrates satisfaction with price and with cooperation quality. The SATA problem is proved to be NP-hard by reduction from the k-set packing problem. Two algorithms, a conflict-aware greedy (CAG) algorithm and a game-theoretic (GT) algorithm with an optimization strategy, are proposed for solving the SATA problem. The CAG algorithm efficiently obtains a result with a provable approximation bound, while the GT algorithm is proven to converge to a Nash equilibrium. Extensive experiments demonstrate the effectiveness and efficiency of the proposed approaches on both real and synthetic datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
18. Novel binary differential evolution algorithm for knapsack problems.
- Author
-
Ali, Ismail M., Essam, Daryl, and Kasmarik, Kathryn
- Subjects
- *
KNAPSACK problems , *DIFFERENTIAL evolution , *ALGORITHMS , *MATHEMATICAL optimization , *EVALUATION methodology - Abstract
• A novel design of DE for knapsack problems is proposed.
• A dual representation of solutions, continuous and binary, is introduced.
• A new efficient combined fitness-evaluation and repair method is proposed.
• Results demonstrate the efficiency of the novel DE in solving knapsack problems.
• The novel DE found new solutions, better than the best known, for 5 large instances.
The capability of the conventional differential evolution algorithm to solve optimization problems in continuous spaces has been well demonstrated and documented in the literature. However, differential evolution has commonly been considered inapplicable to several binary/permutation-based real-world problems because of its arithmetic reproduction operator. Moreover, the standard differential evolution algorithm has known limitations, such as slow convergence and becoming trapped in local optima. In this paper, a novel technique that makes a simple differential evolution algorithm suitable and very effective for binary problems, such as binary knapsack problems, is proposed. It incorporates new components, including a representation of solutions, a mapping method and a diversity technique. A new efficient fitness-evaluation approach that simultaneously evaluates and repairs knapsack candidate solutions is also introduced. To assess the performance of the new algorithm, four datasets with a total of 44 instances of binary knapsack problems are considered. Its performance is compared with those of 22 state-of-the-art algorithms, with the experimental results demonstrating its superiority in terms of both solution quality and computational time. It also finds new solutions that are better than the current best ones for five large knapsack problems. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
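The dual continuous/binary representation and the combined evaluate-and-repair pass described in this abstract can be illustrated with a minimal sketch (not the authors' operators; the sigmoid mapping, the ratio-based repair rule, and the toy 5-item instance are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
values   = np.array([10, 7, 4, 9, 3])
weights  = np.array([5, 4, 2, 6, 1])
capacity = 10

def to_binary(x):
    # Map a continuous DE vector to a binary solution via a sigmoid threshold.
    return (1.0 / (1.0 + np.exp(-x)) > 0.5).astype(int)

def repair_and_eval(b):
    # Repair and evaluate in one pass: while overweight, drop the item
    # with the worst value/weight ratio, then return the total value.
    b = b.copy()
    for i in np.argsort(values / weights):      # worst ratio first
        if b @ weights <= capacity:
            break
        b[i] = 0
    return b, int(b @ values)

def binary_de(pop_size=20, gens=100, F=0.5, CR=0.9):
    n = len(values)
    pop = rng.normal(0.0, 1.0, (pop_size, n))
    fit = [repair_and_eval(to_binary(x))[1] for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = a + F * (b - c)
            trial = np.where(rng.random(n) < CR, mutant, pop[i])
            f = repair_and_eval(to_binary(trial))[1]
            if f >= fit[i]:                     # keep equal-or-better trials
                pop[i], fit[i] = trial, f
    best = int(np.argmax(fit))
    return repair_and_eval(to_binary(pop[best]))

solution, value = binary_de()
```

Because the repair step runs inside the evaluation, every candidate DE produces is feasible and no separate penalty term is needed.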
19. Generalized possibilistic c-means clustering with double weighting exponents.
- Author
-
Wu, Chengmao and Yu, Dongxue
- Subjects
- *
EXPONENTS , *DOCUMENT clustering , *MATHEMATICAL optimization , *PERFORMANCES - Abstract
• The generalized possibilistic clustering model with double exponents is established. • Two improved possibilistic clustering algorithms are proposed. • The convergence of the two proposed algorithms is strictly analyzed. • Experiment results indicate that the proposed algorithms have good performance. Considering that the improved possibilistic c-means (PCM) algorithms are sensitive to noise while addressing the issue of consistency clustering in PCM, this paper proposes the generalized possibilistic c-means clustering with double weighting exponents. Firstly, double weighting exponents are introduced into the PCM algorithm, and the generalized possibilistic c-means clustering model is established. Secondly, the difference between the double weighting exponents in the generalized possibilistic clustering model is set to 1 or −1; two improved single-exponent possibilistic clustering algorithms called IPCM1 and IPCM2 are proposed, and their local convergence is strictly proven by the Zangwill theorem. Finally, the reasonable range of the weighting exponent in IPCM1 and IPCM2 algorithms is determined by mathematical optimization. Experimental results indicate that IPCM1 and IPCM2 outperform existing PCM-related algorithms; they achieve excellent clustering performance, significantly improve the robustness to noise, and decrease the sensitivity to the weighting exponent. The work of this paper has far-reaching significance for promoting the development of the possibilistic c-means clustering theory. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
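For context, the baseline the paper generalizes is the possibilistic c-means update with a single weighting exponent m; a sketch of that textbook algorithm follows (the data, initialization, and bandwidth choice are illustrative assumptions, not the paper's IPCM1/IPCM2):

```python
import numpy as np

rng = np.random.default_rng(1)

def pcm(X, centers, m=2.0, iters=50):
    # Textbook possibilistic c-means (Krishnapuram-Keller). The paper's
    # IPCM1/IPCM2 variants modify the weighting exponents; here a single
    # fixed exponent m is used as a baseline.
    c = centers.astype(float).copy()
    d2 = ((X[:, None, :] - c[None, :, :]) ** 2).sum(-1)
    eta = d2.mean(axis=0)                                   # per-cluster bandwidth
    for _ in range(iters):
        d2 = ((X[:, None, :] - c[None, :, :]) ** 2).sum(-1)
        t = 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1.0)))   # typicalities
        w = t ** m
        c = (w.T @ X) / w.sum(axis=0)[:, None]              # center update
    return t, c

X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
               rng.normal(5.0, 0.3, (20, 2)),
               [[50.0, 50.0]]])                             # one gross outlier
t, c = pcm(X, np.array([[1.0, 1.0], [4.0, 4.0]]))
```

The outlier receives near-zero typicality in both clusters, while the tendency of the two centers to drift toward one another is precisely the consistency (coincident-cluster) problem the double-exponent model targets.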
20. Combining Lipschitz and RBF surrogate models for high-dimensional computationally expensive problems.
- Author
-
Kůdela, Jakub and Matoušek, Radomil
- Subjects
- *
EVOLUTIONARY algorithms , *RADIAL basis functions , *DIFFERENTIAL evolution , *MATHEMATICAL optimization , *BUDGET - Abstract
Standard evolutionary optimization algorithms assume that the evaluation of the objective and constraint functions is straightforward and computationally cheap. However, in many real-world optimization problems, these evaluations involve computationally expensive numerical simulations or physical experiments. Surrogate-assisted evolutionary algorithms (SAEAs) have recently gained increased attention for their performance in solving these types of problems. The main idea of SAEAs is the integration of an evolutionary algorithm with a selected surrogate model that approximates the computationally expensive function. In this paper, we propose a surrogate model based on a Lipschitz underestimation and use it to develop a differential evolution-based algorithm. The algorithm, called Lipschitz Surrogate-assisted Differential Evolution (LSADE), utilizes the Lipschitz-based surrogate model, along with a standard radial basis function surrogate model and a local search procedure. The experimental results on seven benchmark functions of dimensions 30, 50, 100, and 200 show that the proposed LSADE algorithm is competitive compared with the state-of-the-art algorithms under a limited computational budget, being especially effective for the very complicated benchmark functions in high dimensions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
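The Lipschitz underestimation at the heart of LSADE can be sketched directly from its definition, assuming the standard data-driven estimate of the Lipschitz constant (the 1-D test function is illustrative, not from the paper):

```python
import numpy as np

def estimate_k(X, f):
    # Largest slope observed between any pair of evaluated points:
    # a common data-driven estimate of the Lipschitz constant.
    n = len(f)
    return max(abs(f[i] - f[j]) / np.linalg.norm(X[i] - X[j])
               for i in range(n) for j in range(i + 1, n))

def lipschitz_lower_bound(x, X, f, k):
    # If f is k-Lipschitz, every evaluated point x_i certifies
    # f(x) >= f(x_i) - k * ||x - x_i||; the max over i is the tightest
    # such underestimate, usable to pre-screen candidates cheaply
    # before spending an expensive true evaluation.
    d = np.linalg.norm(X - x, axis=1)
    return float(np.max(f - k * d))

X = np.array([[-2.0], [0.0], [3.0]])       # evaluated sample sites
f = np.abs(X).ravel()                      # stand-in "expensive" f(x) = |x|
k = estimate_k(X, f)
bound = lipschitz_lower_bound(np.array([1.0]), X, f, k)
```

A candidate whose lower bound already exceeds the best value found so far can be discarded without an expensive evaluation.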
21. Quantum approximate optimization for combinatorial problems with constraints.
- Author
-
Ruan, Yue, Yuan, Zhiqiang, Xue, Xiling, and Liu, Zhihao
- Subjects
- *
COMBINATORIAL optimization , *SWARM intelligence , *MATHEMATICAL optimization , *QUANTUM computing - Abstract
The Quantum Approximate Optimization Algorithm (QAOA) is an algorithmic framework for finding approximate solutions to combinatorial optimization problems, derived from an approximation to the Quantum Adiabatic Algorithm (QAA). In solving combinatorial optimization problems with constraints in the context of QAOA, one needs to find a way to encode problem constraints into the scheme. In this paper, we propose and discuss several QAOA-based algorithms to solve combinatorial optimization problems with equality and/or inequality constraints. We formalize the encoding method of different types of constraints, and demonstrate the effectiveness and efficiency of the proposed scheme by providing examples and results for some well-known NP optimization problems. Compared to previous constraint-encoding methods, we argue our work leads to a more generalized framework for finding, in the context of QAOA, higher-quality approximate solutions to combinatorial problems with various types of constraints. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
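One standard way to fold an equality constraint into a QAOA cost Hamiltonian is a quadratic penalty term; a classical sketch of the resulting diagonal cost (the instance and penalty weight are illustrative; the paper formalizes this and further constraint encodings):

```python
import numpy as np
from itertools import product

def penalized_cost_diagonal(values, k, lam):
    # Diagonal of a cost Hamiltonian for "maximize values . b subject to
    # sum(b) == k": the equality constraint is folded in as a quadratic
    # penalty lam * (sum(b) - k)^2 added to the (negated) objective.
    diag = {}
    for bits in product([0, 1], repeat=len(values)):
        b = np.array(bits)
        # minus sign: QAOA minimizes the cost Hamiltonian's expectation
        diag[bits] = float(-values @ b + lam * (b.sum() - k) ** 2)
    return diag

diag = penalized_cost_diagonal(np.array([3.0, 1.0, 2.0]), k=2, lam=10.0)
best = min(diag, key=diag.get)
```

With a sufficiently large penalty weight, the ground state of the penalized Hamiltonian is the best feasible assignment, here the two most valuable items.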
22. An investigation of F-Race training strategies for cross domain optimisation with memetic algorithms.
- Author
-
Gümüş, Düriye Betül, Özcan, Ender, Atkin, Jason, and Drake, John H.
- Subjects
- *
MATHEMATICAL optimization , *METAHEURISTIC algorithms , *HEURISTIC - Abstract
Parameter tuning is a challenging and time-consuming task, crucial to obtaining improved metaheuristic performance. There is growing interest in cross-domain search methods, which consider a range of optimisation problems rather than being specialised for a single domain. Metaheuristics and hyper-heuristics are typically used as high-level cross-domain search methods, utilising problem-specific low-level heuristics for each problem domain to modify a solution. Such methods have a number of parameters to control their behaviour, whose initial settings can influence their search behaviour significantly. Previous methods in the literature either fix these parameters based on previous experience, or set them specifically for particular problem instances. There is a lack of extensive research investigating the tuning of these parameters systematically. In this paper, F-Race is deployed as an automated cross-domain parameter tuning approach. The parameters of a steady-state memetic algorithm and the low-level heuristics used by this algorithm are tuned across nine single-objective problem domains, using different training strategies and budgets to investigate whether F-Race is capable of effectively tuning parameters for cross-domain search. The empirical results show that the proposed methods manage to find good parameter settings, outperforming many methods from the literature, with different configurations identified as the best depending upon the training approach used. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
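The racing idea behind F-Race can be sketched with a simplified elimination rule (real F-Race drops configurations via the Friedman test; the burn-in rule, toy configurations, and instances below are assumptions):

```python
def simple_race(configs, evaluate, instances, keep=1):
    # Racing loop in the spirit of F-Race: evaluate all surviving
    # configurations instance by instance and drop clear losers early,
    # so the evaluation budget concentrates on promising settings.
    # F-Race proper eliminates via the Friedman test; a crude "worst
    # cumulative rank after a burn-in of 3 instances" rule stands in.
    alive = list(configs)
    totals = {c: 0.0 for c in alive}
    for t, inst in enumerate(instances, start=1):
        scores = {c: evaluate(c, inst) for c in alive}
        for rank, c in enumerate(sorted(alive, key=scores.get)):
            totals[c] += rank                   # lower score = better rank
        if t >= 3 and len(alive) > keep:
            alive.remove(max(alive, key=totals.get))
    return alive

# Toy tuning task: parameter c is judged by |c - target| on each instance.
survivors = simple_race(
    configs=[0.1, 0.5, 0.9],
    evaluate=lambda c, inst: abs(c - inst),
    instances=[0.45, 0.50, 0.55, 0.50, 0.50],
)
```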
23. Sample-level weights learning for multi-view clustering on spectral rotation.
- Author
-
Yu, Xiao, Liu, Hui, Lin, Yuxiu, Liu, Nan, and Sun, Shanbao
- Subjects
- *
DOCUMENT clustering , *LAPLACIAN matrices , *ROTATIONAL motion , *SPECTRAL imaging , *MATHEMATICAL optimization - Abstract
[Display omitted] • The weights of different views are learned at the sample level. • It can deal with various multi-view clustering scenarios, such as those in which some views contain corrupted data at unknown locations. • Experiments conducted on seven real-life datasets, in comparison with ten baselines, demonstrate the effectiveness of our method. Multi-view clustering usually yields better results than single-view clustering since it utilizes more information from multi-view data. However, in the original multi-view data, some samples may be corrupted in partial views. In this situation, the locations of the corrupted data are often unknown, but there is limited literature regarding this problem. Moreover, most existing multi-view spectral clustering methods need a post-processing algorithm, k-means, after obtaining the partition matrix, which leads to deviations in the clustering results. To resolve these problems, we propose a multi-view spectral clustering method named sample-level weights learning for Multi-view Clustering on Spectral Rotation (SR-MC) in this paper. By learning the weights at the sample level, SR-MC can make full use of the helpful complementary information among different views while reducing the effects of low-quality data for each sample. Therefore, it can deal with various multi-view clustering scenarios, whether the data are complete or corrupted in partial views. To reduce the deviations in the clustering results, a joint framework is designed for combining the learning of the consensus Laplacian matrix, the real-valued partition matrix and the binary indicator matrix. The objective function of SR-MC can be efficiently optimized by an alternating optimization algorithm. Compared with ten other baselines, experiments on seven datasets show the superiority of our method. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. A strategy learning framework for particle swarm optimization algorithm.
- Author
-
Xu, Hua-Qiang, Gu, Shuai, Fan, Yu-Cheng, Li, Xiao-Shuang, Zhao, Yue-Feng, Zhao, Jun, and Wang, Jing-Jing
- Subjects
- *
PARTICLE swarm optimization , *LEARNING strategies , *MATHEMATICAL optimization , *EVOLUTIONARY algorithms , *MACHINE learning , *VIDEO coding - Abstract
Many variants with various strategies have been proposed to improve the efficiency of Particle Swarm Optimization (PSO) algorithm. These strategies are a precious resource waiting to be exploited. We conjecture that some new combinations of strategies selected from different PSO variants may better improve the performance of PSO. Inspired by this idea, this paper proposes a strategy learning framework to learn an optimal combination of strategies and thus derive a new PSO variant based on this combination. In this framework, a strategy pool with strategies selected from existing PSO variants is first constructed. Then, a training engine, implemented by an adaptive differential evolutionary algorithm, is employed to evaluate the performance of strategy combinations on training benchmark functions. Furthermore, a new PSO variant, named SLFPSO, is created based on the strategies learned from training results. This framework provides a novel method to design PSO variants by learning from existing algorithms through a learning mechanism. The performance and scalability of SLFPSO are compared with ten state-of-the-art PSO variants on 10/30/50/100-dimensional CEC2013/2014/2017 benchmark functions. The results verify that SLFPSO performs significantly better than the compared algorithms in most test scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
25. An effective and efficient evolutionary algorithm for many-objective optimization.
- Author
-
Xue, Yani, Li, Miqing, and Liu, Xiaohui
- Subjects
- *
EVOLUTIONARY algorithms , *MATHEMATICAL optimization , *ALGORITHMS , *SWARM intelligence - Abstract
In evolutionary multiobjective optimization, effectiveness refers to how an evolutionary algorithm performs in terms of converging its solutions into the Pareto front and also diversifying them over the front. This is not an easy job, particularly for optimization problems with more than three objectives, dubbed many-objective optimization problems. In such problems, classic Pareto-based algorithms fail to provide sufficient selection pressure towards the Pareto front, whilst recently developed algorithms, such as decomposition-based ones, may struggle to maintain a set of well-distributed solutions on certain problems (e.g., those with irregular Pareto fronts). Another issue in some many-objective optimizers is rapidly increasing computational requirement with the number of objectives, such as hypervolume-based algorithms and shift-based density estimation (SDE) methods. In this paper, we aim to address this problem and develop an effective and efficient evolutionary algorithm (E3A) that can handle various many-objective problems. In E3A, inspired by SDE, a novel population maintenance method is proposed to select high-quality solutions in the environmental selection procedure. We conduct extensive experiments and show that E3A performs better than 11 state-of-the-art many-objective evolutionary algorithms in quickly finding a set of well-converged and well-diversified solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
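The shift-based density estimation (SDE) that inspired E3A's population maintenance can be sketched directly from its definition (the four-point objective matrix is an illustrative example, not from the paper):

```python
import numpy as np

def sde_nearest_distances(F):
    # Shift-based density estimation (SDE): when measuring how crowded
    # individual p is, any objective where a neighbour q beats p is
    # shifted up to p's value (minimization). Poorly converged neighbours
    # then look far away, and dominating neighbours sit at distance 0,
    # so the resulting nearest-neighbour distance rewards convergence
    # and diversity at once (larger = more worth keeping).
    F = np.asarray(F, float)
    n = len(F)
    d = np.full(n, np.inf)
    for p in range(n):
        for q in range(n):
            if p != q:
                shifted = np.maximum(F[q], F[p])
                d[p] = min(d[p], np.linalg.norm(shifted - F[p]))
    return d

d = sde_nearest_distances([[0.0, 1.0], [1.0, 0.0], [0.4, 0.4], [0.9, 0.9]])
```

The dominated point (0.9, 0.9) gets distance 0, so environmental selection discards it first, while the well-spread non-dominated points keep positive distances.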
26. A framework of adaptive fuzzy control and optimization for nonlinear systems with output constraints.
- Author
-
Bao, Dan, Liang, Xiaoling, Ge, Shuzhi Sam, Hao, Zhiwei, and Hou, Baolin
- Subjects
- *
ADAPTIVE fuzzy control , *NONLINEAR systems , *MATHEMATICAL optimization , *PARTICLE swarm optimization , *FUZZY logic - Abstract
This paper presents a framework for adaptive fuzzy control and optimization of nonlinear systems subject to uncertainties and disturbances. The barrier Lyapunov function (BLF) technique was adopted to determine output constraints. To enhance the tracking performance of the system, fuzzy logic systems (FLSs) were introduced to approximate nonlinear terms. Subsequently, a combination of Bayesian optimization and particle swarm optimization (BO-PSO) was employed for gains optimization to further improve the control performance. Furthermore, the multilayer neural networks (MNNs) were applied as surrogate models of nonlinear systems with interval parameters to improve the computational efficiency of the optimization process. Finally, two simulations were conducted to demonstrate the effectiveness of the proposed framework. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
27. Solving multimodal optimization problems using adaptive differential evolution with archive.
- Author
-
Agrawal, Suchitra and Tiwari, Aruna
- Subjects
- *
DIFFERENTIAL evolution , *EVOLUTIONARY algorithms , *EVOLUTIONARY computation , *MEMETICS , *MATHEMATICAL optimization , *GRIDS (Cartography) - Abstract
Evolutionary algorithms are widely used to solve multimodal optimization problems (MMOPs). The two main challenges faced while solving MMOPs are locating multiple optimal solutions and improving the accuracy of these solutions. In this paper, we have proposed an adaptive algorithm based on differential evolution using a distributed framework in the mutation strategy and an elite archive mechanism, termed Adaptive Differential Evolution with Archive, to deal with these challenges. The following techniques have been proposed and integrated to locate multiple diverse optimal solutions with refined accuracy. Firstly, each individual in the population is treated as a possible exemplar and is expected to reach an optimal value by exploring the nearby search space. The search space is controlled by using an adjustable range mechanism. An adaptive mutation strategy is then used to ensure that all the good solutions or individuals of the population move to better positions. Next, an elite archive is constructed for stagnated individuals to avoid getting stuck in local optima. The experimental results on the 20 multimodal functions from the IEEE Congress on Evolutionary Computation 2013 illustrate that the proposed algorithm outperforms existing multimodal optimization algorithms in terms of finding a greater number of accurate solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
28. A new maximal flow algorithm for solving optimization problems with linguistic capacities and flows.
- Author
-
Akram, Muhammad, Habib, Amna, and Allahviranloo, Tofigh
- Subjects
- *
PROBLEM solving , *WATER pipelines , *WATER distribution , *MATHEMATICAL optimization , *FUZZY numbers , *MULTICASTING (Computer networks) - Abstract
The maximal flow problems (MFPs) are among the most significant optimization problems in network flow theory, with widespread and diverse applications. To represent qualitative aspects of uncertainty in the maximal flow model, which asks for the largest amount of flow transported from one vertex to another, the use of linguistic variables provides an effective means for experts to express their views. In this paper, we first define trapezoidal Pythagorean fuzzy numbers (TrPFNs), along with some new arithmetic operations which cover the gaps in previously defined operations. For the defuzzification of TrPFNs, we introduce a ranking procedure based on value and ambiguity indices. This work puts forward a theoretical framework for a new Pythagorean fuzzy maximal flow algorithm (PFMFA), which helps to solve different optimization problems with PF information by considering linguistic capacities and flows. The implementation of the algorithm is elaborated through two case studies. Firstly, we examine the maximum flow of a water distribution pipeline network in Pyigyitagon Township, Mandalay, Myanmar. Secondly, we compute the maximum PF power flow in a 14-bus electricity network provided by the IEEE working group, using the example data from the University of Washington. The results illustrate the superiority of the proposed method and give a detailed analysis of the flows associated with several practical performance measures. In addition, the Pythagorean fuzzy optimal flows corresponding to each network arc are compared, and a performance comparison of our method is investigated, showing the increasing and decreasing trends of the backward and forward arcs of the network, respectively. Moreover, a runtime analysis of existing well-known maximal flow algorithms is provided. Finally, we present the advantages of our technique to promote its cogency. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
29. Finding top-K solutions for the decision-maker in multiobjective optimization.
- Author
-
Luo, Wenjian, Shi, Luming, Lin, Xin, Zhang, Jiajia, Li, Miqing, and Yao, Xin
- Subjects
- *
MATHEMATICAL optimization , *EVOLUTIONARY algorithms , *GROUP decision making , *MULTIPLE criteria decision making - Abstract
Multiobjective optimization problems (MOPs) are optimization problems with multiple conflicting objectives. Generally, an optimization algorithm can find a large number of optimal solutions for MOPs, which easily overwhelm decision makers (DMs) and make decision-making difficult. Preference-based evolutionary multiobjective optimization (EMO) aims to find the partial optima in the regions preferred by the DM. Although it narrows the scope of the optimal solutions, it usually still returns a population of optimal solutions (typically 100 or larger in EMO) with a small distance between adjacent optima. Top-K, a well-established research subject in many fields concerned with finding the best K solutions, may be a direction for reducing the number of optimal solutions. In this paper, first, we introduce the top-K notion into preference-based EMO and propose the top-K model to obtain the best K individuals for MOPs. Then, with the top-K model, we propose NSGA-II-TopK and SPEA2-TopK to search for the top-K preferred solutions for preference-based continuous and combinatorial MOPs, respectively. Finally, the proposed algorithms are compared with several representative preference-based EMO algorithms in different preference situations for MOPs. Experimental results show that the proposed algorithms perform strongly against the compared algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
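The top-K model can be illustrated as a post-hoc filter over a population (the reference-point distance used as the preference criterion is an assumption; the paper embeds the model inside NSGA-II and SPEA2 rather than applying it after the fact):

```python
import numpy as np

def top_k_preferred(F, ref, k):
    # Bare-bones top-K: keep only the non-dominated rows of objective
    # matrix F (minimization), then return the indices of the k points
    # closest to the decision maker's reference point.
    F = np.asarray(F, float)
    n = len(F)
    nd = [i for i in range(n)
          if not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                     for j in range(n))]
    nd.sort(key=lambda i: np.linalg.norm(F[i] - ref))
    return nd[:k]

top2 = top_k_preferred([[0.0, 1.0], [1.0, 0.0], [0.5, 0.5], [1.0, 1.0]],
                       ref=np.array([0.0, 0.0]), k=2)
```

The dominated point (1, 1) is excluded outright, and only the K = 2 preferred optima are returned instead of a whole population.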
30. A new multi-objective optimization algorithm based on combined swarm intelligence and Monte Carlo simulation.
- Author
-
Zhang, Kangkang and Song, Yan
- Subjects
- *
MONTE Carlo method , *SWARM intelligence , *MATHEMATICAL optimization - Abstract
Currently, multi-objective optimization is an important problem in various fields. This paper proposes an innovative multi-objective flower pollination algorithm combined with Monte Carlo simulation (called MFPAMC). MFPAMC incorporates the following three technologies: the flower pollination algorithm (FPA), the Monte Carlo method (MC), and a sorting function. Initially, the FPA is used to search for the optimal solution, which improves the search efficiency. When the solution obtained by the FPA is no longer updated, MC simulation is adopted to further find the optimal solution. The sorting function prevents the loss of objective function information, and the result obtained is objective and practical. A test function is used to verify whether the combination of the FPA and MC simulation can improve the search accuracy. The results confirm that the search accuracy is improved. Finally, MFPAMC is used for empirical analysis, and the experimental results are compared with those of four other methods. The comparison results confirm that the optimized results are ideal and further indicate that MFPAMC has significant potential for practical application. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
31. A back-diffusion median integrated evolutionary optimization algorithm.
- Author
-
Kang, Lanlan, Liu, Naiwei, Lu, Xinrong, Cao, Wenliang, and Peng, Yong
- Subjects
- *
EVOLUTIONARY algorithms , *MATHEMATICAL optimization , *PARTICLE swarm optimization , *DIFFERENTIAL evolution - Abstract
• A novel mean-median velocity updating formula is proposed to guide individuals' motion. • A new random differential mutation strategy is devised. • A back-diffusion strategy is proposed to restore the diversity of the population. To accelerate convergence and enhance robustness, a back-diffusion median integrated evolutionary algorithm (BMIEA) is proposed in this paper, combining the advantages of particle swarm optimization (PSO) and the differential evolution algorithm (DE). The BMIEA includes three main optimization strategies. (1) Firstly, a new mean-median velocity updating formula is proposed to control the optimal path of individuals. It can accelerate convergence by reducing the adverse effects of outliers on the population. (2) Secondly, a random differential mutation (RDM) inspired by DE is devised to prevent individuals from becoming trapped in local optima, giving them one more chance to explore an optimal position while exploiting a local region in each evolution. (3) Thirdly, a targeted exploration method, i.e., a back-diffusion operation, inspired by the duality principle, is proposed to further accelerate the convergence rate and enhance the robustness of the algorithm. A series of simulation experiments has verified that the BMIEA algorithm is competitive compared with 13 state-of-the-art GOBL-based optimization algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
34. A self-exploratory competitive swarm optimization algorithm for large-scale multiobjective optimization.
- Author
-
Qi, Sheng, Zou, Juan, Yang, Shengxiang, Jin, Yaochu, Zheng, Jinhua, and Yang, Xu
- Subjects
- *
MATHEMATICAL optimization , *EVOLUTIONARY algorithms , *LEARNING ability , *FLIPPED classrooms - Abstract
With the popularity of "flipped classrooms," teachers pay more attention to cultivating students' autonomous learning ability while imparting knowledge. Inspired by this, this paper proposes a Self-exploratory Competitive Swarm Optimization algorithm for Large-scale Multiobjective Optimization (SECSO). Its idea is very simple, and there are no parameters that need to be adjusted. Particles evolve by exploring their neighboring space and learning from other particles in the swarm, thereby simultaneously enhancing the diversity and convergence performance of the algorithm. Compared with eight state-of-the-art large-scale multiobjective evolutionary algorithms, the proposed method exhibited outstanding performance on LSMOP problems with up to 10,000 decision variables. Unlike most existing large-scale evolutionary algorithms, which usually require a large number of objective evaluations, SECSO shows the ability to find a set of well-converged and diverse non-dominated solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
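The pairwise competition that competitive swarm optimizers such as SECSO build on can be sketched as follows (a generic single-objective CSO step on a sphere function; the mean-attraction weight is an assumption, and SECSO's self-exploration step and multiobjective machinery are omitted):

```python
import numpy as np

rng = np.random.default_rng(2)

def cso_step(pop, vel, f, phi=0.1):
    # One generation of a basic competitive swarm optimizer (CSO):
    # particles are paired at random, each pair's loser updates its
    # velocity toward the winner (and mildly toward the swarm mean),
    # while winners pass through unchanged.
    n, dim = pop.shape
    idx = rng.permutation(n)
    mean = pop.mean(axis=0)
    for a, b in zip(idx[::2], idx[1::2]):
        w, l = (a, b) if f(pop[a]) <= f(pop[b]) else (b, a)
        r1, r2, r3 = rng.random((3, dim))
        vel[l] = r1 * vel[l] + r2 * (pop[w] - pop[l]) + phi * r3 * (mean - pop[l])
        pop[l] = pop[l] + vel[l]
    return pop, vel

sphere = lambda x: float((x ** 2).sum())
pop = rng.uniform(-5.0, 5.0, (40, 5))
vel = np.zeros_like(pop)
start_best = min(sphere(x) for x in pop)
for _ in range(100):
    pop, vel = cso_step(pop, vel, sphere)
end_best = min(sphere(x) for x in pop)
```

Because only losers are updated, there are no per-particle memory terms (pbest/gbest) and essentially no parameters to tune, which is the property SECSO inherits.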
35. Intelligent ensembling of auto-ML system outputs for solving classification problems.
- Author
-
Consuegra-Ayala, Juan Pablo, Gutiérrez, Yoan, Almeida-Cruz, Yudivian, and Palomar, Manuel
- Subjects
- *
PROBLEM solving , *MACHINE learning , *MATHEMATICAL optimization , *SOURCE code , *SCIENTIFIC community , *SWARM intelligence , *VIRTUAL communities - Abstract
[Display omitted] • A two-phase optimization system for solving classification problems. • Combining Auto-ML pipelines to improve overall performance. • Intelligent selection of ensemble methods. • Best results delivered by double-fault measure and 20 or 50 maximum number of base models. Automatic Machine Learning (Auto-ML) tools enable the automatic solution of real-world problems through machine learning techniques. These tools tend to be more time consuming than standard machine learning libraries, therefore, exploiting all the available resources to the full is a valuable feature. This paper presents a two-phase optimization system for solving classification problems. The system is designed to produce more robust classifiers by exploiting the different architectures that are generated while solving classification problems with Auto-ML tools, particularly AutoGOAL. In the first phase, the system follows a probabilistic strategy to find the best combination of algorithms and hyperparameters to generate a collection of base models according to certain diversity criteria; and in the second, it follows similar Auto-ML strategies to ensemble those models. The HAHA 2019 challenge corpus and the Adult dataset were used to evaluate the system. The experimental results show that: i) a better solution can be built by ensembling a subset of the already tested models; ii) the performance of ensemble methods depends on the collection of base models used; and, iii) ensuring diversity using the double-fault measure produces better results than the disagreement measure. The source code is available online for the research community. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
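The two diversity measures the abstract compares are simple to state; a sketch from their standard definitions (the toy error vectors are illustrative):

```python
def double_fault(err_a, err_b):
    # Double-fault measure between two classifiers: the fraction of
    # samples BOTH get wrong (lower = more diverse pair). Inputs are
    # boolean sequences, True where the classifier misclassified.
    return sum(a and b for a, b in zip(err_a, err_b)) / len(err_a)

def disagreement(err_a, err_b):
    # Disagreement measure: fraction of samples where exactly one of
    # the two classifiers is wrong (higher = more diverse pair).
    return sum(a != b for a, b in zip(err_a, err_b)) / len(err_a)

err_a = [True, True, False, False]
err_b = [True, False, True, False]
```

Selecting base models that minimize pairwise double-fault directly penalizes shared mistakes, which is the criterion the experiments found to ensemble best.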
36. Adjustable driving force based particle swarm optimization algorithm.
- Author
-
Yu, Fei, Tong, Lei, and Xia, Xuewen
- Subjects
- *
PARTICLE swarm optimization , *MATHEMATICAL optimization - Abstract
• In this paper, a novelty-based driving force is introduced to overcome deficiencies of the fitness-based driving force. • During the evolution, two types of exemplars, with high fitness and high novelty respectively, are saved in two archives. • In each generation, a particle chooses one exemplar from each of the two archives to update its velocity. • Three time-varying parameters are introduced to adjust a particle's learning weights for the two exemplars, aiming to satisfy the distinct requirements of different evolution stages. The particle swarm optimization algorithm (PSO) is a popular optimizer in which each particle selects its learning exemplars based on their fitness. Thus, the search process of each particle can be seen as driven by a fitness-based force. Intuitively, this driving force is conducive to the optimizing process. However, it may bring about premature convergence of the population. In this work, a novelty-based driving force is put forward to overcome deficiencies of the fitness-based driving force. In the newly proposed adjustable driving force based PSO, named ADFPSO, two types of exemplars, with high fitness and high novelty respectively, are saved in two archives. In each generation, a particle chooses one exemplar from each of the two archives to update its velocity. In addition, three time-varying parameters are introduced to adjust the particle's learning weights for the two exemplars, aiming to satisfy the distinct requirements of different evolution stages. Comprehensive properties of ADFPSO are extensively verified by a set of experiments, in which nine PSO variants are adopted as peer algorithms and two CEC test suites are selected as optimization problems. Moreover, distinct characteristics of the proposed novelty-based driving force are also analyzed in a few further experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
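The novelty archive idea can be sketched with the common novelty-search score, i.e. mean distance to the k nearest archived exemplars (ADFPSO's exact novelty measure is not given in the abstract, so this definition is an assumption):

```python
import numpy as np

def novelty(x, archive, k=3):
    # Novelty score in the sense of novelty search: mean distance from x
    # to its k nearest neighbours in the archive. A high score means x
    # lies in a sparsely visited region, making it a useful exemplar for
    # the novelty-based driving force.
    d = np.sort([np.linalg.norm(x - a) for a in archive])
    return float(d[:k].mean())

archive = [np.array([v]) for v in (0.0, 1.0, 2.0, 10.0)]
score = novelty(np.array([0.0]), archive)      # neighbours at 0, 1, 2
```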
37. A parameter-free particle swarm optimization algorithm using performance classifiers.
- Author
-
Harrison, Kyle Robert, Ombuki-Berman, Beatrice M., and Engelbrecht, Andries P.
- Subjects
- *
PARTICLE swarm optimization , *MACHINE learning , *METAHEURISTIC algorithms , *PREDICTION models , *MATHEMATICAL optimization - Abstract
This paper presents an investigation into the short-term versus long-term performance of various particle swarm optimization (PSO) control parameter configurations. While evidence suggests that the best PSO parameter values to employ are time-dependent, this paper provides an in-depth examination of a small set of parameter values to provide a more concrete quantification of the performance degradation observed with specific control parameter configurations over time. Given that the short-term performance is not necessarily indicative of long-term performance, this paper proposes that machine learning techniques be used to build predictive models based on two easily-observable landscape characteristics. Finally, using the predictive models as a basis, this paper also proposes a parameter-free PSO algorithm, which performs on par with other top-performing PSO variants, namely the three best performing static PSO configurations, particle swarm optimization with time-varying acceleration coefficients (PSO-TVAC), and particle swarm optimization with improved random constants (PSO-iRC). [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
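One of the baselines named in the abstract, PSO-TVAC, uses linearly time-varying acceleration coefficients. A common formulation is sketched below; the (2.5, 0.5) ranges are the usual literature defaults, not necessarily the values used in this study.

```python
def tvac_coefficients(t, t_max, c1_range=(2.5, 0.5), c2_range=(0.5, 2.5)):
    """Linear time-varying acceleration coefficients as in PSO-TVAC:
    the cognitive weight c1 decays over the run while the social
    weight c2 grows, shifting emphasis from exploration to exploitation."""
    frac = t / t_max
    c1 = c1_range[0] + (c1_range[1] - c1_range[0]) * frac
    c2 = c2_range[0] + (c2_range[1] - c2_range[0]) * frac
    return c1, c2
```

Such a fixed schedule is exactly the kind of time-dependence the paper's predictive models aim to replace with landscape-driven parameter selection.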
38. Optimization of linear problems subjected to the intersection of two fuzzy relational inequalities defined by Dubois-Prade family of t-norms.
- Author
-
Ghodousian, Amin
- Subjects
- *
MATHEMATICAL optimization , *FUZZY relational calculus , *FUZZY mathematics , *FUZZY relational equations , *FUZZY systems - Abstract
In this paper, the optimization of a linear objective function with fuzzy relational inequality constraints is investigated. The feasible region is formed by the intersection of two fuzzy inequality systems, with the Dubois-Prade family of t-norms considered as the fuzzy composition. The best-known continuous t-norms are Archimedean, such as the Frank, Yager, Hamacher, Sugeno-Weber and Schweizer-Sklar families; an interesting family of t-norms that is not Archimedean was introduced by Dubois and Prade. In this paper, the resolution of the feasible region of the problem is initially investigated when it is defined with the max-Dubois-Prade composition. A necessary and sufficient condition, along with three further necessary conditions, is derived for determining the feasibility of the problem. Moreover, two procedures are presented with the aim of simplifying the resulting linear problems. A method is proposed to generate random feasible max-Dubois-Prade fuzzy relational inequalities, and an algorithm is accordingly presented to solve the problem. Finally, an example is described to illustrate this algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
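The Dubois-Prade t-norm named in the abstract has a simple closed form, T(a, b) = ab / max(a, b, α). A minimal sketch follows; the function names are illustrative, and α is taken in (0, 1] to avoid a 0/0 case when a = b = 0.

```python
def dubois_prade(a, b, alpha):
    """Dubois-Prade t-norm: T(a, b) = a*b / max(a, b, alpha), alpha in (0, 1].
    alpha -> 0 recovers the minimum t-norm; alpha = 1 gives the product."""
    return a * b / max(a, b, alpha)

def max_dp_composition(A, x, alpha):
    """max-Dubois-Prade composition (A o x)_i = max_j T(a_ij, x_j), the
    operation defining each fuzzy relational inequality constraint."""
    return [max(dubois_prade(aij, xj, alpha) for aij, xj in zip(row, x))
            for row in A]
```

For a ≥ b with a ≥ α the norm reduces to b (the minimum), which is why the family interpolates between min and product without being Archimedean.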
39. An adaptive differential evolution framework based on population feature information.
- Author
-
Cao, Zijian, Wang, Zhenyu, Fu, Yanfang, Jia, Haowen, and Tian, Feng
- Subjects
- *
DIFFERENTIAL evolution , *GLOBAL optimization , *MATHEMATICAL optimization , *STANDARD deviations , *ARCHIVES , *PARAMETERS (Statistics) , *SUCCESS - Abstract
Differential Evolution (DE) is an effective global optimization algorithm, and many adaptive variants of it have been proposed to solve engineering problems. Population feature information refers to statistical features computed over all individuals in the decision space, and it can reflect characteristics of the problem being solved. However, this information has not been fully utilized by DE's adaptive variants; as a result, those variants do not obtain promising performance on nonlinear, non-differentiable and non-separable multi-modal problems. To make adequate extraction and effective use of population feature information, we propose an adaptive differential evolution framework based on population feature information, named PFI for short. In the PFI framework, the population feature information consists of the standard deviation of the fitness values and the sum of the standard deviations of each dimension of the population. Besides, a population feature information archive is designed to store the population feature information and success parameters, and the utilization mechanism assigns historical success parameters with high population feature similarity to the current population. Four widely used DE mutation strategies are incorporated into the PFI framework, and its performance is evaluated on the CEC2005, CEC2015 and CEC2020 benchmark functions and two real-world applications. Experimental results demonstrate that the PFI framework can significantly improve the performance of the four popular DE mutation strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
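The two statistics the abstract names as population feature information are straightforward to compute. This sketch returns both values separately; how PFI combines or normalizes them, and how similarity between archive entries is measured, is not specified here, so that part is an assumption.

```python
import statistics

def population_features(population, fitnesses):
    """Population feature information in the sense described for PFI:
    (1) the standard deviation of the fitness values, and
    (2) the sum of the per-dimension standard deviations of the population
    in decision space. Population is a list of equal-length vectors."""
    fit_std = statistics.pstdev(fitnesses)
    dim_std_sum = sum(statistics.pstdev(col) for col in zip(*population))
    return fit_std, dim_std_sum
```

The first value reflects convergence in objective space, the second spread in decision space, which is why the pair can discriminate between search states that a single statistic would conflate.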
40. An efficient mixture sampling model for gaussian estimation of distribution algorithm.
- Author
-
Dang, Qianlong, Gao, Weifeng, and Gong, Maoguo
- Subjects
- *
DISTRIBUTION (Probability theory) , *GAUSSIAN distribution , *GAUSSIAN mixture models , *GLOBAL optimization , *MATHEMATICAL optimization - Abstract
Estimation of distribution algorithm (EDA) is a stochastic optimization algorithm based on a probability distribution model and has been widely applied in global optimization. However, the random sampling of Gaussian EDA (GEDA) usually suffers from poor diversity and premature convergence, which severely limits its performance. This paper analyzes the shortcomings of random sampling and develops an efficient mixture sampling model (EMSM). EMSM can explore more promising regions and utilize the unsuccessful mutation vectors, achieving a good tradeoff between diversity and convergence. Moreover, a feasibility analysis of EMSM is given. A new GEDA variant named EMSM-EDA is developed, which combines EMSM with the enhanced Gaussian estimation of distribution algorithm (EDA2). The experimental results on the IEEE CEC2013 and IEEE CEC2014 test suites demonstrate that EMSM-EDA is efficient and competitive. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
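For context, the plain Gaussian EDA step whose random sampling the paper criticizes can be sketched as follows. This is a generic GEDA with an independent Gaussian per dimension, not the paper's EMSM; all names are illustrative.

```python
import random

def fit_gaussian(elite):
    """Fit an independent Gaussian per dimension to the elite individuals."""
    dims = list(zip(*elite))
    mu = [sum(d) / len(d) for d in dims]
    sigma = [(sum((x - m) ** 2 for x in d) / len(d)) ** 0.5
             for d, m in zip(dims, mu)]
    return mu, sigma

def gaussian_eda_step(population, fitness_fn, n_select, rng=None):
    """One basic GEDA iteration (minimization): select the best individuals,
    estimate the model, then resample the whole population from it."""
    rng = rng or random.Random(0)
    elite = sorted(population, key=fitness_fn)[:n_select]
    mu, sigma = fit_gaussian(elite)
    return [[rng.gauss(m, s) for m, s in zip(mu, sigma)]
            for _ in range(len(population))]
```

Because every offspring is drawn from the same fitted model, the sample variance shrinks generation after generation, which is exactly the diversity-loss failure mode EMSM is designed to counter.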
41. Multi-objective scheduling of priority-based rescue vehicles to extinguish forest fires using a multi-objective discrete gravitational search algorithm.
- Author
-
Tian, Guangdong, Fathollahi-Fard, Amir M., Ren, Yaping, Li, Zhiwu, and Jiang, Xingyu
- Subjects
- *
SEARCH algorithms , *FOREST fires , *EMERGENCY management , *FIRE engines , *SCHEDULING , *MATHEMATICAL optimization - Abstract
• A novel multi-objective scheduling model is developed for the emergency rescue planning problem.
• The proposed model minimizes the fire extinguishing time, delay time, and number of fire rescue vehicles.
• The fire spreading speed is considered in the optimization model for the first time.
• A revised multi-objective discrete gravitational search algorithm is designed to produce Pareto solutions.
• An empirical case study in Heilongjiang Province, China, is presented.
The emergency scheduling of priority-based rescue vehicles for extinguishing forest fires is a complex optimization problem facing many challenges and difficulties in optimizing operational costs, improving efficiency, and making robust decisions. The main challenge is to minimize the number of fire engines while simultaneously minimizing the firefighting time and the firefighting delay time. The main difficulty is to account for the severity of each fire point given the limited vehicle resources. Hence, this paper develops a new multi-objective scheduling model for extinguishing forest fires that considers rescue priority under limited rescue resources. To solve the proposed problem efficiently and robustly, another novelty is a hybrid optimization algorithm based on a modified discrete gravitational search algorithm. To confirm the applicability of the proposed approach, an actual forest fire emergency scheduling problem in Heilongjiang Province, China, is simulated. The proposed hybrid optimization algorithm is tested to check the feasibility of the Pareto solutions, and it is compared with a set of well-known and recent algorithms to show its efficiency. Finally, after a comprehensive discussion, the results show that this work provides an accurate and effective tool for emergency scheduling of forest fire extinguishing. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
42. Fixed-time consensus for multi-agent systems with objective optimization on directed detail-balanced networks.
- Author
-
Yu, Zhiyong, Sun, Jian, Yu, Shuzhen, and Jiang, Haijun
- Subjects
- *
MULTIAGENT systems , *MATHEMATICAL optimization , *GLOBAL optimization , *LYAPUNOV stability , *STABILITY theory , *SWARM intelligence - Abstract
This paper investigates the distributed control of multi-agent systems (MASs) with objective optimization on directed detail-balanced networks, in which the global optimization function is expressed as a convex combination of local objectives of agents. First, a directed and detail-balanced network depending on the weights of an optimization function is constructed, and a distributed consensus protocol with gradients of local objectives is proposed over the designed network. Using Lyapunov stability theory and a projection technique, we prove that the proposed protocol not only makes all agents achieve consensus in a fixed-time interval but can also solve the global optimization problem asymptotically. Moreover, the optimization problem with box constraints is studied, and a δ-exact penalty method is employed to eliminate the constraints. Similarly, a distributed fixed-time consensus protocol with gradient measurement is developed, and we prove that the optimal solution can be reached asymptotically. Finally, two examples are presented to show the efficacy of the theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
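The general shape of a consensus-plus-gradient protocol like the one the abstract describes can be sketched in discrete time. This is a generic Euler-discretized update, not the paper's fixed-time protocol; the step sizes `alpha` and `beta` and the weight matrix `A` are illustrative.

```python
def consensus_gradient_step(x, grads, A, alpha, beta):
    """One generic consensus-plus-gradient update for scalar agent states:
    each agent i mixes neighbours' states through the weights A[i][j] and
    descends its local objective gradient grads[i]."""
    n = len(x)
    return [x[i]
            + alpha * sum(A[i][j] * (x[j] - x[i]) for j in range(n))  # consensus term
            - beta * grads[i]                                          # local gradient term
            for i in range(n)]
```

With zero gradients the update is pure averaging toward agreement; the paper's contribution is making the agreement phase finish in fixed time while the gradient terms drive the common state to the global optimum.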
43. A reinforcement learning level-based particle swarm optimization algorithm for large-scale optimization.
- Author
-
Wang, Feng, Wang, Xujie, and Sun, Shilei
- Subjects
- *
PARTICLE swarm optimization , *MATHEMATICAL optimization , *REINFORCEMENT learning , *EVOLUTIONARY algorithms , *LEARNING strategies - Abstract
• A level-based population structure is adopted to improve exploration ability.
• A reinforcement learning strategy is introduced to improve the search efficiency.
• A level competition mechanism is proposed to improve the convergence performance.
• RLLPSO achieves good performance on large-scale optimization.
Large-scale optimization problems (LSOPs) have drawn increasing attention from researchers because of their resemblance to real-world problems. However, due to the complex search space and massive numbers of local optima, it is challenging to simultaneously guarantee the diversity and convergence of an algorithm. As a widely used evolutionary algorithm with fast convergence, particle swarm optimization (PSO) shows competitive performance on some LSOPs; nevertheless, it can easily get trapped in local optima. Overcoming the complexity of LSOPs and improving search efficiency have become vital issues. Reinforcement learning has proven to be an effective technique for self-adaptive adjustment, which can help search large-scale solution spaces more effectively. In this paper, we propose a large-scale optimization algorithm called the reinforcement learning level-based particle swarm optimization algorithm (RLLPSO). In RLLPSO, a level-based population structure is constructed to improve population diversity, a reinforcement learning strategy for level number control is employed to improve search efficiency, and a level competition mechanism is introduced to further enhance convergence. The experimental results on two large-scale benchmark test suites demonstrate that, compared with five state-of-the-art large-scale optimization algorithms, RLLPSO is superior in most cases. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
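The level-based population structure underlying RLLPSO can be sketched as follows. This is a generic level-based swarm update in the spirit of level-based learning swarm optimizers; the reinforcement-learning control of the level count and the level competition mechanism are omitted, and all names are illustrative.

```python
import random

def level_partition(population, fitness_fn, n_levels):
    """Sort the swarm best-first (minimization) and split it into
    equal-sized levels, level 0 holding the best particles."""
    ranked = sorted(population, key=fitness_fn)
    size = len(ranked) // n_levels
    return [ranked[i * size:(i + 1) * size] for i in range(n_levels)]

def level_update(x, v, levels, k, rng=None):
    """Update a particle in level k (k >= 1): it learns from two exemplars
    drawn from higher (better) levels, or twice from level 0 when k == 1."""
    rng = rng or random.Random(0)
    l1, l2 = sorted(rng.sample(range(k), 2)) if k >= 2 else (0, 0)
    x1, x2 = rng.choice(levels[l1]), rng.choice(levels[l2])
    new_v = [rng.random() * vi + rng.random() * (a - xi) + rng.random() * (b - xi)
             for vi, a, b, xi in zip(v, x1, x2, x)]
    return [xi + dvi for xi, dvi in zip(x, new_v)], new_v
```

Because exemplars come from many different better particles rather than one global best, the structure preserves diversity, which is the property the abstract credits for handling LSOPs.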
44. Information diffusion-aware likelihood maximization optimization for community detection.
- Author
-
Zhang, Zheng, Wan, Jun, Zhou, Mingyang, Lu, Kezhong, Chen, Guoliang, and Liao, Hao
- Subjects
- *
INFORMATION networks , *MATHEMATICAL optimization , *SCALABILITY , *PRIOR learning - Abstract
As a hot research topic in network science, community detection has attracted much attention from scholars. In recent years, many methods have emerged to discover the underlying community structure of a network. However, most of these methods require the network topology as prior knowledge, which is not feasible in practical cases. When information diffusion occurs in the network, one can observe cascade data recording which nodes participate in the propagation process, and this reflects the network's community structure to some extent. In this paper, we build a likelihood maximization model utilizing the diffusion information and propose two different optimization algorithms to obtain the community division of the network. Extensive experiments on various datasets show that our proposed methods achieve significant improvements in accuracy, scalability, and efficiency of community detection compared with existing state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
45. Distributed parallel deep learning with a hybrid backpropagation-particle swarm optimization for community detection in large complex networks.
- Author
-
Nasser Al-Andoli, Mohammed, Chiang Tan, Shing, and Ping Cheah, Wooi
- Subjects
- *
DEEP learning , *PARTICLE swarm optimization , *CENTRAL processing units , *GRAPHICS processing units , *MATHEMATICAL optimization , *BLENDED learning - Abstract
In this paper, a parallel deep learning-based community detection method for large complex networks (CNs) is proposed. First, a CN partitioning method is employed to divide the CN into multiple chunks, improving efficiency in terms of space and time complexity. Next, the method is integrated with two optimization algorithms: (1) backpropagation (BP), which optimizes the deep learning model locally within each chunk of the CN; and (2) particle swarm optimization (PSO), which improves the BP optimization across all CN chunks. PSO utilizes a multi-objective function to improve the effectiveness of the proposed method. In addition, a distributed environment is set up to conduct parallel optimization, so that multiple local optimizations can be performed simultaneously. A set of 16 real-world CNs, ranging from small to large, is used to verify the effectiveness and efficiency of the method in a benchmark study. The proposed method is implemented on multiple machines with central processing unit (CPU) and graphics processing unit (GPU) devices. The results reveal the effective role of the proposed deep learning with hybrid BP-PSO optimization in detecting communities in large CNs, requiring minimal execution time on both CPU and GPU devices. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
46. Dictionary-based transfer learning with Universum data.
- Author
-
Che, Zhiyong, Liu, Bo, Xiao, Yanshan, and Lin, Luyue
- Subjects
- *
TRANSFER of training , *LAGRANGIAN functions , *KNOWLEDGE transfer , *MACHINE learning , *MATHEMATICAL optimization - Abstract
Recently, transfer learning has become a popular method in machine learning, transferring the knowledge learned from a source task to a target task. In practice, we can obtain third-class examples beyond the positive and negative samples, called Universum data, and Universum data can improve the performance of a classifier. In this paper, we propose a dictionary-based transfer learning with Universum data method, named U-DTL. In the proposed method, we first introduce the Universum data into the model via the ε-insensitive loss. We then embed two dictionaries, for the source and target domains, into a new model, and put forward a similarity constraint between the two domains' dictionaries to capture the relationship among source- and target-domain samples. Further, we use gradient-based optimization and the SVD algorithm to alternately optimize and update the dictionaries, and utilize a Lagrangian function to iteratively optimize the proposed U-DTL model and obtain the classifier. Finally, extensive experiments on benchmark datasets compare U-DTL with the baselines, and a Wilcoxon test on the results shows that the proposed U-DTL method performs better than previous methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
47. Low rank label subspace transformation for multi-label learning with missing labels.
- Author
-
Kumar, Sanjay and Rastogi, Reshma
- Subjects
- *
LOW-rank matrices , *MATHEMATICAL optimization , *CONVEX functions - Abstract
• An integrated framework to recover missing labels and train the multi-label classifier by learning label correlations and transforming the label subspace.
• Maximally separated label subspaces for label differentiation, and a low-rank structure capturing the label-specific subspace.
• Modelling global label correlations to learn an auxiliary label matrix for missing label recovery.
• Experimental evaluations on fourteen multi-label datasets and performance comparison with six state-of-the-art methods prove the effectiveness of the model.
Multi-label datasets often contain label information with missing values, and recovering them is a non-trivial challenge. Several methods augment the observed label matrix by constructing auxiliary labels and learning high-order label correlations. Other techniques exploit the low rank of the label matrix to capture a mix of label correlations. Both approaches rely on label correlations, though in different ways. In this paper, we propose a unified framework that captures the label correlations utilizing both an auxiliary label matrix and low-rank constraints on the estimated labels. Our model also enforces maximal separation among different label subspaces for better label differentiation. The proposed method captures local and global correlations using Low Rank label subspace transformation for Multi-label learning with Missing Labels (LRMML). The model considers an auxiliary label matrix which facilitates the recovery of missing label information. The low-rank constraint on predictions ensures that local label structures are captured, and the maximal inter-label subspace separation helps identify discriminatory label correlations. The proposed method builds a multi-label classification model by solving a multivariate difference-of-convex objective function using a surrogate optimization technique and alternating minimization.
Empirical results on several benchmark datasets validate the effectiveness of the proposed method against state-of-the-art multi-label learning approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
48. A distributed adaptive optimization spiking neural P system for approximately solving combinatorial optimization problems.
- Author
-
Dong, Jianping, Zhang, Gexiang, Luo, Biao, Yang, Qiang, Guo, Dequan, Rong, Haina, Zhu, Ming, and Zhou, Kang
- Subjects
- *
COMBINATORIAL optimization , *KNAPSACK problems , *EVOLUTIONARY algorithms , *SWARM intelligence , *MATHEMATICAL optimization , *QUANTUM information science - Abstract
• Proposes a distributed adaptive optimization spiking neural P system with a distributed population structure and a new adaptive learning rate considering population diversity.
• Extensive experiments on knapsack problems show that DAOSNPS obtains much better and more stable solutions than OSNPS, AOSNPS and two other optimization algorithms.
An optimization spiking neural P system (OSNPS) aims to obtain approximate solutions to combinatorial optimization problems without the aid of the evolutionary operators of evolutionary algorithms or swarm intelligence algorithms. To develop this promising and significant research direction, this paper proposes a distributed adaptive optimization spiking neural P system (DAOSNPS) with a distributed population structure and a new adaptive learning rate that takes population diversity into account. Extensive experiments on knapsack problems show that DAOSNPS obtains much better solutions than OSNPS, the adaptive optimization spiking neural P system (AOSNPS), the genetic quantum algorithm and a novel quantum evolutionary algorithm. Population diversity and convergence analyses indicate that DAOSNPS achieves a better balance between exploration and exploitation than OSNPS and AOSNPS. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
49. Resilient Penalty Function Method for Distributed Constrained Optimization under Byzantine Attack.
- Author
-
Xu, Chentao, Liu, Qingshan, and Huang, Tingwen
- Subjects
- *
CONSTRAINED optimization , *DISTRIBUTED algorithms , *PARALLEL programming , *MATHEMATICAL optimization , *ALGORITHMS - Abstract
Distributed optimization algorithms have the advantages of privacy protection and parallel computing. However, the distributed nature of these algorithms makes the system vulnerable to external attacks. This paper presents two penalty function based resilient algorithms for constrained distributed optimization under static and dynamic attacks. The objective function of the optimization problem is extended to nonsmooth ones, and the convergence of the proposed algorithms in this case is proved under some mild conditions. Simulation experiments are performed and compared with some existing resilient primal-dual optimization algorithms that use a median-based mean estimator. Under static attack, the proposed algorithm shows better performance and a faster convergence rate; under dynamic attack, it shows better performance and robustness. These results illustrate that the proposed algorithms are more effective. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
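Two ingredients mentioned in the abstract have simple generic forms: the median-based mean estimator used by the compared baselines, and a penalty reformulation that folds constraints into the objective. Both sketches below are generic illustrations, not the paper's exact constructions, and the names are illustrative.

```python
import statistics

def coordinate_median(states):
    """Coordinate-wise median of received neighbour states: the
    median-based mean estimator that blunts Byzantine (arbitrarily
    corrupted) messages, since a minority of outliers cannot move
    the median far."""
    return [statistics.median(col) for col in zip(*states)]

def penalty_objective(f, constraints, rho):
    """Generic penalty reformulation f(x) + rho * sum(max(0, g_i(x)))
    for constraints g_i(x) <= 0: infeasible points pay a cost that
    grows with the violation, so the constrained problem becomes an
    unconstrained one."""
    return lambda x: f(x) + rho * sum(max(0.0, g(x)) for g in constraints)
```

In the example below the corrupted third state (100, -5) leaves the coordinate-wise median untouched, and the penalty makes the infeasible point x = 0 more expensive than the feasible x = 2.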
50. A novel multi-objective optimization framework for optimal integrated energy system planning with demand response under multiple uncertainties.
- Author
-
Dong, Yingchao, Wang, Cong, Zhang, Hongli, and Zhou, Xiaojun
- Subjects
- *
CARBON emissions , *MATHEMATICAL optimization , *OPERATING costs , *RENEWABLE energy sources , *COEVOLUTION , *INPAINTING - Abstract
• A multi-objective IES planning model with DR and uncertainty is constructed.
• A novel multi-objective optimization framework (MOOF) for IES planning is proposed.
• A constrained multi-objective coevolutionary algorithm is developed and verified.
• The effectiveness of the proposed MOOF is verified by case studies.
The planning of integrated energy systems (IES) faces significant challenges due to multiple uncertainties caused by stochastic demands and renewable energy generation. In particular, how to balance conflicting objectives in IES planning under multiple uncertainties is a major obstacle that few studies have addressed. Thus, this paper proposes a novel multi-objective optimization framework (MOOF) for uncertain IES planning with demand response, to achieve the synergistic enhancement of multiple performance indicators of the system while ensuring its flexibility and safety. The proposed MOOF encompasses several key steps. Firstly, a multi-objective optimization model under various uncertainties is constructed, with the minimization of investment and operating costs, the maximization of exergy efficiency, and the minimization of carbon emissions as optimization objectives. Secondly, the chance constraint approach is used to convert the constraints into deterministic ones. Subsequently, the Pareto dominance concept is incorporated into robust, interval and opportunistic optimization techniques to obtain three deterministic transformation methods for multi-objective optimization under uncertainty. Further, a high-efficiency constrained multi-objective coevolutionary algorithm (CMCA) is developed to solve the proposed planning model, which is nonlinear and high-dimensionally complex. Finally, the effectiveness of the proposed MOOF and CMCA is verified through numerous case studies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF