Ameliorated grey wolf optimizer with the best and worst orthogonal opposition-based learning

SOFT COMPUTING (2023)

Abstract
Grey wolf optimizer (GWO) is a swarm intelligence optimization algorithm. GWO has strong local search ability and fast convergence, and it is easy to implement because of its small number of parameters and its unique population structure. Nevertheless, GWO still suffers from low accuracy and a tendency to fall into local optima. To overcome these shortcomings, an ameliorated grey wolf optimizer (AGWO) is proposed in this paper. In AGWO, an opposition-based learning strategy is employed to balance exploration and exploitation, which efficiently prevents the algorithm from falling into a local optimum. In addition, to address the dimensional degradation and computational complexity of opposition-based learning, an orthogonal design is introduced so that opposition is applied only in selected dimensions. Furthermore, to reduce the computation time, only the two most representative individuals (the best and the worst) perform orthogonal opposition-based learning. The proposed AGWO is evaluated on 23 benchmark functions and the well-known IEEE CEC2022 benchmark suite. The simulation results show that AGWO obtains superior solutions compared with GWO and most of the compared algorithms. Moreover, AGWO is applied to optimize the parameters of three engineering design problems, and the results show that AGWO can solve real-world engineering problems.
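The abstract describes the core mechanism only at a high level, so the following is a minimal, hedged Python sketch of a GWO loop with opposition-based learning applied to the best and worst individuals. The objective function, parameter values, and the use of a random dimension subset in place of the paper's orthogonal-array design are all illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def sphere(x):
    """Simple benchmark objective; stands in for any of the 23 benchmark functions (assumption)."""
    return float(np.sum(x ** 2))

def gwo_obl(obj, dim=10, n_wolves=20, lb=-100.0, ub=100.0, max_iter=200, seed=0):
    """Sketch of GWO with best/worst opposition-based learning.

    Assumption: the paper's orthogonal design is approximated here by opposing
    a random subset of dimensions instead of an orthogonal-array pattern.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_wolves, dim))
    fit = np.array([obj(x) for x in X])

    for t in range(max_iter):
        order = np.argsort(fit)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 * (1.0 - t / max_iter)  # encircling coefficient decays from 2 to 0

        # Standard GWO update: each wolf moves toward alpha, beta, and delta.
        new_X = np.empty_like(X)
        for i in range(n_wolves):
            pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = a * (2.0 * rng.random(dim) - 1.0)
                C = 2.0 * rng.random(dim)
                D = np.abs(C * leader - X[i])
                pos += leader - A * D
            new_X[i] = np.clip(pos / 3.0, lb, ub)
        X = new_X
        fit = np.array([obj(x) for x in X])

        # Opposition-based learning on the best and worst wolves only,
        # which keeps the number of extra evaluations small.
        for idx in (int(np.argmin(fit)), int(np.argmax(fit))):
            mask = rng.random(dim) < 0.5                   # dimensions chosen for opposition
            candidate = X[idx].copy()
            candidate[mask] = lb + ub - candidate[mask]    # opposite point in those dimensions
            f_cand = obj(candidate)
            if f_cand < fit[idx]:                          # keep the opposite point only if it improves
                X[idx], fit[idx] = candidate, f_cand

    best = int(np.argmin(fit))
    return X[best], fit[best]

if __name__ == "__main__":
    x_best, f_best = gwo_obl(sphere)
    print("best fitness:", f_best)
```

The design choice the abstract emphasizes is visible in the last loop: opposition is restricted to the best and worst individuals and to a subset of dimensions, so the extra function evaluations per iteration stay constant rather than growing with the population size.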
Key words
grey wolf optimizer, opposition-based learning