Research Article

An Enhanced Particle Swarm Optimization Algorithm

Xue-yao Gao, Li-quan Sun and Da-song Sun
ABSTRACT

The Particle Swarm Optimization (PSO) algorithm is often used to find optimal solutions, but it is easily trapped in a local extremum in the later period of evolution. Based on an improved chaos search strategy, an enhanced particle swarm optimization algorithm is proposed in this study. When particles fall into a local extremum, they are reactivated by the chaos search strategy, in which the chaos search area is restricted to the neighborhood of the current optimal solution by reducing the search range of the variables. The new algorithm not only escapes the local extremum effectively but also significantly enhances the precision of convergence. Experimental results show that the proposed algorithm is better than the standard PSO algorithm in both precision and stability.

How to cite this article:

Xue-yao Gao, Li-quan Sun and Da-song Sun, 2009. An Enhanced Particle Swarm Optimization Algorithm. Information Technology Journal, 8: 1263-1268.

DOI: 10.3923/itj.2009.1263.1268

URL: https://scialert.net/abstract/?doi=itj.2009.1263.1268

INTRODUCTION

Swarm intelligence methods aim to build stochastic optimization algorithms by simulating the group behavior of organisms in nature. Kennedy and Eberhart (1995) drew inspiration from the flocking of birds and the schooling of fish and proposed the particle swarm optimization algorithm. The PSO algorithm has few parameters and can solve complicated optimization problems efficiently (Parsopoulos and Vrahatis, 2004). However, like other intelligent algorithms, PSO can fall into a local extremum and its search slows down in the later period.

The key issues for the PSO algorithm are slow search speed and premature convergence. Kennedy and Eberhart (1997) used a discrete binary version of particle swarm optimization to solve combinatorial optimization problems in engineering practice. Time-varying acceleration coefficients (TVAC) were introduced to control local search and converge to the global optimum efficiently (Ratnaweera et al., 2004). Zheng et al. (2007) proposed a method of changing the velocity rate to enhance search speed. Test experiments on neighborhood topology were conducted (Kennedy, 1999), showing that the best form of topology should be designed for the actual situation. The distance from the global best position to every other position was calculated to adjust the velocity of each particle suitably (Kennedy, 2000), in order to improve the performance of PSO and maintain the diversity of the particles. These methods adjust the position and velocity of the particles to enhance search speed. Dou et al. (2006) proposed swarm-core evolutionary particle swarm optimization for dynamic optimization environments. Kwok et al. (2007) proposed an alternative social interaction scheme among swarm members. He et al. (2005) put forward an improved particle swarm optimization based on self-adaptive escape velocity to speed up convergence, but the algorithm can still converge to a local optimum. A new adaptive mutation particle swarm optimizer based on the variance of the population's fitness was proposed by Lv and Hou (2004). Meng et al. (2006) introduced chaotic series into the PSO algorithm to avoid local optima, but the resulting search process is blind and slow. Ni et al. (2009) introduced a multi-cluster structure into the particle swarm optimization algorithm to escape from local optimal solutions, which enhanced global search ability.

Chaos is a nonlinear phenomenon that is stochastic and ergodic and can be exploited in optimization algorithms (Li and Jiang, 1997). Niu et al. (2009) proposed a chaotic differential evolution algorithm for multi-objective optimization. In this study, an enhanced particle swarm optimization algorithm based on an improved chaos search strategy (EPSO) is proposed, in order to deal with premature convergence in the later period of evolution. The testing results on the benchmark functions show that EPSO is obviously better than the standard particle swarm optimization algorithm (SPSO).

PARTICLE SWARM OPTIMIZATION ALGORITHM

The particle swarm algorithm starts from a random initialization of a population of particles in the search space and operates on the social behavior of the particles in the swarm. It finds the global best solution by simply adjusting, at each step, the trajectory of each individual toward its own best location and toward the best particle of the entire swarm.

Each particle in the swarm flies with a velocity in the D-dimensional problem space, which is dynamically adjusted according to the flying experience of the particle itself and of its colleagues. Let m denote the total number of particles. The location of the ith particle is represented as Xi = (xi1, xi2,…, xiD), i = 1, 2,…, m. The best previous position of the ith particle is recorded as Pi = (pi1, pi2,…, piD), i = 1, 2,…, m, also called pbest. The index of the best particle among all particles in the population is denoted g, with Pg = (pg1, pg2,…, pgD) and g ∈ {1, 2,…, m}. The location Pg is also called gbest. The velocity of the ith particle is represented as Vi = (vi1, vi2,…, viD), i = 1, 2,…, m, and is clamped to a user-specified maximum velocity Vmax = (vmax,1, vmax,2,…, vmax,D). At each step, particle swarm optimization changes the velocity and location of each particle toward its pbest and gbest locations according to Eq. 1 and 2, respectively:

$$v_{id}^{k+1} = w^{k} v_{id}^{k} + c_1 r_1 \left( p_{id} - x_{id}^{k} \right) + c_2 r_2 \left( p_{gd} - x_{id}^{k} \right) \quad (1)$$

$$x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1} \quad (2)$$

where 1 ≤ i ≤ m, 1 ≤ d ≤ D, c1 and c2 are acceleration constants and r1 and r2 are random numbers uniformly distributed in the range (0, 1). To keep particles inside the search area during iteration, a position or velocity component that goes beyond its given bounds is clamped to the boundary value. w is the inertia weight, which is reduced during the iterations according to Eq. 3:

$$w^{k} = w_{max} - \frac{(w_{max} - w_{min})\, k}{N} \quad (3)$$

where k is the current iteration number, N is the total iteration number and wmin and wmax are the lower and upper bounds for w.
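As a concrete illustration, the following is a minimal Python sketch of the update loop defined by Eq. 1-3; the sphere objective, the bounds and all function and parameter names are illustrative assumptions, not taken from the paper:

```python
# Minimal SPSO sketch built from Eq. 1-3 (illustrative names; sphere
# objective and bounds are assumptions for demonstration only).
import numpy as np

def sphere(x):
    # Example objective to minimize; global minimum 0 at the origin.
    return float(np.sum(x ** 2))

def spso(f, D=2, m=20, N=500, c1=1.49, c2=1.49,
         w_min=0.4, w_max=0.95, lo=-10.0, hi=10.0, seed=0):
    rng = np.random.default_rng(seed)
    v_max = 0.2 * (hi - lo)                         # 20% of the variable range
    X = rng.uniform(lo, hi, (m, D))                 # particle positions
    V = rng.uniform(-v_max, v_max, (m, D))          # particle velocities
    P, fP = X.copy(), np.apply_along_axis(f, 1, X)  # pbest and its fitness
    g = int(np.argmin(fP))                          # index of gbest
    for k in range(N):
        w = w_max - (w_max - w_min) * k / N         # Eq. 3: inertia weight
        r1, r2 = rng.random((m, D)), rng.random((m, D))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (P[g] - X)  # Eq. 1
        V = np.clip(V, -v_max, v_max)               # clamp velocity
        X = np.clip(X + V, lo, hi)                  # Eq. 2, clamp position
        fX = np.apply_along_axis(f, 1, X)
        better = fX < fP                            # update pbest per particle
        P[better], fP[better] = X[better], fX[better]
        g = int(np.argmin(fP))                      # update gbest
    return P[g], fP[g]

print(spso(sphere))                                 # values near the origin
```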

CHAOS SEARCH STRATEGY

Chaos is one of the most common phenomena in nonlinear systems and chaos theory is one of the most important achievements of nonlinear systems research. It is now widely recognized that chaos is almost a fundamental mode of motion in natural phenomena.

There is no strict definition of chaos and many chaos models exist. The logistic mapping function is widely used to generate chaos, but research shows that its sequence is not symmetrically distributed, which degrades the capability of chaos search. Here, the logic self-mapping function is used to produce the chaos variables because of its uniformity and ergodicity:

$$x_i(k+1) = 1 - 2\, x_i(k)^2, \quad x_i(k) \in (-1, 1) \quad (4)$$

Chaos occurs as long as the initial iterative value is not equal to 0. This mapping is simple and easy to compute.
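For illustration, a short sketch of iterating the logic self-mapping of Eq. 4; the start value 0.123 is an arbitrary nonzero choice:

```python
# Logic self-mapping of Eq. 4: x(k+1) = 1 - 2*x(k)^2 on (-1, 1).
def chaos_sequence(x0=0.123, n=10):
    assert -1.0 < x0 < 1.0 and x0 != 0.0   # any nonzero start in (-1, 1)
    xs = [x0]
    for _ in range(n - 1):
        xs.append(1.0 - 2.0 * xs[-1] ** 2)  # Eq. 4
    return xs

print(chaos_sequence())
```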

If the target function f(x) is continuous, the optimization problem is formulated as in Eq. 5:

$$\min f(x_1, x_2, \ldots, x_D), \quad a_i \le x_i \le b_i, \; i = 1, 2, \ldots, D \quad (5)$$

The basic process of the chaos optimization strategy is as follows:

Step 1: Algorithm initialization. Let k = 0 and randomly create D different chaos variables xi(k) with xi(k) ≠ 0, i = 1, 2,…, D, where k is the iteration index of the chaos variables. Let x*i denote the current best chaos variable and f* the current best solution, initialized to a sufficiently large number
Step 2: Map the chaos variable xi(k) to the optimization variable area, denoted mxi(k):

$$mx_i(k) = a_i + \frac{(b_i - a_i)\,\left(1 + x_i(k)\right)}{2} \quad (6)$$

Step 3: Calculate f(mxi(k)); if f(mxi(k)) < f*, set f* = f(mxi(k)) and x*i = xi(k)
Step 4: Let k = k+1 and update xi(k) according to Eq. 4; repeat steps 2 to 4 until f* remains unchanged for a certain number of steps or the iteration number reaches the given limit. Then x*i is the best chaos variable and f* is the best solution
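The four steps above can be sketched as follows; the bounds, the objective and the names are illustrative assumptions:

```python
# Sketch of the basic chaos optimization strategy (Steps 1-4).
import numpy as np

def chaos_search(f, a, b, iters=200, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, len(a))        # Step 1: chaos variables
    x[x == 0.0] = 0.123                       # keep the start nonzero
    best_x, best_f = None, np.inf             # f* starts sufficiently large
    for _ in range(iters):
        mx = a + (b - a) * (1.0 + x) / 2.0    # Step 2: map via Eq. 6
        fx = f(mx)
        if fx < best_f:                       # Step 3: update f* and x*
            best_f, best_x = fx, mx.copy()
        x = 1.0 - 2.0 * x ** 2                # Step 4: iterate Eq. 4
    return best_x, best_f

a, b = np.full(2, -10.0), np.full(2, 10.0)
print(chaos_search(lambda z: float(np.sum(z ** 2)), a, b))
```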

EPSO ALGORITHM

In the later period of evolution, the convergence rate of the PSO algorithm decreases greatly and the swarm is very likely to fall into a local minimum, which affects the precision and efficiency of the algorithm. Therefore, an effective modification should be adopted to improve the capability of the PSO algorithm.

The chaos search strategy is introduced into PSO to enhance the global search capability and escape the attraction of local minima. The main idea is that when particles fall into a local extremum, the gbest of the swarm is refined by the chaos search strategy for a fixed number of steps, which guides the swarm toward the best solution.

To prevent the swarm from falling into a local minimum and stagnating, a premature-convergence detection mechanism is introduced into the search process, which uses the swarm fitness variance to estimate whether the swarm has fallen into a local minimum. The swarm fitness variance is calculated as:

$$\sigma^2 = \sum_{i=1}^{m} \left( \frac{f_i - \bar{f}}{f} \right)^2 \quad (7)$$

where fi is the fitness of particle i, f̄ is the average fitness of the swarm and f is a normalizing factor:

$$f = \max\left\{ 1,\; \max_{1 \le i \le m} \left| f_i - \bar{f} \right| \right\}$$

When σ2 < m×β, the chaos search strategy is used to deal with premature convergence. Here, m is the number of particles, used to balance the magnitudes on the two sides of the inequality, and β is a given small constant in the range (0, 0.2).
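A small sketch of this test, assuming the normalizing factor f = max{1, max |fi - f̄|} given above; the fitness values in the usage line are made up:

```python
# Premature-convergence test via swarm fitness variance (Eq. 7).
import numpy as np

def is_premature(fitness, beta=0.07):
    fitness = np.asarray(fitness, dtype=float)
    m = fitness.size
    f_bar = fitness.mean()                              # average fitness
    f_norm = max(1.0, np.abs(fitness - f_bar).max())    # normalizing factor f
    sigma2 = np.sum(((fitness - f_bar) / f_norm) ** 2)  # Eq. 7
    return sigma2 < m * beta                            # trigger chaos search?

print(is_premature([0.010, 0.011, 0.0099, 0.0102]))     # clustered -> True
```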

When particles fall into a local extremum, the chaos search strategy is used to reactivate them. The logic self-mapping function is used to reinitialize the particle swarm; because of its symmetry, it is likewise used to initialize the swarm at the beginning of the algorithm so that the particles are distributed more evenly. Since chaos search is random, the best solution would otherwise be searched for blindly. To avoid the search missing some areas, the chaos search process is modified by reducing the search range of the variables around the current best solution. The chaos search area is thus restricted to the neighborhood of the local extremum, which enhances the precision and efficiency of the chaos search. The following equations are used to reduce the area:

$$a_i = mx_i^* - C\,(b_i - a_i), \quad b_i = mx_i^* + C\,(b_i - a_i) \quad (8)$$

where mx*i is the current best solution mapped into the optimization variable area and C ∈ (0, 0.5) is an adjustment coefficient. The rest of the particle search area remains unchanged.
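A one-function sketch of Eq. 8 with the boundary clamping described in Step 9 below; the names are illustrative:

```python
# Shrink the chaos search area around the current best solution (Eq. 8),
# clamping the new bounds to the original search area.
import numpy as np

def shrink_bounds(a, b, mx_star, C=0.4, a0=None, b0=None):
    a0 = a if a0 is None else a0                  # original lower bounds
    b0 = b if b0 is None else b0                  # original upper bounds
    width = b - a
    new_a = np.maximum(mx_star - C * width, a0)   # Eq. 8, clamped below
    new_b = np.minimum(mx_star + C * width, b0)   # Eq. 8, clamped above
    return new_a, new_b

a, b = np.array([-10.0]), np.array([10.0])
print(shrink_bounds(a, b, np.array([1.5])))       # -> [-6.5], [9.5]
```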

On the basis of the above definitions and analysis, the process of EPSO is described as follows:

Step 1: Parameter initialization: acceleration constants c1 and c2, maximal inertia weight wmax and minimal inertia weight wmin, constant β, adjustment coefficient C, total iteration number N and chaos search iteration number A
Step 2: Particle swarm initialization: randomly take D variables in the range (-1, 1), excluding 0, as Xi = (xi1, xi2,…, xiD) and iterate m-1 times using Eq. 4 to produce the position variables of the swarm. Produce the velocity variables by the same method. Map the position variables to the optimization variable area according to Eq. 6
Step 3: Calculate the fitness of every particle and take the best one among all particles as the current global best g
Step 4: Check whether the termination condition is satisfied; if true, turn to step 10, else turn to the next step
Step 5: Calculate wk according to Eq. 3
Step 6: Update the particles according to Eq. 1 and 2, respectively, calculate the fitness of every particle and update pi of every particle and g of the swarm
Step 7: Check whether the termination condition is satisfied; if true, turn to step 10, else turn to the next step
Step 8: Check whether σ2 < m×β; if true, turn to the next step, else turn to step 5
Step 9: Guide particles away from the local extremum by the improved chaos search strategy:
(1) Set the current iteration counter G = 1 and set the total chaos iteration number A
(2) Revert the optimization variable mxi(k) to the chaos variable:

$$x_i(k) = \frac{2\,\left(mx_i(k) - a_i\right)}{b_i - a_i} - 1$$

(3) Create a chaos variable according to Eq. 4
(4) Map this chaos variable to the optimization variable area according to Eq. 6, calculate the fitness and update f* and x*i
(5) Check whether the termination condition is satisfied; if true, turn to step 6, else turn to the next sub-step
(6) Let G = G+1; if G > A, turn to step 7, else reduce the area according to Eq. 8. If the value of ai or bi goes beyond the boundary value, clamp it to the boundary. Then turn to sub-step (3)
Step 10: Evolution is over and the global best solution is obtained
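Putting the pieces together, here is a compact sketch of the whole EPSO procedure (Steps 1-10 above); all names, the sphere objective and the bounds are illustrative assumptions rather than the authors' code:

```python
# Compact EPSO sketch: SPSO loop plus premature detection (Eq. 7) and
# chaos search around gbest with area reduction (Eq. 8).
import numpy as np

def epso(f, D=2, m=20, N=500, c1=1.49, c2=1.49, w_min=0.4, w_max=0.95,
         beta=0.07, C=0.4, A=50, lo=-10.0, hi=10.0, seed=0):
    rng = np.random.default_rng(seed)
    a, b = np.full(D, lo), np.full(D, hi)
    v_max = 0.2 * (hi - lo)
    # Step 2: chaotic initialization (Eq. 4), then mapping (Eq. 6)
    cx = np.empty((m, D)); cx[0] = rng.uniform(-1, 1, D)
    cx[0][cx[0] == 0.0] = 0.123
    for i in range(1, m):
        cx[i] = 1 - 2 * cx[i - 1] ** 2
    X = a + (b - a) * (1 + cx) / 2
    V = rng.uniform(-v_max, v_max, (m, D))
    P, fP = X.copy(), np.apply_along_axis(f, 1, X)
    g = int(np.argmin(fP))
    for k in range(N):                                   # Steps 4-8
        w = w_max - (w_max - w_min) * k / N              # Eq. 3
        r1, r2 = rng.random((m, D)), rng.random((m, D))
        V = np.clip(w * V + c1 * r1 * (P - X) + c2 * r2 * (P[g] - X),
                    -v_max, v_max)                       # Eq. 1
        X = np.clip(X + V, a, b)                         # Eq. 2
        fX = np.apply_along_axis(f, 1, X)
        upd = fX < fP
        P[upd], fP[upd] = X[upd], fX[upd]
        g = int(np.argmin(fP))
        f_bar = fP.mean()
        f_n = max(1.0, np.abs(fP - f_bar).max())
        if np.sum(((fP - f_bar) / f_n) ** 2) < m * beta:  # Eq. 7 test
            ca, cb = a.copy(), b.copy()                  # chaos bounds only
            x = 2 * (P[g] - ca) / (cb - ca) - 1          # invert Eq. 6
            for _ in range(A):                           # Step 9
                x = np.clip(1 - 2 * x ** 2, -0.9999, 0.9999)  # Eq. 4, clipped
                mx = ca + (cb - ca) * (1 + x) / 2        # Eq. 6
                fx = f(mx)
                if fx < fP[g]:
                    P[g], fP[g] = mx.copy(), fx          # refine gbest
                width = cb - ca
                ca = np.maximum(P[g] - C * width, a)     # Eq. 8, clamped
                cb = np.minimum(P[g] + C * width, b)
    return P[g], fP[g]                                   # Step 10

print(epso(lambda z: float(np.sum(z ** 2))))
```

Note that the sketch shrinks only the chaos search bounds (ca, cb), while the particle search area (a, b) stays fixed, matching the statement that the rest of the particle search area remains unchanged.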

EXPERIMENT RESULTS AND ANALYSIS

In this section, three typical benchmark functions are used to evaluate the performance of EPSO. The actual minimum values of these functions are all zero.

The parameters of the algorithm are initialized as follows: c1 and c2 are set to 1.49, wmin is set to 0.4, wmax is set to 0.95, the total iteration number N is set to 500, the adjustment coefficient C is set to 0.4, the chaos search iteration number A is set to 50, the number of particles m is set to 20, the constant β is tuned slightly within (0, 0.2) for different problems and the maximum velocity vmax,i of each particle is set to twenty percent of the optimization variable range. For each function, 50 trials are carried out and the minimum optimal value, maximum optimal value, average optimal value and average computing time are recorded.

Table 1: Compared results of f1

Table 2: Compared results of f2

Table 3: Compared results of f3

[Definition of benchmark function f1: equation image not available in the source.]

The dimension of f1 is set to 2 and 10, respectively. The experimental results are shown in Table 1. When D is set to 2, both algorithms converge to the optimal value, but the precision of EPSO is much better than that of SPSO. When D is set to 10, SPSO cannot converge to the optimal value in the 50 trials, while EPSO converges to the optimal value with high precision.

Figure 1 compares the convergence results of SPSO and EPSO when D = 2. The horizontal axis indicates the iteration number and the vertical axis the logarithm of the average optimal value. From Fig. 1, it can be seen that SPSO is trapped in a local extremum in the later period of evolution, while EPSO escapes the attraction of the local extremum and finds better solutions.

[Definition of benchmark function f2: equation image not available in the source.]

The dimension of f2 is 2. The experimental results are shown in Table 2. In the 50 trials, SPSO cannot converge to the optimal value in most cases, while EPSO converges to the optimal value with high precision. The maximum fitness of EPSO is much smaller than that of SPSO.

Figure 2 compares the convergence results under different iteration numbers. The horizontal axis indicates the iteration number and the vertical axis the logarithm of the average optimal value.

Fig. 1: Convergence results of f1 when D is set to 2

Fig. 2: Comparison of convergence results of f2

From Fig. 2, it can be seen that EPSO can get rid of local extremum and find the best solutions.

[Definition of benchmark function f3: equation image not available in the source.]

The dimension of f3 is set to 2 and 10, respectively. The experimental results are shown in Table 3. When D is set to 2, the results of SPSO are close to the optimal value, but EPSO converges to the optimal value with higher precision. When D is set to 10, SPSO generally cannot converge to the optimal value, while the results of EPSO are close to the optimal value. The convergence of EPSO is better than that of SPSO.

Figure 3 compares the convergence results under different iteration numbers when D is set to 2. The horizontal axis indicates the iteration number and the vertical axis the logarithm of the average optimal value. Figure 3 shows that the capability of EPSO is obviously better than that of SPSO.

From the testing results of the three benchmark functions, it can be seen that the results of the proposed algorithm are obviously better than those of SPSO. EPSO outperforms SPSO in both convergence precision and stability.

Fig. 3: Convergence results of f3 when D is set to 2

This is because EPSO can find better solutions during the optimization process and local extrema are avoided in the search.

In the comparison experiments, it was found that the parameter selections of the algorithm are very important. For the algorithm to carry out the chaos search exactly when the swarm falls into a local optimum, the value of β must be chosen properly. Since the chaos search is triggered when σ2 < m×β, setting β too large makes the algorithm carry out the chaos search prematurely, while setting β too small makes the chaos search hard to trigger at all. From different comparisons, the capability of the algorithm is preferable when β is set to 0.07. For the chaos search, the selection of the adjustment coefficient C is also important: the smaller C is, the faster the search area shrinks. From different experiments, the capability of the algorithm is preferable when C is set to 0.4. Finally, as the above tables show, the average computing time of EPSO is considerably longer than that of SPSO, because the chaos search strategy is introduced into the PSO algorithm. However, as the above figures show, when the total iteration number is set to 200 and the chaos search iteration number to 40, the proposed algorithm still obtains satisfactory optimization results while its search time is reduced.

CONCLUSION

In this study, an enhanced particle swarm optimization algorithm is proposed. When particles fall into a local extremum, they are reactivated by the improved chaos search strategy to search for the best solution. The experimental results show that the proposed algorithm can effectively overcome premature convergence. Moreover, the precision and stability of the algorithm are both obviously better than those of SPSO. In future work, the stability and search speed of the algorithm will be improved further.

ACKNOWLEDGMENT

This study was supported by the National Natural Science Foundation of China under Grant No. 60173055.

REFERENCES

1:  Dou, Q.S., C.G. Hou, Z.Y. Xu and G.Y. Pan, 2006. Swarm-core evolutionary particle swarm optimization in dynamic optimization environments. J. Comput. Res. Dev., 43: 89-95.

2:  He, R., Y.J. Wang, Q. Wang, J.H. Zhou and C.Y. Hu, 2005. An improved particle swarm optimization based on self-adaptive escape velocity. J. Software, 16: 2036-2044.

3:  Kennedy, J. and R. Eberhart, 1995. Particle swarm optimization. Proceedings of the International Conference on Neural Networks, Volume 4, November 27-December 1, 1995, Perth, WA, Australia, pp: 1942-1948.

4:  Kennedy, J. and R.C. Eberhart, 1997. A discrete binary version of the particle swarm algorithm. Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Computational Cybernetics and Simulation, Volume 5, October 12-15, 1997, Orlando, FL., USA., pp: 4104-4108

5:  Kennedy, J., 1999. Small worlds and mega-minds: Effects of neighborhood topology on particle swarm performance. Proceedings of the Congress on Evolutionary Computation, Jul. 6-9, Washington, USA., pp: 1931-1938

6:  Kwok, N.M., G. Fang, Q.P. Ha and D.K. Liu, 2007. An enhanced particle swarm optimization algorithm for multi-modal functions. Proceedings of the IEEE International Conference on Mechatronics and Automation, Aug. 5-8, Harbin, China, pp: 457-462.

7:  Li, B. and W.S. Jiang, 1997. Chaos optimization method and its application. Control Theor. Applic., 14: 613-615.

8:  Lv, Z.S. and Z.R. Hou, 2004. Particle swarm optimization with adaptive mutation. Acta Electronica Sinica, 32: 416-420.

9:  Meng, H.J., P. Zheng, G.H. Mei and Z. Xie, 2006. Particle swarm optimization algorithm based on chaotic series. Control Decision, 21: 263-266.

10:  Ni, Q.J., Z.Z. Zhang, Z.Z. Wang and H.C. Xing, 2009. Dynamic probability particle swarm optimization based on varying multi-cluster structure. J. Software, 20: 339-349.

11:  Niu, D.P., F.L. Wang, D.K. He and M.X. Jia, 2009. Chaotic differential evolution for multiobjective optimization. Control Decision, 24: 361-364.

12:  Parsopoulos, K.E. and M.N. Vrahatis, 2004. On the computation of all global minimizers through particle swarm optimization. IEEE Trans. Evol. Comput., 8: 211-224.

13:  Ratnaweera, A., S.K. Halgamuge and H.C. Watson, 2004. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans. Evol. Comput., 8: 240-255.

14:  Zheng, S.F., S.L. Hu, S.X. Su, C.F. Lin and X.W. Lai, 2007. A modified particle optimization algorithm and application. Proceedings of the 6th International Conference on Machine Learning and Cybernetics, Aug. 19-22, Hong Kong, China, pp: 945-951

15:  Kennedy, J., 2000. Stereotyping: Improving particle swarm performance with cluster analysis. Evol. Comput., 2: 1507-1512.
