INTRODUCTION
The purpose of swarm intelligence methods is to build stochastic optimization algorithms by simulating the collective behavior of biological groups in nature. Kennedy and Eberhart (1995) drew inspiration from the flocking of birds and the schooling of fish and proposed the particle swarm optimization (PSO) algorithm. The PSO algorithm has few parameters and can solve complicated optimization problems efficiently (Parsopoulos and Vrahatis, 2004). However, like other intelligent algorithms, PSO can become trapped in local extrema and its search slows down in the later stages.
The key challenges for the PSO algorithm are slow search speed and premature convergence. Kennedy and Eberhart (1997) used a discrete binary version of particle swarm optimization to solve combinatorial optimization problems in engineering practice. Time-varying acceleration coefficients (TVAC) were introduced to control local search and converge efficiently to the global optimum (Ratnaweera et al., 2004). Zheng et al. (2007) proposed a method of changing the velocity rate to enhance search speed. Kennedy (1999) conducted test experiments on neighborhood topologies, concluding that the best form of topology should be designed according to the actual situation. Kennedy (2000) calculated the distance from the global best position to the other positions and used it to adjust the velocity of each particle, in order to improve the performance of PSO and maintain the diversity of the particles. These methods adjust the positions and velocities of particles to enhance search speed. Dou et al. (2006) proposed swarm-core evolution in dynamic optimization environments. Kwok et al. (2007) proposed an alternative social interaction scheme among swarm members. He et al. (2005) put forward an improved particle swarm optimization based on self-adaptive velocity to speed up convergence, but the algorithm can still converge to a local optimum. A new adaptive mutation particle swarm optimizer based on the variance of the population's fitness was proposed by Lv and Hou (2004). Meng et al. (2006) introduced chaotic series into the PSO algorithm to avoid local optima, but the search process is blind and slow. Ni et al. (2009) introduced a multi-cluster structure into the particle swarm optimization algorithm to escape from local optimal solutions, which enhanced its global search ability.
Chaos is a nonlinear phenomenon that is stochastic and ergodic and can be used in optimization algorithms (Li and Jiang, 1997). Niu et al. (2009) proposed a chaotic differential evolution algorithm for multi-objective optimization. In this study, an enhanced particle swarm optimization algorithm (EPSO) based on an improved chaos search strategy is proposed, in order to deal with premature convergence in later evolution. Testing results on benchmark functions show that EPSO performs markedly better than the standard particle swarm optimization algorithm (SPSO).
PARTICLE SWARM OPTIMIZATION ALGORITHM
The particle swarm algorithm starts from a random initialization of a population of particles in the search space and operates on the social behavior of the particles in the swarm. It finds the global best solution by simply adjusting the trajectory of each individual toward its own best location and toward the best particle of the entire swarm at each step.
Each particle in the swarm flies with a velocity in the D-dimensional problem space, which is dynamically adjusted according to the flying experience of itself and its colleagues. Let m denote the total number of particles. The location of the ith particle is represented as X_i = (x_i1, x_i2, …, x_iD), i = 1, 2, …, m. The best previous position of the ith particle is recorded and represented as P_i = (p_i1, p_i2, …, p_iD), i = 1, 2, …, m, which is also called pbest. The index of the best particle among all particles in the population is denoted by g and P_g = (p_g1, p_g2, …, p_gD), where g ∈ {1, 2, …, m}. The location P_g is also called gbest. The velocity of the ith particle is represented as V_i = (v_i1, v_i2, …, v_iD), i = 1, 2, …, m, and is clamped to a maximum velocity V_max = (v_max,1, v_max,2, …, v_max,D), which is specified by the user. At each step, particle swarm optimization changes the velocity and location of each particle toward its pbest and gbest locations according to Eq. 1 and 2, respectively:

v_id^(k+1) = w·v_id^k + c1·r1·(p_id - x_id^k) + c2·r2·(p_gd - x_id^k)    (1)

x_id^(k+1) = x_id^k + v_id^(k+1)    (2)
where 1≤i≤m, 1≤d≤D, c1 and c2 are acceleration constants and r1 and r2 are random numbers in the range (0, 1). In order to keep the particles inside the search area during iteration, the boundary value is taken as the new value whenever the position or velocity of a particle goes beyond its given range. w is the inertia weight, which is reduced during the iterations according to Eq. 3:

w = w_max - (w_max - w_min)·k/N    (3)

where k is the current iteration number, N is the total iteration number and w_min and w_max are the lower and upper bounds for w.
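To make the update concrete, the following sketch applies Eq. 1-3 to a swarm stored as NumPy arrays. It is a minimal illustration in Python rather than the authors' implementation; the function name `pso_step` and the array layout are our own.

```python
import numpy as np

def pso_step(X, V, P, g, k, N, c1=1.49, c2=1.49, w_min=0.4, w_max=0.95,
             v_max=None, bounds=None):
    """One PSO iteration: Eq. 3 for w, Eq. 1 for velocity, Eq. 2 for position."""
    m, D = X.shape
    w = w_max - (w_max - w_min) * k / N                    # Eq. 3: linearly decreasing w
    r1, r2 = np.random.rand(m, D), np.random.rand(m, D)
    V = w * V + c1 * r1 * (P - X) + c2 * r2 * (P[g] - X)   # Eq. 1: velocity update
    if v_max is not None:
        V = np.clip(V, -v_max, v_max)                      # clamp to the maximum velocity
    X = X + V                                              # Eq. 2: position update
    if bounds is not None:
        X = np.clip(X, bounds[0], bounds[1])               # take boundary values if exceeded
    return X, V
```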
CHAOS SEARCH STRATEGY
Chaos is one of the most common phenomena in nonlinear systems and the theory of chaos is one of the most important achievements of nonlinear systems research. It is now widely recognized that chaos is almost a fundamental mode of motion in natural phenomena.
There is no strict definition of chaos and many chaos models exist. The logistic mapping function is widely used to generate chaos, but research shows that its sequence is not symmetrical, which limits the capability of chaos search. Here, the logistic self-mapping function is used to produce the chaos variables because of its uniformity and ergodicity:

x_i(k+1) = 1 - 2·x_i(k)^2,  x_i(k) ∈ (-1, 1)    (4)

Chaos occurs as long as the initial iterative value is not equal to 0. This mapping is simple and easy to compute.
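As a small illustration (our own sketch, not from the paper), a few iterations of Eq. 4 show how a nonzero starting point in (-1, 1) generates a chaotic sequence:

```python
def logistic_self_map(x0, n):
    """Iterate Eq. 4: x_{k+1} = 1 - 2*x_k**2, with x0 in (-1, 1) and x0 != 0."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(1.0 - 2.0 * xs[-1] ** 2)
    return xs

# Example: five chaotic values starting from 0.3
print(logistic_self_map(0.3, 5))  # [0.3, 0.82, -0.3448, 0.7622..., -0.1620...]
```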
If the target function f(x_i) is continuous, the optimization problem to be solved is formulated as Eq. 5:

min f(x_i),  x_i ∈ [a_i, b_i],  i = 1, 2, …, D    (5)
The basic process of the chaos optimization strategy can be described as follows (a condensed code sketch follows this list):

• Step 1: Algorithm initialization. Let k = 0 and randomly create D different chaos variables x_i(k) with x_i(k) ≠ 0, i = 1, 2, …, D, where k is the iteration index of the chaos variables. Let x*_i denote the current best chaos variable and f* the current best solution, initialized to a sufficiently large number

• Step 2: Map the chaos variable x_i(k) to the optimization variable area, denoted mx_i(k):

mx_i(k) = a_i + (b_i - a_i)·(x_i(k) + 1)/2    (6)

• Step 3: Calculate f(mx_i(k)) and, if f(mx_i(k)) < f*, set f* = f(mx_i(k)) and x*_i = x_i(k)

• Step 4: Let k = k+1 and generate x_i(k) from x_i(k-1) according to Eq. 4; repeat Steps 2 to 4 until f* remains unchanged for a certain number of steps or the iteration count reaches the given limit; then x*_i is the best chaos variable and f* is the best solution
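The four steps can be expressed compactly in Python. The sketch below is our own illustration under the assumptions stated so far: `f` is the objective function, `a` and `b` are vectors of lower and upper bounds, and Eq. 6 is the mapping given above.

```python
import numpy as np

def chaos_search(f, a, b, D, max_iter=50):
    """Basic chaos optimization (Steps 1-4) using the logistic self-map of Eq. 4."""
    rng = np.random.default_rng()
    x = rng.uniform(-1.0, 1.0, D)            # Step 1: chaos variables in (-1, 1)
    x[x == 0.0] = 0.5                        # the initial value must not be 0
    f_best, x_best = np.inf, x.copy()
    for _ in range(max_iter):
        mx = a + (b - a) * (x + 1.0) / 2.0   # Step 2: map to the optimization area (Eq. 6)
        val = f(mx)
        if val < f_best:                     # Step 3: keep the best solution so far
            f_best, x_best = val, x.copy()
        x = 1.0 - 2.0 * x ** 2               # Step 4: iterate the logistic self-map (Eq. 4)
    return f_best, x_best
```

For example, `chaos_search(lambda v: float(np.sum(v**2)), np.full(2, -10.0), np.full(2, 10.0), 2)` searches the square [-10, 10]^2 for the minimum of the sphere function.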
EPSO ALGORITHM
In later evolution, the convergence rate of the PSO algorithm decreases considerably and the swarm is very likely to become trapped in a local minimum, which affects the precision and efficiency of the algorithm. Therefore, an efficient variation mechanism should be adopted to improve the capability of the PSO algorithm.
A chaos search strategy is introduced into PSO in order to enhance its global search capability and escape the attraction of local minima. The main idea is that, when the particles fall into a local extremum, the gbest of the particle swarm is refined by the chaos search strategy for a fixed number of steps, which guides the particle swarm toward the best solution.
In order to prevent the particle swarm from falling into a local minimum and stagnating, a premature-convergence detection mechanism is introduced into the search process, which uses the swarm fitness variance to estimate whether the particle swarm has fallen into a local minimum. The swarm fitness variance is calculated as:

σ² = Σ_{i=1}^{m} ((f_i - f_avg)/f)²    (7)

where f_i is the fitness of particle i, f_avg is the average fitness of the particle swarm and f is a normalizing factor:

f = max{1, max_{1≤i≤m} |f_i - f_avg|}

When σ² < m·β, the chaos search strategy is used to deal with prematurity. Here, m is the particle number, which balances the magnitudes on the two sides of the inequality, and β is a given small constant in the interval (0, 0.2).
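A direct transcription of this test might look as follows (our own sketch; the normalizing factor follows the definition of f given above):

```python
import numpy as np

def is_premature(fitness, beta=0.07):
    """Detect premature convergence via the normalized swarm fitness variance (Eq. 7)."""
    fitness = np.asarray(fitness, dtype=float)
    m = len(fitness)
    f_avg = fitness.mean()
    f_norm = max(1.0, np.abs(fitness - f_avg).max())   # normalizing factor f
    sigma2 = np.sum(((fitness - f_avg) / f_norm) ** 2)
    return sigma2 < m * beta                            # trigger chaos search when true
```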
When the particles fall into a local extremum, the chaos search strategy is used to reactivate them, and the logistic self-mapping function is used to re-initialize the particle swarm. Because of the symmetry of this function, it is also used to initialize the particle swarm at the beginning of the algorithm, so that the particles are distributed more evenly. Since chaos search is random, the best solution would otherwise be searched for blindly. In order to avoid the search missing some areas, the chaos search process is modified by shrinking the search area of the variables around the current best solution. The chaos search area is confined to the neighborhood of the local extremum, which enhances the precision and efficiency of the chaos search. The following equations are used to reduce the area:

a_i = mx*_i - C·(b_i - a_i),  b_i = mx*_i + C·(b_i - a_i)    (8)

where C ∈ (0, 0.5) is the adjustment coefficient and mx*_i is the current best solution in the optimization space. The rest of the particle search area remains unchanged.
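In code, the area-reduction step of Eq. 8 might be written as below (our own sketch; `mx_best` denotes the current best solution in the optimization space and clipping to the original bounds follows the boundary rule stated later in Step 9):

```python
import numpy as np

def shrink_area(a, b, mx_best, C=0.4):
    """Shrink the chaos search area around the current best solution (Eq. 8)."""
    new_a = mx_best - C * (b - a)
    new_b = mx_best + C * (b - a)
    # If a bound leaves the previous area, take the boundary value instead
    return np.maximum(new_a, a), np.minimum(new_b, b)
```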
On the basis of the above definitions and analysis, the process of EPSO is described as follows (a condensed code sketch follows this list):

• Step 1: Parameter initialization: acceleration constants c1 and c2, maximal inertia weight w_max, minimal inertia weight w_min, constant β, adjustment coefficient C, total iteration number N and chaos search time A

• Step 2: Particle swarm initialization: randomly take D variables in the interval (-1, 1), excluding 0, namely X_i = (x_i1, x_i2, …, x_iD), and iterate m-1 times using Eq. 4 to produce the position variables of the particle swarm. Produce the velocity variables by the same method. Map the position variables to the optimization variable area according to Eq. 6

• Step 3: Calculate the fitness of every particle and take the best one among all particles as the current global best g

• Step 4: Check whether the termination condition is satisfied; if true, go to Step 10, else go to the next step

• Step 5: Calculate w_k according to Eq. 3

• Step 6: Update the particles according to Eq. 1 and 2, respectively, calculate the fitness of every particle and update p_i of every particle and g of the swarm

• Step 7: Check whether the termination condition is satisfied; if true, go to Step 10, else go to the next step

• Step 8: Check whether σ² < m·β; if true, go to the next step, else go to Step 5

• Step 9: Guide the particles away from the local extremum by the improved chaos search strategy:

  • Set the current iteration counter G = 1 and the total chaos iteration limit A
  • Revert the optimization variable mx_i(k) to a chaos variable:

    x_i(k) = 2·(mx_i(k) - a_i)/(b_i - a_i) - 1    (9)

  • Create a new chaos variable according to Eq. 4
  • Map this chaos variable to the optimization variable area according to Eq. 6, calculate the fitness and update f* and x*_i
  • Check whether the termination condition is satisfied; if true, go to Step 6, else go to the next sub-step
  • Let G = G+1; if G > A, go to Step 7, else reduce the search area according to Eq. 8. If the value of a_i or b_i goes beyond the boundary value, take the boundary value instead. Then return to sub-step 3

• Step 10: Evolution is over and the global best solution is obtained
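Putting the steps together, a condensed sketch of the whole loop is given below. It is our own illustration, not the authors' implementation; it reuses the `pso_step`, `is_premature` and `shrink_area` helpers sketched in the previous sections and assumes the objective `f` maps a D-dimensional vector to a scalar fitness. For brevity, the only termination condition is the iteration budget N.

```python
import numpy as np

def epso(f, a, b, D, m=20, N=500, A=50, beta=0.07, C=0.4):
    """EPSO sketch: standard PSO with a chaotic rescue phase on stagnation."""
    rng = np.random.default_rng()
    # Step 2: chaotic initialization, Eq. 4 iterated m-1 times, mapped by Eq. 6
    rows = [rng.uniform(-1.0, 1.0, D)]
    for _ in range(m - 1):
        rows.append(1.0 - 2.0 * rows[-1] ** 2)
    X = a + (b - a) * (np.vstack(rows) + 1.0) / 2.0
    # Velocities initialized randomly here for brevity; the paper uses the same chaotic scheme
    V = rng.uniform(-1.0, 1.0, (m, D)) * 0.2 * (b - a)
    P = X.copy()
    pf = np.array([f(x) for x in X])
    g = int(np.argmin(pf))                                   # Step 3
    v_max = 0.2 * (b - a)
    for k in range(N):                                       # Steps 4-8
        X, V = pso_step(X, V, P, g, k, N, v_max=v_max, bounds=(a, b))  # Steps 5-6
        fit = np.array([f(x) for x in X])
        better = fit < pf
        P[better], pf[better] = X[better], fit[better]
        g = int(np.argmin(pf))
        if is_premature(fit, beta):                          # Step 8: sigma^2 < m*beta
            ca, cb = a.astype(float).copy(), b.astype(float).copy()
            cx = 2.0 * (P[g] - ca) / (cb - ca) - 1.0         # Step 9: revert gbest (Eq. 9)
            for _ in range(A):
                cx = 1.0 - 2.0 * cx ** 2                     # Eq. 4
                mx = ca + (cb - ca) * (cx + 1.0) / 2.0       # Eq. 6
                val = f(mx)
                if val < pf[g]:
                    P[g], pf[g] = mx, val
                ca, cb = shrink_area(ca, cb, P[g], C)        # Eq. 8, clipped to bounds
    return P[g], pf[g]                                       # Step 10
```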
EXPERIMENT RESULTS AND ANALYSIS
In this section, three typical benchmark functions are used to evaluate the performance of EPSO. The actual minimum values of these functions are all zero.
Parameter initialization of the algorithm is as follows: parameters c1 and c2 are set to 1.49, w_min is set to 0.4, w_max is set to 0.95, the total iteration number N is set to 500, the adjustment coefficient C is set to 0.4, the chaos search time A is set to 50, the number of particles m is set to 20, the constant β is adjusted slightly within (0, 0.2) for different problems and the maximum velocity v_max,i of each particle is set to twenty percent of the optimization variable area. For each function, 50 trials are carried out and the minimum optimal value, maximum optimal value, average optimal value and average computing time are recorded.
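With these settings, the hypothetical `epso` sketch from the previous section could be run as follows (illustrative only; a sphere function stands in for the benchmark functions f1-f3, whose definitions are not reproduced in this excerpt):

```python
import numpy as np

# Sphere function as a stand-in objective (actual minimum 0 at the origin)
def sphere(v):
    return float(np.sum(v ** 2))

D = 10
a = np.full(D, -10.0)   # lower bounds of the optimization variable area
b = np.full(D, 10.0)    # upper bounds of the optimization variable area
best_x, best_f = epso(sphere, a, b, D, m=20, N=500, A=50, beta=0.07, C=0.4)
print("best fitness:", best_f)
```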
Table 1: Compared results of f1

Table 2: Compared results of f2

Table 3: Compared results of f3
The dimension of f1 is set to 2 and 10, respectively. The experimental results are shown in Table 1. When D is set to 2, both algorithms converge to the optimal value, but the precision of EPSO is much better than that of SPSO. When D is set to 10, SPSO cannot converge to the optimal value in 50 trials, while EPSO converges to the optimal value with high precision.
Figure 1 compares the convergence results of SPSO and EPSO when D = 2. The horizontal axis indicates the iteration number and the vertical axis indicates the logarithm of the average optimal value. From Fig. 1, it can be seen that SPSO becomes trapped in a local extremum in later evolution, but EPSO escapes the attraction of the local extremum and finds better solutions.
The dimension of f2 is 2. The experimental results are shown in Table 2. In 50 trials, SPSO fails to converge to the optimal value in most cases, but EPSO converges to the optimal value with high precision. The maximal fitness of EPSO is much smaller than that of SPSO.
Figure 2 compares the convergence results under different iteration numbers. The horizontal axis indicates the iteration number and the vertical axis indicates the logarithm of the average optimal value.

Fig. 1: Convergence results of f1 when D = 2

Fig. 2: Comparison of convergence results of f2
From Fig. 2, it can be seen that EPSO escapes the local extremum and finds better solutions.
The dimension of f3 is set to 2 and 10, respectively. The experimental results are shown in Table 3. When D is set to 2, the results of SPSO are close to the optimal value, but EPSO converges to the optimal value with higher precision. When D is set to 10, SPSO generally cannot converge to the optimal value, while the results of EPSO are close to the optimal value. The convergence of EPSO is improved compared with that of SPSO.
Figure 3 compares the convergence results under different iteration numbers when D is set to 2. The horizontal axis indicates the iteration number and the vertical axis indicates the logarithm of the average optimal value. Figure 3 shows that the capability of EPSO is obviously better than that of SPSO.
The testing results on the benchmark functions show that, for these three benchmark functions, the results of the proposed algorithm are obviously better than those of SPSO. EPSO outperforms SPSO in both convergence precision and stability.

Fig. 3: Convergence results of f3 when D = 2
This is because EPSO can find better solutions during the optimization process and local extrema are avoided during the search.
In the comparison experiments, it was found that the parameter selection of the algorithm is very important. In order for the algorithm to carry out the chaos search exactly when the particle swarm falls into a local optimum, the value of β must be chosen properly. Since the chaos search is triggered by σ² < m·β, a value of β that is too large makes the algorithm carry out the chaos search prematurely, while a value that is too small makes the chaos search hard to trigger. From different comparisons, the capability of the algorithm is best when β is set to 0.07. For the chaos search, the selection of the adjustment coefficient C is also important: the smaller C is, the faster the search area shrinks. From different experiments, the capability of the algorithm is best when C is set to 0.4. Finally, the above tables show that the average computing time of EPSO is considerably longer than that of SPSO, because the chaos search strategy is introduced into the PSO algorithm. However, the above figures indicate that satisfactory optimization results are already obtained when the total iteration number is set to 200 and the chaos search time to 40, so the search time of EPSO can be reduced accordingly.
CONCLUSION
In this study, an enhanced particle swarm optimization algorithm is proposed. When the particles fall into a local extremum, they are reactivated by the improved chaos search strategy to search for the best solution. The experimental results show that the proposed algorithm can effectively overcome premature convergence. Moreover, the precision and stability of the algorithm are both obviously better than those of SPSO. In future work, the stability and search speed of the algorithm will be improved further.
ACKNOWLEDGMENT
This study was supported by the National Natural Science Foundation of China under Grant No. 60173055.