
Journal of Software Engineering

Year: 2015 | Volume: 9 | Issue: 4 | Page No.: 838-847
DOI: 10.3923/jse.2015.838.847
Particle Swarm Optimization Research Based on Bacterial Foraging Algorithm
Yubao Hou

Abstract: Particle Swarm Optimization (PSO) is an algorithm for solving optimization problems. To avoid falling into local minima, a defect of the standard PSO algorithm, this study proposes a new algorithm based on the chemotaxis operator of the bacterial foraging algorithm. The new algorithm exploits the bacterial foraging algorithm's strength in searching a local region for its optimal value. Tests show that the new algorithm compensates for the defects of low precision and premature convergence to local minima. The new algorithm is a global optimization algorithm for solving the optimization of complex functions.


How to cite this article
Yubao Hou, 2015. Particle Swarm Optimization Research Based on Bacterial Foraging Algorithm. Journal of Software Engineering, 9: 838-847.

Keywords: bacterial foraging, PSO, global optimization, chemotaxis, multimodal functions

INTRODUCTION

Since the 1940s, researchers have drawn inspiration from biological systems to solve high-dimensional, multimodal, nonlinear and non-differentiable practical engineering problems, and many bionic optimization algorithms have been constructed and devised. Kennedy and Eberhart (1995) proposed the Particle Swarm Optimization (PSO) algorithm (Eberhart et al., 1996; Kennedy and Eberhart, 1995; Kennedy et al., 2001), an iterative optimization algorithm that simulates the foraging behavior of bird flocks. The algorithm endows a group of particles with self-awareness and cognitive ability; the global optimal solution is obtained after several iterations of updates. PSO is easy to implement and runs efficiently, requiring no gradient information and few adjustable parameters. It has been applied successfully in function optimization, engineering, system control, shop scheduling and other areas. However, as the dimension of the objective function increases, PSO falls easily into local optima. To overcome this shortcoming, many researchers have explored and improved the algorithm.

Zhang and Lin (2008) integrated a lifespan concept into PSO: a life cycle is set for the global optimal particle and, when the cycle ends, the global optimal particle is re-selected, which helps the algorithm escape from local optima. Gao and Xie (2004) introduced the simulated annealing idea into PSO together with hybridization and Gaussian mutation, yielding an effective algorithm for avoiding local optima. Li and Quan (2010) proposed a hybrid algorithm combining PSO with the GT algorithm (Zhang et al., 2008); by changing the particle update rule, it reduces reliance on the initial particles and strengthens the ability to escape from local optima. Xu (2010) proposed an improved particle swarm optimization algorithm with a crossover factor, which improves the global search ability but also increases the search time. Ren et al. (2010) introduced neighborhood extrema into the particle update rule of the evolutionary process: particles diffuse in an incremental way so that the population exploits its information more fully, the nonlinear inertia weight is adjusted at the same time and the diffusion operation is guided by changes in the extrema, enhancing the population's ability to use information.

MATERIALS AND METHODS

Bacterial foraging algorithm: The Bacterial Foraging Algorithm (BFA) is a bionic optimization algorithm inspired by the foraging behavior of E. coli in the human intestinal tract (Passino, 2002; Berg, 2000; Berg and Brown, 1972). In the BFA model, a solution of the optimization problem corresponds to the state of a bacterium in the search space. The algorithm comprises three steps: chemotaxis, reproduction and elimination-dispersal (Das et al., 2009). The chemotaxis operation consists of tumbling and swimming. Reproduction follows the "survival of the fittest" principle: at the end of a bacterial life cycle, taking the accumulated fitness value of each bacterium as the standard, the worse half of the bacteria die and each bacterium of the better half splits into two to replace them. To reduce the chance of becoming trapped at a local minimum, BFA introduces the elimination-dispersal operation: after a certain number of reproduction steps, each bacterium is, with a certain probability, randomly dispersed to a new location in the search space.

In the BFA, the chemotactic step is the most important step of a bacterium's search for food. A bacterium first tumbles, that is, takes a unit step in a random direction. If the tumble improves the fitness value, the bacterium continues to move several steps in the same direction until the fitness no longer improves or a predetermined maximum number of moves is reached; this process is called swimming. In the bacterial foraging algorithm, chemotaxis behavior can be described as follows.

Each individual bacterium represents a solution in the solution space. P(i, j, k, l) represents the spatial position vector of the i-th bacterium, where j is the index of the chemotactic loop, k the index of the reproduction loop and l the index of the elimination-dispersal loop. The direction and position are adjusted by Eq. 1 and 2:

φ(i) = Δ(i)/√(Δᵀ(i)Δ(i))    (1)

P(i, j+1, k, l) = P(i, j, k, l) + C(i)·φ(i)    (2)

where, C(i) is the unit step length of the swim in the selected direction, Δ(i) is a randomly generated vector and φ(i) is the direction selected after reorientation. The essence of the chemotactic step in bacterial foraging optimization is that each bacterium searches the neighborhood of a feasible solution in the solution space and, according to the fitness function, decides whether to continue swimming in the current direction or to re-orient. The chemotaxis operator ensures that each individual bacterium can always find the best value in its neighborhood.
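As a concrete illustration, the tumble-and-swim behavior of Eq. 1 and 2 can be sketched in Python. This is an illustrative sketch, not the author's code; the function name and default values (step size, swim limit) are our assumptions:

```python
import numpy as np

def chemotaxis_step(pos, fitness, step_size=0.1, max_swims=4, rng=None):
    """One tumble-and-swim chemotaxis move, a sketch of Eq. 1-2.

    pos       -- current position of one bacterium (1-D array)
    fitness   -- objective function to minimize
    step_size -- C(i), the unit step length (assumed default)
    max_swims -- Ns, maximum swim steps in an improving direction
    """
    rng = rng or np.random.default_rng()
    best = fitness(pos)
    # Tumble: pick a random unit direction phi(i) = Delta/sqrt(Delta^T Delta) (Eq. 1).
    delta = rng.uniform(-1.0, 1.0, size=pos.shape)
    phi = delta / np.sqrt(delta @ delta)
    for _ in range(max_swims):
        # Swim: move one step C(i)*phi(i) in that direction (Eq. 2).
        trial = pos + step_size * phi
        f_trial = fitness(trial)
        if f_trial < best:      # keep swimming while the fitness improves
            pos, best = trial, f_trial
        else:                   # stop at the first non-improving move
            break
    return pos, best
```

Since non-improving moves are rejected, the returned fitness is never worse than the starting fitness.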

Standard PSO algorithm: The PSO algorithm originated from a simulation of the foraging behavior of bird flocks; the population is optimized through collective collaboration between individuals. The standard PSO algorithm first initializes a group of particles; each particle flies at a certain velocity in n-dimensional space, and the velocity is dynamically adjusted according to the flight experience of the individual and of the group. Let Xi = (Xi1, Xi2, …, Xin) be the current position of the i-th particle, Vi = (Vi1, Vi2, …, Vin) its current velocity and Pi = (Pi1, Pi2, …, Pin) the best location it has ever experienced, that is, the position at which it had the best fitness. With f(x) the objective function to be minimized, the best position of the i-th particle is determined by Eq. 3:

Pi(t+1) = Pi(t), if f(Xi(t+1)) ≥ f(Pi(t)); Pi(t+1) = Xi(t+1), if f(Xi(t+1)) < f(Pi(t))    (3)

The best among all the particles' best positions is called the global best position, denoted Pg(t), as given in Eq. 4:

f(Pg(t)) = min{f(P1(t)), f(P2(t)), …, f(PS(t))}    (4)

The evolution equations of the PSO algorithm are presented in Eq. 5 and 6:

Vi(t+1) = w·Vi(t) + c1·r1·(Pi(t) − Xi(t)) + c2·r2·(Pg(t) − Xi(t))    (5)

Xi(t+1) = Xi(t) + Vi(t+1)    (6)

where, i = 1, 2, …, S; S is the size of the population; c1 and c2 are accelerating factors; r1, r2 ∈ (0, 1) are uniformly distributed random values; and w is the inertia weight: when w is large, the global search ability is strong, and when w is small, the local search ability is more prominent. The linearly decreasing weight PSO (LDWPSO) lets w decrease linearly over the evolution in the range (0.4, 0.95); the relationship between w and the iteration number k can be defined as Eq. 7:

w = wmax − (wmax − wmin)·k/kmax    (7)

where, kmax is the maximum number of iterations.

After the evolution ends, the optimal position found by the PSO search is taken as the required optimal solution.
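The update rules of Eq. 3-7 can be collected into a minimal standard PSO loop. The following sketch is an illustrative re-implementation, not the author's code; the defaults follow the parameter values reported later in this study (N = 40, c1 = c2 = 2, w decreasing from 0.9 to 0.4), while the bounds and seed are assumptions:

```python
import numpy as np

def pso(fitness, dim, n_particles=40, iters=200, c1=2.0, c2=2.0,
        w_max=0.9, w_min=0.4, bounds=(-5.0, 5.0), seed=0):
    """Minimal standard PSO with linearly decreasing inertia weight."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_particles, dim))      # positions
    V = np.zeros((n_particles, dim))                 # velocities
    P = X.copy()                                     # personal bests
    f_P = np.apply_along_axis(fitness, 1, P)
    g = P[np.argmin(f_P)].copy()                     # global best (Eq. 4)
    for k in range(iters):
        w = w_max - (w_max - w_min) * k / iters      # Eq. 7
        r1, r2 = rng.random((2, n_particles, dim))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)  # Eq. 5
        X = X + V                                          # Eq. 6
        f_X = np.apply_along_axis(fitness, 1, X)
        better = f_X < f_P                           # Eq. 3: keep improved bests
        P[better], f_P[better] = X[better], f_X[better]
        g = P[np.argmin(f_P)].copy()
    return g, float(f_P.min())
```

For example, `pso(lambda x: float(x @ x), dim=2)` searches for the minimum of the 2-D sphere function.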

Particle swarm optimization algorithm based on bacterial foraging: From the principle of the PSO algorithm, it can be seen that the inertia weight w gives the particles greater exploration ability in the initial stage and greater exploitation ability in the later stage. However, when the particle velocity is too large during exploration, a particle can easily fly over a local region containing the optimal value without finding the optimal solution there, and the swarm thus falls into a local optimum.

To avoid particles missing regions that contain better solutions during exploration, PSO is combined with the bacterial foraging algorithm. Since an individual bacterium can always find the optimal value in its neighborhood, the chemotaxis operator is used at the appropriate time to adjust the positions of the PSO particles, moving them to new locations in the region to search for the optimal solution; in this way, particles are prevented from missing better solutions in the exploration process.


Fig. 1: Overall flow of the BFA-PSO algorithm

The appropriate time is defined as the point in the iterative process when the global optimal solution is updated, because at that moment the probability of finding an even better solution in the particle's neighborhood is high.

In the local search process, attention should also be paid to the step length of the bacterial chemotaxis operator. If the step length is too long, the particles can easily miss the optimal value; if it is too short, multiple local optima can be located accurately but the computational cost increases. Different step values are therefore required for different test functions; in the function tests, several different step lengths are tried and the best is selected (Fig. 1, 2).
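Putting the pieces together, a minimal sketch of the hybrid strategy (run standard PSO and, whenever the global best improves, refine it with a bounded random-direction local search in the spirit of the chemotaxis operator) might look as follows; all names and default values here are illustrative assumptions, not the paper's code:

```python
import numpy as np

def bfa_pso(fitness, dim, n_particles=40, iters=200, c1=2.0, c2=2.0,
            w_max=0.9, w_min=0.4, chem_step=0.1, max_chem=50,
            bounds=(-5.0, 5.0), seed=0):
    """PSO with a chemotaxis-style local search on global-best updates."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_particles, dim))
    V = np.zeros((n_particles, dim))
    P = X.copy()
    f_P = np.apply_along_axis(fitness, 1, P)
    gi = int(np.argmin(f_P))
    g, f_g = P[gi].copy(), float(f_P[gi])

    def chemotaxis(pos, best):
        # Repeated tumbles: take a unit step in a random direction and
        # keep the move only when it improves the fitness.
        for _ in range(max_chem):
            delta = rng.uniform(-1.0, 1.0, dim)
            trial = pos + chem_step * delta / np.linalg.norm(delta)
            f_t = fitness(trial)
            if f_t < best:
                pos, best = trial, f_t
        return pos, best

    for k in range(iters):
        w = w_max - (w_max - w_min) * k / iters             # Eq. 7
        r1, r2 = rng.random((2, n_particles, dim))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)   # Eq. 5
        X = X + V                                           # Eq. 6
        f_X = np.apply_along_axis(fitness, 1, X)
        better = f_X < f_P
        P[better], f_P[better] = X[better], f_X[better]     # Eq. 3
        if float(f_P.min()) < f_g:      # global best updated: the
            gi = int(np.argmin(f_P))    # "appropriate time" to refine
            g, f_g = chemotaxis(P[gi].copy(), float(f_P[gi]))
    return g, f_g
```

The local search fires only on global-best updates, so its cost stays small relative to the PSO iterations while still giving the fine search described above.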

RESULTS AND DISCUSSION

Test function: To illustrate the performance of the BFA-PSO algorithm, comparative experiments are made with the constriction factor PSO (Clerc and Kennedy, 2002) and the bacterial foraging algorithm BFA (Passino, 2002; Das et al., 2009). Four standard nonlinear test functions are selected: Rosenbrock, Rastrigin, Griewank and Ackley:

Fig. 2: Local search algorithm flow

•  Rosenbrock function f1(x):

f1(x) = Σ_{i=1}^{n-1} [100(x_{i+1} − x_i²)² + (x_i − 1)²]    (8)

This unimodal function has a global optimum value of 0 at X = (1, 1, …, 1). The region close to the optimum is banana-shaped, there is a strong correlation between the variables and the gradient information often misleads the search direction of an algorithm:

•  Rastrigin function f2(x):

f2(x) = Σ_{i=1}^{n} [x_i² − 10cos(2πx_i) + 10]    (9)

This multimodal function has a global optimum value of 0 at X = (0, …, 0). It contains a large array of sinusoidal inflections and deep local minima, so an optimization algorithm very easily falls into a local minimum on the path to the global optimum:

•  Griewank function f3(x):

f3(x) = (1/4000)·Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos(x_i/√i) + 1    (10)

This multimodal function has a global optimum value of 0 at X = (0, …, 0). It is a complex function with many local minimum points and tall obstacles; because the variables are significantly correlated, an optimization algorithm very easily falls into a local optimum:

•  Ackley function f4(x):

f4(x) = −20·exp(−0.2·√((1/n)·Σ_{i=1}^{n} x_i²)) − exp((1/n)·Σ_{i=1}^{n} cos(2πx_i)) + 20 + e    (11)

This multimodal function has a global optimum value of 0 at X = (0, …, 0). It has a large number of local minima and is very difficult to optimize; an algorithm easily falls into a local optimum close to the global optimum.

These four test functions are difficult to optimize even with traditional optimization methods, and the difficulty increases sharply with the dimension of the test function. For low-dimensional unimodal functions, particle swarm optimization obtains satisfactory results; for multimodal, high-dimensional functions, its results are unsatisfactory. For this reason, the dimension of each of the four test functions is set to 30 and 50.
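For reference, the four benchmarks of Eq. 8-11 can be written directly in Python; this is a straightforward NumPy transcription of the standard definitions, not code from this study:

```python
import numpy as np

# The four benchmark functions (Eq. 8-11); each has global minimum 0.
def rosenbrock(x):                         # f1, optimum at (1, ..., 1)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (x[:-1] - 1.0) ** 2))

def rastrigin(x):                          # f2, optimum at the origin
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def griewank(x):                           # f3, optimum at the origin
    i = np.arange(1, x.size + 1)
    return float(np.sum(x ** 2) / 4000.0
                 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)

def ackley(x):                             # f4, optimum at the origin
    n = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n)
                 + 20.0 + np.e)
```

Each function accepts a 1-D array of any dimension, so the same definitions serve the 30- and 50-dimensional tests.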

To test the performance of the new algorithm, the bacterial foraging algorithm BFA, the particle swarm optimization PSO and the BFA-PSO algorithm based on the bacterial chemotaxis operator proposed in this study are all run on the four high-dimensional multimodal functions.

Contrast experimental parameter settings: For the standard PSO algorithm, the number of particles is N = 40, w = wmax - ((wmax - wmin)/M)t with wmax = 0.9 and wmin = 0.4, the learning factors are c1 = c2 = 2 and the number of iterations is 1000. For the pure bacterial foraging algorithm, the main parameters are the number of bacteria s = 50, the number of elimination-dispersal events Ned = 2, the number of reproduction steps Nre = 4, the number of chemotactic steps Nc = 40, the swimming length Ns = 3, the migration probability Ped = 0.25 and the swim step C = 0.001R (R is the width of the optimization interval). For the hybrid algorithm, the parameters are the number of particles N = 40, wmax = 0.9, wmin = 0.4 and learning factors c1 = c2 = 2; in the bacterial chemotaxis operator, the maximum chemotactic number is 50, the number of iterations is 1000 and the step size parameter C differs between test functions.

Table 1: Test results of the algorithms

The step length is 0.01 for the test function f1(x), 0.075 for f2(x), 1 for f3(x) and 0.5 for f4(x). Of course, a more precise value can be obtained by setting the step smaller; in this study, a comprehensive account is taken of the accuracy, the computational time and the selected step size.

Experimental results and analysis: The three algorithms are each run 50 times on the above four functions; the test results are shown in Table 1. The optimization mean refers to the average of all optimization results, the standard deviation is the standard deviation of the optimal values found in the 50 successful searches and the optimum value is the best value obtained in the 50 tests.

As can be seen from Table 1, for the four test functions the improved PSO algorithm obtains better results than both the PSO algorithm and the BFA algorithm, whether measured by the best optimization value or by the mean. The stability of the algorithm is also significant, the standard deviation being generally small. The optimization effect of the new algorithm is obvious on f1(x), f3(x) and f4(x), where it basically achieves the target accuracy. Because the local optima of the f2(x) function are dense and deep, it is very difficult to optimize; increasing the number of particles in the new algorithm achieves somewhat better results, but the improvement is limited. In short, for the unimodal function f1(x) as well as for the multimodal functions f2(x), f3(x) and f4(x), the convergence speed, computing accuracy and stability of the new algorithm are all significantly improved.

Fig. 3(a-c): Convergence effects of three algorithms in 30-dimensional Griewank function (a) BFA-PSO, (b) PSO and (c) BFA

Fig. 4(a-c): Convergence effects of three algorithms in 50-dimensional Ackley function, (a) BFA-PSO, (b) PSO and (c) BFA

The convergence curves in Fig. 3 and 4 show that after some number of generations the standard PSO algorithm stalls or iterates very slowly, while the new algorithm, which uses the bacterial chemotaxis idea for local search, converges very fast: the fitness value decreases almost linearly in some places. The chemotaxis operator can find the local optimal value and thus provides great help to the global optimization, greatly reducing the chance of being trapped in a local optimum while speeding up the convergence rate.

CONCLUSION

In this study, the growth of bacterial colonies is used to simulate the evolution of the PSO process, and a new swarm intelligence optimization algorithm is designed which is mainly used for real-valued function optimization problems. The solution space of the problem corresponds to the culture medium and the individuals of the algorithm correspond to individual bacteria. Following the basic laws by which bacteria seek food, grow and reproduce, three kinds of movement (swimming, tumbling and staying) are designed for the individuals, together with self-reproduction and death mechanisms. When the algorithm is executed, a single individual or a small number of individuals is placed into the solution space; the individuals move to search for the optimal solution and the population changes in accordance with the basic laws of bacterial colony growth in the search process. When the bacterial colony disappears, the algorithm ends naturally.

In this study, the chemotaxis operator of the bacterial foraging algorithm is embedded in the particle swarm algorithm. Whenever the global optimal value changes in the particle swarm algorithm, a local search is performed around the position of the particle to find the optimal value of the region in which the particle lies; this ensures that particles do not miss the optimal value of the search area and that the local area receives a fine search whenever the global best is updated. This improvement can, to some extent, prevent particles from flying over the region containing the optimal value because they move too fast. The experimental data show that, in both search accuracy and stability, the improved PSO algorithm is significantly better than plain PSO and the bacterial foraging algorithm, confirming the effectiveness of the improved algorithm. Directions of future research are a more in-depth discussion of the parameter settings of the combined BFA-PSO and the application of the method to engineering examples.

REFERENCES

  • Berg, H.C., 2000. Motile behavior of bacteria. Phys. Today, 53: 24-30.


  • Berg, H.C. and D.A. Brown, 1972. Chemotaxis in Escherichia coli analysed by three-dimensional tracking. Nature, 239: 500-504.


  • Clerc, M. and J. Kennedy, 2002. The particle Swarm-explosion, stability and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput., 6: 58-73.


  • Das, S., A. Biswas, S. Dasgupta and A. Abraham, 2009. Bacterial Foraging Optimization Algorithm: Theoretical Foundations, Analysis and Applications. In: Foundations of Computational Intelligence, Volume 3: Global Optimization, Abraham, A., A.E. Hassanien, P. Siarry and A. Engelbrecht (Eds.). Springer, Berlin, Germany, ISBN: 9783642010842, pp: 23-55


  • Eberhart, R.C., P.K. Simpson and R. Dobbins, 1996. Computational Intelligence PC Tools. Academic Press Professional, San Diego, CA., USA., ISBN: 0-12-228630-8, pp: 212-223


  • Gao, Y. and S. Xie, 2004. Particle swarm optimization algorithms based on simulated annealing. Comput. Eng. Applic., 40: 47-50.


  • Kennedy, J. and R. Eberhart, 1995. Particle swarm optimization. Proc. IEEE Int. Conf. Neural Networks, 4: 1942-1948.


  • Kennedy, J., R. Eberhart and Y. Shi, 2001. Swarm Intelligence. 1st Edn., Academic Press, San Diego, CA., ISBN: 1-55860-595-9


  • Li, P. and H. Quan, 2010. Improved hybrid particle swarm optimization algorithm. Comput. Eng. Applic., 46: 29-31.


  • Passino, K.M., 2002. Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Syst., 22: 52-67.


  • Ren, X.B., D. Yu, Z.X. Yang and H.W. Ying, 2010. A diffusion particle swarm optimization with fully communicated information. J. Dalian Maritime Univ., 36: 159-161.


  • Xu, X., 2010. A new crossed particle swarm optimization. J. Sichuan Univ. Sci. Eng. (Nat. Sci. Edn.), 23: 19-22.


  • Zhang, J. and Y. Lin, 2008. A particle swarm optimizer with lifespan for global optimization on multimodal functions. Proceedings of the IEEE Congress on Evolutionary Computation, June 1-6, 2008, Hong Kong, pp: 2439-2445.


  • Zhang, P., T. Li and Z.H. Li, 2008. Improved-GT-operator-based cooperative Co-evolutionary algorithm to function optimization. Comput. Eng., 34: 231-349.
