**ABSTRACT**

A hybrid adaptive SAGA based on a mutative scale chaos optimization strategy (CASAGA) is proposed to overcome the slow convergence and the tendency to fall into local optima that characterize the Standard Genetic Algorithm (SGA). The algorithm combines the parallel search structure of the Genetic Algorithm (GA) with the probabilistic jumping property of Simulated Annealing (SA) and uses adaptive crossover and mutation operators. The mutative scale chaos optimization strategy is used to accelerate the search for the optimum. Compared with SGA and MSCGA on several complex function optimization problems and several TSP combinatorial optimization problems, CASAGA improves global convergence and enhances the ability to escape from local optima.


**How to cite this article**

*Information Technology Journal, 5: 524-528.*

**DOI:** 10.3923/itj.2006.524.528

**URL:** https://scialert.net/abstract/?doi=itj.2006.524.528

**INTRODUCTION**

Genetic Algorithm (GA) is a general stochastic optimization strategy based on the principles of biological evolution and natural selection. The basic concepts of GA were developed by Holland (1973), who was inspired by Darwin's survival-of-the-fittest theory. GA comprises a class of adaptive search techniques suitable for searching a discontinuous space. The elementary operations are reproduction, crossover and mutation. Genetic algorithms maintain a population of solutions rather than a single current solution; the population is iteratively recombined and mutated to generate successive populations. GA exhibits implicit parallelism in its search and can handle complex optimization problems that traditional methods cannot (Eiben *et al*., 1999). However, GA has disadvantages on combinatorial optimization problems with complex structure: a large search space, long search time and premature convergence.

SA is based on the idea of neighborhood search. Kirkpatrick *et al*. (1983) suggested that a form of SA could be used to solve complex optimization problems. The algorithm works by selecting candidate solutions in the neighborhood of the given candidate solution. SA attempts to avoid entrapment in a local optimum by sometimes accepting a move that deteriorates the value of the objective function (Ahmed and Alkhamis, 2002). With the help of the distribution scheme, SA provides reasonable control over the initial temperature and cooling schedule, so that it performs effective exploration with good confidence in the solution quality.

A chaotic motion can traverse every state in a certain region without repetition, following its own intrinsic rule. Chaos was therefore introduced into the optimization strategy to accelerate the search for the optimum (Yan *et al*., 2002).

In this paper, combining the parallel search ability of an adaptive GA with the controllable jumping property of SA (Ling and Dazhong, 2003), together with a mutative scale chaos strategy, a hybrid meta-heuristic algorithm named CASAGA is proposed, with its operators and parameters reasonably designed. Experiments on several benchmark problems show that CASAGA can significantly reduce the computational cost and decrease the probability of getting stuck in a local optimum compared with SGA and MSCGA.

**MUTATIVE SCALE CHAOS OPTIMIZATION STRATEGY**

Chaos is one of the most common phenomena in nonlinear systems and the theory of chaos is one of the most important achievements of nonlinear system research. It is now widely recognized that chaos is a fundamental mode of motion underlying almost all natural phenomena (Bing and Weisun, 1997).

The logistic equation was originally proposed to describe the evolution of biological populations (Moon, 1992). It is the most common and simplest chaotic map:

x_{n+1} = L•x_{n}•(1 − x_{n}) (1)

Where, L is a control parameter between 0 and 4.0. When L = 4.0, the system is proved to be in a chaotic state. Given an arbitrary initial value in (0, 1) not equal to 0.25, 0.5 or 0.75, the chaotic trajectory will eventually visit every point in (0, 1) without repetition.
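As a quick illustration (a sketch added here, not code from the original paper), the logistic map can be iterated directly; with L = 4.0 and a valid start value the iterates wander through (0, 1):

```python
def logistic_map(x0, L=4.0, steps=10):
    """Iterate the logistic map x_{n+1} = L * x_n * (1 - x_n)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(L * xs[-1] * (1.0 - xs[-1]))
    return xs

# Starting from 0.3 (a value not in {0.25, 0.5, 0.75}), the
# trajectory stays inside the unit interval.
seq = logistic_map(0.3)
```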

If the objective function of the continuous optimization problem to be solved is:

min f(x_{1}, x_{2},..., x_{n}), x_{i}∈[a_{i}, b_{i}], i = 1, 2,..., n (2)

Then the basic process of the mutative scale chaos optimization strategy can be described as follows:
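A minimal Python sketch of such a mutative-scale chaos search follows (the function names, the number of rounds and the `shrink` factor are illustrative assumptions, not the authors' exact procedure): a logistic chaos sequence is mapped onto the search interval as a carrier, the best point found is tracked and the interval is contracted around it between rounds.

```python
import random

def logistic(x):
    """One step of the logistic map with L = 4.0."""
    return 4.0 * x * (1.0 - x)

def chaos_search(f, a, b, rounds=5, iters_per_round=200, shrink=0.5):
    """Mutative-scale chaos search sketch: map a chaotic sequence
    onto [a, b], keep the best point, then shrink the search
    interval around it and repeat at a finer scale."""
    best_x, best_f = None, float("inf")
    for _ in range(rounds):
        x = random.uniform(0.01, 0.99)      # fresh chaos variable
        for _ in range(iters_per_round):
            x = logistic(x)
            cand = a + x * (b - a)          # carrier mapping onto [a, b]
            fc = f(cand)
            if fc < best_f:
                best_x, best_f = cand, fc
        # mutative scale: contract the interval around the current best
        half = shrink * (b - a) / 2.0
        a, b = max(a, best_x - half), min(b, best_x + half)
    return best_x, best_f
```

The contraction step is what makes the scale "mutative": early rounds explore the full interval ergodically, later rounds refine around the incumbent.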

**ADAPTIVE SAGA BASED ON CHAOS STRATEGY**

There are weaknesses in both simple GA and SA. SA works on a single solution in series, so a practically designed SA often suffers from difficulty in properly controlling the process and from the prohibitive time required to reach equilibrium. GA presents implicit parallelism and can retain useful redundant information about what was learned in previous searches through its representation of individuals in the population. But GA may lose solutions and substructures due to the disruptive effects of the genetic operators, and it is prone to premature convergence, which results in poor solutions. Based on a suitable cooling schedule, SA has a good convergence property and its ability to probabilistically escape from local optima can be controlled.

Chaos has three important dynamic properties: sensitive dependence on initial conditions, intrinsic stochasticity and ergodicity. Chaos is in essence deeply related to evolution: in chaos theory, biological evolution is regarded as feedback randomness, where the randomness is caused not by outside disturbance but by intrinsic elements (Tong *et al*., 1999), so chaos is believed to be a source of information and of system evolution. Taking advantage of chaos, a search algorithm called chaotic search has been presented, which has a better hill-climbing capability than simulated annealing and random search. But it must search almost the whole solution space, relying on its intrinsic stochasticity to reach the optimum solution, which requires more computation time; another shortcoming is that chaotic search operates on only one solution at a time.

Thus, a hybrid meta-heuristic algorithm combined of GA and SA based on chaos optimization strategy is presented as follows:

Algorithm CASAGA:

During the hybrid search process, GA provides a set of initial solutions for SA at each temperature; SA performs Metropolis sampling for each solution until the equilibrium condition is reached, and GA then uses the solutions found by SA to continue the parallel evolution. The optimization operators, such as the mutation operator and the new-solution generator of SA, can be used separately or in combination to yield a large neighborhood and efficiently explore better solutions in the solution space.
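The interplay just described can be sketched roughly as follows (an illustrative Python outline on a one-dimensional problem; the function name and parameter defaults are assumptions, not the authors' exact algorithm):

```python
import math
import random

def casaga_sketch(f, pop, t0=2000.0, t_end=1.0, omega=0.98, eta=0.1):
    """Sketch of the GA/SA hybrid loop: at each temperature the
    population is evolved by GA-style recombination and selection,
    then each individual is refined by a Metropolis-accepted
    Gaussian move before exponential cooling."""
    t = t0
    while t > t_end:
        # GA step: arithmetic crossover of random parent pairs
        random.shuffle(pop)
        children = []
        for x1, x2 in zip(pop[0::2], pop[1::2]):
            alpha = random.random()
            children.append(alpha * x1 + (1 - alpha) * x2)
            children.append(alpha * x2 + (1 - alpha) * x1)
        # keep the best M of old population plus children (minimization)
        pop = sorted(pop + children, key=f)[:len(pop)]
        # SA step: one Metropolis sample per individual
        for i, w in enumerate(pop):
            w_new = w + eta * random.gauss(0.0, 1.0)
            delta = f(w_new) - f(w)
            if delta < 0 or random.random() < math.exp(-delta / t):
                pop[i] = w_new
        t *= omega  # exponential cooling t_{k+1} = omega * t_k
    return min(pop, key=f)
```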

**Fitness function and annealing function: **The fitness function is used to measure the fitness of each individual; it is the only link between the meta-heuristic algorithm and the problem being optimized.

For the annealing function, an exponential cooling schedule is used to adjust the temperature: t_{k+1} = ω•t_{k}, where ω∈(0,1) is the decrease rate. It is often considered an excellent cooling method because it provides a good compromise between a computationally fast schedule and the ability to reach low-energy states.

**Selection strategy:** The main selection operators are tournament selection and roulette selection; roulette selection is used in this paper. Each individual corresponds to a segment of the roulette wheel proportional to its share of the total fitness. The wheel is then spun N times to pick N parents: after each spin, the individual whose segment contains the marked position is selected as a parent.
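A minimal sketch of this fitness-proportionate selection, assuming non-negative fitness values (the function name is illustrative):

```python
import random

def roulette_select(population, fitness, n):
    """Spin the roulette wheel n times; each individual owns a
    slice of the wheel proportional to its fitness value."""
    total = sum(fitness)
    parents = []
    for _ in range(n):
        pick, acc = random.uniform(0.0, total), 0.0
        for ind, fit in zip(population, fitness):
            acc += fit
            if acc >= pick:        # marked position falls in this slice
                parents.append(ind)
                break
    return parents
```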

**Crossover strategy: **The arithmetic crossover operation

x'_{1} = α•x_{1} + (1 − α)•x_{2}, x'_{2} = α•x_{2} + (1 − α)•x_{1}

is implemented for the two selected solutions, where x_{1} and x_{2} are parents, x'_{1} and x'_{2} are children and α∈(0,1) is a random variable. This procedure is repeated M/2 times to generate the new population; then the top M solutions with the better objective values from the old population and the new solutions are retained (Srnivas and Patnaik, 1994). The crossover probability P_{c} can be calculated using the following equation:

P_{c} = k_{1}•(f_{max} − f_{c})/(f_{max} − f_{avg}), if f_{c} ≥ f_{avg}; P_{c} = k_{2}, if f_{c} < f_{avg} (6)

Where, f_{max} denotes the maximum fitness value of the current generation, f_{avg} denotes the average fitness value and f_{c} denotes the fitness value of the chromosome that undergoes the crossover operation.
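The crossover step can be sketched as follows (a hedged illustration: the constants `k1` and `k2` and the Srinivas–Patnaik-style piecewise form are assumptions, since the paper's exact constants are not given here):

```python
import random

def adaptive_pc(f_c, f_max, f_avg, k1=0.9, k2=0.9):
    """Adaptive crossover probability in the Srinivas-Patnaik style:
    fitter-than-average chromosomes get a smaller P_c (protecting
    good solutions), others get a fixed k2. k1 and k2 are
    illustrative constants, not the authors' values."""
    if f_c >= f_avg and f_max > f_avg:
        return k1 * (f_max - f_c) / (f_max - f_avg)
    return k2

def arithmetic_crossover(x1, x2):
    """Produce two children as convex combinations of the parents."""
    alpha = random.random()
    return alpha * x1 + (1 - alpha) * x2, alpha * x2 + (1 - alpha) * x1
```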

**Mutation strategy:** By incorporating SA into the search structure, SA serves as a type of mutation with an adaptive probability controlled by the temperature: it avoids being trapped in particular solutions at a **high temperature** and performs a fine neighborhood search at a low temperature. Thus the mutation rate is set to one so that every individual performs this fine neighborhood search, and these operators are designed as local search operators, conducted by adding random noise to each parameter: w' = w + η•ξ, where ξ is a random variable following the Gaussian distribution N(0,1) and η is a scale parameter. The mutation probability P_{m} can be calculated using the following equation:

P_{m} = k_{3}•(f_{max} − f_{m})/(f_{max} − f_{avg}), if f_{m} ≥ f_{avg}; P_{m} = k_{4}, if f_{m} < f_{avg} (7)

Where, f_{m} denotes the fitness value of the chromosome that undergoes the mutation operation.
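Analogously, the adaptive mutation probability and the Gaussian local move can be sketched as below (the constants `k3`, `k4` and the scale `eta` are illustrative assumptions):

```python
import random

def adaptive_pm(f_m, f_max, f_avg, k3=0.1, k4=0.1):
    """Adaptive mutation probability in the same piecewise style as
    P_c: fitter-than-average chromosomes mutate less. k3 and k4 are
    illustrative constants, not the authors' values."""
    if f_m >= f_avg and f_max > f_avg:
        return k3 * (f_max - f_m) / (f_max - f_avg)
    return k4

def gaussian_mutate(w, eta=0.1):
    """Local-search move w' = w + eta * xi with xi ~ N(0, 1)."""
    return w + eta * random.gauss(0.0, 1.0)
```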

**CONVERGENCE ANALYSIS**

The hybrid algorithm CASAGA, built on the chaos optimization strategy, can reduce the probability of getting stuck in a local optimum and can guarantee convergence to the global optimum.

According to Yadong and Shaoyuan (2002), let x* = arg min f(x), where the logistic map is used in the optimization search. Let {x_{1}^{k}} be the solution sequence produced by CASAGA and {x_{2}^{k}} the solution sequence produced by the annealing genetic algorithm. For i ≤ j we have f(x_{1}^{i}) ≥ f(x_{1}^{j}) and f(x_{2}^{i}) ≥ f(x_{2}^{j}). Since {f(x_{2}^{k})} is a convergent sequence with lim P{f(x_{2}^{k}) = f(x*)} = 1 and f(x_{2}^{k}) ≥ f(x_{1}^{k}) ≥ f(x*), {f(x_{1}^{k})} is also a convergent sequence by the squeeze theorem.

**RESULTS AND DISCUSSION**

**Complex functions: **Several benchmark functions (F_{1} to F_{3}) were used to compare CASAGA with SGA and MSCGA (Goldberg, 1989). These functions fall into local optima easily: there are numerous local optima around the global optimum (Fig. 1 and 2).

Fig. 1: | Figures of complex testing function F_{1} |

Table 1: | A comparison of three algorithms on benchmark functions |

Table 2: | Experimental results on three symmetric TSP problems |

Table 3: | Experimental results on three asymmetric TSP problems |

(8) |

(9) |

(10) |

CASAGA outperforms SGA and MSCGA on the whole. CASAGA converges to the global optimum in 100% of the runs, whereas SGA does not. From the feasible solutions obtained, CASAGA is superior to the other two algorithms, which shows that the CASAGA presented in this study is efficient and promising (Table 1).

**TSP problem: **The travelling salesman problem (TSP) has been proved to be NP-hard (Garey and Johnson, 1979). It is a representative combinatorial optimization problem with many applications in practical engineering.

Fig. 2: | Average fitness value curve of function F_{1} |

It has often been used to study the performance of algorithms because of its engineering and academic value.

TSP is the problem of finding the shortest closed tour that visits all the cities in a given set. Its mathematical description is as follows: given a set of cities C = (c_{1}, c_{2},...,c_{n}), where the distance between every pair of cities is d(c_{i}, c_{j})∈R^{+}, the problem is to find a tour (c_{π1}, c_{π2},...,c_{πn}) that visits every city exactly once and minimizes

D = Σ_{i=1}^{n−1} d(c_{πi}, c_{π(i+1)}) + d(c_{πn}, c_{π1}) (11)

Where, (π1, π2,...,πn) is a permutation of (1, 2,...,n).
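The tour-length objective of Eq. 11 can be computed directly; a small Python sketch (the function name is illustrative):

```python
import math

def tour_length(cities, tour):
    """Total length of the closed tour: sum of the distances between
    consecutive cities plus the return leg to the starting city."""
    n = len(tour)
    return sum(
        math.dist(cities[tour[i]], cities[tour[(i + 1) % n]])
        for i in range(n)
    )
```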

Several symmetric and asymmetric TSP problems from the standard library TSPLIB (TSPLIB, 2005) were used to validate the algorithm. The population size is 50 and the maximum number of iterations is 500. The crossover and mutation probabilities of CASAGA change adaptively. The initial temperature of the annealing process is 2000, the end temperature is 0 and the annealing rate is 0.98. The experimental results are shown in Table 2 and 3.

The average solutions of CASAGA are close to the known best solutions and require fewer iterations on both the symmetric and asymmetric TSP problems. One disadvantage is that more search time is needed, because the annealing operation is applied to each individual; but this cost is acceptable relative to the improvement in performance. The above experiments were carried out on a Pentium4-2.4G/512M PC platform with MATLAB tools.

**CONCLUSIONS**

This study discusses a CASAGA hybrid meta-heuristic algorithm that combines the characteristics of GA, SA and chaos.

Leaving the chaos search aside, the hybrid algorithm combines SA and GA through SA-Mutation (SAM) and SA-Recombination (SAR) operations; it reduces to the standard GA if the probabilities of SAM and SAR are zero. A chaotic motion can traverse every state in a certain region without repetition, following its own intrinsic rule; this property was used to seek the optimum within the population operated on by SAGA and guides the population to evolve rapidly.

Experimental results on several benchmark functions show that CASAGA is more flexible and yields more satisfactory results than SGA and MSCGA. It is an efficient and promising optimization strategy.

**ACKNOWLEDGEMENTS**

The authors would like to thank the anonymous reviewers for their careful reading of this paper and for their helpful and constructive comments. This work was supported in part by the National High Technology Development Plan of China (863) under grant No. 2003AA1Z2610.

**REFERENCES**

- Ahmed, M.A. and T.M. Alkhamis, 2002. Simulation-based optimization using simulated annealing with ranking and selection. Comput. Operat. Res., 29: 387-402.

- Eiben, A.E., R. Hinterding and Z. Michalewicz, 1999. Parameter control in evolutionary algorithms. IEEE Trans. Evol. Comput., 3: 124-141.

- Kirkpatrick, S., C.D. Gelatt Jr. and M.P. Vecchi, 1983. Optimization by simulated annealing. Science, 220: 671-680.

- Ling, W. and Z. Dazhong, 2003. An effective hybrid heuristic for flow shop scheduling. Int. J. Adv. Manufact. Technol., 21: 38-44.

- Srnivas, M. and L.M. Patnaik, 1994. Adaptive probability of crossover and mutation in genetic algorithms. IEEE Trans. Syst. Man Cybernet., 24: 656-667.

- Tong, Z., W. Hongwei and W. Zicai, 1999. Mutative scale chaos optimization algorithm and its application. Control Decision, 14: 285-288.

- Yadong, L. and L. Shaoyuan, 2002. A new genetic chaos optimization combination method. Control Theor. Appl., 19: 143-145.