
Information Technology Journal

Year: 2013 | Volume: 12 | Issue: 5 | Page No.: 1035-1039
DOI: 10.3923/itj.2013.1035.1039
An Adaptive Memory Evolution Algorithm
Dan Song, Gang Yan, Sha Fu, Yingfang Zhu, Caihong Wu and Ruifeng Du

Abstract: Memory mechanisms are increasingly being applied to optimization algorithms. To improve the adaptive ability of the algorithm, a memory capability is introduced into the evolutionary-algorithm framework and an Adaptive Memory Evolution Algorithm (AMEA) is proposed. The algorithm sets up a matrix to record the exploration experiences and exploration results of parent individuals and uses these records to guide the generation of offspring, so that AMEA can adaptively select both the dimension to mutate and the exploration radius. In addition, to improve accuracy, the algorithm gives the best individual more mutation opportunities through a super-mutation operator. In simulation tests against similar algorithms, the results show that AMEA has a fast convergence speed and good global convergence performance.


How to cite this article
Dan Song, Gang Yan, Sha Fu, Yingfang Zhu, Caihong Wu and Ruifeng Du, 2013. An Adaptive Memory Evolution Algorithm. Information Technology Journal, 12: 1035-1039.

Keywords: precision, crossover, information feedback, evolutionary algorithm, super-mutation, memory effect, optimization problems

INTRODUCTION

Human beings are highly intelligent creatures and human society is the highest form of swarm intelligence. Humans and other intelligent life in real-world environments adaptively adjust themselves to their surroundings. In the study of intelligent algorithms, adaptive features have attracted more and more attention from scholars: adaptive variants of the clonal selection algorithm, particle swarm optimization and clustering algorithms have been used to improve overall algorithm performance (Yuan-Yuan et al., 2009; Zhu and Wu, 2010; Chen et al., 2010).

Human individuals not only receive genes from their parents but, more importantly, acquire knowledge in human society through education and learning. Memory is the basis of education and learning and an important prerequisite for both. Therefore, a number of scholars have introduced memory into intelligent processing algorithms to improve their intelligence (Shang et al., 2009; Niu et al., 2009; Liu et al., 2007; Li et al., 2008; Peng and Gao, 2010). For multi-objective constrained optimization problems, memory cells holding infeasible solutions close to the feasible region can be used for local search, so that the algorithm better approximates the constrained Pareto front (Shang et al., 2009). For single-objective optimization problems, introducing immune memory into the immune clonal algorithm enhances the convergence rate of the population (Niu et al., 2009; Liu et al., 2007). Li et al. (2008) proposed a memory-effect-based local outlier detection algorithm which effectively reduces the running time. Peng and Gao (2010) proposed a memory-enhanced estimation of distribution algorithm for solving binary-coded dynamic optimization problems. In addition, some scholars have studied and applied evolutionary algorithms in practice (Guerra-Gomez et al., 2012; Polanco-Martagon et al., 2012).

Inspired by education and learning in human society, an Adaptive Memory Evolution Algorithm (AMEA) is proposed in this study. AMEA records the exploration experiences and exploration results of parent individuals and uses these records to guide the generation of future individuals, so that individuals adaptively adjust their exploration strategies.

In addition, according to Wang and Yu (2006), single-dimensional crossover is more effective in high-dimensional optimization. Therefore, AMEA uses single-dimensional crossover in its crossover operator.

In the simulation experiments, AMEA is run for a number of independent random trials on high-dimensional real-parameter function optimization problems. Compared with classical algorithms (Tao et al., 2010), the simulation results show that introducing memory mechanisms and adaptive features can enhance the search performance of evolutionary algorithms and improve the global convergence speed.

A NEW ADAPTIVE MEMORY EVOLUTION ALGORITHM

AMEA divides the mutation scale into a number of ranks, where a lower rank corresponds to a larger mutation scale. The algorithm sets up a matrix M to remember the parent's mutation rank and other information, which then effectively guides subsequent evolution. In addition, for single-objective optimization problems the best individual in the population should be given more opportunities to produce offspring; AMEA realizes this through a super-mutation operator, improving the accuracy of the results. Crossover is an important part of evolutionary algorithms; for high-dimensional optimization problems, individuals in the algorithm use single-dimensional crossover between two individuals to generate offspring.

In the global solution space, the rank settings of the mutation scale are shown in Table 1.

In this algorithm, low-rank mutation searches a larger area and helps the algorithm jump out of local optima; high-rank mutation searches a smaller area and yields a more accurate search.
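The exact rank-to-scale values of Table 1 are not reproduced in this scan; as a minimal sketch, the hypothetical mapping below simply halves the exploration radius with each rank, which matches the qualitative rule above (lower rank, larger scale) but is not the paper's actual parameterization:

```python
# Hypothetical rank-to-scale mapping; Table 1's actual values are not
# recoverable, only the qualitative rule (lower rank -> larger scale).
def mutation_radius(rank, search_width, num_ranks=8):
    """Return the exploration radius for a mutation rank.

    Rank 0 searches the full width; each higher rank halves the radius.
    """
    assert 0 <= rank < num_ranks
    return search_width / (2 ** rank)
```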

Flow of the algorithm: Traditional algorithms consider only the genetic effect while ignoring the evolutionary information and experience of the ancestors.

Table 1: Settings of the mutation ranks

Fig. 1: Flow chart of AMEA

Building on the preserved genetic role, this algorithm focuses on memorizing useful information from the ancestors to guide the future evolutionary process. Concretely, we set up a matrix that retains the parent's mutation rank, mutation results and mutation count on each dimension, and use this information to guide the mutation process of future generations: adaptive dimension selection, adaptive mutation-scale generation and so on (Fig. 1).

Sketch of the algorithm, initialization and ordering: The algorithm uses real-number coding and chaotic sequences to improve the distribution of the initial population in the global solution space. The first antibody group G0 represents the set of initial candidate solutions of the problem. Using chaotic sequences, the algorithm generates m antibodies (each an n-dimensional vector) in the search space, which together constitute G0 = {amn}.
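The paper does not specify which chaotic map it uses; the sketch below assumes the common logistic map x ← 4x(1-x) to spread m antibodies over a search box [lo, hi]^n:

```python
import numpy as np

def chaotic_init(m, n, lo, hi, seed=0.3141):
    """Generate an m x n initial population from a logistic chaotic sequence.

    The logistic map is an assumption; the paper only says that chaotic
    sequences are used to improve the initial distribution.
    """
    pop = np.empty((m, n))
    x = seed  # any start value that avoids the map's fixed points
    for i in range(m):
        for j in range(n):
            x = 4.0 * x * (1.0 - x)          # logistic map iteration
            pop[i, j] = lo + x * (hi - lo)   # scale into [lo, hi]
    return pop

G0 = chaotic_init(m=4, n=10, lo=-30.0, hi=30.0)  # example: 4 antibodies, 10 dimensions
```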

Initialize the memory matrix M of size m×n×3 (m individuals, n dimensions, 3 records per entry) and set all its elements to 0. The structure of the matrix is shown in Eq. 1:

$$M = \begin{pmatrix} \{M_{111}, M_{112}, M_{113}\} & \cdots & \{M_{1n1}, M_{1n2}, M_{1n3}\} \\ \vdots & \ddots & \vdots \\ \{M_{m11}, M_{m12}, M_{m13}\} & \cdots & \{M_{mn1}, M_{mn2}, M_{mn3}\} \end{pmatrix} \qquad (1)$$

Variable m represents the number of individuals in the population, n the number of dimensions, and {Mpq1, Mpq2, Mpq3} records the exploration information of antibody p on dimension q.

The algorithm calculates the affinity of each individual and sorts the individuals by affinity.

Termination condition: The maximum number of evolution generations is set as the termination condition; when the generation count reaches the set value, the algorithm terminates.

Mutation: According to the affinity ranking, AMEA calculates each individual's mutation frequency; the higher an individual ranks, the more mutation attempts it receives. For high-dimensional optimization problems, AMEA uses two-way one-dimensional mutation: each individual selects only one dimension on which the mutation happens and takes the same step in the opposite direction; finally, the best of the two new individuals and the original individual is kept. This process has two steps: (1) adaptive dimension selection, according to the current rank of each dimension recorded in matrix M and (2) adaptive mutation-scale generation, considering the rank and mutation result of the parent.
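A minimal sketch of the two-way one-dimensional mutation, assuming minimization (the function name and signature are illustrative):

```python
import numpy as np

def two_way_mutation(x, q, step, f):
    """Mutate x on dimension q in both directions and keep the best.

    x: individual (1-D numpy array), q: chosen dimension,
    step: mutation step size, f: objective function to minimize.
    """
    up, down = x.copy(), x.copy()
    up[q] += step     # step forward on dimension q
    down[q] -= step   # same step in the opposite direction
    # best of the two new individuals and the original individual
    return min((x, up, down), key=f)
```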

Matrix M remembers the previous generation's mutation information; {Mpq1, Mpq2, Mpq3} records the exploration information of antibody p on dimension q. Specifically, Mpq1 records the rank of the parent's mutation, Mpq2 records the mutation result (0 for failure, 1 for success) and Mpq3 records the parent's mutation count.

ADAPTIVE SELECTION OF DIMENSIONS

For an individual p, AMEA determines the mutation probability of each dimension q by roulette-wheel selection based on the records for that dimension in matrix M. A smaller rank (i.e., a larger scale in the last mutation) indicates a greater need for further mutation and therefore a greater probability of being selected. Specifically, the mutation probability of individual p in dimension q is given by Eq. 2 (c is equal to 80 in the algorithm):

(2)
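Eq. 2 itself is not recoverable from this scan; the sketch below therefore assumes a simple weighting of c minus the recorded rank (c = 80), so that dimensions with a smaller recorded rank get a proportionally larger selection probability, and applies standard roulette-wheel selection:

```python
import numpy as np

def select_dimension(M, p, c=80.0):
    """Roulette-wheel selection of the mutation dimension for individual p.

    The weight c - M[p, q, 0] is an assumption standing in for Eq. 2;
    a smaller recorded rank gives a higher selection probability.
    """
    ranks = M[p, :, 0]               # Mpq1: rank of the parent's mutation
    weights = c - ranks              # assumed weighting, c = 80 per the paper
    probs = weights / weights.sum()  # normalize into a distribution
    return np.random.choice(len(probs), p=probs)
```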

Adaptive mutation-scale generation (assuming antibody p mutates on dimension q):

If it is the first mutation, or the number of mutations exceeds the threshold l1 (l1 = 8 in the trials), the mutation is reset: a new mutation rank is generated at random.

If it is the second or a later mutation, two situations are distinguished (a sketch of the rank-update rule follows this list):

  • If the parent mutation succeeded, the probability of keeping the same rank is 50%, the probability of moving to a smaller rank is 25% and the probability of moving to a bigger rank is 25%
  • If the parent mutation failed, the probability of keeping the same rank is 30% and the probability of moving to a smaller rank is 70%

Of course, matrix M must be promptly recorded and updated throughout the mutation process.
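A sketch of the rank-update rule above (the clipping at the lowest and highest ranks is an assumption, since the paper does not say how edge ranks are handled):

```python
import numpy as np

def next_rank(M, p, q, num_ranks=8, l1=8):
    """Adaptively generate the mutation rank for antibody p on dimension q."""
    rank, success, count = M[p, q]
    if count == 0 or count > l1:
        # first mutation, or too many attempts: restart with a random rank
        return np.random.randint(num_ranks)
    r = np.random.rand()
    if success == 1:                          # parent mutation succeeded
        if r < 0.50:
            new = rank                        # same rank: 50%
        elif r < 0.75:
            new = rank - 1                    # smaller rank (larger scale): 25%
        else:
            new = rank + 1                    # bigger rank (smaller scale): 25%
    else:                                     # parent mutation failed
        new = rank if r < 0.30 else rank - 1  # same: 30%, smaller: 70%
    return int(np.clip(new, 0, num_ranks - 1))
```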

From the adaptive dimension selection and adaptive mutation-scale generation, the biggest difference between AMEA and traditional evolutionary algorithms can be seen: AMEA keeps the previous generation's mutation information and uses it to effectively guide the generation of the next generation. In adaptive dimension selection, because the different dimensions of a high-dimensional function evolve unevenly, a dimension whose mutation rank is still low can be assumed to lack accuracy, so its mutation probability should be increased to remove this "weak spot" in the evolution.

On the other hand, adaptive mutation-scale generation considers not only the successful exploration information of previous generations but also their failed explorations. The algorithm can therefore make the next generation's mutation more effective, strengthening the local search speed.

Finally, to accommodate functions of different complexity, the threshold l1 is set: if the mutation count exceeds the threshold, a new random mutation scale is generated to avoid falling into a local optimum.

Super-mutation: The algorithm sets up a super-mutation operator that applies a local IABEV search to the optimal antibody. Given a threshold l2, the optimal antibody undergoes l2 mutations with the same strategy (l2 = 28), while matrix M is recorded and updated at the same time. The super-mutation operator effectively improves the accuracy of the algorithm.
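A minimal sketch of the super-mutation operator, reusing the helpers sketched earlier (the search width and the improvement test are assumptions; minimization is assumed throughout):

```python
def super_mutation(best, M, p, f, l2=28):
    """Give the best antibody l2 extra mutation attempts with the same strategy."""
    for _ in range(l2):
        q = select_dimension(M, p)                       # adaptive dimension choice
        rank = next_rank(M, p, q)                        # adaptive scale generation
        step = mutation_radius(rank, search_width=60.0)  # assumed search width
        trial = two_way_mutation(best, q, step, f)
        improved = f(trial) < f(best)
        M[p, q] = (rank, 1.0 if improved else 0.0, M[p, q, 2] + 1)  # update memory
        if improved:
            best = trial
    return best
```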

Crossover: Because the crossover operator may break up fine gene blocks formed during evolution, AMEA crosses with a small probability. The algorithm sets a threshold l3 (l3 = 12) and runs the crossover operator only when the generation interval is greater than l3. One-dimensional crossover is used, that is, crossover between the best antibody and a randomly selected non-optimal antibody. If the new individual is better than the old one, the new replaces the old and the corresponding entries of matrix M are updated.
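A sketch of the one-dimensional crossover; exchanging a single randomly chosen dimension between the best antibody and a random non-optimal antibody is an assumption consistent with the description above:

```python
import numpy as np

def one_dim_cross(pop, best_idx, f):
    """Cross the best antibody with a random non-optimal one on one dimension."""
    m, n = pop.shape
    other = np.random.choice([i for i in range(m) if i != best_idx])
    q = np.random.randint(n)            # the single crossover dimension
    child = pop[other].copy()
    child[q] = pop[best_idx, q]         # inherit one gene from the best antibody
    if f(child) < f(pop[other]):        # keep the new only if better than the old
        pop[other] = child
```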

NUMERICAL EXPERIMENTS

Benchmark functions: To analyze the global search performance, convergence speed and stability of the algorithm, three benchmark functions are selected for the numerical experiments (given below in their standard forms):

Rosenbrock function:

$$F_1(x) = \sum_{i=1}^{D-1}\left[100\left(x_{i+1}-x_i^2\right)^2+\left(x_i-1\right)^2\right]$$

Griewank function:

$$F_2(x) = 1+\frac{1}{4000}\sum_{i=1}^{D}x_i^2-\prod_{i=1}^{D}\cos\left(\frac{x_i}{\sqrt{i}}\right)$$

Rastrigin function:

$$F_3(x) = \sum_{i=1}^{D}\left[x_i^2-10\cos(2\pi x_i)+10\right]$$
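For reference, the three benchmarks in their standard forms as Python functions (vectorized with NumPy):

```python
import numpy as np

def rosenbrock(x):   # F1: unimodal, global minimum 0 at (1, ..., 1)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def griewank(x):     # F2: multimodal, global minimum 0 at the origin
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

def rastrigin(x):    # F3: multimodal, global minimum 0 at the origin
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)
```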

For a fair comparison, the dimension D and the maximum number of evolution generations (20,000) are the same as in Tao et al. (2010).

Table 2: Comparison of F1 in 1000 runs (D = 10)

Table 3: Comparison of F2 in 1000 runs (D = 20)

Table 4: Comparison of F3 in 1000 runs (D = 10)

Fig. 2: F1 (D = 2)

Fig. 3: F2 (D = 2)

For F1 and F3 the population size is 4, much less than the 50 used by Tao et al. (2010); for F2 the population size is 40, slightly less than in that reference. This shows that, with the same number of iterations, AMEA needs fewer function evaluations than the comparison algorithms (Fig. 2-4).

Performance comparison on a unimodal function: Table 2 shows the results of 1000 independent runs on the Rosenbrock function. The mean and variance of AMEA (adaptive memory evolution algorithm) are much smaller than those of DPSO (dissipative particle swarm optimization), MCPSO (multi-swarm cooperative PSO) and EEPSO (PSO coordinating exploration and exploitation), indicating that the global convergence ability and stability of AMEA are better than those of the other three algorithms. The maximum is the worst result of the 1000 trials and its value is still far better than those of the comparison algorithms. The minimum is the best result of the 1000 trials; its value of 8.75E-8 clearly shows that AMEA can achieve highly accurate optimization results.

Performance comparison on multimodal functions: Tables 3 and 4 show the results of 1000 independent runs on the multimodal functions (Griewank and Rastrigin); the comparison results are taken from Tao et al. (2010).

As can be seen from Tables 3 and 4, in 1000 trials the average, maximum and minimum of AMEA all converge to the global optimal solution, performing better than the other three algorithms. The mean and variance of AMEA are much smaller than those of DPSO, MCPSO and EEPSO, indicating that its global convergence ability and stability exceed theirs. The maximum is far better than those of the comparison algorithms, and the minimum, the best result of the 1000 trials, clearly shows that AMEA can achieve highly accurate optimization results.

Fig. 4: F3 (D = 2)

This is because the algorithm introduces a memory mechanism: useful information gathered during evolution is remembered and used. In addition, combined with effective dimension-selection and mutation strategies, the algorithm speeds up the convergence rate and improves global convergence. The variance shows that the algorithm is highly stable across repeated tests.

CONCLUSION

Education and experience play an important role in an individual's growth. Inspired by this idea, this article proposes the AMEA algorithm. Introducing a memory mechanism into an evolutionary algorithm can enhance the intelligence of the algorithm, and formulating appropriate dimension-selection and mutation strategies can reduce the blindness of mutation and improve the overall performance of the algorithm. The simulation results show that, compared with other algorithms, AMEA has better global convergence, faster convergence speed, higher precision and greater stability. Therefore, for evolutionary algorithms, introducing a memory mechanism can improve the intelligence of the algorithm and yield stronger search capabilities for optimization problems.

ACKNOWLEDGMENTS

This study was supported by the Science and Technology Plan Project of the Science and Technology Department of Hunan Province (grant code: 2011FJ3047) and by the Scientific Research Fund of the Hunan Provincial Education Department (grant code: 12B021).

REFERENCES

  • Chen, L.F., G.D. Guo and Q.S. Jiang, 2010. Adaptive algorithm for soft subspace clustering. J. Software, 21: 2513-2523.


  • Guerra-Gomez, I., E. Tlelo-Cuautle, M.A. Duarte-Villasenor and C. Sanchez-Lopez, 2012. Analysis, Design and Optimization of Active Devices. In: Integrated Circuits for Analog Signal Processing, Tlelo-Cuautle, E. (Ed.). Springer, New York, USA., pp: 1-30.


  • Li, J., B.P. Yan and J. Li, 2008. Memory-effect-based local outlier detection algorithm. Comput. Eng., 34: 4-6.


  • Liu, R., J. Jia, M. Zhao and L. Jiao, 2007. An immune memory dynamic clonal strategy algorithm. Control Theory Appl., 24: 777-784.


  • Niu, Y., J. Sun, Y. Wang and L. Tao, 2009. Immune memory based quantum clone algorithm for joint estimation of 3-dimensional parameters. J. Xian Jiaotong Univ., 43: 75-79.


  • Peng, X. and X. Gao, 2010. Memory enhanced estimation of distribution algorithm in dynamic environments. Control Decis., 25: 339-345.


  • Polanco-Martagon, S., G. Reyes-Salgado, G. Flores-Becerra, I. Guerra-Gomez, E. Tlelo-Cuautle, L.G. de la Fraga and M.A. Duarte-Villasenor, 2012. Selection of MOSFET sizes by fuzzy sets intersection in the feasible solutions space. J. Applied Res. Technol., 10: 472-483.


  • Shang, R.H., L.C. Jiao, W.P. Ma and M.G. Gong, 2009. An immune memory clone algorithm for constrained multi-objective optimization. Acta Electron. Sin., 37: 1289-1294.


  • Tao, X., J. Xu, L. Yang and Y. Liu, 2010. Particle-swarm algorithm coordinating the exploration and the exploitation. Control Theory Appl., 27: 636-640.


  • Wang, X.Z. and S.Y. Yu, 2006. Improved evolution strategies for high-dimensional optimization. Control Theory Appl., 23: 148-151.


  • Yuan-Yuan, W., T. Chao-Li and H. You-Rui, 2009. Adaptive clonal selection algorithm and its simulation. Pattern Recognit. Artif. Intell., 22: 202-207.


  • Zhu, H. and Y. Wu, 2010. A PSO algorithm with high speed convergence. Control Decis., 25: 20-24.
