
Information Technology Journal

Year: 2013 | Volume: 12 | Issue: 5 | Page No.: 959-966
DOI: 10.3923/itj.2013.959.966
Influence of Jumping Rate on Opposition-based Differential Evolution Using the Current Optimum
Lihua Zhao, Qingzheng Xu and Jin Pan

Abstract: Diverse forms of opposition exist virtually everywhere, and utilizing opposite numbers to accelerate an optimization method is a relatively new idea. In this study, Differential Evolution (DE) and opposition-based differential evolution using the current optimum (COODE) are compared for different jumping rates. Experiments on 58 widely used benchmark problems show that a higher jumping rate leads to faster convergence to the global optimum but a lower success rate for most problems.


How to cite this article
Lihua Zhao, Qingzheng Xu and Jin Pan, 2013. Influence of Jumping Rate on Opposition-based Differential Evolution Using the Current Optimum. Information Technology Journal, 12: 959-966.

Keywords: Opposition-based learning, differential evolution, opposition-based differential evolution using the current optimum and jumping rate

INTRODUCTION

Examples of opposition, in one form or another, permeate most fields. The opposition concept has been called by different names, such as opposite particles in physics, absolute or relative complements in set theory, antonyms in languages and dualism in religions and philosophies. The interaction between an individual and its corresponding opposite individual is apparently fundamental for maintaining universal balance (Tizhoosh et al., 2008). In fact, the opposition concept is applied, whether consciously or unconsciously, in most of our daily affairs.

After the basic concept of Opposition-Based Learning (OBL) was originally introduced by Tizhoosh (2005), it was adopted in different soft computing areas within a very short period of time. The main idea of this scheme is the simultaneous consideration of an estimate and its corresponding opposite estimate in order to find a candidate solution closer to the global optimum. A mathematical proof was proposed by Rahnamayan et al. (2008a) to show that, in general, the opposite candidate solution is more advantageous than a second pure random one. Recently, this proof was simplified from the perspective of distance to the optimal solution and generalized to N-dimensional search spaces for black-box problems (Rahnamayan et al., 2012).

As an acceleration strategy, a novel opposition-based learning strategy using the current optimum (COOBL) was first proposed to solve function optimization problems (Xu et al., 2011a, b). What distinguishes it from opposition-based learning is that the optimum of the current generation, rather than the geometric center of each variable's initial range or current interval, serves as the symmetry point between the current point and its corresponding opposite point. Unexpectedly, this simple modification can lead to a high usage rate of the opposite population, especially in the later stage of population evolution. Experimental results clearly confirmed the contribution of the opposite point using the current optimum to accelerating the algorithm, an effect that was not reproducible by a uniformly generated random point.
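To make the distinction concrete, the minimal sketch below contrasts the classical OBL opposite point with the COOBL opposite point using the current optimum. It is reconstructed from the description above; the boundary handling (clipping reflected points back into [a, b]) is an assumption of this sketch, not a detail stated in the source.

```python
import numpy as np

def obl_opposite(x, a, b):
    # Classical OBL: reflect x through the midpoint of [a, b].
    return a + b - x

def coobl_opposite(x, x_opt, a, b):
    # COOBL: reflect x through the current optimum x_opt instead of
    # the interval midpoint; components falling outside [a, b] are
    # clipped back (an assumption of this sketch).
    return np.clip(2.0 * x_opt - x, a, b)
```

When the current optimum happens to coincide with the interval midpoint, the two definitions agree; as the search progresses, COOBL reflects the population about the best solution found so far rather than about a fixed geometric center.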

The performance of a population-based heuristic algorithm depends heavily on its control parameters. Generally speaking, different parameter settings lead to different performance. Three algorithms were compared for different problem dimensions and different population sizes (Xu et al., 2012): Differential Evolution (DE), opposition-based differential evolution using the current optimum (COODE) and an improved version of opposition-based differential evolution (IODE). Experimental results show that COODE performs better than DE for larger population sizes, which are usually required for more complex and high-dimensional optimization problems. In this study, the effect of the jumping rate on the speedup of COODE is investigated thoroughly.

RECENT ADVANCES IN OPPOSITION-BASED LEARNING

While it was originally introduced to accelerate the convergence rate of standard differential evolution, opposition-based learning is universal in the sense that it does not depend on the specific algorithm used. The general OBL idea has been successfully applied to reinforcement learning (Tizhoosh, 2005, 2006), genetic algorithms (Tizhoosh, 2005; Lin and Wang, 2010), differential evolution (Rahnamayan et al., 2006a, 2008b), artificial neural networks (Ventresca and Tizhoosh, 2006, 2007a), ant colony optimization (Montgomery and Randall, 2002; Malisia and Tizhoosh, 2007), window memoization (Khalvati et al., 2007), simulated annealing (Ventresca and Tizhoosh, 2007b), particle swarm optimization (Han and He, 2007; Wang et al., 2007), fuzzy sets (Tizhoosh, 2009; Al-Qunaieer et al., 2010) and biogeography-based optimization (BBO) (Ergezer et al., 2009; Bhattacharyaa and Chattopadhyay, 2010).

To obtain a greater chance of being close to the global optimum, Quasi-Opposition-Based Learning (QOBL) was proposed by Rahnamayan and has been successfully used in DE (Rahnamayan et al., 2007), PSO (Zhang et al., 2009; Tang and Zhao, 2009) and BBO (Ergezer et al., 2009). Mathematically speaking, a quasi-opposite point is a uniformly generated random point between the middle point and the opposite point. Furthermore, based on the concept of OBL, Wang proposed generalized OBL (GOBL) (Wang et al., 2009a); by simultaneously evaluating solutions in the current search space and a transformed space, it provides a greater chance of finding better solutions (Wang et al., 2011; Wang et al., 2009b). Most of these methods are used to solve numerical optimization problems, such as large-scale optimization (Wang et al., 2011; Wang et al., 2009b; Rahnamayan and Wang, 2008), constrained optimization (Omran and Salman, 2009; Omran, 2010), optimization of noisy problems (Han and He, 2007; Rahnamayan et al., 2006b) and multi-objective optimization (Peng et al., 2008; Dong and Wang, 2009). In addition, there are many other possible uses and application domains for the opposition concept: the traveling salesman problem (Malisia and Tizhoosh, 2007; Ventresca and Tizhoosh, 2008), data mining (Kanzawa et al., 2007; Rashid and Baig, 2010; Ventresca and Tizhoosh, 2009), nonlinear system identification (Subudhi and Jena, 2011), chess evaluation function tuning (Boskovic et al., 2011) and image processing and understanding (Khalvati et al., 2007; Al-Qunaieer et al., 2010; Sahba et al., 2007; Rahnamayan and Tizhoosh, 2008; Tizhoosh, 2009; Sahba and Tizhoosh, 2008).
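Following the geometric definition just given, a quasi-opposite point can be sketched as below: per-dimension uniform sampling between the interval midpoint and the classical opposite point. This is a minimal illustration of the definition, not the authors' code.

```python
import numpy as np

def opposite(x, a, b):
    # Classical OBL opposite point.
    return a + b - x

def quasi_opposite(x, a, b, rng=None):
    # QOBL: per dimension, sample uniformly between the interval
    # midpoint (a + b) / 2 and the opposite point a + b - x.
    rng = rng if rng is not None else np.random.default_rng()
    mid = (a + b) / 2.0
    opp = a + b - x
    lo = np.minimum(mid, opp)
    hi = np.maximum(mid, opp)
    return rng.uniform(lo, hi)
```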

Compared with the successful application of classical and extended oppositional concepts to numerical optimization and real-world problems, theoretical results remain scarce and are worthy of further investigation. As the experience of other intelligent computing techniques shows, purely random selection of solutions from a given population risks visiting, or even revisiting, unproductive regions of the search space. Mathematical and experimental proofs have shown that, in general, opposite numbers increase population diversity and are more likely to be closer to the optimal solution than independent random ones (Rahnamayan et al., 2008a; Ventresca and Tizhoosh, 2008). Rahnamayan et al. (2007) proved that a quasi-opposite point is more likely than the opposite point to be closer to the global optimum, and Ergezer et al. (2009) quantified how much better quasi-opposition is than opposition. Generally speaking, quasi-opposite points lie closer to the center point than opposite points do. In this way, QOBL provides promising evidence in support of the center-based sampling theory proposed by Rahnamayan and Wang (2009).

PROPER SETTING OF JUMPING RATE IN COODE

Opposition-based differential evolution using the current optimum: The original DE is chosen as the parent algorithm, and the COODE studied in this paper is exactly the same as the ODE described by Rahnamayan et al. (2008b), except that opposition-based learning is replaced by opposition-based learning using the current optimum.

Corresponding pseudo-code for the proposed approach, COODE, is shown in Fig. 1, where:

P0 = initial population; COOP0 = opposite of the initial population after applying COOBL
P = current population; COOP = opposite population after applying COOBL
V = noise vector; U = trial vector
Pi = ith individual in P; COOPi = ith individual in COOP
[aj, bj] = range of the jth variable
BFV = best fitness value so far; VTR = value to reach
NFC = number of function calls; MAXNFC = maximum number of function calls
F = mutation constant; Cr = crossover rate
rand(0, 1) = uniformly generated random number
f(·) = objective function
P' = population of the next generation
Jr = jumping rate; Np = population size; D = dimension size
x0coj, xcoj = optimum value of the jth variable in the initial and current population, respectively

Fig. 1: Pseudo code for COODE

Steps 12-31 are classical DE/rand/1/bin; steps 1-11 and 32-44 implement opposition-based initialization and generation jumping using the current optimum, respectively; and steps 6-8 and 38-40 (highlighted in boldface) are where COODE differs from ODE.
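Since the pseudo-code of Fig. 1 is not reproduced here, the following Python sketch outlines the structure those steps describe: DE/rand/1/bin wrapped with opposition-based initialization and generation jumping, both using the current optimum. It follows the notation of the text (Np, F, Cr, Jr, VTR, MAXNFC); details the text leaves open, such as boundary handling, are assumptions.

```python
import numpy as np

def coode(f, a, b, np_=100, F=0.5, Cr=0.9, Jr=0.3,
          max_nfc=10**6, vtr=1e-8, rng=None):
    """Sketch of COODE: DE/rand/1/bin plus opposition-based
    initialization and generation jumping using the current optimum.
    a, b are per-dimension lower/upper bound arrays."""
    rng = rng if rng is not None else np.random.default_rng()
    d = len(a)
    nfc = 0

    def evaluate(pop):
        nonlocal nfc
        nfc += len(pop)
        return np.array([f(x) for x in pop])

    def jump(P, fit):
        # Reflect the population through the current optimum (COOBL);
        # clipping to [a, b] is an assumption of this sketch.
        x_opt = P[fit.argmin()]
        COOP = np.clip(2.0 * x_opt - P, a, b)
        merged = np.vstack([P, COOP])
        mfit = np.concatenate([fit, evaluate(COOP)])
        best = mfit.argsort()[:len(P)]          # keep the Np fittest
        return merged[best], mfit[best]

    # Opposition-based population initialization (steps 1-11).
    P = rng.uniform(a, b, size=(np_, d))
    P, fit = jump(P, evaluate(P))

    while fit.min() > vtr and nfc < max_nfc:
        # Classical DE/rand/1/bin generation (steps 12-31).
        newP = P.copy()
        for i in range(np_):
            others = [j for j in range(np_) if j != i]
            r1, r2, r3 = rng.choice(others, size=3, replace=False)
            v = P[r1] + F * (P[r2] - P[r3])          # noise vector V
            mask = rng.random(d) < Cr
            mask[rng.integers(d)] = True             # force one gene from V
            u = np.clip(np.where(mask, v, P[i]), a, b)  # trial vector U
            fu, nfc = f(u), nfc + 1
            if fu <= fit[i]:                         # greedy selection
                newP[i], fit[i] = u, fu
        P = newP
        # Opposition-based generation jumping (steps 32-44).
        if rng.random() < Jr:
            P, fit = jump(P, fit)

    i_best = fit.argmin()
    return P[i_best], fit[i_best], nfc
```

Note that every jump costs Np extra function evaluations, which is the trade-off the jumping rate controls: more frequent jumps can accelerate convergence but consume the evaluation budget faster.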

Experimental setup: A comprehensive set of numerical benchmark functions, comprising 58 well-known global optimization problems (Rahnamayan et al., 2008b), has been employed for performance verification. The test suite includes unimodal as well as highly multimodal minimization problems. The dimensionality of the problems varies from 2 to 30 to cover a wide range of problem complexity.

For all conducted experiments, the parameter settings are as follows. These values were chosen according to the settings reported in the previous literature (Rahnamayan et al., 2008b), so no new attempt was made to obtain better values for them. The termination criterion is to find a value smaller than the Value-To-Reach (VTR) before reaching the maximum Number of Function Calls (NFC):

Population size, Np = 100
Differential amplification factor, F = 0.5
Crossover probability constant, Cr = 0.9
Jumping rate constant, Jr = 0.3
Mutation strategy: DE/rand/1/bin (classic version of DE)
Maximum NFC, MAXNFC = 10^6
Value to reach, VTR = 10^-8

Convergence speed is compared by measuring the number of function calls, which is the most commonly used metric in the literature: a smaller NFC means a higher convergence speed. In order to minimize the effect of the stochastic nature of the algorithms on the measured metric, the reported NFC for each function is the average over 100 trials.

The number of times the algorithm successfully reaches the VTR for each test function is measured as the Success Rate (SR):

SR = (number of trials reaching VTR) / (total number of trials)    (1)
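Read as code, Eq. 1 together with the NFC averaging described above amounts to the following (hypothetical helper names; each trial is assumed to report its best fitness value and its NFC):

```python
def success_rate(best_values, vtr=1e-8):
    # Eq. 1: fraction of the trials that reached the value-to-reach.
    return sum(bfv <= vtr for bfv in best_values) / len(best_values)

def average_nfc(nfc_per_trial):
    # Reported NFC: mean over the trials (100 per function in the text).
    return sum(nfc_per_trial) / len(nfc_per_trial)
```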

Experiment results: Here, a time-varying jumping rate model is investigated. According to this model, the jumping rate increases or decreases linearly during the evolution based on the number of function evaluations. To support as fair a comparison as possible among the jumping rate settings, the time-varying settings are constructed so that their average jumping rate equals the constant baseline of Jr = 0.3. The four settings for the current investigation are: Jr (constant) = 0.3, Jr (JrUp) = 0.6 × NFC/MAXNFC, Jr (JrDown) = 0.6 × (1 - NFC/MAXNFC) and Jr (constant) = 0.6, where MAXNFC and NFC are the maximum number of function calls and the current number of function calls, respectively. Figure 2 shows the corresponding diagrams (jumping rate vs. NFC) for the four settings. Jr (JrDown) represents a higher jumping rate during exploration and a lower jumping rate during exploitation (tuning); Jr (JrUp) behaves in exactly the reverse manner.

Fig. 2: Jumping rate vs. NFC
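The four schedules can be written compactly as follows; the linear forms for JrUp and JrDown are reconstructed from the description above (linear in NFC and averaging 0.3 over a full run), so they should be read as an assumption rather than the paper's exact expressions.

```python
def jumping_rate(setting, nfc, max_nfc, jr_max=0.6):
    # Four settings compared in the text; JrUp/JrDown are linear in the
    # current NFC and average jr_max / 2 = 0.3 over a complete run
    # (reconstructed forms -- an assumption of this sketch).
    if setting == "const_low":
        return 0.3
    if setting == "const_high":
        return 0.6
    if setting == "JrUp":        # low while exploring, high while tuning
        return jr_max * nfc / max_nfc
    if setting == "JrDown":      # high while exploring, low while tuning
        return jr_max * (1.0 - nfc / max_nfc)
    raise ValueError(f"unknown setting: {setting}")
```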

With these settings, the effect of generation jumping in COODE during the optimization process is investigated thoroughly.

Results of applying DE, COODE (Jr = 0.3), COODE (JrUp), COODE (JrDown) and COODE (Jr = 0.6) to the 58 test problems are given in Table 1. The best success performance for each function is highlighted in boldface. The last rows of the table present the total NFC and the average SR. With respect to the total NFC over the 58 problems, the algorithms rank as COODE (Jr = 0.3) (best), COODE (Jr = 0.6), COODE (JrDown), COODE (JrUp) and DE. COODE (JrDown) presents the lowest average success rate (0.78), while DE and COODE (JrUp) show the highest one (0.88).

A pairwise comparison of these algorithms is presented in Table 2. The number in each cell shows on how many functions the algorithm in the table's row outperforms the algorithm in the table's column. The last column shows the total, i.e., the number of cases in which the algorithm outperforms its competitors. Comparing these totals yields the following ranking: COODE (Jr = 0.6) (best), COODE (JrDown), COODE (Jr = 0.3), COODE (JrUp) and DE. The results for Jr = 0.3 and Jr = 0.6 confirm that a constant higher jumping rate reduces the overall success rate.

According to Tables 1 and 2, the performance of COODE (JrDown) is similar to that of COODE (Jr = 0.6) in terms of both NFC and SR. As seen above, the number of function calls for most benchmark functions is negligible compared with the pre-determined MAXNFC of 10^6. Consequently, as Fig. 2 shows, the jumping rate of COODE (JrDown) changes only marginally during the evolution process. Since all control parameters except the jumping rate are identical between COODE (JrDown) and COODE (Jr = 0.6), they exhibit similar results.

Table 1: Comparison of DE, COODE (Jr = 0.3), COODE (JrUp), COODE (JrDown) and COODE (Jr = 0.6)
DE: Differential evolution, COODE: Opposition-based differential evolution using the current optimum, NFC: No. of function calls and SR: Success rate

For the same reason, there is no difference between the experimental results of COODE (JrUp) and DE. It is worth noting that COODE degenerates to DE when the jumping rate equals zero.

Table 2: Pair comparison of DE, COODE (Jr = 0.3), COODE (JrUp), COODE (JrDown) and COODE (Jr = 0.6)
DE: Differential evolution, COODE: Opposition-based differential evolution using the current optimum

CONCLUSION

Utilizing opposite points to accelerate an optimization method is a relatively new idea. In this paper, the influence of the jumping rate was studied. Experimental results confirm that a higher jumping rate leads to faster convergence to the global optimum but a lower success rate for most problems. Future work should apply the COOBL scheme in different fields of soft computing in order to enhance evolutionary algorithms, artificial neural networks, ant colony algorithms, particle swarm optimization, etc.

ACKNOWLEDGMENT

This study was supported by the National Natural Science Foundation of China (Nos. 61001202 and 61003199).

REFERENCES

  • Al-Qunaieer, F.S., H.R. Tizhoosh and S. Rahnamayan, 2010. Oppositional fuzzy image thresholding. Proceedings of the IEEE International Conference on Fuzzy Systems, July 18-23, 2010, Barcelona, Spain, pp: 1-7.


  • Bhattacharyaa, A. and P.K. Chattopadhyay, 2010. Solution of economic power dispatch problems using oppositional biogeography-based optimization. Electr. Power Components Syst., 38: 1139-1160.


  • Boskovic, B., J. Brest, A. Zamuda, S. Greiner and V. Zumer, 2011. History mechanism supported differential evolution for chess evaluation function tuning. Soft Comput., 15: 667-683.


  • Dong, N. and Y.P. Wang, 2009. Multiobjective differential evolution based on opposite operation. Proceedings of the International Conference on Computational Intelligence and Security, December 11-14, 2009, Beijing, China, pp: 123-127.


  • Ergezer, M., D. Simon and D.W. Du, 2009. Oppositional biogeography-based optimization. Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, October 11-14, 2009, San Antonio, TX., USA., pp: 1009-1014.


  • Han, L. and X.S. He, 2007. A novel opposition-based particle swarm optimization for noisy problems. Proceedings of the 3rd International Conference on Natural Computation, August 24-27, 2007, Haikou, China, pp: 624-629.


  • Kanzawa, Y., Y. Endo and S. Miyamoto, 2007. Fuzzy c-means algorithms for data with tolerance based on opposite criterions. IEICE Trans. Fundamen. Electron. Commun. Comput. Sci., E90: 2194-2202.


  • Khalvati, F., H.R. Tizhoosh and M.D. Aagaard, 2007. Opposition-based window memoization for morphological algorithms. Proceedings of the IEEE Symposium on Computational Intelligence in Image and Signal Processing, April 1-5, 2007, Honolulu, USA., pp: 425-430.


  • Lin, Z.Y. and L.L. Wang, 2010. A new opposition-based compact genetic algorithm with fluctuation. J. Comput. Inform. Syst., 6: 897-904.


  • Malisia, A.R. and H.R. Tizhoosh, 2007. Applying opposition-based ideas to the ant colony system. Proceedings of the IEEE Swarm Intelligence Symposium, April 1-5, 2007, Honolulu, USA., pp: 182-189.


  • Montgomery, J. and M. Randall, 2002. Anti-pheromone as a tool for better exploration of search space. Proceedings of the 3rd International Workshop on Ant Algorithms, September 2002, Brussels, Belgium, pp: 100-110.


  • Omran, M.G.H., 2010. CODEQ: An effective metaheuristic for continuous global optimization. Int. J. Metaheuristics, 1: 108-131.


  • Omran, M.G.H. and A. Salman, 2009. Constrained optimization using CODEQ. Chaos Solitons Fractals, 42: 662-668.


  • Peng, L., Y.Z. Wang and G.M. Dai, 2008. A novel opposition-based multi-objective differential evolution algorithm for multi-objective optimization. Adv. Comput. Intell., 5370: 162-170.


  • Rahnamayan, S. and H.R. Tizhoosh, 2008. Image thresholding using micro opposition-based differential evolution (Micro-ODE). Proceedings of the IEEE Congress on Evolutionary Computation, June 1-6, 2008, Hong Kong, China, pp: 1409-1416.


  • Rahnamayan, S., H.R. Tizhoosh and M.M.A. Salama, 2006. Opposition-based differential evolution algorithms. Proceedings of the IEEE Congress on Evolutionary Computation, July 16-21, 2006, Vancouver, Canada, pp: 2010-2017.


  • Rahnamayan, S., H.R. Tizhoosh and M.M.A. Salama, 2006. Opposition-based differential evolution for optimization of noisy problems. Proceedings of the IEEE Congress on Evolutionary Computation, July 16-21, 2006, Vancouver, Canada, pp: 1865-1872.


  • Rahnamayan, S., H.R. Tizhoosh and M.M.A. Salama, 2007. Quasi-oppositional differential evolution. Proceedings of the IEEE Congress on Evolutionary Computation, September 25-28, 2007, Singapore, pp: 2229-2236.


  • Rahnamayan, S., H.R. Tizhoosh and M.M.A. Salama, 2008. Opposition versus randomness in soft computing techniques. Applied Soft Comput., 8: 906-918.


  • Rahnamayan, S., H.R. Tizhoosh and M.M.A. Salama, 2008. Opposition-based differential evolution. IEEE Trans. Evol. Comput., 12: 64-79.


  • Rahnamayan, S. and G.G. Wang, 2008. Solving large scale optimization problems by Opposition-based Differential Evolution (ODE). WSEAS Trans. Comput., 7: 1792-1804.


  • Rahnamayan, S. and G.G. Wang, 2009. Center-based sampling for population-based algorithms. Proceedings of the IEEE Congress on Evolutionary Computation, May 18-21, 2009, Trondheim, Norway, pp: 933-938.


  • Rahnamayan, S., G.G. Wang and M. Ventresca, 2012. An intuitive distance-based explanation of opposition-based sampling. Applied Soft Comput., 12: 2828-2839.


  • Rashid, M. and A.R. Baig, 2010. Improved opposition-based PSO for feedforward neural network training. Proceedings of the International Conference on Information Science and Applications, April 21-23, 2010, Seoul, Korea, pp: 1-6.


  • Sahba, F. and H.R. Tizhoosh, 2008. Opposite Actions in Reinforced Image Segmentation. In: Studies in Computational Intelligence: Oppositional Concepts in Computational Intelligence, Tizhoosh, H.R. and M. Ventresca (Eds.). Springer, USA., pp: 287-297


  • Sahba, F., H.R. Tizhoosh and M.M.M.A. Salama, 2007. Application of opposition-based reinforcement learning in image segmentation. Proceedings of the IEEE Symposium on Computational Intelligence in Image and Signal Processing, April 1-5, 2007, Honolulu, USA., pp: 246-251.


  • Subudhi, B. and D. Jena, 2011. A differential evolution based neural network approach to nonlinear system identification. Applied Soft Comput., 11: 861-871.


  • Tang, J. and X.J. Zhao, 2009. An enhanced opposition-based particle swarm optimization. Proceedings of the 2009 WRI Global Congress on Intelligent Systems, May 19-21, 2009, Xiamen, China, pp: 149-153.


  • Tizhoosh, H.R., 2005. Opposition-based learning: A new scheme for machine intelligence. Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce, Volume 1, November 28-30, 2005, Vienna, Austria, pp: 695-701.


  • Tizhoosh, H.R., 2006. Opposition-based reinforcement learning. J. Adv. Comput. Intell. Intell. Inform., 10: 578-585.


  • Tizhoosh, H.R., 2009. Opposite fuzzy sets with applications in image processing. Proceedings of the International Fuzzy Systems Association World Congress and 2009 European Society of Fuzzy Logic and Technology Conference, July 20-24, 2009, Aix-Les-Bains, France, pp: 36-41.


  • Tizhoosh, H.R., M. Ventresca and S. Rahnamayan, 2008. Opposition-Based Computing. In: Studies in Computational Intelligence: Oppositional Concepts in Computational Intelligence, Tizhoosh, H.R. and M. Ventresca (Eds.). Springer, USA., pp: 11-28


  • Ventresca, M. and H.R. Tizhoosh, 2006. Improving the convergence of backpropagation by opposite transfer functions. Proceedings of the International Joint Conference on Neural Networks, July 16-21, 2006, Vancouver, Canada, pp: 4777-4784.


  • Ventresca, M. and H.R. Tizhoosh, 2007. Opposite transfer functions and backpropagation through time. Proceedings of the IEEE Symposium on Foundations of Computational Intelligence, April 1-5, 2007 Honolulu, USA., pp: 570-577.


  • Ventresca, M. and H.R. Tizhoosh, 2007. Simulated annealing with opposite neighbors. Proceedings of the IEEE Symposium on Foundations of Computational Intelligence, April 1-5, 2007, Honolulu, USA., pp: 186-192.


  • Ventresca, M. and H.R. Tizhoosh, 2008. A diversity maintaining population-based incremental learning algorithm. Inform. Sci., 178: 4038-4056.


  • Ventresca, M. and H.R. Tizhoosh, 2009. Improving gradient-based learning algorithms for large scale feedforward networks. Proceedings of the International Joint Conference on Neural Networks, June 4-19, 2009, Atlanta, USA., pp: 3212-3219.


  • Wang, H., Y. Liu, S.Y. Zeng, H. Li and C.H. Li, 2007. Opposition-based particle swarm algorithm with cauchy mutation. Proceedings of the IEEE Congress on Evolutionary Computation, September 25-28, 2007, Singapore, pp: 4750-4756.


  • Wang, H., Z.J. Wu, Y. Liu, J. Wang, D.Z. Jiang and L.L. Chen, 2009. Space transformation search: A new evolutionary technique. Proceedings of the ACM/SIGEVO Summit on Genetic and Evolutionary Computation, (GEC '09), Shanghai, China, pp: 537-544.


  • Wang, H., Z.J. Wu and S. Rahnamayan, 2011. Enhanced opposition-based differential evolution for solving high-dimensional continuous optimization problems. Soft Comput., 15: 2127-2140.


  • Wang, H., Z.J. Wu, S. Rahnamayan and L.S. Kang, 2009. A scalability test for accelerated de using generalized opposition-based learning. Proceedings of the 9th International Conference on Intelligent Systems Design and Applications, November 30-December 2, 2009, Pisa, Italy, pp: 1090-1095.


  • Xu, Q., L. Wang, B. He and N. Wang, 2011. Modified opposition-based differential evolution for function optimization. J. Computat. Inf. Syst., 7: 1582-1591.


  • Xu, Q.Z., L. Wang, B.M. He and N. Wang, 2011. Opposition-based differential evolution using the current optimum for function optimization. J. Applied Sci., 29: 308-315.


  • Xu, Q.Z., N. Wang and R. Fei, 2012. Influence of dimensionality and population size on opposition-based differential evolution using the current optimum. Inform. Technol. J.


  • Zhang, C., Z.W. Ni, Z.J. Wu and L.C. Gu, 2009. A novel swarm model with quasi-oppositional particle. Proceedings of the International Forum on Information Technology and Applications, May 15-17, 2009, Chengdu, China, pp: 325-330.
