Abstract: Diverse forms of opposition already exist virtually everywhere and utilizing opposite numbers to accelerate an optimization method is a relatively new idea. In this study, Differential Evolution (DE) and opposition-based differential evolution using the current optimum (COODE) are compared for different jumping rates. Experiments on 58 widely used benchmark problems show that a higher jumping rate leads to faster convergence to the global optimum but a smaller success rate for most problems.
INTRODUCTION
Many examples of opposition permeate most fields around the world in some form or another. The opposition concept has been called by different names, such as opposite particles in physics, absolute or relative complement in set theory, antonyms in language and dualism in religions and philosophies. The interaction between an individual and its corresponding opposite is apparently fundamental for maintaining universal balance (Tizhoosh et al., 2008). In fact, the opposition concept is applied, whether consciously or unconsciously, in most of our daily affairs.
After the basic concept of Opposition-Based Learning (OBL) was originally introduced by Tizhoosh (2005), it was, in a very short period of time, utilized in different soft computing areas. The main idea is the simultaneous consideration of an estimate and its corresponding opposite estimate in order to find a candidate solution that is closer to the global optimum. A mathematical proof was proposed by Rahnamayan et al. (2008a) showing that, in general, an opposite candidate solution is more advantageous than a second purely random one. Recently, this proof was simplified from the perspective of distance to the optimal solution and generalized to N-dimensional search spaces for black-box problems (Rahnamayan et al., 2012).
As an acceleration strategy, a novel opposition-based learning strategy using the current optimum (COOBL) was first proposed to solve function optimization problems (Xu et al., 2011a, b). What distinguishes it from classical opposition-based learning is that the optimum of the current generation, rather than the geometric center of the variables' initial range or current interval, serves as the symmetry point between the current point and the corresponding opposite point. Unexpectedly, this simple modification leads to a high rate of opposite-population usage, especially in the later stage of population evolution. The contribution of the opposite point computed through the current optimum to accelerating the algorithm was clearly confirmed by experimental results and was not reproducible by a uniformly generated random point.
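Following the description above, a COOBL opposite can be sketched as a reflection of the current point through the current optimum rather than through the interval center. The out-of-bounds reset rule below (uniform resampling in the box) is an assumption for illustration, not a detail taken from the cited papers:

```python
import random

def coobl_opposite(x, x_opt, a, b):
    """Opposite of x reflected through the current optimum x_opt (COOBL sketch).

    If the reflected point leaves [a, b], it is resampled uniformly in the box;
    this reset rule is an assumption made for this illustration.
    """
    x_opp = 2.0 * x_opt - x
    if x_opp < a or x_opp > b:
        x_opp = a + random.random() * (b - a)
    return x_opp

# The reflection is exact whenever it stays inside the box:
print(coobl_opposite(1.0, 4.0, 0.0, 10.0))  # 2*4 - 1 = 7.0
```

Note how, as the current optimum approaches the true optimum, reflected points cluster around it, which is consistent with the high rate of opposite-population usage reported in the later evolution stages.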
The performance of a population-based heuristic algorithm highly depends on its control parameters; generally speaking, different parameter settings lead to different performances. Three algorithms were compared for different problem dimensions and different population sizes (Xu et al., 2012): Differential Evolution (DE), opposition-based differential evolution using the current optimum (COODE) and an improved version of opposition-based differential evolution (IODE). The experimental results show that COODE performs better than DE for larger population sizes, which are usually required for more complex and high-dimensional optimization problems. In this study, the effects of the jumping rate on the speedup of COODE are investigated thoroughly.
RECENT ADVANCES IN OPPOSITION-BASED LEARNING
Although it was originally introduced to accelerate the convergence rate of standard differential evolution, opposition-based learning is universal in the sense that it does not depend on the specific algorithm used. The general OBL idea has been successfully applied to reinforcement learning (Tizhoosh, 2005, 2006), genetic algorithms (Tizhoosh, 2005; Lin and Wang, 2010), differential evolution (Rahnamayan et al., 2006a, 2008b), artificial neural networks (Ventresca and Tizhoosh, 2006, 2007a), ant colony optimization (Montgomery and Randall, 2002; Malisia and Tizhoosh, 2007), window memorization (Khalvati et al., 2007), simulated annealing (Ventresca and Tizhoosh, 2007b), particle swarm optimization (Han and He, 2007; Wang et al., 2007), fuzzy sets (Tizhoosh, 2009; Al-Qunaieer et al., 2010) and biogeography-based optimization (BBO) (Ergezer et al., 2009; Bhattacharyaa and Chattopadhyay, 2010).
To obtain a better chance of getting close to the global optimum, Quasi-Opposition-Based Learning (QOBL) was proposed by Rahnamayan and successfully used in DE (Rahnamayan et al., 2007), PSO (Zhang et al., 2009; Tang and Zhao, 2009) and BBO (Ergezer et al., 2009). Mathematically speaking, a quasi-opposite point is a uniformly generated random point between the middle point and the opposite point. Moreover, based on the concept of OBL, Wang proposed a generalized OBL (GOBL) (Wang et al., 2009a). By simultaneously evaluating solutions in the current search space and a transformed space, it provides a better chance of finding improved solutions (Wang et al., 2011; Wang et al., 2009b). Most of these methods are utilized to solve numerical optimization problems, such as large-scale optimization (Wang et al., 2011; Wang et al., 2009b; Rahnamayan and Wang, 2008), constrained optimization (Omran and Salman, 2009; Omran, 2010), optimization of noisy problems (Han and He, 2007; Rahnamayan et al., 2006b) and multi-objective optimization (Peng et al., 2008; Dong and Wang, 2009). In addition, there are many possible uses and application domains of the opposition concept: the traveling salesman problem (Malisia and Tizhoosh, 2007; Ventresca and Tizhoosh, 2008), data mining (Kanzawa et al., 2007; Rashid and Baig, 2010; Ventresca and Tizhoosh, 2009), nonlinear system identification (Subudhi and Jena, 2011), chess evaluation function tuning (Boskovic et al., 2011) and image processing and understanding (Khalvati et al., 2007; Al-Qunaieer et al., 2010; Sahba et al., 2007; Rahnamayan and Tizhoosh, 2008; Tizhoosh, 2009; Sahba and Tizhoosh, 2008).
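The quasi-opposite point defined above can be sketched directly from its definition: a uniform random point between the interval middle point m = (a + b)/2 and the classical opposite a + b − x:

```python
import random

def quasi_opposite(x, a, b):
    """Quasi-opposite of x in [a, b]: a uniformly generated random point
    between the middle point m = (a + b)/2 and the opposite a + b - x."""
    m = (a + b) / 2.0
    x_opp = a + b - x
    lo, hi = min(m, x_opp), max(m, x_opp)
    return lo + random.random() * (hi - lo)

q = quasi_opposite(1.0, 0.0, 10.0)  # lies between 5.0 (middle) and 9.0 (opposite)
print(q)
```

Because the sampled point always lies on the middle-point side of the opposite, quasi-opposite points are, on average, closer to the center of the interval, which connects to the center-based sampling discussion below.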
Compared with the successful application of classical and extended oppositional concepts to numerical optimization and real-world problems, theoretical results are scarce and worthy of further investigation. From experience with other intelligent computing technologies, purely random selection of solutions from a given population runs the risk of visiting, or even revisiting, unproductive regions of the search space. Mathematical and experimental proofs have shown that, in general, opposite numbers can increase population diversity and are more likely to be closer to the optimal solution than independent random ones (Rahnamayan et al., 2008a; Ventresca and Tizhoosh, 2008). Rahnamayan et al. (2007) proved that a quasi-opposite point is more likely than the opposite point to be closer to the global optimum and Ergezer et al. (2009) further quantified how much better quasi-opposition is than opposition. Generally speaking, quasi-opposite points are closer to the center point than opposite points. In this way, QOBL can serve as promising evidence supporting the center-based sampling theory proposed by Rahnamayan and Wang (2009).
PROPER SETTING OF JUMPING RATE IN COODE
Opposition-based differential evolution using the current optimum: The original DE is chosen as the parent algorithm and the COODE studied in this paper is exactly the same as the ODE described by Rahnamayan et al. (2008b), except that opposition-based learning is replaced by opposition-based learning using the current optimum.
Corresponding pseudo-code for the proposed approach, COODE, is shown in Fig. 1, where P0 is the initial population, COOP0 is the opposite of the initial population after applying COOBL, P is the current population, COOP is the opposite population after applying COOBL, V is the noise vector, U is the trial vector, Pi is the ith individual in P, COOPi is the ith individual in COOP, [aj, bj] is the range of the jth variable, BFV is the best fitness value so far, VTR is the value to reach, NFC is the number of function calls, MAXNFC is the maximum number of function calls, F is the mutation constant, rand(0, 1) is a uniformly generated random number in (0, 1), Cr is the crossover rate, f(·) is the objective function, P′ is the population of the next generation, Jr is the jumping rate, Np is the population size, D is the dimension size, x0coj is the optimum value of the jth variable in the initial population and xcoj is the optimum value of the jth variable in the current population.
Fig. 1: | Pseudo code for COODE |
Steps 12-31 are classical DE/rand/1/bin; steps 1-11 and 32-44 implement opposition-based initialization and generation jumping using the current optimum, respectively; and steps 6-8 and 38-40 (highlighted in boldface) constitute the difference between COODE and ODE.
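The structure described above can be sketched in Python. This is a simplified reading of the pseudo-code, not the authors' implementation: the out-of-bounds clamping rule and the greedy merge of the population with its opposite population are assumptions made for illustration.

```python
import random

def coode(f, bounds, np_=100, F=0.5, Cr=0.9, Jr=0.3, max_nfc=10**6, vtr=1e-8):
    """Sketch of COODE: DE/rand/1/bin plus opposition-based generation
    jumping reflected through the current best individual (COOBL)."""
    d = len(bounds)
    pop = [[a + random.random() * (b - a) for a, b in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    nfc = np_
    while min(fit) > vtr and nfc < max_nfc:
        # Classical DE/rand/1/bin generation (steps 12-31 in Fig. 1).
        for i in range(np_):
            r1, r2, r3 = random.sample([k for k in range(np_) if k != i], 3)
            jrand = random.randrange(d)
            trial = list(pop[i])
            for j in range(d):
                if random.random() < Cr or j == jrand:
                    trial[j] = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                    a, b = bounds[j]
                    trial[j] = min(max(trial[j], a), b)  # assumed clamping rule
            ft = f(trial)
            nfc += 1
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
        # Generation jumping via the current optimum (steps 32-44 in Fig. 1).
        if random.random() < Jr:
            best = pop[fit.index(min(fit))]
            opp = []
            for x in pop:
                o = [2.0 * best[j] - x[j] for j in range(d)]
                o = [min(max(o[j], bounds[j][0]), bounds[j][1]) for j in range(d)]
                opp.append(o)
            fopp = [f(o) for o in opp]
            nfc += np_
            # Keep the Np fittest individuals from the union (as in ODE).
            merged = sorted(zip(fit + fopp, pop + opp), key=lambda t: t[0])[:np_]
            fit = [t[0] for t in merged]
            pop = [t[1] for t in merged]
    return min(fit), nfc

# Usage on a 2-D sphere function:
best, nfc = coode(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 2,
                  np_=20, max_nfc=20000)
```

Setting Jr = 0 disables the jumping block entirely, which is why COODE degenerates to plain DE in that case.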
Experimental setup: A comprehensive set of numerical benchmark functions, including 58 different well-known global optimization problems (Rahnamayan et al., 2008b), has been employed for performance verification. The utilized test suite includes unimodal as well as highly multimodal minimization problems. The dimensionality of the problems varies from 2 to 30 in order to cover a wide range of problem complexity.
For all conducted experiments, the parameter settings are as follows. These values have been chosen according to the settings reported in the previous literature (Rahnamayan et al., 2008b), so no new attempt was made to obtain better values for them. The termination criterion is to find a value smaller than the Value-To-Reach (VTR) before reaching the maximum Number of Function Calls (NFC):
• | Population size, Np = 100 |
• | Differential amplification factor, F = 0.5 |
• | Crossover probability constant, Cr = 0.9 |
• | Jumping rate constant, Jr = 0.3 |
• | Mutation strategy: DE/rand/1/bin (classic version of DE) |
• | Maximum NFC, MAXNFC = 10^6 |
• | Value to reach, VTR = 10^-8 |
The convergence speed is compared by measuring the number of function calls, which is the most commonly used metric in the literature. A smaller NFC means a higher convergence speed. In order to minimize the effect of the stochastic nature of the algorithms on the measured metric, the reported NFC for each function is the average over 100 trials.
The number of times the algorithm successfully reaches the VTR for each test function is measured as the Success Rate (SR):
SR = (number of trials reaching VTR) / (total number of trials)    (1)
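The two reported metrics, success rate and average NFC over successful trials, can be computed from per-trial records; the trial data below is hypothetical:

```python
def summarize(trials, vtr=1e-8):
    """Success rate (Eq. 1) and average NFC over the successful trials.
    Each trial is a (best_fitness, nfc) pair; the data here is hypothetical."""
    successes = [nfc for best, nfc in trials if best <= vtr]
    sr = len(successes) / len(trials)
    avg_nfc = sum(successes) / len(successes) if successes else float("nan")
    return sr, avg_nfc

# Three successful trials and one failure that exhausted MAXNFC = 10^6.
trials = [(1e-9, 48200), (1e-10, 51300), (2e-3, 10**6), (5e-9, 49900)]
print(summarize(trials))  # (0.75, 49800.0)
```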
Experiment results: Here, a time-varying jumping rate model is investigated. According to this model, the jumping rate increases or decreases linearly during the evolution with the number of function calls. To support as fair a comparison as possible, the time-varying settings are chosen so that their average jumping rate over a run equals the constant setting of 0.3. The four settings used in this investigation are: Jr(constant) = 0.3, Jr(JrUp) = 0.6×NFC/MAXNFC, Jr(JrDown) = 0.6×(1 − NFC/MAXNFC) and Jr(constant) = 0.6, where MAXNFC and NFC are the maximum number of function calls and the current number of function calls, respectively. Figure 2 shows the corresponding diagrams (jumping rate vs. NFC) for the four settings. Jr(JrDown) represents a higher jumping rate during exploration and a lower one during exploitation (fine-tuning); Jr(JrUp) behaves in exactly the reverse manner.
Fig. 2: | Jumping rate vs. NFC |
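The linear schedules can be sketched as follows; the exact linear form, rising or falling between 0 and a peak of 0.6 so that the run-averaged value is 0.3, is a reconstruction from the text rather than a formula copied from the paper:

```python
JR_MAX = 0.6  # assumed peak so the average over a full run equals 0.3

def jr_up(nfc, max_nfc):
    """Jumping rate increasing linearly with the number of function calls."""
    return JR_MAX * nfc / max_nfc

def jr_down(nfc, max_nfc):
    """Jumping rate decreasing linearly: high while exploring, low while tuning."""
    return JR_MAX * (1.0 - nfc / max_nfc)

print(jr_up(0, 10**6), jr_up(10**6, 10**6))      # 0.0 0.6
print(jr_down(0, 10**6), jr_down(10**6, 10**6))  # 0.6 0.0
```

Note that when the NFC actually consumed is tiny relative to MAXNFC, jr_up stays near 0 and jr_down stays near 0.6 throughout the run, which explains the pairwise similarities reported in the results below.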
With these settings, the effects of generation jumping in COODE during the optimization process are investigated thoroughly.
Results of applying DE, COODE (Jr = 0.3), COODE (JrUp), COODE (JrDown) and COODE (Jr = 0.6) to the 58 test problems are given in Table 1. The best success performance for each function is highlighted in boldface. The last rows of the table present the total NFC and the average SR. With respect to the total NFC over the 58 problems, the algorithms are ranked as COODE (Jr = 0.3) (best), COODE (Jr = 0.6), COODE (JrDown), COODE (JrUp) and DE. COODE (JrDown) presents the lowest average success rate (0.78), while DE and COODE (JrUp) show the highest one (0.88).
A pairwise comparison of these algorithms is presented in Table 2. The number in each cell shows on how many functions the algorithm in the table's row outperforms the corresponding algorithm in the table's column. The last column of the table shows the totals (the number of cases in which the algorithm outperforms its competitors). By comparing these numbers, the following ranking is obtained: COODE (Jr = 0.6) (best), COODE (JrDown), COODE (Jr = 0.3), COODE (JrUp) and DE. The results for Jr = 0.3 and Jr = 0.6 confirm that a higher constant jumping rate reduces the overall success rate.
According to Tables 1 and 2, the performance of COODE (JrDown) is similar to that of COODE (Jr = 0.6) in terms of both NFC and SR. As seen above, the number of function calls for most benchmark functions is negligible compared with the pre-determined MAXNFC of 10^6. Consequently, as Fig. 2 shows, the jumping rate of COODE (JrDown) changes only marginally during the evolution process. Since all control parameters except the jumping rate are identical between COODE (JrDown) and COODE (Jr = 0.6), the two exhibit similar results.
Table 1: | Comparison of DE, COODE (Jr = 0.3), COODE (JrUp), COODE (JrDown) and COODE (Jr = 0.6) |
DE: Differential evolution, COODE: Opposition-based differential evolution using the current optimum, NFC: No. of function calls and SR: Success rate |
For the same reason, there is no difference between the experimental results of COODE (JrUp) and DE. It is worth noting that COODE degenerates to DE when the jumping rate equals zero.
Table 2: | Pair comparison of DE, COODE (Jr = 0.3), COODE (JrUp), COODE (JrDown) and COODE (Jr = 0.6) |
DE: Differential evolution, COODE: Opposition-based differential evolution using the current optimum |
CONCLUSION
Utilizing opposite points to accelerate an optimization method is a relatively new idea. In this paper, the influence of the jumping rate was studied. The experimental results confirm that a higher jumping rate leads to faster convergence to the global optimum but a smaller success rate for most problems. Future work should apply the COOBL scheme in different fields of soft computing in order to enhance evolutionary algorithms, artificial neural networks, ant colony algorithms, particle swarm optimization, etc.
ACKNOWLEDGMENT
This study was supported by the National Natural Science Foundation of China (Nos. 61001202 and 61003199).