
Research Article


Dual-system Cooperative Coevolutionary Differential Evolution Algorithm for Solving Nonseparable Function Optimization

FengZhe Cui,
Lei Wang,
ZhiZheng Xu,
XiuKun Wang
and
HongFei Teng



ABSTRACT

In recent years, research on high-dimensional nonseparable function optimization has made progress. Approaches based on Potter's Cooperative Coevolutionary (CC) framework have achieved good results and attracted great attention. However, the computational results are still unsatisfactory for most Benchmark functions. Therefore, this study develops a dual-system (population) cooperative coevolutionary differential evolution (DCCDE) algorithm based on a dual-system Evolutionary Algorithm (EA). The algorithm adopts a static variable grouping pattern and an improved Differential Evolution (DE) algorithm combined with a simplex crossover (SPX) local search strategy and modifies the migration pattern of the subindividuals (not subpopulations) among the subsystems (groups of variables) of the dual-system. Test results on 20 Benchmark functions (including 17 nonseparable functions, dimension D = 1000) show that the proposed algorithm is better than comparable algorithms in computational accuracy.





Received: April 21, 2013;
Accepted: May 10, 2013;
Published: July 03, 2013


INTRODUCTION

The engineering background of nonseparable function optimization lies in complicated coupled engineering system optimization problems. Nonseparable functions are divided into partially nonseparable functions and fully nonseparable functions. Nonseparable functions and multimodal functions are essentially no different; they differ only in point of view: the former is viewed from the coupling relationships between variables, while the latter is viewed from the multimodal structure of the function's solution space. Such functions have more or less correlation (coupling) among their variables, so nonseparable functions, like multimodal functions, have become a research hotspot receiving wide attention. In particular, fully nonseparable function optimization problems are very difficult to solve.
Heuristic algorithms (such as PSO, DE and GA) are generally effective for
nonseparable function optimization problems with few dimensions (Zhang
and Teng, 2009; Vesterstrom and Thomsen, 2004; Wang
et al., 2013; Liu and Li, 2011; Gao
et al., 2006). In recent years, CCEA (Potter's CC) has attracted extensive
attention and made encouraging progress in solving high-dimensional nonseparable
function optimization problems. Potter and De Jong (1994)
proposed a Cooperative Coevolutionary Genetic Algorithm (CCGA) and, in 2000, developed the cooperative coevolutionary algorithm framework (CCEA). CCGA was used to solve the nonseparable Rosenbrock function. To overcome the difficulties presented by interacting variables, they modified the credit assignment algorithm, comparing best-collaborator selection with combined best-and-random collaborator selection. The experimental results illustrate that the way individuals are evaluated influences the algorithm's performance; that is, the results are affected by the collaborator selection mechanism (Potter and De Jong, 2000). For a class of coupled functions, they explored the emergence of coadapted subcomponents, suggesting that the evolution of species might need to be driven by more than the overall fitness of the ecosystem to produce good decompositions. Sofge et al. (2002) proposed a blended population approach to cooperative coevolution (BCCES). They combined a Cooperative Coevolutionary Evolutionary Strategy (CCES) and a standard Evolutionary Strategy (ES) in a single evolutionary process and introduced a migration operator that allows individuals to migrate from the subpopulations of CCES to the population of ES. One population consists of subspecies and is implemented as a Potter CCEA; the other is a traditional EA. Experimental results indicated that the blended population model outperformed the CCEA on nonseparable function problems (D = 2). Shi et al. (2005) presented Cooperative Coevolutionary Differential Evolution (CCDE). CCDE adopts Potter's CC framework to improve DE's performance. By splitting the solution vectors of DE, CCDE partitions a high-dimensional search space into smaller vectors, so the subcomponents of a solution can be coevolved by multiple cooperating subspecies (smaller vectors). The experimental results showed that, for nonseparable problems (D = 100), CCDE outperformed traditional DE and CCGA.
Yang et al. (2008) proposed a new cooperative coevolution framework (DECCG) that can handle large-scale nonseparable optimization problems. A random grouping scheme and an adaptive weighting strategy were used in problem decomposition to make the variables in the same group as strongly coupled as possible and the variables in different groups as weakly coupled as possible. Experimental results (D = 1000) indicated that DECCG can effectively solve nonseparable function optimization problems of up to 1000 dimensions. Li and Yao (2012) developed a new cooperative coevolutionary particle swarm optimization algorithm (CCPSO2), based on the cooperative coevolutionary framework, to solve large-scale nonseparable function optimization problems. The algorithm updates the particle personal best (Pbest) and neighborhood best (Lbest) positions using a new PSO model based on a combination of Cauchy and Gaussian mutation operators, which improves search ability, and employs random dynamic variable grouping. The random dynamic variable grouping approach is a breakthrough research result. Tests on a standard Benchmark function set (Tang et al., 2007) show that CCPSO2 can effectively solve the nonseparable function F7 at up to 2000 dimensions, with good computational accuracy and robustness.
Although research on solving nonseparable problems has made progress in recent years, the optimal solution has not yet been achieved for most of these problems. For most Benchmark functions, the results are even several orders of magnitude away from the optimal solution. In order to improve the computational performance of algorithms in the CC framework on high-dimensional nonseparable problems, we developed the dual-system cooperative coevolutionary differential evolution (DCCDE) algorithm to solve high-dimensional nonseparable function problems.

DUAL-SYSTEM COOPERATIVE COEVOLUTIONARY DIFFERENTIAL EVOLUTION
Basic idea: The DCCDE we develop is based on the dual-system variable-grain cooperative coevolutionary algorithm (DVGCCEA), also called OboeCCEA (Teng et al., 2010), whose application domain is complex coupled system optimization problems (e.g., satellite layout optimization). Below is a brief description of OboeCCEA. OboeCCEA decomposes the original problem or system P (equivalent to a nonseparable function) into E subsystems PPe (e = 1, 2, ..., E) (equivalent to decomposing the function variables into E groups) and then duplicates system P as systems A and B, each of which includes the respective subsystems (AAe or BBe). System A is a virtual Potter CC, while system B is an authentic one. The two systems evolve in parallel: system A is optimized globally (all-in-one), while the optimization of system B is achieved by its subsystems BBe optimizing in parallel. System A adopts a coarse-to-fine variable-grain strategy in order to reduce the extra time brought by the dual-system. It is the subindividual migration between subsystems AAe and BBe on the group level, rather than individual migration between systems A and B on the whole level, that improves the diversity of the population (the 'subsystem' here differs from the 'subpopulation' of Sofge et al. (2002)). System B adopts an implicit coordination mechanism. The difference between OboeCCEA and traditional CCEA is that the BBe are evaluated by system A rather than by system B. OboeCCEA (Teng et al., 2010) did well in solving satellite-module layout optimization, but it did not touch upon nonseparable function optimization problems.
A dual-system cooperative coevolutionary differential evolution with human-computer cooperation (HDCCDE) was developed by Zhang et al. (2012). HDCCDE incorporates human-computer cooperation into OboeCCEA; namely, artificial individuals are added in the optimization process of system A.
In this study, DCCDE employs the dual-system CCEA architecture of OboeCCEA (Teng et al., 2010). The differences between DCCDE and OboeCCEA lie in:
• DCCDE does not use the variable-grain strategy, because we mainly focus on function optimization problems rather than complex engineering system optimization problems
• DCCDE decomposes the original problem P by static variable grouping, while OboeCCEA decomposes P according to the physical structure of engineering systems
• Systems A and B of DCCDE use the improved DE algorithm, which is combined with a simplex crossover (SPX) (Tsutsui et al., 1999) local search strategy, in order to improve convergence
• Systems A and B evolve in parallel. System A optimizes on a global level, while the optimization of system B is achieved by cooperative coevolution among its subsystems BBe. The AAe in system A are virtual subsystems corresponding to the subsystems BBe in system B. Individual migration between systems A and B is achieved by subindividual migration between AAe and BBe, which improves the diversity of the population. The subindividual migration process between AAe and BBe is the same as in OboeCCEA
In this study, we focus on the dual-system CC coordination mechanism and on improving the space-searching ability of the EA, in order to enhance the ability of DCCDE to solve nonseparable functions.

Variable grouping: In this study, static variable grouping is employed to solve fully nonseparable functions. We set a fixed number of variable groups and randomly distribute the variables into the groups. The grouping principle is that the coupled relationships (usually strong correlations) of variables in the same group are kept the same as in the original system, while the variables in different groups are independent of each other. To illustrate variable grouping, we take the partially nonseparable function F14 as an example. F14 is defined as follows:
where D is the dimension of the variables; E is the number of groups and m = D/E is the number of variables in each group; P is a D-dimensional random permutation of the variable positions. The decision variables of the kth group, x(P_{(k-1)*m+1}:P_{k*m}), are shown in Fig. 1. For example, when D = 100 and E = 5, m = 20.
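The static grouping described above can be sketched as follows (an illustrative sketch in Python; the function name is ours, not from the original implementation):

```python
import random

def static_grouping(dim, num_groups, seed=None):
    """Randomly allocate dim variable indices into num_groups fixed
    groups of equal size m = dim / num_groups (static: done once)."""
    if dim % num_groups != 0:
        raise ValueError("dim must be divisible by num_groups")
    m = dim // num_groups
    p = list(range(dim))          # P: a random permutation of variable positions
    random.Random(seed).shuffle(p)
    # the k-th group takes positions P_{(k-1)*m+1} .. P_{k*m}
    return [p[k * m:(k + 1) * m] for k in range(num_groups)]

groups = static_grouping(100, 5)  # D = 100, E = 5 -> m = 20 variables per group
```

Each group then keeps the coupling among its own variables, while the random permutation decides which variables end up together.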
Coordination mechanism of DCCDE and information exchange between systems
A and B: The implicit overall coordination mechanism in CCEA relates to
collaborator selection. Some common methods of collaborator selection are: random individual selection, multiple-individual selection, best-individual selection (Wiegand et al., 2001) and the archive method (Panait et al., 2006). In this study, system B is decomposed into several subsystems BBe, which should maintain coordination consistency during evolution if there are coupled relationships among the subsystems BBe.
The coordination mechanism we improve on the basis of OboeCCEA is as follows. Before migrating from BBe (variable groups) to AAe (variable groups), the elite subindividuals X^{k*}_{BBe} must be evaluated. The approach we adopt is this: following best-collaborator selection, collaborators are selected from the remaining (E-1) subsystems (variable groups) of system B to constitute complete individuals X^{k}_{BBe}, which are evaluated by system A rather than by system B as in traditional CCEA. If the migration criterion is satisfied, X^{k*}_{BBe} and its collaborators migrate to the corresponding AAe to replace the worst individuals X^{k,worst}_{AAe} and the X^{k}_{BBe} in system A iterate for m generations by survival of the fittest. It is noteworthy that generally not one single elite subindividual but several subindividuals migrate from BBe to AAe. The process by which the elite individuals X^{k*}_{AAe} in AAe migrate to BBe is the same. It is worth stressing that the individual migrations between systems A and B are achieved by the subindividual migrations between AAe and BBe in order to increase the diversity of the population. Systems A and B adopt the elitist preserving strategy and utilize their synergies to increase the convergence of the algorithm.

Fig. 1: 
The decomposition of function F_{14} 
Finally, system B approximates system A and system A approximates the original system P (the original problem).
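The evaluation-and-migration step described above can be sketched as follows (our own minimal illustration; the function names and the replace-the-worst criterion are assumptions based on the text, not the authors' code):

```python
import numpy as np

def sphere(x):
    """Stand-in objective for the demonstration (minimisation)."""
    return float(np.sum(x * x))

def migrate_elite(f, group, elite_sub, collab_best, AAe_pop, AAe_fit):
    """One BBe -> AAe migration: complete the elite subindividual with
    the best collaborators from the other groups, evaluate it on the
    full objective f (system A's view) and let it replace AAe's worst
    individual if it is better."""
    x = collab_best.copy()
    x[group] = elite_sub             # plug the elite subcomponent into its group slots
    fx = f(x)
    worst = int(np.argmax(AAe_fit))  # minimisation: worst = largest objective value
    if fx < AAe_fit[worst]:          # migration criterion (assumed): improve on the worst
        AAe_pop[worst] = x
        AAe_fit[worst] = fx
    return fx
```

For example, on a 4-D sphere with groups {x1, x2} and {x3, x4}, an elite zero subvector for the first group combined with an all-ones collaborator yields the complete individual (0, 0, 1, 1).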
Exploration ability of the improved heuristic algorithm: The exploration ability of an algorithm is mainly reflected in two aspects: diversity and convergence. DCCDE fully utilizes the migration of information between AAe and BBe to increase the diversity of the system and employs the improved DE as the solving algorithm. In the improved DE, the individuals have good diversity in the early stage of evolution, so their exploration ability is strong; but as the number of generations increases, the differences among individuals decrease and the convergence rate slows in the later stage. The improved DE combines traditional DE with a simplex crossover (SPX) (Tsutsui et al., 1999) local search strategy; the SPX local search is executed every few generations during optimization. Note that a proper search length is very important for the SPX local search strategy: a short length does little to improve search quality, while a long length may bring unnecessary extra evaluations, which cost considerable time. Considering function complexity and the allowed maximum number of evaluations, we set the maximum number of SPX local search iterations to T = D/5 (D is the variable dimension). In addition, in order to avoid premature convergence and to save evaluations, the SPX local search stops once a better fitness is obtained.
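The SPX local search with the early-stop rule above might be sketched as follows (a sketch under assumptions: the sampling follows Tsutsui et al.'s simplex crossover, the choice of 3 parents and the replace-the-worst acceptance are ours):

```python
import numpy as np

def spx_offspring(parents, eps=1.0, rng=None):
    """Simplex crossover (SPX): sample one offspring from the simplex
    spanned by the parent vectors, expanded by factor eps about its centre."""
    rng = rng or np.random.default_rng()
    m = len(parents) - 1
    o = np.mean(parents, axis=0)                 # centre of the parent simplex
    y = [o + eps * (p - o) for p in parents]     # expanded vertices
    c = np.zeros_like(o)
    for k in range(1, m + 1):
        r = rng.uniform() ** (1.0 / k)           # r_{k-1} = u^(1/k)
        c = r * (y[k - 1] - y[k] + c)
    return y[m] + c

def spx_local_search(f, pop, fit, dim, rng=None):
    """Run SPX at most T = D/5 times; stop as soon as an offspring
    improves on the current worst individual (the early-stop rule)."""
    rng = rng or np.random.default_rng()
    T = max(1, dim // 5)
    for _ in range(T):
        idx = rng.choice(len(pop), size=3, replace=False)
        child = spx_offspring([pop[i] for i in idx], rng=rng)
        fc = f(child)
        worst = int(np.argmax(fit))
        if fc < fit[worst]:                      # better fitness found: accept and stop
            pop[worst], fit[worst] = child, fc
            break
    return pop, fit
```

Because an offspring is only accepted when it beats the current worst individual, the population's worst fitness never deteriorates during the local search.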
In the classical DE algorithm, CR and F generally take fixed values. CR relates to the nature and complexity of the problem and F relates to the convergence rate. Both are closely related to the problem under consideration and different problems have different optimal control parameters CR and F. Therefore, several algorithms that adaptively adjust the control parameters of DE during evolution have been proposed, such as SaDE (Qin et al., 2009) and jDE (Brest et al., 2006). In this study, an adaptive strategy is also employed. We set CR = 0.5(1 + rand(0, 1)), where rand(0, 1) is a uniformly distributed random number between 0 and 1. F takes values by the following rule (Bäck et al., 1991): the success rate of mutations should be maintained at 1/5; if it is greater than 1/5, F increases, otherwise F decreases. Hence:

F^{t+k} = Ci×F^{t}, if p^{t}_{s} > 1/5
F^{t+k} = Cd×F^{t}, if p^{t}_{s} < 1/5
F^{t+k} = F^{t}, if p^{t}_{s} = 1/5

where F^{t} and F^{t+k} are the mutation factors in the tth and (t+k)th generations, respectively; Cd = 0.82 and Ci = 1/Cd; and p^{t}_{s} is the success rate of mutations, measured over intervals of k trials.

DCCDE algorithm procedure: According to the above description, the pseudocode of the DCCDE algorithm and of the improved DE algorithm are shown in Algorithm 1 and 2, respectively.
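These two parameter rules translate directly into code (a small sketch; the function names are ours):

```python
import random

def adaptive_cr():
    """CR = 0.5 * (1 + rand(0, 1)), i.e. uniform in [0.5, 1.0)."""
    return 0.5 * (1.0 + random.random())

def update_f(F, success_rate, cd=0.82):
    """1/5 success rule for the mutation factor F: raise F when more
    than 1/5 of mutations over the last k trials succeeded, lower it
    when fewer succeeded, keep it otherwise (Ci = 1/Cd)."""
    ci = 1.0 / cd
    if success_rate > 0.2:
        return ci * F
    if success_rate < 0.2:
        return cd * F
    return F
```

In use, `update_f` would be called every k generations with the measured mutation success rate p^{t}_{s}.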
Algorithm 1: The pseudocode of DCCDE: 

Algorithm 2: The pseudocode of Improved DE (IDE): 

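Since the Algorithm 1 listing is not reproduced here, a hypothetical, much-simplified sketch of the dual-system loop it describes might look like the following (all names, the context choice and the migration criterion are our assumptions, not the paper's pseudocode):

```python
import numpy as np

def de_step(pop, fit, f, F=0.5, CR=0.9, rng=None):
    """One generation of DE/rand/1/bin with greedy selection (minimisation)."""
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    for i in range(n):
        others = [j for j in range(n) if j != i]
        a, b, c = pop[rng.choice(others, size=3, replace=False)]
        mutant = a + F * (b - c)
        mask = rng.random(d) < CR
        mask[rng.integers(d)] = True             # ensure at least one mutated gene
        trial = np.where(mask, mutant, pop[i])
        ft = f(trial)
        if ft < fit[i]:
            pop[i], fit[i] = trial, ft
    return pop, fit

def dccde_sketch(f, dim=20, num_groups=4, gens=30, pop_n=10, seed=0):
    """Minimal dual-system loop: system A evolves whole vectors; each BBe
    evolves its variable group against system A's best individual as the
    context; elite subindividuals migrate from BBe to replace A's worst."""
    rng = np.random.default_rng(seed)
    m = dim // num_groups
    perm = rng.permutation(dim)                  # static random grouping
    groups = [perm[k * m:(k + 1) * m] for k in range(num_groups)]
    A = rng.uniform(-5, 5, (pop_n, dim))
    fitA = np.array([f(x) for x in A])
    B = [A.copy() for _ in groups]               # subsystem populations BBe
    for _ in range(gens):
        A, fitA = de_step(A, fitA, f, rng=rng)   # system A: all-in-one optimisation
        context = A[int(np.argmin(fitA))].copy() # best collaborator from system A
        for e, g in enumerate(groups):
            def fe(x, g=g):                      # BBe evaluated via system A's context
                y = context.copy()
                y[g] = x[g]
                return f(y)
            fitBe = np.array([fe(x) for x in B[e]])
            B[e], fitBe = de_step(B[e], fitBe, fe, rng=rng)
            elite = B[e][int(np.argmin(fitBe))]
            worst = int(np.argmax(fitA))         # migration BBe -> AAe
            cand = A[worst].copy()
            cand[g] = elite[g]
            fc = f(cand)
            if fc < fitA[worst]:                 # accept only improving migrants
                A[worst], fitA[worst] = cand, fc
    return A[int(np.argmin(fitA))], float(np.min(fitA))
```

The sketch omits the improved-DE details (SPX local search, adaptive CR and F) covered earlier; greedy selection in both systems plays the role of the elitist preserving strategy.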
NUMERICAL SIMULATION
Test functions: In this study, we use the 20 representative Benchmark functions presented in Tang et al. (2010) to test the proposed DCCDE algorithm.
Definition 1 (Tang et al., 2010): A function f(x) is separable if its global optimum can be reached by optimizing each parameter xi independently of the others.

Definition 2 (Tang et al., 2010): A nonseparable function f(x) is called an m-nonseparable function if at most m of its parameters xi are not independent. A nonseparable function f(x) is called a fully nonseparable function if any two of its parameters xi are dependent on each other.

According to the above two definitions, the 20 high-dimensional functions in Tang et al. (2010) can be divided into three categories: (1) separable functions F1-F3; (2) partially nonseparable functions F4-F18; (3) fully nonseparable functions F19-F20.
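To make the distinction concrete, a separable function such as the sphere can be minimised one coordinate at a time, whereas in a nonseparable function such as Rosenbrock the optimal value of each variable depends on its neighbours (our illustration, not taken from the cited report):

```python
import numpy as np

def sphere(x):
    """Fully separable: the optimum of each x_i is independent of the rest."""
    return float(np.sum(x ** 2))

def rosenbrock(x):
    """Nonseparable: consecutive variables are coupled through x_{i+1} - x_i^2."""
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))
```

For the sphere, fixing all other coordinates never changes where the remaining one should go (x_i = 0); for Rosenbrock, the best x_{i+1} is x_i^2, so coordinate-wise optimization can stall.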
The experiment compares the performance of DCCDE with that of CCDE (Shi et al., 2005), DECCG (Yang et al., 2008) and SDENS (Wang et al., 2010) in solving nonseparable function optimization problems. The evaluation index is computational accuracy.
Experimental setup: The experimental parameters are set as follows. For the 20 test functions in Tang et al. (2010), the variable dimension was set to 1000, the population size of systems A and B was set to 30, the population size of the subsystems BBe and AAe was set to 30 and the number of subpopulations (variable groups) was set to 5. We allocated the 1000 variables to the 5 subpopulations. The control parameters CR and F were initialized to 0.9 and 1.2, respectively. The DE/rand/1/exp strategy was adopted by DCCDE. The maximum number of fitness evaluations (MAX_FES) is calculated by:

MAX_FES = 3000×D
where D is the number of dimensions. In order to eliminate the influence of random initialization on performance, every algorithm was run 25 times on each test function and the averaged results of the 25 independent runs were recorded. The function error of a solution x is defined as Δf(x) = f(x) - f(x*), where x* is the global optimum.

Experimental results and analysis: The computational results of the 25 independent runs of DCCDE on the 20 Benchmark functions are listed in Table 1. The averages of the function error Δf(x) over the 25 independent runs of DCCDE, CCDE, DECCG and SDENS are shown in Table 2. The MAX_FES in CCDE, DECCG and SDENS was set to 3×10^{6}. Because DCCDE adopts a dual-system architecture, the MAX_FES of each system of DCCDE was set to 1.5×10^{6} for fairness.

From the experimental results on the 20 Benchmark functions in Table 2, we find that DCCDE clearly outperforms SDENS. Compared with CCDE, DCCDE performs better on most of the test functions, specifically on the fully nonseparable function F19. This owes to the Potter CC framework and the subindividual migration pattern in the dual-system of DCCDE. The coordination mechanism in DCCDE reflects the interactions between the subindividuals in one variable group (subsystem) and those in the other variable groups and it also provides a potential environmental pressure, which guides the direction of evolution. In CCDE, the coordination consistency of evolution among subpopulations is maintained by collaborator selection. For weakly coupled (or separable) problems, this traditional coordination mechanism does well in guiding each subpopulation toward the optimal solutions, but for strongly coupled (or fully nonseparable) problems it does poorly. After the subindividuals in BBe migrate to AAe, system A optimizes at the global level, so the coupling relationships of the original problem can be considered at the system level.
After m iterations with the elitist strategy, system A migrates the global optimal solutions in AAe back to the corresponding BBe to direct the evolution of each subsystem in system B. For the m-nonseparable functions, DCCDE performed worse than CCDE on F7 and F12; the reason may be that the variable groupings for these two functions are improper. For F18 and F20, there is no significant difference between the two. Most importantly, DCCDE performed significantly better than CCDE on the remaining m-nonseparable functions. In a word, the proposed DCCDE algorithm achieved better results on 12 out of the 17 m-nonseparable functions. Compared with DECCG, DCCDE performed better on 11 out of 17 functions (F5, F6, F7, F8, F10, F11, F13, F16, F18, F19 and F20), according to Table 2. DECCG is a single-system DECC that adopts a new dynamic random grouping mechanism to increase the chance of allocating strongly coupled variables to the same subpopulation (variable group); together with an adaptive weighting strategy, this improves the ability of CCEA to solve nonseparable problems.
The test results show that DCCDE simultaneously outperformed CCDE and SDENS
on 7 out of 17 nonseparable Benchmark functions (D = 1000) and outperformed
DECCG on 11 out of the 17 functions.
Table 1: Experimental results of DCCDE over 25 independent runs on F1-F20, with dimension D = 1000

F_{1}-F_{3}: Separable functions, F_{4}-F_{18}: partially nonseparable functions and F_{19}-F_{20}: fully nonseparable functions
Table 2: Comparison among CCDE, DECCG, SDENS and DCCDE over 25 independent runs on functions F1-F20, with dimension D = 1000

F_{1}-F_{3}: Separable functions, F_{4}-F_{18}: partially nonseparable functions and F_{19}-F_{20}: fully nonseparable functions. The results of DECCG and SDENS are cited from Yang et al. (2008) and Wang et al. (2010), while the results of CCDE and DCCDE were obtained in this study. The number in brackets denotes the rank: 1 denotes best and 4 denotes worst
In addition, DCCDE is compared with BCCES. BCCES was tested on 6 functions (D = 2) in Sofge et al. (2002), where only convergence curves were given without any specific data table; F1, F2 and F3 in the convergence curves are comparable. The results show that DCCDE outperformed BCCES on F1 and F3, while for F2 there is no obvious difference between the two algorithms. This indicates that, for 2-D functions, the dual-system (dual-population) algorithm is better than the BCCES of Sofge et al. (2002), where high-dimensional functions were not involved.
ACKNOWLEDGMENT

This study was supported by the National Natural Science Foundation (Grant No. 50975039 and 51005034) and a National Ministries Project (Grant No. A0920110001) of P.R. China.

CONCLUSION

On the basis of OboeCCEA, we developed DCCDE based on a dual-system CC framework. In the dual-system of DCCDE, system A is a virtual Potter CC, while system B is the authentic Potter CC. We mainly focused on static variable grouping, the dual-system CC framework and its coordination mechanism and improving the space-searching ability of heuristic algorithms. The reasons why DCCDE can improve the diversity of the populations are: (a) a dual-system Potter CC framework and a new migration pattern of the subindividuals between AAe and BBe are adopted; (b) the improved DE is employed, so the searching ability in the early stage of optimization is good and the synergistic effect among the subsystems BBe comes into play. The reasons why DCCDE can improve convergence are: (a) a static variable grouping strategy, which randomly groups the variables into a fixed number of groups, is used to keep the coupled relationships of variables within a variable group the same as in the original problem (original system), while each variable group (subsystem) is independent of the others; (b) owing to the improved DE, the searching ability in the later stage of optimization is good; (c) system A optimizes at the global level and system B optimizes based on Potter CC synchronously; with the elitist preserving strategy, system B approximates system A, which approximates system P (the original problem). The test results show that, for most of the 17 nonseparable Benchmark functions (D = 1000), the proposed DCCDE is better than the other algorithms in computational accuracy. However, the proposed variable grouping pattern is improper for some functions; hence, our future work will focus on dynamic variable grouping patterns.

REFERENCES 
1: Zhang, Y.N. and H.F. Teng, 2009. Detecting particle swarm optimization. Concurrency Comput. Pract. Exp., 21: 449-473. CrossRef | Direct Link
2: Vesterstrom, J. and R. Thomsen, 2004. A comparative study of differential evolution, particle swarm optimization and evolutionary algorithms on numerical benchmark problems. Proceedings of the Congress on Evolutionary Computation, Volume 2, June 19-23, 2004, Portland, OR., USA., pp: 1980-1987. CrossRef
3: Wang, C.X., C.H. Li, H. Dong and F. Zhang, 2013. An efficient differential evolution algorithm for function optimization. Inform. Technol. J., 12: 444-448. CrossRef | Direct Link
4: Liu, Y. and S. Li, 2011. A new differential evolutionary algorithm with neighborhood search. Inform. Technol. J., 10: 573-578. CrossRef | Direct Link
5: Gao, H., B. Feng, Y. Hou, B. Guo and L. Zhu, 2006. Adaptive SAGA based on mutative scale chaos optimization strategy. Inform. Technol. J., 5: 524-528. CrossRef | Direct Link
6: Potter, M.A. and K.A. De Jong, 1994. A cooperative coevolutionary approach to function optimization. Proceedings of the International Conference on Evolutionary Computation and the 3rd Conference on Parallel Problem Solving from Nature, October 9-14, 1994, Jerusalem, Israel, pp: 249-257. CrossRef
7: Potter, M.A. and K.A. De Jong, 2000. Cooperative coevolution: An architecture for evolving coadapted subcomponents. Evol. Comput., 8: 1-29. CrossRef | Direct Link
8: Sofge, D., K.A. De Jong and A. Schultz, 2002. A blended population approach to cooperative coevolution for decomposition of complex problems. Proceedings of the 2002 Congress on Evolutionary Computation, Volume 1, May 12-17, 2002, Honolulu, HI., USA., pp: 413-418. CrossRef
9: Shi, Y.J., H.F. Teng and Z.Q. Li, 2005. Cooperative coevolutionary differential evolution for function optimization. Proceedings of the 1st International Conference on Advances in Natural Computation, August 27-29, 2005, Changsha, China, pp: 1080-1088. CrossRef | Direct Link
10: Yang, Z., K. Tang and X. Yao, 2008. Large scale evolutionary optimization using cooperative coevolution. Inform. Sci., 178: 2985-2999. CrossRef
11: Li, X. and X. Yao, 2012. Cooperatively coevolving particle swarms for large scale optimization. IEEE Trans. Evol. Comput., 16: 210-224. CrossRef
12: Tang, K., X. Yao, P.N. Suganthan, C. MacNish, Y.P. Chen, C.M. Chen and Z. Yang, 2007. Benchmark functions for the CEC'2008 special session and competition on large scale global optimization. Technical Report, Nature Inspired Computation and Applications Laboratory, USTC, China, pp: 1-18.
13: Zhang, Z.H., Y.J. Shi and H.F. Teng, 2012. Dual-system cooperative coevolutionary differential evolution with human-computer cooperation for satellite module layout optimization. J. Convergence Inform. Technol., 7: 346-355. Direct Link
14: Tsutsui, S., M. Yamamura and T. Higuchi, 1999. Multi-parent recombination with simplex crossover in real coded genetic algorithms. Proceedings of the Genetic and Evolutionary Computation Conference, July 13-17, 1999, Orlando, FL., USA., pp: 657-664.
15: Wiegand, R.P., W.C. Liles and K.A. De Jong, 2001. An empirical analysis of collaboration methods in cooperative coevolutionary algorithms. Proceedings of the Genetic and Evolutionary Computation Conference, July 7-11, 2001, San Francisco, CA., USA., pp: 1235-1242.
16: Panait, L., S. Luke and J.F. Harrison, 2006. Archive-based cooperative coevolutionary algorithms. Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, July 8-12, 2006, Seattle, WA., USA., pp: 345-352. CrossRef
17: Qin, A.K., V.L. Huang and P.N. Suganthan, 2009. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans. Evol. Comput., 13: 398-417. Direct Link
18: Brest, J., S. Greiner, B. Boskovic, M. Mernik and V. Zumer, 2006. Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems. IEEE Trans. Evol. Comput., 10: 646-657. Direct Link
19: Bäck, T., F. Hoffmeister and H.P. Schwefel, 1991. A survey of evolution strategies. Proceedings of the 4th International Conference on Genetic Algorithms, July 13-16, 1991, San Diego, CA., USA., pp: 2-9.
20: Teng, H.F., Y. Chen, W. Zeng, Y.J. Shi and Q.H. Hu, 2010. A dual-system variable-grain cooperative coevolutionary algorithm: Satellite-module layout design. IEEE Trans. Evol. Comput., 14: 438-455. CrossRef
21: Tang, K., X. Li, P.N. Suganthan, Z. Yang and T. Weise, 2010. Benchmark functions for the CEC'2010 special session and competition on large scale global optimization. Proceedings of the IEEE Congress on Evolutionary Computation, July 18-23, 2010, Barcelona, Spain, pp: 1-23.
22: Wang, H., Z. Wu, S. Rahnamayan and D. Jiang, 2010. Sequential DE enhanced by neighborhood search for large scale global optimization. Proceedings of the IEEE Congress on Evolutionary Computation, July 18-23, 2010, Barcelona, Spain, pp: 1-7.



