Multi-objective Optimization Problems (MOPs) are very common in the real world,
especially in engineering applications such as production scheduling, design of
complex software systems and potential-function parameter optimization (Coello
et al., 2004; Parsopoulos and Vrahatis, 2002;
Ahmed and Zamli, 2011; Hernane et
al., 2010). These problems involve optimizing multiple competing and
incommensurable criteria that reflect various design specifications and constraints.
The use of evolutionary algorithms for multi-objective optimization has significantly
grown in recent years (Coello et al., 2004; Zitzler
et al., 2003; Chen, 2011). The most commonly
used population-based evolutionary computation techniques are inspired by
natural evolution.
Particle Swarm Optimization (PSO) is a relatively recent heuristic technique
motivated by the simulation of social behavior (Lu and
Chen, 2011). PSO has been shown to offer higher convergence speed for multi-objective
optimization problems as compared to canonical evolutionary algorithms. The
ability to handle complex problems, involving features such as discontinuities,
multimodality, disjoint feasible spaces and noisy function evaluations, reinforces
the potential effectiveness of PSO in MOPs (Fonseca and
Fleming, 1995; Kennedy and Eberhart, 1995). Many
PSO based optimizations for MOPs have been developed (Coello
et al., 2004; Liu et al., 2007; Zitzler
et al., 2002; Deb et al., 2002; Deb,
1999; Cui et al., 2006), but their experimental performance depends greatly on the distribution of the initial population: they cannot obtain acceptable results when that distribution does not represent the search space well.
Chaos is one of the most common phenomena in nonlinear systems; its behavior is
complex and resembles randomness. Chaotic variables
can go through all states in certain ranges according to their own regularity
without repetition. Due to the ergodic and dynamic properties of chaotic variables,
chaos search is more capable of hill-climbing and escaping from local optima
than random search and has thus been applied to optimization (Zuo
and Fan, 2006). It is now widely recognized that chaos is a fundamental
mode of motion underlying almost all natural phenomena. Given an energy or cost
function, a chaotic dynamic system may eventually reach the global optimum or
its best approximation with high probability. Chaos was introduced into the
optimization strategy to accelerate the optimum seeking operation. In this study,
a chaos-based PSO strategy, MCPSO, was developed to solve MOPs.
Multi-objective optimization: The MOP methodology aims at finding optimal
solutions to multivariate problems with multiple, often conflicting, objectives
(Fonseca and Fleming, 1995; Yang
et al., 2009). In most cases, it is impossible to find the global
optimum for all objectives at the same time. The family of solutions is composed
of all those potential solutions such that the components of the corresponding
objective vectors cannot be all simultaneously improved. The goal of MOP is
to provide a set of Pareto optimal solutions to the mentioned problem (Fang
et al., 2011).
General MOPs can be defined as follows. Let X be an n-dimensional search space and fi(x), i = 1, ..., m, be m objective functions defined over X. Supposing that:

gi(x) ≤ 0, i = 1, ..., p

are p inequality constraints, the MOP can be described as finding a vector:

x* = (x1*, x2*, ..., xn*) ∈ X

that satisfies the constraints and optimizes the vector function:

f(x) = (f1(x), f2(x), ..., fm(x))

Let p = (p1, ..., pm) and q = (q1, ..., qm) be two objective vectors. p is said to dominate q if and only if pi ≤ qi for all i = 1, ..., m and pi < qi for at least one i. This relation is defined as Pareto dominance. A solution x* ∈ X is said to be Pareto optimal if, for every x ∈ X, either fi(x) = fi(x*) for all i = 1, ..., m or there is at least one i = 1, ..., m such that:

fi(x) > fi(x*)
That is to say, x* is Pareto optimal if there exists no feasible vector which
would decrease some criterion without causing a simultaneous increase in at
least one other criterion; equivalently, there is no solution y such that f(y)
dominates f(x*) (Parsopoulos and Vrahatis, 2002).
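As an illustration, the Pareto dominance relation above (for minimization) can be checked with a few lines of code (a sketch; the function name is ours):

```python
def dominates(p, q):
    """Return True if objective vector p Pareto-dominates q under
    minimization: p_i <= q_i for all i and p_i < q_i for at least one i."""
    return (all(pi <= qi for pi, qi in zip(p, q))
            and any(pi < qi for pi, qi in zip(p, q)))
```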
The set of all Pareto optimal solutions of an MOP is called the Pareto optimal set, denoted P*. The Pareto front is defined as:

PF* = {f(x) = (f1(x), ..., fm(x)) | x ∈ P*}

A Pareto front is called convex if and only if, for all u, v ∈ PF* and all λ ∈ (0, 1), there exists w ∈ PF* such that λ||u|| + (1 − λ)||v|| ≥ ||w||, and it is called concave if and only if, under the same conditions, λ||u|| + (1 − λ)||v|| ≤ ||w||.
In the general case, it is impossible to find an analytical expression of the line or surface that contains these points. The aim of an Evolutionary Multi-objective Optimization (EMO) is to find a set of Pareto-optimal solutions approximating the true Pareto-optimal front.
Chaos based PSO: PSO is a population based search strategy where individuals, considered as particles, are grouped into a swarm. Each particle represents a potential solution to the optimization problem and each particle has its own fitness value and velocity. These particles fly through the n-dimensional problem space by adjusting their positions and velocities in search space according to their own historical experiences and that of neighboring particles.
In order to handle multi-objective optimization problems, the standard PSO
must be modified before being applied to MOPs. Several methods applying PSO
to MOPs have been developed (Coello et al., 2004;
Liu et al., 2007; Parsopoulos
and Vrahatis, 2002). The best known is the grid-based nBest selection process
developed by Coello (Coello et al., 2004).
In this study, a different, chaos-based optimization strategy was used to handle
multiple objectives: the population was disarranged using chaos whenever it
became stuck in a local optimum. After several rounds of running, multiple
optima were found.
Chaos optimization strategy: Chaos has three important dynamic properties:
the sensitive dependence on initial conditions, the intrinsic stochastic property
and ergodicity. Chaos is in essence deeply related to evolution: in chaos
theory, biological evolution is regarded as feedback randomness, where the
randomness is caused not by outside disturbances but by intrinsic elements (Tong
et al., 1999). The chaos-based PSO strategy is proposed by integrating
chaos optimization algorithm and PSO algorithm. First, the solution space of
the optimization problem is mapped to the ergodic space of chaotic variables
and thus optimization variables are represented by chaotic variables which are
coded into a particle. Then, taking full advantages of the ergodic and stochastic
properties of chaotic variables, a chaos search is performed in the neighborhoods
of particles to exploit the local solution space and the motion of chaotic variables
in their ergodic space is used to explore the whole solution space. Detailed
operations of the MCPSO are developed as follows.
Step 1: Initialization: The chaotic variables are generated using the classic logistic mapping defined by:

xk+1 = L·xk·(1 − xk)

where L is a control parameter between 0 and 4.0 (Moon, 1992). If L ∈ (3.56, 4.0), the above system enters a chaotic state and the chaotic variable x is produced. Given an arbitrary initial value in (0, 1) not equal to 0.25, 0.5 or 0.75, the chaotic trajectory will visit, without repetition, points in the neighborhoods of particles by iteration. Utilizing this chaotic characteristic, n chaotic variables with different orbits can be obtained by setting n different initial values in the above equation.
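The iteration above can be sketched as follows (a minimal illustration; the function name and the choice L = 4.0 are ours):

```python
def logistic_sequence(x0, n, L=4.0):
    """Generate n chaotic values by iterating the logistic map
    x_{k+1} = L * x_k * (1 - x_k); the regime is chaotic for L in (3.56, 4.0]."""
    xs, x = [], x0
    for _ in range(n):
        x = L * x * (1.0 - x)
        xs.append(x)
    return xs
```

Starting n such sequences from n different initial values in (0, 1), avoiding 0.25, 0.5 and 0.75, yields the n chaotic variables with distinct orbits described in the text.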
Step 2: Solution space transformation: Each chaotic variable corresponds to an optimization particle variable. The range of chaotic variables is (0, 1), which may differ from that of the optimization variables. This can be solved by mapping the ergodic space of the chaos system to the solution space of the optimization problem by Eq. 10:

xi' = ai + (bi − ai)·xi     (10)

where xi' is the ith optimization variable of the problem, [ai, bi] is its range and xi is the ith chaotic variable. The continuous function of the optimization problem is treated as the objective function and the fitness of the particles can be calculated.
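Assuming the linear mapping of Eq. 10, the transformation can be sketched as (function name is ours):

```python
def chaos_to_solution(x, lower, upper):
    """Map chaotic variables x_i in (0, 1) to optimization variables
    x'_i = a_i + (b_i - a_i) * x_i in the range [a_i, b_i]."""
    return [a + (b - a) * xi for xi, a, b in zip(x, lower, upper)]
```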
Step 3: Chaotic search in particle neighborhoods: Each particle i has a position Xi = (xi1, xi2, ..., xin) and a velocity Vi = (vi1, vi2, ..., vin) in the variable space. Each particle keeps its own best position Pid, corresponding to the best personal objective value obtained so far; the global best particle is denoted by Pgd and represents the best particle found so far. The velocity and position of each particle i are updated as below:

vid(k+1) = ω·vid(k) + G1·r1·(Pid − xid(k)) + G2·r2·(Pgd − xid(k))

xid(k+1) = xid(k) + vid(k+1) + λ·zd(k+1)

where z(k) = (z1(k), ..., zm(k)) is a chaotic vector and zj(k) is the value of the jth chaotic variable at the kth iteration. G1 and G2 are constants called acceleration coefficients, ω is the inertia weight of the particle and r1 and r2 are random numbers uniformly distributed in the range (0, 1). λ is a constriction factor which controls and constricts the chaotic range. The ergodic space of the chaotic vector z(k) is an m-dimensional hypercube: when z(k+1) updates in its ergodic space through the chaotic mapping, λ·zd(k+1) updates in the neighborhood of xid to perform the chaotic search.
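The update step described above can be sketched as follows (a sketch under our reading of the update rule; the function name and all coefficient values are illustrative, not the paper's tuned settings):

```python
import random

def update_particle(x, v, pbest, gbest, chaos, w=0.7, g1=1.5, g2=1.5, lam=0.1):
    """One chaos-perturbed PSO step per dimension d:
       v_d <- w*v_d + g1*r1*(pbest_d - x_d) + g2*r2*(gbest_d - x_d)
       x_d <- x_d + v_d + lam*z_d,  z_d a chaotic variable in (0, 1)."""
    new_x, new_v = [], []
    for xd, vd, pd, gd, zd in zip(x, v, pbest, gbest, chaos):
        r1, r2 = random.random(), random.random()
        vd = w * vd + g1 * r1 * (pd - xd) + g2 * r2 * (gd - xd)
        new_v.append(vd)
        new_x.append(xd + vd + lam * zd)
    return new_x, new_v
```

Here the λ-scaled chaotic term perturbs the new position inside a small neighborhood, which is what lets the swarm escape a local optimum without abandoning the region found by the standard PSO terms.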
Convergence analysis: Let x* = arg min f(x) be the optimum sought by the logistic mapping used in the optimization search. Let {pk} be the solution sequence produced by MCPSO and {ck} be the solution sequence produced by the chaos search. For i ≤ j, f(pj) ≤ f(pi), so {f(pk)} is a convergent sequence; {f(ck)} is also a convergent sequence according to the convergence and approximation theorem (Yadong and Shaoyuan, 2002).
Metrics of performance: In the case of multi-objective optimization
problems, comparing different optimization techniques involves the notion of
performance metrics. Zitzler et al. (2003) suggested
a number of performance metrics to compare two or more solution sets.
In this study, the coverage of two sets, the convergence metric and the spacing
metric were taken into consideration in order to provide a quantitative assessment
(Deb et al., 2002; Heng et al.).

Let A, B be two approximate Pareto-optimal sets. The coverage of two sets gives how many decision vectors in B are dominated by A. The function Ic maps the ordered pair (A, B) to the interval [0, 1]:

Ic(A, B) = |{b ∈ B | ∃a ∈ A: a ⪰ b}| / |B|

where ⪰ means weakly dominates. The value Ic(A, B) = 1 means every decision vector in B is weakly dominated by A, while Ic(A, B) = 0 means no decision vector in B is weakly dominated by A.
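The coverage indicator can be computed directly from its definition (a sketch for minimization; the function name is ours):

```python
def coverage(A, B):
    """Ic(A, B): fraction of objective vectors in B that are weakly
    dominated by at least one vector in A (minimization)."""
    def weakly_dominates(p, q):
        return all(pi <= qi for pi, qi in zip(p, q))
    return sum(any(weakly_dominates(a, b) for a in A) for b in B) / len(B)
```

Note that Ic is not symmetric, so both Ic(A, B) and Ic(B, A) are reported in the comparisons below.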
The convergence metric represents the gap between the set of approximate Pareto-optimal solutions PFapp = {ai} and the true Pareto-optimal front PFtrue. For each point ai in PFapp, the smallest normalized Euclidean distance to PFtrue is given by:

di = min over p ∈ PFtrue of sqrt( Σ (m = 1..k) [ (fm(ai) − fm(p)) / (fm_max − fm_min) ]² )

where fm_max and fm_min are the maximum and minimum values of the mth objective in PFtrue, respectively, and k is the number of objective functions. The convergence metric is the average value of the normalized distance over all points in PFapp:

C = ( Σ di ) / |PFapp|
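The two formulas above translate into a short routine (a sketch; the function name is ours):

```python
import math

def convergence_metric(pf_app, pf_true):
    """Average over PF_app of the smallest normalized Euclidean distance
    to PF_true, each objective normalized by its range over PF_true."""
    k = len(pf_true[0])
    f_max = [max(p[m] for p in pf_true) for m in range(k)]
    f_min = [min(p[m] for p in pf_true) for m in range(k)]

    def dist(a, p):
        return math.sqrt(sum(((a[m] - p[m]) / (f_max[m] - f_min[m])) ** 2
                             for m in range(k)))

    return sum(min(dist(a, p) for p in pf_true) for a in pf_app) / len(pf_app)
```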
The metric of spacing measures the spread of vectors throughout the set of non-dominated vectors found so far; it gives an indication of how evenly the solutions are distributed in objective space. This metric is defined as:

S = sqrt( (1/(n − 1)) · Σ (i = 1..n) (d̄ − di)² )

where di is the minimum distance in objective space between solution i and any other solution in the set, d̄ is the average value of all di and n is the number of non-dominated vectors found so far.
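A common concrete instance of this metric is Schott's spacing, which uses the L1 distance between objective vectors; it can be sketched as (function name is ours):

```python
import math

def spacing(front):
    """Schott's spacing metric: standard deviation of the minimum L1
    distances d_i between each solution and its nearest neighbour;
    S = 0 indicates perfectly even spacing along the front."""
    n = len(front)
    d = [min(sum(abs(a - b) for a, b in zip(front[i], front[j]))
             for j in range(n) if j != i)
         for i in range(n)]
    d_bar = sum(d) / n
    return math.sqrt(sum((d_bar - di) ** 2 for di in d) / (n - 1))
```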
Benchmark problems and experimental results: For multi-objective optimization
problems, the benchmark problems should possess several features such as multimodality,
convexity, discontinuity and non-uniformity, which may challenge an algorithm's
capability to converge or to preserve good population diversity (Deb
et al., 2002). In this study, CMOPSO (Coello
et al., 2004), SPEA2 (Zitzler et al., 2002)
and NSGA-II (Deb et al., 2002) were compared
with MCPSO in solving six well-known multi-objective optimization benchmark
problems. These problems include three low-dimensional problems (DEB, FON and
KUR) and three ZDT problems (ZDT1, ZDT2 and ZDT3) (Deb et
al., 2002). These problems pose sufficient difficulty to impede the
ability in searching for the Pareto optimal solutions and have been applied
by many researchers to examine their proposed algorithms (Liu
et al., 2007; Parsopoulos and Vrahatis, 2002).
The definitions of these six problems are summarized in Table 1.
The group of ZDT benchmark problems defined below is structured in the same manner and consists of three functions f1, g and h: f1 is a function of the first decision variable only, g is a function of the remaining n-1 variables and the parameters of h are the function values of f1 and g.
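For instance, ZDT1 follows this f1/g/h structure; its standard definition from the literature can be sketched as:

```python
import math

def zdt1(x):
    """ZDT1 benchmark (minimization), x_i in [0, 1], typically n = 30:
       f1 = x1
       g  = 1 + 9 * sum(x2..xn) / (n - 1)
       h  = 1 - sqrt(f1 / g)
       f2 = g * h
    The Pareto-optimal front has g = 1 (all tail variables zero)."""
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    h = 1.0 - math.sqrt(f1 / g)
    return f1, g * h
```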
The first three low-dimensional benchmark functions (DEB, FON and KUR) are
simple in scale. They do not have mathematically defined Pareto-optimal fronts.
However, their approximate Pareto-optimal fronts have been illustrated in some
literature (Deb, 1999). The other three ZDT problems
have 30 decision variables. Their Pareto-optimal fronts illustrated by a set
of 200 uniform points are shown in Fig. 1. The Pareto-optimal
fronts of the three ZDT problems have been mathematically defined by Zitzler
et al. (2003).
A population of size 100 and an archive of size 100 are used in all four algorithms,
with 15 bits per variable in SPEA2 and NSGA-II and a real-number representation
in CMOPSO and MCPSO. The crossover method is uniform crossover and the crossover
rate is 0.85 in SPEA2 and NSGA-II. The number of evaluations is 20,000 for all
algorithms. The other parameters are selected according to the original literature
(Coello et al., 2004; Liu
et al., 2007; Zitzler et al., 2002; Deb
et al., 2002). Twenty independent simulation runs were performed
for each algorithm on each test problem in order to study the statistical performance.
Each test problem was initialized with a random population.
Table 1: Definitions of the test problems
Fig. 1: The Pareto-optimal fronts of the six test problems
Fig. 2: The statistical values of the coverage of the two sets obtained by MCPSO (C) and SPEA2 (S)
Notched box plots are used to illustrate the distribution of the results (Fig. 2). The notches represent a robust estimate of the uncertainty about the medians for box-to-box comparison. The box has lines at the lower quartile, median and upper quartile values. The whiskers are lines extending from each end of the box to show the extent of the rest of the data. Outliers (the crosses, +) are data with values beyond the ends of the whiskers.
In each plot, the left box represents the distribution of Ic (C, S) and the right box represents Ic (S, C).
Ic (C, S) denotes the ratio of the number of solutions obtained
by SPEA2 which are weakly dominated by the solutions obtained by MCPSO to the
total number of the solutions obtained by SPEA2 in a single run. Ic
(S, C) denotes the ratio of the number of solutions obtained by MCPSO which
are weakly dominated by the solutions obtained by SPEA2 to the total number
of the solutions obtained by MCPSO in a single run. The comparison between MCPSO
and SPEA2 in the coverage of two sets shows that the values of Ic
(C, S) and Ic (S, C) for DEB, FON and three ZDT problems are smaller
than 0.05. Therefore, only minority solutions are weakly dominated by each other.
The box plots of Ic (C, S) are higher than that of Ic
(S, C) in DEB, FON and three ZDT problems, while the box plot of Ic
(S, C) is higher than that of Ic (C, S) only in KUR.
Fig. 3: The statistical values of the coverage of the two sets obtained by MCPSO (C) and CMOPSO (M) in solving the six problems
Fig. 4: The statistical values of the coverage of the two sets obtained by MCPSO (C) and NSGA-II (N) in solving the six problems
This means MCPSO did better than SPEA2 in DEB, FON and the three ZDT problems,
while SPEA2 did better than MCPSO in KUR as far as coverage is concerned.
In each plot, the left box represents the distribution of Ic (C, M) and the right Ic (M, C).
The comparison between MCPSO and CMOPSO in the coverage of two sets shows that the values of Ic (C, M) are greater than those of Ic (M, C) in all six benchmark problems; this means MCPSO did better than CMOPSO.
The box plots of Ic (C, N) are higher than those of Ic (N, C) in the DEB, FON, KUR, ZDT1 and ZDT3 problems, while the box plot of Ic (N, C) is higher than that of Ic (C, N) only in ZDT2. This means MCPSO did better than NSGA-II in DEB, FON, KUR, ZDT1 and ZDT3, while NSGA-II did better than MCPSO in ZDT2, on the coverage of the two sets.
In each plot of Fig. 4, the left box represents the distribution of Ic (C, N) and the right Ic (N, C).
Fig. 5: The statistical values of the convergence metric obtained by MCPSO (C), CMOPSO (M), SPEA2 (S) and NSGA-II (N)
Fig. 6: The statistical values of the spacing metric obtained by MCPSO (C), CMOPSO (M), SPEA2 (S) and NSGA-II (N)
Figure 5 illustrates the convergence metric boxes over 20 independent runs for the six problems. The convergence metric represents the gap between the set of approximate Pareto-optimal solutions and the true Pareto-optimal fronts. All algorithms obtain values less than 0.05 in almost all of the 20 independent runs. MCPSO and NSGA-II did better than the other two algorithms, with MCPSO performing best overall among the four.
Figure 6 illustrates the spacing metric boxes over 20 independent runs for the six problems. The spacing metric measures how evenly the solutions are distributed along the discovered front. Figure 6 shows that, for DEB, FON and the three ZDT problems, all algorithms obtain values on the order of 10^-3. MCPSO, SPEA2 and NSGA-II did better than CMOPSO in FON, KUR and the three ZDT problems and MCPSO performed best among the four algorithms.
Overall, it can be concluded that all four algorithms can approximate the true Pareto-optimal fronts for the three low-dimensional problems (DEB, FON and KUR). According to the coverage performance metric, MCPSO did better than SPEA2 in DEB, FON and the three ZDT problems, better than CMOPSO in all six test problems and better than NSGA-II in DEB, FON, KUR, ZDT1 and ZDT3. On the convergence metric, MCPSO and NSGA-II did better than the other two algorithms and MCPSO performed best of the four; MCPSO also performed best of the four algorithms on the spacing metric.
In this study, a chaos-based PSO strategy is proposed by integrating a chaos optimization algorithm and the PSO algorithm to solve multi-objective optimization problems. Taking advantage of the ergodic and stochastic properties of chaotic variables, the chaos search is performed in the neighborhoods of particles to exploit the local solution space, while the motion of chaotic variables in their ergodic space is used to explore the whole solution space. Three performance metrics, the coverage of two sets, the convergence metric and the spacing metric, were used to demonstrate the effectiveness and efficiency of the proposed MCPSO strategy. This study attempted to combine promising aspects of different algorithms into an integrated approach that shows good performance on benchmark test problems. It represents only a first step in the investigation of solving MOPs using PSO combined with a chaos strategy. Future work will include investigating the performance of PSO on more complicated, higher-dimensional problems and designing better evolutionary algorithms.
The authors would like to thank the reviewers for their careful reading of this study and for their helpful and constructive comments. This study was supported by the Program for New Century Excellent Talents in University under grant No. NCET-08-0450.