In multi-objective decision-making environments, a general problem is to select a set of input conditions (the X's, the independent variables) that results in a product with a desirable set of outputs (the Y's, the response variables). This problem is, in fact, the simultaneous optimization of the response variables, each of which depends on a set of independent variables X1, ..., Xn. In this study we wish to select the levels of the independent variables in such a way that all the response variables are optimized.
For instance, in quality control environments the goal may be to find the levels of the input variables (quality characteristics) of the process so that the product (response) has the desired characteristics. Moreover, in Response Surface Methodology (RSM), we adjust the levels of the input variables to the point where the set of outputs is optimized. Multi-response optimization within the framework of RSM is an attempt to this end. Different optimization methods have been presented before, but here we present a hybrid method for optimizing statistical multi-response problems. The important characteristic of our method is that it uses simulation to generate the problem's input data and combines a genetic algorithm with local search to optimize the model in order to improve the quality of the final solution.
Classical methods, such as the bounded objectives method or a combination of the bounded objectives and lexicographic methods, have been used in multi-response optimization.
The loss function method was used by Pignatiello (1993), Winston (2003), Artiles-Leon (1996), Ross (1996) and Kapur and Cho (1996). This method is based on Taguchi's loss function; its basic idea is to transform the multi-objective problem into a single-objective one.
Later, Cheng et al. (2002) extended Zimmermann's method for optimizing statistical multi-response problems, but the length of the solution procedure was the weakness of their method. Pasandideh and Niaki (2006) modeled statistical multi-response optimization problems using the desirability function and applied four different GA methods, with input data generated by simulation, to solve them.
Kim and Rhee (2004) proposed a method based on the desirability function and GA, and applied it to optimize a welding process. Khoo and Chen (2001) combined the genetic algorithm with RSM and used the genetic algorithm in three different scenarios to optimize the parameters of one- and two-response problems.
Avello et al. (2004) used simulated annealing to optimize multi-response problems. At each stage of the solution they used a strategy for selecting a basic function, so that at each stage one function was chosen as the basic objective and the others were treated as constraints. Ozcelik and Erzurumlu (2005) presented an optimization method using RSM and GA to minimize the warpage of thin shell plastic parts. A predictive model for the warpage was built with RSM based on a three-level DOE, and this model was then used with GA to find the optimum values of the process parameters.
To analyze several quality attributes simultaneously, Koksoy and Yalcinoz (2006) used RSM and compared its results with the solution of a GA method. Correia et al. (2005) offered a comparison between RSM and GA in the optimization of a welding process (GMAW). Suresh et al. (2002) presented a second-order model for predicting the surface roughness of steel parts; their study optimizes the roughness of the parts by GA. Oktem et al. (2005) offered a GA method using RSM, in which RSM provides an effective model for determining the parameter levels and GA optimizes these levels.
The problem has also been solved using a Pareto approach, assigning different weights to prioritize the objectives. However, when the problem becomes large or its structure is complex, this goal programming (GP) approach becomes impractical. Chao-Fang et al. (2007) offered a fuzzy goal programming approach in which the decision maker (DM) assigns different priorities to the objectives. Gen et al. (1993) used a 0-1 fuzzy goal programming approach to solve optimization problems. Several works have combined genetic algorithms with goal programming. Mirrazavi et al. (2001) analyzed the use of genetic algorithms to solve integer goal programming models in a general framework. Chen and Liu (1994) used a genetic algorithm to solve a weighted goal programming model.
The following notation will be used throughout this study:
Uj = upper bound of objective function Zj
Lj = lower bound of objective function Zj
λj = access level to the optimum of objective function Zj
Wj = importance (weight) of objective function Zj from the viewpoint of the decision maker
μ(Zj) = fuzzy membership function of objective function Zj
Zj* = optimal value of objective function Zj
Xi* = optimal value of variable Xi
Δj = tolerance of objective function Zj
Definition 1: The mathematical model of the multi-objective problem is defined as follows:
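A general form of such a model, sketched here with S denoting the feasible region (our notation, not taken from the original formulation), is:

```latex
\begin{aligned}
\operatorname{opt}\quad & Z(X) = \big(Z_1(X),\, Z_2(X),\, \dots,\, Z_m(X)\big)\\
\text{s.t.}\quad & X = (X_1, \dots, X_n) \in S .
\end{aligned}
```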
Definition 2: If, in a multi-objective problem with m objectives, each objective function is solved independently, we obtain m independent optimal solutions. By substituting each optimal solution into the other objective functions, we obtain a lower and an upper limit for each objective function. For the jth objective function, we solve the following problem separately (j = 1, 2, ..., m). Table 1 shows the results.
Zj(Xi*) = the value of the jth objective function evaluated at the optimal variables of the ith objective-function problem
Xj,i* = the optimal value of variable Xj in the ith objective-function problem
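With this notation, the limits used in the rest of the study can be read off Table 1 in the usual payoff-table manner (a sketch; Xi* denotes the optimal solution of the ith single-objective problem):

```latex
U_j = \max_{i=1,\dots,m} Z_j(X_i^{*}), \qquad
L_j = \min_{i=1,\dots,m} Z_j(X_i^{*}), \qquad j = 1,\dots,m .
```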
Definition 3: For each objective function there is a fuzzy membership function. Figure 1 shows the fuzzy membership function over the range of the objective function.
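For a larger-the-better objective, a common linear choice for this membership function over the range [Lj, Uj] (a sketch consistent with Fig. 1; the exact curve used may differ) is:

```latex
\mu(Z_j) =
\begin{cases}
0, & Z_j \le L_j,\\[4pt]
\dfrac{Z_j - L_j}{U_j - L_j}, & L_j < Z_j < U_j,\\[4pt]
1, & Z_j \ge U_j .
\end{cases}
```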
In this study we consider the following assumptions:
- All the factors that make up the input of the problem are the independent variables X1, X2, ..., Xn.
- The lower and upper bounds of the independent variables are -1 and 1, where Xi is a coded variable such that -1 ≤ Xi ≤ 1.
- The outputs of the problem are the response variables Y1, Y2, ..., Yk. For every objective function Zj we have Lj ≤ Zj ≤ Uj.
The one-sided or two-sided transformation for each response depends on the nature of the objective of the problem. The mathematical model of the problem becomes:
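Under the assumptions above, the model can be sketched in the following general form, where gj denotes the (one- or two-sided) transformation of the responses into the jth objective; gj is our placeholder notation:

```latex
\begin{aligned}
\operatorname{opt}\quad & Z_j(X) = g_j\big(Y_1(X), \dots, Y_k(X)\big), \qquad j = 1,\dots,m\\
\text{s.t.}\quad & -1 \le X_i \le 1, \qquad i = 1,\dots,n,\\
& L_j \le Z_j \le U_j, \qquad j = 1,\dots,m .
\end{aligned}
```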
Theorem 1: Consider the following response-variables problem:
Assuming identical importance for the objectives and an identical access level to the optimal point of each objective, the decision maker's desirable solution will be found by solving the following mathematical programming model:
dj+ = positive deviation of the function Zj/Δj
dj- = negative deviation of the function Zj/Δj
λ = access level to the optimum of any objective function
Proof: In Fig. 1 we have:
For the constraints of section a, suppose that:
Now we have:
For the constraints of section b, suppose that:
Now we have:
Now we combine constraints (1) and (2) and finally:
Corollary 1: If the access level to the optimum of each objective is different and the objectives are not identically important from the viewpoint of the decision maker, then the decision maker's desirable solution will be obtained by solving the following mathematical programming model:
Proof: This is the result of Theorem 1 with the following assumptions:
Corollary 2: If the response variables are of the nominal-the-best type (two-sided transformation), the objectives are not identically important from the viewpoint of the decision maker and the access level to the optimum of each objective is different, then the decision maker's desirable solution will be obtained by solving the following mathematical programming model, where:
Tj = Nominal value of objective function Zj
Proof: This is the result of Theorem 1 with the following assumptions (Figure 2 shows the two-sided fuzzy membership function):
αj ≤ μ(Zj)
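A common piecewise-linear form of this two-sided membership function around the nominal value Tj (a sketch consistent with Fig. 2, assuming a symmetric tolerance Δj on either side of Tj) is:

```latex
\mu(Z_j) =
\begin{cases}
1 - \dfrac{T_j - Z_j}{\Delta_j}, & T_j - \Delta_j \le Z_j \le T_j,\\[4pt]
1 - \dfrac{Z_j - T_j}{\Delta_j}, & T_j < Z_j \le T_j + \Delta_j,\\[4pt]
0, & \text{otherwise}.
\end{cases}
```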
Corollary 3: If there are k response variables of the larger-the-better type (one-sided transformation) and m−k response variables of the nominal-the-best type (two-sided transformation), and the goals are not identically important from the viewpoint of the decision maker, then the desired solution from the viewpoint of the decision maker will be obtained by solving the following mathematical programming model:
The proof follows from Corollaries 1 and 2. Considering this proof, we propose an algorithmic procedure for calculating the decision maker's desirable solution:
Step 1: Define the following response-variables problem:
Step 2: Transform the response-variables problem as follows:
Step 3: Solve the problem for each objective function separately and obtain the solutions. Substitute the solutions into the objective functions to obtain, for each objective function, a lower limit (Lj) and an upper limit (Uj) as the best and worst cases, and then compute Δj = Uj − Lj.
Step 4: Obtain the objective weights Wj from the decision maker and then solve the following mathematical programming model:
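As a hedged sketch of what such a weighted fuzzy goal programming model typically looks like (our reconstruction from the notation above, not necessarily the exact model of Step 4), one may write:

```latex
\begin{aligned}
\max\quad & \sum_{j=1}^{m} W_j\, \lambda_j\\
\text{s.t.}\quad & \lambda_j \le \mu\big(Z_j(X)\big), \qquad j = 1,\dots,m,\\
& 0 \le \lambda_j \le 1, \qquad -1 \le X_i \le 1, \qquad i = 1,\dots,n .
\end{aligned}
```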
Now, in order to obtain an optimum or near-optimum solution, we use a metaheuristic algorithm, a genetic local search (GLS), which searches for the best solution in an extended space.
Genetic local search algorithm: The genetic algorithm is a stochastic optimization algorithm suitable for complex problems with an unknown search space. It is an evolutionary algorithm inspired by the science of heredity. It differs from other techniques in that it starts from a population of randomly generated chromosomes.
Initial conditions: The initial information required to start the GA is as follows:
- Population size: the number of chromosomes or scenarios kept in each generation, denoted by N.
- Number of replications: the number of simulated iterations of each scenario, denoted by n.
- Crossover rate: the probability of performing crossover in the GA, denoted by Pc.
- Mutation rate: the probability of performing mutation in the GA, denoted by PM.
Chromosome: In the GA, a chromosome or scenario is defined as a set of values for x1, ..., xp, which constitutes a solution to the problem. Figure 3 shows a chromosome.
Initial population: Generating an initial population of solutions is the first step of the optimization process. The number of solutions is N, and we select solutions randomly so as to cover a wide range of the solution space. We therefore define the following parameters to simulate each gene:
xij = the value of input variable i in scenario j, i = 1, ..., p; j = 1, ..., N
li = lower limit of input variable i
ui = upper limit of input variable i
α = random number between 0 and 1
If α has the value 0, the corresponding gene is at its upper bound, and if it has the value 1, the gene is at its lower bound.
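As an illustration, the gene-simulation rule described above (α = 0 gives the upper bound, α = 1 the lower bound) can be implemented as in the following sketch; the function and variable names are ours, not from the original study:

```python
import random

def simulate_gene(lower, upper):
    """Simulate one gene: alpha = 0 -> upper bound, alpha = 1 -> lower bound."""
    alpha = random.random()                      # random number between 0 and 1
    return alpha * lower + (1.0 - alpha) * upper

def initial_population(N, lower_bounds, upper_bounds):
    """Generate N random chromosomes, one gene per input variable."""
    return [[simulate_gene(l, u) for l, u in zip(lower_bounds, upper_bounds)]
            for _ in range(N)]

# Example: four coded variables in [-1, 1], population size 20
population = initial_population(20, [-1.0] * 4, [1.0] * 4)
```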
Crossover, mutation and reproduction: The genetic algorithm starts with a population of chromosomes and produces new chromosomes using the crossover, mutation and reproduction operators. In the crossover process, pairs of chromosomes are mated to create offspring; we implement this by randomly selecting pairs of chromosomes from the population with probability Pc. In this study we apply two-point crossover. To implement the mutation operation, we use the pairwise-exchange operator, in which two genes are chosen at random and exchanged. The reproduction operator chooses chromosomes with better fitness values and places them in a new population for the next generation.
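A minimal sketch of the two operators described above, assuming chromosomes are lists of real-valued genes (the names are illustrative):

```python
import random

def two_point_crossover(parent1, parent2):
    """Two-point crossover: swap the segment between two random cut points."""
    a, b = sorted(random.sample(range(1, len(parent1)), 2))
    child1 = parent1[:a] + parent2[a:b] + parent1[b:]
    child2 = parent2[:a] + parent1[a:b] + parent2[b:]
    return child1, child2

def pairwise_exchange_mutation(chromosome):
    """Pairwise-exchange mutation: pick two genes at random and swap them."""
    mutant = chromosome[:]
    i, j = random.sample(range(len(mutant)), 2)
    mutant[i], mutant[j] = mutant[j], mutant[i]
    return mutant
```

Each selected pair of parents undergoes crossover with probability Pc and each offspring is mutated with probability PM, as stated above.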
Evaluation: After producing the new chromosomes with the crossover and mutation operators, we evaluate each chromosome with a suitable fitness function and use the result to reproduce the next generation.
Chromosome selection: We have chosen the roulette-wheel approach from among the different selection methods. In this method the choice of parents depends on their fitness values, so better chromosomes have a greater chance of being selected.
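A sketch of roulette-wheel selection (illustrative names; it assumes non-negative fitness values, with larger values being better):

```python
import random

def roulette_wheel_select(population, fitnesses):
    """Select one chromosome with probability proportional to its fitness."""
    total = sum(fitnesses)
    if total == 0:                       # degenerate case: fall back to a uniform choice
        return random.choice(population)
    pick = random.uniform(0, total)
    cumulative = 0.0
    for chromosome, fitness in zip(population, fitnesses):
        cumulative += fitness
        if cumulative >= pick:
            return chromosome
    return population[-1]                # numerical safety net
```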
Parameter tuning: Several factors affect the ability of the algorithm to reach a suitable and desirable solution, and the parameter values are one of them. Since different parameter combinations lead to different solutions, we try to find the most appropriate set of GA parameters. This is called parameter tuning, which is considered the most important and perhaps the most difficult task. Settling on a fixed value for each GA parameter is very challenging for large test instances because of the multimodality and nonlinearity of the various objective functions.
To achieve this goal, we first carry out an extensive experiment to determine the effective parameters using design of experiments (DOE). Then an empirical study of the various possible parameter combinations is conducted to define the best levels of those parameters.
Solution algorithm: Our proposed algorithm comprises two cycles: a principal cycle and an internal cycle. The outputs of the internal cycle, namely the lower and upper limits of each response, are the inputs of the principal cycle. One of the best features of the proposed algorithm is its long-term memory. In each iteration, a number of chromosomes with low fitness values are saved in this memory, which is used to maintain high diversity in the solution space. Another feature of the algorithm is its shock operator. Since a small search space generally leads to quick convergence to a local optimum, we use the shock operator to prevent the algorithm from getting stuck in local optima; this operator strongly influences the improvement of the final solution. If the incumbent solution has not improved after a number of iterations, the operator simulates some new chromosomes and inserts them into the population, extending the search space through the generated diversity. The integration of the genetic algorithm with local search is another feature of the proposed algorithm. The algorithm also has a short-term memory, which is responsible for keeping the best solutions found. It is updated in two situations: first, when the current iteration's solution is better than the solutions in this memory and, second, when a fitness value among the neighbors is better than the solutions in the memory. The pseudo-code of the proposed algorithm is shown in Fig. 4.
Fig. 4: Proposed algorithm's pseudo-code
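To make the structure of the principal cycle concrete, the following sketch puts the pieces together (initial population, crossover, mutation, local search, long- and short-term memories and the shock operator). It is our reading of the description above rather than the authors' exact pseudo-code; fitness and simulate_chromosome are user-supplied placeholders, and the crossover, mutation and gene-simulation helpers are those sketched earlier:

```python
import random

def local_search(chromosome, fitness, tries=10):
    """Simple neighborhood search: keep the best pairwise-exchange neighbor."""
    best = chromosome
    for _ in range(tries):
        neighbor = pairwise_exchange_mutation(best)
        if fitness(neighbor) > fitness(best):
            best = neighbor
    return best

def genetic_local_search(fitness, simulate_chromosome, N, max_iter,
                         pc, pm, shock_patience, elite_size=5, memory_size=10):
    """Hedged sketch of the described GLS: GA plus local search, with a long-term
    memory (diversity), a short-term memory (best solutions) and a shock
    operator that injects freshly simulated chromosomes on stagnation."""
    population = [simulate_chromosome() for _ in range(N)]
    long_term, short_term = [], []            # diversity memory / best-solution memory
    best, stagnation = None, 0

    for _ in range(max_iter):
        ranked = sorted(population, key=fitness, reverse=True)
        long_term.extend(ranked[-elite_size:])            # save low-fitness chromosomes

        # crossover and mutation, using the operators defined earlier
        offspring = []
        while len(offspring) < N:
            p1, p2 = random.sample(ranked[: max(2, N // 2)], 2)
            if random.random() < pc:
                p1, p2 = two_point_crossover(p1, p2)
            if random.random() < pm:
                p1 = pairwise_exchange_mutation(p1)
            offspring.extend([p1, p2])

        # local search around the best offspring, then update the memories
        candidate = local_search(max(offspring, key=fitness), fitness)
        if best is None or fitness(candidate) > fitness(best):
            best, stagnation = candidate, 0
            short_term = ([candidate] + short_term)[:memory_size]
        else:
            stagnation += 1

        # shock operator: on stagnation, inject freshly simulated chromosomes
        if stagnation >= shock_patience:
            offspring[-elite_size:] = [simulate_chromosome() for _ in range(elite_size)]
            stagnation = 0

        population = offspring[:N]
    return best
```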
Consider the following example with four design variables: the ammonia (X1), the thickness of the lead and expulsion alloys on the radiator pipe (X2), the temperature (X3) and the percentage of expulsion in the alloy of the radiator pipe (X4). The responses are corrosion (Y1) and adhesiveness (Y2). The design is a central composite design. The required data are shown in Table 2.
The correlation coefficient and the covariance matrix of the responses (Y1, Y2) are as follows:
Therefore the responses Y1 and Y2 are highly correlated. First, we coded the above data and then fitted a second-order model for each response. The response surface curves for Y1 and Y2 are as follows:
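Each fitted model is of the standard second-order form in the four coded variables (shown here in general form; the estimated coefficients are specific to each response and are not reproduced here):

```latex
\hat{Y} = \beta_0 + \sum_{i=1}^{4}\beta_i x_i + \sum_{i=1}^{4}\beta_{ii} x_i^{2}
        + \sum_{i<j}\beta_{ij} x_i x_j .
```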
Table 2: Sampling data
Table 3: Factorial levels
Table 4: Optimal results of the existing and proposed methods
The studied multi-response problem is as follows:
Four factors (population size, number of iterations, crossover rate and mutation rate) are considered as the effective parameters, with the lower and upper limits shown in Table 3. A central composite design is used to tune these parameters (effective factors). The results after optimization are presented below:
We also carried out a comparison between the proposed and existing algorithms, which is shown in Table 4. As the table shows, our proposed method significantly outperforms the other methods and provides a better solution.
In this study, we modeled a statistical multi-response optimization problem through the fuzzy goal programming approach. Input data were generated by a simulated system, and we presented a methodology that combines a genetic algorithm with local search. Our proposed algorithm has some distinctive characteristics, such as long- and short-term memories and a shock operator. We further applied the central composite design approach to tune the parameters that affect the algorithm's performance. Finally, the results were compared with other recently reported methods in the literature, and we showed that our hybrid GLSA produced better performance. For future research in this area, we recommend the following:
- Applying other methods for generating input data, such as neural networks instead of simulation, which may produce data closer to real-life problems.
- Using more factors for evaluation and using MCDM techniques to measure the performance of the algorithm.
- Using different crossover and mutation operators in the GA, which may lead to different conclusions.
- Using other metaheuristic methods, which may be more efficient for this problem.
- Performing other methods for generating benchmark problems.