INTRODUCTION
Various optimization methods and algorithms have been developed over the past
few years for solving real-world optimization problems. Some of the most well-known
optimization algorithms are Particle Swarm Optimization (PSO) (Kennedy
and Eberhart, 2001), Artificial Immune Systems (AIS) (De
Castro and Zuben, 2002) and Genetic Algorithms (GA) (Chen
et al., 2001). PSO, AIS and GA are widely used for solving complex
optimization problems, but each method has its own strengths and limitations.
GA was first proposed by John Holland in the early 1970s. While conventional algorithms
are derived from higher-order statistics or the gradient of a single measurement
function, GA is a random search algorithm motivated by nature (Chen
et al., 2001). As a global optimization technique,
it uses a coding scheme to act on numerical strings represented by either binaries
or decimals. The main strengths of the evolutionary process in GA are the
crossover and mutation operators. Furthermore, GA provides an efficient
search method and has been used in many practical instances of optimization
and decision-making problems (Michalewicz, 1996). Conversely,
GA is prone to premature convergence at local optima. This weakness is
generally caused by poor diversity of the population.
PSO makes use of the social behaviour (Kennedy and Eberhart,
2001; Pant et al., 2008) of individuals living
together in groups. Each individual tries to imitate better group members
in order to improve itself; in this way, the optimization procedure is carried
out by the group members, as described by Pant et al.
(2008). The performance of PSO depends on the way the particles move in the
search space with a velocity that is constantly updated. PSO has very few
parameters to tune, which makes it exceptionally easy to implement and gives
it a high speed of convergence. Nevertheless, as reported by Angeline (1998) and Riget
and Vesterstrøm (2002), the performance of PSO degrades as the
number of generations increases: it tends to converge prematurely, given
the aptitude constraint of the model and the diminished diversity of the search
space.
Alternatively, AIS, a new branch of computational intelligence, is an intelligent
problem-solving technique that finds applications in optimization, computer
security, data clustering, pattern recognition and even fault tolerance (Dasgupta,
2006). The natural immune system uses a diversity of evolutionary and adaptive
techniques to protect organisms from foreign pathogens and misbehaving cells
in the body (De Castro and Zuben, 2005). AIS seeks to capture
some aspects of the natural immune system in a computational framework for solving
engineering problems (Glickman et al., 2005).
Moreover, immune-based applications rely on immune memory. Thus, diversity
is preserved during the process of evolution and convergence to the global
optimum is assured. While convergence to the global optimum is ascertained, the
rate of convergence is relatively slow and should be improved. The clonal
selection algorithm used here in the simulation applies the principles
of clonal expansion and affinity maturation as the main forces of the evolutionary
process (De Castro and Timmis, 2002a, b).
As a result, an improved AIS algorithm based on PSO is established in this study to improve diversity and convergence. This new algorithm, the Hybrid AIS (HAIS), is expected to enhance global search ability and to avoid being trapped in local optima when a small population size is used. In addition, the ease of implementation is retained in the proposed algorithm.
GENETIC ALGORITHM METHODOLOGY
Genetic Algorithm (GA) is a search technique that maintains a representation of the
problem states (a set of chromosomes) together with a set of
operations to move through the search space. Each chromosome represents an individual
solution (genes) to the problem. The set of individual solutions, or genes, forms
a population. Genes in the population are improved across generations through
a set of operations that GA applies during the search process. During each generation,
the genes go through the processes of selection, crossover and mutation (Emmeche,
1994).
In selection, an objective function (fitness function) is used to assess the
quality of a solution; the fittest solutions from each generation are kept.
Crossover generates new solutions from a set of selected members
of the current population by exchanging genetic material between two
single-chromosome parents. Lastly, mutation causes a sudden change in chromosomes
in an unanticipated manner: some values of a chromosome are altered
by adding random values to the current values, thereby producing a different
solution. The mutation operation is an effective technique for escaping the local optima
trap into which state-space search algorithms may fall. The operation of
a standard GA is summarized as follows (Michalewicz, 1996):
Step 1: Generate a random initial population of individuals
Step 2: Calculate the fitness values of the individuals in the current population
Step 3: Select individuals for reproduction
Step 4: Apply the crossover and mutation operators
Step 5: Compute the fitness values of the individuals
Step 6: Select the best individuals to create the new population
Step 7: Steps 3 to 6 are repeated until a predefined stopping criterion is attained
In the present study, the classical GA is used, with binary codification, scattered
crossover, one-individual elitism and stochastic uniform selection. The initial
population size and the number of iterations are both fixed at 50.
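As a rough sketch of Steps 1 to 7 above (not the exact configuration used in this study: this illustration substitutes binary tournament selection for stochastic uniform selection, and all parameter values are assumptions), a minimal binary-coded GA for minimization could look like this:

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=50, generations=50,
                      crossover_rate=0.8, mutation_rate=0.01):
    # Step 1: random initial population of binary-coded individuals
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]

    def select(scored):
        # Binary tournament: keep the fitter of two random individuals
        a, b = random.sample(scored, 2)
        return (a if a[1] < b[1] else b)[0][:]

    for _ in range(generations):
        # Steps 2 and 5: evaluate the fitness of the current individuals
        scored = [(ind, fitness(ind)) for ind in pop]
        # One-individual elitism: the best solution always survives
        elite = min(scored, key=lambda s: s[1])[0][:]
        children = [elite]
        while len(children) < pop_size:
            # Step 3: select parents for reproduction
            p1, p2 = select(scored), select(scored)
            # Step 4: uniform crossover, then bit-flip mutation
            child = ([random.choice(g) for g in zip(p1, p2)]
                     if random.random() < crossover_rate else p1)
            child = [1 - b if random.random() < mutation_rate else b
                     for b in child]
            children.append(child)
        # Step 6: the offspring become the new population
        pop = children
    return min(pop, key=fitness)
```

A decoding step from the bit string to the real-valued search space of each test function would be added in practice.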
PARTICLE SWARM OPTIMIZATION
Particle Swarm Optimization (PSO) was originally proposed by Kennedy
and Eberhart (1995) as a simulation of the social behaviour of organisms that
live and interact within large groups. The essence of PSO is that particles
move through the search space with velocities that are dynamically adjusted
according to their historical behaviours. This mimics the social behaviour of
animals such as bird flocking and fish schooling. With several alterations to
the original model, the algorithm can optimize complex functions based on
the concept of Swarm Intelligence methods (Kennedy and Eberhart,
2001).
The PSO algorithm starts with a group of random particles that search for optima by updating each generation. Each particle is represented as a volume-less point in the n-dimensional search space. The ith particle is denoted as X_{i} = (x_{i1}, x_{i2}, ..., x_{in}). At each generation, each particle is updated by following two best values: the best solution it has achieved so far (mbest) and the global best value (gbest) obtained so far by any particle in the population.
With the inclusion of the inertia factor, ω, the equations are (Shi
and Eberhart, 1998):

v_{in} = ω·v_{in} + α_{1}·rnd()·(mbest_{in} − x_{in}) + α_{2}·rnd()·(gbest_{n} − x_{in})  (1)

x_{in} = x_{in} + v_{in}  (2)

where, rnd() is a random number independently generated within the range [0,1] and α_{1} and α_{2} are two learning factors which control the influence of the individual’s knowledge and that of the neighbourhood, respectively. The PSO optimization algorithm can be written as follows:
Step 1: Generate a random initial swarm of particles, assigning each one a random position and velocity
Step 2: Compute the fitness values of the N particles
Step 3: Update the values of the best position of each particle and the best position found by the swarm
Step 4: Update the position and the velocity of every particle according to Eq. 1 and 2
Step 5: Steps 2 to 4 are repeated until a predefined stopping condition is reached
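The velocity and position updates of Eq. 1 and 2 can be sketched in code as follows; this is a minimal illustration for minimization, with the inertia factor, learning factors and bound handling chosen as assumptions rather than taken from this study:

```python
import random

def pso(f, dim, bounds, n_particles=50, iters=50, w=0.7, a1=1.5, a2=1.5):
    lo, hi = bounds
    # Random initial positions and zero velocities
    X = [[random.uniform(lo, hi) for _ in range(dim)]
         for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                 # best position of each particle
    pbest_val = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # best position of the swarm
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Eq. 1: velocity update with inertia factor w
                V[i][d] = (w * V[i][d]
                           + a1 * random.random() * (pbest[i][d] - X[i][d])
                           + a2 * random.random() * (gbest[d] - X[i][d]))
                # Eq. 2: position update, clamped to the search bounds
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            val = f(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val
```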
ARTIFICIAL IMMUNE SYSTEM: CLONAL SELECTION FOR OPTIMIZATION
Artificial Immune Systems (AIS) are inspired by theoretical immunology and
observed immune functions, principles and models, which are applied to engineering
problem solving (De Castro and Timmis, 2002a, b).
The clonal selection algorithm is a branch of AIS with principles extracted
from clonal expansion and affinity maturation (De Castro
and Zuben, 2002). The clonal selection theory explains that when an antigen
(Ag) is detected, antibodies (Ab) that best recognize this Ag will proliferate
by cloning. This immune response is specific to each Ag.
The immune cells reproduce in tandem with a replicating Ag until they
succeed in recognizing and fighting this Ag. Some of the newly cloned
cells differentiate into plasma cells and memory cells. The plasma
cells produce Ab and undergo mutation that promotes their genetic variation.
The memory cells are responsible for the immunologic response to future Ag
invasions. Subsequently, the selection mechanism keeps the cells with the
best affinity to the Ag in the next population. The process of a standard clonal
selection algorithm is described as follows (De Castro and Timmis,
2002a, b):
Step 1: Generate a random initial population of Ab
Step 2: Compute the fitness of each Ab
Step 3: Generate clones by cloning all cells in the Ab population
Step 4: Mutate the clone population to produce a mature clone population
Step 5: Evaluate the affinity values of the clone population
Step 6: Select the best Ab to compose the new Ab population
Step 7: Steps 3 to 6 are repeated until a predefined stopping condition is reached
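The steps above can be sketched for a real-valued minimization problem as follows; the mutation scheme (Gaussian perturbation whose rate grows with rank, so lower-affinity antibodies mutate more) and all parameter values are illustrative assumptions, not the exact operators of this study:

```python
import random

def clonal_selection(f, dim, bounds, pop_size=50, generations=50,
                     clone_factor=2, mut_prob=0.01, mut_scale=0.5):
    lo, hi = bounds
    # Step 1: random initial antibody population
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Step 2: fitness of each antibody (lower value = higher affinity)
        ranked = sorted(pop, key=f)
        clones = []
        # Step 3: clone every cell in the antibody population
        for rank, ab in enumerate(ranked):
            for _ in range(clone_factor):
                # Step 4: hypermutation, stronger for low-affinity antibodies
                rate = mut_prob * (1 + rank)
                clones.append([min(hi, max(lo, x + random.gauss(0, mut_scale)))
                               if random.random() < rate else x for x in ab])
        # Steps 5 and 6: evaluate the clones, keep the best antibodies
        pop = sorted(pop + clones, key=f)[:pop_size]
    return min(pop, key=f)
```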
Hybrid artificial immune system: The good traits of PSO, especially
its convergence speed and coding simplicity, are incorporated into AIS.
In turn, AIS is able to prevent the population from being trapped in a
local optimum. The process of the HAIS is characterized in pseudocode form
in Fig. 1.
The working of HAIS is explained below:
Step 1: Select the best particles, N_{1} (consisting of N/2 particles), from PSO
Step 2: Generate a random initial population of Ab, N_{2} (consisting of the other N/2 particles, regarding every particle as an antibody)
Step 3: Combine N_{1} with N_{2} and compute the fitness of each Ab
Step 4: Generate clones by cloning all cells in the Ab population
Step 5: Mutate the clone population to produce a mature clone population
Step 6: Evaluate the affinity values of the clone population
Step 7: Select the best Ab to compose the new Ab population
Step 8: Steps 4 to 7 are repeated until a predefined stopping condition is reached
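A compact sketch of the hybrid scheme, assuming a short PSO phase that seeds half of the antibody population before the clonal selection loop (the phase lengths, learning factors and mutation scheme here are illustrative assumptions, not the exact settings of this study):

```python
import random

def hais(f, dim, bounds, n=50, pso_iters=20, ais_gens=30,
         w=0.7, a1=1.5, a2=1.5, clone_factor=2, mut_prob=0.01):
    lo, hi = bounds
    rnd = random.random
    # PSO phase: inertia-weight velocity/position updates (Eq. 1 and 2)
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]
    pval = [f(x) for x in X]
    for _ in range(pso_iters):
        gbest = min(pbest, key=f)
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d] + a1 * rnd() * (pbest[i][d] - X[i][d])
                           + a2 * rnd() * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            v = f(X[i])
            if v < pval[i]:
                pbest[i], pval[i] = X[i][:], v
    # Step 1: keep the best N/2 particles from PSO
    n1 = sorted(pbest, key=f)[: n // 2]
    # Step 2: generate the other N/2 antibodies at random
    n2 = [[random.uniform(lo, hi) for _ in range(dim)]
          for _ in range(n - n // 2)]
    # Step 3: combined antibody population
    pop = n1 + n2
    # Steps 4 to 8: clone, hypermutate, reselect the best antibodies
    for _ in range(ais_gens):
        clones = []
        for rank, ab in enumerate(sorted(pop, key=f)):
            for _ in range(clone_factor):
                rate = mut_prob * (1 + rank)  # worse rank -> more mutation
                clones.append([min(hi, max(lo, x + random.gauss(0, 0.3)))
                               if rnd() < rate else x for x in ab])
        pop = sorted(pop + clones, key=f)[:n]
    return min(pop, key=f)
```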
MATHEMATICAL FUNCTIONS AND ENGINEERING PROBLEM
In order to compare the performance of GA, AIS and HAIS, four mathematical
test functions were used (Suganthan et al., 2005).
These test functions provide a firm benchmark for testing the efficiency
of global optimization algorithms. The following subsections discuss the details
of each test function used in this study. In addition,
an engineering problem is presented at the end of the section.

Fig. 1: 
Pseudocode of the HAIS Algorithm 
Rastrigin’s function: Rastrigin’s function is mathematically
defined as in Eq. 3:

f(x) = Σ_{i=1}^{n} (x_{i}^{2} − 10cos(2πx_{i}) + 10)  (3)

where, −5.12≤x_{i}≤5.12. The global minimum is located at the origin and its function value is zero. The two-dimensional Rastrigin’s function is shown in Fig. 2.
Griewangk’s function: Griewangk’s function is mathematically defined as in Eq. 4:

f(x) = (1/4000)·Σ_{i=1}^{n} x_{i}^{2} − Π_{i=1}^{n} cos(x_{i}/√i) + 1  (4)

where, −600≤x_{i}≤600. The global minimum is located at the origin and its function value is zero. The two-dimensional Griewangk’s function is shown in Fig. 3.

Fig. 2: 
A two-dimensional plot of Rastrigin’s function 

Fig. 3: 
A two-dimensional plot of Griewangk’s function 
Rosenbrock’s function: Rosenbrock’s function is mathematically
defined as in Eq. 5:

f(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_{i}^{2})^{2} + (1 − x_{i})^{2}]  (5)

where, −2.048≤x_{i}≤2.048. The global minimum is located at (1,1) and its function value is zero. The two-dimensional Rosenbrock’s function is shown in Fig. 4.

Fig. 4: 
A two-dimensional plot of Rosenbrock’s function 

Fig. 5: 
A two-dimensional plot of Ackley’s function 
Ackley’s function: Ackley’s function is mathematically defined
as in Eq. 6:

f(x) = −20·exp(−0.2·√((1/n)·Σ_{i=1}^{n} x_{i}^{2})) − exp((1/n)·Σ_{i=1}^{n} cos(2πx_{i})) + 20 + e  (6)

where, −32.768≤x_{i}≤32.768. The global minimum is located at the origin and its function value is zero. The two-dimensional Ackley’s function is shown in Fig. 5.
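The four benchmarks (Eq. 3 to 6) are simple to express in code; the definitions below follow the standard forms of these functions:

```python
import math

def rastrigin(x):                       # Eq. 3, -5.12 <= x_i <= 5.12
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def griewangk(x):                       # Eq. 4, -600 <= x_i <= 600
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 1

def rosenbrock(x):                      # Eq. 5, -2.048 <= x_i <= 2.048
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

def ackley(x):                          # Eq. 6, -32.768 <= x_i <= 32.768
    n = len(x)
    s1 = sum(xi * xi for xi in x) / n
    s2 = sum(math.cos(2 * math.pi * xi) for xi in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e
```

All four evaluate to zero at their global minimum, which makes the best fitness values in Table 1 directly comparable across functions.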

Fig. 6: 
Compound gear train 
Engineering problem: design of a gear train: The compound gear train
problem (Kannan and Kramer, 1994) shown in Fig.
6 was presented by Sandgren (1988). It is to be
designed such that the gear ratio is as close as possible to 1/6.931. Each gear
must have between 12 and 60 teeth and, since the number of teeth must be an
integer, the variables must be integers. The mathematical model is stated
as in Eq. 7:

f(x) = (1/6.931 − (x_{1}x_{2})/(x_{3}x_{4}))^{2}  (7)

where, x_{1}, x_{2}, x_{3} and x_{4} are the numbers of teeth of the gears and 12≤x_{i}≤60.
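The objective of Eq. 7 is a one-line function of the four integer teeth counts; the pairing of gears in the ratio below follows the common Sandgren formulation, which is an assumption here, and the teeth counts used in the check are a known near-optimal design from the literature on this problem:

```python
def gear_error(x):
    """Eq. 7: squared error between the gear ratio and 1/6.931."""
    x1, x2, x3, x4 = x
    return (1.0 / 6.931 - (x1 * x2) / (x3 * x4)) ** 2

# The search space is only 49**4 (about 5.8 million) integer combinations,
# so exhaustive enumeration is feasible; (19, 16, 43, 49) is a known
# near-optimal design with an error on the order of 1e-12.
```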
RESULTS AND DISCUSSION
The comparison of the simulation performance of GA, AIS and HAIS was carried
out towards the end of March 2010 using a computer in the Computer Lab., with
an AMD Phenom 9600 B Quad-Core CPU running at 2.30 GHz, 2 GB of RAM and the Windows
Vista Enterprise operating system. For both HAIS and AIS, which are based
on the clonal selection algorithm (De Castro and Zuben, 2002), the
mutation probability is 0.01 and the clone size factor is 2, while the initial population
size and number of iterations are fixed at 50. The simulation results in terms of
best fitness value, mean best fitness and standard deviation are shown in Table
1, whereas the performance of GA, AIS and HAIS is compared in
Fig. 7-10 for each of the fitness functions used.
From Fig. 7-10, GA and HAIS converged
faster than AIS, while HAIS gave the best fitness values, closest to the
global minimum, among the three. The increase in HAIS convergence speed and accuracy
clearly shows the benefit of injecting PSO qualities to improve performance.
HAIS was found to be better at obtaining the lowest fitness values of the
functions of Eq. 3-5 compared to Edge
et al. (2006). The study done by Valdez and Melin
(2008) clearly showed that HAIS achieved lower fitness values for Ackley’s
function and Griewangk’s function compared to the standard GA and PSO algorithms.
Another hybrid algorithm based on clonal selection was proposed by Mitra
and Venayagamoorthy (2008); however, HAIS outperformed their results
for Rosenbrock’s function and Rastrigin’s function. The same applies
to the improved PSO algorithm suggested by Pant et al.
(2008), where HAIS performed better for Rastrigin’s function and
Griewangk’s function.
The engineering problem simulation was executed using the same parameters as in the mathematical function simulations. Table 2 shows the results for the gear train design. As expected, HAIS performs better in terms of accuracy compared to GA and AIS.

Fig. 7: 
Fitness value comparison for Rastrigin’s function 

Fig. 8: 
Fitness value comparison for Griewangk’s function 

Fig. 9: 
Fitness value comparison for Rosenbrock’s function 
Table 1: 
Comparison of fitness value results using GA, AIS and HAIS


Table 2: 
Best gear train design results obtained from the simulation 


Fig. 10: 
Fitness value comparison for Ackley’s function 
CONCLUSION
The classical PSO is stochastic in nature and simple to implement, but as the search goes on, the population diversity worsens, which in turn leads to premature convergence. GA is likewise affected by premature convergence at local optima. Thus, in order to minimize this problem, a hybrid AIS combining the good qualities of PSO with the properties of AIS was presented in this study. The simulation results of GA, AIS and HAIS were presented and compared. Against GA and AIS on four multimodal test functions and an engineering problem, HAIS performed better in terms of accuracy, convergence rate, stability and robustness. Future work may include comparison with other evolutionary algorithms and on more complicated test functions.
ACKNOWLEDGMENT
This study was supported in part by grant number FRGS/2010/FKEKK/TK03/1F0085 from the Ministry of Higher Education of Malaysia.