INTRODUCTION
A direct adaptive control algorithm is developed for a class of nonlinear
plants. No restriction is imposed on the plant structure. The only condition
imposed is that the instantaneous input-output gain of the plant be positive.
Many Artificial Neural Networks (ANNs) have been employed for nonlinear
control structures (Kosko, 1997; Passino and Yurkovich, 1998). In line with
the gain scheduling principle, however, the controller also has a pseudo-linear
time-varying structure with the parameters being functions of the operating
point (Ioannou and Fidan, 2006).
It is well known that the response of a nonlinear plant generally cannot
be shaped into a desired pattern using a linear controller. Consequently,
a nonlinear controller is required for such plants (Murray et al.,
2003; Xiaohong and Shen, 2005). One of the main difficulties in designing
the nonlinear controller, however, is the lack of a general structure
for it. Nonlinear controller design may be viewed as a nonlinear function
approximation problem. In recent years, ANNs have become an important element
in approximating nonlinear functions (Wu, 1994); their ability to learn
arbitrary nonlinear mappings can be effectively utilized to represent
the controller nonlinearity. Other advantages inherent in ANNs, such
as their robustness, parallel architecture and fault-tolerant capability,
act as further incentives (Raun, 1997).
Many researchers (Diao and Passino, 2002; Nounou and Passino, 2004; Ge and
Wang, 2002) have considered adaptive neural networks for controlling nonlinear
dynamical systems. All of these articles provide only local convergence analysis.
In this study we extend the analysis of local convergence toward global
convergence, using ANNs with several hidden layers and different numbers
of neurons.
DIRECT ADAPTIVE CONTROL
In direct adaptive control, the controller parameters are updated
directly, without the need to implement an identifier of the plant.
This method of control relies on recursive Least-Mean-Square
(LMS) and gradient approximation to estimate the plant parameters (Murray
et al., 2003; Xiaohong and Shen, 2005). The basic idea of direct
adaptive control is shown in Fig. 1 (Murray et al., 2003).
This problem can be solved using Kaczmarz's projection algorithm
(Chen and Khalil, 1992). Consider the unknown parameter as an element
θ^{0} of R^{n}, such that:

Fig. 1: 
Plant estimator using LMS approximation 
where, φ^{T}(t) is a vector of known functions that may depend on other
known variables and θ^{0} are the unknown parameters.
From this interpretation it is clear that the n measurements of the elements
φ_{1}(t), φ_{2}(t), ..., φ_{n}(t)
must span the space R^{n} so that the parameter vector θ^{0}
is unique (Wu, 1994). Assume that an estimate θ(t−1)
is available and that a new measurement is obtained. Since the measurement
contains information only in the direction of the vector φ(t) in
parameter space, we can update the estimate θ(t)
as:
where, the parameter β is chosen so that the updated estimate satisfies the new measurement exactly.
Thus from Eq. 2 and 1, we obtain:
Thus we have:
And the error has the form:
From Eq. 3, 4 and 2
we obtain:
Also from Fig. 2 we can conclude that
and thus,
Since the term φ^{T}(t)φ(t) does not depend on f(.),
we have:
and thus
Using Eq. 14, 15 and 9
we have
To avoid potential problems that may occur when φ(t) = 0, the projection
algorithm can be modified in practice, as follows:
where ε is a small positive number that depends on n and on the number of
hidden layers in the ANNs, such that,
and μ is a learning rate that depends on the type of ANN, on ε and on g_{max}
given in Eq. 19, such that
Figure 2 shows the direct adaptive control using LMS
approximation (Raun, 1997).
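As a concrete illustration, one step of the modified projection update described above can be sketched as follows. This is a minimal sketch, not the paper's exact formulation: the function name and the default values of mu and eps are illustrative assumptions.

```python
import numpy as np

def kaczmarz_projection_step(theta, phi, y, mu=0.95, eps=0.001):
    """One step of a modified Kaczmarz projection update (sketch).

    theta : current parameter estimate theta(t-1), an n-vector
    phi   : regressor phi(t) of known functions, an n-vector
    y     : new scalar measurement y(t)
    mu    : learning rate, 0 < mu < 1 (illustrative default)
    eps   : small positive constant guarding against phi(t) = 0
    """
    error = y - phi @ theta                        # prediction error
    # eps in the denominator avoids division by zero when phi(t) = 0
    return theta + mu * phi * error / (eps + phi @ phi)
```

Repeated application of this step drives the estimate toward any parameter vector consistent with the measurements, provided the regressors span R^{n}.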
PROBLEM STATEMENT
Consider the general model of the Single-Input Single-Output (SISO) nonlinear
plant given by:
where, y_{p}(t) is the measured plant output, u(t) is the measured
plant input (control output), n is the known plant order, d is the known
plant delay and f(.) is an unknown nonlinear function.
Further, the function f(.) is assumed to be continuous and bounded such
that:
From Eq. 19 we conclude that the function f(.) is monotonically
increasing in u(t−d), which is analogous to the assumption that the instantaneous
input gain in a linear plant (frequently denoted by b_{0}) be
positive. The problem, now, is to design a direct and stable adaptive
control strategy that drives the plant output to a reference command.
Here, we provide some results to facilitate the development of the proposed
self-tuning algorithm. Assume that y_{p}(t) is generated from
arbitrary functions φ(t) using the following equations:
where, the functions satisfy
Equation 21 is essentially a simplified representation
of Eq. 18. For each time instant t, the function can
be related by the following relationship:
for given y_{p}(t−1), ..., y_{p}(t−n), u(t−d−1), ..., u(t−n).
Using Eq. 23, we can verify that Eq. 19
and 22 are equivalent. Therefore, the following development, which is
based on Eq. 21 as well, applies to the plant, represented
by Eq. 18.
Now, consider the estimation of the parameter θ^{0} from
the observations φ(t) and y_{p}(t) employing the following
modified algorithm (given that t, d, n_{1}, n_{2}
are integers and t is expressed as t = n_{1} + n_{2}d,
with 0 ≤ n_{1} < d, we define t [mod d] as the integer
value n_{1}); its original form is given in (Ge and Wang, 2002),
where, θ(t) denotes the estimate of θ^{0} after the
t^{th} observation.
RADIAL BASIS FUNCTION NETWORKS (RBF)
An RBF neural network can be represented by a two-layer feed-forward
network structure: the first layer of the network consists of RBF neurons
and the second layer consists of linear neurons. The inputs to the neural network
are directly connected to the RBF neurons, which are then connected to the linear
neurons in the second layer through adaptable weights. For the sake of
definiteness, let us assume that there are j inputs to the RBF neural
network, i RBF neurons in the first layer and q linear neurons in the
second layer.
The transfer function of the i^{th} RBF neuron in the first layer
has the form:
where, z_{i}(x) is the output of the i^{th} RBF neuron
and r(.) is a radial basis function. The most commonly used RBF is the Gaussian
function given below:
in which x is the input vector of dimension j, ρ_{i}
is the center vector (also of dimension j) of the i^{th} RBF neuron
and σ_{i} is a positive scalar, the width of the i^{th}
RBF neuron. Both ρ_{i} and σ_{i} can be different
for different RBF neurons, which offers flexibility in locally shaping
the RBF neural network structure. The transfer function for the i^{th}
linear neuron is given in the following matrix notation:
where, M = [M_{1}, M_{2}, ..., M_{q}]^{T}
lumps the outputs of all of the linear neurons together to form a vector;
z(x) = [z_{1}(x), z_{2}(x), ..., z_{i}(x)]^{T}
and W is an i×q matrix with the iq^{th} entry w_{iq} interpreted
as the adaptable weight from the i^{th} RBF neuron in the first layer
to the q^{th} linear neuron in the second layer.
Equation 30 decomposes all of the parameters
involved in an RBF neural network into two categories: the matrix W containing
all of the adaptable weights, and the vector z(.) containing all the
structural parameters, such as the number of neurons i, the centers ρ_{i}
and the widths σ_{i}.
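The two-stage computation described above (a Gaussian first layer followed by a linear second layer) can be sketched as follows; the function names and array shapes are illustrative, not part of the original text.

```python
import numpy as np

def gaussian_rbf_layer(x, centers, widths):
    """First layer: z_i(x) = exp(-||x - rho_i||^2 / sigma_i^2).

    x       : input vector of dimension j
    centers : (i, j) array, one center rho_i per RBF neuron
    widths  : (i,) array of positive widths sigma_i
    """
    d2 = np.sum((centers - x) ** 2, axis=1)   # squared Euclidean distances
    return np.exp(-d2 / widths ** 2)

def rbf_forward(x, centers, widths, W):
    """Second layer: M = W^T z(x), with W an i x q adaptable weight matrix."""
    z = gaussian_rbf_layer(x, centers, widths)
    return W.T @ z
```

An input placed exactly at a center produces z_i = 1 for that neuron, while distant neurons contribute almost nothing, which is the local-shaping property noted above.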
CONTROLLER STRUCTURE
In the proposed adaptive control strategy, the controller structure is
fixed a priori, but its parameters are directly updated based on the input-output
measurements, employing a stable adaptive scheme. It is well known that
satisfactory control of a nonlinear plant generally requires a nonlinear
controller. Accordingly, a nonlinear controller structure has been chosen.
In line with the gain scheduling principle, however, the controllers also
have a pseudo-linear time-varying structure with the corresponding parameters
being functions of the operating point (Narendra and Parthasarathy, 1990).
We assume that the plant operating point is measurable or may be inferred.
Examples of such systems include hydraulic rigs, AC motors, power plant
control systems and turbofan engines (Wang et al., 1995). The controller
requires the measurement of the input-output data as well as the variables
representing the operating point. The controller parameters are assumed
to be nonlinear functions of the operating point. The nonlinear functions
are approximated by a set of neural networks. The parameters of these
networks are directly updated online based on the input-output measurements.
In general the controller structure has the form (Chen and Khalil, 1992),

Fig. 3: 
Controller structure 
where, the order of the controller is given by max(ρ, σ, γ)
(Fig. 3). Typical choices for ρ and σ are
given as ρ = σ ≥ n, where n is the dimension of the space
R^{n}. This guideline stems from linear control theory, which allows
cancellation of all open-loop poles and placement of all closed-loop poles
at the desired locations in the z-plane. The choice of γ, however,
depends on the reference signal y_{m}(t) and corresponds to its
moving-average modeling.
The quantity y_{m}(t) is the desired output, which can be generated
from an appropriate reference model driven by the set point S_{p}(t).
The parameters r_{i}(θ_{t}), S_{i}(θ_{t})
and K_{i}(θ_{t}) can be regarded as the time-varying
parameters of the pseudo-linear controller and the variable θ_{t}
contains all of the measurements that affect the plant operating point.
Although the controller parameters r_{i}(θ_{t})
and S_{i}(θ_{t}) shape the transient response,
the introduction of the parameters K_{i}(θ_{t}) ensures
set-point tracking. All of these controller parameters are assumed to
be nonlinear functions of the operating points.
THE STRUCTURE OF DIRECT ADAPTIVE CONTROL USING RADIAL BASIS
FUNCTIONS (RBFS)
This study also considers the Multi-Input Multi-Output
(MIMO) structure of the neural model. The structure of MIMO for RBFs is shown
in Fig. 4. In this study we assume that the RBF network
is a processing structure consisting of an input layer, two hidden layers
and an output layer. The hidden layer of an RBF network consists
of an array of nodes (neurons) and each node contains a parameter vector
called a center.
The node calculates the Euclidean distance between the center and the
network input vector and passes the result through a radially symmetric
nonlinear function. The output layer is essentially a set of linear combiners.
The overall input-output response of an N_{j}-input, N_{q}-output
RBF network is a mapping:

Fig. 4: 
Radial basis function network (a) model (neuron) and
(b) basis function 
In the first case we used the following model:
In the second case we used the following model:
Then:
where, θ_{t} = x_{1}, x_{2}, ..., x_{j},
1 ≤ j ≤ N_{j}, are the operating points, w_{iq} are the weights
of the linear combiners, ‖.‖ denotes the Euclidean norm, a_{i}
are the RBF centers, G(x_{1}, x_{2}, x_{3}, ...,
a_{i}, b_{j}) is a function from R^{+}→R,
N_{j} is the number of input nodes and N_{i} is the
number of hidden nodes (Sanner and Slotine, 1992).
The neural weight update for the MIMO structure in the direct adaptive control
model is given by Sanner and Slotine (1992):
where, θ(t) is the updated parameter vector and φ(t) is the vector
of known functions obtained from the measurements. For computational
purposes, using Eq. 35, 36, 37 and 31, we
have:
Thus if we use Eq. 25 we get,
In order to compute the control input, we require knowledge of the
parameter vector θ. We modify the proposed algorithm for adaptive
control of the given plant (Ahmed, 2000) such that u(t) = φ^{T}(t)θ(t),
where y_{p}(t) and y_{m}(t) are the actual and the
desired values of the plant output, respectively; y_{m}(t)
can be generated as the output of an appropriate reference model.
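A minimal sketch of one closed-loop step follows, combining the control law u(t) = φ^{T}(t)θ(t) above with a projection-style parameter update driven by the tracking error e(t) = y_{m}(t) − y_{p}(t). The update form and the default values are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def adaptive_control_step(theta, phi, y_p, y_m, mu=0.95, eps=10.0):
    """One closed-loop step of direct adaptive control (sketch).

    Forms the control input u(t) = phi(t)^T theta(t), then updates theta
    from the tracking error e(t) = y_m(t) - y_p(t).
    mu and eps defaults mirror the values used in the simulations below.
    """
    u = phi @ theta                                  # control law
    e = y_m - y_p                                    # tracking error
    theta_new = theta + mu * phi * e / (eps + phi @ phi)
    return u, theta_new, e
```

Note that when the plant output already tracks the reference (e = 0), the parameters are left unchanged, as expected of an error-driven update.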
SIMULATION RESULTS
Here, we develop a scheme, using an RBF neural network, for direct adaptive
control suited to controlling a liquid-level system.

Fig. 5: 
The reference-model output of model 1 
Model 1: In this case a liquid-level system can be described by
the following delay difference equation:
where, y_{p}(t) is the output and u(t) is the input of the system.
This model is obtained through identification of a laboratory-scale liquid-level
system (Sales and Billings, 1990).
The reference model: y_{m}(t) has been taken as the
output of a first-order reference model with transfer function:
and driven by the set point S_{p}(t) of amplitude ±0.5,
shown in Fig. 5.
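Since the transfer function itself is not reproduced in this extraction, a unit-DC-gain first-order discrete model is one plausible way to generate y_{m}(t) from the set point; the pole value a = 0.9 below is an arbitrary illustrative choice, not the paper's.

```python
def first_order_reference_model(sp, a=0.9, y0=0.0):
    """Generate y_m(t) from the set point S_p(t) via an assumed first-order
    model y_m(t) = a*y_m(t-1) + (1 - a)*S_p(t), which has unit DC gain so
    that y_m settles on the set point.

    sp : iterable of set-point samples S_p(t)
    a  : pole of the reference model, 0 < a < 1 (illustrative)
    y0 : initial condition y_m(0)
    """
    y_m, y = [], y0
    for s in sp:
        y = a * y + (1.0 - a) * s   # one step of the difference equation
        y_m.append(y)
    return y_m
```

Driven by a constant set point of 0.5, this model converges smoothly to 0.5, matching the qualitative behavior of the reference output in Fig. 5.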
We first assume that the time delay d = 1, so that the parameters are
updated at every time step. The operating point θ_{t}
has been taken as θ_{t} = [y_{p}(t), e(t)] and g_{max}
= 2, so that the learning rate satisfies 0<μ<1, ε>0, with μ
= 0.95, ε = 0.001. Five Gaussian basis functions with centroids a_{i},
ranging uniformly between −0.6 and +0.6, have been chosen as: [a_{1}
= −0.48, a_{2} = −0.24, a_{3} = 0.0, a_{4} = 0.24,
a_{5} = 0.48]. The performance was measured by computing the error
using (Wang et al., 1995):
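The performance-measure equation (Eq. 45) is not reproduced in this extraction. A least-mean-square measure consistent with the text's description, sketched below, averages the squared tracking error over the run; the averaging form is an assumption.

```python
import numpy as np

def performance_measure(y_p, y_m):
    """Least-mean-square performance measure over N samples:
    E = (1/N) * sum_t (y_m(t) - y_p(t))^2   (assumed form of Eq. 45).

    y_p : sequence of plant outputs
    y_m : sequence of reference-model outputs
    """
    e = np.asarray(y_m) - np.asarray(y_p)   # tracking error e(t)
    return float(np.mean(e ** 2))
```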
Table 1: 
The change of the performance measure of error E (least
mean square) with the number of iterations N, depending on the value
of ε, for case 1, using MIMO adaptive control with d = 1, μ
= 0.95, ε = 10 and b = 0.3 


Fig. 6: 
The response of the plant y_{p}(t), with the
output of the reference model y_{m}(t) and the error between them
e(t): (a) for N = 1, E = 0.007119; (b) for N = 1, E = 0.03673. For
model 1 (case 1), using direct adaptive control with [MIMO, RBFs,
d = 1] 
Using a spread of RBFs equal to b = 0.15, with no intersection between RBFs,
for the first iteration N = 1, E = 0.007119; the result is shown in Fig.
6a. Then, with a spread of RBFs equal to b = 0.3, with intersection between
RBFs, for the first iteration N = 1, E = 0.03673; the result is shown
in Fig. 6b.
Table 1 shows the performance measure of error E with
iteration N for different values of ε. The selected value ε
= 10 gives the best value of the performance measure of error.
For a number of iterations equal to N = 1001, the response y_{p}(t),
together with the output of the reference model y_{m}(t) and the error e(t),
is shown in Fig. 7. The performance improves, as reflected in the
performance measure of error E.
Model 2: In this case a liquid-level system whose parameters change with time
can be described by the following difference equation (Ahmed, 2000):

Fig. 7: 
The response of the plant y_{p}(t), with the
output of the reference model y_{m}(t), the error between them
e(t) and the output of the controller u(t), using direct adaptive control
with d = 1, μ = 0.95, ε = 10 and b = 0.3. For model 1
(case 1), using direct adaptive control with [MIMO, RBFs, d = 1] 

Fig. 8: 
The reference-model output of model 2 

Fig. 9: 
The response of the plant y_{p}(t), with the
output of the reference model y_{m}(t), the error between them
e(t) and the output of the controller u(t), for model 2 (case 1), using
direct adaptive control with [MIMO, RBFs] 

Fig. 10: 
(a) The performance measure of error E and (b) the
response of the plant y_{p}(t), with the output of the reference model,
for model 2 
The reference model: y_{m}(t) has been taken as the
output of a second-order reference model with transfer function:
and driven by the set point S_{p}(t), with amplitude ranging from a maximum
value of 10 to a minimum value of 7, as shown in Fig.
8.
The operating point θ_{t} has been taken as θ_{t}
= [y_{p}(t), e(t)] and g_{max} = 2, so that the learning
rate satisfies 0<μ<1, ε>0, with μ = 0.95, ε = 10. Five
Gaussian basis functions with centroids a_{i}, ranging uniformly
between +7 and +10, have been chosen as: [a_{1} = 7.3, a_{2}
= 7.9, a_{3} = 8.5, a_{4} = 9.1, a_{5} = 9.7].
The performance was measured by computing the error using Eq.
45.
Using a spread of RBFs equal to b = 1.5, with intersection between RBFs, for
N = 1 and then repeated for N = 1100, E = 0.00025; the result is shown
in Fig. 9.
Figure 10a shows the performance measure of error E when the parameters
of the plant are changed; no change can be seen in the response of the
plant, as shown in Fig. 10b.
CONCLUSIONS
From our numerical results for models 1 and 2, we conclude that the back-propagation
neural network algorithm can be used effectively to compute and estimate
the adaptive control for nonlinear plants. Moreover, as the number of hidden
layers or neurons increases, we may obtain a more accurate estimate of the
nonlinear plant control. The learning rate μ depends on the upper
bound of the continuous function f(.), g_{max} + ε, and on the
plant delay d.