**ABSTRACT**

In this study, Radial Basis Function (RBF) neural networks with a Multiple-Input Multiple-Output (MIMO) structure are applied to control two types of nonlinear model plants of unknown dynamics. In the first step, a control model was developed using a variable liquid level, as found in a chemical plant or power station, where the liquid level changes within a fixed real time; an RBF neural network was used to control the liquid level of this plant. The second step introduced changes of the liquid level in real time, and an RBF network with MIMO structure was again used to control the liquid level. The study shows that the proposed control system produces accurate results for both types of models. However, we notice that training by back propagation takes considerably more time for the second model than for the first.


**How to cite this article**

*Information Technology Journal, 7: 430-439.*

**DOI:** 10.3923/itj.2008.430.439

**URL:** https://scialert.net/abstract/?doi=itj.2008.430.439

**INTRODUCTION**

A direct **adaptive control** algorithm is developed for a class of nonlinear plants. No restriction is imposed on the plant structure beyond the condition that the instantaneous input-output gain be positive. Many Artificial Neural Networks (ANNs) have been employed in nonlinear control structures (Kosko, 1997; Passino and Yurkovich, 1998). In line with the gain scheduling principle, however, the controller also has a pseudolinear time-varying structure with the parameters being functions of the operating point (Ioannou and Fidan, 2006).

It is well known that the response of a nonlinear plant generally cannot be shaped into a desired pattern using a linear controller. Consequently, a nonlinear controller is required for such plants (Murray *et al*., 2003; Xiaohong and Shen, 2005). One of the main difficulties in designing the nonlinear controller, however, is the lack of a general structure for it. Nonlinear controller design may be viewed as a nonlinear function approximation problem. In recent years, ANNs have become an important tool for approximating nonlinear functions (Wu, 1994); their ability to learn arbitrary nonlinear mappings can be effectively utilized to represent the controller nonlinearity. Other advantages inherent in ANNs, such as robustness, parallel architecture and fault-tolerant capability, act as further incentives (Raun, 1997).

Many researchers (Diao and Passino, 2002; Nounou and Passino, 2004; Ge and Wang, 2002) have considered adaptive **neural networks** for controlling nonlinear dynamical systems. All of these articles provide only local convergence analysis. In this study we extend the local convergence analysis toward global convergence, using ANNs with several hidden layers and different numbers of neurons.

**DIRECT ADAPTIVE CONTROL**

Direct **adaptive control** means that the controller parameters are updated directly, without the need to implement an identifier of the plant. This method of control relies on recursive Least-Mean-Square (LMS) and gradient approximation to estimate the plant parameters (Murray *et al*., 2003; Xiaohong and Shen, 2005). The basic idea of direct **adaptive control** is shown in Fig. 1 (Murray *et al*., 2003).

This problem can be solved using Kaczmarz's projection algorithm (Chen and Khalil, 1992). Consider the unknown parameter θ^{0} as an element of R^{n}, such that:

Fig. 1: Plant estimator using LMS approximation

(1)

where, φ^{T}(t) is a vector of known functions that may depend on other known variables and θ^{0} is the vector of unknown parameters.

From this interpretation it is clear that the n measurements of the elements φ_{1}(t),φ_{2}(t),......,φ_{n}(t) must span the space R^{n} so that the parameter vector is unique (Wu, 1994). Assume that an estimate θ(t-1) is available and that a new measurement is obtained. Since the measurement contains information only in the direction of the vector φ(t) in parameter space, we can update the estimate as:
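The span condition can be illustrated numerically: n regressors that span R^{n} determine the parameter vector uniquely by solving a linear system. A minimal sketch (the parameter values and regressors here are illustrative, not taken from the paper):

```python
import numpy as np

# If the n regressors phi(1), ..., phi(n) span R^n, the unknown parameter
# vector theta0 in y_p(t) = phi(t).T @ theta0 is determined uniquely by
# solving the resulting linear system (n = 3 here, values illustrative).
theta0 = np.array([1.0, -2.0, 0.5])          # "unknown" parameters
Phi = np.array([[1.0, 0.0, 0.0],
                [1.0, 1.0, 0.0],
                [1.0, 1.0, 1.0]])            # rows span R^3
y = Phi @ theta0                             # noise-free measurements
theta_hat = np.linalg.solve(Phi, y)          # unique recovery of theta0
```

If the rows of Phi did not span R^{3}, the system would be singular and θ^{0} could not be recovered uniquely, which is exactly why the span condition is needed.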

(2)

where, the parameter β is chosen so that the updated estimate reproduces the new measurement, i.e., y_{p}(t) = φ^{T}(t)θ(t). Thus, from Eq. 2 and 1, we obtain:

Thus we have:

(3)

And the error has the form:

(4)

From Eq. 3, 4 and 2 we obtain:

(5)

Also from Fig. 2 we can conclude that

(6)

Fig. 2: Direct adaptive control using LMS approximation

(7)

and thus,

(8)

(9)

(10)

(11)

(12)

Since the term φ^{T}(t)φ(t) does not depend on f(.), we have:

(13)

and thus

(14)

(15)

Using Eq. 14, 15 and 9 we have

(16)

To avoid potential problems that may occur when φ(t) = 0, the projection algorithm can be modified in practice, as follows:

(17)

where, ε is a small number that depends on n and on the number of hidden layers in the ANNs and μ is a learning rate that depends on the type of ANN, on ε and on g_{max} given in Eq. 19.
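The equation images are not preserved above, but a modified projection update of this kind commonly takes the normalized form θ(t) = θ(t-1) + μφ(t)e(t)/(ε + φ^{T}(t)φ(t)); a minimal sketch under that assumption (the regressors and target parameters are illustrative):

```python
import numpy as np

def projection_update(theta, phi, y, mu=0.95, eps=0.001):
    """One step of a regularized projection (normalized-LMS-style) update:
    theta(t) = theta(t-1) + mu * phi * e / (eps + phi.T @ phi),
    where e = y - phi.T @ theta(t-1) is the prediction error."""
    e = y - phi @ theta
    return theta + mu * phi * e / (eps + phi @ phi)

# Recover a fixed parameter vector theta0 from noise-free measurements.
rng = np.random.default_rng(0)
theta0 = np.array([0.5, -0.3, 0.8])
theta = np.zeros(3)
for _ in range(300):
    phi = rng.standard_normal(3)
    theta = projection_update(theta, phi, phi @ theta0)
```

With ε > 0 the update stays well defined even when φ(t) = 0, which is precisely the problem the modification addresses.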

Figure 2 shows the direct **adaptive control** using LMS approximation (Raun, 1997).

**PROBLEM STATEMENT**

Consider the general model of the Single-Input-Single-Output (SISO) nonlinear plant given by:

(18)

where, y_{p}(t) is the measured plant output, u(t) is the measured plant input (control output), n is the known plant order, d is the known plant delay and f(.) is an unknown nonlinear function.

Further, the function f(.) is assumed to be continuous and bounded such that:

(19)

From Eq. 19 we conclude that the function f(.) is monotonically increasing in u(t-d), which is analogous to the assumption that the instantaneous input gain in a linear plant (frequently denoted by b_{0}) be positive. The problem, now, is to design a direct and stable adaptive control strategy that drives the plant output to a reference command.

Here, we provide some results to facilitate the development of the proposed self-tuning algorithm. Assume that y_{p}(t) is generated from arbitrary functions φ(t) using the following equations:

(20)

(21)

where, the functions satisfy

(22)

Equation 21 is essentially a simplified representation of Eq. 18. For each time instant t, the function can be related by the following relationship:

(23)

For given y_{p}(t-1),....,y_{p}(t-n), u(t-d-1),....,u(t-n). Using Eq. 23, we can verify that Eq. 19 and 22 are equivalent. Therefore, the following development, which is based on Eq. 21 as well, applies to the plant, represented by Eq. 18.

Now, consider the estimation of the parameter θ^{0} from the observations φ(t) and y_{p}(t), employing the following algorithm, a modified version of the original in Ge and Wang (2002). (Given that t, d, n_{1}, n_{2} are integers and t is expressed as t = n_{1} + n_{2}d, with 0≤n_{1}<d, we define t [mod d] as the integer value n_{1}.)

(24)

(25)

(26)

(27)

where, θ(t) denotes the estimate of θ^{0} after the t^{th} observation.

**RADIAL BASIS FUNCTION NETWORKS (RBF)**

A RBF **neural network** can be represented by a two-layer feedforward structure, in which the first layer consists of RBF neurons and the second layer of linear neurons. The inputs to the **neural network** are directly connected to the RBF neurons, which are then connected to the linear neurons in the second layer through adaptable weights. For the sake of definiteness, let us assume that there are j inputs to the RBF neural network, i RBF neurons in the first layer and q linear neurons in the second layer.

The transfer function of the i^{th} RBF neuron in the first layer has the form:

(28)

where, z_{i}(x) is the output of the i^{th} RBF neuron and r(.) is a radial basis function. The most commonly used RBF is the Gaussian function given below:

(29)

in which x is the input vector of dimension j, ρ_{i} is the center vector (also of dimension j) of the i^{th} RBF neuron and σ_{i} is a positive scalar, the width of the i^{th} RBF neuron. Both ρ_{i} and σ_{i} can differ between RBF neurons, which offers flexibility in locally shaping the RBF **neural network** structure. The transfer function of the i^{th} linear neuron is given in the following matrix notation:

(30)

where, M = [M_{1},M_{2},....,M_{q}]^{T} lumps the outputs of all of the linear neurons together to form a vector; z(x) = [z_{1}(x),z_{2}(x),....,z_{i}(x)]^{T} and W is an i×q matrix whose (i,q)^{th} entry w_{iq} is interpreted as the adaptable weight from the i^{th} RBF neuron in the first layer to the q^{th} linear neuron in the second layer.

Equation 30 decomposes all of the parameters involved in a RBF **neural network** into two categories: the matrix W, containing all of the adaptable weights, and the vector z(.), containing all of the structural parameters, such as the number of neurons i, the centers ρ_{i} and the widths σ_{i}.
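The Gaussian layer of Eq. 28-29 and the linear layer of Eq. 30 can be sketched directly; the dimensions, centers, widths and weights below are illustrative choices, not parameters from the paper:

```python
import numpy as np

def rbf_forward(x, centers, widths, W):
    """Two-layer RBF network sketch of Eq. 28-30:
    first layer:  Gaussian units z_i(x) = exp(-||x - rho_i||^2 / sigma_i^2)
    second layer: linear combiners M = W.T @ z(x)."""
    # centers: (i, j), widths: (i,), W: (i, q), x: (j,)
    d2 = np.sum((centers - x) ** 2, axis=1)   # squared distances to centers
    z = np.exp(-d2 / widths ** 2)             # outputs of the RBF neurons
    return W.T @ z                            # outputs of the linear neurons

centers = np.array([[-0.5], [0.0], [0.5]])    # i = 3 RBF neurons, j = 1 input
widths = np.array([0.3, 0.3, 0.3])            # one width per RBF neuron
W = np.ones((3, 2))                           # q = 2 linear output neurons
M = rbf_forward(np.array([0.0]), centers, widths, W)
```

With all weights equal, both linear neurons output the same sum of the three Gaussian activations, which makes the role of W as the only adaptable part of the network easy to see.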

**CONTROLLER STRUCTURE**

In the proposed **adaptive control** strategy, the controller structure is fixed a priori, but its parameters are directly updated based on the input-output measurements, employing a stable adaptive scheme. It is well known that satisfactory control of a nonlinear plant generally requires a nonlinear controller. Accordingly, a nonlinear controller structure has been chosen. In line with the gain scheduling principle, however, the controllers also have a pseudolinear time-varying structure with the corresponding parameters being functions of the operating point (Narendra and Parthasarathy, 1990).

We assume that the plant operating point is measurable or may be inferred. Examples of such systems include hydraulic rigs, AC motors, power plant control systems and turbofan engines (Wang *et al*., 1995). The controller requires the measurement of the input-output signals as well as the variables representing the operating point. The controller parameters are assumed to be nonlinear functions of the operating point. These nonlinear functions are approximated by a set of neural networks, whose parameters are directly updated online based on the input-output measurements. In general, the controller structure has the form (Chen and Khalil, 1992),

(31)

Fig. 3: Controller structure

where, the order of the controller is given by max(ρ,σ,γ) (Fig. 3). Typical choices for ρ and σ are ρ = σ ≥ n, where n is the dimension of the space R^{n}. This guideline stems from linear control theory, which allows cancellation of all open-loop poles and placement of all closed-loop poles at the desired locations in the Z-plane. The choice of γ, however, depends on the reference signal y_{m}(t) and corresponds to its moving-average modeling.

The quantity y_{m}(t) is the desired output, which can be generated from an appropriate reference model driven by the set point S_{p}(t). The parameters r_{i}(θ_{t}), S_{i}(θ_{t}) and K_{i}(θ_{t}) can be regarded as the time-varying parameters of the pseudolinear controller and the variable θ_{t} contains all of the measurements that affect the plant operating point. Although the controller parameters r_{i}(θ_{t}) and S_{i}(θ_{t}) shape the transient response, the introduction of the parameters K_{i}(θ_{t}) ensures set-point tracking. All of these controller parameters are assumed to be nonlinear functions of the operating point.
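Equation 31 itself is not reproduced above, so the control law below is a hypothetical pseudolinear form built only from the ingredients named in the text: gains r_i, s_i, k_i, already evaluated at the operating point θ_t, acting on past plant outputs, past control inputs and the reference output:

```python
import numpy as np

def pseudolinear_control(r, s, k, yp_hist, u_hist, ym_hist):
    """Hypothetical pseudolinear control law of the kind described in the
    text: a weighted sum of past plant outputs y_p, past controller outputs
    u and reference-model outputs y_m, with operating-point-dependent gains
    r_i, s_i, k_i supplied as plain vectors here."""
    return np.dot(r, yp_hist) + np.dot(s, u_hist) + np.dot(k, ym_hist)

# With unit gain on the newest y_m term and zero elsewhere, u(t) simply
# passes the reference value through.
u = pseudolinear_control(r=np.zeros(2), s=np.zeros(2), k=np.array([1.0, 0.0]),
                         yp_hist=np.array([0.2, 0.1]),
                         u_hist=np.array([0.0, 0.0]),
                         ym_hist=np.array([0.5, 0.4]))
```

In the paper the gains are not constants but outputs of RBF networks evaluated at θ_t, which is what makes the controller nonlinear overall while remaining linear in the signal histories.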

**THE STRUCTURE OF DIRECT ADAPTIVE CONTROL USING RADIAL BASIS FUNCTIONS (RBFS)**

This study also considers the Multiple-Input Multiple-Output (MIMO) structure of the neural model. The structure of MIMO for RBFs is shown in Fig. 4. In this study we assume that the RBF network is a processing structure consisting of an input layer, two hidden layers and an output layer. The hidden layer of a RBF network consists of an array of nodes (neurons), and each node contains a parameter vector called a center.

The node calculates the Euclidean distance between the center and the network input vector and passes the result through a radially symmetric nonlinear function. The output layer is essentially a set of linear combiners. The overall input-output response of an N_{j}-input, N_{q}-output RBF network is a mapping:

Fig. 4: Radial basis function network (a) model (neuron) and (b) basis function

(32)

In the first case we used the following model:

(33a)

(33b)

In the second case we used the following model:

(34a)

(34b)

Then:

(35)

(36)

(37)

where, θ_{t} = [x_{1},x_{2},...,x_{j}], 1≤j≤N_{j}, are the operating points, w_{iq} are the weights of the linear combiners, ||.|| denotes the Euclidean norm, a_{i} are the RBF centers, G(x_{1}, x_{2}, x_{3},..., a_{i}, b_{j}) is a function from R^{+}→R, N_{j} is the number of input nodes and N_{i} is the number of hidden nodes (Sanner and Slotine, 1992).

The neural weight update for the MIMO structure in the direct **adaptive control** model is given by Sanner and Slotine (1992):

(38)

where, θ is the parameter being updated and θ(t) is its estimate obtained from the measurements of the known functions. For computational purposes, using Eq. 35, 36, 37 and 31, we have:

(39)

Thus if we use Eq. 25 we get,

(40)

In order to compute the control input, we require knowledge of the parameter vector θ. We modify the proposed algorithm for adaptive control of the plant given in Ahmed (2000), such that u(t) = φ^{T}(t)θ(t).

(41)

(42)

(43)

where, y_{p}(t) and y_{m}(t) are the actual and the desired values of the plant output, respectively; y_{m}(t) can be generated as the output of an appropriate reference model.
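Putting the pieces together, a toy closed loop might look as follows. The first-order linear plant here merely stands in for the unknown f(.) (with a positive input gain b, consistent with Eq. 19), and the regressor choice and gains are illustrative, not the paper's:

```python
import numpy as np

# Toy closed loop: a first-order linear plant y_p(t) = a*y_p(t-1) + b*u(t-1)
# stands in for the unknown f(.), with positive input gain b.
# The control is u(t) = phi(t).T @ theta(t); theta is pushed by a
# regularized projection rule in the direction that reduces the tracking
# error e(t) = y_p(t) - y_m(t).
a, b = 0.8, 0.5
mu, eps = 0.95, 0.001
theta = np.zeros(2)
yp, u = 0.0, 0.0
for t in range(300):
    ym = 1.0                                  # constant reference output
    yp = a * yp + b * u                       # plant responds to last input
    phi = np.array([yp, ym])                  # illustrative regressor
    e = yp - ym                               # tracking error
    theta = theta - mu * phi * e / (eps + phi @ phi)
    u = phi @ theta                           # controller output
final_error = abs(yp - ym)
```

The positive-gain assumption matters: the update decreases u when y_p overshoots y_m, which only reduces the error because increasing u is known to increase y_p.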

**SIMULATION RESULTS**

Here, we develop a direct adaptive control scheme, using an RBF neural network, suited to controlling a liquid-level system.

Fig. 5: The reference-model output of example one

**Model 1:** In this case a liquid-level system can be described by the following delay difference equation:

(44)

where, y_{p}(t) is the output and u(t) is the input of the system; this model was obtained through identification of a laboratory-scale liquid-level system (Sales and Billings, 1990).

**The reference-model: **The y_{m}(t) has been taken as the output of a first-order reference-model with transfer function:

and driven by the setpoint S_{p}(t) of amplitude ±0.5, as shown in Fig. 5.

We first assume that the time delay is d = 1, so that the parameters are updated at every time step. The operating point θ_{t} has been taken as θ_{t} = [y_{p}(t),e(t)] and g_{max} = 2, so that the learning rate satisfies 0<μ<1 and ε>0, with μ = 0.95 and ε = 0.001. Five Gaussian basis functions with centroids a_{i}, ranging uniformly between -0.6 and +0.6, have been chosen as [a_{1} = -0.48, a_{2} = -0.24, a_{3} = 0.0, a_{4} = 0.24, a_{5} = 0.48]. The performance was measured by computing the error using (Wang *et al*., 1995):
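The five centroids quoted above are exactly the midpoints of five equal sub-intervals of [-0.6, +0.6]; the midpoint construction below is our reading of "ranging uniformly", shown as a quick check:

```python
import numpy as np

# Split [-0.6, 0.6] into five equal sub-intervals and take their midpoints;
# this reproduces the centroids a_1 ... a_5 listed in the text.
edges = np.linspace(-0.6, 0.6, 6)        # six edges -> five sub-intervals
centers = (edges[:-1] + edges[1:]) / 2   # midpoints: -0.48, -0.24, ..., 0.48
```

The same construction on [+7, +10] reproduces the Model 2 centroids 7.3, 7.9, 8.5, 9.1 and 9.7.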

Table 1: The change of the performance measure of error E (least mean square) with the number of iterations N, depending on the value of ε, for case 1, using MIMO adaptive control with d = 1, μ = 0.95, ε = 10 and b = 0.3

Fig. 6: The response of the plant y_{p}(t), with the output of the reference-model y_{m}(t) and the error between them e(t): (a) for N = 1, E = 0.007119 and (b) for N = 1, E = 0.03673. For example one (case 1), using direct adaptive control with [MIMO, RBFs, d = 1]

(45)

Using an RBF spread of b = 0.15, with no intersection between RBFs, the first iteration N = 1 gives E = 0.007119; the result is shown in Fig. 6a. With an RBF spread of b = 0.3, with intersection between RBFs, the first iteration N = 1 gives E = 0.03673; the result is shown in Fig. 6b.

Table 1 shows the performance measure of error E against the number of iterations N for different values of ε. The selected value ε = 10 gives the best performance measure of error.

For N = 1001 iterations, the response y_{p}(t), the output of the reference model y_{m}(t) and the error e(t) are shown in Fig. 7. The performance, as measured by the error E, appears improved.

**Model 2: **In this case a liquid-level system that changes with time can be described by the following difference equation (Ahmed, 2000):

Fig. 7: The response of the plant y_{p}(t), with the output of the reference-model y_{m}(t), the error between them e(t) and the output of the controller u(t), using direct adaptive control with d = 1, μ = 0.95, ε = 10 and b = 0.3. For model one (case 1), using direct adaptive control with [MIMO, RBFs, d = 1]

Fig. 8: The reference-model output of model 2

Fig. 9: The response of the plant y_{p}(t), with the output of the reference-model y_{m}(t), the error between them e(t) and the output of the controller u(t), for model 2 (case 1), using direct adaptive control with [MIMO, RBFs]

Fig. 10: (a) The performance measure of error E and (b) the response of the plant y_{p}(t), with the output of the reference-model, for model 2

(46)

**The reference-model: **The y_{m}(t) has been taken as the output of a second-order reference-model with transfer function:

and driven by the setpoint S_{p}(t) ranging from a maximum value of 10 to a minimum value of 7, as shown in Fig. 8.

The operating point θ_{t} has been taken as θ_{t} = [y_{p}(t),e(t)] and g_{max} = 2, so that the learning rate satisfies 0<μ<1 and ε>0, with μ = 0.95 and ε = 10. Five Gaussian basis functions with centroids a_{i}, ranging uniformly between +7 and +10, have been chosen as [a_{1} = 7.3, a_{2} = 7.9, a_{3} = 8.5, a_{4} = 9.1, a_{5} = 9.7]. The performance was measured by computing the error using Eq. 45.

Using an RBF spread of b = 1.5, with intersection between RBFs, the run was performed for N = 1 and then repeated for N = 1100, giving E = 0.00025; the result is shown in Fig. 9.

Figure 10a shows the performance measure of error E when the parameters of the plant are changed; no change in the response of the plant is visible, as shown in Fig. 10b.

**CONCLUSIONS**

From our numerical results for models 1 and 2, we conclude that the back-propagation **neural network** algorithm can be used effectively to compute and estimate the **adaptive control** for nonlinear plants. Moreover, as the number of hidden layers, or neurons, increases, we may obtain a more accurate estimate of the nonlinear plant control. The learning rate μ depends on the upper bound of the continuous function f(.), g_{max}+ε, and on the plant delay d.

**REFERENCES**

- Ahmed, M.S., 2000. Neural-net-based direct adaptive control for a class of nonlinear plants. IEEE Trans. Autom. Control, 45: 119-124.

- Chen, F.C. and H.K. Khalil, 1992. Adaptive control of nonlinear systems using neural networks. Int. J. Control, 55: 1299-1317.

- Diao, Y. and K.M. Passino, 2002. Adaptive fuzzy-neural control for interpolated nonlinear systems. IEEE Trans. Fuzzy Syst., 10: 583-595.

- Ge, S.S. and C. Wang, 2002. Direct adaptive neural networks control for a class of nonlinear systems. IEEE Trans. Neural Networks, 13: 214-221.

- Murray, R.M., K.J. Astrom, S.P. Boyd, R.W. Brockett and G. Stein, 2003. Future directions in control in an information-rich world. IEEE Control Syst. Mag., 32: 20-23.

- Narendra, K.S. and K. Parthasarathy, 1990. Identification and control of dynamical systems using neural networks. IEEE Trans. Neural Networks, 1: 4-27.

- Nounou, H.N. and K.M. Passino, 2004. Stable auto-tuning of adaptive fuzzy-neural controllers for nonlinear discrete-time systems. IEEE Trans. Fuzzy Syst., 12: 70-83.

- Sales, K.R. and S.A. Billings, 1990. Self-tuning control of non-linear ARMAX models. Int. J. Control, 51: 753-769.

- Sanner, R.M. and J.J.E. Slotine, 1992. Gaussian networks for direct adaptive control. IEEE Trans. Neural Networks, 3: 837-863.

- Xiaohong, J. and T. Shen, 2005. Adaptive feedback control of nonlinear time-delay systems: The LaSalle-Razumikhin-based approach. IEEE Trans. Autom. Control, 50: 1909-1913.