
Information Technology Journal

Year: 2008 | Volume: 7 | Issue: 3 | Page No.: 430-439
DOI: 10.3923/itj.2008.430.439
A Radial Basic Function with Multiple Input and Multiple Output Neural Network to Control a Non-Linear Plant of Unknown Dynamics
R. El-Kouatly and G.A. Salman

Abstract: In this study, the Radial Basis Function (RBF) neural network with Multiple Inputs and Multiple Outputs (MIMO) is applied to control two types of nonlinear model plants of unknown dynamics. In the first step, a control model was developed for a variable liquid-level system of the kind used in chemical plants or power stations, where the liquid level changes within a fixed real time. In this control system an RBF neural network was used to control the liquid level of the plant. The second step introduced changes of the liquid level in real time; again an RBF neural network with MIMO structure was used to control the liquid level. The study shows that the proposed control system produces accurate results for both types of model. However, we note that training the second model with back propagation takes considerably more time than training the first model.


How to cite this article
R. El-Kouatly and G.A. Salman, 2008. A Radial Basic Function with Multiple Input and Multiple Output Neural Network to Control a Non-Linear Plant of Unknown Dynamics. Information Technology Journal, 7: 430-439.

Keywords: back propagation and Radial Basis Function (RBF) neural networks

INTRODUCTION

A direct adaptive control algorithm is developed for a class of nonlinear plants. No restriction is imposed on the plant structure; the only condition imposed is that the instantaneous input-output gain of the plant be positive. Many Artificial Neural Networks (ANNs) have been employed for nonlinear control structures (Kosko, 1997; Passino and Yurkovich, 1998). In line with the gain scheduling principle, however, the controller also has a pseudolinear time-varying structure with the parameters being functions of the operating point (Ioannou and Fidan, 2006).

It is well known that the response of a nonlinear plant generally cannot be shaped into a desired pattern using a linear controller. Consequently, a nonlinear controller is required for such plants (Murray et al., 2003; Xiaohong and Shen, 2005). One of the main difficulties in designing a nonlinear controller, however, is the lack of a general structure for it. Nonlinear controller design may be viewed as a nonlinear function approximation problem. In recent years, ANNs have become an important tool for approximating nonlinear functions (Wu, 1994); their ability to learn arbitrary nonlinear mappings can be effectively utilized to represent the controller nonlinearity. Other advantages inherent in ANNs, such as their robustness, parallel architecture and fault-tolerant capability, act as further incentives (Raun, 1997).

Many researchers (Diao and Passino, 2002; Nounou and Passino, 2004; Ge and Wang, 2002) have considered adaptive neural networks for the control of nonlinear dynamical systems. All of these articles provide only a local convergence analysis. In this study we extend the local convergence analysis toward global convergence; for this purpose we use ANNs with several hidden layers and different numbers of neurons.

DIRECT ADAPTIVE CONTROL

Direct adaptive control means that the controller parameters are updated directly, without the need to implement an identifier of the plant. This method of control relies on recursive Least-Mean-Square (LMS) estimation and gradient approximation to estimate the plant parameters (Murray et al., 2003; Xiaohong and Shen, 2005). The basic idea of direct adaptive control is shown in Fig. 1 (Murray et al., 2003).

This problem can be solved using Kaczmarz's projection algorithm (Chen and Khalil, 1992). Consider the unknown parameter vector θ0 as an element of Rn, such that:

Fig. 1: Plant estimator using LMS approximation

y(t) = φT(t)θ0    (1)

where φT(t) is a vector of known functions that may depend on other known variables and θ0 is the vector of unknown parameters.

From this interpretation it is clear that the n measurements of the elements φ1(t), φ2(t), ..., φn(t) must span the space Rn so that the parameter vector is unique (Wu, 1994). Assume that an estimate θ(t-1) is available and that a new measurement y(t) is obtained. Since the measurement contains information only in the direction of the vector φ(t) in parameter space, we can update the estimate as:

θ(t) = θ(t-1) + βφ(t)    (2)

where the parameter β is chosen so that y(t) = φT(t)θ(t). Thus, from Eq. 2 and 1, we obtain y(t) = φT(t)θ(t-1) + βφT(t)φ(t), and hence:

β = [y(t) - φT(t)θ(t-1)] / [φT(t)φ(t)]    (3)

And the error has the form:

e(t) = y(t) - φT(t)θ(t-1)    (4)

From Eq. 3, 4 and 2 we obtain:

θ(t) = θ(t-1) + φ(t)e(t) / [φT(t)φ(t)]    (5)

Also from Fig. 2 we can conclude that

(6)


Fig. 2: Direct adaptive control using LMS approximation

(7)

and thus,

(8)


(9)


(10)


(11)


(12)

Since the term φT(t)φ(t) does not depend on f(.), we have:

(13)

and thus

(14)


(15)

Using Eq. 14, 15 and 9 we have

(16)

To avoid potential problems that may occur when φ(t) = 0, the projection algorithm can be modified in practice, as follows:

θ(t) = θ(t-1) + μφ(t)[y(t) - φT(t)θ(t-1)] / [ε + φT(t)φ(t)]    (17)

where ε is a small number that depends on n and on the number of hidden layers in the ANN, and μ is a learning rate that depends on the type of ANN, on ε and on gmax given in Eq. 19, such that 0 < μ < 2/gmax.

Figure 2 shows the direct adaptive control using LMS approximation (Raun, 1997).
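
The modified projection update of Eq. 17 can be summarized in a few lines of Python. The sketch below is a minimal illustration, assuming the normalized-LMS form of the Kaczmarz projection described above; the names theta, phi and y are ours and stand for θ(t-1), φ(t) and y(t).

import numpy as np

def projection_update(theta, phi, y, mu=0.95, eps=0.001):
    """One step of the modified (normalized) projection algorithm of Eq. 17.

    theta : current parameter estimate theta(t-1), shape (n,)
    phi   : regressor vector phi(t), shape (n,)
    y     : new scalar measurement y(t)
    mu    : learning rate, assumed to satisfy 0 < mu < 2/gmax
    eps   : small constant keeping the update finite when phi(t) = 0
    """
    error = y - phi @ theta              # prediction error e(t), Eq. 4
    gain = mu * phi / (eps + phi @ phi)  # normalized update direction
    return theta + gain * error          # updated estimate theta(t)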

PROBLEM STATEMENT

Consider the general model of the Single-Input-Single-Output (SISO) nonlinear plant given by:

yp(t) = f[yp(t-1), ..., yp(t-n), u(t-d), u(t-d-1), ..., u(t-n)]    (18)

where, yp(t) is the measured plant output, u(t) is the measured plant input (control output), n is the known plant order, d is the known plant delay and f(.) is an unknown nonlinear function.

Further, the function f(.) is assumed to be continuous and bounded such that:

0 < ∂f/∂u(t-d) ≤ gmax    (19)

From Eq. 19 we conclude that the function f(.) is monotonically increasing in u(t-d), which is analogous to the assumption that the instantaneous input gain in a linear plant (frequently denoted by b0) be positive. The problem, now, is to design a direct and stable adaptive control strategy that drives the plant output to a reference command.

Here, we provide some results to facilitate the development of the proposed self-tuning algorithm. Assume that yp(t) is generated from arbitrary functions φ(t) using the following equations:

(20)


(21)

where, the functions satisfy

(22)

Equation 21 is essentially a simplified representation of Eq. 18. For each time instant t, the function can be related by the following relationship:

(23)

for given yp(t-1), ..., yp(t-n), u(t-d-1), ..., u(t-n). Using Eq. 23, we can verify that Eq. 19 and 22 are equivalent. Therefore, the following development, which is based on Eq. 21, also applies to the plant represented by Eq. 18.

Now, consider the estimation of the parameter θ0 from the observations φ(t) and yp(t) using the following algorithm, which is a modified version of the original algorithm of Ge and Wang (2002). (Given that t, d, n1 and n2 are integers and t is expressed as t = n1 + n2d with 0 ≤ n1 < d, we define t [mod d] as the integer value n1.)


(24)


(25)


(26)


(27)

where, θ(t) denotes the estimate of θ0 after the tth observation.

RADIAL BASIS FUNCTION NETWORKS (RBF)

An RBF neural network can be represented by a two-layer feed-forward network structure: the first layer of the network consists of RBF neurons and the second layer consists of linear neurons. The inputs of the network are connected directly to the RBF neurons, which are in turn connected to the linear neurons in the second layer through adaptable weights. For the sake of definiteness, let us assume that there are j inputs to the RBF network, i RBF neurons in the first layer and q linear neurons in the second layer.

The transfer function of the ith RBF neuron in the first layer has the form:

(28)

where zi(x) is the output of the ith RBF neuron and r(.) is a radial basis function. The most commonly used RBF is the Gaussian function given below:

zi(x) = exp(-||x - ρi||² / σi²)    (29)

in which x is the input vector of dimension j, ρi is the center vector (also of dimension j) of the ith RBF neuron and σi is a positive scalar giving the width of the ith RBF neuron. Both ρi and σi can differ between RBF neurons, which offers flexibility in locally shaping the RBF network structure. The transfer functions of the linear neurons are given in the following matrix notation:

M = WTz(x)    (30)

where M = [M1, M2, ..., Mq]T lumps the outputs of all of the linear neurons together into a vector, z(x) = [z1(x), z2(x), ..., zi(x)]T and W is an i×q matrix whose iqth entry wiq is interpreted as the adaptable weight from the ith RBF neuron in the first layer to the qth linear neuron in the second layer.

Equation 30 decomposes all of the parameters involved in an RBF neural network into two categories: the matrix W, containing all of the adaptable weights, and the vector z(.), containing all of the structural parameters such as the number of neurons i, the centers ρi and the widths σi.
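
As an illustration, the forward pass through such a two-layer RBF network can be written compactly. The sketch below is a minimal example under the assumptions above: Gaussian basis functions as in Eq. 29 and the linear read-out of Eq. 30. The names centers, widths and W correspond to ρi, σi and the weight matrix; the exact normalization of the Gaussian is assumed.

import numpy as np

def rbf_forward(x, centers, widths, W):
    """Forward pass of a two-layer RBF network (Eq. 29 and 30, assumed forms).

    x       : input vector, shape (j,)
    centers : RBF centers rho_i, shape (i, j)
    widths  : positive widths sigma_i, shape (i,)
    W       : adaptable weight matrix, shape (i, q)
    Returns the vector M of the q linear outputs.
    """
    # Gaussian radial basis outputs z_i(x)
    dist2 = np.sum((centers - x) ** 2, axis=1)
    z = np.exp(-dist2 / widths ** 2)
    # Linear output layer: M = W^T z(x)
    return W.T @ z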

CONTROLLER STRUCTURE

In the proposed adaptive control strategy, the controller structure is fixed a priori, but its parameters are updated directly from the input-output measurements using a stable adaptive scheme. It is well known that satisfactory control of a nonlinear plant generally requires a nonlinear controller. Accordingly, a nonlinear controller structure has been chosen. In line with the gain scheduling principle, however, the controller also has a pseudolinear time-varying structure with the corresponding parameters being functions of the operating point (Narendra and Parthasarathy, 1990).

We assume that the plant operating point is measurable or may be inferred. Examples of such systems include hydraulic rigs, AC motors, power plant control systems and turbofan engines (Wang et al., 1995). The controller requires the measurement of the input and output as well as the variables representing the operating point. The controller parameters are assumed to be nonlinear functions of the operating point. These nonlinear functions are approximated by a set of neural networks, whose parameters are updated directly online based on the input-output measurements. In general, the controller structure has the form (Chen and Khalil, 1992):

(31)


Fig. 3: Controller structure

where the order of the controller is given by max(ρ, σ, γ) (Fig. 3). Typical choices for ρ and σ are ρ = σ ≥ n, where n is the dimension of the space Rn. This guideline stems from linear control theory, where it allows cancellation of all open-loop poles and placement of all closed-loop poles at the desired locations in the z-plane. The choice of γ, however, depends on the reference signal ym(t) and corresponds to its moving-average modeling.

The quantity ym(t) is the desired output, which can be generated from an appropriate reference model driven by the set point Sp(t). The parameters ri(θt), si(θt) and ki(θt) can be regarded as the time-varying parameters of the pseudolinear controller, and the variable θt contains all of the measurements that affect the plant operating point. While the controller parameters ri(θt) and si(θt) shape the transient response, the introduction of the parameters ki(θt) ensures set-point tracking. All of these controller parameters are assumed to be nonlinear functions of the operating point.
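
The pseudolinear controller of Eq. 31 can be sketched as follows. Since the exact form of Eq. 31 is not reproduced here, the sketch assumes the common structure in which the control is a weighted sum of recent plant outputs, recent controls and reference samples, with operating-point-dependent gains ri, si and ki supplied by the RBF networks; all variable names are illustrative.

import numpy as np

def controller_output(r, s, k, yp_hist, u_hist, ym_hist):
    """Pseudolinear controller output u(t) (assumed form of Eq. 31).

    r, s, k : gain vectors r_i(theta_t), s_i(theta_t), k_i(theta_t)
              produced by the RBF networks for the current operating point
    yp_hist : recent plant outputs, most recent first
    u_hist  : recent control inputs, most recent first
    ym_hist : reference-model samples, most recent first
    """
    return (np.dot(r, yp_hist[:len(r)])
            + np.dot(s, u_hist[:len(s)])
            + np.dot(k, ym_hist[:len(k)]))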

THE STRUCTURE OF DIRECT ADAPTIVE CONTROL USING RADIAL BASIS FUNCTIONS (RBFs)

This study also considers the Multiple-Input Multiple-Output (MIMO) neural model structure. The structure of the MIMO RBF network is shown in Fig. 4. Here we assume that the RBF network is a processing structure consisting of an input layer, two hidden layers and an output layer. The hidden layer of an RBF network consists of an array of nodes (neurons) and each node contains a parameter vector called a center.

The node calculates the Euclidean distance between the center and the network input vector and passes the result through a radially symmetric nonlinear function. The output layer is essentially a set of linear combiners. The overall input-output response of an Nj-input, Nq-output RBF network is a mapping:

Fig. 4: Radial basis function network (a) model (neuron) and (b) basis function

(32)

In the first case we used the following model:

(33a)


(33b)

In the second case we used the following model:



(34a)


(34b)

Then:

(35)


(36)


(37)

where θt = (x1, x2, ..., xj), 1 ≤ j ≤ Nj, are the operating points, wiq are the weights of the linear combiners, ||.|| denotes the Euclidean norm, ai are the RBF centers, G(x1, x2, x3, ..., ai, bj) is a function from R+→R, Nj is the number of input nodes and Ni is the number of hidden nodes (Sanner and Slotine, 1992).

The neural weight update for the MIMO structure in the direct adaptive control model is given by Sanner and Slotine (1992):

(38)

where θ denotes the parameter vector being updated and φ(t) the vector of known measurement functions. For computational purposes, using Eq. 35, 36, 37 and 31, we have:

(39)

Thus if we use Eq. 25 we get,

(40)

In order to compute the control input, we require knowledge of the parameter vector θ. We modify the algorithm proposed for adaptive control of the plant in Ahmed (2000) such that u(t) = φT(t)θ(t).

(41)


(42)


(43)

where yp(t) and ym(t) are the actual and desired values of the plant output, respectively; ym(t) can be generated as the output of an appropriate reference model.
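
Putting the pieces together, one closed-loop iteration of the proposed direct adaptive scheme can be sketched as below. This is a minimal illustration under our assumptions: the regressor φ(t) is built from the RBF activations of the operating point, the control law is u(t) = φT(t)θ(t) as stated above, and the parameters are updated with a modified projection step of the form of Eq. 17 driven by the tracking error ym(t) - yp(t). The function name control_step is ours.

import numpy as np

def control_step(theta, phi, ym, yp, mu=0.95, eps=10.0):
    """One iteration of the direct adaptive controller (assumed form).

    theta : controller parameter estimate, shape (n,)
    phi   : regressor built from the RBF activations of the operating point
    ym    : desired (reference-model) output at this step
    yp    : measured plant output at this step
    """
    u = phi @ theta                                       # control law u(t) = phi^T(t) theta(t)
    e = ym - yp                                           # tracking error e(t)
    theta_new = theta + mu * phi * e / (eps + phi @ phi)  # Eq. 17-style projection update
    return u, theta_new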

SIMULATION RESULTS

Here, we develop a direct adaptive control scheme using an RBF neural network, suited to controlling a liquid-level system.

Fig. 5: The reference-model output of Model 1

Model 1: In this case a liquid-level system can be described by the following delay difference equation:

(44)

where yp(t) is the output and u(t) is the input of the system. This model was obtained through identification of a laboratory-scale liquid-level system (Sales and Billings, 1990).

The reference model: ym(t) has been taken as the output of a first-order reference model with transfer function:

driven by the setpoint Sp(t) of amplitude ±0.5, as shown in Fig. 5.

We first assume that the time delay is d = 1, so that the parameters are updated at every time step. The operating point θt has been taken as θt = [yp(t), e(t)] and gmax = 2, so that the learning rate satisfies 0 < μ < 1 and ε > 0; we use μ = 0.95 and ε = 0.001. Five Gaussian basis functions with centroids ai ranging uniformly between -0.6 and +0.6 have been chosen: [a1 = -0.48, a2 = -0.24, a3 = 0.0, a4 = 0.24, a5 = 0.48]. The performance was measured by computing the error using (Wang et al., 1995):

Table 1: The change in the performance measure of error E (least mean square) with the number of iterations N, depending on the value of ε, for case 1, using MIMO adaptive control with d = 1, μ = 0.95, ε = 10 and b = 0.3

Fig. 6:
The response of the plant yp(t), with the output of the reference-model ym(t) and the error between them e(t): (a) for N = 1, E = 0.007119; (b) for N = 1, E = 0.03673. For Model 1 (case 1), using direct adaptive control with [MIMO, RBFs, d = 1]

(45)

Using an RBF spread of b = 0.15, with no intersection between the RBFs, the first iteration N = 1 gives E = 0.007119; the result is shown in Fig. 6a. Using an RBF spread of b = 0.3, with intersection between the RBFs, the first iteration N = 1 gives E = 0.03673; the result is shown in Fig. 6b.

Table 1 shows the performance measure of error E against the number of iterations N for different values of ε. The selected value ε = 10 gives the best performance measure of error.

For N = 1001 iterations, the response yp(t), the output of the reference model ym(t) and the error e(t) are shown in Fig. 7. It appears that the performance, in terms of the performance measure of error E, is improved.
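
For concreteness, the Model 1 setup described above (five Gaussian centroids between -0.6 and +0.6, spread b and an LMS-type performance measure E) might be initialized as in the short sketch below. Since Eq. 45 is not reproduced here, E is computed as the mean squared tracking error, an assumption consistent with the least-mean-square description in Table 1.

import numpy as np

# Five Gaussian centroids a_i ranging uniformly between -0.6 and +0.6
centers = np.array([-0.48, -0.24, 0.0, 0.24, 0.48])
spread_b = 0.15              # RBF spread; b = 0.3 gives overlapping RBFs
mu, eps, d = 0.95, 0.001, 1  # learning rate, projection constant, plant delay

def gaussian_activations(x):
    """RBF activations of a scalar operating-point variable x."""
    return np.exp(-((x - centers) ** 2) / spread_b ** 2)

def performance_E(ym, yp):
    """Assumed LMS-type performance measure standing in for Eq. 45."""
    ym, yp = np.asarray(ym), np.asarray(yp)
    return np.mean((ym - yp) ** 2)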

Model 2: In this case a liquid-level system that changes with time can be described by the following difference equation (Ahmed, 2000):

Fig. 7:
The response of the plant yp(t), with the output of the reference-model ym(t), the error between them e(t) and the output of the controller u(t), using direct adaptive control with d = 1, μ = 0.95, ε = 10 and b = 0.3. For Model 1 (case 1), using direct adaptive control with [MIMO, RBFs, d = 1]

Fig. 8: The reference-model output of Model 2

Fig. 9:
The response of the plant yp(t), with the output of the reference-model ym(t), the error between them e(t) and the output of the controller u(t), for Model 2 (case 1), using direct adaptive control with [MIMO, RBFs]

Fig. 10: (a) The performance measure of error E and (b) the response of the plant yp(t), with the output of the reference model, for Model 2

(46)

The reference model: ym(t) has been taken as the output of a second-order reference model with transfer function:

driven by the setpoint Sp(t) with a maximum value of 10 and a minimum value of 7, as shown in Fig. 8.

The operating point θt has been taken as θt = [yp(t), e(t)] and gmax = 2, so that the learning rate satisfies 0 < μ < 1 and ε > 0; we use μ = 0.95 and ε = 10. Five Gaussian basis functions with centroids ai ranging uniformly between +7 and +10 have been chosen: [a1 = 7.3, a2 = 7.9, a3 = 8.5, a4 = 9.1, a5 = 9.7]. The performance was measured by computing the error using Eq. 45.

Using an RBF spread of b = 1.5, with intersection between the RBFs, the simulation was run for N = 1 and then repeated for N = 1100, giving E = 0.00025; the results are shown in Fig. 9.

Figure 10a shows the performance measure of error E when the parameters of the plant are changed; no change can be seen in the response of the plant, as shown in Fig. 10b.

CONCLUSIONS

From our numerical results for Models 1 and 2, we conclude that the back-propagation neural network algorithm can be used effectively to compute and estimate the adaptive control for nonlinear plants. Moreover, as the number of hidden layers, or neurons, increases we may obtain a more accurate estimate of the nonlinear plant control. The learning rate μ depends on the upper bound gmax + ε of the continuous function f(.) and on the plant delay d.

REFERENCES

  • Ahmed, M.S., 2000. Neural-net-based direct adaptive control for a class of nonlinear plants. IEEE Trans. Autom. Control, 45: 119-124.
    CrossRef    Direct Link    


  • Chen, F.C. and H.K. Khalil, 1992. Adaptive control of nonlinear systems using neural networks. Int. J. Control, 55: 1299-1317.
    CrossRef    Direct Link    


  • Diao, Y. and K.M. Passino, 2002. Adaptive fuzzy-neural control for interpolated nonlinear systems. IEEE Trans. Fuzzy Syst., 10: 583-595.
    CrossRef    Direct Link    


  • Ge, S.S. and C. Wang, 2002. Direct adaptive neural networks control for a class of nonlinear systems. IEEE Trans. Neural Networks, 13: 214-221.
    CrossRef    PubMed    Direct Link    


  • Kosko, B., 1997. Neural networks and fuzzy systems: A dynamical systems approach to machine intelligence. Prentice-Hall India Private Limited USA, Inc.


  • Ioannou, P. and B. Fidan, 2006. Adaptive control tutorial-advances in design and control. Society for Industrial and Applied Mathematics, Philadelphia, PA 19104-2688, USA.


  • Murray, R.M., K.J. Astrom, S.P. Boyd, R.W. Brockett and G. Stein, 2003. Future directions in control in an information-rich world. IEEE Control Syst. Mag., 32: 20-23.
    Direct Link    


  • Narendra, K.S. and K. Parthasarathy, 1990. Identification and control of dynamical systems using neural networks. IEEE Trans. Neural Networks, 1: 4-27.
    CrossRef    Direct Link    


  • Nounou, H.N. and K.M. Passino, 2004. Stable auto-tuning of adaptive fuzzy-neural controllers for nonlinear discrete-time systems. IEEE Trans. Fuzzy Syst., 12: 70-83.
    CrossRef    Direct Link    


  • Passino, K.M. and S. Yurkovich, 1998. Fuzzy control. Addison Wesley Longman, Inc.


  • Raun, D., 1997. Intelligent hybrid systems: Fuzzy logic, neural networks and genetic algorithms. Kluwer Academic Publishers.


  • Sales, K.R. and S.A. Billings, 1990. Self-tuning control of non-linear ARMAX models. Int. J. Control, 51: 753-769.
    CrossRef    Direct Link    


  • Sanner, R.M. and J.J.E. Slotine, 1992. Gaussian networks for direct adaptive control. IEEE Trans. Neural Networks, 3: 837-863.
    CrossRef    Direct Link    


  • Xiaohong, J. and T. Shen, 2005. Adaptive feedback control of nonlinear time-delay systems: The LaSalle- Razumikhin-based approach. IEEE Trans. Autom. Control, 50: 1909-1913.
    CrossRef    Direct Link    


  • Wang, H., G.P. Liu and M. Brown, 1995. Advanced adaptive control. Elsevier Science, New York.


  • Wu, J.K., 1994. Neural networks and simulation methods. Electrical Engineering and Electronics A series of Reference Books and Textbooks, Marcel Dekker, Inc.
