INTRODUCTION
Accurate online estimates of critical system states and parameters are needed in a variety of engineering applications, such as condition monitoring, fault diagnosis and process control. In these and many other applications, one must estimate a system variable that is not easily accessible for measurement, using only measured system inputs and outputs.
Since the development of the well-known Kalman Filter (KF) approach for state estimation in linear systems (Crassidis and Junkins, 2004; Kalman, 1960), this method has been widely studied in the literature and applied to many problems. The KF is a recursive estimation approach in which state estimates are obtained in two distinct steps. In the first step, a one-step-ahead state prediction is computed from the latest state estimate. In the second step, the state estimate is refined by linearly combining the state prediction obtained in the first step with the new output measurements.
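To make the two-step recursion concrete, the following sketch implements one predict/correct iteration for a linear discrete-time model. It is a minimal illustration; the function name and variables are ours, not from the original.

```python
import numpy as np

def kalman_step(x, P, A, B, C, Q, R, u, y):
    """One recursive KF iteration: predict, then linearly correct."""
    # Step 1: one-step-ahead prediction from the latest estimate.
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Step 2: refine by linearly combining the prediction
    # with the new output measurement y.
    S = C @ P_pred @ C.T + R              # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

Each call consumes one new measurement and returns the refined estimate and its covariance, which are fed back in on the next call.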
The KF has been extended to nonlinear systems. One such extension, the extended Kalman filter (EKF) (Crassidis and Junkins, 2004), has attracted considerable interest from researchers but has found few industrial applications (Jonsson and Palsson, 1994). In the EKF approach, the state prediction is obtained in the first step using nonlinear state propagation based on the available model. In the second step, the newly received output measurements are combined with the state prediction, still in a linear fashion; as a result, the state estimate is effectively a linear combination of all output measurements. In both the KF and the EKF, the model of the system used in the predictor is assumed to be perfectly known, along with the statistics of the process and sensor noise entering the system. These assumptions severely restrict the applicability of these filters in real-world problems, and a more general nonlinear state estimation approach is therefore desirable. Although the EKF algorithm has been in use for over 25 years, rigorous analyses of it have appeared in the literature only in the last few years (Boutayeb et al., 1997; Reif et al., 1999).
Neural networks have been extensively investigated in the context of adaptive control and system identification (Billings et al., 1992; Haykin, 1999; Ljung and Sjoberg, 1992; Narendra and Parthasarathy, 1990), but only more recently has their use been proposed for state estimation (Rochefort et al., 2005).
This study introduces a new approach to estimating the position and velocity of a six-degree-of-freedom robot arm manipulator using a nonlinear filter based on neural networks; the results obtained with the presented algorithm are compared with those of the EKF method.
DYNAMIC MODEL OF THE PUMA 560 WITH SIX DEGREES OF FREEDOM
The PUMA 560 (Programmable Universal Machine for Assembly) depicted in Fig. 1 is a 6-DOF manipulator with revolute joints that has been widely used in industry and is still used in academia. The PUMA uses DC servo motors as its actuators and has high-resolution relative encoders for accurate positioning and potentiometers for calibration (Armstrong et al., 1986).
Manipulator dynamics is concerned with the equations of motion: the way in which the manipulator moves in response to torques applied by the actuators or to external forces. The history and mathematics of the dynamics of serial-link manipulators are well covered in the literature. The equations of motion for an n-axis manipulator are given by:

Q = M(q)q̈ + C(q, q̇)q̇ + F(q̇) + G(q)     (1)

where, q is the vector of generalized joint coordinates describing the pose of the manipulator, q̇ is the vector of joint velocities, q̈ is the vector of joint accelerations, M is the symmetric joint-space inertia matrix, or manipulator inertia tensor, C describes Coriolis and centripetal effects (centripetal torques are proportional to q̇_{i}^{2}, while the Coriolis torques are proportional to q̇_{i}q̇_{j}), F describes viscous and Coulomb friction and is not generally considered part of the rigid-body dynamics, G is the gravity loading vector and Q is the vector of generalized actuator torques. It is straightforward to express q̈ from Eq. 1 as follows:

q̈ = M(q)^{-1} [Q - C(q, q̇)q̇ - F(q̇) - G(q)]     (2)
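Solving Eq. 1 for the joint accelerations can be sketched as follows. This is a minimal illustration; the callables M_fn, C_fn, F_fn and G_fn are placeholder names for the model terms of Eq. 1, not part of the original formulation.

```python
import numpy as np

def forward_dynamics(Q, q, qd, M_fn, C_fn, F_fn, G_fn):
    """Solve Eq. 1 for the joint accelerations:
    qdd = M(q)^{-1} (Q - C(q, qd) qd - F(qd) - G(q))."""
    M = M_fn(q)                       # n x n joint-space inertia matrix
    rhs = Q - C_fn(q, qd) @ qd - F_fn(qd) - G_fn(q)
    return np.linalg.solve(M, rhs)    # more stable than forming M^{-1} explicitly
```

Using `linalg.solve` rather than an explicit matrix inverse avoids unnecessary numerical error when M is poorly conditioned.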
For the six-degree-of-freedom model used here, n = 6. The nonzero elements of the inertia matrix (in kg·m^{2}) are given as:
The nonzero Christoffel (Coriolis) symbols (in N·m·s^{2}) are given as:

Fig. 2: Simulink simulation diagram of the PUMA 560
Fig. 3: Simulink diagram of the PUMA 560 dynamics
and the nonzero gravity vector entries (in N·m) are given below:
The Simulink-based simulation model of the PUMA 560 dynamics and the complete simulation diagram are shown in Fig. 2 and 3.
KALMAN FILTER
The main steps of the extended Kalman filter (EKF) can be formulated as described by Haykin (1996). The formulation assumes a nonlinear system with input states u, output states y and non-measurable internal states x. In the following formulation, w and v are the process and output noise vectors, respectively.
The state vector of interest in this estimation problem is:

x = [q^{T} q̇^{T}]^{T}

The nonlinear state-space model follows from Eq. 1 as:

x_{k+1} = f(x_{k}, u_{k}) + w_{k},  y_{k} = h(x_{k}) + v_{k}     (3)

The extended Kalman filter equations are given as follows:

x̂^{-}_{k} = f(x̂_{k-1}, u_{k-1}),  P^{-}_{k} = Φ_{k-1} P_{k-1} Φ^{T}_{k-1} + Q_{k-1}

K_{k} = P^{-}_{k} H^{T}_{k} (H_{k} P^{-}_{k} H^{T}_{k} + R_{k})^{-1}

x̂_{k} = x̂^{-}_{k} + K_{k} (y_{k} - h(x̂^{-}_{k})),  P_{k} = (I - K_{k} H_{k}) P^{-}_{k}

Matrix H_{k} is obtained by linearization of the function h(x) as follows:

H_{k} = ∂h/∂x evaluated at x = x̂^{-}_{k}

In the same way, the matrix Φ_{k} is obtained as:

Φ_{k} = ∂f/∂x evaluated at x = x̂_{k-1}
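One EKF iteration can be sketched in code. This is a minimal illustration with our own function names; for brevity the Jacobians Φ_k and H_k are obtained here by finite-difference linearization rather than analytically, which is an implementation choice, not something prescribed by the paper.

```python
import numpy as np

def numerical_jacobian(fn, x, eps=1e-6):
    """Finite-difference linearization, e.g. H_k = dh/dx at the estimate."""
    n = len(x)
    fx = fn(x)
    J = np.zeros((len(fx), n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (fn(x + dx) - fx) / eps
    return J

def ekf_step(x, P, f, h, Q, R, y):
    """One EKF iteration: nonlinear propagation, linear correction."""
    Phi = numerical_jacobian(f, x)          # state transition Jacobian
    x_pred = f(x)                            # nonlinear state propagation
    P_pred = Phi @ P @ Phi.T + Q
    H = numerical_jacobian(h, x_pred)        # measurement Jacobian H_k
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (y - h(x_pred))     # linear combination with prediction
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Note that the correction step is still linear in the innovation, which is exactly the limitation the neural filter discussed later seeks to avoid.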
NEURAL NETWORKS
Multilayer Perceptron (MLP): A Neural Network (NN) is a computational
structure composed of a collection of artificial neurons. Each neuron is an
operator that processes, with a nonlinear activation function φ, the weighted
sum of its inputs to produce an output signal. The connections between the artificial
neurons define the behavior of the net, identify its applicability and specify
its training methods (Haykin, 1994). In a Multi-Layer Perceptron (MLP), the neurons are grouped in one or more layers, with the output of each layer being the input to the next layer.
The training process consists of adjusting the neuron weights based on the expected output and some optimization rule. In a supervised scheme, the weights are adjusted iteratively by comparing the output of the network with the desired value at each step. In this way, the training process teaches the net the expected output for each given input.
A feed-forward MLP is a mapping function with n_{0} input elements and n_{L} output elements. An MLP network is composed of L layers, with n_{l} (l = 1, 2, …, L) neurons in layer l. When layer l is neither 0 nor L, it is referred to as a hidden layer. To formalize this description, if y_{i}^{l} is the output of the i^{th} neuron of layer l, w_{ij}^{l} is the weight of the j^{th} input (coming from the j^{th} neuron of the preceding layer) and φ^{l} is the activation function, then:

y_{i}^{l} = φ^{l}(Σ_{j} w_{ij}^{l} y_{j}^{l-1} + b_{i}^{l})     (4)

where, y_{j}^{l-1} is the output of the j^{th} neuron in the previous layer l-1 and b_{i}^{l} is the bias, introduced to allow the neuron to present a non-null output even in the presence of a null input (Stinchcombe and White, 1989).
While an MLP can have any number of layers L, for most applications a single hidden layer (that is, two layers in total) is sufficient, as shown in Fig. 4. This system has p inputs, q hidden nodes and r outputs. In matrix form, the complete expression can be formulated as:

y = φ^{2}(W^{2} φ^{1}(W^{1} x + b^{1}) + b^{2})
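The layer equation above can be sketched for the single-hidden-layer case. This is an illustrative implementation with our own names, assuming logistic activations in both layers; the paper leaves the activation choice to the designer.

```python
import numpy as np

def logistic(v):
    """The logistic squashing function phi(v) = 1 / (1 + e^{-v})."""
    return 1.0 / (1.0 + np.exp(-v))

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a single-hidden-layer MLP:
    y^l = phi(W^l y^{l-1} + b^l) applied layer by layer."""
    hidden = logistic(W1 @ x + b1)       # q hidden-node outputs
    return logistic(W2 @ hidden + b2)    # r network outputs
```

With W1 of shape (q, p) and W2 of shape (r, q), the function maps p inputs to r outputs exactly as in the matrix form above.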
Training algorithm: Training a neural net generally consists of applying a method to adjust or estimate the neuron weights. The training process minimizes the neural net output error through the application of an optimization method. All such methods need to know how the net output varies with respect to a variation of a given neuron weight.
The back-propagation algorithm is an algorithm for learning in feed-forward networks using the mean squared error and gradient descent (Rumelhart et al., 1986). The error that we choose to minimize for our training algorithm is:

E(n) = (1/2) Σ_{k} e_{k}^{2}(n)

calculated at a given time step, n. The calculated error at the output layer is:

e_{k} = d_{k} - y_{k}

where d_{k} is the desired output of element k and y_{k} is the output layer response specified by:

y_{k} = φ(v_{k})

Here, v_{k} is the output layer activation level:

v_{k} = Σ_{j} w_{kj} y_{j} + b_{k}

This continues through the hidden layer to the input layer:

y_{j} = φ(v_{j})

and

v_{j} = Σ_{i} w_{ji} x_{i} + b_{j}

To calculate the gradient of the error function, we use the chain rule to get:

∂E/∂w_{kj} = (∂E/∂e_{k})(∂e_{k}/∂y_{k})(∂y_{k}/∂v_{k})(∂v_{k}/∂w_{kj}) = -e_{k} φ'(v_{k}) y_{j}

This derivative allows the weight update to be calculated, using a learning parameter η:

Δw_{kj} = -η ∂E/∂w_{kj}

The resulting weight update is then:

Δw_{kj} = η e_{k} φ'(v_{k}) y_{j}

To simplify the equations, we calculate an intermediate delta value, δ, for the output layer:

δ_{k} = e_{k} φ'(v_{k})

which gives the final weight update equation for the output layer:

Δw_{kj} = η δ_{k} y_{j}

This operation is repeated for the hidden layer:

∂E/∂w_{ji} = -(Σ_{k} δ_{k} w_{kj}) φ'(v_{j}) x_{i}

and again defining an intermediate δ:

δ_{j} = φ'(v_{j}) Σ_{k} δ_{k} w_{kj}

gives the final weight update:

Δw_{ji} = η δ_{j} x_{i}
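The delta-rule updates above can be collected into a single training step. The sketch below assumes a single-hidden-layer network with logistic activations (so φ'(v) = y(1 - y)); the function and variable names are ours.

```python
import numpy as np

def logistic(v):
    return 1.0 / (1.0 + np.exp(-v))

def backprop_step(x, d, W1, b1, W2, b2, eta=0.5):
    """One gradient-descent update minimising E = 1/2 * sum((d - y)^2)."""
    # Forward pass through hidden and output layers.
    h = logistic(W1 @ x + b1)
    y = logistic(W2 @ h + b2)
    # Output-layer delta: delta_k = e_k * phi'(v_k), with phi' = y(1 - y).
    delta_out = (d - y) * y * (1.0 - y)
    # Hidden-layer delta: backpropagate the output deltas through W2.
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)
    # Final weight updates: Delta w = eta * delta * (layer input).
    W2 = W2 + eta * np.outer(delta_out, h)
    b2 = b2 + eta * delta_out
    W1 = W1 + eta * np.outer(delta_hid, x)
    b1 = b1 + eta * delta_hid
    return W1, b1, W2, b2
```

Repeated calls on a training pair drive the output toward the desired value d, since each step descends the gradient of E.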
Activation function: The nonlinear neuron computes its activation level by adding all the weighted activations it receives. It then transforms this activation level into a response using an activation, or squashing, function φ. Several squashing functions can be used. The most common one is the logistic function:

φ(v) = 1 / (1 + e^{-v})

This function maps the set of real numbers into the (0, 1) range. Its derivative is easy to compute:

φ'(v) = φ(v)(1 - φ(v))

which is useful for computing the back-propagation error. The hyperbolic tangent is also often used, when the response range is the interval (-1, +1):

φ(v) = tanh(v)

Its derivative is also easy to compute:

φ'(v) = 1 - φ^{2}(v)

There are many such activation functions that can be used. The choice of which to use in which layer is left to the designer of the network, based on the dynamics of the system.
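The two squashing functions and their derivatives translate directly into code; a small illustrative helper module:

```python
import numpy as np

def logistic(v):
    """Maps the reals into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-v))

def logistic_deriv(v):
    """phi'(v) = phi(v) (1 - phi(v))."""
    phi = logistic(v)
    return phi * (1.0 - phi)

def tanh_deriv(v):
    """For phi(v) = tanh(v), phi'(v) = 1 - phi(v)^2; range is (-1, +1)."""
    return 1.0 - np.tanh(v) ** 2
```

Having the derivative expressed in terms of the function value itself is what makes these two functions cheap to use in back-propagation: the forward-pass outputs can be reused when computing the deltas.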
SIMULATION RESULTS
To illustrate the performance of the proposed estimator, an initial set of simulation results is presented. In these simulations we have utilized the model of a PUMA 560-type robot manipulator as presented in Fig. 1.
The network considered in this paper for approximating the filter functional forms is the feed-forward multilayer perceptron (FFMLP) illustrated in Fig. 4. The state filter utilizing neural networks assumes that a deterministic system model is available with the structural form of Eq. 3. The neural network is chosen to be a three-layer FFMLP with eighteen nodes in the first layer, corresponding to the six components of torque, the six components of the joint position q and the six components of the velocity qd. The twelve nodes in the third layer correspond to the outputs q and qd. The number of units in the hidden layer was determined experimentally by studying the network behavior during the training process, taking into consideration factors such as convergence rate and error criteria. In this regard, different configurations were tested and the most suitable configuration was selected based on the accuracy level required. The number of hidden units for the MLP is 40.
In this study, an attempt is made to develop the neural state filter and compare it with the EKF.

Fig. 4: Topology of the MLP network
For the implementation of the EKF, the covariance matrices of the process noise and measurement noise were tuned empirically until satisfactory performance was obtained.
The initial guess of the covariance matrix P_{0} was simply zero.
After simulation, the predicted trajectories and the true trajectory are shown in Fig. 5 (position estimation) and Fig. 6 (velocity estimation), respectively. Moreover, Fig. 7 and 8 exhibit an error comparison of the algorithms.
From the results obtained it can be observed that the neural estimator is more accurate than the EKF. In Fig. 6 it is clear that the EKF is not robust enough to estimate the velocity of the arm manipulator.
As can be seen from Fig. 7 and 8, the estimation errors for the proposed neural filter decrease rapidly after the first estimation. The average RMS error for the EKF was 0.0309, whereas the neural observer presents an average error of 0.0035.
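An average RMS figure of this kind can be computed as sketched below. The paper does not spell out its exact metric, so this is an assumption: RMS error per joint over the trajectory, then averaged across the six joints.

```python
import numpy as np

def average_rms_error(true_traj, est_traj):
    """Average RMS estimation error across all joints.
    true_traj, est_traj: arrays of shape (timesteps, n_joints)."""
    # RMS over time for each joint, then mean over the joints.
    rms_per_joint = np.sqrt(np.mean((true_traj - est_traj) ** 2, axis=0))
    return float(np.mean(rms_per_joint))
```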

Fig. 5: Position estimation for the six-DOF PUMA 560 arm manipulator

Fig. 6: Velocity estimation for the six-DOF PUMA 560 arm manipulator

Fig. 7: Position estimation error for both methods

Fig. 8: Velocity estimation error for both methods
From the simulations it can be said that the neural filter proposed for the studied application is more consistent and convergent than the EKF. In this study we present a new method for developing neural network-based state filters for use when system models are not available or are inaccurate (Menon et al., 1998). The study demonstrates the accuracy and effectiveness of the state filtering method on a real-world state and parameter filtering problem.
DISCUSSION
More recently, neural networks have been proposed for use in state estimation problems, approximating the nonlinear mappings between the states, inputs and outputs. No constraining assumptions are needed about the structure of the dynamic system investigated or about the nature of the stochastic process and/or measurement noises. This is unlike traditional state estimation methods, where explicit knowledge of the system dynamics and the stochastic processes affecting the system must be available or assumed.
In this study, neural networks have been used as a tool for state estimation of a robot arm manipulator and compared to the standard method for nonlinear state estimation, the EKF. The state estimation methods have been compared using the same tuning parameters to make the comparative study as credible as possible.
The results show promise for using neural networks to develop state filters for use when system models are not available or are inaccurate. The study demonstrates the accuracy and effectiveness of the adaptive state filtering method on a real-world process state and parameter filtering problem.
CONCLUSION
The extended Kalman filter has been widely used as a position estimation method in robotic applications. The system model is generally nonlinear in robotics, so the classical extended Kalman filter suffers from a fundamental flaw: linear approximation of the nonlinear system equations with a first-order Taylor series induces linearization error. To overcome these drawbacks, a new estimation method based on neural networks was proposed in this paper. The simulation results show an improvement with the new filter. It avoids linear approximation of the nonlinear system equations and is free of linearization error. The filter is consistent and convergent. Compared with the EKF, it gives more conservative estimation results.