INTRODUCTION
Among multi-sensor TDOA (Time Difference Of Arrival) positioning systems, the three-sensor TDOA positioning system is widely used because its scale is manageable and its positioning accuracy meets the demands of trajectory estimation (Yong-Guang et al., 2004; Chao-Mou and Zhi-Qiang, 2009).
The three-sensor TDOA positioning algorithm (Foy, 1976; Chan and Ho, 1994; Huang and Lu, 2004) requires solving nonlinear equations, commonly by the Taylor-expansion method, the two-step least-squares method, the modified least-squares method, etc. Owing to the geometric disposition of the stations, noise and measurement errors, multiple solutions or no solution are common for the three-station TDOA positioning algorithm (Chao-Mou and Zhi-Qiang, 2009). This phenomenon is commonly eliminated by adding stations and fusing their data.
For single-sensor location, Target Motion Analysis (TMA) has been widely studied in fields such as positioning algorithms (Zhong-Kang et al., 2008; Nardone and Graham, 1997; Moon and Nardone, 2000), observability of motion parameters (Nardone and Aidala, 1981; Fogel and Gavish, 1988; Song, 1996) and optimal maneuver strategies of the observation station (Fawcett, 1988; Hammel et al., 1989). TMA estimates the trajectory from the TOA of the signal, bearing, Doppler shift and other types of measurements, commonly exploiting the constant-speed motion of the target and the periodicity of the signal.
In multi-sensor bearing-only target motion analysis, the target can be positioned by combining crossing location with target motion analysis. Iltis and Anderson (1996) and Tremois and Le Cadre (1996) have researched methods of estimating the motion trajectory from bearing-only measurements.
This study investigates a target positioning algorithm that integrates three-sensor TDOA positioning with target motion analysis. First, a model is established for target location and motion analysis in three-dimensional space. Second, estimation algorithms for range and velocity are proposed according to the observability of TOA-only target motion analysis. Then, the observability of the position coordinates and velocity components in two- and three-dimensional space is analyzed and the corresponding estimation algorithms are presented. Finally, the proposed algorithms are evaluated by simulation.
ALGORITHM MODEL
Assume that the positions of the three sensors are R_{1} (x_{01}, y_{01}, z_{01}), R_{2} (x_{02}, y_{02}, z_{02}) and R_{3} (x_{03}, y_{03}, z_{03}) and that the target velocity is (v_{x}, v_{y}, v_{z}).

Fig. 1: TMA for 3 sensors
The three sensors receive the N+1 signals sent by the moving target at the positions T_{k}, T_{k-1}, ... and T_{k-N}, as shown in Fig. 1. Suppose that the sending time of each signal is t_{t(k-j)} (j = 0, 1,..., N) and that the receiving time at sensor i is t_{ri(k-j)} (i = 1, 2, 3). The sending-time interval of any two signals is a multiple of the signal cycle T_{s}. The relationship between the receiving time and the sending time is:

t_{ri(k-j)} = t_{t(k-j)} + r_{ij}/c (1)

where c is the speed of light and r_{ij} is the range from the target at position T_{k-j} to sensor i:

r_{ij} = [(x_{j}-x_{0i})^{2} + (y_{j}-y_{0i})^{2} + (z_{j}-z_{0i})^{2}]^{1/2} (2)
For the first sensor, the receiving-time difference between the signals sent at target positions T_{k-j} and T_{k-l} is:
where:
The term r_{1j}-r_{1l} in Eq. 3 is the difference between the ranges from the target positions T_{k-j} and T_{k-l} to sensor 1, which is generally much smaller than the distance the electromagnetic wave travels in one signal cycle T_{s}. If the following condition is met:
Δm can be calculated by:

Δm = round[(t_{r1(k-j)} - t_{r1(k-l)})/T_{s}]

where round(·) is the rounding function.
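As a concrete illustration of this rounding step, the following sketch recovers Δm from two receive times at sensor 1. The constant `T_S` and the helper name are assumptions for the example, not values from the original.

```python
# Hypothetical illustration of recovering the integer cycle count Δm
# from the receive-time difference at sensor 1. All numbers invented.

C = 3.0e8        # speed of light, m/s (approximate)
T_S = 1.0e-3     # signal cycle T_s, s (assumed value)

def cycle_count(t_r1_j: float, t_r1_l: float, ts: float = T_S) -> int:
    """Round the receive-time difference at sensor 1 to a whole number
    of signal cycles: Δm = round((t_r1(k-j) - t_r1(k-l)) / T_s).
    Valid only while |r_1j - r_1l| is much smaller than c*T_s."""
    return round((t_r1_j - t_r1_l) / ts)

# Example: two signals sent 7 cycles apart; the target moved so that
# the range term adds 1500 m / c = 5 microseconds to the difference.
dm = cycle_count(7 * T_S + 1500.0 / C, 0.0)
```

The rounding discards the small range-induced fraction of a cycle, which is exactly the condition stated above.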
According to Eq. 3, the range difference from the target positions T_{k-j} and T_{k-l} to sensor 1 is:
According to Eq. 4, the sending-time difference between the signals sent at target positions T_{k-j} and T_{k-l} is:
In the same way, the range difference from the target positions T_{k-j} and T_{k-l} to sensor 2 or 3 is:
According to Eq. 1, the difference between the ranges from the target at position T_{k-j} to sensors 1 and 2 is:
The difference between the ranges from the target at position T_{k-j} to sensors 1 and 3 is:
Equations 7, 9 and 10 model the range difference of different signals to the same sensor, Eq. 11 and 12 model the range difference of the same signal to different sensors and Eq. 8 models the sending-time difference of different signals.
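To make the two kinds of range-difference models concrete, here is a minimal sketch of how each would be computed from raw receive times. The function names and all numeric values are assumptions for illustration.

```python
C = 3.0e8   # speed of light, m/s (approximate)

def same_sensor_range_diff(t_rj, t_rl, dm, ts):
    """Models like Eq. 7/9/10: range difference of two DIFFERENT
    signals received by the SAME sensor, after removing dm whole
    signal cycles of duration ts."""
    return C * (t_rj - t_rl - dm * ts)

def cross_sensor_range_diff(t_r1, t_r2):
    """Models like Eq. 11/12: range difference of the SAME signal
    received by two DIFFERENT sensors (the classic TDOA measurement)."""
    return C * (t_r1 - t_r2)

# Invented example: receive times 7 cycles + 5 microseconds apart
dr_same = same_sensor_range_diff(0.007005, 0.0, 7, 1.0e-3)   # about 1500 m
dr_cross = cross_sensor_range_diff(1.0e-5, 0.0)              # about 3000 m
```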
OBSERVABILITY OF THE TARGET RANGE BASED ON MULTIPLE SENSORS
In the model of Eq. 11 for the range difference of the same signal to sensors 1 and 2 at target position T_{k-j}, move r_{2j} to the left side, move dr_{12j} to the right side and square both sides; the following formula is derived:
Using Eq. 2:
where the range from sensor i to the coordinate origin is:
Let:
Eq. 14 is rewritten as:
To find the range of the target at position T_{k-j}, using Eq. 7 we can obtain:
The matrix equation is established by using N+1 measurements:
where:
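The entries of the stacked matrix equation above (Eq. 20 in the text's numbering) are not reproduced here; whatever their exact form, the N+1 stacked measurements are solved in the least-squares sense. A generic sketch with a synthetic three-unknown system, all values invented:

```python
import numpy as np

# Synthetic stand-in for a stacked system A x = b built from N+1
# measurements: 8 rows play the role of N+1 = 8 measurements and the
# 3 columns the unknowns. The real entries come from Eq. 20.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))
x_true = np.array([2.0, -1.0, 0.5])
b = A @ x_true                      # noise-free for this sketch

# Least-squares estimate of the unknown vector
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With noisy measurements the same call returns the minimum-residual estimate instead of the exact solution.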
In the same way, according to the model of Eq. 12 for the range difference of the same signal to sensors 1 and 3 at target position T_{k-j}, if:
we can obtain:
where:
The solvability conditions of Eq. 20 and 23
are:
and
When the target moves along the extension of the line connecting two sensors, dr_{12j} or dr_{13j} equals the distance between the two corresponding sensors and is constant for any j. In Eq. 20 or 23, the first and third columns of A_{1j} or A_{2j} are then proportional, with ratio coefficient dr_{12j} or dr_{13j}, so neither Eq. 20 nor 23 has a solution.
Combining Eq. 20 and 23 gives:
where:
The solvability condition of Eq. 26 is:
According to the characteristics of constant-speed motion, the range r_{1j} from the target to sensor 1 can be estimated from Eq. 20, 23 and 26 and then the ranges from the target to sensors 2 and 3 can be obtained from Eq. 11 and 12.
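The last step can be sketched as follows; the sign convention of the range differences and all numbers are assumptions for the example.

```python
# Once r_1j is known, the cross-sensor differences of Eq. 11 and 12
# give the other two ranges: r_2j = r_1j - dr_12j and
# r_3j = r_1j - dr_13j (assumed sign convention).
def other_ranges(r1, dr12, dr13):
    return r1 - dr12, r1 - dr13

r2, r3 = other_ranges(100000.0, 2500.0, -1800.0)
```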
OBSERVABILITY OF THE TARGET RANGE AND VELOCITY BASED ON SINGLE SENSOR
In Eq. 7, move r_{1l} to the left side, move Δr_{1jl} to the right side and square both sides; the following formula is derived:
Using Eq. 2, then:
Let:
where v is the speed of the target motion.
According to Eq. 8, Eq. 29 is transformed into:
For any j, with l ∈ {0, 1, 2,..., N} and l ≠ j, N equations in the variable r_{1j} can be established. To simplify the matrix form for any j, the trivial equation for l = j is also included, so N+1 equations are established in the variable r_{1j}. The matrix form of the equations is:
where:
For sensor 2, let:
According to Eq. 9, then:
where:
For sensor 3, let:
According to Eq. 10, then:
where:
Equations 35, 38 and 41 are the single-sensor range and velocity estimation algorithms, which estimate the target range and velocity from the models of the range difference of different signals on the trajectory to the same sensor. When the rank of the corresponding matrix A_{4j}, A_{5j} or A_{6j} is 3, the corresponding Eq. 35, 38 or 41 has a solution and the range and velocity of the target can be estimated.
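The rank condition can be checked numerically before solving. A small guard, with an invented degenerate example; the function name and the assumed number of unknowns (3) are illustrative only:

```python
import numpy as np

def solve_if_full_rank(A, b, unknowns=3):
    """Solve A x = b in the least-squares sense only when the rank
    condition holds; otherwise report the degenerate geometry."""
    if np.linalg.matrix_rank(A) < unknowns:
        return None                 # no unique estimate is possible
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Rank-1 example: every row is a multiple of the first, so rank < 3
A_bad = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],
                  [1.0, 2.0, 3.0],
                  [3.0, 6.0, 9.0]])
res = solve_if_full_rank(A_bad, np.zeros(4))
```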
According to Eq. 11, expressing r_{2j} in Eq. 37 in terms of r_{1j} gives:
According to Eq. 12, expressing r_{3j} in Eq. 40 in terms of r_{1j} gives:
Combining Eq. 34, 42 and 43 gives:
where:
Equation 44 uses the TOA measurements of the three sensors to estimate the velocity and the range from the target to the first sensor. When the rank of matrix A_{7j} is 5, Eq. 44 has a solution.
TMA OF TWO-DIMENSIONAL TARGET
For a two-dimensional target, Eq. 16, 17, 21 and 22 become:
The matrix equation for target position estimation is:
where:
The matrix equation for velocity-component estimation is:
where:
Matrix A_{8} in Eq. 49 is equal to A_{9} in Eq. 50. The solvability condition of Eq. 49 and 50 is:
Equations 49 and 50 are solvable as long as the three sensors do not lie on the same line in two-dimensional space, i.e., the following condition is met:
Therefore, the position coordinates and velocity components of a two-dimensional target are observable by using target motion analysis.
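The non-collinearity condition is easy to verify numerically: the three sensors are collinear exactly when the determinant built from the two sensor baselines vanishes. A small check; the coordinates below are invented for the example:

```python
def sensors_collinear(s1, s2, s3, tol=1e-9):
    """True when the three 2-D sensor positions lie on one line,
    i.e., the baseline determinant is (numerically) zero."""
    (x1, y1), (x2, y2), (x3, y3) = s1, s2, s3
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    return abs(det) < tol

# A triangle of sensors is fine; three points on y = x are not.
ok = not sensors_collinear((0, 0), (10000, 5000), (10000, -5000))
bad = sensors_collinear((0, 0), (1, 1), (2, 2))
```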
TMA OF THREE-DIMENSIONAL TARGET
Taking the ordinate z as a known parameter, (x, y) can be estimated using Eq. 16 and 21:
where:
Suppose:
then:
where:
The range r_{1j} from target to the sensor 1 can be estimated according to the range observability. When j is equal to zero, from Eq. 2:
Substituting Eq. 55 into the above equation, we can obtain:
where:
Solving Eq. 58, we can obtain two solutions for z:
Substituting Eq. 59 into Eq. 55, we can obtain two pairs of target position solutions (x_{1}, y_{1}, z_{1}) and (x_{2}, y_{2}, z_{2}), of which one is real and the other is pseudo.
If the ordinates of the three sensors are the same:
(x, y) can be obtained from Eq. 49 and then:
In view of the actual space region where the target appears, we take the solution with positive ordinate as the real one.
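The two-valued altitude estimate and the selection rule above can be sketched as follows. The quadratic coefficients below are invented; in the text they come from the substitution that produces Eq. 58.

```python
import math

def pick_altitude(a, b, c):
    """Solve a*z^2 + b*z + c = 0 and keep the root with positive
    altitude: under the stated condition (equal sensor heights) the
    pseudo solution has a negative ordinate."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                  # no real solution
    z1 = (-b + math.sqrt(disc)) / (2 * a)
    z2 = (-b - math.sqrt(disc)) / (2 * a)
    return max(z1, z2)

# Invented quadratic with roots +8000 and -8000: z^2 - 64e6 = 0
z = pick_altitude(1.0, 0.0, -64.0e6)
```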
Taking v_{z} as a known parameter, (v_{x}, v_{y}) can be estimated using Eq. 17 and 22:
where:
According to Eq. 54:
where:
v^{2} can be estimated according to the velocity observability.
Substituting Eq. 63 into Eq. 31, we can obtain:
where:
Solving this equation, we can obtain two solutions for v_{z}:
Substituting Eq. 66 into Eq. 63, we can obtain two pairs of solutions (v_{x1}, v_{y1}, v_{z1}) and (v_{x2}, v_{y2}, v_{z2}), of which one is real and the other is pseudo.
If the ordinates of the three sensors are the same, (v_{x}, v_{y}) can be obtained from Eq. 50 and then:
Once the true position (x, y, z) is determined, the true values of the velocity components can be identified by using Eq. 30. Since u_{5} can be estimated from the velocity observability, the search method shown in the following formula determines the subscript i corresponding to the real velocity components:
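The search can be sketched as picking, among the two candidate velocity triples, the one whose squared speed best matches the independently estimated squared speed (u_{5} in the text). The candidate values and function name below are invented for the example:

```python
def pick_velocity(candidates, v2_est):
    """Return the index of the candidate (vx, vy, vz) whose squared
    norm is closest to the independent squared-speed estimate."""
    def err(v):
        vx, vy, vz = v
        return abs(vx * vx + vy * vy + vz * vz - v2_est)
    return min(range(len(candidates)), key=lambda i: err(candidates[i]))

# Invented candidates: the first has speed 200 m/s, the second ~150 m/s
cands = [(200.0, 0.0, 0.0), (120.0, 0.0, 90.0)]
i_real = pick_velocity(cands, 200.0 ** 2)
```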
SIMULATION OF TOA-ONLY TARGET MOTION ANALYSIS IN A THREE-SENSOR SYSTEM
Assume that the three sensors are located at coordinates (0, 0, 0), (10000, 5000, 0) and (10000, -5000, 0), that the RMS error of the sensor location coordinates is 3 m, that of the range difference 30 m and that of the target-system synchronization 100 m (the range difference corresponding to the RMS error of the signal cycle T_{s}). Suppose that the moving target sends a signal every 5 sec.
Simulation analysis of range estimation: Assume that a target makes a constant-altitude flight from left to right, parallel to the X-axis, in three-dimensional space at an altitude of 8,000 m and a constant velocity of 200 m s^{-1}, with starting coordinate (-100000, 50000, 8000). Figure 2 shows the ranges from the locations on the target trajectory to the three sensors.
The estimation algorithms for r1, the range from the target to sensor 1, are as follows: the dual-sensor TDOA algorithms X1r1 (Eq. 20) and X2r1 (Eq. 23), the dual-sensor TDOA joint algorithm X3r1 (Eq. 26), the single-sensor algorithm X4r1 (Eq. 35) and the single-sensor synchronization algorithm X7r1 (Eq. 44).
The simulation results in Fig. 3 and 4 describe the bias characteristics of the estimation algorithms.
Figure 3 shows the differences between the real value r1 and the estimated values derived from the dual-sensor TDOA algorithms X1r1, X2r1 and X3r1. The results show that these algorithms are biased: the bias increases with the target range and the estimated values are smaller than the true ones. The bias of algorithm X1r1 is smaller than that of algorithm X2r1 when the target moves from far to near and vice versa. The bias characteristic of algorithm X3r1 lies between those of algorithms X1r1 and X2r1.
Figure 4 shows the differences between the real value r1 and the estimated values derived from the single-sensor algorithms X4r1 and X7r1. The results show that the bias characteristic of the single-sensor synchronization algorithm X7r1 is slightly better than that of the single-sensor algorithm X4r1.
Comparing Fig. 3 with Fig. 4, we can conclude that the single-sensor algorithms have better bias characteristics than the dual-sensor TDOA ones.
Figure 5 shows the estimated variances of the dual-sensor TDOA algorithms X1r1, X2r1 and X3r1. The estimation accuracy of the dual-sensor TDOA joint algorithm X3r1 is better than that of algorithms X1r1 and X2r1. The variance of algorithm X1r1 is smaller than that of algorithm X2r1 when the target moves from far to near and vice versa.

Fig. 2: Ranges of the target to three sensors

Fig. 3: Range differences between the real and estimated values of dual-sensor TDOA algorithms X1r1, X2r1 and X3r1

Fig. 4: Range differences between the real and estimated values of single-sensor algorithms X4r1 and X7r1

Fig. 5: Variances of dual-sensor TDOA algorithms X1r1, X2r1 and X3r1

Fig. 6: Variances of dual-sensor TDOA algorithm X3r1 and single-sensor algorithms X4r1 and X7r1
Figure 6 shows the estimated variances of the dual-sensor TDOA algorithm X3r1 and the single-sensor algorithms X4r1 and X7r1. Of the single-sensor algorithms X4r1 and X7r1, algorithm X7r1 has the better estimation accuracy. The estimation accuracy of the single-sensor synchronization algorithm X7r1 is also better than that of the dual-sensor TDOA joint algorithm X3r1.
Simulation analysis of velocity estimation: The following velocity estimation algorithms are discussed in this study: the sensor 1 algorithm X4v (Eq. 35), the sensor 2 algorithm X5v (Eq. 38), the sensor 3 algorithm X6v (Eq. 41) and the single-sensor synchronization algorithm X7v (Eq. 44).
Figure 7 shows the velocity estimation accuracy of algorithms X4v, X5v, X6v and X7v. Among the single-sensor algorithms X4v, X5v and X6v, each algorithm has better accuracy than the others during some portion of the target motion.

Fig. 7: Velocity variances of algorithms X4v, X5v, X6v and X7v

Fig. 8: Estimation variances of variable u_{1}
The estimation accuracy of the single-sensor synchronization algorithm X7v is better than that of the other single-sensor algorithms X4v, X5v and X6v.
We can also conclude from Fig. 7 that the velocity estimation variances of all four algorithms reach local maxima when the target trajectory is near the point of shortest range to the three sensors.
Simulation analysis of two-dimensional target location estimation: The position estimation accuracy is expressed by the GDOP (Geometric Dilution of Precision):
where σ^{2}_{x} and σ^{2}_{y} are the estimated variances of coordinates x and y.
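For reference, the two-dimensional GDOP defined above is simply the root-sum-square of the coordinate errors; a one-line sketch with invented values:

```python
import math

def gdop_2d(sigma_x, sigma_y):
    """GDOP = sqrt(sigma_x^2 + sigma_y^2) for the 2-D case."""
    return math.sqrt(sigma_x ** 2 + sigma_y ** 2)

g = gdop_2d(30.0, 40.0)
```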
Assume that a two-dimensional target flies from left to right, parallel to the X-axis, at a constant velocity of 200 m s^{-1}, with starting coordinate (-50000, 50000).
For the two-dimensional target location algorithm of Eq. 49, the intermediate variables u_{1} and u_{3} are needed. The estimation algorithms for u_{1} are the dual-sensor TDOA algorithm X1u1 (Eq. 20) and the dual-sensor TDOA joint algorithm X3u1 (Eq. 26). As shown in Fig. 8, algorithm X3u1 has better estimation accuracy than X1u1; therefore, the estimate derived from algorithm X3u1 is used to position the target.
The estimation accuracy of the intermediate variable u_{3} is shown in Fig. 9. Algorithm X3u3 is chosen to position the target.
Figure 10 compares the estimated and the true trajectories, which are in basic agreement. Thus, the target location algorithm of Eq. 49 can position a two-dimensional target.
Figure 11 shows the location estimation accuracy. We can conclude that the smaller the range, the better the estimation accuracy.
Simulation analysis of three-dimensional target location estimation: The three-dimensional position estimation accuracy is estimated by the GDOP:
where σ^{2}_{x}, σ^{2}_{y} and σ^{2}_{z} are the estimated variances of coordinates x, y and z.
Besides the intermediate variables u_{1} and u_{3}, the range u_{10} is required to locate a three-dimensional target using the location algorithms of Eq. 55 and 59. According to the results of the above range simulation analysis, algorithm X7r1 is used to position the three-dimensional target.
Figures 12 and 13 show the simulation results for a target moving parallel to the X-axis at y = 50 km and z = 8000 m. Figure 12 compares the estimated and the true target trajectories. Figure 13 compares the location estimation accuracy with the estimation accuracy of the range r_{10} derived from algorithm X7r1.
Figure 12 shows a basic agreement between the estimated and the real trajectories. Therefore, the target location algorithms of Eq. 55 and 59 can position a three-dimensional target.

Fig. 9: Estimation variances of variable u_{3}

Fig. 10: Target real and estimated trajectories

Fig. 11: Variance of target location

Fig. 12: Target real and estimated trajectories in the XY plane

Fig. 13: Variance of target location
As shown in Fig. 13, the smaller the range, the better the estimation accuracy; moreover, the variance of algorithm X7r1 is smaller than that of the target location.
CONCLUSION
Based on the characteristics of the target motion and the TOAs of the periodic signal, this study investigates position algorithms for two- and three-dimensional targets in a three-station location system. The main conclusions are as follows:
• The unique solutions for the position coordinates and velocity components of a two-dimensional target can be found by solving linear equations, so the multiple-solution or no-solution shortcomings of the traditional TDOA positioning algorithm are avoided
• For a three-dimensional target, the proposed algorithms give two-valued estimates of the velocity components and position coordinates, yielding two trajectories of which one is false. The height of the false trajectory is negative when a certain condition is met, for instance when the heights of the three stations are equal; the true trajectory can then be uniquely identified from the space region where the target may occur