Research Article

Target Capture for Free-Floating Space Robot Based on Binocular Stereo Vision

Tian Zhixiang, Wu Hongtao and Feng Chun

Because the dynamics of free-floating space robots are highly nonlinear, strongly coupled and subject to nonholonomic constraints, control methods developed for terrestrial robots are not directly applicable to them. Existing control methods for free-floating space robots rely on an accurate dynamic model and require parameter identification. In this study, the dynamic model of the free-floating space robot is established and a binocular stereo vision feedback method is proposed to overcome the nonholonomic constraints caused by the conservation of linear and angular momentum. A motion control method based on acceleration decomposition is then designed. Finally, simulation software for target capture by a free-floating space robot based on binocular stereo vision is developed. The simulation results show that the free-floating space robot based on binocular stereo vision can track a moving target in real time and ultimately complete the capture mission. The proposed method provides a new and effective way for a free-floating space robot to capture a target, avoids the complex process of parameter identification and effectively prevents the manipulator from colliding with the target.


  How to cite this article:

Tian Zhixiang, Wu Hongtao and Feng Chun, 2011. Target Capture for Free-Floating Space Robot Based on Binocular Stereo Vision. Information Technology Journal, 10: 1222-1227.

DOI: 10.3923/itj.2011.1222.1227

Received: January 12, 2011; Accepted: March 16, 2011; Published: May 13, 2011


The third step of manned space flight is to build a large space station for the long-term residence of astronauts; the future large space station will be a combination of many cabins and orbital complexes. At present, space robots are generally used during assembly to capture and dock the cabins and orbital complexes, replacing astronauts in dangerous or impossible tasks. Space robots can also recover, repair and maintain satellites. All of these activities depend on precise robot motion control. Compared with a traditional fixed-base robot, the base of a free-floating space robot is movable: the base body has six degrees of freedom, so control methods for terrestrial robots cannot be applied to it directly (Wen-fu et al., 2009). Moreover, the position and posture of the base body change while the free-floating space robot moves its manipulator toward the capture position, which poses a great challenge to the control of the free-floating space robot. The problem has therefore received great attention from scholars at home and abroad.

Yoshida (2003) derived the general form of the Jacobian matrix of the space robot using the conservation of angular momentum and, based on this generalized Jacobian matrix, proposed a resolved motion rate control method, which was validated in the ETS-VII experiments. Belousov et al. (2005) proposed a two-stage iterative motion planning method for large space robots with complicated dynamics. Huang et al. (2006), Wenfu et al. (2007) and Xu et al. (2007) proposed control methods for free-floating space robots that track, approach and capture a target along a spiral trajectory in Cartesian space.

In this study, target capture by a free-floating space robot based on binocular stereo vision (Malamas et al., 2003; Van der Zwaan et al., 2002) is studied. With external feedback, the method does not require accurate modeling of the system (Chaumette and Hutchinson, 2008), so the feedback used to implement target capture avoids identifying the parameters of the free-floating space robot system. In a visual servoing control system, a computer processes the acquired images using special-purpose image processing, analysis and classification software; one or more cameras placed at the scene under inspection acquire the images. The main problem of a machine vision task is to understand what information the machine vision system must retrieve and how this is translated into measurements or features extracted from the images (Ruf and Horaud, 2000). There are two ways to design a visual servoing system (Christie et al., 2005): image-based visual servoing and position-based visual servoing. Usually the positions of the cameras are fixed, but in the free-floating binocular stereo vision control system the cameras are mounted on the base body of the space robot and their positions change all the time, which poses a great challenge to the space robot system. In this study, the external binocular stereo vision device calculates the target position in real time so that the control system can compensate for the error caused by momentum conservation during the movement of the free-floating space robot.


Consider an n degrees of freedom manipulator whose free-floating base body is regarded as link n+1, as shown in Fig. 1; each joint of the manipulator has a single degree of freedom and can be rotational or translational. Based on the Lagrange equations, the dynamic equations of the free-floating space robot can be found in Hong-Tao and You-Lun (2000) and Fu-Yang et al. (2010). The linear and angular momentum conservation equations of the free-floating space robot are as follows:


where ρi denotes the density of link i; mi denotes the mass of link i including drives, actuators and other rigid bodies; m0 denotes the mass of the base body of the free-floating space robot; Jri denotes the inertia tensor per unit length of link i about the mass centre of link i; Jv denotes the inertia tensor of the base body about its mass centre; and Ls and Lr denote the initial linear and angular momentum of the free-floating space robot system, which raise the number of degrees of freedom to n+6.
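To make the conservation constraint concrete, the toy sketch below recovers the base translation rate from the link motion under zero initial linear momentum. The planar point-mass links and all numbers are my own illustrative assumptions, not values from the paper:

```python
import numpy as np

def base_velocity_from_momentum(m0, masses, link_velocities, L_s=np.zeros(2)):
    """Linear-momentum conservation: m0*v0 + sum(mi*vi) = L_s,
    so the base velocity v0 is fully determined by the link motion."""
    p_links = sum(m * v for m, v in zip(masses, link_velocities))
    return (L_s - p_links) / m0

# Toy example: 100 kg base, two 10 kg point-mass links moving in +x.
v0 = base_velocity_from_momentum(
    100.0, [10.0, 10.0],
    [np.array([0.5, 0.0]), np.array([1.0, 0.0])])
print(v0)  # base drifts in -x to keep the total momentum zero
```

This is the coupling that makes the base uncontrolled: any joint motion induces a base motion, which the vision feedback later compensates.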

Fig. 1: The free-floating space robot


Binocular stereo vision is based on the parallax principle: the three-dimensional geometric information of an object is obtained from multiple images. A binocular stereo vision system generally comprises two or more cameras placed some distance apart at different angles, or one movable camera taking two images from different places. The parallax principle uses triangulation and projective transformation to recover the three-dimensional geometry of objects. Although the real scene is three-dimensional, the image processing can be two- or three-dimensional. When the depth of the scene or of scene features is not needed, two-dimensional images suffice, for example when determining the contours of an object, where the depth of each point is irrelevant. Three-dimensional image processing is mainly used for tasks that need motion detection, depth measurement, remote sensing, relative positioning and navigation. In the free-floating space robot binocular stereo vision control system, three-dimensional images are used to calculate the position and posture of the target, which are fed back to the robot control system so that the space robot can approach and capture the target.

Assume a target point p in space. Points p1 and p2 denote its projections in the image coordinates of camera C1 and camera C2, respectively, and the projection matrices (Shuqing et al., 2005) of the cameras are M1 and M2. ot (xt, yt, zt) denotes the coordinate system of the base body of the free-floating space robot. According to the perspective projection transformation (Guangjun, 2004), the following equations are obtained:


The geometric locations of the two CCD cameras are shown in Fig. 2. (u1, v1, 1) and (u2, v2, 1) denote the homogeneous coordinates of points p1 and p2 in the image coordinate systems of cameras C1 and C2, respectively, and (xt, yt, zt, 1) denotes the homogeneous coordinate of point p. Eliminating the parameters zc1 and zc2 from Eq. 3 yields four linear equations:


From analytic geometry, combining two plane equations gives the equation of a line in space. The geometric meaning of Eq. 3 is the lines oc1p1 and oc2p2; the space point p (xt, yt, zt) is the intersection of these two lines, from which its coordinates can be calculated. For the posture of the target, the four corner points of a square ABCD can be measured and the posture calculated from the following equations:


where L = |AB| denotes the side length of the square ABCD.
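The four linear equations can be solved in a least-squares sense to recover p. The sketch below implements this linear triangulation with numpy; the identity-intrinsics projection matrices and the test point are my own toy assumptions, not the paper's calibration:

```python
import numpy as np

def triangulate(M1, M2, uv1, uv2):
    """Linear triangulation: each view contributes two equations,
    u*(m3 . X) - (m1 . X) = 0 and v*(m3 . X) - (m2 . X) = 0, where
    m1, m2, m3 are rows of the 3x4 projection matrix and X is the
    homogeneous point. Solve the resulting 4-equation system for
    p = (xt, yt, zt) by least squares."""
    rows = []
    for M, (u, v) in ((M1, uv1), (M2, uv2)):
        rows.append(u * M[2] - M[0])
        rows.append(v * M[2] - M[1])
    A = np.array(rows)        # 4 x 4 in homogeneous coordinates
    b = -A[:, 3]              # move the constant column to the rhs
    p, *_ = np.linalg.lstsq(A[:, :3], b, rcond=None)
    return p

# Toy rig: identity intrinsics, second camera translated 1 m along x.
M1 = np.hstack([np.eye(3), np.zeros((3, 1))])
M2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
p_true = np.array([0.3, 0.2, 2.0])
uv1 = (p_true[0] / p_true[2], p_true[1] / p_true[2])
uv2 = ((p_true[0] - 1.0) / p_true[2], p_true[1] / p_true[2])
print(triangulate(M1, M2, uv1, uv2))  # recovers p_true
```

With noisy image points the two rays no longer intersect exactly; the least-squares solution then gives the point closest to both rays in the algebraic sense.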


In robot kinematics, the position and posture of the robot end-effector are obtained by solving the kinematic equations, and the turning angle of each joint can be calculated as the end-effector moves.

Fig. 2: Geometry of binocular stereo vision

The movements of the robot joints are not independent but coordinated, and the control system is actually realized through the joint servoing systems. The movement of the end-effector is decomposed into the velocity, acceleration, force and torque of each joint, and each joint is controlled independently. In this study, the decomposed acceleration control method with binocular stereo vision feedback is used to control the free-floating space robot while capturing the target. This method requires the drives of all joints to move simultaneously at the calculated accelerations, so that the end-effector moves smoothly and stably in Cartesian coordinates. Before controlling the robot, the position and posture of the end-effector must be decomposed into joint angular accelerations, which are then used to control the joints of the space robot (Jifeng, 2006).

Suppose the robot has n degrees of freedom; the position and posture of the end-effector in Cartesian coordinates are written in vector form as follows:


The generalized joint coordinate vector of the robot is defined as:


The relationship between Eq. 5 and Eq. 6 is:


Suppose the robot has m degrees of freedom in operational space; the relationship between the joint angles and the Cartesian coordinates is the nonlinear function of Eq. 7. Differentiating Eq. 7 yields:


where J(q) denotes the generalized Jacobian matrix of the joints, defined as:


Here ẋ(t) denotes the desired velocity of the robot end-effector in Cartesian coordinates and q̇(t) denotes the joint velocity in joint space; Eq. 9 establishes the relationship between the two velocities. When the number of degrees of freedom in operational space equals that in joint space, the robot is non-redundant and the inverse of the generalized Jacobian matrix can be calculated directly as follows:


When m < n, the robot is redundant and the inverse of the generalized Jacobian matrix does not exist. According to generalized inverse matrix theory, we obtain:


J+(q) denotes the generalized inverse of the Jacobian matrix. When J(q) has full rank, the following equation is obtained:


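The generalized inverse can be checked numerically. The sketch below uses J+ = J^T (J J^T)^-1 for a full-row-rank Jacobian, which reduces to the ordinary inverse when the Jacobian is square; the 2x3 matrix and task velocity are my own toy values, not from the paper:

```python
import numpy as np

def jacobian_pinv(J):
    """Generalized inverse for a full-row-rank m x n Jacobian (m <= n):
    J+ = J^T (J J^T)^-1. For a square full-rank J this equals J^-1."""
    return J.T @ np.linalg.inv(J @ J.T)

# Redundant case: 2 task-space degrees of freedom, 3 joints.
J = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5]])
x_dot = np.array([0.2, -0.1])
q_dot = jacobian_pinv(J) @ x_dot   # minimum-norm joint rates
print(J @ q_dot)                   # reproduces the commanded task velocity
```

Among all joint-rate vectors realizing the task velocity, this choice has minimum norm, which is why the pseudoinverse is the standard resolution for redundant arms.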
Differentiating Eq. 12 gives the acceleration of the robot end-effector, written as follows:


Equation 13 gives the relationship between the acceleration in Cartesian coordinates and that in joint space. In fact, the goal of the acceleration control method is to drive the error between the actual and desired position and posture of the end-effector to zero. Suppose that:


where e denotes the error between the actual and desired position and posture of the robot end-effector, and Xd and X denote the desired and actual position and posture of the end-effector, respectively. The convergence requirement is satisfied when:


where k1 and k2 are proportional coefficients, chosen so that the roots of the characteristic equation of Eq. 15 have negative real parts. Combining Eq. 13-15 gives:


The closed-loop acceleration decomposition control method is based on Eq. 16: given the desired trajectory of the end-effector, that is, its desired position, posture, velocity and acceleration, the acceleration of each joint in joint space can be calculated and the robot controlled accordingly.
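As a concrete illustration, the sketch below closes the loop of Eq. 16 for a planar two-link arm. The kinematics, the fixed goal point and the gains k1 = 25, k2 = 10 (a double pole at s = -5 under the assumed error dynamics e'' + k2 e' + k1 e = 0) are all my own assumptions, not values from the paper:

```python
import numpy as np

L1, L2 = 1.0, 1.0  # assumed link lengths

def fk(q):
    """Forward kinematics of a planar 2-link arm."""
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0] + q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0] + q[1])])

def jac(q):
    """Analytic Jacobian of fk."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

def jac_dot(q, qd):
    """Finite-difference approximation of dJ/dt along the motion."""
    eps = 1e-6
    return (jac(q + eps*qd) - jac(q)) / eps

k1, k2 = 25.0, 10.0                 # assumed gains, poles at s = -5
q, qd = np.array([0.5, 0.5]), np.zeros(2)
x_d = np.array([1.2, 0.8])          # fixed goal, so xd_dot = xd_ddot = 0
dt = 0.002
for _ in range(5000):               # 10 s of closed-loop simulation
    e = x_d - fk(q)
    edot = -jac(q) @ qd             # since the goal is stationary
    # Eq. 16 (assumed form): qdd = J+ (xd_ddot + k2*edot + k1*e - Jdot*qd)
    qdd = np.linalg.pinv(jac(q)) @ (k2*edot + k1*e - jac_dot(q, qd) @ qd)
    qd = qd + qdd * dt
    q = q + qd * dt
print(np.linalg.norm(x_d - fk(q)))  # residual end-effector error
```

The error obeys the assumed second-order dynamics, so the end-effector settles on the goal without overshoot for this critically damped gain choice.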


In this study, the capture of a moving target by a free-floating space robot with a 6 degrees of freedom manipulator is simulated. The free-floating space robot is modeled in OpenGL; DirectShow acquires the images from the two cameras; colour is used to locate the target in image coordinates; and OpenCV performs the image processing. The position and posture of the target are sent to the control system in real time to modify the movement of the manipulator.
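The colour-based target extraction step can be sketched in a few lines. The example below uses a synthetic frame and plain numpy thresholding rather than the authors' DirectShow/OpenCV pipeline, so the function name and thresholds are illustrative assumptions:

```python
import numpy as np

def colour_centroid(img, lo, hi):
    """Return the (row, col) centroid of pixels whose RGB values all
    fall inside [lo, hi], or None if no pixel matches."""
    mask = np.all((img >= lo) & (img <= hi), axis=-1)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return ys.mean(), xs.mean()

# Synthetic 100x100 frame with a 10x10 red target patch at 40..49.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[40:50, 40:50] = (255, 0, 0)
print(colour_centroid(img, (200, 0, 0), (255, 50, 50)))
```

The centroids from the two camera images play the role of (u1, v1) and (u2, v2) in the triangulation of the target position.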

Figure 3 shows the simulation software for target capture by the free-floating space robot. Figure 4 shows the position of the target in the robot coordinate system, calculated from the processed images. Figure 5 shows the trajectory along which the robot end-effector approaches the target under feedback from the binocular stereo vision system.

Fig. 3: The simulation software

Fig. 4: The calculated spatial points

Fig. 5: Trajectory of the space robot end-effector


In this study, a target capture method for the free-floating space robot based on binocular stereo vision is proposed. The method does not need to consider the nonholonomic constraints caused by the conservation of linear and angular momentum; it requires only the dynamic model of the manipulator, not an accurate dynamic model of the whole free-floating space robot system. The control method for a traditional fixed-base manipulator is thereby introduced into the free-floating space robot control system, avoiding the complex process of parameter identification and preventing the end-effector from colliding with the target. This provides a new method for space station assembly and in-orbit satellite servicing.


This research project was sponsored by the Commission of Science, Technology and Industry for National Defense Pre-research Foundation of China (No. C4220062501) (period: 2008-2011).

1:  Wen-fu, X., L. Cheng, B. Liang, L. Yu and Q. Wen-yi, 2009. Coordinated planning and control method of space robot for capturing moving target. Acta Automatica Sinica, 35: 1215-1225.

2:  Yoshida, K., 2003. Engineering Test Satellite VII flight experiments for space robot dynamics and control: Theories on laboratory test beds ten years ago, now in orbit. Int. J. Robotic Res., 22: 321-335.

3:  Belousov, I., C. Esteves, J.P. Laumond and E. Ferre, 2005. Motion planning for the large space manipulators with complicated dynamics. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Aug. 2-6, IEEE Xplore, pp: 2160-2166.

4:  Huang, P., Y. Xu and B. Liang, 2006. Tracking trajectory planning of space manipulator for capturing operation. Int. J. Adv. Rob. Syst., 3: 211-218.

5:  Wenfu, X., Y. Liu, B. Liang, Y. Xu and W. Qiang, 2007. Autonomous path planning and experiment study of free-floating space robot for target capturing. J. Intel. Robotic Syst., 51: 303-331.

6:  Xu, W., B. Liang, Y. Xu, C. Li and W. Qiang, 2007. A ground experiment system of Free-floating space robot for capturing space target. J. Intell. Rob. Syst., 48: 187-208.

7:  Malamas, E.N., E.G.M. Petrakisa, M. Zervakis, L. Petit and J.D. Legat, 2003. A survey on industrial vision systems, applications and tools. Image Vision Comput., 21: 171-188.

8:  Van der Zwaan, S., A. Bernardino and J. Santos-Victor, 2002. Visual station keeping for floating robots in unstructured environments. Robotics Autonomous Syst., 39: 145-155.

9:  Chaumette, F. and S. Hutchinson, 2008. Visual servoing and visual tracking. Springer Handbook of Robotics, Part C: 563-583.

10:  Ruf, A. and R. Horaud, 2000. Vision-based guidance and control of robots in projective space. Lecture Notes Comput. Sci., 1843: 50-66.

11:  Christie, M., R. Machap, J.M. Norm, P. Olivier and J. Pickering, 2005. Virtual camera planning: A survey. Lecture Notes Comput. Sci., 3638: 40-52.

12:  Hong-Tao, W. and X. You-Lun, 2000. The problem of Multi-body systems dynamics in mechanical engineering. China Mech. Eng., 11: 608-611.

13:  Fu-yang, T., W. Hong-tao, Z. Da-xu, S. Bing and W. Chaoqun, 2010. Research on generalized efficient recursive dynamics of flexible macro-micro space robots system. J. Astronautics, 2010: 687-694.

14:  Shuqing, Z., W. Zhonggui and R. Longsui, 2005. Space Rendezvous Docking and Measuring Technique and Application. China Astronautic Publishing House, Beijing.

15:  Guangjun, Z., 2004. Machine Vision. Science Press, Beijing.

16:  Jifeng, L., 2006. The Base Technology of Robot. Higher Education Press, Beijing.
