
Trends in Applied Sciences Research

Year: 2011 | Volume: 6 | Issue: 5 | Page No.: 495-506
DOI: 10.17311/tasr.2011.495.506
Videogrammetry Application for Stereo Vision Bio-production Harvester
W.I. Wan Ishak, R.M. Hudzari, S.M. Nasir, I. Napsiah and M.S. Abdul Rashid

Abstract: This study discusses the application of the videogrammetry technique and defines its capability for application to a bio-production arm in a real cocoa plantation environment. The testing was performed in a controlled laboratory environment, and dummy target points were established to collect the actual data. The results were divided into two application categories: determining the capability of the system to generate the three-axis (3D) coordinates of a target point from the robot base, and the accuracy of the robot arm in grabbing the target using a mouse-click method. The developed Graphical User Interface (GUI) successfully generated the 3D coordinates of the targeted fruits and sent electrical signals through the interface card to move the robot arm and grab the selected target automatically.


How to cite this article
W.I. Wan Ishak, R.M. Hudzari, S.M. Nasir, I. Napsiah and M.S. Abdul Rashid, 2011. Videogrammetry Application for Stereo Vision Bio-production Harvester. Trends in Applied Sciences Research, 6: 495-506.

Keywords: bio-production, stereo vision, videogrammetry, simulation

INTRODUCTION

In the industrial sector, vision systems are used in working environments that can be controlled. Vision systems are commonly used to detect products, for quality-control purposes, by their shape and pattern properties (Chen et al., 2002). Vision is also the most common sensing device for positioning techniques in industrial applications (Poplawski and Sultan, 2007). In agricultural applications, especially fruit handling, fruit quality cannot be detected by shape or pattern alone, because one type of fruit shows variations in shape and pattern while having the same level of quality (Ismail and Razali, 2009). To solve this problem, the vision system should be able to analyze the color of the object or fruit (Seiichi et al., 1995). In other applications, such as harvesting and picking cocoa fruit in the field, a combination of shape, pattern and color analysis should be applied, because in the field there are many objects and parameters to consider, for example (Edan et al., 2000):

Intensity of light (sunlight)
Various colors of objects, such as trash and grass
Various shapes and patterns of objects

In the field, the cocoa fruit is sometimes shaded from sunlight and, besides that, the intensity of sunlight is always changing. This causes the color intensity of similar objects to be inconsistent (Wan-Ishak et al., 2008), so the camera vision system should be able to work under these different situations. Razali et al. (2008) used manipulation of digital image values for an outdoor recognition system. Trash, for example, may give the same color index as the fruit (or a color index within the range of the fruit's index), and shape may also influence the analysis result. Therefore, the vision system should be able to differentiate the color, shape and pattern of fruits from other objects (Kondo and Ting, 1998).

This research focuses on the robot control system for plucking cocoa pods. For this purpose, a computer system was developed in Visual Basic 6 to determine the location of the cocoa pod according to the videogrammetry principle (Hudzari et al., 2005). The videogrammetry method identifies the location of the cocoa pod on the X, Y and Z axes by using video scenes from two or more different positions of digital web cameras (Fraser, 1996). The real-time video scene appears on the computer screen; the user chooses a ripe cocoa pod by clicking on the GUI, and the coordinates of the pod are then determined by the triangulation method. The calculated result is sent to the interface card previously developed by Omrane (1999), and this signal guides the robot arm to move to the determined location and pluck the cocoa pod.

The research objectives can be outlined as follows:

To develop user interface software to assist the user in manipulating the robot arm during the harvesting operation
To develop a measurement system for the cocoa pod location using web-based digital cameras
To interface the software with the robot arm controller

MATERIALS AND METHODS

Two views of the object are displayed inside the developed User Interface (UI) through data acquisition from the digital cameras. By clicking on the object in the picture, the robot moves and grabs the selected target. The UI also shows a real-time simulation during robot movement (Hudzari et al., 2005).

System algorithm: The algorithm for the system is shown in Fig. 1. Image acquisition is a fundamental process of videogrammetry (Atkinson, 1996). Images of the object of interest must be stored to allow videogrammetry measurements to be made. The measurements within the image space are then subjected to one or more videogrammetry transformations to determine characteristics or dimensions within the object space.

The embedded interfacing software is used to generate communication signals between the software and the input/output relay interface card, enabling the real-time robot simulation program. The embedded interfacing software was used for two interfaces:

Between the UI and the cameras
Between the UI and the robot solenoids

The drivers for communication between the software, the cameras and the robot solenoids were provided as Dynamic Link Library (DLL) files. A DLL file contains one or more functions that are compiled, linked and stored separately from the processes that use them (Craig and Webb, 1998). The operating system maps the DLLs into the address space of the calling process when the process starts or while it is running.
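The original interfacing layer was written in Visual Basic 6; as a hedged illustration of the same idea in Python, the sketch below loads a relay-card driver DLL and exposes one of its functions. The DLL name "robotio.dll" and the function "SetRelay" are placeholders, not the actual driver used in this project.

```python
# Hypothetical sketch (Windows-only): load an I/O-relay driver DLL and call
# one exported function. "robotio.dll" and "SetRelay" are placeholder names,
# not the real driver from the paper; the OS maps the DLL into this process.
import ctypes

relay_dll = ctypes.WinDLL("robotio.dll")
relay_dll.SetRelay.argtypes = [ctypes.c_int, ctypes.c_int]
relay_dll.SetRelay.restype = ctypes.c_int

def set_solenoid(channel: int, on: bool) -> None:
    """Energize or de-energize one solenoid valve through the relay card."""
    relay_dll.SetRelay(channel, 1 if on else 0)
```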


Fig. 1: Algorithm for the system operation

Triangulation: The true 3D coordinates of the target can be extracted by taking two views of the video scene from different locations. The triangulation formula then applies under the following conditions:

The optic axes of both cameras are parallel; the optic axis is the center point of the scene in the vertical and horizontal directions, set as (0, 0)
Both cameras must be positioned at the same level

Referring to Fig. 2, the two pictures were taken by the digital cameras and scaled against actual distance. The focal length (f) was measured from the center point of the camera lens to the intersection point through the focal line. With baseline b between the two camera centres, image coordinates (xL, yL) and (xR, yR) measured from each optical centre, and disparity d = xL - xR, the true 3D coordinates follow the standard parallel-camera triangulation relations: Z = f·b/d, X = Z·xL/f and Y = Z·yL/f.
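These relations translate directly into code. The following Python sketch assumes the stated conditions (parallel optic axes, cameras at the same level) and illustrative units; it is not the paper's original Visual Basic implementation.

```python
# Parallel-camera triangulation sketch. Image coordinates are measured from
# each camera's optical centre; f and the baseline share the same units.

def triangulate(x_left, y_left, x_right, f, baseline):
    """Return (X, Y, Z) of the target relative to the left camera."""
    disparity = x_left - x_right
    if disparity == 0:
        raise ValueError("zero disparity: target at infinity or mismatched points")
    z = f * baseline / disparity          # Z = f*b/d
    x = z * x_left / f                    # X = Z*xL/f
    y = z * y_left / f                    # Y = Z*yL/f
    return x, y, z

# Example: f = 8, baseline = 120, disparity = 4 -> Z = 240
print(triangulate(2.0, 1.0, -2.0, 8.0, 120.0))  # (60.0, 30.0, 240.0)
```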

The measurement of these 3D coordinates was taken from the robot's Cartesian coordinate system at the base of the robot. These 3D coordinates are then used in the kinematic calculations for robot movement. A Cartesian (sometimes called rectangular) coordinate system uses three axes to indicate the location of a point. The axes intersect at a single point of reference called the origin, which is assigned the value (0, 0, 0).


Fig. 2: Three-axis coordinate extraction from two image views using triangulation

Mathematical model: The mathematical model was developed based on the sine, cosine and tangent of the robot joint angles. All calculations were embedded in the developed software. When the user clicks on the target image, the system performs the mathematical calculation and, at the same time, sends an electrical signal to trigger the robot's solenoid valves.

Basic trigonometric calculations were used to develop the kinematic calculation for the top-view robot simulation. The system reads each calculated angle of robot movement, displays it as a single-line graphic for a certain time, then erases it to display another single-line graphic at the next angle. The first FOR…NEXT loop in the program code reads an angle of robot movement, then displays and erases the calculated graphic, showing it as a moving single-line graphic. The graphics algorithm is shown in Fig. 3.

The Cartesian coordinate origin was placed at the center of the robot base to visualize the top-view robot simulation. The robot arm is able to turn 45° left and right (-45° < θ < 45°), as shown in Fig. 4 (a minimal sketch of the draw/erase loop follows).
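As a hedged illustration of the draw/erase loop described above (the original was a VB6 FOR…NEXT loop redrawing a single line), the Python sketch below sweeps the allowed ±45° range and computes the arm-line endpoint in the base-centred coordinate frame; the arm length is an assumed value.

```python
# Top-view single-line arm animation sketch: compute the arm endpoint for
# each angle in the -45..+45 degree range; a GUI would draw a line from
# (0, 0) to the endpoint and erase it before the next frame.
import math
import time

ARM_LENGTH = 100.0  # illustrative arm length, mm

def arm_endpoint(theta_deg):
    theta = math.radians(theta_deg)
    return ARM_LENGTH * math.sin(theta), ARM_LENGTH * math.cos(theta)

for angle in range(-45, 46, 5):
    x, y = arm_endpoint(angle)
    print(f"theta = {angle:+3d} deg -> endpoint ({x:7.2f}, {y:7.2f})")
    time.sleep(0.05)  # pacing between frames
```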

Real-time simulation calculation: The real-time simulation was developed based on the time period for the robot to move from the home position to the maximum position. The following steps were taken to perform the real-time simulation during robot movement (a timing sketch follows this list):

Each pneumatic motor and cylinder was triggered manually and the time taken for the cylinder to move from the right to the left position was recorded; this indicates the minimum and maximum positions the robot can reach
All collected time periods were used to create an equation that keeps the visual simulation and the real robot movement identical
The robot movement depends on the supplied air pressure and the weight of the robot structure, and these parameters were embedded in the simulation calculations; with them, the robot simulation on the user interface matches the actual robot movement, making the system work in real-time mode
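A minimal sketch of the timing model implied by these steps, assuming a simple linear relation between elapsed time and actuator position (the paper does not give its exact equation):

```python
# Map elapsed time onto simulated joint angle using the measured full-stroke
# time, so the on-screen arm tracks the real pneumatic actuator.

def position_fraction(elapsed_s, full_stroke_s):
    """Fraction of the stroke completed, clamped to [0, 1]."""
    return max(0.0, min(1.0, elapsed_s / full_stroke_s))

def simulated_angle(elapsed_s, full_stroke_s, theta_min=-45.0, theta_max=45.0):
    k = position_fraction(elapsed_s, full_stroke_s)
    return theta_min + k * (theta_max - theta_min)

# Example: if a full sweep takes 3.0 s, the arm is at 0 deg after 1.5 s.
print(simulated_angle(1.5, 3.0))  # 0.0
```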

Calculation method: This method was used to drive the real-time top-view robot simulation during actuation, as shown in Fig. 5. With the pneumatic pressure source fixed at 6 bar, the time period for motor No. 1 (Fig. 8) to complete one rotation was measured by pressing the solenoid valve manually.


Fig. 3: Complete simulation algorithm


Fig. 4: Top view of visual robot structure shows the angle of robot movement

This time value was then placed in the FOR…NEXT statement of the Visual Basic simulation program, as sketched below, so that the real robot movement and the simulation are performed synchronously.
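A hedged Python rendering of that loop (the original was VB6): the measured period is divided evenly over the simulation frames, so each redraw is delayed by period/steps seconds. The period and step count here are illustrative, not measured values from the paper.

```python
# Synchronised simulation loop sketch: spread the measured actuation period
# over the simulation frames so the drawing keeps pace with the real robot.
import time

MEASURED_PERIOD_S = 4.0   # assumed time for the actuator to traverse its range
STEPS = 90                # one frame per degree over the 90-degree sweep

step_delay = MEASURED_PERIOD_S / STEPS
for step in range(STEPS + 1):
    angle = -45 + step
    # ...redraw the single-line arm graphic at `angle` here...
    time.sleep(step_delay)
```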

For the extension and retraction of cylinder No. 1 (Fig. 7), the calculation was based on the home position of the robot, which corresponds to the maximum position of the robot during the simulation process, as shown in Fig. 6.


Fig. 5: Basic simulation program of robot rotation on top view side


Fig. 6: Basic simulation program for up/down movement of robot top view


Fig. 7: The complete design of robot arm structure

This shows that the robot will only retract during the initial movement of the simulation, because the simulation program takes the top view, rather than the side view, as the simulation view in its calculation.


Fig. 8: Developed GUI for this project

Software: The GUI was developed to handle the digital video/picture data from the cameras and then send signals to switch the solenoid valves ON/OFF for robot movement. The user selects the target by mouse-clicking on the displayed video/picture. The real-time simulation is also generated inside the user interface during robot movement.
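As a hedged sketch of this interaction (the actual GUI was built in Visual Basic 6), the Python/tkinter snippet below shows the click-to-select wiring: a click on the displayed scene yields the pixel coordinates that would then be triangulated and turned into solenoid signals; those two calls are stubbed here.

```python
# Minimal click-to-select UI sketch: clicking the canvas reports the pixel
# that would be fed to triangulation and then to the solenoid interface.
import tkinter as tk

def on_click(event):
    print(f"target selected at pixel ({event.x}, {event.y})")
    # triangulate(...) and set_solenoid(...) would be called here

root = tk.Tk()
root.title("Harvester UI sketch")
canvas = tk.Canvas(root, width=320, height=240, bg="black")
canvas.pack()
canvas.bind("<Button-1>", on_click)
root.mainloop()
```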

RESULTS AND DISCUSSION

The software was able to determine the cocoa pod location using the triangulation method and to simulate the robot arm movements. A similar technique was used by Atkinson (1996) for measurement in machine vision. The development of graphical user interface software providing the real-time dynamic video scene and the top-view simulation of the robot arm movement was achieved in this project. The developed software, shown in Fig. 8, was able to retrieve the real environment scene from the digital cameras and send real-time cocoa pod images to the graphical user interface. The video program was developed using the API (Application Programming Interface) in Visual Basic.

Camera calibration: The system was able to generate the three-dimensional axes of the object from video/picture images as an on-line system. The robot controller was able to interact with the software in order to synchronize the simulation movements with the real robot arm movements. Ismail and Razali (2009) used the videogrammetry technique to obtain the three-axis coordinates of the target and feed them into the DH (Denavit-Hartenberg) calculation for robot movement.

To determine the accuracy of the cameras, an experiment was performed in which a number of points at known positions (shown as shining points in Fig. 9) were exposed to the cameras. The measured positions were then compared with the actual positions.

An experiment was performed in order to find an optimal mathematical model. This model converts the position of the target in the image plane into the real position in the base coordinate frame. The experiments led to the algorithm shown in Fig. 10 for transferring the coordinates of the target from the image plane to the world plane.
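The paper does not reproduce the model itself, so the sketch below shows one generic way such an image-to-world mapping can be fitted: a least-squares affine transform from image pixels to the world coordinates of known calibration points. All point values here are made up for illustration.

```python
# Fit an affine image-plane -> world-plane mapping from calibration points.
import numpy as np

# Known calibration points: image (u, v) pixels -> world (X, Y) millimetres
image_pts = np.array([[10, 12], [200, 15], [15, 180], [205, 178]], float)
world_pts = np.array([[0, 0], [100, 0], [0, 90], [100, 90]], float)

# Solve [u v 1] @ A = [X Y] for the 3x2 affine matrix A by least squares
design = np.hstack([image_pts, np.ones((len(image_pts), 1))])
A, *_ = np.linalg.lstsq(design, world_pts, rcond=None)

def image_to_world(u, v):
    """Map a clicked pixel into the world (base) coordinate frame."""
    return np.array([u, v, 1.0]) @ A

print(image_to_world(105, 95))  # roughly the centre of the calibration grid
```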


Fig. 9: Graphical user interface for camera calibration technique


Fig. 10: Robot grab simulation to the selected target

Tables 1-4 show the results of the calibration of measurement using the videogrammetry method in this study. The calibration shows that the absolute errors in all axis directions increase when the image distance is below 5 cm. All measurements were based on the following principles (a small helper sketch follows this list):

All units are in millimeters
Relative error = absolute error/actual coordinate
Absolute error = actual coordinate - computed coordinate
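These definitions translate directly into a small helper, shown here as a sketch with made-up example values:

```python
# Error metrics used in Tables 1-4 (all values in millimetres).

def absolute_error(actual_mm, computed_mm):
    return actual_mm - computed_mm

def relative_error(actual_mm, computed_mm):
    return absolute_error(actual_mm, computed_mm) / actual_mm

# Example: actual X = 10 mm, computed X = 9.5 mm
print(absolute_error(10.0, 9.5))  # 0.5 mm
print(relative_error(10.0, 9.5))  # 0.05
```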

Table 1 shows the results for target distances X = 0-10 mm, Y = 25 mm and Z = 0-5 mm; the experiment shows that the absolute errors in all axis directions increase when the image distance is below 5 cm.

Table 2 shows the results for target distances X = 0-10 mm, Y = 30 mm and Z = 0-10 mm, with the same pattern: the absolute errors in all axis directions increase when the image distance is below 5 cm.


Table 1: The target distance for; X = 0-10 mm, Y = 25 mm, Z = 0-5 mm

Table 2: The target distance for; X = 0-10 mm, Y = 30 mm, Z = 0-10 mm

Table 3: The target distance for; X = 0-10 mm, Y = 48 mm, Z = 0-10 mm

Table 3 shows the results for target distances X = 0-10 mm, Y = 48 mm and Z = 0-10 mm; again, the absolute errors in all axis directions increase when the image distance is below 5 cm.


Table 4: The target distance for; X = 0-10 mm, Y = 48 mm, Z = 0-15 mm

Table 4 shows the results for target distances X = 0-10 mm, Y = 48 mm and Z = 0-15 mm; again, the absolute errors in all axis directions increase when the image distance is below 5 cm.

REAL TIME SIMULATION ERROR

The movement of the robot arm did not accurately follow the real-time simulation during robot operation. This is due to the momentum force created by air compression inside the robot cylinders, especially cylinder No. 1. The overall weight of the robot structure was also considered unsuitable relative to the actuator performance, owing to mechanical vibration during operation. On initiation, the compressed air also caused shock in the actuator movement. This disadvantage of pneumatic systems has been noted in previous research (Wan-Ishak, 2007; Ahn and Yokota, 2005), so this technology only suits simple applications such as end effectors or grippers (Barber, 1997).

The robot also did not move accurately when two or more solenoid valves were active at the same time, due to insufficient air pressure supplied to the system. The movement of motor No. 1 also showed errors, making it unsuitable for precise movement. Figure 10 shows the top-view robot simulation.

ROBOT POSITIONAL ACCURACY

The robot was not able to grab the selected targets accurately, as mentioned before, and still needs improvement. It is quite difficult to control the pneumatic pressure when the robot must grab targets at random positions; pneumatics are better suited to grabbing at fixed positions with a fixed actuator control strategy. The overall compressed air supply also affected the movement of the robot arm, especially motor No. 1. Supplying constant air pressure is difficult if all actuators draw from the same source, and it is also difficult to control the robot arm movement with longer air tubing, because a longer tubing system gives the air more space in which to be compressed under load (Barber, 1997).

During the simulation process, the embedded program generates an electrical signal to trigger the solenoid valves, moving the robot to the selected targets. The calibration shows that the measurements made by the cameras using the triangulation principle were acceptable, to a certain degree, when compared with actual measurements.

The capability of the robot to grab the selected target still needs to be improved. From the experimentation, the robot was unable to grab the captured targets properly, as shown in the simulation, due to:

Jerking force generated when initiating cylinder No. 1, due to the robot's weight, which causes inaccurate robot movement
Motor No. 1 not working properly due to residual pressure generated inside, which causes it to rotate a further 10-30° after the solenoid is deactivated
The top-view simulation on a 2-dimensional plane not being a good medium for showing the real-time robot simulation, due to the lack of robot parameters collected

Based on this completed study, several suggestions are recommended to improve further research:

Apply a hybrid power-source system to the robotic arm: an electrical source for precise movements and a pneumatic source for fixed positions
Use the side-view robot simulation in addition to the top view, since the top view lacks robot information parameters compared with the side view, whose kinematic calculation would improve on the manual method
Develop a network camera design that uses multiple cameras placed at different locations in a convergent arrangement; this would increase the accuracy of 3D coordinate finding for the object target
Improve the mathematical model of the robot by using commercial software such as Matlab and PhotoModeler: Matlab is a technical computing language for solving and developing kinematics equations and simulations, while PhotoModeler is used for 3D modeling and measurement in videogrammetry applications

CONCLUSIONS

The system was able to retrieve a real-time dynamic video scene, from which the three coordinate axes of the object target are generated using a mouse click. The coordinates were calculated using the triangulation principle based on video scenes from two different camera locations. These 3-axis coordinates were measured from the Cartesian robot coordinate frame, which was taken as the reference point for the mathematical model of the robot simulation and kinematics. The workspace in the simulation software was also calibrated to match the real robot workspace.

The present work can be considered initial research in developing an intelligent robot eye for an agricultural harvesting robot. By using a non-contact measurement concept such as videogrammetry to detect an object and measure its 3D coordinates, the development of the robot eye was explored. For further research, the robot eye should use an RGB camera that automatically recognizes the mature object by pattern, color or wave-character manipulation, without the human intervention of clicking the image targets.

Razali et al. (2008) continued the robot-eye research toward an RGB camera that automatically recognizes the mature object by pattern, color or wave-character manipulation, without the user having to click the image targets as in this project. Here, the user identified the matured fruit using color interpretation, since color is related to the maturity stage of agricultural produce. This relation between color and maturity is also mentioned in the righteous book, the Al-Quran, in Surah Al-Hadiid (Chapter 57, verse 20): as the likeness of vegetation after rain, whose growth is pleasing to the tiller; afterwards it dries up and you see it turning yellow; then it becomes straw (Ali, 1934).

REFERENCES

  • Omrane, B., 1999. Vision system interfacing three DOF agriculture robot. M.Sc. Thesis, Universiti Putra Malaysia.


  • Fraser, C.S., 1996. Industrial Measurement Applications. In: Close Range Photogrammetry and Machine Vision, Atkinson, K.B. (Ed.). Whittles Publishing, Scotland, pp: 329-361


  • Atkinson, K.B., 1996. Close Range Photogrammetry and Machine Vision. Whittles Publishing, Roseleigh House, Scotland, UK


  • Kondo, N. and K.C. Ting, 1998. Robotics for Bioproduction Systems. ASAE., St. Joseph, Michigan, USA


  • Hudzari, R.M., W.I. Wan-Ishak, I. Napsiah, S.M. Nasir and S. Rashid, 2005. Videogrammetry technique for arm positioning of bio-production robot. Proceedings of International Advanced Technology Congress 2005. 6-8 December 2005, Putrajaya, Malaysia.


  • Craig, J.C. and J. Webb, 1998. Microsoft Visual Basic 6.0, For 32-bit Windows Development. Microsoft Press, USA


  • Razali, M.H., W.I.W. Ismail, A.R. Ramli and M.N. Sulaiman, 2008. Modeling of oil palm fruit maturity for the development of an outdoor vision system. Int. J. Food Eng.


  • Seiichi, A., K. Naoshi, F. Tateshi, N. Hiroshi and Y. Jun, 1995. Basic studies on cucumber harvesting robot. Proceedings of International Symposium on Automation and Robotics in Bioproduction and Processing, (IPSARBP`95), Kobe, Japan, pp: 195-202.


  • Ismail, W.I.W. and M.H. Razali, 2009. Conceptual control design for harvester. Eng. e-Trans., 4: 14-20.


  • Ali, A.Y., 1934. The Holy Quran: Text, Translation and Commentary. Tahrike Tarsile Quran, Lahore


  • Chen, Y.R., K. Chao and M.S. Kim, 2002. Machine vision technology for agricultural applications. Comput. Electronics Agric., 36: 173-191.


  • Edan, Y., D. Rogozin, T. Flash and G.E. Miles, 2000. Robotic melon harvesting. IEEE. Trans. Robotics Automation, 16: 831-835.


  • Wan-Ishak, W.I., M.A. Awal and R. Elango, 2008. Development of an automated transplanter for the gantry system. Asian J. Scientific Res., 1: 451-457.


  • Poplawski, J.S. and I.A. Sultan, 2007. Position sensing of industrial robots-a survey. Inform. Technol. J., 6: 14-25.


  • Barber, A., 1997. Pneumatic Handbook. Elsevier Advanced Technology Publisher, Oxford, UK


  • Wan-Ishak, W.I., 2007. Development of automation technology for the Malaysian agricultural sector. J. Ingenieur Board Eng. Malaysia, 33: 46-54.


  • Ahn, K. and S. Yokota, 2005. Intelligent switching control of pneumatic actuator using on/off solenoid valves. Mechatronics, 15: 683-702.
