Research Article
 

Development and Application of the Single-Camera Vision Measuring System



Kuei-Shu Hsu, Ko-Chun Chen, Tsung-Han Li and Min-Chie Chiu
 
ABSTRACT

The main purpose of this research is to construct a highly efficient, low-cost image-calculation system. In this system, the distance between the user and the targeted object is calculated from a light signal (Super LED) received by an image input device (CCD). We attempt to accomplish two things: an image measurement in conjunction with image depth, and a fast measurement of the image's distance. It is, of course, difficult to measure the distance between an object and an image using a single lens (CCD). The hardware platform of the system is composed of a light source, a web cam and a convex lens. According to the principle of the parallel optical axis, the distance can be obtained by conveying the image to the system via a USB communication interface, applying a characteristic (a straight line between the light source and the object), utilizing the dimension data and using the variation of light from the light source. The experimental results reveal that the distance between the object and the image lens can be acquired by utilizing a system in which image calculation is used in conjunction with a vision algorithm.


 
  How to cite this article:

Kuei-Shu Hsu, Ko-Chun Chen, Tsung-Han Li and Min-Chie Chiu, 2008. Development and Application of the Single-Camera Vision Measuring System. Journal of Applied Sciences, 8: 2357-2368.

DOI: 10.3923/jas.2008.2357.2368

URL: https://scialert.net/abstract/?doi=jas.2008.2357.2368
 

INTRODUCTION

A computer's visual system can be divided into two categories: plane vision and stereoscopic vision. The difference is based on the ability to estimate the depth of an object in an image (depth perception). Barnard and Thompson (1980) proposed that a computer stereoscopic vision system should include image acquisition, camera modeling, feature detection, image matching, depth determination and interpolation. By the end of the 1970s, researchers had already developed several computer stereoscopic vision algorithms. A stereoscopic visual system needs at least two cameras to synchronously capture a stereo image. Accurate parameters acquired from the cameras are required so that the object's stereoscopic depth can be calculated by using the corresponding points of the object's stereo image. The stereoscopic depth of the object (Henstock and David, 1996; Olson and Huttenlocher, 1997; Starck et al., 2003) can be obtained by using the asymmetric geometry under the given camera parameters. The accuracy of the stereoscopic depth is related to the parameters of the camera. Many researchers have proposed calibration methods for correcting the cameras' stereoscopic vision so as to decrease the calculation error in the object's depth (Mallon et al., 2002).

This research establishes a quick, efficient and low-cost image-calculation system in which a distance calculation can be achieved by using a web cam (CCD), a cheap input device, and a USB interface; this is done without auxiliary equipment. The image-calculation system is used to evaluate the target's distance (from the system's platform) and the variation of the object. In addition, the distance between the system's platform and the target can be evaluated using the diaphragm size of the light source derived from the CCD (Barnard and Thompson, 1980; Tsai, 1987; Bertozzi and Broggi, 1998; Ohya et al., 1998; Yau and Wang, 1999).

A traditional image, using one eye, cannot detect the distance between an object and a web camera (Tirumalai et al., 1992; Stafford et al., 2007). In this research, a new judgment rule is adopted to find the distance between the light source and the camera. The camera picks up the image continuously (Ohya et al., 1998). The variation of light from the light source is then acquired through image processing. Moreover, the relative distance between the system's platform and the object is calculated using the vision algorithm (Mohan and Nevatia, 1989; Marichal et al., 2001). Experimental results reveal that the distance between the system and the object can be evaluated immediately (Han et al., 1999). The visual-distance measuring system can be conveniently operated in a general environment; it is a convenient and time-saving method. By using an image in conjunction with its distance information, distance measurement tasks can be carried out (Kuo and Wu, 2002; Stafford et al., 2007; Bjorkman and Kragic, 2004; Kim et al., 2003; Seara and Schmidt, 2004).

SYSTEM DESIGN

As shown in Fig. 1, consecutive dynamic images are fed into the computer's memory by the imaging equipment of the web cam (CCD). A common USB (Universal Serial Bus) interface is used to connect the CCD.

The image processing converts an analog image into a digital image; moreover, refined image data are obtained by using the GFLSDK to delete the unnecessary images. This is achieved by applying a threshold value.

The image data, which have already been processed by the above equipment, can be used to acquire the object's location via the distance measurement rule. While the object moves continuously, the system keeps tracking the object's distance. This is called on-line distance measurement of the image.

Fig. 1: System process

Fig. 2: The projection region of the light source

As shown in Fig. 2, the system has two regions - a light projecting zone and a non-light projecting zone. When the light projecting zone is located within the object area and the distance-measuring method is used, the distance will be measured by the system.

THE STRUCTURE OF DISTANCE MEASUREMENT SYSTEM

The acquisition of a light signal is an essential issue for a visual image. With the light signal and the CCD on the same axis, the system can evaluate the distance using the image of the object and the light signal received from a single CCD. The most important aspect is how to capture a clear light signal. The object's distance can be calculated by fast image processing, pattern comparison and the measuring algorithm of the CCD's light signal.

System introduction: The distance measuring system includes a web camera. According to the parallel optical axis, the image is sent to the system host via the USB interface. As indicated in Fig. 3, both the Super LED and the convex lens installed in the platform are responsible for the light control required for the visual judgment.

Image input: The system is programmed in Visual Basic, which allows the programmer to code and debug easily and quickly.

Several controls are available for video retrieval in Visual Basic. In our system, VideoCapFree.ocx is used. It is free, easy to operate and can be used with Access, Visual C++, Visual Basic, Visual FoxPro and Delphi. VideoCapFree.ocx is linked to the image-grab system. The captured image, a single photograph that can be enlarged, shrunk and corrected, provides video retrieval and can be saved in various multimedia file formats.

Fig. 3: Digital image system

Table 1: The required memory space with respect to various colors

The required imaging hardware includes TV tuner Cards, a web cam, capture cards, etc.

Image processing: A grasp of graphics concepts is required before image processing is performed. For example, a piece of graphic data may be 320 pixels wide and 240 pixels high with 24-bit color; it is regarded as a 320x240-pixel image. Each pixel, the basic unit of an image, has 24 bits indicating its color. The number of colors a pixel can represent increases as the bit number is enlarged.

The memory a pixel uses grows correspondingly. The color of the image is composed of red, green and blue. The common color formats and their related memory requirements in a computer image are shown in Table 1.
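The memory arithmetic above can be sketched in a few lines; the helper name `image_bytes` is ours, not from the paper:

```python
# Uncompressed image size: width * height * (bits per pixel / 8) bytes.
def image_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

# The 320x240, 24-bit example from the text:
print(image_bytes(320, 240, 24))  # -> 230400 bytes (225 KB)
```

Reducing the bit depth (e.g., to 8-bit gray or 1-bit binary, as in the following sections) shrinks this figure proportionally.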

The way to simplify image processing is to reduce the depth of the RGB color. We use both gray-scale and binarization techniques to filter out unnecessary color until only the binary form remains. The leftover binary data are the only useful image data. The location of the object's image is obtained by the tracking and judgment rule.

Gray scale: The image from the CCD camera is full color and is regarded as RGB image data. It is not necessary to spend time processing the full RGB image; we can simplify the full color by converting it to gray scale. This means the RGB format is transformed to YIQ, where Y is the luminance, I is the hue and Q is the saturation. As shown in Fig. 4, the grayed image is obtained when Y is extracted from the image. Consequently, each pixel is represented by 256 gray levels (Ohya et al., 1998).
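The Y-extraction step can be sketched as follows, assuming the standard luminance weights of the RGB-to-YIQ transform (the helper `to_gray` is hypothetical, not the authors' code):

```python
# Luminance (Y) of an RGB pixel, using the standard YIQ weights.
def to_gray(r, g, b):
    return round(0.299 * r + 0.587 * g + 0.114 * b)

print(to_gray(255, 255, 255))  # -> 255 (white stays white)
print(to_gray(255, 0, 0))      # -> 76  (pure red maps to a dark gray)
```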

Image threshold: The image data processed by the gray scale divide black and white into 256 levels. A single pixel is compared up to 256 times while object tracking is performed. Even though recognizing the color of an object using the grayed image is much better than using the non-grayed image (checked up to 16,777,216 times), an advanced technique for improving color recognition is expected in the future.

Fig. 4: The gray scale

Fig. 5: Image threshold

Image threshold is an important stage during image processing. Processed image data are better than non-processed image data for image storing, processing and recognition. Regarding the technique of extracting certain images, the important point is setting the threshold value, where the gray value lies between 0 and 255. After the image threshold processing is performed, the image contains only two gray values. If the gray value of a pixel is larger than the preset threshold value, the pixel is defined as a light spot (the bit 1); otherwise, the pixel with a lower gray value is defined as a dark spot (the bit 0). Thereafter, each pixel, represented by a single bit, exhibits a mono color as shown in Fig. 5.
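A minimal sketch of this thresholding rule (the helper `binarize` and the default threshold of 128 are our illustrative assumptions, not values from the paper):

```python
# Binarize one row of gray values: > threshold -> light spot (1),
# otherwise -> dark spot (0), exactly as the rule above describes.
def binarize(gray_row, threshold=128):
    return [1 if v > threshold else 0 for v in gray_row]

print(binarize([0, 100, 128, 200, 255]))  # -> [0, 0, 0, 1, 1]
```

Note that a pixel exactly equal to the threshold falls on the dark side here; the paper's rule ("larger than") implies the same.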

GFLSDK library: The GFLSDK library is used for establishing the image-distance measuring system; its purpose here is the transformation of the image format. The GFLSDK, free of charge for noncommercial and academic use, supports 100 kinds of input formats and 40 kinds of output formats.

Table 2: The variation of pixel range with respect to distance in a CCD

It can also be linked to Delphi, Visual C++, Visual Basic, Borland C++ Builder, etc., for programming work. The distinguishing feature of the GFLSDK is its powerful, high-speed processing. It is often applied in developing image-graphics software.

Image vision discussion: Using a CCD camera, judging the visual image is equivalent to measuring a distance with a ruler. A feature of our research is quickly estimating depth by using a camera and a light source: using the distance and related information between the image and the object, and converting the above data into the required values (Mallon et al., 2002; Bertozzi and Broggi, 1998; Yau and Wang, 1999; Ohya et al., 1998).

Discrepancy from a distance: The resolution of the CCD camera is fixed; therefore, the pixel count does not change when the distance is varied. CMOS is the primary web cam sensor in the current market. The newest lens is the VGA CCD, which has 1,300,000 pixels. Even though the VGA CCD lens is adopted in this research, an error in distance still exists. The variation of pixel range with respect to distance in a CCD is shown in Table 2.

Depth computation: The allocation of the system platform and the object is shown in Fig. 6a. L1, the span between the Super LED and the convex lens, is related to both the focus (d) and the diaphragm (D), which are produced by casting the source's light onto the object.

In this experiment, the span (L1) is set at 11 cm. To obtain the real distance (L) shown in Fig. 6b, the related information of the focus (d) and diaphragm (D) at L1 = 1 m is required, where diaphragm (D) is the diameter of the amplified circle projected from the light source to the object via the convex lens.

The relationship between the distance L and the image (projected from the light source to the target via the convex lens) is shown in Fig. 7. Taking the CCD's image variation into consideration, L can be computed from the focus (d) and the diaphragm (D).

Fig. 6a: System platform

Fig. 6b: The relationship of light source and convex lens

Fig. 7: The relationship between the distance L and the image

The image seen through the convex lens at L1 = 1 m has been acquired as described earlier. When the object is turned at an angle, as shown in Fig. 8, L can likewise be obtained by using the CCD's image variation value.

Fig. 8: The distance judged with respect to various angles

IMAGE JUDGING METHOD

In this research, the visual image is used to evaluate the object`s distance. All the experimental data, accuracy and measuring speed are related to the calculation of the visual image. How to achieve a fast measurement, reduce the error in experimental work and efficiently promote the quality of the research becomes the main issue.

Introduction of the image judging method: As indicated in Fig. 9a and b, image judging begins by comparing the colors of the individual image pixels from left to right. After image processing, the color space of a single pixel is reduced from 16,777,216 colors to 2 colors. For an image with 320x240 pixels, the total number of color comparisons is decreased 77,361 times.

The image-judgment-method is used to recognize the object`s location inside the image. An efficient image-judgment-method will not only speed up the object`s measurement but also reduce the measurement error. The image-judgment-method is essential for the distance-measuring system.

For an image with 320x240 pixels, the variation of the judgment number with respect to various processes such as the original image, the grayed image and the image threshold at a single pixel is shown in Table 3.

Compound image judgment: In practical usage, the image-judgment-method, used to seek the area projected by the light source, is too accurate; therefore, it is inefficient in image judgment. For a working piece with a given error range of 50±0.5 mm, it would be meaningless if the required accuracy of manufacturing reaches 50±0.05 mm.

Fig. 9a: Scanning from left to right

Fig. 9b: Scanning into the center

Table 3: The No. of judgments for a single pixel at various stages

Therefore, to facilitate image judgment, the fundamental judgment method needs to be replanned. That is, the compound image-measurement method becomes the modified image-judgment method.

The fundamental judgment method maintains a fixed judging region and scans the image. However, the compound judgment method will adjust the judging zone in accordance with the various locations of a moving object. Therefore, the scanning zone shown in Fig. 10 will be adjusted automatically.
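The adaptive-window idea can be sketched as follows; the function `compound_scan`, the margin value and the full-frame fallback are our illustrative assumptions, not the authors' implementation:

```python
# Compound judgment sketch: scan only a window centered on the object's
# last known position; fall back to a full-frame scan if the object left it.
def compound_scan(frame, last_pos, margin=20):
    h, w = len(frame), len(frame[0])
    cx, cy = last_pos
    x0, x1 = max(0, cx - margin), min(w, cx + margin)
    y0, y1 = max(0, cy - margin), min(h, cy + margin)
    for y in range(y0, y1):          # fast pass over the adjusted zone
        for x in range(x0, x1):
            if frame[y][x]:          # light spot (binary 1) found
                return (x, y)
    for y in range(h):               # fallback: full-frame scan
        for x in range(w):
            if frame[y][x]:
                return (x, y)
    return None

frame = [[0] * 10 for _ in range(10)]
frame[5][5] = 1
print(compound_scan(frame, last_pos=(4, 4)))  # -> (5, 5)
```

Because the window tracks the moving object, most frames are resolved after scanning only a small fraction of the pixels.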

The best distance between the light source and the convex lens: If both the object's judgment velocity and the measurement accuracy are regarded as the most important targets, the following strategies can be considered for improving both speed and accuracy.

Fig. 10: Compound image judgment

Fig. 11: The reduction and amplification of the image from the light source

Decrement of the span between the light source and the convex lens: The CCD can obtain one set of light images on the object while the span between the light source and the convex lens is shortened. If the system platform reduces the span between the light source and the convex lens, the projected light image will become divergent or will shorten the allowable judgment distance. As indicated in Fig. 11, the image is obtained by the CCD using the reduced distance between the light source and the convex lens.

Increment of the span between the light source and the convex lens: In the above example, if the distance between the light source and convex lens is enlarged, the percentage of image concentration will be increased at some distant range between the light source and object. However, the image becomes vague beyond the above distant range. The available range will be changed when the amplified distance is varied. As shown in Fig. 12, the image is captured by the CCD within the available distance between the light source and object. Similarly, in Fig. 13 the image is captured by the CCD outside the available distance between the light source and object.

In order to obtain a fast measurement and improved accuracy, the amplification of the best distance is required.

Fig. 12: The image within an available distance

Fig. 13: The image beyond an available distance

The clearness of the image projected from the light source onto the object is a must while the above method is used.

EXPERIMENTAL ANALYSIS

Whole analysis: This section concerns the system's function as well as its practical performance. To evaluate the system's effect with respect to various parameters and to reduce errors in the laboratory, individual comparisons of parameters through experimental work have been developed. One of the features of our research is overcoming the difficulty of image acquisition within a normal environment. To acquire an image, a color filter lens in conjunction with a Super LED producing various wavelengths of light is applied.

Influence analysis: There are many convex lenses with various magnifications on the current market. In order to evaluate the diaphragm effect with respect to various magnifications of convex lenses, experiments with different magnifications have been performed. As indicated in Fig. 14, comparing a variety of lenses for object projection strengthens the reliability and accuracy of the experiment.

In order to appreciate the system`s performance index, the system is divided into two kinds of structures and tested in the lab.

Fig. 14: The influence of lens enlarging rates

Table 4: Focus and diaphragm with respect to distance

The structure of the system, a two-dimensional flat-surface visual image, is used to evaluate the effect of focus (d) and diaphragm (D) with respect to distance. The first experiment presumes that L1 is fixed at 11 cm. The investigation of the projected area's variation with respect to various distances is carried out using a convex lens (magnification: 2X). The results are shown in Table 4.

If L1 is a variable, the variation of focus (d) and diaphragm (D) with respect to distance is experimentally investigated at various L1s. Both the focus variation and the diaphragm variation with respect to distance at various L1s are depicted in Fig. 15 and 16.

The results reveal that L1 plays an essential role in the image acquisition and the object`s projection; moreover, the L1 influences the image judgment.

The analysis of real environmental influences: In the real world, an object is not always perpendicular to the ground; therefore, the effect of the projected area's variation with respect to distance under a non-perpendicular circumstance is investigated, as shown in Fig. 17 and Table 5.

According to different requirements, the system is equipped with an appropriate program which makes the system more efficient. The program structure of a two-dimensional visual image is shown in Fig. 18. Table 6 lists the related hardware equipment used with the visual image.

Fig. 15: The variation of focus with respect to distance at various L1s

Fig. 16: The variation of diaphragms with respect to distance at various L1s

Fig. 17: The variety of a diaphragm with respect to angles

Table 5: The variety of a diaphragm with respect to distance under a perpendicular and a tilting situation

Fig. 18: The program structure of a two-dimensional visual image

Table 6: Related hardware used with the visual image

RESULTS AND DISCUSSION

Experiment for a two-dimensional flat-surface image: The input image used in the laboratory is 640x480 pixels. The images with and without image processing are shown in Fig. 19a and b. Using the data in the system, an image judgment can be performed.

Table 7 is the specification of image processing, including the required time per single piece, the processing pieces per second and the pixels of the input image.

The comparison of the image recognition method

Basic scanning method: As indicated in Fig. 20, the image scanning is performed point by point. This kind of image processing may take a long time.

Matrix scanning method: As indicated in Fig. 21, image scanning is performed with multiple points. The required scanning time is shorter than that of the basic scanning method, even though some areas are scanned repeatedly. The blue matrix area moves continuously from left to right and from top to bottom.
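A minimal sketch of this coarse-then-fine idea, assuming a square block whose size (8 pixels here) and helper name are our choices, not the paper's:

```python
# Matrix scanning sketch: test coarse blocks first, then refine only
# inside the first block that contains a light pixel.
def matrix_scan(frame, block=8):
    h, w = len(frame), len(frame[0])
    for by in range(0, h, block):                 # blocks, top to bottom
        for bx in range(0, w, block):             # blocks, left to right
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            # coarse pass: does this block contain any light spot?
            if any(frame[y][x] for y in ys for x in xs):
                # fine pass: locate the exact pixel inside the block
                for y in ys:
                    for x in xs:
                        if frame[y][x]:
                            return (x, y)
    return None

frame = [[0] * 16 for _ in range(16)]
frame[10][3] = 1
print(matrix_scan(frame))  # -> (3, 10)
```

Pixels inside the winning block are visited twice (the repeated scanning the text mentions), but empty blocks are dismissed with a single coarse check.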

Compound scanning method: As indicated in Fig. 22, the image is scanned within a specified area. Using this method, the required scanning time is shortened; the scanning speed is faster than the other methods, with sufficient precision.

Fig. 19a: Original image

Fig. 19b: Image after system`s processing judgment

Fig. 20: Point to point scanning

Table 7: Specification of image processing

Comparison: To evaluate the performance of three kinds of image scanning methods, ten pictures are exemplified, compared and shown in Table 8.

Fig. 21: Matrix scanning method

Fig. 22: Compound scanning method

Table 8: The comparison of various image scanning methods

It is obvious that the compound scanning method has the shortest scanning time of the three methods; therefore, the distance can be quickly calculated by image judgment when the compound scanning method is adopted.

APPLICATION AND INTEGRATION OF THE IMAGE-DISTANCE MEASURING SYSTEM

The system and a realistic environment: The aim of this research is to establish an image-distance measuring system which can be used in a complicated, light-disturbed environment. This system, which replaces hand measurement, can be operated in a normal environment.

Fig. 23: Whole system

Introduction of the whole system: In order to carry out image-distance measuring, the system is designed to be used in a normal environment. Because of interference from natural light or fluorescent lamps in a normal environment, three kinds of light-filter lenses (red, blue and green) are required. To utilize the system in industry and improve the efficiency of image acquisition by the CCD, the above light-filter lenses are installed in the system.

Hardware configurations: As indicated in Fig. 23, the integrated image-judging system is composed of three kinds of components. In this system, a computer connects with the image-judging system. By using the input image and moving the object, the system can judge the object`s distance.

The influence of vision with respect to a light-filter lens
The projecting experiment with a fixed background: In the past, the recognition of a visual image was possible only in a good environment. In an environment with an intense brightness condition, the image cannot be identified; therefore, a light-filter lens is required to filter out the external signal noise. The image captured by the CCD at 3 m, before and after image processing with different filter lenses, is shown in Fig. 24.

Special background tests: The recognition of visual images works for special objects in a simple environment. In a complicated environment, the image is difficult to recognize; therefore, a light-filter lens is needed to filter out the external signal noise. The image variation with respect to various light-filter lenses before and after image processing is depicted in Fig. 25. It is obvious that the errors in both image acquisition and image judgment induced by background interference can be greatly reduced by using the appropriate light-filter lens in conjunction with image processing.

Fig. 24: The image before and after image processing with different filter lenses

Fig. 25: The image before and after threshold with different filter lenses

CONCLUSION

The main purpose of this research is to establish a highly efficient vision-distance measuring system which is steady, cheap and suitable for machine-vision applications. As the results reveal, the performance of the system is quick, accurate and cheap. Unfortunately, the CCD device and the light source used for image input are costly. The other software, of which a trial version is released, is free of charge. The devices adopted in the experiment include a CMOS lens of 300,000 pixels, a laser pen and a convex lens (magnification: 2X). However, because of light-ray interference and a complicated environment, the stability of the system is insufficient.

To overcome the above drawbacks, a VGA CCD of 1,500,000 pixels is adopted for image recognition; however, errors in image identification still exist. Therefore, the Super LED is taken as the new light source in the experimental work. Yet performance is still lacking: because of light interference and a complicated environment, it is easy for the system to make errors in image judging while image acquisition by the CCD is in progress. In order to overcome this problem, three kinds of light-filter lenses (red, blue and green) are used with the background variation. Image recognition by the human eye for some colors at a long distance is weak; after image processing, the identification of an image signal is improved. Moreover, by adjusting the light-filter lens, the image-judging system's instability caused by light-ray interference and a complicated environment is greatly reduced.

The lumen level of the light source in the experiment is 25~30. The measured distance can reach 1 m~7 m under the influence of a light ray. It is recommended to increase the lumens of the light source appropriately if the environment suffers heavy light-ray interference.

The purpose of this research is to replace the general measuring method, which is done by hand, is time-consuming and is inefficient in the modern world. The new distance-measuring system is quite efficient and is suitable for accident-site measurement.

REFERENCES
1:  Barnard, S.T. and W.B. Thompson, 1980. Disparity analysis of images. IEEE Trans. Pattern Anal. Mach. Intell., 2: 330-340.
CrossRef  |  Direct Link  |  

2:  Bertozzi, M. and A. Broggi, 1998. GOLD: A parallel real-time stereo vision system for generic obstacle and lane detection. IEEE Trans. Image Process., 7: 62-81.
CrossRef  |  Direct Link  |  

3:  Bjorkman, M. and D. Kragic, 2004. Combination of foveal and peripheral vision for object recognition and pose estimation. Proceedings of the International Conference on Robotics Automation, April 26-May 1, 2004, Stockholm, Sweden, pp: 5135-5140.

4:  Han, S.H., W.H. Seo, K.S. Yoon and M.H. Lee, 1999. Real-time control of an industrial robot using image-based visual servoing. Proceedings of the International Conference on Intelligent Robots and Systems, October 17- 21, 1999, Kyungnam University, Masan, pp: 1762-1796.

5:  Henstock, P.V. and M.C. David, 1996. Automatic gradient threshold determination for edge detection. IEEE Trans. Image Processing, 5: 784-787.
CrossRef  |  Direct Link  |  

6:  Kim, S., I.S. Kweon and I. Kim, 2003. Robust model-based 3d object recognition by combining feature matching with tracking. Proceedings of the International Conference on Robotics and Automation, September 14-19, 2003, IEEE Xplore, London, pp: 2123-2128.

7:  Kuo, H.C. and L.J. Wu, 2002. An image tracking system for welded seams using fuzzy logic. J. Mater. Process. Technol., 120: 169-185.
CrossRef  |  

8:  Mallon, J., O. Ghita and P.F. Whelan, 2002. Robust 3-D landmark tracking using trinocular vision. OPTO-Ireland: SPIE's regional meeting on optoelectronics. Photonics Imaging, pp: 221-229.

9:  Marichal, G.N., L. Acosta, L. Moreno, J.A. Méndez, J.J. Rodrigo and M. Sigut, 2001. Obstacle avoidance for a mobile robot: A neuro-fuzzy approach. Fuzzy Sets Syst., 124: 171-179.
Direct Link  |  

10:  Mohan, R. and R. Nevatia, 1989. Using perceptual organization to extract 3D structures. IEEE Trans. Pattern Anal. Mach. Intell., 11: 1121-1139.
CrossRef  |  Direct Link  |  

11:  Ohya, A., A. Kosaka and A. Kak, 1998. Vision-based navigation by a mobile robot with obstacle avoidance using single-camera vision and ultrasonic sensing. IEEE Trans. Robot. Automation, 14: 969-978.
CrossRef  |  Direct Link  |  

12:  Olson, C.F. and D.P. Huttenlocher, 1997. Automatic target recognition by matching oriented edge pixels. IEEE Trans. Image Process., 6: 103-113.
CrossRef  |  Direct Link  |  

13:  Seara, J.F. and G. Schmidt, 2004. Intelligent gaze control for vision-guided humanoid walking: Methodological aspects. Robot. Autonomous Syst., 48: 231-248.
CrossRef  |  Direct Link  |  

14:  Stafford, R., R.D. Santer and F.C. Rind, 2007. A bio-inspired visual collision detection mechanism for cars: Combining insect inspired neurons to create a robust system. Biosystems, 87: 164-171.
CrossRef  |  PubMed  |  Direct Link  |  

15:  Starck, J.L., F. Murtagh, E.J. Candes and D.L. Donoho, 2003. Gray and color image contrast enhancement by the curvelet transform. IEEE Trans. Image Process., 12: 706-717.
CrossRef  |  Direct Link  |  

16:  Tsai, R.Y., 1987. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom., 3: 323-344.
Direct Link  |  

17:  Tirumalai, A.P., B.G. Schunck and R.C. Jain, 1992. Dynamic stereo with self-calibration. IEEE Trans. Pattern Anal. Mach. Intel., 14: 1184-1189.
CrossRef  |  Direct Link  |  

18:  Yau, W.Y. and H. Wang, 1999. Fast relative depth computation for an active stereo vision system. Real Time Imag., 5: 189-202.
CrossRef  |  
