A computer's visual system can be divided into two categories, plane vision and stereoscopic vision; the difference lies in the ability to estimate the depth of an object in an image, i.e., depth perception. Barnard and Thompson (1980) proposed that a computer stereoscopic vision system should include image acquisition, camera modeling, feature detection, image matching, depth determination and interpolation. By the end of the 1970s, researchers had already developed several computer stereoscopic vision algorithms. A stereoscopic visual system needs at least two cameras to capture a stereo image synchronously. Accurate camera parameters are required so that an object's stereoscopic depth can be calculated from the corresponding points of the object's stereo image. The stereoscopic depth of the object (Henstock and David, 1966; Olson and Huttenlocher, 1997; Starck et al., 2003) can be obtained from the imaging geometry under the given camera parameters. The accuracy of the stereoscopic depth depends on those parameters, so many researchers have proposed calibration methods for correcting the cameras of a stereoscopic vision system and decreasing the error in the calculated object depth (Mallon et al., 2002).
This research establishes a quick, efficient and low-cost image-measuring system in which distance calculation is achieved with a web cam (CCD), a cheap input device with a USB interface, and without any auxiliary equipment. The system is used to evaluate the target's distance from the system's platform and the variation of the object. In addition, the distance between the system's platform and the target can be evaluated from the diaphragm size of the light source observed by the CCD (Barnard and Thompson, 1980; Tsai, 1987; Bertozzi and Broggi, 1988; Ohya et al., 1998; Yau and Wang, 1999).
A traditional single image, like one eye, cannot reveal the distance between an object and a web camera (Tirumalai et al., 1992; Stafford et al., 2007). In this research, a new judgment rule is adopted to find the distance between the light source and the camera. The camera picks up images continuously (Ohya et al., 1998), and the variation of the light from the light source is acquired through image processing. The relative distance between the system's platform and the object is then calculated with the vision algorithm (Mohan and Nevatia, 1989; Marichal et al., 2001). Experimental results reveal that the distance between the system and the object can be evaluated immediately (Han et al., 1999). The visual-distance measuring system can be conveniently operated in a general environment, which makes it a convenient and time-saving method. By using an image together with its distance information, the distance-measurement task is carried out (Kuo and Wu, 2002; Stafford et al., 2007; Bjorkman and Kragic, 2004; Kim et al., 2003; Seara and Schmidt, 2004).
As shown in Fig. 1, consecutive dynamic images are fed into the computer's memory by the imaging equipment of the web cam (CCD); a common USB (Universal Serial Bus) interface is adopted for the CCD. Image processing converts the analog image into a digital image; in addition, refined image data are obtained by using GFLSDK to delete the unnecessary image content, which is achieved with a threshold value. The image data processed by the above equipment are then used to acquire the object's location with the distance-measurement rule. While the object moves continuously, the system keeps tracking the object's distance; this is called on-line distance measurement of the image.
[Figure: The projection region of the light source]
As shown in Fig. 2, the system has two regions, a light-projecting zone and a non-light-projecting zone. When the light-projecting zone falls within the object area, the distance is measured by the system using the distance-measuring method.
THE STRUCTURE OF THE DISTANCE-MEASUREMENT SYSTEM
The acquisition of a light signal is an essential issue for a visual image. With the light signal and the CCD on the same axis, the system can evaluate the distance using the image of the object and the light signal received from a single CCD. The most important aspect is how to capture a clear light signal. The object's distance can then be calculated by fast image processing, pattern comparison and the measuring algorithm applied to the CCD's light signal.
System introduction: The distance-measuring system includes a web camera. Along the parallel optical axis, the image is sent to the system host via the USB interface. As indicated in Fig. 3, both the Super LED and the convex lens installed on the platform are responsible for the light control required by the visual image.
Image input: The system is programmed in Visual Basic, in which the programmer can easily and quickly code and debug. Several visual-retrieval components are available for Visual Basic; in our system, VideoCapFree.ocx is used. It is free, easy to operate and can be used with Access, Visual C++, Visual Basic, Visual FoxPro and Delphi. VideoCapFree.ocx is linked to the image-grabbing system. The captured image, a single photograph that can be enlarged, shrunk and corrected, provides video retrieval and can be saved in various formats within the multimedia files.
[Figure: Digital image system]
[Table: The required memory space with respect to various colors]
The required imaging hardware includes TV tuner cards, a web cam, capture cards, etc.
Image processing: A concept of the image format is required before image processing is performed. For example, consider a piece of graphic data 320 pixels wide and 240 pixels high with 24-bit color; it is regarded as a 320x240-pixel image. Each pixel, the basic unit of an image, has 24 bits that indicate its color. The number of representable colors per pixel increases as the bit count grows, and the memory a pixel uses grows correspondingly. The color of the image is composed of red, green and blue. The common color formats and their memory requirements in a computer image are shown in Table 1.
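Working through the example above, the memory one frame occupies follows directly from the pixel count and the color depth:

```python
# Memory required for one frame of the example above:
# 320 x 240 pixels at 24 bits (3 bytes) per pixel.
width, height, bits_per_pixel = 320, 240, 24

pixels = width * height                     # total pixels in the frame
size_bytes = pixels * bits_per_pixel // 8   # 8 bits per byte

print(pixels)      # 76800
print(size_bytes)  # 230400 bytes (225 KB)
```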
The way to simplify image processing is to reduce the depth of the RGB color. We use both gray-scaling and a binary technique to filter out the unnecessary color until only the binary form remains; the leftover binary data are the only image data needed. The location of the object's image is then obtained by the tracking and judgment rule.
Gray scale: The image from the CCD camera is in full color and is treated as RGB image data. It is not necessary to spend time processing the full RGB image; we simplify the full color by converting it to gray scale. This means the RGB format is transformed to YIQ, where Y is the luminance, I the hue and Q the saturation. As shown in Fig. 4, the grayed image (gamma-corrected) is obtained when Y is extracted from the image. Consequently, each pixel takes one of 256 gray levels (Ohya et al., 1998).
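A minimal sketch of the Y extraction, assuming the standard NTSC RGB-to-YIQ luminance weights (the paper does not list its exact coefficients):

```python
# Y (luminance) of the YIQ transform, with the standard NTSC weights;
# extracting Y alone yields the 256-level gray image described above.
def luminance(r, g, b):
    """Gray level 0..255 for an 8-bit (R, G, B) pixel."""
    return min(255, round(0.299 * r + 0.587 * g + 0.114 * b))

print(luminance(255, 255, 255))  # 255: white stays white
print(luminance(255, 0, 0))      # 76: pure red maps to a dark gray
```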
Image threshold: Gray-scale processing divides the range from black to white into 256 levels, so a single pixel may be compared up to 256 times while object tracking is performed. Even though recognizing an object's color in the grayed image is far cheaper than in the non-grayed image (which would require up to 16,777,216 checks), a further technique for improving color recognition is still needed.
[Figure: The gray scale]
Image thresholding is an important stage of image processing: thresholded image data are better than unprocessed data for storage, processing and recognition. For extracting particular image content, the key point is setting the threshold value within the gray range of 0 to 255. After thresholding, the image contains only two gray values. If the gray value of a pixel is larger than the preset threshold, the pixel is defined as a light spot (bit 1); otherwise, the pixel with the lower gray value is defined as a dark spot (bit 0). Thereafter, a pixel represented by one of two values exhibits a mono color, as shown in Fig. 5.
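The thresholding rule above can be sketched directly; the default threshold of 128 is an assumed mid-range value:

```python
# The thresholding rule above: a pixel whose gray value exceeds the
# preset threshold t becomes a light spot (1); all others become dark
# spots (0), leaving a two-valued (binary) image.
def binarize(gray_image, t=128):
    return [[1 if g > t else 0 for g in row] for row in gray_image]

gray = [[12, 200],
        [130, 90]]
print(binarize(gray))  # [[0, 1], [1, 0]]
```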
GFLSDK library: The GFLSDK library is used to build the image-distance measuring system; its role here is transformation of the image format. GFLSDK, which is free for non-commercial and academic use, supports about 100 input formats and 40 output formats.
[Table: The variation of pixel range with respect to distance in a CCD]
It can also be linked to Delphi, Visual C++, Visual Basic, Borland C++ Builder, etc., for programming work. The distinguishing features of GFLSDK are its power and high-speed processing, and it is often applied in developing image-graphics software.
Image vision discussion: Using a CCD camera to judge the visual image is equivalent to measuring a distance with a ruler. A feature of our research is to estimate depth quickly using a camera and a light source, by using the distance-related information between the image and the object and converting these data into the required values (Mallon et al., 2002; Bertozzi and Broggi, 1988; Yau and Wang, 1999; Ohya et al., 1998).
Discrepancy from distance: The resolution of the CCD camera is fixed; therefore, the pixel count does not change when the distance varies. CMOS is the primary web-cam sensor on the current market; the newest lens is the VGA CCD with 1,300,000 pixels. Even though the VGA CCD lens is adopted in this research, an error in distance still exists. The variation of pixel range with respect to distance in the CCD is shown in Table 2.
Depth computation: The layout of the system platform and the object is shown in Fig. 6a. L1, the span between the Super LED and the convex lens, is related to both the focus (d) and the diaphragm (D) produced by casting the source's light onto the object. In this experiment, the span L1 is set at 11 cm. To obtain the real distance (L) shown in Fig. 6b, the focus (d) and diaphragm (D) measured at L = 1 m are required, where the diaphragm (D) is the diameter of the amplified circle projected from the light source onto the object via the convex lens.
The relationship between the distance L and the image (projected from the light source onto the target via the convex lens) is shown in Fig. 7. Taking the CCD's image variation into consideration, L can be expressed in terms of that variation.
[Figure: The relationship of the light source and the convex lens]
[Figure: The relationship between the distance L and the image]
The image seen through the convex lens at L = 1 m is acquired as described earlier. When the object is turned at an angle, as shown in Fig. 8, L can likewise be obtained from the CCD's image-variation value and the corresponding measuring equation.
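The measuring equations themselves appear in the original figures and are not reproduced here. As a hedged sketch of the idea: if the diaphragm's diameter D (in pixels, as seen by the CCD) varies approximately linearly with the distance L over the working range, two calibration measurements suffice. All names and values below are illustrative assumptions, not the paper's actual equation.

```python
# Hypothetical linear calibration: map the measured diaphragm diameter
# D (pixels) to the distance L (meters) from two reference measurements.
def calibrate(d1, l1, d2, l2):
    """Return L(D) assuming L varies linearly with D between the points."""
    a = (l2 - l1) / (d2 - d1)
    b = l1 - a * d1
    return lambda d: a * d + b

# Assumed calibration points: 80 px at 1 m and 20 px at 7 m.
distance = calibrate(80, 1.0, 20, 7.0)
print(distance(50))  # roughly 4.0 m, midway between the two points
```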
[Figure: The distance judged with respect to various angles]
IMAGE JUDGING METHOD
In this research, the visual image is used to evaluate the object's distance. All the experimental data, the accuracy and the measuring speed are related to the calculation on the visual image. The main issues are therefore how to achieve a fast measurement, reduce experimental error and efficiently raise the quality of the research.
Introduction of the image-judging method: As indicated in Fig. 9a and b, image judging begins by comparing colors pixel by pixel from left to right. After image processing, the possible colors of a single pixel are reduced from 16,777,216 to 2; for an image of 320x240 pixels, the total number of color comparisons is decreased 77,361 times.
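The comparison counts quoted above can be checked directly; a minimal sketch of the per-pixel value counts at each stage:

```python
# Number of values a single pixel can take at each processing stage,
# matching the counts quoted above.
original  = 2 ** 24   # 24-bit full color
grayed    = 2 ** 8    # 256 gray levels after gray-scaling
binarized = 2         # light spot / dark spot after thresholding

print(original, grayed, binarized)  # 16777216 256 2
```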
The image-judgment method is used to recognize the object's location inside the image. An efficient image-judgment method will not only speed up the object's measurement but also reduce the measurement error; the method is essential for the distance-measuring system. For an image of 320x240 pixels, the number of judgments per single pixel at the various stages (the original image, the grayed image and the thresholded image) is shown in Table 3.
Compound image judgment: In practical use, the basic image-judgment method used to seek the area projected by the light source is more accurate than necessary and is therefore inefficient. For a workpiece with a given tolerance of 50±0.5 mm, it would be meaningless for the manufacturing accuracy to reach 50±0.05 mm.
[Figure: Scanning from left to right]
[Figure: Scanning into the center]
[Table: The number of judgments for a single pixel at various stages]
To facilitate image judgment, the fundamental judgment method needs to be re-planned; that is, the compound image-measurement method becomes the modified image-judgment method.
The fundamental judgment method maintains a fixed judging region and scans the image. The compound judgment method, however, adjusts the judging zone according to the varying location of a moving object; therefore, the scanning zone shown in Fig. 10 is adjusted automatically.
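As a rough illustration of the compound-judgment idea (the function names, window margin and miss policy are assumptions, not the paper's implementation), the judging zone can be re-centered on the object's last known position each frame:

```python
# Sketch of compound judgment: scan only a window around the object's
# last known position and re-center it each frame.
def scan_window(binary, center, margin=20):
    """Coordinates of '1' pixels inside the window around center."""
    cx, cy = center
    h, w = len(binary), len(binary[0])
    return [(x, y)
            for y in range(max(0, cy - margin), min(h, cy + margin))
            for x in range(max(0, cx - margin), min(w, cx + margin))
            if binary[y][x]]

def track(binary, last_center, margin=20):
    """New (integer) centroid, or the old center when nothing is seen."""
    hits = scan_window(binary, last_center, margin)
    if not hits:
        return last_center  # in practice, fall back to a full-frame scan
    n = len(hits)
    return (sum(x for x, _ in hits) // n, sum(y for _, y in hits) // n)

frame = [[0] * 40 for _ in range(40)]
frame[10][10] = frame[10][11] = frame[11][10] = frame[11][11] = 1
print(track(frame, (12, 12), margin=5))  # (10, 10)
```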
The best distance between the light source and the convex lens: If both the object's judgment speed and the measurement accuracy are regarded as the most important targets, the following strategies can be considered for improving both.
[Figure: Compound image judgment]
Decrement of the span between the light source and the convex lens: The CCD obtains one set of light images on the object while the span between the light source and the convex lens is shortened. If the system platform reduces this span, the projected light image becomes divergent or the allowable judgment distance is shortened. As indicated in Fig. 11, the image is obtained by the CCD with the reduced distance between the light source and the convex lens.
Increment of the span between the light source and the convex lens: Conversely, if the distance between the light source and the convex lens is enlarged, the concentration of the image increases within a certain range of distances between the light source and the object; beyond that range, however, the image becomes vague. The available range changes as the amplified distance varies. As shown in Fig. 12, the image is captured by the CCD within the available distance between the light source and the object; similarly, in Fig. 13 the image is captured by the CCD outside that available distance.
To obtain a fast measurement and improved accuracy, the optimum amplification distance is required.
[Figure: The image within an available distance]
[Figure: The image beyond an available distance]
When the above method is used, the clarity of the image projected from the light source onto the object is a must.
Whole analysis: This section concerns the system's function as well as its practical performance. To evaluate the system's behavior with respect to various parameters and to reduce errors in the laboratory, individual comparisons of parameters have been carried out experimentally. One feature of our research is to overcome the difficulty of image acquisition in a normal environment: to acquire an image, a color-filter lens is applied in conjunction with the Super LED, which produces various wavelengths of light.
Influence analysis: Convex lenses with various magnifications are widely available on the current market. To evaluate the diaphragm effect with respect to various magnifications, experiments with different magnifications have been performed. As indicated in Fig. 14, comparing a variety of lenses for object projection strengthens the reliability and accuracy of the experiment. To appreciate the system's performance index, the system is divided into two kinds of structures and tested in the lab.
[Figure: The influence of lens enlarging rates]
[Table: Focus and diaphragm with respect to distance]
The structure of the system, a two-dimensional flat-surface visual image, is used to evaluate the effect of the focus (d) and the diaphragm (D) with respect to distance. The first experiment fixes L1 at 11 cm and investigates the variation of the projected area with respect to distance using a convex lens (magnification: 2X). The results are shown in Table 4.
If L1 is a variable, the variation of the focus (d) and diaphragm (D) areas with respect to distance is investigated experimentally at various values of L1. Both the focus variation and the diaphragm variation with respect to distance at various L1 values are depicted in Fig. 15 and 16. The results reveal that L1 plays an essential role in image acquisition and the object's projection; moreover, L1 influences the image judgment.
The analysis of real environmental influences: In the real world, an object is not always perpendicular to the ground; therefore, the variation of the projected area with respect to distance under a non-perpendicular circumstance is investigated and shown in Fig. 17.
According to different requirements, the system is equipped with an appropriate program that makes it more efficient. The program structure of the two-dimensional visual image is shown in Fig. 18; Table 6 lists the related hardware equipment used for the visual image.
[Figure: The variety of the focus with respect to distance at various L1s]
[Figure: The variety of diaphragms with respect to distance at various L1s]
[Figure: The variety of a diaphragm with respect to angles]
[Figure: The variety of a diaphragm with respect to distance under a perpendicular and a tilting situation]
[Figure: The program structure of a two-dimensional visual image]
[Table: Related hardware used with the visual image]
RESULTS AND DISCUSSION
Experiment for a two-dimensional flat-surface image: The input image used in the laboratory is 640x480 pixels. The images with and without image processing are shown in Fig. 19a and b. Using these data, the system can perform an image judgment.
Table 7 gives the specification of the image processing, including the required time per frame, the frames processed per second and the pixels of the input image.
The comparison of the image-recognition methods
Basic scanning method: As indicated in Fig. 20, image scanning is performed point by point. This kind of image processing may take a long time.
Matrix scanning method: As indicated in Fig. 21, image scanning is performed with multiple points at a time. The required scanning time is shorter than that of the basic scanning method, even though some areas are scanned repeatedly. The blue matrix area moves continuously from left to right and from top to bottom.
Compound scanning method: As indicated in Fig. 22, the image is scanned only within a specified area. Using this method, the required scanning time is shortened further, and the scanning speed is faster than that of the other methods.
[Figure: Image after the system's processing judgment]
[Figure: Point-to-point scanning]
Comparison: To evaluate the performance of the three image-scanning methods, ten pictures are taken as examples and compared; the results are shown in Table 8.
[Figure: Matrix scanning method]
[Figure: Compound scanning method]
[Table: The comparison of various image scanning methods]
It is obvious that the compound scanning method has the shortest scanning time of the three; therefore, the distance can be calculated quickly by image judgment when the compound scanning method is adopted.
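For intuition, here are rough pixel-visit counts for the three scanning styles on a 320x240 frame; the 4-pixel matrix step and the 40x40 compound window are assumed values, not figures from the comparison table:

```python
# Rough pixel-visit counts for the three scanning styles on a
# 320 x 240 frame; the 4-pixel step and 40 x 40 window are assumed.
w, h = 320, 240
basic    = w * h                # point-by-point: visit every pixel
matrix   = (w // 4) * (h // 4)  # coarse scan, one probe per 4 x 4 block
compound = 40 * 40              # one window around the tracked target

print(basic, matrix, compound)  # 76800 4800 1600
```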
APPLICATION AND INTEGRATION OF THE IMAGE-DISTANCE MEASURING SYSTEM
The system and a realistic environment: The aim of this research is to establish an image-distance measuring system that can be used in a complicated, light-disturbed environment. This system, which replaces measurement by hand, can operate in a normal environment.
Introduction of the whole system: To realize the image-distance measuring system, the system is designed to be used in a normal environment. Because of the interference of natural light or fluorescent lamps in such an environment, three kinds of light-filter lenses (red, blue and green) are required. To use the system in industry and improve the efficiency of image acquisition by the CCD, the light-filter lens is installed in the system.
Hardware configurations: As indicated in Fig. 23, the integrated image-judging system is composed of three kinds of components. In this system, a computer connects to the image-judging system. Using the input image and the moving object, the system can judge the object's distance.
[Figure: The influence of vision with respect to a light-filter lens]
The projecting experiment with a fixed background: In the past, the recognition of a visual image was possible only in a good environment. In an environment with an intense-brightness condition, the image cannot be identified; therefore, a light-filter lens is required to filter out the external signal noise. The image captured by the CCD at 3 m, before and after image processing with different filter lenses, is shown in Fig. 24.
Special background tests: The recognition of visual images works for particular objects in a simple environment; in a complicated environment the image is difficult to recognize, so a light-filter lens is needed to filter out the external signal noise. The image variation with respect to the various light-filter lenses, before and after image processing, is depicted in Fig. 25. It is obvious that the errors in both image acquisition and image judgment induced by background interference can be greatly reduced by using the appropriate light-filter lens in conjunction with image processing.
[Figure: The image before and after thresholding with different filter lenses]
The main purpose of this research is to establish a highly efficient, steady and cheap vision-distance measuring system fit for application in machine vision. As the results reveal, the system's performance is quick, accurate and inexpensive. Unfortunately, the CCD device and the light source used for image input are the costly parts of the system; the other software, released as a trial version, is free of charge. The devices adopted in the experiment include a CMOS lens of 300,000 pixels, a laser pen and a convex lens (magnification: 2X). However, because of light-ray interference and the complicated environment, the stability of that configuration was insufficient.
To overcome the above drawbacks, a VGA CCD of 1,500,000 pixels was adopted for image recognition; however, errors in image identification still existed. Therefore, the Super LED was taken as the new light source in the experimental work, yet performance was still lacking: because of light interference and the complicated environment, it is easy for the system to make errors in image judging while acquiring images with the CCD. To overcome this problem, three kinds of light-filter lenses (red, blue and green) are used to cope with the background variation. Image recognition by the human eye is weak for some colors at a long distance; after image processing, the identification of the image signal is improved. Moreover, by adjusting the light-filter lens, the image-judging system's instability caused by light-ray interference and the complicated environment is greatly reduced.
The light source's output in the experiment is 25~30 lumens. The measured distance can reach 1 to 7 m under the influence of a light ray. It is recommended to increase the lumen output of the light source appropriately if the environment suffers strong light interference.
The purpose of this research is to replace the general measuring method, which is done by hand and is time-consuming and inefficient in the modern world. The new distance-measuring system is therefore quite efficient and suitable for accident-site measurement.