
Information Technology Journal

Year: 2014 | Volume: 13 | Issue: 16 | Page No.: 2611-2618
DOI: 10.3923/itj.2014.2611.2618
Study of Moving Obstacle Detection at Railway Crossing by Machine Vision
Yong-Ren Pu, Li-Wei Chen and Su-Hsing Lee

Abstract: This study is designed to develop an advanced safety system that is able to detect the existence of moving obstacles at a railway crossing. In a miniature railway crossing, scaled 1:22.5, the authors installed a grayscale CCD camera and developed a graphical user interface to process the images of the crossing. To achieve this goal, the software was programmed to perform several image processing techniques such as image subtraction, binarization, morphological transformation and segmentation to track down the moving obstacles. In addition, portions of the monitored image around the rails were labeled as alert and alarm zones where detected obstacles would trigger the sirens of this system. Under various lighting conditions for the model cars with different colors in the indoor environment, the experiments on the developed system demonstrated that the level of illuminance was a significant factor affecting the average alert accuracy rate but the color of cars was not. Overall, the average alert accuracy rate reached 97.8%. The promising results of the system’s capability to recognize obstacles that might pose threats merit pursuit of full-scale development at railway crossing sites to provide effective protection against previously undetected, now preventable incidents.


How to cite this article
Yong-Ren Pu, Li-Wei Chen and Su-Hsing Lee, 2014. Study of Moving Obstacle Detection at Railway Crossing by Machine Vision. Information Technology Journal, 13: 2611-2618.

Keywords: machine vision, obstacle detection, railway crossing and CCD

INTRODUCTION

The railway transportation industry is quite important and has been integrated into the lives of populaces all over the world. The railways bring us convenience on one hand but on the other, they also cause many traffic accidents each year. Most accidents take place at street-level crossings. This seems to demonstrate that road users do not consistently obey the traffic laws and that the safety protection measures of crossings also need to be improved. An accident within a railway crossing boundary is often caused when an approaching train collides with intruding pedestrians or vehicles on the tracks of the crossing. The outcomes of such incidents vary, ranging from simply inconvenient schedule delays and traffic jams, to damage to vehicles or human injury, to fatalities or even train derailments. Although accidents at crossings are not common, their consequences, in both casualties and property losses, are the most egregious among ground transportation accidents. They often leave victims’ families and local communities devastated. The principal cause of most accidents is that a report of an incident or a vehicle stalled at a crossing cannot be communicated to the engineer quickly. By the time the engineer becomes aware of the situation, it is already too late.

Governments and railroad companies all over the world pay close attention to railway security and implement many safety precautions to prevent unfortunate incidents at railway crossings. According to statistics, the accident rates are decreasing but are still far from the total elimination that people are pursuing (Cunliffe, 1999).

In order to actively detect obstacles in the way of approaching trains at railway crossings, researchers have developed various devices such as: Infrared pairs of detecting units to indicate ray-interrupting objects (Takeda, 1999); various radar systems designed to detect vehicles present within the crossing boundaries (Lohmeier et al., 2002; Watanabe et al., 2002; Narayanan et al., 2011); various optical devices for level crossing obstacle detection (Kim et al., 2012; Silar and Dobrovolny, 2013); alarm or warning systems based on accurate positioning of trains using GPS (Lei and Xiao, 2011; Oorni, 2014); a dual intelligent safety alarming system based on PLC to warn the passing vehicles (Zhang et al., 2014) and ultrasonic sensors installed above a crossing to detect any obstructions beneath (Sato et al., 1998).

Along with advances in computer capabilities, the technologies of image processing have been applied to upgrade the safety of the railway crossings for more than a decade. Many studies that have been done or that are currently in progress are as follows. Video cameras mounted on some locomotives capture the images of crossings directly in front of the trains to look for possible obstacles (Kim and Cohn, 2004; Mockel et al., 2003). Some object detection algorithms for railway environments were developed based on background subtraction techniques (Gonzalez et al., 2008; Wei et al., 2013). A group of Japanese researchers installed cameras at the four corners of a railroad crossing, each of which pointed toward its center, to monitor the passage of vehicles and provide safety at the crossings (Yoda et al., 2006). Some scholars proposed to use real-time crossing-site images to provide the engineers more information (Ku, 2010; Salmane et al., 2013).

In order to warn locomotive engineers of emergency situations at upcoming crossings and avoid possible imminent accidents, this study conducted preliminary research to develop an obstacle detection system for railway crossings. By assembling models of a locomotive, cars, railways and crossing pavements; setting up an image grabbing device and coding an image processing program and interface; the authors studied the feasibility of this program to recognize whether a crossing is clear of obstacles through the processing of CCD images. The achievement should be an important basis for developing the future visual safety system.

MATERIALS AND METHODS

Nearly three quarters of the accidents inside railway crossing boundaries are caused by risky driver behavior. Those vehicles that intrude swiftly, trying to get through the crossings, unfortunately often get hit by approaching trains. These accidents happen in the blink of an eye and the engineers on board can do nothing about them; this kind of incident can only be prevented by the barrier devices at the crossings. On the other side of the coin, more than a quarter of the accidents take place when vehicles inadvertently become stranded or stall at the crossings for reasons such as traffic jams, uneven pavement, malfunction of vehicles and poor driving skills. These dangerous situations are usually identifiable early on and, if they are detected immediately, can be communicated to the engineers soon enough to avoid serious consequences.

This study designs an advanced safety system for obstacle detection by machine vision at railway crossings. The schematic diagram of this system is shown in Fig. 1. A CCD camera installed at a crossing continuously captures images of vehicles passing over the tracks. When a train approaches, the system constantly transmits the live video feed of the crossing to the locomotive’s cab and processes the images to see if there is any obstacle within the crossing boundaries. If such is the case, an alarm signal is promptly sent to draw the engineer’s attention. Therefore, the engineer can initiate appropriate measures according to the monitored scene of the crossing ahead.

Figure 2 shows the flow chart of the safety system. When the train enters the operating zone (triggers a reed switch in this case), the system begins to acquire and broadcast the images of the crossing. At the same time, the images are being processed to detect obstacles. If any are present, a warning signal is transmitted immediately to the locomotive. The system is turned off when the train exits the operating zone by triggering a second reed switch located near the crossing where the engineer has clear visibility of the scene. The cycle starts over when the next train enters the operating zone again.

Fig. 1: Schematic diagram of the obstacle detection system at a railway crossing

Fig. 2: Flow chart of the obstacle detection system at a railway crossing

The authors constructed a miniature railway crossing scaled at 1:22.5 and installed a grayscale 1394 CCD camera (AVT Guppy F146B) to capture the crossing. The frame rate is 30 Hz and each frame has a size of 640×480 pixels. The camera is wired to an NI PCI-8252 1394 interface card plugged into a computer with an Intel Core 2 Duo E8300 CPU (2833 MHz). The operating system is Microsoft Windows XP SP2 and the software is coded in LabVIEW 7.1.

It is noted that the camera is fixed and the scene is stationary. The image processing procedures introduced in the following subsections are therefore to be understood as detecting the model cars in the miniature crossing.

Image subtraction: Each pixel of the acquired grayscale image has an 8-bit intensity. A railway crossing image clear of any obstacles is initially saved as a reference image. In order to detect moving targets, the software subtracts the reference image from each monitored image.

Those pixels that possess higher values are candidates for the targets. These clustered pixels might belong to the model cars, which are indeed the obstacles of interest, or to other smaller, unimportant objects that do not impede safety. Some other pixels in the image also have higher values due to flickers in the lighting conditions; they are scattered all over the image and can be categorized as noise, to be filtered out by the morphological operations discussed later. Inevitably, some pixels that actually belong to the moving targets have smaller values because their grayscale is similar to the background. They are recovered by morphology as well.

Double threshold binarization: The grayscale values of all pixels after image subtraction contain many details that are trivial to the process and a burden on the software computations. Binarization is used to reduce the size of the managed data and to distinguish the pixels with values higher than a threshold from those with lower values. In order to identify the obstacles, the software needs to discard those pixels whose values do not change abruptly; a smaller threshold t1 can be chosen to do this job. On the other hand, the pixels with much higher values (close to 255) are actually noise. Therefore, the software uses a double threshold binarization to separate all pixels into two groups as follows:

T = 0, if t1 ≤ i ≤ t2; T = 255, otherwise    (1)

where, T is the binarized value, i is the original grayscale value and t2 is the larger threshold that can eliminate a portion of the noise. After binarization, the pixels with a value of 0 are regarded as the foreground while the others are the background. Figure 3 shows the foreground pixels extracted through image subtraction and double threshold binarization from two grayscale images. The pretests suggested that t1 = 30 and t2 = 230 would let the extracted foreground object resemble the target closely while keeping the scattered noise neither numerous nor large.
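The subtraction and double-threshold steps above can be sketched in a few lines. The authors' implementation is in LabVIEW; the following is only an illustrative NumPy equivalent, with the function name and array shapes chosen for this example:

```python
import numpy as np

def binarize_difference(frame, reference, t1=30, t2=230):
    """Subtract the reference image from a monitored frame and apply
    double-threshold binarization: pixels whose absolute difference lies
    between t1 and t2 become foreground (0), all others background (255)."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    foreground = (diff >= t1) & (diff <= t2)
    return np.where(foreground, 0, 255).astype(np.uint8)
```

A pixel differing from the reference by less than t1 (a static background detail) or by more than t2 (a saturated flicker) is classified as background; only moderate, abrupt changes survive as foreground.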

Morphological transformation: A particular binarized image consists of most of the target pixels and many isolated noise spots. To filter out noise, people often use morphological transformations such as erosion, dilation, opening, closing and so forth (Gonzalez and Woods, 2007). Since almost all noise spots are only one or two pixels wide, one or two erosions can wipe them off the binarized image. This step will, however, erode the target as well, causing it to lose some of its foreground area.

Fig. 3(a-d):
Result images of image subtraction and double binarization, (a) Original image without target, (b) Original image with target, (c) Image after subtraction of original image with target from original image without target and (d) Binarized image after subtraction of original image with target from original image without target with [t1,t2] = [30, 230]

Fig. 4(a-b):
Result images of erosion and dilation, (a) Image after eroding Fig. 3d twice and (b) Image after dilating Fig. 3d twice

In order to maintain the size of the foreground, several dilations can make the target regain its weight and keep its integrity. Figure 4 shows the examples of erosion and dilation on the noises and foreground object. It is obvious that after dilation the target is almost the same size and more connected.
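Erosion and dilation with a 3×3 structuring element can be sketched directly in NumPy; this is an illustrative re-implementation, not the IMAQ functions the authors actually used:

```python
import numpy as np

def _neighborhoods(mask):
    """Stack the mask with its eight one-pixel shifts (the 3x3
    neighbourhood of every pixel), padding the borders with False."""
    padded = np.pad(mask, 1, constant_values=False)
    h, w = mask.shape
    return np.stack([padded[r:r + h, c:c + w]
                     for r in range(3) for c in range(3)])

def erode(mask, iterations=1):
    """Keep a pixel only if its entire 3x3 neighbourhood is foreground;
    isolated one- or two-pixel noise spots disappear."""
    for _ in range(iterations):
        mask = _neighborhoods(mask).all(axis=0)
    return mask

def dilate(mask, iterations=1):
    """Set a pixel if any pixel in its 3x3 neighbourhood is foreground;
    this lets an eroded target regain its size and fill small holes."""
    for _ in range(iterations):
        mask = _neighborhoods(mask).any(axis=0)
    return mask
```

One erosion removes single-pixel noise while shrinking a solid target by one pixel on each side; a following dilation restores the target's extent, which is exactly the behavior described for Fig. 4.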

Segmentation and tracking: After identifying the foreground pixels in an image, the next step is to group them into ‘particles’ by the function called connectivity (National Instruments Corporation, 2005). Those pixels that are adjacent to each other are included in the same particle. Each particle can be assigned the coordinates of all of its pixels by another function called particle analysis. Knowing these coordinates, the area and centroid of a particle can further be determined. Once a threshold for the particle area is set, smaller particles are neglected and only the bigger particles remain. For each remaining particle a tracking frame is drawn, according to the coordinates of its topmost, bottommost, leftmost and rightmost pixels, to follow this specified particle in real time.

Fig. 5: Segmentation and tracking of two model cars in the miniature railway crossing

Figure 5 demonstrates an example of how to segment the foreground into two particles (model cars) and to trace them with separate tracking frames. Both centroids are marked and can be traced for their positions and velocities.

According to the authors’ experience, a single erosion is good enough to get rid of almost all noise and the rest can be filtered out by area thresholding. Moreover, four dilations let all model cars regain their integrity by filling up the seams and holes in the binarized images. However, there is a catch: after too many dilations, each particle swells to a certain extent and the tracking frames of both particles appear larger than the actual sizes of both cars. This is because inconsistent lighting in the environment and shadows, together with dilations, influence the particle size. Therefore, there is always a certain amount of discrepancy in detecting a moving obstacle entering the railway crossing.
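The connectivity and particle-analysis steps correspond to connected-component labeling. A minimal Python sketch, standing in for the IMAQ functions the authors used, groups 8-connected foreground pixels, drops particles below an area threshold and reports each particle's area, centroid and tracking-frame bounding box:

```python
from collections import deque
import numpy as np

def label_particles(mask, min_area=0):
    """Group 8-connected foreground pixels into particles; return area,
    centroid and bounding box (top, bottom, left, right) for each
    particle with at least min_area pixels."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    particles = []
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                # Breadth-first flood fill over the 8-neighbourhood.
                queue, pixels = deque([(i, j)]), []
                seen[i, j] = True
                while queue:
                    r, c = queue.popleft()
                    pixels.append((r, c))
                    for dr in (-1, 0, 1):
                        for dc in (-1, 0, 1):
                            rr, cc = r + dr, c + dc
                            if (0 <= rr < h and 0 <= cc < w
                                    and mask[rr, cc] and not seen[rr, cc]):
                                seen[rr, cc] = True
                                queue.append((rr, cc))
                if len(pixels) >= min_area:   # area thresholding
                    rows = [p[0] for p in pixels]
                    cols = [p[1] for p in pixels]
                    particles.append({
                        "area": len(pixels),
                        "centroid": (sum(rows) / len(rows),
                                     sum(cols) / len(cols)),
                        "bbox": (min(rows), max(rows), min(cols), max(cols)),
                    })
    return particles
```

The bounding box is exactly the tracking frame of the text: the topmost, bottommost, leftmost and rightmost pixel coordinates of the particle.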

Alert and alarm zones: There are several types of railway crossings. Some are equipped with only signs, some with activated signals and others with gates or barriers. Vehicles might stop or get stranded at any place within a crossing. These conditions need to be communicated to the engineer of the approaching train. The authors, therefore, designed an alert zone and an alarm zone on the monitored image; both are rectangular and can be adjusted directly on the control panel. The alert zone usually comprises most of the crossing area. If obstacles are detected inside this zone, an alert signal and a moderate siren are transmitted. The alarm zone, however, must be close to the rails; if vehicles stay inside this zone, they are bound to collide with the train. When the software determines this condition, a blaring alarm siren is immediately issued to prompt the engineer to respond immediately. Figure 6 is an example of a crossing image in which both alert and alarm zones are defined.

Fig. 6:
Alert and alarm zones in the image of the miniature railway crossing
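The zone logic reduces to rectangle-overlap tests between each particle's tracking frame and the two configured zones. A small sketch (function names and coordinate convention are this example's, not the authors'; rectangles are given as top, bottom, left, right in pixel coordinates):

```python
def intersects(bbox, zone):
    """True if a tracking frame (top, bottom, left, right) overlaps a
    rectangular zone given in the same format."""
    t, b, l, r = bbox
    zt, zb, zl, zr = zone
    return t <= zb and b >= zt and l <= zr and r >= zl

def warning_level(bboxes, alert_zone, alarm_zone):
    """Return 'alarm' if any particle touches the alarm zone (near the
    rails), 'alert' if any touches the wider alert zone, else 'clear'.
    The alarm check comes first since the alarm zone lies inside the
    alert zone."""
    if any(intersects(b, alarm_zone) for b in bboxes):
        return "alarm"
    if any(intersects(b, alert_zone) for b in bboxes):
        return "alert"
    return "clear"
```

The three return values map directly onto the green, yellow and red states of the warning indicator described in the next subsection.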

Developed software: The graphical user interface is coded in LabVIEW 7.1, as shown in Fig. 7, to implement the procedures and techniques discussed above. In the monitored image of the control panel, the alert and alarm zones are displayed as visible rectangles whose sizes can be adjusted by entering their horizontal and vertical line numbers. The double thresholds for binarization are tuned using scroll bars. The scaled warning indicator is visualized with three colors: Green for clear of obstacles, yellow for obstacles in the alert zone and red for obstacles in the alarm zone. One of two siren sound files with different tones and intensities is played, depending on whether obstacles are detected in the alert or alarm zone, respectively. At the moment shown in the figure, there are two model cars being simultaneously tracked and successfully triggering the alarm siren, because they are both inside the alarm zone.

Experiment on the system: In the indoor environment the artificial illumination on the miniature railway crossing cannot exceed 500 lux. Varying the illumination makes a segmented particle change its size. Besides, the color of a moving obstacle might be another factor that affects the integrity of the particle. The authors conducted an experiment to test how effective this software would be in detecting moving model cars passing through the miniature crossing. In the meantime, this experiment also investigated how the illumination and the color of the model cars influence the results.

The experiment is designed to calculate the accuracy rate of the software in its ability to detect the model cars. There are three model cars with different colors: red, yellow and black. The illuminance starts at 100 lux, increases to 300 lux and ends at 500 lux. At each level of illuminance, the software processes all of the model cars which, one at a time, roll through the crossing at a constant speed and trigger, without loss of generality, the alert signal only.

Fig. 7: Graphical user interface for obstacle detection in the miniature railway crossing

Fig. 8(a-c):
Experiment images on the yellow model car which was (a) Entering the alert zone, (b) Triggering the alarm and (c) Departing from the alert zone

The Accuracy Rate (AR) is defined as the percentage of the following fraction:

AR = (Tco-Tci)/(Tfo-Tfi)×100%    (2)

where, Tci is the time when the model car enters the alert zone, Tco is the time when the model car exits the alert zone. Similarly, Tfi is the time when the tracking frame enters the alert zone, Tfo is the time when the tracking frame exits the alert zone. Figure 8 shows the experiment images on the yellow model car at 300 lux.

Since the size of the segmented particle is always larger than the actual model car, this rate is always less than 100%. It is noted that AR is an index of how closely the extracted particle resembles the corresponding model car. The experiment tests each model car twenty times under the controlled illumination.
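The AR computation is a straightforward ratio of the two dwell intervals; a sketch with hypothetical timestamps in seconds:

```python
def accuracy_rate(t_ci, t_co, t_fi, t_fo):
    """Accuracy Rate (%) per Eq. 2: the time the model car spends in the
    alert zone (Tco - Tci) divided by the time its tracking frame does
    (Tfo - Tfi). Because the dilated particle is larger than the car,
    the frame enters earlier and exits later, so AR stays below 100%."""
    return 100.0 * (t_co - t_ci) / (t_fo - t_fi)
```

For instance, a car that occupies the alert zone for 2.0 sec while its tracking frame occupies it for 2.2 sec scores an AR of about 90.9%.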

RESULTS AND DISCUSSION

The accuracy rates for all model cars under the various illuminance levels are summarized in Table 1. There are nine treatment groups formed by each combination of three levels of illuminance and three colors. The total average alert accuracy rate was 97.8%. Using two-way ANOVA to test the differences of the average accuracy rates at three levels of illuminance with three differently colored model cars, the results showed that there was no interaction effect between the color of the cars and the level of illuminance (F = 1.23, p = 0.30).

Fig. 9:
Association between levels of illuminance and colors of model cars, where, 1 = Red, 2 = Yellow and 3 = Black

Table 1: Accuracy rates of the software in detecting the moving model cars with the specific colors to trigger the alert mode under the controlled illuminance

For the main effect of individual factors, the level of illuminance had a significant result (F = 23.11, p<0.0001) but not the color of the cars (F = 0.58, p = 0.56).

The statistical information indicates that the level of illuminance was a significant factor in the average alert accuracy rate of detecting the model cars, whereas the color of the model cars was not. Moreover, there was no interaction between these two factors; they did not affect each other in any treatment group of the average accuracy rates.

The association between the model car colors and the illuminance at the crossing is shown in Fig. 9. Based on Scheffe’s test, a post hoc comparison in ANOVA statistics, accuracy rates at 100, 300 and 500 lux were compared to each other. At the level of 100 lux, the image was murky and the average accuracy rate was significantly lower than at either 300 or 500 lux. However, comparing the accuracy rates at 300 and 500 lux, there was no difference between these two levels of illuminance.

It is noted that this study utilized a miniature railway crossing to simulate a real one and the indoor illumination was not comparable to the outdoor lighting conditions of other image-processing studies (Gonzalez et al., 2008; Wei et al., 2013; Salmane et al., 2013). Moreover, the experiment performed in this study measured the accuracy rate of the alarm triggering under varying illumination and model car colors, which is not directly analogous to the previous studies, especially the results of Salmane et al. (2013).

CONCLUSION

In this study, the development of a safety system at a miniature railway crossing by machine vision and the associated image processing techniques were introduced. The software demonstrated the capability of detecting moving obstacles in a defined area and determining whether the situation was in a safe, an alert or an alarm mode. In the experiments on the system, the authors used three model cars of different colors at three levels of illuminance to calculate the accuracy rates of detecting obstacles. The statistical analyses showed that the accuracy of recognizing an obstacle in the alert zone is affected by the illumination condition but not by the color of the model car. These results point to a very promising direction in developing the future safety system. Once the system is completely developed and fully operational, the safety of the crew and passengers on trains, as well as the road users, will be further assured.

REFERENCES

  • Cunliffe, J.P., 1999. Rail/highway safety: The advent of high-speed rail system. Prof. Saf.: J. Am. Soc. Saf. Eng., 44: 24-26.
    Direct Link    


  • Gonzalez, J.F.V., J.L.L. Galilea, M.M. Quintas and C.A.L. Vazquez, 2008. Sensor for object detection in railway environment. Sens. Lett., 6: 690-698.
    CrossRef    Direct Link    


  • Gonzalez, R.C. and R.E. Woods, 2007. Digital Image Processing. 3rd Edn., Prentice Hall, New Jersey, USA., ISBN-13: 978-0131687288, Pages: 976.


  • Kim, G., J. Baek, H. Jo, K. Lee and J. Lee, 2012. Design of safety equipment for railroad level crossings using laser range finder. Proceedings of the 9th International Conference on Fuzzy Systems and Knowledge Discovery, May 29-31, 2012, Sichuan, China, pp: 2909-2913.


  • Kim, Z. and T.E. Cohn, 2004. Pseudoreal-time activity detection for railroad grade-crossing safety. IEEE Trans. Int. Trans. Syst., 5: 319-324.
    CrossRef    


  • Ku, B.Y., 2010. Grade-crossing safety. IEEE Veh. Technol. Mag., 5: 75-81.
    CrossRef    


  • Lei, Y. and H. Xiao, 2011. Research on alarm system of railway crossing based on GPS and GPRS. Proceedings of the International Conference on Remote Sensing, Environment and Transportation Engineering, June 24-26, 2011, Nanjing, China, pp: 3374-3376.


  • Lohmeier, S.P., R. Rajaraman and V.C. Ramasami, 2002. An ultra-wideband radar for vehicle detection in railroad crossings. Proceedings of the IEEE Conference on Sensors, Volume 2, June 12-14, 2002, Orlando, Florida, USA., pp: 1762-1766.


  • Mockel, S., F. Scherer and P.F. Schuster, 2003. Multi-sensor obstacle detection on railway tracks. Proceedings of the IEEE Intelligent Vehicles Symposium, June 9-11, 2003, Columbus, OH., USA., pp: 42-46.


  • Narayanan, A.H., P. Brennan, R. Benjamin, N. Mazzino, G. Bochetti and A. Lancia, 2011. Railway level crossing obstruction detection using MIMO radar. Proceedings of the European Radar Conference, October 12-14, 2011, Manchester, UK., pp: 57-60.


  • National Instruments Corporation, 2005. IMAQ vision concepts manual. January 2005 Edition, Part No. 372916D-01, National Instruments Corporation, Austin, TX., USA. http://www.ni.com/pdf/manuals/372916d.pdf.


  • Oorni, R., 2014. Reliability of an in-vehicle warning system for railway level crossings-a user-oriented analysis. IET Intell. Transport Syst., 8: 9-20.
    CrossRef    Direct Link    


  • Salmane, H., L. Khoudour and Y. Ruichek, 2013. Improving safety of level crossings by detecting hazard situations using video based processing. Proceedings of the IEEE International Conference on Intelligent Rail Transportation, August 30-September 1, 2013, Beijing, China, pp: 179-184.


  • Sato, K., H. Arai, T. Shimizu and M. Takada, 1998. Obstruction detector using ultrasonic sensors for upgrading the safety of a level crossing. Proceedings of the International Conference on Developments in Mass Transit Systems, April 20-23, 1998, London, UK., pp: 190-195.


  • Silar, Z. and M. Dobrovolny, 2013. Utilization of directional properties of optical flow for railway crossing occupancy monitoring. Proceedings of the International Conference on IT Convergence and Security, December 16-18, 2013, Macao, China, pp: 1-4.


  • Takeda, T., 1999. Improvement of railroad crossing signals. Proceedings of the IEEE/IEEJ/JSAI International Conference on Intelligent Transportation Systems, October 5-8, 1999, Tokyo, Japan, pp: 139-141.


  • Watanabe, M., K. Okazaki, J. Fukae, N. Tamiya, N. Ueda and M. Nagashima, 2002. An obstacle sensing radar system for a railway crossing application: A 60 GHz millimeter wave spread spectrum radar. Proceedings of the IEEE MTT-S International Conference on Microwave Symposium Digest, Volume 2, June 2-7, 2002, Seattle, WA., USA., pp: 791-794.


  • Wei, C.P., Y.M. Huang, Y.C.F. Wang and M.Y. Shih, 2013. Background recovery in railroad crossing videos via incremental low-rank matrix decomposition. Proceedings of the 2nd Asian Conference on Pattern Recognition, November 5-8, 2013, Naha, Japan, pp: 702-706.


  • Yoda, I., K. Sakaue and D. Hosotani, 2006. Multi-point stereo camera system for controlling safety at railroad crossings. Proceedings of the IEEE International Conference on Computer Vision Systems, January 4-7, 2006, New York, USA., pp: 51.


  • Zhang, Z.G., X.F. Li and Y.L. Gan, 2014. Railway crossing intelligent safety alarming system design based on PLC. Proceedings of the 26th Chinese Control and Decision Conference, May 31-June 2, 2014, Changsha, China, pp: 5045-5048.
