
Information Technology Journal

Year: 2010 | Volume: 9 | Issue: 5 | Page No.: 1022-1030
DOI: 10.3923/itj.2010.1022.1030
Road Detection and Corner Extraction Using High Definition Lidar
Xia Yuan, Chun-Xia Zhao and Hao-Feng Zhang

Abstract: In this study, we propose an algorithm to detect roads and find landmarks (corners) in 3-D point clouds. Point cloud classification is a common approach for finding roads or specific targets, but it is usually time-consuming, especially since random field theory was introduced into this research area in recent years. The proposed algorithm is designed for fast preprocessing and its parameters need no online learning. It employs a fuzzy clustering method based on maximum entropy theory to segment points, and then a multi-pass weighted least-squares linear fitting algorithm to differentiate linearly and nonlinearly distributed point segments. We extract the road surface instead of the road boundary, and filters are designed to find the road area. A corner fitting method finds corners of buildings as landmarks according to the different distributions of points. The spatial dependences among different laser detectors are considered to refine the extracted features. Experimental results validate the algorithm: it successfully extracts roads and corners of buildings from point clouds sampled in a complex semi-structured environment.


How to cite this article
Xia Yuan, Chun-Xia Zhao and Hao-Feng Zhang, 2010. Road Detection and Corner Extraction Using High Definition Lidar. Information Technology Journal, 9: 1022-1030.

Keywords: segmentation, ground robot, feature extraction, lidar, point cloud

INTRODUCTION

Navigating in urban areas is still challenging for a ground robot, as it has to deal with complex environments including buildings, trees, cars, pedestrians, etc. (Bailey, 2002; Castellanos et al., 1999; Dissanayake et al., 2000; Lu, 1995). Global localization technologies such as the Global Positioning System (GPS) sometimes work badly because of poor satellite signal reception. To move safely, without collision or kidnapping, the robot itself has to be sure of two things. First, it should know where it is safe to go, i.e., where the road is. Second, it needs to know where it is now.

Researchers have studied how to use cameras and lidar to solve the two problems mentioned above. Variation of illumination, complicated texture and shadow make image processing very difficult in urban areas (Zhang et al., 2008), so in recent years robots have usually used lidar to find the road and landmarks (Nieto et al., 2007; Wellington et al., 2006). There are commonly two types of lidar data: data from a single-row range finder and point clouds collected by a 3D laser scanner. A range finder is usually applied for collision avoidance, as the data it collects are sparse and easy to process in real time (Yu et al., 2008). A point cloud is dense and contains more information about the environment, but dealing with such dense data is time-consuming, and applying it in robot navigation is a big challenge.

There are two main methods in the point cloud understanding field. One is to resample the data into small grids and analyze every grid to show the robot where it is safe enough to pass (Wellington et al., 2006; Iagnemma et al., 2002). This can ensure the safety of the robot and is useful for local path planning algorithms, but it treats every grid equally and ignores the dependence between neighboring grids, so it does not satisfy special requirements such as recognizing particular objects or finding landmarks. The other common method is to cluster points first and then extract features of each cluster (Munoz et al., 2008; Kim et al., 2008). This is close to the way humans understand the world, so it works well for differentiating objects and focusing on salient areas in a scene. Unfortunately, this method is always time-consuming and requires a dense point cloud, which was not easy to collect in the past. The two methods usually work together in actual systems (Urmson et al., 2008). Researchers pay much attention to 3D point clouds because they can be acquired more easily than ever with the development of laser scanners and positioning systems.

Hidden Markov Models (HMM) (Wellington et al., 2006), Associated Markov Networks (AMN) (Triebel et al., 2006, 2007; Munoz et al., 2008) and Conditional Random Fields (CRF) (Lim and Suter, 2007) have been used to classify point clouds in recent years. These methods get better results than conventional ones that do not consider the dependency among points (Vandapel et al., 2004).

In this study, we use a new type of lidar, the HDL-64E, which has 64 laser detectors, each working independently. This working mode makes it convenient for the proposed method to analyze scan lines one by one instead of the whole point cloud at one time. We use a fuzzy clustering method based on maximum entropy theory to cluster lidar points, and then a weighted linear fitting algorithm is applied to find linearly distributed areas and corner areas. The proposed algorithm integrates features extracted from different detectors to find the road area and extract useful corners as landmarks.

OVERVIEW OF THE ALGORITHM

High definition lidar: The high definition lidar (HDL-64E) is designed for autonomous navigation. Traditional lidar sensors use a single laser detector firing into a mechanically actuated mirror, providing only one plane of view. The HDL-64E's one-piece design uses 64 fixed-mounted laser detectors to measure the surrounding environment, each mounted at a specific vertical angle. This approach dramatically increases reliability, field of view and point cloud density. We get a point cloud after the head of the HDL-64E rotates 360 degrees, and meanwhile each detector samples one scan line.

Approaches of the algorithm: We extract features from each scan line first and then strengthen the confidence of these features according to the dependency among the 64 detectors' scan lines. In this way, the algorithm works faster than classifying the whole point cloud first and then analyzing every class to find roads and landmarks.

Another characteristic of this method is that we find the road surface area instead of the road boundary. We believe that in many cases no road boundary can be reliably detected at the lidar's resolution, but it is possible to use a clustering algorithm to group together the points sampled from the road surface. Present experiments indicate that finding the road area is more useful than finding the road boundary in semi-structured environments.

For each scan line of a detector, we segment points and fit lines first. Then we find road segments and corner segments according to some rules and the spatial dependency of these features.

SEGMENTATION AND LINEAR FITTING

Coordinate system: The coordinate system of the robot is shown in Fig. 1.

Fig. 1: The coordinate system

Segmentation of points: We employ a fuzzy clustering method based on maximum entropy theory to cluster lidar points (Yuan et al., 2008; Liu and Meng, 2004). Assume the threshold of prediction error is δ, the next predicted position of a lidar point from the same laser detector is P̂_{i+1} and the real measured position is P_{i+1}. The 3-D Euclidean distance between P̂_{i+1} and P_{i+1} is:

$e_i = \|\hat{P}_{i+1} - P_{i+1}\| \qquad (1)$

A point is considered the start of another segment if e_i > δ; such a point is a critical point between different segments. The parameter δ should be determined according to the distance between the lidar and the target, since points collected by the lidar are asymmetrically distributed in space. We use Eq. 2 to settle this problem:

$\delta_i = 2 a \, d_i \sin\theta \qquad (2)$

where d_i is the distance between the lidar and point P_i. Parameter a in Eq. 2 is a scale factor to counteract noise and θ is half of the angular resolution of the HDL-64E.
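As a rough illustration (not the authors' code), this segmentation rule can be sketched in Python. The predicted position is simplified here to the previous measured point, and the values of `a` and `theta` are placeholders:

```python
import math

def segment_scan_line(points, a=1.5, theta=0.0015):
    """Split one detector's scan line into segments at breakpoints.

    A point starts a new segment when its distance to the previous
    point exceeds an adaptive threshold that grows with range
    (a simplified stand-in for the paper's prediction-based test).
    points: list of (x, y, z) tuples ordered by scan angle.
    """
    if not points:
        return []
    segments = [[points[0]]]
    for prev, cur in zip(points, points[1:]):
        e = math.dist(prev, cur)                   # 3-D Euclidean distance, Eq. 1
        r = math.hypot(prev[0], prev[1], prev[2])  # range from the lidar
        delta = a * 2.0 * r * math.sin(theta)      # range-dependent threshold
        if e > delta:
            segments.append([])                    # critical point: new segment
        segments[-1].append(cur)
    return segments
```

For real HDL-64E data, θ would come from the sensor's horizontal angular resolution and the prediction step from the clustering model described below.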

Suppose a laser detector gets N points after rotating 360 degrees and that the system model can be characterized by M cluster centers in an N-dimensional space. Each cluster center can be represented by a vector composed of two parts: the input vector C_i^in and the output vector C_i^out. For an m-dimensional sample, let the first p dimensions correspond to the p input variables that constitute the input vector and the other m-p dimensions correspond to the m-p output variables that form the output vector.

A prediction error exists when predicting the position of the next point, and the accumulated error cannot be ignored in the present algorithm. We therefore include the last prediction error in each input vector:

$X_t = [x_1, x_2, \ldots, x_p, e_{t-1}]$

where the error term takes part in the prediction process too: the last prediction error is subtracted in the next prediction to reduce the effect of accumulated errors. The cluster centers are then formed as follows:

$C_i = [C_i^{in}, C_i^{out}], \quad i = 1, 2, \ldots, M \qquad (3)$

For the data point X_t measured at time t, the probability that it belongs to cluster center C_i can be viewed as its fuzzy membership μi, where μi ∈ [0, 1] and i = 1, 2, ..., M. The summation of all μi is equal to one, i.e.,

$\sum_{i=1}^{M} \mu_i = 1 \qquad (4)$

The clustering process can be formulated as an optimization problem whose cost function, to be minimized, is defined as:

$J = \sum_{i=1}^{M} \mu_i \, \| X_t - C_i^{in} \|^2 \qquad (5)$

where C_i^in is the input vector of the fuzzy centers.

In the present problem the distribution of μi is unknown. According to information theory, the Maximum Entropy Principle (MEP) is the most unbiased prescription for choosing the values of the memberships μi, for which the Shannon entropy, i.e., the expression

$S = -\sum_{i=1}^{M} \mu_i \ln \mu_i \qquad (6)$

is maximal under the constraint of Eq. 4. This optimization problem can be reformulated as the maximization of the Lagrangian

$L = -\sum_{i=1}^{M} \mu_i \ln \mu_i - \frac{1}{\sigma} \sum_{i=1}^{M} \mu_i \| X_t - C_i^{in} \|^2 - \lambda \left( \sum_{i=1}^{M} \mu_i - 1 \right) \qquad (7)$

where λ is a Lagrange multiplier and σ is a temperature-like parameter.

Setting ∂L/∂μi = 0, the solved probabilities satisfy a Gibbs distribution (Rose et al., 1990), i.e.,

$\mu_i = \frac{\exp\left(-\| X_t - C_i^{in} \|^2 / \sigma\right)}{\sum_{j=1}^{M} \exp\left(-\| X_t - C_j^{in} \|^2 / \sigma\right)} \qquad (8)$

Substituting Eq. 8 into Eq. 5, differentiating with respect to λ, and then taking the experiential parameter (Rose, 1998), we get:

(9)

The algorithm uses Eq. 10 to compute the output vector:

$\hat{Y}_t = \sum_{l=1}^{M} w_l \, C_l^{out} \qquad (10)$

where w_l is the weighting term:

$w_l = \frac{\mu_l}{\sum_{j=1}^{M} \mu_j} \qquad (11)$

No parameters have to be learned in this clustering method. After clustering, we do linear fitting to differentiate the different distributions of point segments.
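To make the membership computation concrete, a Gibbs-form weighting of the kind produced by the maximum entropy derivation can be sketched as follows; `sigma`, a temperature-like parameter, is an assumption standing in for the experiential parameter:

```python
import math

def mep_memberships(x, centers, sigma=1.0):
    """Gibbs-form fuzzy memberships: each membership decays
    exponentially with the squared distance to a cluster centre,
    and the set sums to one.  x and every centre are equal-length
    tuples; sigma controls how fuzzy the assignment is."""
    d2 = [sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in centers]
    g = [math.exp(-d / sigma) for d in d2]
    s = sum(g)
    return [v / s for v in g]
```

Small σ makes the assignment nearly hard (winner-take-all); large σ spreads the membership over all centers.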

Linear fitting: The road surface is smooth in urban areas, so the laser points sampled from it are close to linearly distributed. We do linear fitting to find candidate road segments in each laser scan line.

Lidar points always contain many outliers, which affect the linear fitting result. We use a weighted linear fitting algorithm to find linearly distributed segments in this study. The fitting process contains three steps.

Step 1: Execute a regular least-squares linear fitting algorithm on a point segment to get the initial fitting line L1.
Step 2: Do weighted least-squares linear fitting based on L1. Suppose the line function is:

$y = \beta_0 + \beta_1 x \qquad (12)$

and the sum of squares of the least-squares dispersion of its simple linear regression is:

$Q = \sum_{i=1}^{n} \left( y_i - \beta_0 - \beta_1 x_i \right)^2 \qquad (13)$

We can compute (β0, β1) using regression analysis.

The next problem is to compute the weight of a point. Suppose there are two adjacent points P_i and P_{i+1}. The weight of P_i equals the projection of the vector from P_i to P_{i+1} on L1. We normalize the weight w_i of every point that belongs to the same segment using Eq. 14:

$w_i = \frac{\overrightarrow{P_i P_{i+1}} \cdot \mathrm{direction}(L_1)}{\sum_{j} \overrightarrow{P_j P_{j+1}} \cdot \mathrm{direction}(L_1)} \qquad (14)$

The symbol direction(L1) in Eq. 14 denotes the unit direction vector of L1. Weighted least-squares linear fitting is executed after the weight computation, giving fitting line L2.

Step 3: Re-compute the weight of each point; the new weight of P_i equals the projection of the vector from P_i to P_{i+1} on L2. The algorithm then takes the 80% of points whose weights are biggest and executes a third weighted least-squares linear fitting to get L3.

This linear fitting method considers the direction between the fitting line and the line linking two neighboring points. A point gets a big weight if its direction accords with the main distribution direction of the points in a segment.

Present experiments show that the slopes of L1 and L3 are close if the points belong to a linearly distributed segment; otherwise the difference is bigger than a threshold θline.
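The three fitting steps can be sketched roughly as follows. The cosine-style weight (projection of each neighbouring step onto the current line direction, normalized by step length) and the exact 80% cut are interpretations of the description above, not the authors' code:

```python
import math

def fit_line(pts, w=None):
    """(Weighted) least-squares fit of y = b0 + b1*x; returns (b0, b1)."""
    if w is None:
        w = [1.0] * len(pts)
    sw = sum(w)
    xm = sum(wi * x for wi, (x, _) in zip(w, pts)) / sw
    ym = sum(wi * y for wi, (_, y) in zip(w, pts)) / sw
    num = sum(wi * (x - xm) * (y - ym) for wi, (x, y) in zip(w, pts))
    den = sum(wi * (x - xm) ** 2 for wi, (x, _) in zip(w, pts))
    b1 = num / den
    return ym - b1 * xm, b1

def direction_weights(pts, slope):
    """Weight each point by how closely the step to its neighbour
    follows the fitted line direction (cosine of the angle between)."""
    ux, uy = 1.0, slope
    n = math.hypot(ux, uy)
    ux, uy = ux / n, uy / n
    w = []
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        dx, dy = x1 - x0, y1 - y0
        step = math.hypot(dx, dy) or 1.0
        w.append(abs(dx * ux + dy * uy) / step)
    w.append(w[-1])                 # last point reuses its neighbour's weight
    s = sum(w)
    return [wi / s for wi in w]

def three_step_fit(pts, keep=0.8):
    """Return the slopes of the initial fit L1 and the final fit L3."""
    _, s1 = fit_line(pts)                                # step 1: plain fit
    _, s2 = fit_line(pts, direction_weights(pts, s1))    # step 2: weighted fit
    w = direction_weights(pts, s2)
    top = sorted(range(len(pts)), key=lambda i: -w[i])[:max(2, int(keep * len(pts)))]
    top.sort()
    _, s3 = fit_line([pts[i] for i in top], [w[i] for i in top])  # step 3: refit best 80%
    return s1, s3
```

Comparing s1 and s3 against a slope threshold then classifies the segment as linearly or nonlinearly distributed.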

ROAD DETECTION AND CORNER EXTRACTION

The linearly distributed segments are considered candidate road surface areas. We decide which candidate segments are real road surface according to filters designed here. Corners of buildings are extracted from the nonlinearly distributed segments. The dependence among adjacent scan lines collected by the 64 laser detectors is considered.

Road detection: Many linearly distributed segments obviously do not belong to the road, because points sampled from any regular surface are linearly distributed. We have to design some filters to find the real road surface.

It is easy to know the yaw, roll and pitch of the robot by using a gyro. We transform the point cloud P to the robot coordinate system O shown in Fig. 1 using Eq. 15:

$P' = R_z(\mathrm{yaw}) \, R_y(\mathrm{pitch}) \, R_x(\mathrm{roll}) \, P \qquad (15)$
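A minimal sketch of this attitude correction, assuming a Z-Y-X (yaw-pitch-roll) rotation order, which the paper's exact convention may or may not match:

```python
import math

def to_robot_frame(p, yaw, pitch, roll):
    """Rotate one lidar point into the level robot frame using gyro
    attitude angles (radians).  Order assumed: roll about X, then
    pitch about Y, then yaw about Z."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    x, y, z = p
    y, z = cr * y - sr * z, sr * y + cr * z      # roll about X
    x, z = cp * x + sp * z, -sp * x + cp * z     # pitch about Y
    x, y = cy * x - sy * y, sy * x + cy * y      # yaw about Z
    return x, y, z
```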

The road surface around the robot should be horizontal in O, so we refine candidate road segments by searching for horizontal linearly distributed segments after projecting the points onto the XOZ and YOZ planes, since we have no idea where the road is located around the robot. Suppose the included angle between the horizontal and the fitting line of a linearly distributed segment r_i is β, and the average height of the points in r_i is h̄. The length of r_i is L and the least width through which the robot can pass is W. Then the first three road filters are expressed in Eq. 16:

$F_1: \beta \leq \Delta\beta, \qquad F_2: \bar{h} \leq h, \qquad F_3: L \geq W \qquad (16)$

where Δβ is the threshold of the error from horizontal and h is the highest allowed height of the road surface.

The dependence among adjacent laser scan lines detected by different laser detectors should also be considered. We believe linearly distributed segments sampled from the same road section are almost parallel and do not overlap when projected onto the XOY plane. The fourth road surface filter F4 states that there should be at least K parallel linearly distributed segments if they belong to the same road section.

A linearly distributed point segment r_i satisfying Eq. 17 is chosen as road surface R:

$R = \{\, r_i \mid r_i \ \text{satisfies} \ F_1 \wedge F_2 \wedge F_3 \wedge F_4 \,\} \qquad (17)$
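The four filters combine as a simple conjunction. A sketch with illustrative threshold values (Δβ, h, W and K here are placeholders, not the paper's tuned numbers):

```python
def is_road_segment(beta, mean_h, length, n_parallel,
                    d_beta=0.05, h_max=0.3, w_min=3.0, k_min=3):
    """Apply the four road filters to one linearly distributed segment.

    beta: angle (rad) between the fitted line and the horizontal.
    mean_h: average point height in the robot frame (m).
    length: segment length (m).
    n_parallel: parallel co-road segments found in adjacent scan lines.
    """
    f1 = beta <= d_beta          # near-horizontal
    f2 = mean_h <= h_max         # low enough to be ground
    f3 = length >= w_min         # wide enough for the robot to pass
    f4 = n_parallel >= k_min     # supported by neighbouring detectors
    return f1 and f2 and f3 and f4
```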

Corner fitting and extraction: Corners, especially building corners, can be detected repeatedly in urban areas from different angles of view. These corners are treated as landmarks in this study.

The corner fitting algorithm is applied to a segment if the points belonging to it are nonlinearly distributed. The points are projected onto the XOY plane first. The present corner fitting algorithm is a somewhat degenerate case of a poly-line simplification algorithm: we split the point list into two at the knuckle point, the point farthest from the fitted line. The two lists of points are considered the two sides of a corner, and the included angle of these two sides is α. The location of the corner itself is defined by the knuckle point.

A building corner is commonly a right angle, so we restrict α to satisfy:

$\alpha_1 \leq \alpha \leq \alpha_2 \qquad (18)$

We choose α1 = 75° and α2 = 105° in this study to represent a right angle.

Since we only want to keep big corners located in higher places, which are more likely parts of buildings, we require that the distance between the first and last point of a corner segment be longer than T and that the average height of the points belonging to the corner segment be higher than H.

A building corner must be detectable repeatedly from different angles of view, which means more than one laser detector should find the same corner, and the locations of a corner on the XOY plane detected by different laser detectors will be very close. We prescribe that a corner detected by at least 3 laser detectors at the same time can be treated as a landmark.
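The corner test can be sketched as follows. Splitting at the point farthest from the chord between the segment's endpoints is an assumption standing in for the paper's "point farthest from the fitted line":

```python
import math

def fit_corner(pts, a1=75.0, a2=105.0):
    """Split a nonlinear 2-D point segment at its knuckle point and
    accept it as a corner when the two sides meet at roughly a right
    angle (a1..a2 degrees).  Returns the knuckle point or None."""
    if len(pts) < 3:
        return None
    (x0, y0), (xn, yn) = pts[0], pts[-1]
    cx, cy = xn - x0, yn - y0
    norm = math.hypot(cx, cy) or 1.0
    # knuckle point: farthest from the chord joining the two endpoints
    k = max(range(1, len(pts) - 1),
            key=lambda i: abs(cx * (pts[i][1] - y0) - cy * (pts[i][0] - x0)) / norm)
    kx, ky = pts[k]
    ux, uy = x0 - kx, y0 - ky          # side toward the first point
    vx, vy = xn - kx, yn - ky          # side toward the last point
    cosang = (ux * vx + uy * vy) / (math.hypot(ux, uy) * math.hypot(vx, vy))
    alpha = math.degrees(math.acos(max(-1.0, min(1.0, cosang))))
    return (kx, ky) if a1 <= alpha <= a2 else None
```

The length and height constraints (T, H) and the three-detector agreement rule would then be applied to the surviving corners.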

RESULTS AND DISCUSSION

All test data are collected by our ground robot and we use MATLAB 7 as our simulation platform. As mentioned earlier, detecting the road surface is more useful than detecting the road boundary in some cases. We show two scenes in Fig. 2a and b. Green lines in Fig. 2c and e are results of road boundary detection using the method of Chen et al. (2007). It fails to find the road boundary in these scenes, as the boundary between road and vegetation is irregular. We successfully extract the road surface, shown as red lines in Fig. 2d and f.

The experimental data of Fig. 2 are from only one laser detector. We choose a representative data set from all 64 laser detectors, containing both roads and buildings around a crossroad (Fig. 3). There are also many trees, which are treated as noise as they affect feature extraction. The area covered by Fig. 3 is about 125 m by 125 m.

The result of road extraction can be seen in Fig. 4; red lines are the extracted road area. Since it is a crossroad, roads exist in all four directions. We can see that the farther the points are from the HDL-64E, the sparser they are: this kind of point cloud is anisotropic.

Figure 5 is a corner extraction result from a single laser detector (detector No. 3). The points are shown as black dots and the critical points between different segments as black "x" marks.

Fig. 2: (a-f) Results of road surface detecting

Fig. 3: A scan frame

Fig. 4: Road area extracted from Fig. 3

Fig. 5: Corner extraction from the point list of laser detector No. 3

Fig. 6: Local magnification of Fig. 5. The '*' marks represent knuckle points

Fig. 7: 3-D display of corners (green) in Fig. 5 with road surface (blue)

There are three buildings in Fig. 3. The algorithm finds three corners of buildings in Fig. 5, shown with green lines and red "*" denoting the knuckle points. Figure 6 is a local magnification of the rectangular area in Fig. 5. There are two corners in Fig. 6 because the second floor of this building does not lie on the same plane as the first floor.

Figures 5 and 6 show the result of one scan line collected by laser detector No. 3. Figure 7 shows the result of multiple scan lines in a 3-D display: blue lines are road, and green lines with red "*" knuckle points are corners. We can see that big corners of buildings can be detected by several laser detectors at the same time. This is the biggest advantage of the HDL-64E compared with traditional 3D lidar. The present algorithm holds the advantage of fast computation without learning on the one hand, and on the other hand we have considered the dependency between adjacent scan lines when designing the road and corner filters.

Our algorithm fails to detect the upper-right building in Fig. 4, as there are many trees in front of it and we extract only some fragments of the corner, which fail to pass the corner fitting rules.

CONCLUSIONS AND FUTURE WORK

When a robot navigates in urban areas automatically, it has to find roads and landmarks. We propose this road detection and corner extraction algorithm adapted to the characteristics of the HDL-64E high definition lidar. The algorithm does not have to learn parameters online, which algorithms based on random field theory usually do. It is designed to extract landmarks (corners) and find the road area coarsely but fast.

The algorithm executes fuzzy clustering and weighted least-squares linear fitting to find candidate roads and corners. Then we design filters to refine the candidates. The dependences among the 64 detectors of the HDL-64E are considered when designing the filters.

Traditional algorithms usually find the road boundary, but this is hard to do in complex urban areas. We find the road surface instead of the road boundary and get better results.

In future work, we want to study how to deal with occlusion in urban environments. We will also pay attention to finding more kinds of features that can be treated as landmarks. Choosing landmarks is difficult because a landmark should have a fixed location and be big enough to be detected repeatedly from different angles of view.

ACKNOWLEDGMENT

This study was supported by the National Natural Science Foundation of China under Grant No. 90820306 from 2009 to 2012.

REFERENCES

  • Bailey, T.A., 2002. Mobile robot localization and mapping in extensive outdoor environments. Ph.D. Thesis, Australian Centre for Field Robotics, Department of Aerospace, Mechanical and Mechatronic Engineering, University of Sydney.


  • Castellanos, J.A., J.M.M. Montiel, J. Neira and J.D. Tardos, 1999. Sensor influence in the performance of simultaneous mobile robot localization and map building. Proceedings of the 6th International Symposium Experimental Robotics IV, March 26-28, Sydney, Australia, pp: 287-296.


  • Dissanayake, W.M.G., P. Newman, H.F. Durrant-Whyte, S. Clark and M. Csorba, 2000. An experimental and theoretical investigation into simultaneous localisation and map building. Lecture Notes Control Inform. Sci., 250: 265-274.
    CrossRef    


  • Iagnemma, K., H. Shibly and S. Dubowsky, 2002. On-line terrain parameter estimation for planetary rovers. Proceedings of IEEE International Conference on Robotics and Automation, May 11-15, Washington, DC, USA., pp: 3142-3147.


  • Kim, G., G. Huber and M. Hebert, 2008. Segmentation of salient regions in outdoor scenes using imagery and 3-D data. Proceedings of the IEEE Workshop on Applications of Computer Vision, IEEE Computer Society, Copper Mountain, CO, USA., Jan. 7-9, pp: 1-8.


  • Lim, E.H. and D.S. Suter, 2007. Conditional random field for 3D point clouds with adaptive data reduction. Proceedings of 2007 International Conference on Cyberworlds, Oct. 24-27, Hannover, Germany, pp: 404-408.


  • Liu, P.X. and M.Q.H. Meng, 2004. Online data-driven fuzzy clustering with applications to real-time robotic tracking. IEEE Trans. Fuzzy Syst., 12: 516-523.


  • Lu, F., 1995. Shape registration using optimization for mobile robot navigation. Ph.D. Thesis, Graduate Department of Computer Science, University of Toronto.


  • Munoz, D., N. Vandapel and M. Hebert, 2008. Directional associative Markov network for 3-D point cloud classification. Proceedings of the 4th International Symposium on 3D Data Processing, Visualization and Transmission. June 18-20, Atlanta, GA, USA., pp: 1-8.


  • Nieto, J., T. Bailey and E. Nebot, 2007. Recursive scan-matching slam. Robotics Autonomous Syst., 55: 39-49.
    CrossRef    


  • Rose, K., 1998. Deterministic annealing for clustering, compression, classification, regressions and related optimization problems. Proc. IEEE, 86: 2210-2239.
    CrossRef    


  • Rose, K., E. Gurewitz and G.C. Fox, 1990. Statistical mechanics and phase transitions in clustering. Phys. Rev. Lett., 65: 945-948.
    CrossRef    


  • Triebel, R., K. Kersting and W. Burgard, 2006. Robust 3D scan point classification using associative Markov networks. Proceedings of the IEEE International Conference on Robotics and Automation, May 15-19, Orlando, Florida, USA., pp: 2603-2608.


  • Triebel, R., R. Schmidt, O.M. Mozos and W. Burgard, 2007. Instance-based AMN classification for improved object recognition in 2D and 3D laser range data. Proceedings of the 20th International Joint Conference on Artificial Intelligence, Jan. 6-12, Hyderabad, India, pp: 2225-2230.


  • Urmson, C., J. Anhalt, D. Bagnell, C. Baker and R. Bittner et al., 2008. Autonomous driving in urban environments: Boss and the urban challenge. J. Field Robot., 25: 425-466.
    CrossRef    Direct Link    


  • Vandapel, N., D. Huber, A. Kapuria and M. Hebert, 2004. Natural terrain classification using 3-D ladar data. Proceedings of the IEEE International Conference on Robotics and Automation. April 04, New Orleans, LA, USA., pp: 5117-5122.


  • Wellington, C., A. Courville and A. Stentz, 2006. A generative model of terrain for autonomous navigation in vegetation. Int. J. Robot. Res., 25: 1287-1304.
    CrossRef    Direct Link    


  • Yu, W.Z., H.X. Han, L.X. De, W. Min and Y.H. Cheng, 2008. A simultaneous localization and mapping method based on fast-hough transform. Inform. Technol. J., 7: 190-194.
    CrossRef    Direct Link    


  • Yuan, X., C.X. Zhao, Y.F. Cai, H. Zhang and D.B. Chen, 2008. Road-surface abstraction using ladar sensing. Proceedings of the 10th International Conference on Control, Automation, Robotics and Vision, Dec. 17-20, Hanoi, Vietnam, pp: 1097-1102.


  • Zhang, M., Z. Shang and J. Shen, 2008. Research on the image segmentation necessity for regions-based image processing. Inform. Technol. J., 7: 269-276.
    CrossRef    Direct Link    


  • Chen, D.B., C.X. Zhao and H.F. Zhang, 2007. Quick road-boundary detection based on 2D laser rangefinder. J. Image Graphics, 12: 1604-1609.
