
Information Technology Journal

Year: 2011 | Volume: 10 | Issue: 11 | Page No.: 2175-2181
DOI: 10.3923/itj.2011.2175.2181
A New Location Method for LBS Based on Image
Xu Xinchao, Xu Aigong, Su Lijuan and Xu Yantian

Abstract: To broaden the range of LBS users, we propose a new image-based location method for LBS. First, image mosaicking is carried out with the SIFT operator under VC. Then, combined with the extraction of control points in the image and their corresponding points in the topographic map, location is completed by two algorithms: the iterative method and the direct method of space resection. The algorithms are implemented for different images and their performance is analyzed. The experimental results show that the accuracy of the algorithms satisfies the requirements of LBS. The image-based location algorithms are therefore applicable to LBS and can greatly increase the output value of LBS.


How to cite this article
Xu Xinchao, Xu Aigong, Su Lijuan and Xu Yantian, 2011. A New Location Method for LBS Based on Image. Information Technology Journal, 10: 2175-2181.

Keywords: image, location, space resection, LBS, SIFT and image mosaic

INTRODUCTION

Location Based Service (LBS) refers to obtaining the actual location of a user through a wireless network with a positioning terminal and using this information to provide a range of value-added services (Qin et al., 2009). There are many LBS businesses at present, such as Google Buzz, Foursquare, Twitter, PleaseRobMe, FourWhere, Layar, Loopt, Brightkite, Gowalla and so on.

An LBS system is divided into three parts: mobile terminals, wireless communication and the service center. There are many popular methods of mobile terminal location in LBS: GPS location, network-based CELL-ID, TOA/TDOA technology, terminal-based E-OTD and A-GPS combining network and terminal (Zeng, 2004). The positioning accuracy varies from a few centimeters to several hundred meters depending on the technique (Ming-Cai and Cheng-Kuan, 2009). GPS location has the highest accuracy, but it requires the mobile terminal to be equipped with GPS and it may lose accuracy in urban areas, especially between tall buildings. All location methods except CELL-ID require modification of the user's mobile terminal, so the majority of ordinary users would lose the opportunity to enjoy these services and LBS operators would lose this huge market. With the development of hardware technology, the camera has become a basic component of most cell phones. This provides an opportunity to make image-based location for LBS a reality.

IMAGE-BASED LOCATION OF LBS

With the development of hardware technology, the resolution of phone cameras has become higher, communication costs have fallen and data transmission has become faster (Deren and Xin, 2009). These developments lay the foundation for an image-based location algorithm in LBS. CELL-ID was the only traditional LBS location algorithm that did not require any modification of the mobile terminal, but its accuracy is rather poor: about 200 m in urban areas and lower in suburbs.

The only restriction of image-based location in LBS is that the terminal must have a camera. When a user wants a service, all he needs to do is photograph the nearest landmark or other obvious feature with his mobile terminal camera and send the collected images to the service center. The service center first obtains the user's approximate position using CELL-ID and retrieves the high-precision topographic map of the surrounding area from the system. It then extracts the information needed for high-precision location, such as the images, camera focal length, image resolution, etc. The operator picks significant points in the image as control points, such as building corners or other distinct features, and obtains their pixel coordinates in the image and their geodetic coordinates in the topographic map. The precise location is then calculated through a space resection algorithm, so as to provide high quality services to the user. Figure 1 shows the structure of image-based location.
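The workflow above can be outlined in code. The sketch below only illustrates the service-center pipeline; every helper function (get_cell_id_position, load_topographic_map, extract_control_points, space_resection) is a hypothetical placeholder introduced here for illustration, not an API from the paper.

```python
# Hypothetical outline of the service-center workflow described above;
# all helper functions are placeholders, not a real implementation.
def locate_user(request):
    # 1. Coarse position from CELL-ID (roughly 200 m in urban areas)
    rough_xy = get_cell_id_position(request.cell_id)

    # 2. High-precision topographic map around the rough position
    topo_map = load_topographic_map(rough_xy)

    # 3. Data needed for high-precision location from the user's request
    image, focal_length = request.image, request.camera_focal_length

    # 4. Control points visible both in the image and in the topographic map
    image_pts, ground_pts = extract_control_points(image, topo_map)

    # 5. Space resection yields the precise shooting position
    return space_resection(image_pts, ground_pts, focal_length)
```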

KEY TECHNOLOGIES

The key technologies of image-based location mainly include the extraction of control points in the image and the location algorithm based on the image. After comparison with the topographic map, the control points are extracted in the image and their exact geodetic coordinates are read from the topographic map to provide the initial data for location.

Fig. 1: Structure of image-based location

Finally we got the coordinates of the shooting position after calculation.

Extraction of control points: Control points comprise two major parts: points in the image and points in the topographic map. To obtain the user's position through image-based location, at least three control points are needed. Generally, more than three are taken and the precise shooting position is obtained after a least-squares adjustment. The image control points should be distributed as evenly as possible; building corners or other obvious features are the usual selection. By zooming in on those corners or features, the coordinates of the control points in the image can be extracted at sub-pixel level, which minimizes the extraction error. Then the corresponding control points must be found in the topographic map and measured to get their coordinates. Thus the conversion relationship between the image coordinate system and the coordinate system of the topographic map is established, and the initial data for the subsequent location algorithm are provided (Yi et al., 2008).
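The paper selects corners manually and reads their image coordinates at sub-pixel level. As one possible aid, not part of the original method, a roughly picked corner can be refined to sub-pixel accuracy with OpenCV's cornerSubPix; in the sketch below the file name and the initial pixel positions are placeholders.

```python
# Sketch: refine roughly picked corner positions to sub-pixel accuracy (OpenCV).
import cv2
import numpy as np

gray = cv2.imread("landmark.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# Rough integer-pixel positions of the chosen control points (e.g. building corners)
corners = np.array([[412.0, 305.0], [1190.0, 298.0], [804.0, 977.0]],
                   dtype=np.float32).reshape(-1, 1, 2)

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 40, 0.001)
refined = cv2.cornerSubPix(gray, corners, winSize=(11, 11),
                           zeroZone=(-1, -1), criteria=criteria)
print(refined.reshape(-1, 2))  # sub-pixel image coordinates of the control points
```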

Image mosaic: Image-based location needs at least three control points. When fewer than three control points are available in one image, the images must be mosaicked and control points re-selected afterwards. Image mosaicking is a way to build a large, seamless, high-resolution image from several overlapping images. The course of image mosaicking is as follows. The user first collects a few rotated, user-centered images. Brightness and color correction is then applied in the service center and the images are matched with the SIFT algorithm. Once all the images are matched and integrated, the mosaic is complete. The process is shown in Fig. 2.

The SIFT (Scale-Invariant Feature Transform) algorithm was proposed in 1999 by D.G. Lowe and perfected in 2004 (Jun-Ying, 2010). The main idea is to find extreme points in scale space and to extract their location, scale and rotation-invariant descriptors as matching primitives. The SIFT algorithm must first detect features in scale space and obtain the positions and scales of the key points.

Fig. 2: Image mosaic process

In order to extract scale-invariant feature points, Lowe (2004) proposed the Difference-of-Gaussian (DoG): adjacent images obtained with different levels of Gaussian filtering are subtracted from each other to build a DoG pyramid, then extreme points are detected both in two-dimensional image space and in DoG scale space, giving the initial positions of the feature points. To locate the feature points accurately, each one is refined by a three-dimensional quadratic Taylor expansion of the DoG operator D(X), and low-contrast points and unstable edge responses are removed:
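The expression omitted here is presumably the standard quadratic expansion from Lowe (2004):

D(X) = D + (∂D/∂X)T X + (1/2) XT (∂²D/∂X²) X

where X = (x, y, σ)T is the offset from the sample point; the extremum offset follows from setting the derivative to zero, and candidates whose contrast |D| at the extremum is too low, or whose Hessian indicates a strong edge response, are rejected.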

After the precise extraction of feature points, the main peak of the gradient direction histogram is selected as the principal direction of each feature point (Bin and Na, 2009), and any direction whose peak reaches 80% of the main peak is taken as a secondary direction to enhance the robustness of matching. A feature point can have only one principal direction but may have multiple secondary directions. Each feature point is then described by 4x4 seed regions, each with 8 direction bins, so a 4x4x8 = 128-dimensional SIFT descriptor is obtained and similarity is judged by Euclidean distance. After further integration of the matching results, the splicing traces are removed and the image mosaic process is complete. The mosaic image is saved in the database so that it is convenient to use when needed.
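As an illustration of the matching and stitching steps above, the sketch below uses OpenCV's SIFT implementation, a ratio test on Euclidean descriptor distances and a RANSAC homography. This is a minimal sketch assuming OpenCV (the paper's own implementation was under VC); the file names are placeholders and brightness/color correction is omitted.

```python
# Sketch: SIFT matching and mosaic of two overlapping images with OpenCV.
import cv2
import numpy as np

img1 = cv2.imread("left.jpg")   # reference image (placeholder file names)
img2 = cv2.imread("right.jpg")  # image to be warped onto the reference

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
kp2, des2 = sift.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)

# Match the 128-dimensional descriptors by Euclidean distance and keep
# only pairs that pass Lowe's ratio test
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des2, des1, k=2)
        if m.distance < 0.75 * n.distance]

src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# Robust homography between the two images (RANSAC removes wrong matches)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the second image into the frame of the first and overlay the first
h, w = img1.shape[:2]
mosaic = cv2.warpPerspective(img2, H, (2 * w, h))
mosaic[0:h, 0:w] = img1
cv2.imwrite("mosaic.jpg", mosaic)
```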

Image-based location algorithm: Two location algorithms are used in this study: the iterative method and the direct method of space resection. The algorithm is chosen according to the number and distribution of control points to obtain the optimal location result. A schematic diagram of space resection is shown in Fig. 3.

Fig. 3: Schematic diagram of space resection, where A, B, C are control points in the image corresponding to a, b, c on the ground and S is the shooting position

Iterative method of space resection: Resection is the process of recovering the shooting coordinates (XS, YS, ZS) and photography directions (φ, ω, κ) from a certain number of control points (Li et al., 2010); this process is also called image orientation. φ is the slip angle, ω the tilt angle and κ the rotation angle of the image. A least-squares solution based on the collinearity equations is used to solve single-image resection. The original observation equations are nonlinear, therefore the solving process is iterative. Since there are six unknowns, at least three known control points are needed; in order to reduce the error caused by the accuracy of the control points, four or more are usually chosen.

First, collinearity equations are established with the control points in the image, the projection center of the image and the ground feature points corresponding to the control points:

x = -f [a1(XA-XS) + b1(YA-YS) + c1(ZA-ZS)] / [a3(XA-XS) + b3(YA-YS) + c3(ZA-ZS)]
y = -f [a2(XA-XS) + b2(YA-YS) + c2(ZA-ZS)] / [a3(XA-XS) + b3(YA-YS) + c3(ZA-ZS)]    (1)

where, x, y are the coordinates of a control point, measured directly in the image; XA, YA, ZA are the three-dimensional coordinates of the corresponding control point in the topographic map, which can be measured accurately there; f is the camera focal length, extracted from the user's request information, and the direction cosines are:

a1 = cos φ cos κ-sin φ sin ω sin κ, a2 = -cos φ sin κ-sin φ sin ω cos κ, a3 = -sin φ cos ω
b1 = cos ω sin κ, b2 = cos ω cos κ, b3 = -sin ω
c1 = sin φ cos κ+cos φ sin ω sin κ, c2 = -sin φ sin κ+cos φ sin ω cos κ, c3 = cos φ cos ω
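For reference, the nine direction cosines above assemble into the rotation matrix R used by the collinearity equations. The short sketch below is a direct transcription under the φ-ω-κ convention stated in the text and simply checks that R is orthogonal.

```python
# Sketch: rotation matrix built from the direction cosines a1..c3 above.
import numpy as np

def rotation_matrix(phi, omega, kappa):
    sp, cp = np.sin(phi), np.cos(phi)
    so, co = np.sin(omega), np.cos(omega)
    sk, ck = np.sin(kappa), np.cos(kappa)
    return np.array([
        [cp * ck - sp * so * sk, -cp * sk - sp * so * ck, -sp * co],
        [co * sk,                 co * ck,                -so],
        [sp * ck + cp * so * sk, -sp * sk + cp * so * ck,  cp * co]])

# R is orthogonal: R times its transpose should equal the identity matrix
R = rotation_matrix(0.1, -0.05, 0.3)
assert np.allclose(R @ R.T, np.eye(3))
```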

In order to facilitate computer calculation, the collinearity equations must be linearized and only the first-order terms kept. The error equations are listed and linearized:

(2)

where, (x) and (y) are the approximate image coordinates of a control point, obtained by substituting the control point coordinates in the topographic map and the approximate values of the other elements into the collinearity equations; dXS, dYS, dZS, dφ, dω, dκ are the six corrections to the elements. The equations are rewritten as follows:

V = AX - L    (3)

where, V = [Vx, Vy]T, A is the coefficient matrix composed of the partial derivatives, X = [dXS, dYS, dZS, dφ, dω, dκ]T and L = [(x)-x, (y)-y]T.

According to the principle of least squares, the unknown X can be written as follows:

X = (ATPA)-1ATPL    (4)

where, P is a unit weight matrix.

The initial values of the unknowns are rough approximations, so the calculation must converge asymptotically. The old approximate values plus the corrections become the new approximate values, and new corrections are obtained by repeating the calculation. When the corrections become smaller than a given threshold, the shooting position S has been determined accurately.
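A compact numerical sketch of this iterative procedure is given below. It evaluates the partial derivatives of the collinearity equations numerically instead of using the analytic error equations (Eq. 2 is not reproduced above) and assumes unit weights, so it should be read as an illustration of the Gauss-Newton scheme rather than the paper's exact code. In practice the initial values of (XS, YS, ZS) would come from the CELL-ID rough position, with the angles set near zero.

```python
# Sketch: iterative space resection (Gauss-Newton on the collinearity equations).
# Partial derivatives are formed numerically; weights are assumed to be unit.
import numpy as np

def rot(phi, omega, kappa):
    """Rotation matrix from the direction cosines a1..c3 listed after Eq. 1."""
    sp, cp = np.sin(phi), np.cos(phi)
    so, co = np.sin(omega), np.cos(omega)
    sk, ck = np.sin(kappa), np.cos(kappa)
    return np.array([
        [cp * ck - sp * so * sk, -cp * sk - sp * so * ck, -sp * co],
        [co * sk,                 co * ck,                -so],
        [sp * ck + cp * so * sk, -sp * sk + cp * so * ck,  cp * co]])

def project(params, ground_pts, f):
    """Collinearity equations (Eq. 1): ground points -> image coordinates."""
    Xs, Ys, Zs, phi, omega, kappa = params
    d = ground_pts - np.array([Xs, Ys, Zs])
    u = d @ rot(phi, omega, kappa)   # columns: x numerator, y numerator, denominator
    return np.concatenate([-f * u[:, 0] / u[:, 2], -f * u[:, 1] / u[:, 2]])

def space_resection(image_pts, ground_pts, f, x0, tol=1e-6, max_iter=50):
    """image_pts: (N, 2) measured image coords; ground_pts: (N, 3) map coords."""
    params = np.asarray(x0, dtype=float)
    obs = np.concatenate([image_pts[:, 0], image_pts[:, 1]])
    for _ in range(max_iter):
        l = obs - project(params, ground_pts, f)      # misclosure vector
        A = np.zeros((l.size, 6))                     # coefficient matrix
        for j in range(6):                            # numeric partial derivatives
            dp = np.zeros(6)
            dp[j] = 1e-6
            A[:, j] = (project(params + dp, ground_pts, f) -
                       project(params, ground_pts, f)) / 1e-6
        dx = np.linalg.lstsq(A, l, rcond=None)[0]     # corrections (cf. Eq. 4)
        params += dx
        if np.max(np.abs(dx)) < tol:                  # stop when corrections are small
            break
    return params                                     # Xs, Ys, Zs, phi, omega, kappa
```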

Direct method of space resection: The direct method of space resection comes from geodetic surveying. It can be described as follows: the vertical angle and the horizontal angle are observed in three directions from a "new point" to three "old points"; the coordinates of the new point can then be calculated from the coordinates of the three old points and the angle observations (Yun-Lan et al., 2008). Here the problem is how to obtain the shooting position S directly from the coordinates of three known control points Oi (i = 1, 2, 3) in the topographic map and the corresponding image points Ni (i = 1, 2, 3).

The direct method has two steps: the distances Si between Oi and the shooting position S are usually obtained first, and the coordinates of S are solved afterwards:

Get the distances Si

Let S12 be the distance between O1 and O2, S23 the distance between O2 and O3 and S31 the distance between O1 and O3, and let φ12, φ23, φ31 be the corresponding angles between the rays; then, according to the law of cosines, the equations can be listed as:

S12² = S1² + S2² - 2 S1 S2 cos φ12
S23² = S2² + S3² - 2 S2 S3 cos φ23
S31² = S3² + S1² - 2 S3 S1 cos φ31    (5)

Letting:

the iterative form can be written as below:

(6)

Let α = [F12S12², F23S23², F31S31²]T and b(k) = [-G12(S1(k)-S2(k))², -G23(S2(k)-S3(k))², -G31(S3(k)-S1(k))²]T:

The iterative form can then be rewritten as follows:

(7)

The distances between the shooting position S and the control points O1, O2 and O3 can thus be calculated by iteration.

Get the coordinate of S

With the symmetric matrix A and the unit matrix E, the orthogonal rotation matrix R can be formed as below:

(8)

Where:

Si is the distance between the known point Oi (i = 1, 2, 3) and the shooting position S, and αi, βi are the viewing directions from S to Oi; (XS, YS, ZS) is the coordinate of the shooting position and (Xi, Yi, Zi) is the coordinate of Oi (i = 1, 2, 3). The distance equations can be listed as:

(9)

Therefore, the above equation can be rewritten as follows:

(10)

Taking the square root, the equation is obtained as:

(11)

According to the known control point information, six equations can be listed. Finally, by solving these equations, the three parameters (a, b, c) of the rotation matrix and the shooting position S (XS, YS, ZS) are obtained.
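The two steps of the direct method can also be illustrated numerically. Because the iteration of Eq. 6-7 and the quaternion-based solution of Eq. 8-11 are not reproduced above, the sketch below solves the cosine-law system of Eq. 5 with a generic nonlinear least-squares solver (scipy is an assumption) and then recovers S from the three distance equations; the scene data are illustrative placeholders, with the rough position playing the role of the CELL-ID estimate.

```python
# Sketch of the direct method in two steps, using generic numerical solvers in
# place of the paper's iteration and quaternion-based solution.
import numpy as np
from scipy.optimize import least_squares

# Placeholder scene: ground coordinates of O1..O3 and an assumed true shooting
# position, used only to generate a consistent example.
O = np.array([[50.0, 20.0, 0.0], [80.0, 65.0, 5.0], [15.0, 70.0, 8.0]])
S_true = np.array([40.0, 10.0, 30.0])

# Unit rays from S to Oi. In practice they come from the measured image
# coordinates (xi, yi, -f); the angles between the rays are the same either way.
rays = (O - S_true) / np.linalg.norm(O - S_true, axis=1, keepdims=True)

pairs = [(0, 1), (1, 2), (2, 0)]
cos_phi = {p: rays[p[0]] @ rays[p[1]] for p in pairs}        # cos of ray angles
Sij = {p: np.linalg.norm(O[p[0]] - O[p[1]]) for p in pairs}  # ground distances

def cosine_residuals(s):
    # Law-of-cosines system (Eq. 5): Sij^2 = Si^2 + Sj^2 - 2 Si Sj cos(phi_ij)
    return [s[i]**2 + s[j]**2 - 2 * s[i] * s[j] * cos_phi[(i, j)] - Sij[(i, j)]**2
            for i, j in pairs]

rough = np.array([48.0, 4.0, 30.0])        # rough position (CELL-ID style estimate)
s0 = np.linalg.norm(O - rough, axis=1)     # initial values for the distances Si
S_dist = least_squares(cosine_residuals, s0, bounds=(0, np.inf)).x

def sphere_residuals(p):
    # Distance equations: |p - Oi| = Si for i = 1, 2, 3 (cf. Eq. 9)
    return np.linalg.norm(O - p, axis=1) - S_dist

S_est = least_squares(sphere_residuals, rough).x
print("estimated:", S_est, "true:", S_true)  # with this data S_est should reproduce S_true
```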

EXPERIMENT AND DISCUSSION

Experiment: Adjacent images need to be mosaicked when there are not enough control points in one image, so an image mosaic experiment was carried out first. Figure 4 shows the original images before mosaicking and Fig. 5 shows the image after mosaicking. The experiment shows that the result of the mosaic process is satisfactory.

After image mosaicking, control points in the topographic map and the corresponding image control points are needed to provide the initial data for the location algorithms. Figure 6 shows the control points selected both in the topographic map and in the image. The data selection of the other experiments is similar to Fig. 6.

In order to verify the feasibility of the location algorithms, images were collected with a TCL Z99 mobile phone and the algorithms above were tested. For LBS the value of Z is not needed at present, so it was set to 0. The experimental results of the two location algorithms are shown in Tables 1 and 2.

Table 1: Results of iterative space resection location (Units: m)

Table 2: Results of direct space resection location (Units: m)

Fig. 4: Original image1 and image2

Fig. 5: Mosaic result

The average distance means the average distance between the shooting position and the control points. Due to space limitations, the detailed coordinates in the topographic map and in the image are not listed here.

Figure 7 shows the location mean square error of the two algorithms. Let D be the mean square error, Δx the error in the X direction and Δy the error in the Y direction; D can be calculated using the following equation:

D = √(Δx² + Δy²)    (12)

Fig. 6: Control points in the topographic map and the corresponding image control points

Fig. 7: Distribution of location error

DISCUSSION

The location results show that the accuracy of the iterative method of space resection is slightly better than that of the direct method, so the iterative method is the first choice. The maximum location error of the iterative results is 5.14 m at D1 and that of the direct results is 6.601 m at D1; the minimum location error of the iterative results is 0.512 m at D4 and that of the direct results is 0.416 m at D5. As the distance between the shooting position and the control points in the topographic map increases, the location error becomes larger; when the distances are almost equal, the location results are also nearly the same. Because the calibration parameters of the mobile phone camera were not available, the results may be slightly biased in practice, but the accuracy can completely satisfy the needs of LBS. Image-based location for LBS is therefore feasible.

Compared with the traditional location technologies in LBS, image-based location has the following characteristics:

An image-based location algorithm for LBS is proposed for the first time; taking the mobile terminal camera as a means of location enriches LBS location technology
The algorithm takes camera phones as clients, which brings more users to LBS operators
The algorithm is simple and requires no modification of mobile phones; users only need to collect images, and location can be achieved with high accuracy
Compared with other technologies except GPS, the accuracy of our algorithm is better, and it is sufficient for LBS
Location can be achieved both indoors and outdoors

CONCLUSION

Most traditional location methods rely on external aids and their accuracy is rather poor except for GPS, while GPS location requires a GPS module to be installed on the mobile terminal. Image-based location breaks this traditional thinking and allows a more flexible way of LBS location. This study analyzed the characteristics and critical issues of image-based location for LBS. We first extract the feature points in the image, then get an approximate position through CELL-ID and find the nearby topographic maps. After that, the coordinates in the topographic maps of the points corresponding to the image feature points can be obtained accurately. Finally, the user's position is obtained through space resection. Strategy Analytics reports that camera phone sales this year increased 21% compared with last year. There will be a wide range of applications for image-based location.

ACKNOWLEDGMENTS

This study is supported by the Research Funds: Basic theory of the accurate positioning for China's lunar rover (National Natural Fund); Research of the photogrammetry-based positioning technology (National 863 Plan); Key technologies of high-precision navigation and positioning for China's lunar rover (National 863 Plan); Projects of Liaoning Province University Innovation Team; Project of Liaoning Province University Key Laboratory and Higher Education Research Program of Liaoning.

REFERENCES

  • Qin, Y., C. Jun-Liang and M. Xiang-Wu, 2009. LBS-oriented creation method and implementation for telecommunication value-added services. J. Software, 20: 965-974.

  • Zeng, Y., 2004. Implementation of mobile LBS system. Ph.D. Thesis, University of Electronic Science and Technology, China.

  • Ming-Cai, W. and Y. Cheng-Kuan, 2009. The application and development of LBS industry in China. J. Hebei Normal Univ. (Nat. Sci. Edn.), 33: 687-692.

  • Yi, C., L. Jue and Z. Bo, 2008. Application of total least squares to space resection. J. Geomatics Inform. Sci. Wuhan Univ., 33: 1271-1274.

  • Jun-Ying, S., 2010. Mosaicing of multiple spectrum image acquired from unmanned airship with SIFT feature matching. J. Applied Sci., 28: 616-620.

  • Lowe, D.G., 2004. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vision, 60: 91-110.

  • Bin, L. and Y. Li-Na, 2009. Algorithm for sequence image splicing based on SIFT features. Ordnance Ind. Autom., 28: 76-78.

  • Li, Y., N. Qian and Z. Zhan, 2010. Space resection of line scanner CCD image based on the description of quaternions. J. Geomatics Inform. Sci. Wuhan Univ., 35: 201-204.

  • Yun-Lan, G., C. Xiao-Jun and Z. Shi-Jian, 2008. A solution to space resection based on unit quaternion. J. Solut. Space Resect. Based Unit Quaternion, 37: 30-35.

  • Deren, L. and S. Xin, 2009. Geo-spatial urban information services based on real image: Take image city Wuhan as an example. J. Geomatics Inform. Sci. Wuhan Univ., 34: 127-130.
