ABSTRACT
Accurate pupil contour detection plays an important role in iris recognition systems. This study presents a new algorithm for precise pupil segmentation based on the Gradient Vector Flow (GVF) snake and the Angular Integral Projection Function (AIPF). The proposed method has four stages. In the first stage, the iris image is filtered and smoothed. In the second stage, the pupil center is estimated as the center of mass of the binarized iris image. Then, the circular pupil boundary is localized by applying the AIPF followed by circle fitting. In the final stage, the precise pupil contour is detected using a GVF snake initialized on the detected circular pupil boundary. Experimental results obtained by applying the proposed algorithm to the CASIA V3.0 iris image database demonstrate that the presented method improves both the accuracy and speed of iris localization.
DOI: 10.3923/itj.2010.1653.1658
URL: https://scialert.net/abstract/?doi=itj.2010.1653.1658
INTRODUCTION
Extracting an accurate pupil contour is critically important to iris recognition systems. It has a great impact on iris matching accuracy because the most important iris information lies in the collarette area, which is located near the pupil boundary.
Many pupil segmentation methods have been proposed to date. The most prominent work is that of Daugman (1993, 2004), in which an integrodifferential operator was proposed to segment the iris region. Another influential method, based on edge detection and the Hough transform, was proposed by Wildes (1997). Later, many variations of and alternatives to these segmentation methods were presented by Kong and Zhang (2001), Ma et al. (2004), Nishino and Nayar (2004) and Tisse et al. (2002); more recently, further methods were presented by Proenca and Alexandre (2006), Sudha et al. (2009) and Zuo et al. (2008). However, all of the above methods model the pupil contour as a perfect circle, which may degrade detection accuracy because, in many cases, small areas of the pupil are taken as iris region. In addition, most of these methods are computationally intensive since they are based on edge detection.
As a recent trend in image segmentation, active contour models are deployed to locate accurate object boundaries. First presented by Kass et al. (1988) as an energy minimizing process, the active contour model, or snake, evolves under the control of an internal force, an image force and an external constraint force. Later on, several methods were proposed to improve its performance (Abrantes and Marques, 1996; Cohen, 1991; Davatzikos and Prince, 1995; Williams and Shah, 1992). However, the problems of initialization, low speed and poor convergence to boundary concavities still hold. In particular, Xu and Prince (1998) proposed an improved model based on a new external force, called Gradient Vector Flow (GVF), to enhance the snake's ability to move into boundary concavities, but it is computationally expensive.
In this study, a new method for detecting the precise pupil contour is proposed. It improves the computational efficiency of the GVF snake model by providing an accurate initial contour, which in turn reduces the number of snake iterations. Thus, in the proposed method, the circular pupil boundary is localized first and the precise pupil boundary is then detected. In our earlier study (Mohammed et al., 2010), the Angular Integral Projection Function (AIPF), a general function performing integral projection along angular directions, was proposed and applied effectively to detect the circular pupil boundary in iris images. After the approximate pupil center is detected, the AIPF is applied to localize the circular pupil boundary by detecting pupil boundary points, followed by circle fitting. Then the accurate pupil contour is detected using a GVF snake model initialized on the pupil circle localized in the previous stage. Experimental results on CASIA V3.0 iris images (CASIA, 2006) show that the proposed method converges accurately to the final pupil contour within a few iterations, which confirms the performance of the proposed method.
PROPOSED ALGORITHM
The proposed algorithm for detecting the precise pupil contour is composed of four stages. The first stage performs image preprocessing; in the second stage, the pupil center is approximated; in the third stage, the circular pupil boundary is localized; and the last stage detects the precise pupil contour. The details are as follows:
Image denoising and enhancement: Due to the illumination source, the pupil region of iris images from the CASIA V3.0 database is commonly contaminated by several reflection spots, which can cause the pupil center estimation to fail. One simple scheme to remove these noisy spots is to first compute the complement of the iris image, then fill the resulting dark holes based on 4-connected pixel connectivity and finally compute the image complement again. Since this technique sharpens the image, a Gaussian smoothing filter is then applied. Figure 1a-c illustrate this technique.
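A minimal sketch of this preprocessing stage is given below (Python with NumPy/SciPy/scikit-image, assuming an 8-bit grayscale input); the grayscale morphological reconstruction stands in for the complement-and-fill step described above and the Gaussian sigma is an illustrative choice, not a value from the original method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.morphology import reconstruction

def remove_reflections(img, sigma=2.0):
    """Suppress bright reflection spots inside the dark pupil, then smooth.

    img: 8-bit grayscale iris image as a NumPy array.
    sigma: Gaussian smoothing width (illustrative value, not from the paper).
    """
    comp = 255 - img.astype(float)            # complement: reflections become dark holes
    seed = comp.copy()
    seed[1:-1, 1:-1] = comp.max()             # seed touches only the image border
    filled = reconstruction(seed, comp, method='erosion')  # fill interior dark holes
    restored = 255 - filled                   # complement back to original polarity
    return gaussian_filter(restored, sigma=sigma)
```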
Pupil center estimation: A gray-level histogram-based approach is utilized to estimate the pupil center. As is commonly known, the pupil region is almost always located near the center of the iris image. Taking account of this and in order to reduce the effect of dark areas caused by eyelashes and low-illumination regions, as well as to reduce computation time, only the middle sub-image is considered for histogram analysis (Fig. 2a). From the computed histogram curve, a threshold value is selected to binarize the image. However, as shown in Fig. 2b, the thresholded image still has outlier pixels caused by eyelashes, so morphological processing follows. Finally, the approximate pupil center (xp, yp) is determined by calculating the center of mass of the segmented pupil, as in Fig. 2c.
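The sketch below illustrates this stage under our own assumptions: the middle sub-image is taken as the central half of the frame, the threshold is placed a fixed margin above the dominant dark histogram peak (the paper selects it from the histogram curve without fixing a rule) and the morphological cleanup is a simple binary opening.

```python
import numpy as np
from scipy.ndimage import binary_opening, center_of_mass

def estimate_pupil_center(img, margin=20):
    """Approximate pupil center (xp, yp) from the middle sub-image.

    margin: offset added to the dark histogram peak (hypothetical choice).
    """
    h, w = img.shape
    r0, c0 = h // 4, w // 4
    sub = img[r0:r0 + h // 2, c0:c0 + w // 2]          # middle sub-image (Fig. 2a)
    hist, _ = np.histogram(sub, bins=256, range=(0, 256))
    dark_peak = int(np.argmax(hist[:128]))             # dominant dark (pupil) gray level
    binary = sub < dark_peak + margin                  # binarization (Fig. 2b)
    binary = binary_opening(binary, structure=np.ones((5, 5)))  # remove eyelash outliers
    cy, cx = center_of_mass(binary)                    # center of mass of the pupil blob
    return c0 + cx, r0 + cy                            # (xp, yp) in full-image coordinates
```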
Circular pupil boundary localization: In this stage, the circular pupil boundary is localized using the AIPF followed by circle fitting. A set of pupil boundary points is detected by applying the AIPF and a circle is then fitted to these boundary points using the least-squares method.
Angular integral projection function: In our previous work (Mohammed et al., 2010), the Angular Integral Projection Function (AIPF) was proposed as a general function to perform integral projection along angular directions. Generally, the AIPF is applied to detect boundary points between different image regions. It computes the integral of the image intensity along a line in image space and is defined as follows:
$$\mathrm{AIPF}(\theta,\rho)=\frac{1}{h}\int_{-h/2}^{h/2} I\big(x_0+\rho\cos\theta-t\sin\theta,\; y_0+\rho\sin\theta+t\cos\theta\big)\,dt \qquad (1)$$
where, (x0, y0) is the image center, I(x, y) is the gray level of the pixel at location (x, y), θ is the angle between the line normal and the x-axis, ρ is the distance from the image center to the line and h is the number of points within the line. It is worth pointing out that applying the AIPF to a set of w parallel image lines with the same parameters (θ, h) yields an integration rectangle of dimensions h × w making angle θ with the x-axis.
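A possible NumPy implementation of Eq. 1 is sketched below; it samples each line with bilinear interpolation and returns the mean-intensity projection curve over ρ. The sampling scheme is our reading of the definition above, not code from the original study.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def aipf(img, center, theta, rho_max, h=15):
    """AIPF projection curve for one direction theta (Eq. 1).

    For each radius rho in [1, rho_max), average the image intensity over
    a line of h points perpendicular to the radial direction at angle theta.
    """
    x0, y0 = center
    rhos = np.arange(1, rho_max)
    t = np.arange(h) - (h - 1) / 2.0                 # offsets along each line
    nx, ny = np.cos(theta), np.sin(theta)            # radial (line-normal) direction
    px, py = -np.sin(theta), np.cos(theta)           # direction along the line
    xs = x0 + rhos[:, None] * nx + t[None, :] * px
    ys = y0 + rhos[:, None] * ny + t[None, :] * py
    # map_coordinates expects (row, col) = (y, x); order=1 is bilinear sampling
    vals = map_coordinates(img.astype(float), [ys.ravel(), xs.ravel()], order=1)
    return vals.reshape(len(rhos), h).mean(axis=1)   # projection curve over rho
```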
Fig. 1: Image filtering and smoothing. (a) Original image, (b) complemented image after hole filling and (c) processed image
Fig. 2: Pupil center estimation. (a) The middle sub-image, (b) binary image and (c) segmented pupil
Fig. 3: Precise pupil contour detection. (a) Circular pupil boundary (initial snake), (b) initial snake points and (c) final pupil contour
Pupil circle localization: In order to detect a set of pupil boundary points, the AIPF is applied taking the pupil center obtained in the previous stage as the image center, with θ running over [0, 2π]. A boundary point is then selected after computing the gradient of the projection curve resulting from the integration rectangle in each θ direction. A four-step technique is applied to detect each boundary point (see the sketch after this list):
• In the gradient curve, values under zero are set to zero
• Detect all possible peaks in the curve
• Filter out all peaks below 35% of the maximum peak value
• Select the first peak in the resulting curve as the detected boundary point
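The four steps above translate directly into the following sketch, applied to the gradient of one AIPF projection curve; scipy.signal.find_peaks is our substitute for whatever peak detector the original implementation used.

```python
import numpy as np
from scipy.signal import find_peaks

def boundary_point(proj):
    """Return the radius index of the pupil boundary along one direction."""
    grad = np.gradient(proj)                       # gradient of the projection curve
    grad[grad < 0] = 0                             # step 1: clip values under zero
    peaks, _ = find_peaks(grad)                    # step 2: all candidate peaks
    if peaks.size == 0:
        return None
    strong = peaks[grad[peaks] >= 0.35 * grad[peaks].max()]  # step 3: 35% filter
    return int(strong[0]) if strong.size else None # step 4: first surviving peak
```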
Finally, the circular pupil boundary is localized by fitting a circle to the detected boundary points using the least-squares method, as shown in Fig. 3a.
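Circle fitting can be done in closed form; the sketch below uses the algebraic (Kasa) least-squares fit, one common realization of the least-squares method named above.

```python
import numpy as np

def fit_circle(xs, ys):
    """Least-squares circle fit to the detected boundary points (Kasa method).

    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) linearly.
    """
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    b = xs**2 + ys**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r
```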
Precise pupil contour detection: In the proposed algorithm, a GVF snake model is applied to detect the precise pupil contour. It is initialized on the localized pupil circle, which improves the performance of the final contour detection in terms of both accuracy and processing time.
GVF snake model: The traditional snake model is a curve, parametrically represented as x(s) = [x(s), y(s)], s ∈ [0, 1], that moves through the image domain to minimize the energy functional:
$$E=\int_0^1 \frac{1}{2}\Big(\alpha\,|x'(s)|^2+\beta\,|x''(s)|^2\Big)+E_{ext}\big(x(s)\big)\,ds \qquad (2)$$
where the first term is the internal energy function; α and β are weighting parameters that control the snake's tension and rigidity, respectively, and x′(s) and x″(s) denote the first and second derivatives of x(s) with respect to s. The second term is the external energy function, which is derived from the image. Since a snake must satisfy the Euler equation in the process of minimizing E, the above terms can be reformulated as:
$$\alpha\,x''(s)-\beta\,x''''(s)-\nabla E_{ext}=0 \qquad (3)$$
The GVF snake model presented by Xu and Prince (1998) is distinguished from nearly all other snake formulations in that its external force cannot be written as the negative gradient of a potential function. Consequently, it cannot be formulated using the standard energy minimization framework; instead, it is specified directly from the force balance condition:
$$F_{int}+F_{ext}=0 \qquad (4)$$
where Fint = α x″(s) − β x″″(s) is the internal force and Fext = −∇Eext is the external force. By taking x as a function of both time t and s, the snake is made dynamic. In order to find a solution to Eq. 3, the partial derivative of x with respect to t is set equal to the left-hand side of Eq. 3 as follows:
$$x_t(s,t)=\alpha\,x''(s,t)-\beta\,x''''(s,t)-\nabla E_{ext} \qquad (5)$$
In their snake formulation, Xu and Prince proposed a new static external force field, the gradient vector flow (GVF) field, defined as the vector field v(x, y) = [u(x, y), v(x, y)] that minimizes the energy functional:
$$\varepsilon=\iint \mu\big(u_x^2+u_y^2+v_x^2+v_y^2\big)+|\nabla f|^2\,|\mathbf{v}-\nabla f|^2\;dx\,dy \qquad (6)$$
where μ is the regularization parameter and ∇f is the gradient of the edge map f derived from the original image. In order to obtain the corresponding dynamic snake equation, the potential force −∇Eext in Eq. 5 is replaced with v(x, y), yielding:
$$x_t(s,t)=\alpha\,x''(s,t)-\beta\,x''''(s,t)+\mathbf{v} \qquad (7)$$
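The GVF field of Eq. 6 is usually computed by gradient-descent diffusion of the edge-map gradient; a minimal sketch follows, with the time step and iteration count chosen for illustration (they must satisfy the stability condition of Xu and Prince, 1998).

```python
import numpy as np
from scipy.ndimage import laplace

def gvf_field(f, mu=0.1, iters=80, dt=0.2):
    """Compute the GVF field (u, v) that minimizes Eq. 6 for edge map f."""
    fy, fx = np.gradient(f)                  # axis 0 = rows (y), axis 1 = cols (x)
    mag2 = fx**2 + fy**2                     # |grad f|^2
    u, v = fx.copy(), fy.copy()              # initialize the field with grad f
    for _ in range(iters):                   # explicit diffusion iterations
        u += dt * (mu * laplace(u) - mag2 * (u - fx))
        v += dt * (mu * laplace(v) - mag2 * (v - fy))
    return u, v                              # x- and y-components of the force field
```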
To detect the contour of an object, an initial contour consisting of n snake points (snakles) is required. The snake curve is then deformed iteratively according to Eq. 7, such that in each iteration a new snake curve with lower energy and a better fit to the object boundary is obtained. After each iteration, however, the snake is interpolated in order to keep the distance between the snake points within a specified range.
Precise pupil contour convergence: Snake models are generally sensitive to the location of the initial contour. As shown by Mohammed et al. (2009, 2010), the circular pupil boundary can be localized accurately. Thus, in the proposed algorithm, the initial contour is set to the previously obtained pupil circle, consisting of n snakles as shown in Fig. 3b, which allows convergence to the accurate pupil contour in fewer iterations. For further computational improvement, only a sub-image, selected based on the location of the initial pupil contour, is considered when computing the GVF field. A central problem with the snake model is the determination of the weighting parameters α and β, the regularization parameter μ and the number of iterations m. Through extensive experiments, the best snake parameters were selected from a wide range of values.
The snake keeps deforming until it reaches the maximum number of iterations m. Figure 3c shows the final pupil contour.
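One standard way to iterate Eq. 7 is the semi-implicit scheme of Kass et al. (1988), in which the internal force is inverted once as a pentadiagonal system. The sketch below initializes the snakles on the fitted pupil circle, uses the parameter values reported in the Results section as defaults and omits the snakle interpolation step for brevity; it is a sketch of the procedure under these assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def deform_snake(cx, cy, r, u, v, n=65, alpha=2.0, beta=4.9, tau=1.0, iters=4):
    """Deform a circular initial snake under the GVF force field (u, v) via Eq. 7."""
    s = np.linspace(0, 2 * np.pi, n, endpoint=False)
    x, y = cx + r * np.cos(s), cy + r * np.sin(s)      # initial snakles on the circle
    # Stiffness matrix A applies -alpha*x'' + beta*x'''' on the closed contour
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2 * alpha + 6 * beta
        A[i, (i - 1) % n] = A[i, (i + 1) % n] = -(alpha + 4 * beta)
        A[i, (i - 2) % n] = A[i, (i + 2) % n] = beta
    Minv = np.linalg.inv(np.eye(n) + tau * A)          # semi-implicit time stepping
    for _ in range(iters):
        fx = map_coordinates(u, [y, x], order=1)       # sample external force at snakles
        fy = map_coordinates(v, [y, x], order=1)
        x = Minv @ (x + tau * fx)
        y = Minv @ (y + tau * fy)
    return x, y
```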
RESULTS
The performance of the proposed algorithm is investigated with emphasis on detection accuracy and computational speed. Extensive experiments were performed on the CASIA (2006) database, which includes 2655 iris images and is widely employed as a public-domain database. All experiments were implemented in MATLAB 7.0.1 on a PC with a 3 GHz processor and 1024 MB of RAM.
In detail, two sets of parameters were selected in the experimentation. For the first set, the parameters required to apply the AIPF, h = 15 pixels and Δθ = 15° were selected following Mohammed et al. (2009, 2010), wherein the AIPF was applied efficiently with the same parameters; the w parameter was set to 75 pixels, the maximum pupil radius in the iris database. The second set comprises the parameters of the GVF snake model, selected empirically based on the best results obtained for the snake: α = 2, β = 4.9 and μ = 0.1. The number of points in the initial circular contour was set to 65. Based on the closeness of the initial contour to the final pupil contour, the maximum number of snake iterations required to converge was selected empirically as m = 4. Figure 4 shows samples of detected pupil contours.
Fig. 4: Detected pupil contour samples
Fig. 5: Pupil localization performance compared with Daugman's method. (a) Pupil center and (b) pupil radius
To evaluate the performance of pupil contour detection, subjective and ground truth-based evaluation approaches were adopted. In the subjective evaluation, the detected pupil contour is compared visually with optimal pupil contour localization results. For comparative analysis, the Daugman (1993) integrodifferential operator was also implemented as a prevailing method for iris segmentation. The detection results are reported in Table 1.
In the ground truth-based approach, a ground truth containing the manually localized pupil features (center, radius) for each iris image in the database was created (Mohammed et al., 2010). The detection accuracy is then calculated as the distance between the center and radius of the average circle extracted from the final pupil contour and those of the ground truth (Zheng et al., 2005). Figure 5a and b show the accuracy performance for the pupil center and radius, respectively, for both Daugman's and the proposed method. The proposed algorithm places 80.65% of the detected pupil centers and 94.83% of the estimated pupil radii within 1 pixel confidence.
Table 1: Accuracy and time of the proposed method
From the above results, it is clear that the proposed algorithm improves the accuracy performance. Moreover, it needs less computation time due to the few iterations required to converge to the final pupil contour.
CONCLUSION
In this study, a new algorithm for precise pupil contour detection is presented. Following image preprocessing, the pupil center is approximated; the AIPF is then applied, in conjunction with circle fitting, to locate the circular pupil boundary. Finally, a GVF snake initialized on the localized circular pupil boundary is applied to detect the precise pupil contour. The proposed method thus solves the problem of snake sensitivity to the initial contour and speeds up convergence to the final contour. Experiments on CASIA V3.0 iris images prove the efficiency of the proposed algorithm in terms of accuracy and computational speed.
ACKNOWLEDGMENTS
This study was supported in part by the Natural Science Foundation of China (NSFC) under Contract No. 60872099 and the National High-Tech Research and Development Plan of China (863) under Contract No. 2006AA01Z308.
REFERENCES
- Abrantes, A.J. and J.S. Marques, 1996. A class of constrained clustering algorithms for object boundary extraction. IEEE Trans. Image Process., 5: 1507-1521.
- Cohen, L.D., 1991. On active contour models and balloons. CVGIP: Image Understanding, 53: 211-218.
- Daugman, J.G., 1993. High confidence visual recognition of persons by a test of statistical independence. IEEE Trans. Pattern Anal. Mach. Intell., 15: 1148-1161.
- Daugman, J., 2004. How iris recognition works. IEEE Trans. Circuits Syst. Video Technol., 14: 21-30.
- Davatzikos, C. and J. Prince, 1995. An active contour model for mapping the cortex. IEEE Trans. Med. Imag., 14: 65-80.
- Kass, M., A. Witkin and D. Terzopoulos, 1988. Snakes: Active contour models. Int. J. Comput. Vision, 1: 321-331.
- Kong, W. and D. Zhang, 2001. Accurate iris segmentation based on novel reflection and eyelash detection model. Proceedings of the International Symposium on Intelligent Multimedia, Video and Speech Processing, May 2-4, 2001, Hong Kong, pp: 263-266.
- Mohammed, G.J., H.B. Rong and A. Al-Kazzaz, 2009. A new localization algorithm for iris recognition. Inform. Technol. J., 8: 226-230.
- Proenca, H. and L.A. Alexandre, 2006. Iris segmentation methodology for non-cooperative recognition. IEE Proc. Vis. Image Signal Process., 153: 199-205.
- Sudha, N., N. Puhan, H. Xia and X. Jiang, 2009. Iris recognition on edge maps. IET Comput. Vis., 3: 1-7.
- Tisse, C., L. Martin, L. Torres and M. Robert, 2002. Person identification technique using human iris recognition. Proceedings of the 15th International Conference on Vision Interface, May 27-29, 2002, Calgary, Canada, pp: 294-299.
- Wildes, R.P., 1997. Iris recognition: An emerging biometric technology. Proc. IEEE, 85: 1348-1363.
- Williams, D.J. and M. Shah, 1992. A fast algorithm for active contours and curvature estimation. CVGIP: Image Understanding, 55: 14-26.
- Xu, C. and J.L. Prince, 1998. Snakes, shapes and gradient vector flow. IEEE Trans. Image Process., 7: 359-369.
- Zheng, Z., J. Yang and L. Yang, 2005. A robust method for eye features extraction on color image. Pattern Recognit. Lett., 26: 2252-2261.
- Zuo, J., N. Ratha and J. Connell, 2008. A new approach for iris segmentation. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, June 23-28, 2008, Anchorage, Alaska, pp: 1-6.