Personal Identification System Based on Palmprint
P. Esther Rani
The need for a secure personal identification system has led researchers to the field of biometrics, and a great deal of research is currently carried out in this area. The palmprint is one of the most promising traits because it is easy to acquire, accurate and stable. It contains principal lines, ridges and wrinkles that can uniquely identify a person; it is user friendly, offers a large surface area and yields many features even from low-resolution images. It is also highly discriminative: even identical twins have different palmprints. In the proposed study, features are extracted from the palmprint using the Local Gabor XOR Pattern (LGXP) and principal component analysis. Each feature set is matched using the Euclidean distance measure, and the scores generated by the individual matchers are combined using score-level fusion. This fusion technique improves the performance of the biometric system and is found to provide low error rates and high recognition accuracy.
Received: November 27, 2013;
Accepted: March 07, 2014;
Published: April 29, 2014
A biometric system is essentially a pattern recognition system which identifies
a person based on the physiological or behavioral characteristics possessed
by the person (Jain et al., 1999). Biometric features
have been widely used in personal authentication systems because they are more reliable
than conventional knowledge-based methods (e.g., passwords and PINs) and
token-based methods (e.g., passports and ID cards). Different physical
or behavioral characteristics like fingerprint, face, iris, palmprint, hand
geometry, voice, gait, signature etc., have been widely used in biometric systems.
Among these traits hand based biometrics such as palmprint, fingerprint and
hand geometry are very popular because of their high user acceptance. The most
widely used feature is the fingerprint: its small sensor size, ease of acquisition
and high accuracy have made it the most popular biometric technology. However,
it is difficult to extract features from damaged fingers and user acceptance
is lower because fingerprints are popularly associated with police procedures
(Zhang, 2004). Hand geometry has advantages like small
feature size and low computation complexity but it has the disadvantage of high
cost and low accuracy (Fung et al., 1997). Palmprints
contain many features such as principal lines, wrinkles, ridges, datum points and
minutiae. The palmprint is highly unique: even identical twins have different
palmprints (Kong et al., 2006a, b).
It is also rich in texture and in the proposed study Gabor filters are used
for texture feature extraction. Also principal component analysis is used to
extract the global features from the palmprint. For each feature, matching scores
are computed using the Minimum Distance Rule (MDR) and the individual scores
are then combined using score-level fusion.
Personal identification based on palmprint has been an active area of research for the past decade. A number of algorithms have been proposed in the literature and they may be broadly classified into line-based, subspace-based, statistical, coding-based and hybrid approaches, reviewed below.
Many researchers have extracted the palm lines for personal identification.
Mainly edge detection methods are used for this purpose. The lines may be matched
directly or represented in other formats for matching. Wong
et al. (2008) used Sobel operators with different orientations to
extract the line information from the normalized palmprint images. The feature
vectors are binary (zero or one) and the Hamming distance is used for matching. Wang
and Ruan (2006a) used two-stage steerable filters for global and local filtering.
In the first stage, steerable filters are applied to the whole image to extract
the palm lines and approximate directional angles. In the second stage, they
extract palm lines and connect the broken lines in local regions of the same
palmprint image. Their method is claimed to be effective and fast in
extracting palm lines from online images.
Subspace methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Independent Component Analysis (ICA) are widely adopted in the literature for feature representation and dimensionality reduction. The coefficients obtained in the subspace are used as features, and distance metrics or other classifiers are used for matching. Instead of applying the subspace methods to the images directly, some methods first apply Gabor filters, the Discrete Cosine Transform (DCT) or wavelets. Subspace feature extraction offers strong representational power, low computational cost, easy implementation and good class separation, so it is widely used in fields such as face and palmprint recognition.
Statistical features can be easily extracted and represented for identification;
either local or global statistical features can be obtained. In the local statistical
methods, the image is transformed into another domain, the transformed image is
subdivided into small regions and the mean and variance of these regions are stored
as feature vectors. Image transforms such as the Fourier, Gabor and wavelet
transforms are used for this purpose. Lu et al. (2002) used histograms of local
binary patterns as features. Global statistical methods compute global features
such as moments and the center of gravity directly from the transformed images.
Wu et al. (2004) used fuzzy directional element energy features, a statistical
feature containing some line structural information about palmprints. The Euclidean
distance is used for matching.
Different coding algorithms are proposed in the literature and they provide
high recognition rates. Kumar and Shen (2004) used the
Real Gabor Function (RGF) on palmprints. The RGF filter was implemented using
13×13 spatial masks with six different orientations. A circular ROI was extracted
and, for each concentric circular band, the mean and variance were estimated.
The ordered set of feature vectors is called the palmcode. A competitive
coding scheme was developed by Kong and Zhang (2004),
which takes into account the orientation field of the palmprint constituted
by the palm lines. A 2D Gabor filter is used to extract the orientation field,
a novel coding scheme generates a bitwise feature representation
and the bitwise angular distance is used to compare two feature codes. Better
performance was achieved in comparison to palm code and fusion code.
The hybrid methods combine different image processing techniques for feature
extraction from the palmprint and employ classifiers such as neural networks.
Zhu and Xing (2009) proposed a new hierarchical palmprint
recognition method. First, major lines are extracted by Canny detectors on four
gradient images. Then the dual-tree complex wavelet transform is applied to the
palmprint image to extract texture features. The two features are shown to
complement each other and a recognition rate of 97.82% is achieved. Zhang
et al. (2010) used multispectral palmprint images to acquire more
descriptive information. A data acquisition device was designed to capture palmprint
images under blue, green, red and Near Infrared (NIR) illumination. The
method employed orientation-based coding of texture features.
MATERIALS AND METHODS
The palmprint is one of the most widely used biometric traits. Any palmprint recognition process has the following major steps: (1) Image preprocessing, (2) Feature extraction and (3) Matching. In the proposed method, the input image is enhanced using a median filter and the central palm area is extracted. Features are then extracted from the selected ROI of the palm image; here, the texture features and the global PCA features are extracted. Finally, based on the extracted features, the palm image is recognized using the Euclidean distance measure. The block diagram of the proposed work is shown in Fig. 1.
|| Block diagram of the proposed system
|| (a) ROI selection-poly U database and (b) Extracted ROI
The different blocks are described below.
Preprocessing: The first step in any biometric authentication system
after capturing the image is preprocessing. Preprocessing is used to align different
palm print images and to segment the central parts for feature extraction. Most
of the preprocessing algorithms employ the key points between fingers to set
up a coordinate system. Preprocessing involves generally five common steps:
(1) Binarizing the palm images, (2) Extracting the contour of hand and/or fingers,
(3) Detecting the key points, (4) Establishing a coordinate system and (5) Extracting
the central parts. Initially, the images in the database are filtered using a median
filter. This suppresses noise while preserving the edges in the palmprint images. The entire
palmprint is not used for feature extraction; instead, a Region of Interest (ROI)
is extracted from the enhanced palmprint image. Fig. 2
shows the reference points used and the extracted central palm area. The detailed
steps are explained in the previous study (Rani and Lakshmi, 2012).
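The filtering and ROI-cropping steps above can be sketched in Python as follows. The fixed central crop is a stand-in for the key-point-based ROI extraction, which is database-specific; the image shape and ROI size are assumptions for illustration.

```python
# Sketch of the preprocessing stage: median filtering followed by a
# central-square ROI crop (placeholder for key-point-based ROI selection).
import numpy as np
from scipy.ndimage import median_filter

def preprocess(palm: np.ndarray, roi_size: int = 128) -> np.ndarray:
    """Median-filter a grayscale palmprint, then crop a central square ROI."""
    smoothed = median_filter(palm, size=3)  # suppress noise, preserve edges
    h, w = smoothed.shape
    top = (h - roi_size) // 2
    left = (w - roi_size) // 2
    return smoothed[top:top + roi_size, left:left + roi_size]

roi = preprocess(np.random.randint(0, 256, (384, 284), dtype=np.uint8))
print(roi.shape)  # (128, 128)
```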
Feature extraction: Feature extraction plays an important role in image
identification and verification. The palmprint contains principal lines that
can be used to represent features but these lines are not sufficient to uniquely
represent a person because the principal lines in some people are similar (Zhang
et al., 2003). Hence, the texture features are extracted from the
palmprint using Gabor filter.
Local Gabor XOR Pattern (LGXP): The circular Gabor filter (Zhang
et al., 2003) is well suited to texture analysis and has the
form shown in Eq. 1:

G(x, y, θ, u, σ) = (1/2πσ²) exp{-(x²+y²)/2σ²} exp{2πi(ux cosθ+uy sinθ)} (1)
u is the frequency of the sinusoidal wave; θ is the orientation of the
function and σ is the standard deviation of the Gaussian envelope. Such
Gabor filters have been widely used in various applications like fingerprint
recognition, face recognition and texture analysis.
The Gabor filter G(x, y, θ, u, σ) is a complex-valued function. Decomposing G(x, y, θ, u, σ) into its real and imaginary parts gives Eq. 2:

G(x, y, θ, u, σ) = R(x, y, θ, u, σ) + iI(x, y, θ, u, σ) (2)
where, R(x, y, θ, u, σ) and I(x, y, θ, u, σ) represent the real and imaginary parts of the Gabor filter. In order to provide more robustness to brightness variation, a zero-mean Gabor filter is necessary. The mean of the imaginary part of the Gabor filter is automatically zero because of the odd symmetry of the sine function, but the mean of the real part of the filter is not zero because of the even symmetry of the cosine function. A zero-mean Gabor filter is obtained using Eq. 3 given below:

R'(x, y, θ, u, σ) = R(x, y, θ, u, σ) - (Σi Σj R(i, j, θ, u, σ))/(2n+1)² (3)
where, (2n+1)² represents the size of the filter. The magnitude
and phase are then computed using Eq. 4 and 5:

M(x, y) = √(R²(x, y) + I²(x, y)) (4)

φ(x, y) = arctan(I(x, y)/R(x, y)) (5)
The phase values are then quantized and the LXP operator is applied to the quantized phases of the central pixel and each of its neighbors and finally the resulting binary labels are concatenated together as the local pattern of the central pixel as shown in Fig. 3.
The pattern map described above is calculated for a 3x3 sub-image and the same is obtained for each of the n sub-images. Finally, the concatenated pattern map for the filtered image is given by Eq. 6:
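The quantize-and-XOR encoding just described can be sketched as follows. The number of quantization levels and the 8-neighbour ordering are assumptions for illustration.

```python
import numpy as np

def lgxp(phase, levels=4):
    """LGXP sketch: quantize the Gabor phase map into `levels` bins, then
    compare each interior pixel's bin with those of its 8 neighbours
    (LXP operator: bit = 1 where the bins differ, 0 where they match)
    and concatenate the 8 bits into one byte per pixel."""
    q = np.floor((phase + np.pi) / (2 * np.pi) * levels).astype(int) % levels
    h, w = q.shape
    centre = q[1:-1, 1:-1]
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(neighbours):
        shifted = q[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (shifted != centre).astype(np.uint8) << bit
    return code
```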
Principal component analysis: Next the PCA feature is computed. PCA
is also known as Karhunen-Loeve (K-L) transform. PCA is a classic appearance-based
technique used to extract global features in many applications such as iris
recognition, face recognition and image compression. It is a way of identifying
patterns in data and expressing the data in such a way as to highlight their
similarities and differences. The objectives of PCA are to reduce the dimension
of the data set and identify new meaningful underlying variables. The key idea
is to project the objects to an orthogonal subspace for their compact representations.
It usually involves a mathematical procedure that transforms a number of correlated
variables into a smaller number of uncorrelated variables which are called principal
components. The basic approach is to compute the eigenvectors of the covariance
matrix and approximate the original data by a linear combination of the leading
eigenvectors. Personal identification using PCA is described by Lu et al. (2003).
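As a NumPy-based sketch of how the global "eigenpalm" features can be computed (the training-set and feature dimensions here are assumptions, not the paper's settings):

```python
import numpy as np

def pca_features(train: np.ndarray, k: int = 100):
    """Project vectorised palmprint ROIs onto the top-k principal
    components. `train` is (n_samples, n_pixels)."""
    mean = train.mean(axis=0)
    centred = train - mean
    # SVD of the centred data: rows of vt are the principal directions
    # (eigenvectors of the covariance matrix).
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    basis = vt[:k]
    return centred @ basis.T, mean, basis

feats, mean, basis = pca_features(np.random.rand(40, 128 * 128), k=20)
print(feats.shape)  # (40, 20)
```

A test image is projected the same way, `(test - mean) @ basis.T`, before distance matching.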
Three experiments were conducted and the identification results are compared. The experiments are evaluated on the PolyU database, which contains 7752 grayscale palmprint images in BMP format from 386 different palms. Around twenty samples were collected from each palm in two sessions, with around ten samples captured in each session. The average interval between the two sessions was two months.
LGXP feature: In this experiment, four samples from each of 150 persons,
captured during the first session, are used as the training set and the remaining
samples are used in the testing phase. The success of 2D Gabor phase coding
depends on the selection of the Gabor filter parameters θ, σ and u. The optimized
parameters of the Gabor filter are given as θ = 30°, σ = 5.5
and u = 0.091 (Zhang et al., 2003). The dimension
of the PCA feature is selected as 100. In the training phase the LGXP feature
is obtained for the above parameters and stored in the database as the master template.
In the testing phase, the LGXP feature is obtained for the test image and matching
is done using Euclidean distance. Since, the database contains four images of
each person, for each test image four matching scores are generated by the matcher.
|| Encoding method of LGXP
The minimum of these distances is taken as the matching score and thus the
name Minimum Distance Rule (MDR). After calculating the distance, the image
can be recognized by using the thresholding technique which is described in
the following pseudo code.
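The original pseudo code is not reproduced here; the following is a minimal Python sketch of the thresholded Minimum Distance Rule, with the template layout and threshold treated as assumptions.

```python
import numpy as np

def identify(test_feat, templates, threshold):
    """Minimum Distance Rule sketch: `templates` maps each enrolled
    identity to its list of stored feature vectors. The smallest
    Euclidean distance over a person's templates is that person's
    matching score; the test image is accepted as the closest identity
    only if its score falls below the threshold, else it is rejected."""
    best_id, best_score = None, np.inf
    for person, feats in templates.items():
        score = min(np.linalg.norm(test_feat - f) for f in feats)  # MDR
        if score < best_score:
            best_id, best_score = person, score
    return best_id if best_score < threshold else None
```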
MOLGXP feature: In the above experiment only a single orientation, θ = 30°, was used; here the LGXP features are obtained for six different orientations, θ = 30, 60, 90, 120, 150 and 180°.
They are concatenated to form the total feature set, which is called the
Multiple Orientation LGXP (MOLGXP) feature.
MOLGXP and PCA feature: In addition to the MOLGXP, the PCA feature is
also computed for each palmprint image. In the testing phase the minimum
Euclidean distance is computed for the MOLGXP feature and the PCA feature. Let m1
represent the matching score from the MOLGXP matcher and m2 from
the PCA matcher. The combined score m using the sum rule (Ross
et al., 2006) is given by:
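One common instantiation of the sum rule is sketched below, with min-max normalisation and an equal-weight assumption; the paper's exact normalisation and weights are not specified here.

```python
def minmax(score, lo, hi):
    """Map a raw distance score onto [0, 1] given its observed range
    (lo and hi would come from the training scores; assumed here)."""
    return (score - lo) / (hi - lo)

def fuse(m1, m2, w=0.5):
    """Sum-rule fusion of the two normalised matcher scores; w = 0.5
    corresponds to the plain (unweighted) sum rule up to a constant."""
    return w * m1 + (1 - w) * m2
```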
Fig. 4a and b show the real and imaginary parts of the filtered palmprint image and Fig. 5a and b show the magnitude and phase parts of the filtered image for different orientations.
|| (a) Real part and (b) Imaginary part for different orientations
|| (a) Magnitude part and (b) Phase part for different orientations
|| Receiver operating characteristics
Fig. 6 shows the Receiver Operating Characteristics (ROC), a plot of the genuine acceptance rate against the false acceptance rate at different threshold values, for all three experiments.
This subsection presents a comparative analysis of the proposed approach.
We have compared the recognition accuracy of the proposed approach with some
existing approaches. Computing the False Acceptance Rate (FAR) and False Rejection
Rate (FRR) is the common way to measure biometric recognition accuracy.
FAR is the percentage of incorrect acceptances, i.e., the percentage of distance
measures of different people's images that fall below the threshold. FRR
is the percentage of incorrect rejections, i.e., the percentage of distance measures
of the same person's images that exceed the threshold. The Genuine Acceptance Rate
(GAR) gives the recognition rate and is given by GAR = 1-FRR.
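These definitions translate directly into code; a sketch assuming precomputed lists of genuine (same-palm) and impostor (different-palm) distances:

```python
import numpy as np

def error_rates(genuine, impostor, threshold):
    """FAR, FRR and GAR at a given distance threshold. Acceptance means
    the distance falls below the threshold."""
    genuine = np.asarray(genuine)
    impostor = np.asarray(impostor)
    far = np.mean(impostor < threshold)   # impostors wrongly accepted
    frr = np.mean(genuine >= threshold)   # genuine users wrongly rejected
    return far, frr, 1.0 - frr            # GAR = 1 - FRR

far, frr, gar = error_rates([0.1, 0.2, 0.9], [0.3, 0.8, 1.2], threshold=0.5)
```

Sweeping the threshold and plotting GAR against FAR yields the ROC curve of Fig. 6.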
|| Performance measure for the proposed and the existing methods
Table 1 gives the recognition rates and accuracy measures for the proposed and the existing methods.
In this study, we have proposed a palmprint recognition system based on LGXP and principal component features. Different experiments have been conducted. In the first case only the LGXP feature for a single orientation is considered and a recognition rate of 94.34% is achieved; this improves to 97.08% when the LGXP features for six different orientations are concatenated. In the third case principal component features are extracted and their matching score is combined with the MOLGXP score using score-level fusion, further improving the recognition rate to 98.97%. The proposed technique is found to perform better than the competitive code and fusion code.
1: Ross, A.A., K.J. Nandakumar and K. Anil, 2006. Handbook of Multibiometrics. Springer, New York, ISBN-13: 9780387222967, pp: 37-57.
2: Kong, A.W.K., D. Zhang and G. Lu, 2006. A study of identical twins' palmprints for personal verification. Pattern Recognit., 39: 2149-2156.
3: Kong, A., D. Zhang and M. Kamel, 2006. Palmprint identification using feature-level fusion. Pattern Recognit., 39: 478-487.
4: Kong, A.W.K. and D. Zhang, 2004. Competitive coding scheme for palmprint verification. Proceedings of the 17th International Conference on Pattern Recognition, August 23-26, 2004, Cambridge, pp: 520-523.
5: Kumar, A. and H.C. Shen, 2004. Palmprint identification using palmcodes. Proceedings of the IEEE 1st Symposium on Multi-Agent Security and Survivability, December 18-20, 2004, USA., pp: 258-261.
6: Jain, A.K., R. Bolle and S. Pankanti, 1999. Biometrics: Personal Identification in Networked Society. Kluwer Academic Publishers, Boston/Dordrecht/London.
7: Zhang, D., Z. Guo, G. Lu, D. Zhang and W. Zuo, 2010. An online system of multispectral palmprint verification. IEEE Trans. Instrum. Meas., 59: 480-490.
8: Zhang, D., 2004. Palmprint Authentication. Kluwer Academic Publication, Boston/Dordrecht/London, ISBN-13: 9781402080975, Pages: 242.
9: Zhang, D., W.K. Kong, J. You and M. Wong, 2003. Online palmprint identification. IEEE Trans. Pattern Anal. Mach. Intell., 25: 1041-1050.
10: Liu, D., D.M. Sun and Z.D. Qiu, 2008. Wavelet decomposition 4-feature parallel fusion by quaternion euclidean product distance matching score for palmprint verification. Proceedings of the 9th International Conference on Signal Processing, October 26-29, 2008, Beijing, China, pp: 2104-2107.
11: Rani, P.E. and R.S. Lakshmi, 2012. An efficient palmprint recognition system based on extensive feature sets. Eur. J. Sci. Res., 71: 520-537.
12: Li, F. and M.K.H. Leung, 2006. Hierarchical identification of palmprint using line-based hough transform. Proceedings of the 18th International Conference on Pattern Recognition, August 20-24, 2006, Hong Kong, pp: 149-152.
13: Lu, G., K. Wang and D. Zhang, 2002. Wavelet-based feature extraction for palmprint identification. Proceeding of the 2nd International Conference on Image and Graphics, August 16-18, 2002, Hefei, China, pp: 780-784.
14: Fung, G.S.K., R.W.H. Lau and J.N.K. Liu, 1997. A signature based password authentication method. Proceedings of the IEEE International Conference on Computational Cybernetics and Simulation, Systems, Man and Cybernetics, October 12-15, 1997, Orlando, Florida, USA., pp: 631-636.
15: Lu, G., D. Zhang and K. Wang, 2003. Palmprint recognition using eigenpalms features. Pattern Recognit. Lett., 24: 1463-1467.
16: Guo, J., L. Gu, Y. Liu, Y. Li and J. Zeng, 2010. Palmprint recognition based on kernel locality preserving projections. Proceedings of the 3rd International Congress on Image and Signal Processing, October 16-18, 2010, Yantai, pp: 1909-1913.
17: Lu, J., Y. Zhao, Y. Xue and J. Hu, 2008. Palmprint recognition via locality preserving projections and extreme learning machine neural network. Proceedings of the 9th International Conference on Signal Processing, October 26-29, 2008, Beijing, pp: 2096-2099.
18: Jin, S.N. and H.R. Kang, 2005. Palmprint identification algorithm using hu invariant moments and otsu binarization. Proceedings of the 4th Annual ACIS International Conference on Computer and Information Science, July 14-16, 2005, South Korea, pp: 94-99.
19: Yang, J., G. Shi, S. Chang, Z. Tan and Z. Shang, 2009. A novel method of minutiae filtering based on line feature extraction. Proceedings of the International Conference on Intelligent Human-Machine Systems and Cybernetics, August 26-27, 2009, Hangzhou, Zhejiang, pp: 343-346.
20: Wong, K.Y.E., A. Chekima, J.A. Dargham and G. Sainarayanan, 2008. Palmprint identification using sobel operator. Proceedings of the 10th International Conference on Control, Automation, Robotics and Vision, December 17-20, 2008, Hanoi, pp: 1338-1341.
21: Zhu, L. and R. Xing, 2009. Hierarchical palmprint recognition based on major line feature and dual tree complex wavelet texture feature. Proceedings of the 6th International Conference on Fuzzy Systems and Knowledge Discovery, August 14-16, 2009, Tianjin, pp: 15-19.
22: Zhu, L., S. Zhang and R. Xing, 2008. Palmprint recognition based on PFI and fuzzy logic. Proceedings of the 5th International Conference on Fuzzy Systems and Knowledge Discovery, October 18-20, 2008, Shandong, pp: 178-182.
23: Yu, P., P. Yu and D. Xu, 2010. Comparison of PCA, LDA and GDA for palmprint verification. Proceedings of the International Conference on Information, Networking and Automation, October 18-19, 2010, Kunming, pp: V1-148-V1-152.
24: Jia, W. and D.S. Huang, 2007. Palmprint verification based on robust orientation code. Proceedings of the International Joint Conference on Neural Networks, August 12-17, 2007, Oriando, Florida, USA., pp: 2510-2514.
25: Yang, W., S. Wang, L. Jie and G. Shao, 2008. A new palmprint identification technique based on a two-stage neural network classifier. Proceedings of the 4th International Conference on Natural Computation, October 18-20, 2008, Jinan, pp: 18-23.
26: Wu, X., K. Wang and D. Zhang, 2004. Palmprint recognition using directional line energy feature. Proceedings of 17th International Conference on Pattern Recognition, Vol. 4, August 23-26, 2004, Cambridge, pp: 475-478.
27: Xu, X. and Z. Guo, 2010. Multispectral palmprint recognition using quaternion principal component analysis. Proceedings of the International Workshop on Emerging Techniques and Challenges for Hand-Based Biometrics, August 22-22, 2010, Istanbul, pp: 1-5.
28: Han, Y.F., Z. Sun and T. Tan, 2009. Palmprint recognition using coarse-to-fine statistical image representation. Proceedings of the 16th IEEE International Conference on Image Processing (ICIP), November 7-10, 2009, Cairo, pp: 1969-1972.
29: Wang, Y. and Q. Ruan, 2006. Palm-line extraction using steerable filters. Proceedings of the 8th International Conference on Signal Processing, Vol. 3, November 16-20, 2006, Beijing, China.
30: Wang, Y. and Q. Ruan, 2006. Kernel fisher discriminant analysis for palmprint recognition. Proceedings of the 18th International Conference on Pattern Recognition, August 20-24, 2006, Hong Kong, pp: 457-460.
31: Zhao, S., Y. Xu and Y.P. Liu, 2010. Palmprint verification based on orthogonal code. Proceedings of the 3rd International Conference on Information and Computing, Vol. 3, June 4-6, 2010, Wuxi, Jiang Su, pp: 221-224.
32: Guo, Z., W. Zuo, L. Zhang and D. Zhang, 2009. Palmprint verification using consistent orientation coding. Proceedings of the 16th IEEE International Conference on Image Processing, November 7-10, 2009, Cairo, pp: 1985-1988.