
Research Journal of Information Technology

Year: 2013 | Volume: 5 | Issue: 2 | Page No.: 200-208
DOI: 10.17311/rjit.2013.200.208
AAAM-face Based Authentication System for Information Security
N.R. Raajan, M.V. Priya, S. Shiva Prasad and R. Rakesh

Abstract: Information security plays a vital role in many applications in which millions of people access and share their data. Advances in technology lead to more accurate prediction of the output; in 3D face recognition this advancement can be attained using the AAAM technique. This paper deals with the Alternative Advanced Active Appearance Model (AAAM) method of 3D face recognition. For better results, more accurate localisation of facial features is necessary, and this is achieved using our technique (AAAM). It makes use of reference points obtained from one image or an image sequence, so it can be used for a range of face image interpretation tasks. By using stereoscopy we obtain a depth map, through which the recognition becomes highly accurate and efficient.



Keywords: 3D, stereoscopy, AAM, security, face, depth map and AAAM

INTRODUCTION

In the world of speeding technology, the need to make an individual's personal database highly secure is inevitable. The term secure means that the database should be accessible only by that particular individual. Security for one's personal database can be provided in different ways, such as fingerprint recognition, voice recognition and eye scans, but the security provided by all these techniques can be breached relatively easily. The best in personal database security can be achieved through the technology of 3D face recognition. There is also a hybrid approach available (Raajan et al., 2012) and face recognition under generic illumination based on harmonic relighting (Qing et al., 2002). There are various techniques in face recognition systems, such as principal component analysis (Khan et al., 2004), skin color segmentation and template matching algorithms (Wang and Li, 2011) and skin detection algorithms (Tabassum et al., 2010). The 3D technique of face recognition can be carried out by many methods; in particular, the AAAM method plays a prominent role in 3D face recognition by removing the drawbacks of other methods. In the AAAM method, recognition is done by considering various features of the human face under different facial expressions, so confusion is avoided entirely.

In this study, the AAAM method of 3D face recognition is implemented, with more accurate localisation of facial features for better results. The method employs stereoscopy, through which the depth map is determined.

In AAM (Active Appearance Modelling) (Asthana et al., 2009), face detection is done in the following manner. First, a 2D image of the face to be detected is taken as input; then particular spots on the image are marked using software and the distances between those marked points are found. The distances between the marked spots are calculated and compared with the database that is already available. In some cases the facial expression in the input 2D image may differ from that of the reference image in the database; such misalignment of the input 2D image with the database can be overcome by taking a larger number of reference points when calculating the distances. In such cases at least a few points will align with those in the database and the face will be detected.

AAAM: The drawbacks of the AAM technique, such as misalignment of the locations and distances between the spots on the input 2D image and those in the database, can be overcome by an advanced technique called AAAM. In AAAM, the need for a larger number of reference points when facial expressions vary is avoided. Only the points whose positions remain constant under any facial expression are taken into account; the distances between them are calculated and compared with the database. Thus the complications of AAM are overcome by AAAM, and with less effort the same output as AAM is achieved (a minimal sketch of this matching idea is given below).
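As a rough illustration (not the authors' implementation), the following Python sketch compares the pairwise distances between a few expression-stable landmarks against an enrolled template; the landmark coordinates, the scale normalisation and the tolerance are assumptions made only for the example.

import numpy as np

def pairwise_distances(landmarks):
    # Distance between every pair of landmark points, each pair counted once.
    pts = np.asarray(landmarks, dtype=float)
    diffs = pts[:, None, :] - pts[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    iu = np.triu_indices(len(pts), k=1)
    return dists[iu]

def matches_template(probe_landmarks, template_distances, tol=0.1):
    # Accept the probe face if its normalised distance vector is close to the enrolled one.
    probe = pairwise_distances(probe_landmarks)
    probe = probe / probe.mean()                      # cancel overall scale (camera distance)
    template = template_distances / template_distances.mean()
    return np.max(np.abs(probe - template)) < tol

# Hypothetical usage: enrol four stable points (e.g., eye corners, nose tip), then verify a probe.
enrolled = pairwise_distances([(30, 40), (70, 40), (50, 60), (50, 80)])
probe = [(31, 41), (71, 39), (50, 61), (50, 79)]
print(matches_template(probe, enrolled))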

METHODOLOGY

The most effective biometric method of security management is 3D face recognition using the AAAM technique. Our method is more resistant to the variations that occur due to differences in facial expression. Since our method considers only the points between which the distance does not alter under any facial expression, the drawbacks that arise in AAM are avoided.

Steps in AAM: In the AAM technique (Corcoran et al., 2007), the input image to be verified against the predefined database is described by the shape X, the parameter vector b controlling the shape, and the texture g in the model frame:

X = X̄ + Q_s b    (1)

g = ḡ + Q_g b    (2)

where X̄ and ḡ are the mean shape and the mean texture in a mean-shaped patch, respectively. The derived modes of variation are described by the matrices Q_s and Q_g obtained from the training set.
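As an illustration of Eq. 1 and 2 only (not the authors' code), the following Python sketch synthesises a shape and texture instance from a parameter vector b, given a mean shape and texture and mode matrices Q_s and Q_g; the array sizes are arbitrary assumptions.

import numpy as np

def synthesise(b, x_mean, Q_s, g_mean, Q_g):
    # Linear AAM model: shape and texture are the mean plus a weighted sum of modes.
    x = x_mean + Q_s @ b      # Eq. 1: model-frame shape
    g = g_mean + Q_g @ b      # Eq. 2: model-frame (shape-normalised) texture
    return x, g

# Toy dimensions: 5 landmarks (10 coordinates), 20 texture samples, 3 modes of variation.
rng = np.random.default_rng(0)
x_mean = rng.normal(size=10)
g_mean = rng.normal(size=20)
Q_s = rng.normal(size=(10, 3))
Q_g = rng.normal(size=(20, 3))
shape, texture = synthesise(np.array([0.5, -0.2, 0.1]), x_mean, Q_s, g_mean, Q_g)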

Global transformations (Corcoran et al., 2007) are applied to the model-frame points, generating the shape, Y, in the image frame. The texture in the image frame is then generated by applying an offset and a scaling to the intensities already generated in the model frame. The texture is generated for full reconstruction in a mean-shaped patch. The model points are made to rest on the image points by a warping, y.
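A minimal sketch of the global transformation step, assuming a 2D similarity transform (scale, rotation, translation) for the shape and a scale/offset for the intensities; the parameter names are illustrative, not taken from the paper.

import numpy as np

def to_image_frame(x_model, scale, theta, tx, ty):
    # Map model-frame points, given as (x, y) rows, into the image-frame shape Y.
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return scale * x_model @ R.T + np.array([tx, ty])

def to_image_texture(g_model, alpha, beta):
    # Apply an intensity scaling and offset to the model-frame texture.
    return alpha * g_model + beta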

AAM can minimise sum-of-squares problems of the form:

E(p) = s(p)^T s(p)    (3)

where s(p) is the vector of the m residuals at the parameters p. With s = s(p), a first-order Taylor expansion near p gives:

s(p + δp) ≈ s(p) + J δp    (4)

where J is the Jacobian:

J = ∂s/∂p    (5)

In this case, E(p + δp) = (J δp + s)^T (J δp + s) = δp^T J^T J δp + 2 δp^T J^T s + constant.

Differentiating with respect to δp gives the gradient:

∂E/∂δp = 2 J^T J δp + 2 J^T s    (6)

Equating the gradient to zero gives the minimum, which yields:

J^T J δp = -J^T s    (7)

On solving Eq. 4, the displacement δp from the current location p to the minimum can be found. For higher efficiency of the AAM, J is assumed to be constant and is estimated once from the training set. Let:

A = (J_0^T J_0)^(-1) J_0^T

where A is the pseudo-inverse of J_0.

The solution is then provided by solving Eq. 4 through the matrix multiplication δp = -A s. The basic AAM fitting algorithm is formed accordingly. Initialize: set p_0, s_0 = s(p_0), d = 0, i_min = 0.125.

Loop:

1. δp_d = -A s_d
2. i = 1.0
3. p_(d+1) = Update(p_d, i δp_d)
4. s_(d+1) = s(p_(d+1))
5. If |s_(d+1)|^2 > |s_d|^2 and i > i_min, then set i = 0.5 i and go to step 3
6. d = d + 1

Until i < i_min or d > d_max
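To make the loop concrete, here is a minimal runnable sketch in Python. The residual function s(p) and the precomputed matrix A would come from the trained model; the toy linear residual, the simple additive update and the limits used below are assumptions made only for illustration.

import numpy as np

def fit_aam(residual, A, p0, i_min=0.125, d_max=50):
    # Follows the loop above: predict a parameter displacement dp = -A s, then
    # halve the step-size scaling i whenever the squared residual grows.
    p = p0
    s = residual(p)
    for d in range(d_max):
        dp = -A @ s                          # predicted displacement (step 1)
        i = 1.0                              # step-size scaling (step 2)
        improved = False
        while i >= i_min:
            p_new = p + i * dp               # Update(p_d, i dp_d), taken as a simple addition here
            s_new = residual(p_new)
            if s_new @ s_new <= s @ s:       # squared residual did not grow: accept the step
                p, s = p_new, s_new
                improved = True
                break
            i *= 0.5                         # error grew: retry with a smaller step (step 5)
        if not improved:                     # no step size reduced the error: stop
            break
    return p

# Toy usage: residual s(p) = J p - b, so A = pinv(J) drives p to the least-squares solution.
rng = np.random.default_rng(1)
J = rng.normal(size=(8, 3))
b = rng.normal(size=8)
A = np.linalg.pinv(J)
p_hat = fit_aam(lambda p: J @ p - b, A, p0=np.zeros(3))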

Once the 3D face is generated, this technique is used to compare and verify the face against the database.

Steps in AAAM: The steps of the AAM technique described above are complicated and time consuming. In the AAAM we propose, those complicated steps are avoided and time is saved; despite this, the output obtained is the same as that obtained by the AAM technique. The outputs obtained by our AAAM technique are shown in the experimental results below.

Stereoscopy: There are various techniques used for 3D face recognition, such as local shape difference boosting (Wang et al., 2010), Gabor filters (Li et al., 2011), nonnegative matrix factorization (Yang and He, 2010), adaptive multiscale retinex (Sani et al., 2010) and neural networks (Wakaf and Saii, 2009).

Fig. 1: Stereoscopic model

Stereoscopy is an advanced 3D technique in which the 3D image of a given input image is obtained by overlapping its right and left images.

(8)

An object A is at a distance I from the stereo camera lenses, which are spaced a distance B apart, as in Fig. 1. The image of the object at A is formed at A1 and A2, while an object at infinity is imaged at O1 and O2. Since the situation is symmetric, half of the stereoscopic deviation is P/2.

From similar triangles we have (P/2)/I_1 = (B/2)/I, or P/B = I_1/I.

The ratio I_1/I is the magnification M. From this we get the basic stereoscopic equation:

P = M B    (9)

The above equation can be written as:

P = F B / I    (10)

if the object is far away from the lens, using the low-magnification approximation M ≈ F/I, where F is the focal length of the lens.

The parallax with respect to infinity is given by Eq. 10. If the far object is at I_max and the nearer object is at I_min, then the stereoscopic deviation is given in a general way as:

P = F B (1/I_min - 1/I_max)    (11)

The stereoscopic deviation is inversely proportional to the distance and directly proportional to the focal length and the stereo base.
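As a worked example of Eq. 10 and 11 (the numbers and millimetre units are illustrative assumptions, not values from the paper):

def deviation(focal_length, stereo_base, distance):
    # Eq. 10: parallax with respect to infinity under the low-magnification approximation.
    return focal_length * stereo_base / distance

def deviation_range(focal_length, stereo_base, i_near, i_far):
    # Eq. 11: total stereoscopic deviation between the nearest and farthest object.
    return focal_length * stereo_base * (1.0 / i_near - 1.0 / i_far)

# Example: 50 mm lenses, 65 mm stereo base, subject between 2 m and 5 m.
print(deviation(50.0, 65.0, 2000.0))                 # about 1.63 mm
print(deviation_range(50.0, 65.0, 2000.0, 5000.0))   # about 0.98 mm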

Stereoscopic depth calculation: The depth of the given face can be calculated by measuring the variations in the intensity of the stripes projected on the face, as follows:

where i and j are image coordinates, x and y are the world coordinates of the measurement point, I0(i,j) and I(i,j) are the intensities of the original image and the observation pattern, respectively, M(n) is the intensity modulation function of the projection pattern, R(x,y) is the surface reflection function of the object and the remaining terms are constants.

EXPERIMENTAL RESULTS

We have taken the input images of Fig. 2a-c and determined their depth maps using stereoscopy. The 3D wireframe outputs for sample image 1 (front, left side and right side wireframes) are given in Fig. 3a-c, respectively. Figure 4a-c shows the 3D wireframe outputs for sample image 2 (front, left side and right side wireframes), respectively.

Fig. 2(a-c): Input sample (a) Image 1, (b) Image 2 and (c) Image 3

Fig. 3(a-c): 3D wireframe outputs for sample image 1, (a) Front wireframe, (b) Left side wireframe and (c) Right side wireframe

Fig. 4(a-c): 3D wireframe outputs for sample image 2, (a) Front wireframe, (b) Left side wireframe and (c) Right side wireframe

Fig. 5(a-c): 3D wireframe outputs for sample image 3, (a) Front wireframe, (b) Left side wireframe and (c) Right side wireframe

For sample image 3, the 3D wireframe outputs (front, left side and right side wireframes) are given in Fig. 5a-c, respectively.

The mesh output patterns 1, 2, 3 and 4 for the Lena image are given in Fig. 6a-d, respectively. The mesh region can be selected manually by the user, as in Fig. 7a. Certain points, such as the eyes and the mouth, are taken into account, as in Fig. 7b. Finally a 3D mesh is obtained. The reconstruction of the wireframe from the mesh and the construction of the wireframe are given in Fig. 8a and b, respectively.

Fig. 6(a-d): Lena image 3D mesh outputs, (a) Mesh output 1, (b) Mesh output 2, (c) Mesh output 3 and (d) Mesh output 4

Fig. 7(a-b): 3D mesh outputs, (a) Construction of the structural mesh and (b) After the triangulation construction process

Fig. 8(a-b): 3D mesh outputs of (a) Reconstruction of wireframe from mesh and (b) Construction of wireframe

CONCLUSION

For the purpose of handling and storing data in a secured way, we implement a network security technique. Being stored in non-volatile memory, an adequate amount of data is retained and can be accessed whenever necessary. The advancement in 3D technology is achieved through our AAAM technique in a simple and efficient manner; thus the drawbacks of the AAM technique are overcome by the AAAM technique. A face can also be recognised when only half the information is available (Harguess and Aggarwal, 2009). Such systems can be used mainly by investigators searching criminal databases.

REFERENCES

  • Asthana, A., J. Saragih, M. Wagner and R. Goecke, 2009. Evaluating AAM fitting methods for facial expression recognition. Proceedings of the 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, September 10-12, 2009, Amsterdam, Netherlands, pp: 1-8.


  • Corcoran, P., M.C. Ionita and I. Bacivarov, 2007. Next generation face tracking technology using AAM techniques. Proceedings of the International Symposium on Signals, Circuits and Systems, Volume 1, July 13-14, 2007, Iasi, Romania, pp: 1-4.


  • Harguess, J. and J.K. Aggarwal, 2009. A case for the average-half-face in 2D and 3D for face recognition. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, June 20-25, 2009, Miami, FL., USA., pp: 7-12.


  • Yang, H. and G. He, 2010. Online face recognition algorithm via nonnegative matrix factorization. Inform. Technol. J., 9: 1719-1724.


  • Khan, M.A.U., M.K. Khan, M.A. Khan, M.T. Ibrahim, M.K. Ahmed and J.A. Baig, 2004. Principal component analysis of directional images for face recognition. Inform. Technol. J., 3: 290-295.


  • Qing, L., S. Shan, W. Gao and B. Du, 2002. Face recognition under generic illumination based on harmonic relighting. Computing, 19: 1-14.


  • Raajan, N.R., M.V. Priya, S. Suganya, D. Parthiban, A.J. Philomina, B. Monisha and M.R. Kumar, 2012. A new approach of stereoscopic imaging analysis for biometric recognition using traditional eigenface technique. Proceedings of the International Conference on Computing, Communication and Applications, February 22-24, 2012, Dindigul, Tamil Nadu, India, pp: 1-4.


  • Sani, M.M., K.A. Ishak and S.A. Samad, 2010. Classification using adaptive multiscale retinex and support vector machine for face recognition system. J. Applied Sci., 10: 506-511.


  • Tabassum, M.R., A.U. Gias, M.M. Kamal, S. Islam and H.M. Muctadir et al., 2010. Comparative study of statistical skin detection algorithms for sub-continental human images. Inform. Technol. J., 9: 811-817.


  • Wang, Y., J. Liu and X. Tang, 2010. Robust 3D face recognition by local shape difference boosting. IEEE Trans. Pattern Anal. Mach. Intell., 32: 1858-1870.


  • Li, W., Y. Lin, H. Li, Y. Wang and W. Wu, 2011. Multi-channel gabor face recognition based on area selection. Inform. Technol. J., 10: 2126-2132.


  • Wang, Z. and S. Li, 2011. Face recognition using skin color segmentation and template matching algorithms. Inform. Technol. J., 10: 2308-2314.


  • Wakaf, Z. and M.M. Saii, 2009. Frontal colored face recognition system. Res. J. Inform. Technol., 1: 17-29.
