Saturday, June 26, 2010

Biometric Recognition Methodology part 1/4 - Intro and Preprocessing

METHODOLOGY


                  
                  
3.1 Introduction
  
    This chapter describes the implementation of the chosen method using the relevant theory; that is, it explains how the different mathematical techniques are combined to achieve the research objectives. There are four (4) phases in the proposed face recognition system, namely Preprocessing, Feature Extraction, Training and Recognition. Each phase is briefly described as follows:


a)    Preprocessing. In this phase, the face dataset acquisition and the preprocessing of the face images are performed.


b)    Feature extraction. The face library images prepared in the preprocessing phase are used for feature extraction. This phase is performed to find the useful features, namely the eigenvalues $(\lambda_i)$, eigenvectors $(v_i)$, eigenfaces $(u_i)$ and feature vectors $(\Omega)$.


c)    Training phase. The extracted feature vectors are used to train a backpropagation neural network, which generalizes the network weights for the recognition phase.


d)    Recognition phase. The set of chosen eigenfaces, feature vectors and neural network weights is then used in the recognition phase. Recognition begins by selecting a face image from the face library, which the system treats as the unknown face.
  
Figure 3.1 illustrates the methodology used to recognize an unknown human face and clearly shows where the four phases are located. For the training and recognition, three (3) models are proposed in this research. Each of these phases is then described, together with its algorithm, in this chapter.


Figure 3.1: Proposed Modeling System



3.2 Preprocessing
   
    This section prepares the face images for the feature extraction phase. There are four (4) primary steps, namely face dataset acquisition, format change, face library formation and training set acquisition (Figure 3.2). Each step is described in the following paragraphs.
Figure 3.2: Diagram for Preprocessing Technique


3.2.1 Face Dataset Acquisition
   
    The face dataset was collected via internet sources, specifically from the Olivetti Research Laboratory (ORL). The ORL dataset includes 10 different images of each of 40 distinct individuals. It is a grayscale face database whose images contain slight variations in lighting and facial expression, which represent the environmental changes expected in real-time use. However, due to the limitation of the available computational capacity, the experiments took a sample of 150 face images – ten (10) face images each of 15 persons.
   
    All face images are stored in a face dataset library in the system. Every action, such as the training phase and eigenface computation, is performed from this face library. After acquisition and preprocessing, each face is added to the face library together with its weight vector. The eigenface weight vectors of each image remain empty until a training set is chosen and the eigenfaces are produced.
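
As an illustration only, a face library record of this kind could be represented as follows. This is a minimal sketch; the field names and file paths are assumptions, not part of the original system.

# Minimal sketch of one face library record (illustrative only).
face_library = [
    {
        "path": "ORL/s1/1.pgm",   # location of the preprocessed image (example path)
        "subject": 1,             # individual (class) identifier
        "weights": None,          # eigenface weight vector, filled in only
                                  # after a training set is chosen
    },
    # ... one record per face image in the dataset
]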


3.2.2 Change Format


    The face dataset should be prepared for the feature extraction phase. The file format of the ORL Face Database was manually changed to Portable Grey Map (PGM) using suitable image processing software. The PGM format is chosen because it is a lowest-common-denominator grayscale image file format.
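
The conversion in this work was done manually with image processing software. As a sketch only, the same step could be automated with the Pillow library; the input and output file names below are assumptions.

from PIL import Image

# Sketch: convert one face image to 8-bit grayscale and save it as PGM.
# File names are examples only.
img = Image.open("face_01.bmp").convert("L")   # "L" = 8-bit grayscale
img.save("face_01.pgm")                        # Pillow writes PGM for "L" images with a .pgm extension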






3.2.3 Face Library Formation Phase


    For convenience, each face dataset was placed in a separate folder named after the dataset. Each folder contains subfolders, with each subfolder representing a different individual. Each subfolder name starts with the symbol “s” followed by a number, from 1 up to the last individual. The face images are numbered from 1 up to the last face image of each individual and placed in the corresponding subfolder.
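
A minimal sketch of enumerating this folder convention is shown below. The root folder name "ORL" is an assumption; the subfolder and file naming follows the convention described above.

import os

# Sketch: walk the face library (subfolders s1, s2, ... with images 1.pgm, 2.pgm, ...).
root = "ORL"                                           # assumed dataset folder name
subjects = [d for d in os.listdir(root) if d.startswith("s")]
for subject in sorted(subjects, key=lambda d: int(d[1:])):
    subject_dir = os.path.join(root, subject)
    images = sorted(os.listdir(subject_dir), key=lambda n: int(n.split(".")[0]))
    for image_name in images:
        print(subject, os.path.join(subject_dir, image_name))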


    In the ORL Face Database, the appearance of an individual is not synchronized with the image numbers (from 1 to 10) used for the other individuals. For each individual (class), some of the images were taken at different times, with slightly varying lighting, facial expressions (open/closed eyes, smiling/not smiling) and facial details (glasses/no glasses). The face images were taken against a dark homogeneous background, with the individuals in an upright, frontal position and some tolerance for side movement. The original ORL face images (Figure 3.3) are of size 92 x 112 pixels. The face images are also resized to 41 x 50 and 20 x 24 to evaluate the PCA and neural network capabilities.
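
As a sketch, the two reduced resolutions mentioned above could be produced with Pillow as follows; the file names are examples only, and the target sizes (width x height) follow the text.

from PIL import Image

# Sketch: resize an original 92 x 112 ORL image to 41 x 50 and 20 x 24.
img = Image.open("ORL/s1/1.pgm")             # example path to an original image
img.resize((41, 50)).save("s1_1_41x50.pgm")  # reduced resolution for PCA/NN experiments
img.resize((20, 24)).save("s1_1_20x24.pgm")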


Figure 3.3: ORL face dataset with the numbering used in the face library formation phase


3.2.4 Training Set Acquisitions


Generally, the entire process is done using matrix operations such as addition, subtraction and multiplication. Matrix operations are chosen because of their potential as powerful tools for arithmetic solutions. In addition, an understanding of matrix transformations is required to implement these methods successfully.


In the proposed system, the process starts by gathering the face images into one big matrix. Consider the first face image of size X x Y. Following M. Turk, this image is converted into a single row vector with P = XY columns, where the row identifies the face image. This process is repeated for the other face images used in the training phase. Figure 3.4 illustrates how a set of face images is converted into a matrix.




Figure 3.4: Training set acquisitions
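
A minimal sketch of this conversion is given below: each X x Y image is flattened into a row of length P = XY and the rows are stacked into a single M x P matrix. The file paths and variable names are assumptions.

import numpy as np
from PIL import Image

# Sketch: flatten each X x Y face image into a 1 x P row vector (P = XY)
# and stack the M training images into an M x P matrix Gamma.
paths = ["ORL/s1/1.pgm", "ORL/s1/2.pgm"]   # ... one example entry per training image
rows = [np.asarray(Image.open(p), dtype=float).flatten() for p in paths]
Gamma = np.vstack(rows)                     # shape (M, P), one image per row
print(Gamma.shape)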

The training set acquisition is described by the following equations. For a training set $\Gamma_1, \Gamma_2, \ldots, \Gamma_M$, the average face of the set, of size (1 x P), is calculated as
                            $\psi = \frac{1}{M} \sum_{n=1}^{M}\Gamma_n$      (3.1)
Each face differs from the average by a difference vector of size (1 x P):


                            $\Phi_i = \Gamma_i - \psi$      (3.2)


The training set acquisition is completed by forming the difference matrix of size (M x P):
                                 $A = (\Phi_1, \Phi_2, \ldots,\Phi_M)$   (3.3)
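
A minimal numpy sketch of equations (3.1)–(3.3) is shown below, assuming the M x P matrix Gamma built in the previous step; the sizes and variable names are illustrative.

import numpy as np

# Sketch of equations (3.1)-(3.3); Gamma is the M x P training matrix.
M, P = 150, 41 * 50                  # example sizes taken from the text
Gamma = np.random.rand(M, P)         # placeholder for the real training matrix

psi = Gamma.mean(axis=0)             # (3.1) average face, size (1 x P)
Phi = Gamma - psi                    # (3.2) difference of each face from the average
A = Phi                              # (3.3) difference matrix, size (M x P)
print(A.shape)                       # (150, 2050)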
