Monday, March 28, 2011

Face Recognition Result and Discussion Part 1/4

CHAPTER 4
RESULT AND DISCUSSION
4.1 Introduction

This chapter describes the results produced by the methodology explained in Chapter 3. The results are shown and a discussion is provided for each experiment. The experiments are divided into three (3) main parts: Principal Component Analysis, training and recognition results, and experimental results. The prototype model designed for this research is also demonstrated in this chapter.




4.2 Principal Component Analysis


Sample face images from the ORL face dataset are shown in Figure 4.1. The samples show seven different persons under different conditions. For ease of explanation, only three face images from each class (person) are taken as the training set. Thus, 21 face images are used as the training set and 49 face images as the testing set. The training set is then converted into a single matrix of size m x P, where m is the number of training images and P is the number of pixels in each face image.
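As a minimal sketch of this step, the snippet below builds the m x P training matrix with NumPy. The directory layout, file names, and the choice of the first three images per class are assumptions for illustration only; ORL images are 92 x 112 pixels, so P = 10304.

```python
import numpy as np
from pathlib import Path
from PIL import Image

# Assumed layout (illustrative only): orl_faces/s1/1.pgm ... orl_faces/s7/3.pgm,
# with the first three images of each of the seven classes used for training.
DATASET_DIR = Path("orl_faces")        # hypothetical location of the ORL data
CLASSES = [f"s{i}" for i in range(1, 8)]
TRAIN_PER_CLASS = 3

def load_training_matrix():
    """Stack the training images into one matrix of size m x P."""
    rows = []
    for cls in CLASSES:
        for k in range(1, TRAIN_PER_CLASS + 1):
            img = Image.open(DATASET_DIR / cls / f"{k}.pgm").convert("L")
            rows.append(np.asarray(img, dtype=np.float64).ravel())
    return np.vstack(rows)

X = load_training_matrix()
print(X.shape)   # (21, 10304) for 92 x 112 ORL images
```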


 Figure 4.1: Example ORL dataset
 

Combining those 21 face images in the first step produces a mean-face image (Figure 4.2). However, this mean-face image carries little information about the training set beyond a sort of middle point: it only gives the average face pixels across the training set.
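Continuing the sketch above, the mean face is simply the pixel-wise average of the training rows, and subtracting it gives the mean-adjusted images used in the later PCA steps.

```python
# Mean face: the pixel-wise average over the 21 training rows of X.
mean_face = X.mean(axis=0)             # shape (P,)

# Mean-adjusted training images, used in the covariance/eigenface steps below.
A = X - mean_face                      # shape (m, P)

# Reshape for display; ORL images are 112 rows x 92 columns.
mean_face_img = mean_face.reshape(112, 92)
```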


 Figure 4.2: Mean Face


To capture more information from those training face images, the covariance matrix is computed to determine how each dimension varies from the mean with respect to every other dimension. Figure 4.3 shows the covariance matrix surface map, which clearly shows that the matrix is square and symmetrical about the main diagonal. The exact value of each entry is not as important as its sign. If the value is positive, both dimensions increase together; if the value is negative, then as one dimension increases, the other decreases. Finally, if the covariance is zero, the two dimensions are uncorrelated. This covariance matrix is then used to calculate the eigenvalues and eigenvectors using the numerical Jacobi method. Refer to Appendix A for the exact values of the covariance matrix.
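A hedged sketch of this step is shown below. To keep the matrix small enough to visualize and decompose, it uses the m x m surrogate matrix A A^T (the usual Turk-Pentland shortcut) rather than the full P x P covariance, and NumPy's symmetric eigensolver stands in for the Jacobi method used in the thesis; both choices are assumptions.

```python
# Small m x m surrogate for the covariance matrix (Turk-Pentland shortcut);
# the thesis's exact scaling is not stated, so division by m is assumed.
m = A.shape[0]
C = (A @ A.T) / m                      # symmetric, square, shape (21, 21)

# The thesis uses Jacobi's method; numpy.linalg.eigh (for symmetric matrices)
# is used here as a stand-in and returns the same eigenpairs.
eigvals, eigvecs = np.linalg.eigh(C)
```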
  
Figure 4.3: Covariance matrix surface map


From Figure 4.4, the eigenvalues decrease quickly as their index increases. The eigenvectors with higher eigenvalues provide more information. All off-diagonal entries of the eigenvalue matrix are zero; the eigenvalues are stored along the main diagonal. The eigenvectors (Figure 4.5) are shown together with their corresponding eigenvalues, where the lower eigenvalues yield similar values across their set of eigenvectors. An eigenvector of a matrix determines a direction in which the effect of the matrix is particularly simple: the matrix expands or shrinks any vector lying in that direction by a scalar multiple, and the expansion or contraction factor is given by the corresponding eigenvalue (M. T. Heath, 2002). Refer to Appendix B for the eigenvalues and Appendix C for the eigenvectors.
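The ordering by decreasing eigenvalue and the property quoted from Heath (2002) can both be checked directly on the decomposition from the previous sketch:

```python
# Sort the eigenpairs by decreasing eigenvalue so the most informative
# directions (largest variance) come first.
order = np.argsort(eigvals)[::-1]
eigvals = eigvals[order]
eigvecs = eigvecs[:, order]

# Check the property quoted from Heath (2002): C v = lambda v for each pair.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(C @ v, lam * v)
```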



Figure 4.4: Eigenvalues matrix surface map

 Figure 4.5: Eigenvector matrix surface map




4.2.1 Eigenfaces


This experiment mainly uses PCA to extract the important features of the trained face images. The eigenvectors are projected back through the face images, producing a set of ghostly face images known as eigenfaces. Figure 4.6 shows the set of eigenfaces corresponding to the trained face images. Each eigenface deviates from uniform grey where some facial feature differs among the set of training faces, so eigenfaces can be viewed as a sort of map of the variations between faces.
             
                                             
                                                   
Figure 4.6: Set of eigenfaces obtained through PCA
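The chapter does not spell out how the eigenvectors of the small matrix become eigenfaces; the sketch below assumes the standard Turk-Pentland construction, in which each small eigenvector v_i is mapped back to image space as A^T v_i and normalized to unit length.

```python
# Map each small eigenvector v_i back to image space as u_i = A^T v_i and
# normalize to unit length (assumed Turk-Pentland construction).
eigenfaces = A.T @ eigvecs             # shape (P, m), one eigenface per column
eigenfaces /= np.linalg.norm(eigenfaces, axis=0)

# The first (largest-eigenvalue) eigenface reshaped as a "ghostly" face image.
first_eigenface_img = eigenfaces[:, 0].reshape(112, 92)
```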



4.2.2 Feature Extraction


This section describes the weight features produced by projecting the mean-adjusted face images onto the set of eigenfaces from the previous section (Figure 4.6). The examples in this section are given for classes 1 and 2 as trained classes and class 8 as an untrained class. Refer to Appendix D for the full results.

Table 4.1 illustrates the original weight features, where only M = 7 eigenfaces are used to form the weight vectors; M is equal to the number of classes (individuals) used in the training phase (M. Turk, 1990).
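A minimal sketch of the projection that produces these weight features, assuming the usual eigenface weights w_k = u_k^T (x - mean face) with M = 7:

```python
M = 7   # one eigenface per training class, as in Table 4.1

def weight_vector(face, mean_face, eigenfaces, M=M):
    """Project one face image onto the first M eigenfaces: w_k = u_k^T (x - mean)."""
    phi = np.asarray(face, dtype=np.float64).ravel() - mean_face
    return eigenfaces[:, :M].T @ phi   # shape (M,)

# Example: weight features of the first training image (class 1).
w_trained = weight_vector(X[0], mean_face, eigenfaces)
```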

Table 4.1a: Original weight features for a trained face

Table 4.1b: Original weight features for an untrained face belonging to the same class



4.2.3 Normalization

Based on the equation given in the previous chapter, the original weight vectors are normalized to transform the outputs into a range suited to the neural network algorithm. Table 4.2 shows the simple normalization, where the normalized values clearly lie between 0 and 1.
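Assuming the "simple normalization" of Chapter 3 is a plain min-max scaling of each weight vector into [0, 1], it can be sketched as:

```python
def simple_normalize(w):
    """Min-max scaling of a weight vector into [0, 1] (assumed reading of the
    'simple normalization' equation from Chapter 3)."""
    w = np.asarray(w, dtype=np.float64)
    return (w - w.min()) / (w.max() - w.min())
```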


Table 4.2a: Simple normalization for a trained face

Table 4.2b: Simple normalization for an untrained face belonging to the same class

Table 4.3 demonstrates the Improved Unit Range (IUR) normalization, where the values lie within 0.1 to 0.9. This range can be adjusted through the formula given in Chapter 3.
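If IUR is read as a min-max scaling rescaled into [0.1, 0.9] (an assumption; the exact formula is given in Chapter 3), a sketch looks like:

```python
def iur_normalize(w, lo=0.1, hi=0.9):
    """Improved Unit Range style scaling into [lo, hi]; assumed to be a
    rescaled min-max mapping (see Chapter 3 for the exact formula)."""
    w = np.asarray(w, dtype=np.float64)
    return lo + (hi - lo) * (w - w.min()) / (w.max() - w.min())
```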

Table 4.3a: IUR normalization for a trained face

Table 4.3b: IUR normalization for an untrained face belonging to the same class

The last normalization technique used in the experiments, known as Improved Linear Scaling (ILS), is shown in Table 4.4. The range of the values is still 0 to 1, but the minimum and maximum are not fixed in advance because the scaling is computed from the variance of the original data. Refer to Appendices E to G for detailed results of all the normalization techniques.
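The exact ILS formula is defined in Chapter 3 and is not reproduced here. Purely as an illustration of a variance-based linear scaling of the same flavour (not necessarily the thesis's formula), one possible sketch is:

```python
def variance_based_scale(w, spread=2.0):
    """Illustrative variance-based linear scaling: centre on the mean, scale by
    `spread` standard deviations, shift into [0, 1] and clip. This is NOT
    claimed to be the thesis's ILS formula, which is defined in Chapter 3."""
    w = np.asarray(w, dtype=np.float64)
    z = (w - w.mean()) / (spread * w.std())
    return np.clip(z + 0.5, 0.0, 1.0)
```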

Table 4.4a: ILS normalization for a trained face

Table 4.4b: ILS normalization for an untrained face belonging to the same class
