Sunday, November 21, 2010

Biometric Recognition Methodology part 2/4 - Feature Extraction

3.3 Feature Extraction

Previously, each face image $\Gamma_i$ of size $X \times Y$ was converted into a row vector of length $P = XY$, and the $M$ training images were stacked into the mean-difference matrix $A$ of size $(M \times P)$. This section (Figure 3.5) describes the computation of the eigenvalues and eigenvectors using Jacobi's method, dimension reduction, the eigenface transformation, the feature vector representation, and how the eigenfaces are used to rebuild the face images.

Figure 3.5: Diagram of Eigenface Formation



3.3.1 Eigenvalue and Eigenvector Implementation
   
    In the PCA approach, the eigenvalues and eigenvectors play a major role in forming the eigenfaces. Indeed, computing the eigenvalues is the most challenging aspect of the eigenfaces approach [Heath, 2002]. The process is initiated by creating the covariance matrix:


$C = A \times A^T = \frac{1}{M} \sum_{i=1}^{M}\phi_i\phi_i^T$   (3.4)


where $A^T$ is the transpose of $A$, so that $C$ has size $(M \times M)$. The covariance matrix is symmetric about its main diagonal, which is an important property for the eigenfaces method. For a face image of $X \times Y$ pixels, the full covariance matrix would instead have size $(P \times P)$, where $P = XY$; such a matrix causes considerable computational and speed complexity. The proposed eigenfaces method therefore calculates the eigenvectors of the $(M \times M)$ matrix, where $M$ is the number of images in the training set, and from them derives the eigenvectors of the $(P \times P)$ matrix.
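To make the size difference concrete, a minimal NumPy sketch is given below; the array names and sizes are illustrative, not taken from the original text:

```python
import numpy as np

M, X, Y = 100, 100, 100          # illustrative: 100 training images of 100x100
P = X * Y                        # pixels per image, P = XY

faces = np.random.rand(M, P)     # stand-in for the vectorised face images
psi = faces.mean(axis=0)         # mean face image
A = faces - psi                  # mean-difference matrix, size (M x P)

C_small = (A @ A.T) / M          # (M x M): 100 x 100, tractable
# C_big = (A.T @ A) / M          # (P x P): 10,000 x 10,000, impractical
```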


    Since the covariance matrix $C$ is symmetric, only the values above the main diagonal need to be processed, because $C(a,b) = C(b,a)$. The procedure continues until all values above the main diagonal are almost zero, below a tolerance $\epsilon$ (for example $\epsilon = 0.0000001$). The eigenvalue algorithm is shown in Figure 3.6.

Figure 3.6: Eigenvalues Algorithm


At each iteration of the eigenvalue computation, the pivot coordinate $(p,q)$ and the rotation values $c$ and $s$ are recorded for later use in building the eigenvector matrix. The procedure in Figure 3.7 is then performed to find the set of eigenvectors.

Figure 3.7:  Eigenvector algorithm
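Since Figures 3.6 and 3.7 are presented as algorithm listings, a minimal Python sketch of the combined Jacobi procedure is given below, assuming a small dense symmetric matrix; the function and variable names are illustrative:

```python
import numpy as np

def jacobi_eigen(C, eps=1e-7, max_iter=10000):
    """Jacobi's method: eigenvalues and eigenvectors of a symmetric matrix."""
    C = C.astype(float).copy()
    n = C.shape[0]
    V = np.eye(n)                      # accumulates rotations (eigenvectors)
    for _ in range(max_iter):
        # Pick the largest element above the main diagonal; by symmetry
        # C[p, q] == C[q, p], so the lower triangle needs no separate check.
        p, q = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda ij: abs(C[ij]))
        if abs(C[p, q]) < eps:         # all off-diagonal values ~ zero: done
            break
        # Rotation values (c, s) chosen so the transform zeroes C[p, q]
        theta = 0.5 * np.arctan2(2.0 * C[p, q], C[q, q] - C[p, p])
        c, s = np.cos(theta), np.sin(theta)
        J = np.eye(n)
        J[p, p] = J[q, q] = c
        J[p, q], J[q, p] = s, -s
        C = J.T @ C @ J                # similarity transform keeps eigenvalues
        V = V @ J                      # apply (p, q), c, s to the eigenvectors
    return np.diag(C), V               # eigenvalues, eigenvectors (columns of V)
```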

From Figures 3.6 and 3.7, the set of eigenvalues and the eigenvector matrix of size $(M \times M)$ are obtained. The eigenvalues and eigenvectors depend on each other; their relationship can be described as $C \times v_i = \lambda_i \times v_i$, where $v_i$ is the eigenvector associated with the eigenvalue $\lambda_i$.

The eigenvector with the largest eigenvalue is the principal component of the data set: it points along the direction of greatest variance in the data and captures the most significant relationship between the data dimensions. The eigenvalues and their eigenvectors are usually arranged from highest to lowest, producing the components in order of significance.

The number of eigenvectors, originally forming a matrix of size $(M \times M)$, can then be reduced. After dimension reduction, the eigenvector matrix becomes $(M \times M_t)$, where $M_t$ is the new number of columns.
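A short sketch of this ordering and reduction step; the covariance stand-in and the choice of $M_t$ are illustrative:

```python
import numpy as np

C = np.cov(np.random.rand(10, 50))     # stand-in (M x M) covariance, M = 10
eigvals, V = np.linalg.eigh(C)         # or jacobi_eigen(C) from the sketch above

order = np.argsort(eigvals)[::-1]      # arrange from highest to lowest
eigvals, V = eigvals[order], V[:, order]

M_t = 5                                # illustrative reduced dimension
V_t = V[:, :M_t]                       # size (M x M_t): the leading eigenvectors
```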

These eigenvalues and eigenvectors are an important property of the eigenfaces method. If in equation (3.4) the covariance matrix were computed as $C = A^T \times A$, where $A^T$ has size $(P \times M)$ and $A$ has size $(M \times P)$, the result would have size $(P \times P)$, where $P = X \times Y$. Such a huge covariance matrix causes computational and time complexity. The matrix transformation described here instead keeps the covariance matrix at size $(M \times M)$, where $M$ is the number of face images used in the training phase.




3.3.2 Eigenface Transformations


Previously, the eigenvectors $V_{mm}$, the eigenvalues $\lambda_{mm}$, and the covariance matrix $C_{mm} = A_{mp} \times A_{pm}$ were obtained, where the matrix sizes are given in the subscripts for additional understanding of the matrix transformations:

$C_{mm} \times V_{mm} = \lambda_{mm} \times V_{mm}$   (3.16)

Since $C$ is symmetric, the same relation holds with the eigenvectors arranged as the rows of $V$:

$V_{mm} \times C_{mm} = \lambda_{mm} \times V_{mm}$

The value of $C$ is substituted into this equation:

$V_{mm} \times A_{mp} \times A_{pm} = \lambda_{mm} \times V_{mm}$

Both sides are multiplied by $A$:

$V_{mm} \times A_{mp} \times A_{pm} \times A_{mp} = \lambda_{mm} \times V_{mm} \times A_{mp}$

and the necessary matrix rearrangements are made. As each $\lambda_i$ is a scalar, this regrouping can be done:

$(V_{mm} \times A_{mp}) \times (A_{pm} \times A_{mp}) = \lambda_{mm} \times (V_{mm} \times A_{mp})$

Grouping $V_{mm} \times A_{mp}$ and calling it the variable $L_{mp} = V_{mm} \times A_{mp}$, the next equation shows that

$L_{mp} \times (A_{pm} \times A_{mp}) = \lambda_{mm} \times L_{mp}$

so each row of $L_{mp}$ is an eigenvector of the full covariance matrix $A_{pm} \times A_{mp}$, and $L$ has size $(M \times P)$.

To form the eigenfaces, the previous eigenvectors $V$, of size $(M \times M)$, are first reduced to the $M_t$ eigenvectors with the largest eigenvalues, where $M_t \le M$. Applying the transformation above with this reduced eigenvector matrix gives the eigenfaces $U = V_t \times A$, of size $(M_t \times P)$.
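A minimal sketch of this eigenface formation under the conventions above ($A$ of size $M \times P$, rows of the reduced eigenvector matrix as eigenvectors); all names and sizes are illustrative:

```python
import numpy as np

M, P, M_t = 10, 2500, 5                     # illustrative sizes
A = np.random.rand(M, P) - 0.5              # stand-in mean-difference matrix

eigvals, V = np.linalg.eigh((A @ A.T) / M)  # eigenvectors of the (M x M) matrix
order = np.argsort(eigvals)[::-1]
V_t = V[:, order[:M_t]].T                   # reduced eigenvectors, (M_t x M)

U = V_t @ A                                 # eigenfaces L = V x A, size (M_t x P)
U /= np.linalg.norm(U, axis=1, keepdims=True)  # normalise each eigenface row
```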


3.3.3 Dimension Reduction

A challenge for the proposed system is to reduce the variation between face images caused by environmental changes, such as lighting, that create noise in the images. The eigenfaces implementation creates a set of human face representations, often described as "ghostly faces", that largely ignore such noise, including lighting and facial expression. However, noise can still exist in the set of eigenvectors. The eigenvectors with lower eigenvalues, or other specific eigenvectors, can therefore be removed, because the lower eigenvectors mainly encode noise in the image [Yambor WS, 2000], [Moon H, 2001].

The last eigenvectors, whose low eigenvalues indicate how little variance they explain between images, can be removed. Three variations for choosing eigenvectors have been proposed [Yambor WS, 2000]. In the first, the last 40% of the eigenvectors are removed [Moon H, 2001]. The second variation (equation 3.25) uses the minimum number of eigenvectors that guarantees the energy $e_i$ is greater than a threshold (typically 0.9), where $e_i$ is the ratio of the sum of all eigenvalues up to and including $i$ over the sum of all eigenvalues:

$e_i = \frac{\sum_{j=1}^{i}\lambda_j}{\sum_{j=1}^{M}\lambda_j}$   (3.25)
Conversely, it is also possible that the first eigenvectors encode information that is not relevant to identifying the image, such as lighting [Moon H, 2001]. The last variation (equation 3.26) therefore depends on the stretching dimension. The stretch $s_i$ for the $i$-th eigenvector is the ratio of its eigenvalue over the largest eigenvalue $\lambda_1$:

$s_i = \frac{\lambda_i}{\lambda_1}$   (3.26)
where the eigenvectors with $s_i$ greater than a threshold (typically 0.01) are chosen.
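A small sketch of the three selection variants, assuming the eigenvalues are already sorted from highest to lowest (the example values are made up):

```python
import numpy as np

eigvals = np.array([9.0, 4.5, 1.2, 0.4, 0.05, 0.001])   # sorted, illustrative

# Variant 1: remove the last 40% of the eigenvectors.
keep_40 = int(np.ceil(0.6 * len(eigvals)))               # -> 4

# Variant 2 (eq. 3.25): smallest i whose cumulative energy e_i exceeds 0.9.
energy = np.cumsum(eigvals) / eigvals.sum()
keep_energy = int(np.searchsorted(energy, 0.9)) + 1      # -> 3

# Variant 3 (eq. 3.26): keep eigenvectors whose stretch exceeds 0.01.
stretch = eigvals / eigvals[0]
keep_stretch = int((stretch > 0.01).sum())               # -> 4
```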

3.3.4 Feature Vectors
The set of eigenfaces $U$, with size $(M_t \times P)$, is used to generate feature vectors for the training images and for an unknown face image. For the training phase, each face image is represented through its mean-difference face $\phi_i$, of size $(P \times 1)$, by:

$\omega_{ik} = u_k \times \phi_i$

with $k = 1, 2, \ldots, M_t$ and $i = 1, 2, \ldots, M$, where $u_k$ is the $k$-th eigenface (row of $U$). The weights represent each training face image, and each weight vector has size $(M_t \times 1)$. Otherwise, for an unknown face image $\Gamma$, of size $(P \times 1)$, to be classified, its weight features are computed by:

$\omega_k = u_k \times (\Gamma - \psi)$

where the mean face image $\psi$, of size $(P \times 1)$, has previously been computed in the training phase. The weight feature vector is:

$\Omega = [\omega_1, \omega_2, \ldots, \omega_{M_t}]^T$

Additionally, for the training phase, which includes all the faces, the weight feature vectors are stacked into a matrix of size $(M \times M_t)$, where each row, $M$, represents one face identity and each column holds the value of one weight.

These weight equations describe the contribution of each chosen eigenface in representing the training images and the unknown input face image. The feature vectors are then used in the learning phase.
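As a sketch, the weight computation for both phases reduces to two matrix products; the names and sizes are illustrative, with $U$ and psi assumed to come from the training steps above:

```python
import numpy as np

M, P, M_t = 10, 2500, 5
faces = np.random.rand(M, P)            # stand-in training images, one per row
psi = faces.mean(axis=0)                # mean face image, size (P,)
U = np.random.rand(M_t, P)              # stand-in eigenfaces, size (M_t x P)

# Training: one (M_t,) weight vector per face, stacked into (M x M_t).
Omega_train = (faces - psi) @ U.T

# Unknown face image Gamma of size (P,): its feature vector of size (M_t,).
Gamma = np.random.rand(P)
omega = U @ (Gamma - psi)
```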

3.3.5 Rebuilding a Face Image
A face can be approximately reconstructed by using its feature vector and the eigenfaces as:

$\Gamma' = \psi + \Phi_f, \qquad \Phi_f = \sum_{k=1}^{M_t}\omega_k u_k$

where $\Gamma'$ is the projected image. This equation shows that the face image is rebuilt simply by adding the weighted sum of the eigenfaces to the mean face image.
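Continuing the earlier sketch, the approximate reconstruction is a single weighted sum (all names remain illustrative stand-ins):

```python
import numpy as np

P, M_t = 2500, 5
psi = np.random.rand(P)                 # stand-in mean face image
U = np.random.rand(M_t, P)              # stand-in eigenfaces
omega = np.random.rand(M_t)             # stand-in feature vector of one face

Gamma_rebuilt = psi + omega @ U         # projected image, size (P,)
```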
