Friday, May 28, 2010

LITERATURE REVIEW PART 3/3 - Artificial Neural Network

2.4 Artificial Neural Network


Artificial Neural Networks (ANNs) hold a large appeal for many AI researchers. A neural network can be defined as a model of reasoning based on the human brain. The brain consists of a densely interconnected set of nerve cells, or basic information-processing units, called neurons. The human brain incorporates nearly 10 billion neurons and 60 trillion connections, or synapses, between them [Shepherd, 1990]. By using many neurons simultaneously, the brain can perform its functions much faster than the fastest computers in existence today [Negnevitsky, 2002].

2.4.1 Architecture



A multilayer perceptron is a feed-forward neural network with one or more hidden layers. Typically, the network consists of an input layer of source neurons, at least one hidden layer of neurons, and an output layer of neurons (Figure 2.3). The input signals are propagated in a forward direction on a layer-by-layer basis. The backpropagation algorithm is perhaps the most popular and widely used neural paradigm. It is based on the generalized delta rule proposed in 1985 by a research group headed by David Rumelhart.


Figure 2.3: Feed-forward Neural Network
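The layer-by-layer forward propagation described above can be sketched as follows. This is a minimal illustration with arbitrary random weights and a sigmoid activation, not the network configuration used in this work:

```python
import numpy as np

def sigmoid(x):
    """Binary sigmoid activation, output in the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weights):
    """Propagate an input vector through the network layer by layer.

    `weights` is a list of (W, b) pairs, one per layer; each layer's
    output becomes the next layer's input.
    """
    a = x
    for W, b in weights:
        a = sigmoid(W @ a + b)
    return a

# A 3-2-1 network: 3 inputs, one hidden layer of 2 neurons, 1 output.
rng = np.random.default_rng(0)
weights = [(rng.standard_normal((2, 3)), np.zeros(2)),
           (rng.standard_normal((1, 2)), np.zeros(1))]
y = forward(np.array([0.5, -0.2, 0.1]), weights)
```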


Before the network can be used, it requires target patterns or signals, as backpropagation is a supervised learning algorithm. Training patterns are obtained from samples of the types of inputs to be given to the backpropagation neural network, and their targets are identified by the researchers. The objective of the algorithm is to find the next value of each adaptation weight, a rule also known as the Generalized Delta Rule (GDR).


The hidden layer weights are adjusted using the errors from the subsequent layer. Thus, the errors computed at the output layer are used to adjust the weights between the last hidden layer and the output layer. Likewise, error values computed from the last hidden layer's outputs are used to adjust the weights in the next-to-last hidden layer, and so on until the weight connections to the first hidden layer are adjusted. In this way, errors are propagated backwards layer by layer, with corrections being made to the corresponding layer weights in an iterative manner. The process is repeated a number of times for each pattern in the training set until the total error converges to a minimum or until some limit is reached in the number of training iterations completed [Patterson, 1999].
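One iteration of this backward error propagation can be sketched as a single training step for a small 2-2-1 network. The network size, the learning rate `eta`, and the training pair are illustrative assumptions, not values from this work:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One backpropagation training step for a 2-2-1 network
# (illustrative weights; eta is a freely chosen learning rate).
rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((2, 2)), np.zeros(2)
W2, b2 = rng.standard_normal((1, 2)), np.zeros(1)
eta = 0.5

x, target = np.array([1.0, 0.0]), np.array([1.0])

# Forward pass, keeping each layer's output for the backward pass.
h = sigmoid(W1 @ x + b1)
y = sigmoid(W2 @ h + b2)

# Output-layer delta: the error times the sigmoid derivative y*(1 - y).
delta_out = (target - y) * y * (1.0 - y)
# Hidden-layer delta: the output delta propagated back through W2.
delta_hid = (W2.T @ delta_out) * h * (1.0 - h)

# Weight corrections, applied from the output layer backwards.
W2 += eta * np.outer(delta_out, h)
b2 += eta * delta_out
W1 += eta * np.outer(delta_hid, x)
b1 += eta * delta_hid
```

Repeating this step over every pattern in the training set, until the total error converges or an iteration limit is reached, gives the training loop described in the paragraph above.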


2.4.2 The Activation Function


    The activation function must be continuous, differentiable, and monotonically non-decreasing. Several activation functions are used in neural networks; the binary sigmoid and the bipolar sigmoid are the ones generally used in neural network training. The binary sigmoid, whose output is normalized to the range 0 to 1, and the bipolar sigmoid, whose output is normalized to the range -1 to +1, are both used in backpropagation training.
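The two sigmoids can be written directly from their standard definitions; the sampling points in `x` are arbitrary illustration values:

```python
import numpy as np

def binary_sigmoid(x):
    """Logistic function: continuous, differentiable, non-decreasing,
    with output normalized to the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def bipolar_sigmoid(x):
    """Sigmoid rescaled so its output lies in the range (-1, +1)."""
    return 2.0 / (1.0 + np.exp(-x)) - 1.0

x = np.array([-5.0, 0.0, 5.0])
```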



2.5 Summary



    Face recognition is a challenging task; many contributions have been made and many variations of approaches exist. This work focuses on using Principal Component Analysis (PCA) to extract patterns and a backpropagation neural network to recognize an unknown face image.
 
    This chapter reports the previous work that used PCA to extract patterns from face images. Since an image is also in matrix form, PCA, a proven method for reducing the dimensionality of a data set, continues to be used in the search for better performance. Jacobi's method is chosen because its capability to find a complete set of eigenvectors and eigenvalues is better than that of other methods. Hence the combination of PCA and a neural network is chosen, as the backpropagation neural network is well suited to the classification task.
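The PCA pattern-extraction step can be sketched as follows. This is a minimal illustration with synthetic data standing in for flattened face images, and it uses NumPy's `eigh` eigensolver in place of a hand-coded Jacobi rotation method:

```python
import numpy as np

# Minimal PCA sketch: each row is a flattened "image" (synthetic data).
rng = np.random.default_rng(2)
images = rng.standard_normal((10, 16))           # 10 images, 16 pixels each

mean_face = images.mean(axis=0)
centered = images - mean_face
cov = centered.T @ centered / (len(images) - 1)  # 16x16 covariance matrix

eigvals, eigvecs = np.linalg.eigh(cov)           # symmetric eigensolver
order = np.argsort(eigvals)[::-1]                # sort eigenvalues descending
components = eigvecs[:, order[:5]]               # keep the top 5 eigenvectors

features = centered @ components                 # patterns fed to the network
```

The rows of `features` are the low-dimensional patterns that would be presented to the backpropagation network for classification.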


    Therefore, the next chapter will discuss the methodology, describing how these chosen methods are applied in this work.
