
An Application of a Parallel Algorithm on an Image Recognition

Ran Baik 1
1Department of Computer Engineering, Honam University, GwangJu, Korea, baik@honam.ac.kr
*Corresponding Author: Ran Baik, #417 Eodeung-daero, Gwangsan-gu, Gwangju, Republic of Korea, Tel: 82-62-940-5425, E-mail:baik@honam.ac.kr.

© Copyright 2017 Korea Multimedia Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Dec 28, 2017 ; Revised: Dec 29, 2017 ; Accepted: Dec 30, 2017

Published Online: Dec 31, 2017

Abstract

This paper introduces a parallel application of a face recognition algorithm. We run experiments on 25 images with different motions and simulate the stages of image recognition: grouping of the image vectors, image normalization, calculation of average image vectors, and so on. We also analyze the related eigen-image vectors and a parallel algorithm. To develop the parallel algorithm, we propose a new type of initial matrix for the eigenvalue problem. If A is a symmetric matrix, two initial matrices for the eigenvalue problem are investigated: the "optimal" one, which minimizes ‖C − A‖_F, and the "super optimal" one, which minimizes ‖I − C⁻¹A‖_F. We present a general new approach to the design of initial matrices for solving the eigenvalue problem, based on the new optimal choice of C while preserving the characteristics of the given matrix A. All the resulting matrices can be inverted via fast transform algorithms with O(N log N) operations.

Keywords: pattern recognition; eigenvalue problem; principal components; initial matrices; symmetric matrices; Gram-Schmidt

I. INTRODUCTION

In a pattern recognition system, what matters is the set of characteristic vectors extracted from the image. The number of characteristic vectors, which determines the dimension of the feature space, strongly affects the recognition rate. The more characteristic factors there are, the more the system is weakened: learning and recognition of the image become slow, and a larger learning group is needed for the modeling.

As the number of characteristic vectors grows, i.e., as the dimension of the feature vectors becomes higher, the recognition capacity of most pattern recognition systems improves only up to a point and then tends to decrease rather than increase. Principal component analysis addresses the question of the optimal reduction from a high-dimensional set of characteristic vectors to a low-dimensional one. The principal component method is a multivariate data processing method that reduces data from a high dimension to a low dimension. The characteristic data are represented on as many coordinate axes as there are dimensions of the characteristic vectors: n-dimensional data are represented by n axes, and reducing the dimension reduces the number of axes. Thus the coordinate axes of the multivariate data are reduced by projecting onto characteristic vectors. PCA (Principal Component Analysis) rearranges the characteristic vectors and transforms the coordinate axes into a set of uncorrelated variates. In this paper, we analyze the extraction of the characteristic vectors and the algorithm in the parallel case.

II. DEVELOPMENT OF IMAGE RECOGNITION'S PARALLEL ALGORITHM

Why do we need this method of rearranging the characteristic vectors? To make the case concrete, we consider a simple example of PCA below.

2.1. Distributions and transformations of group of the characteristic vectors

A group of random vectors is generated and distributed under the following conditions:

  1. Produce a set of random vectors.

  2. Shift the vectors from (1) so that the average of all vectors is zero.

  3. Keep only the vectors whose distance from the origin is less than 0.5 and ignore the others, i.e., those at distance greater than 0.5. The resulting scatter of 1000 two-dimensional random vectors is shown in Figure 1 (a code sketch follows the figure).

Fig. 1. Distribution of the set of vectors.
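The three steps above can be sketched as follows (a minimal Python/NumPy sketch; the uniform distribution on [−1, 1]² is an assumption, since the generating distribution is not specified):

import numpy as np

rng = np.random.default_rng(0)

# 1. Produce a set of 1000 random 2-D vectors (uniform on [-1, 1)^2 is one choice).
vectors = rng.uniform(-1.0, 1.0, size=(1000, 2))

# 2. Shift the set so that the average of all vectors is zero.
vectors = vectors - vectors.mean(axis=0)

# 3. Keep only the vectors whose distance from the origin is less than 0.5.
radii = np.linalg.norm(vectors, axis=1)
kept = vectors[radii < 0.5]

print(kept.shape, kept.mean(axis=0))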

We also consider the steps of transforming the distribution of the given set of vectors from x-y coordinates to polar coordinates (Fig. 2 and Fig. 3).

Fig. 2. Steps for transformation of the set of vectors.
Fig. 3. Principal components of the given vectors.

In this work, we need to compute the eigenvalues of a covariance matrix to determine the principal components of the images. We then use the corresponding eigenvectors to obtain the principal component information.

There are several methods to solve the eigenproblem, but they compute all eigenvalues through sequential processes, and the structure of the given matrix is destroyed in doing so. In our work, we want to keep the structure of the given matrix and develop a parallel algorithm. We compare the complexity of computing all eigenvalues of the given matrix between an existing algorithm and our algorithm.
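As a reference illustration of this computation, the following Python/NumPy sketch obtains the principal components of a set of sample vectors as the eigenvectors of its covariance matrix, ordered by eigenvalue. It uses a standard dense symmetric eigensolver purely for comparison; the rest of the paper develops the structure-preserving parallel alternative.

import numpy as np

def principal_components(data):
    """data: (m, n) array of m sample vectors; returns eigenvalues (descending)
    and the corresponding unit eigenvectors (columns) of the covariance matrix."""
    centered = data - data.mean(axis=0)
    cov = centered.T @ centered / (len(data) - 1)   # n-by-n covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)          # symmetric eigenproblem
    order = np.argsort(eigvals)[::-1]               # largest eigenvalue first
    return eigvals[order], eigvecs[:, order]

# Example: principal axes of 1000 correlated random 2-D vectors.
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 2)) @ np.array([[2.0, 0.0], [1.0, 0.5]])
vals, vecs = principal_components(data)
print(vals)        # variances along the principal axes
print(vecs[:, 0])  # direction of the largest variation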

2.2. Construction of an Image Recognition Algorithm
2.2.1 Development of an eigenface algorithm based on PCA

Each face image is a point in the face space, and the eigenfaces are the basis vectors that span this space. These basis vectors represent the characteristics shared by all candidate face images. The eigenfaces are produced from the difference vectors between the average face image and each candidate face image; they are eigenvectors of the covariance matrix. Since the eigenvalues of the covariance matrix describe how the candidate images are distributed about the average face, the eigenface associated with the largest eigenvalue is the one closest to the original face, and as the eigenvalue becomes smaller, the corresponding eigenface carries less of the characteristics of the face image. Therefore, in this section we discuss the largest and smallest eigenvalues of our face images and study the algorithm to compute them.

We apply the algorithm to compute the largest and the smallest eigenvalues independently, compare the difference between the corresponding largest and smallest eigenfaces, and analyze the algorithm in the parallel setting. First of all, we develop a method to compute the extreme eigenvalues. It is a Newton method modified by a parameter t. The procedure for developing the algorithm is as follows.

We denote by M_n the space of n-by-n symmetric matrices.

We denote by σ(A) the set of eigenvalues of A ∈ M_n.

Let A ∈ M_n be a symmetric matrix. Then there is an orthogonal matrix U = [U_1, …, U_n] ∈ M_n such that

A = U diag(λ_1, …, λ_n) Uᵀ,  λ_k ∈ ℝ.

The following is the usual Newton method for obtaining an eigenpair of a symmetric matrix A ∈ M_n [3]:

Consider G : ℝⁿ × ℝ → ℝⁿ × ℝ, acting on stacked pairs [X; a] with X ∈ ℝⁿ and a ∈ ℝ, defined by

G([X; a]) = [(aI − A)X; XᵀX − 1]    (2)

Then L, the set of eigenpairs of A, is exactly the solution set of G([X; a]) = 0.

Assuming the matrix

[aI − A, X; 2Xᵀ, 0] ∈ M_{n+1}

is invertible, then the usual Newton iteration is

[X′; a′] = [X; a] − [aI − A, X; 2Xᵀ, 0]⁻¹ [(aI − A)X; XᵀX − 1]

It is well known that Newton's method has a local quadratic convergence rate [3]; that is, there is a small neighborhood N_k of each eigenpair [U_k; λ_k] such that

if [X; a] ≡ [X^(0); a^(0)] ∈ N_k, then

‖[X^(i+1); a^(i+1)] − [U_k; λ_k]‖₂ ≤ C ‖[X^(i); a^(i)] − [U_k; λ_k]‖₂²

for all i = 0, 1, …, where C < ∞ is a positive constant.

We call N_k the quadratic convergence neighborhood of the eigenpair [U_k; λ_k]. Although the specific determination of each N_k is an extremely difficult task, if the method converges to a point in L then we know the rate of convergence will eventually be quadratic. It can easily be shown that Newton's method is not globally convergent.
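For reference, the following Python/NumPy sketch implements the unmodified Newton iteration on G from (2). It illustrates the local behaviour described above but, as noted, it is not globally convergent; the convergence test and starting point below are illustrative choices only.

import numpy as np

def newton_eigenpair(A, x0, a0, tol=1e-12, max_iter=50):
    """Unmodified Newton iteration for G([X; a]) = [(aI - A)X; X^T X - 1] = 0."""
    n = len(x0)
    x, a = np.array(x0, dtype=float), float(a0)
    for _ in range(max_iter):
        G = np.concatenate([(a * np.eye(n) - A) @ x, [x @ x - 1.0]])
        if np.linalg.norm(G) < tol:
            break
        # Jacobian [[aI - A, X], [2X^T, 0]] of G at the current point.
        J = np.block([[a * np.eye(n) - A, x[:, None]],
                      [2.0 * x[None, :],  np.zeros((1, 1))]])
        step = np.linalg.solve(J, G)
        x, a = x - step[:n], a - step[n]
    return x, a

# Example: a 2-by-2 symmetric matrix and a nearby starting guess.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
x, lam = newton_eigenpair(A, x0=[1.0, 0.0], a0=2.0)
print(lam, np.linalg.eigvalsh(A))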

We modify the Newton method in order to obtain global convergence [2]. Several considerations guide the modification.

First, under the modification we want the pair [X^(i); a^(i)] to get closer to an eigenpair at each step of the iteration, i.e., d_L([X^(i+1); a^(i+1)]) ≤ d_L([X^(i); a^(i)]), where d_L is a suitable distance measure from a point to L; this ensures that the points of the iteration remain in D. Second, we want to modify the method as little as possible in order to preserve the original properties of Newton's method, for example, local quadratic convergence. Third, the modified method should be simple to implement, requiring almost the same procedures as the original Newton iteration.

2.3 Newton Iteration and Its Modification

Consider the usual Newton iteration (2):

[X′; α′] = [X; α] − [αI − A, X; 2Xᵀ, 0]⁻¹ [(αI − A)X; XᵀX − 1]

Choose a parameter t > 0 so that the method takes the form

[αI − A, X; 2Xᵀ, 0] [X′; α′] = [I, 0; 0, t] [0, X; Xᵀ, 1/α] [X; α]    (3)

Then

(αI − A)X′ + α′X = αX    (4)

and

2XᵀX′ = t(XᵀX + 1)    (5)

Then the parameterized Newton method takes the form:

X′ = (1/βˇ)(αI − A)⁻¹X,    (6)

and

α′ = α − t/β    (7)

Now, suppose [X; α] ∈ D and let L = {[U_k; λ_k] : U_k ∈ ℝⁿ, ‖U_k‖₂ = 1, and λ_k ∈ ℝ} be the set of all eigenpairs of A. Define a distance measure from a point [X; α] to L by

d_L([X; α]) ≡ ‖(αI − A)X‖₂    (8)

Clearly, d_L([X; α]) ≥ 0; d_L([X; α]) = 0 implies [X; α] ∈ L; and d_L : D → ℝ₊ is continuous (since D is compact, d_L is actually uniformly continuous).

We have the following.

Lemma 1 [4]: Let A ∈ M_n be symmetric. Consider the parameterized Newton's method X′ = (1/βˇ)(αI − A)⁻¹X and α′ = α − t/β, where β = Xᵀ(αI − A)⁻¹X and

βˇ = ‖(αI − A)⁻¹X‖₂ = (Xᵀ(αI − A)⁻²X)^{1/2}.

Then d_L([X′; α′]) is minimized at t = (β/βˇ)² with

min_t d_L([X′; α′]) = (1/βˇ)(1 − (β/βˇ)²)^{1/2}.

Therefore, we have the following modification of Newton's method:

X′ = (1/βˇ)(αI − A)⁻¹X    (9)

α′ = α − β/βˇ²    (10)
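A single step of the modification (9)-(10) can be written directly from the quantities defined in Lemma 1. The following Python/NumPy sketch is illustrative only; it also returns the minimized distance given by Lemma 1.

import numpy as np

def modified_newton_step(A, x, alpha):
    """One step of (9)-(10): x is a unit vector, alpha a shift with alpha*I - A invertible."""
    n = len(x)
    y = np.linalg.solve(alpha * np.eye(n) - A, x)   # (alpha*I - A)^{-1} x
    beta = x @ y                                    # beta = x^T (alpha*I - A)^{-1} x
    beta_check = np.linalg.norm(y)                  # beta_check = ||(alpha*I - A)^{-1} x||_2
    x_new = y / beta_check                          # (9)
    alpha_new = alpha - beta / beta_check**2        # (10), i.e. t = (beta/beta_check)^2
    d_min = (1.0 / beta_check) * np.sqrt(max(0.0, 1.0 - (beta / beta_check)**2))
    return x_new, alpha_new, d_min                  # d_min = minimized d_L from Lemma 1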

Next, we discuss an application of this method to eigenfaces obtained from 25 different expressions of a face.

2.4 An Application of Face Recognition

The construction from face images proceeds in five steps.

Step 1: Input of the original images

Each face image has size N × N, and there are M candidate face images to be recognized; each image is reshaped into an N² × 1 column vector. Denote the set of candidate face vectors by S (Fig. 4).

S = {Γ_1, Γ_2, Γ_3, …, Γ_M}    (11)
Fig. 4. Recognition candidate face DB.
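Step 1 can be sketched as follows (Python/NumPy, an illustrative sketch; the M face images are assumed to be already loaded as N × N arrays):

import numpy as np

def build_face_set(images):
    """Stack M face images of size N x N as N^2-by-1 column vectors: S = [Gamma_1 ... Gamma_M]."""
    return np.column_stack([np.asarray(img, dtype=float).reshape(-1) for img in images])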

Step 2: Normalization of the image

We normalize each image to a fixed average and variance to reduce errors caused by lighting and background (Figure 5).

Fig. 5. Normalized recognition candidate face DB.
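Step 2 can be sketched as follows (Python/NumPy; the target mean 0 and variance 1 are illustrative choices, since the exact average and variance used are not specified):

import numpy as np

def normalize_faces(gammas, target_mean=0.0, target_std=1.0):
    """Normalize each face vector (column) to a fixed mean and variance,
    reducing the influence of lighting and background."""
    mean = gammas.mean(axis=0, keepdims=True)
    std = gammas.std(axis=0, keepdims=True)
    return (gammas - mean) / std * target_std + target_mean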

Step 3: Computation of the average face vector

We compute the average face vector from the face set S of recognition candidates (Figure 6).

Fig. 6. Average face.
Ψ = (1/M) Σ_{n=1}^{M} Γ_n    (12)

For the eigenpair computation we develop a parallel algorithm that applies the modified Newton method to each eigenpair. Even if the given matrix is ill-conditioned, we can obtain all eigenpairs. The following is an ill-conditioned example:

Example 1: Let H = [h_ij] with h_ij = 1/(i + j − 1) be the n-by-n Hilbert matrix. The Hilbert matrix is a well-known example of an ill-conditioned positive definite matrix. Because the smallest eigenvalue λ₁₂ of H ∈ M₁₂ is so near zero, many conventional algorithms produce λ₁₂ = 0. Our method gives the following experimental results:

Set the convergence criterion ε = 2 × 10⁻¹⁶, i.e., ‖(H − α_k I)X_k‖₂ < ε.

Suppose D = {[X; α] : X ∈ {e_1, e_2, …, e_n}, α ∈ {h_{1,1}, h_{2,2}, …, h_{12,12}}} is the initial set of points, where e_i is the i-th column of the identity matrix and h_{i,i} is the i-th diagonal entry of H.

Table 1. Eigenvalues of H₁₂ by the Modified Newton Method
k      Eigenvalue λ_k of H      ‖(H − α_k I)X_k‖₂
1st 1.7953720595620 4.5163365159057D-17
2nd 0.38027524595504 9.5107769421299D-17
3rd 4.4738548752D-02 9.3288777118150D-17
4th 3.7223122378D-03 9.5107769421299D-17
5th 2.3308908902D-04 6.5092594645258D-17
6th 1.1163357483D-05 6.6374428417771D-17
7th 4.0823761104D-07 1.9236667674542D-16
8th 1.1228610666D-08 4.9553614188006D-17
9th 2.2519644461D-10 6.0015952254039D-17
10th 3.1113405079D-12 6.5125904614112D-17
11th 2.6487505785D-14 1.0932505712948D-16
12th 1.1161909467D-16 6.0015952254039D-17
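The setting of Example 1 can be reproduced as follows (a Python/SciPy sketch; it uses scipy.linalg.hilbert and a dense symmetric eigensolver only as a reference check on the conditioning, not the MNM iteration itself):

import numpy as np
from scipy.linalg import hilbert

H = hilbert(12)                    # 12-by-12 Hilbert matrix, H[i, j] = 1/(i + j + 1) for 0-based i, j
print(np.linalg.cond(H))           # condition number on the order of 1e16: severely ill-conditioned
print(np.linalg.eigvalsh(H)[0])    # smallest eigenvalue, on the order of 1e-16
print(np.linalg.eigvalsh(H)[-1])   # largest eigenvalue, about 1.795 (cf. Table 1)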

If the target matrix has clustered eigenvalues, so that several initial points converge to the same eigenpair, we can still obtain all eigenpairs (Figure 7).

Fig. 7. MNM process.

Eigenpairs lost in this way by MNM are recovered through the Gram-Schmidt orthogonalization process (a code sketch follows the steps below).

Suppose both [x_1, α_1]ᵀ and [x_2, α_2]ᵀ converge to [U_1, λ_1]ᵀ.

  1. Choose a vector x = x_1 (or x_2).

  2. Apply the Gram-Schmidt orthogonalization process:

    z = x − ⟨U_1, x⟩U_1,  xx = z/‖z‖₂

  3. Apply MNM with [xx, α_1]ᵀ (or [xx, α_2]ᵀ).

    [xx, α_1]ᵀ (or [xx, α_2]ᵀ) will converge to [U_2, λ_2]ᵀ.
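A sketch of this recovery step (Python/NumPy; here mnm stands for any routine implementing the MNM iteration, such as the sketch given after the algorithm below):

import numpy as np

def gram_schmidt_restart(mnm, A, x, u1, alpha):
    """When two starting points have converged to the same eigenpair (U_1, lambda_1),
    orthogonalize one of them against U_1 and restart MNM from the new point."""
    z = x - (u1 @ x) * u1          # z = x - <U_1, x> U_1
    xx = z / np.linalg.norm(z)     # xx = z / ||z||_2
    return mnm(A, xx, alpha)       # expected to converge to a different eigenpair [U_2; lambda_2]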

The MNM is given in (9) and (10); the algorithm is as follows:

Modified Newton’s Algorithm

Set α_k^(1) and x_k^(1).

(α_k^(1), x_k^(1)) is an initial eigenpair estimate, where α_k^(1), k = 1, …, n, are the diagonal entries of the given matrix and x_k^(1) ∈ {e_1, e_2, …, e_n}.

For k = 1, …, n do (in parallel)

For j = 1, … do until convergence

  1. Solve (α_k^(j) I − A) y_k^(j) = x_k^(j) for y_k^(j).

  2. Compute β_k^(j) = x_k^(j)ᵀ y_k^(j).

  3. Compute βˇ_k^(j) = ‖y_k^(j)‖₂.

  4. Compute x_k^(j+1) = (1/βˇ_k^(j)) y_k^(j).

  5. Compute α_k^(j+1) = α_k^(j) − β_k^(j)/(βˇ_k^(j))².

End

End
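The algorithm above translates almost line by line into the following Python/NumPy sketch. The loop over k is written serially here, but each k is independent and can be run in parallel exactly as stated; the shift update in step 5 follows (10), and the convergence test and iteration limit are illustrative choices.

import numpy as np

def mnm(A, x, alpha, tol=2e-16, max_iter=100):
    """Modified Newton Method (9)-(10) for one eigenpair of a symmetric matrix A."""
    n = A.shape[0]
    I = np.eye(n)
    for _ in range(max_iter):
        if np.linalg.norm((A - alpha * I) @ x) < tol:   # stop when ||(A - alpha_k I) x_k|| < eps
            break
        y = np.linalg.solve(alpha * I - A, x)           # step 1: (alpha I - A) y = x
        beta = x @ y                                    # step 2: beta = x^T y
        beta_check = np.linalg.norm(y)                  # step 3: beta_check = ||y||_2
        x = y / beta_check                              # step 4: next unit vector
        alpha = alpha - beta / beta_check**2            # step 5: next shift, from (10)
    return x, alpha

def all_eigenpairs(A):
    """Initial points (e_k, a_kk), k = 1..n; the runs are independent, hence parallelizable."""
    n = A.shape[0]
    return [mnm(A, np.eye(n)[:, k], A[k, k]) for k in range(n)]

In practice several runs may converge to the same eigenpair; this is exactly the situation handled by the Gram-Schmidt restart above.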

We apply the MNM (Modified Newton Method) to generate eigen-images at each step for image recognition problems.

In this case, we find all eigenpairs of a covariance matrix generated by the characteristic vectors.

Step 4: Solving the eigenproblem of the covariance matrix C

We compute the eigenvalues λ_i and the corresponding eigenvectors μ_i of the covariance matrix C using our algorithm. The eigenvalues describe how the candidate images are distributed about the average face, and each eigenvector produces an eigenface (Fig. 8).

μ_l = (1/M) Σ_{k=1}^{M} λ_{lk} Φ_k,  l = 1, 2, …, M    (13)
Fig. 8. Eigen-faces.
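Steps 3 and 4 can be sketched together as follows (Python/NumPy, an illustrative sketch of the standard eigenface construction): the difference vectors Φ_n = Γ_n − Ψ are collected into a matrix, the eigenproblem of the small M × M matrix ΦᵀΦ is solved (a dense symmetric eigensolver is used here for brevity where the MNM of Section 2.3 would be applied), and its eigenvectors weight the Φ_k as in (13) to form the eigenfaces.

import numpy as np

def eigenfaces(gammas):
    """gammas: (N*N, M) matrix whose columns are the (normalized) face vectors Gamma_n."""
    psi = gammas.mean(axis=1, keepdims=True)   # average face, equation (12)
    phi = gammas - psi                         # difference vectors Phi_n = Gamma_n - Psi
    small = phi.T @ phi                        # small M-by-M eigenproblem instead of N^2-by-N^2
    vals, vecs = np.linalg.eigh(small)         # the MNM of Section 2.3 could be used here instead
    order = np.argsort(vals)[::-1]             # largest eigenvalue first
    vals, vecs = vals[order], vecs[:, order]
    keep = vals > 1e-12 * vals[0]              # drop numerically zero directions
    mu = phi @ vecs[:, keep]                   # eigenfaces as weighted sums of the Phi_k, cf. (13)
    mu /= np.linalg.norm(mu, axis=0)           # normalize each eigenface (column)
    return psi.ravel(), mu, vals[keep]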

III. CONCLUSION

After a new face image Γ is input, we obtain each component ω_k by projecting it onto the eigenfaces. Each eigenface component is computed as follows:

ω_k = μ_kᵀ(Γ − Ψ),  k = 1, 2, …, M′    (14)

Here M′ is the number of eigenfaces retained after excluding those corresponding to the smallest eigenvalues. From these weights we obtain the component vector Ω of eigenface coefficients that represents the input face image. Given the set Ω = {ω_1, ω_2, …, ω_{M′}}, the error with respect to candidate k is ε_k = ‖Ω − Ω_k‖₂.
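A sketch of (14) and the error computation (Python/NumPy; psi and mu denote the average face and the matrix of eigenface columns, e.g. as returned by the hypothetical eigenfaces helper above):

import numpy as np

def face_weights(gamma, psi, mu):
    """Equation (14): project the input face onto the M' retained eigenfaces."""
    return mu.T @ (gamma - psi)          # omega_k = mu_k^T (Gamma - Psi), k = 1, ..., M'

def recognition_error(omega, omega_k):
    """epsilon_k = ||Omega - Omega_k||_2 between the input's weights and candidate k's weights."""
    return np.linalg.norm(omega - omega_k)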

Step 5: Reconstruction of the given image

We compare the original image with the image reconstructed by our algorithm (Figure 9).

Fig. 9. An input image and the reconstruction of the face with the algorithm.
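Reconstruction is then the weighted sum of eigenfaces added back to the average face (a minimal sketch, using the same assumed helpers as above):

def reconstruct(psi, mu, omega):
    """Approximate reconstruction: Gamma_hat = Psi + sum_k omega_k * mu_k."""
    return psi + mu @ omega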

REFERENCES

[1] A. Webb, Statistical Pattern Recognition, 2nd ed., John Wiley & Sons, 2002.

[2] J. M. Ortega, Numerical Analysis, A Second Course, SIAM Classics in Applied Mathematics, Philadelphia: SIAM Publications, 1990.

[3] J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York and London, 1970.

[4] R. Baik, K. Datta, and Y. Hong, "Computing for Eigenpairs on Globally Convergent Iterative Method for Hermitian Matrices," Lecture Notes in Computer Science, vol. 3514, pp. 1-8, 2005.

[5] K. Datta, Y. Hong, and R. Baik Lee, "Parameterized Newton's Iteration for Computing an Eigenpair of a Real Symmetric Matrix in an Interval," Computational Methods in Applied Mathematics, vol. 3, pp. 517-535, 2003.

[6] S. Theodoridis and K. Koutroumbas, Pattern Recognition, Academic Press, 1998.