

The 8th IEEE Int. Conference on Computer Vision (ICCV 2001)

Kernel Machine Based Learning

For Multi-View Face Detection and Pose Estimation

Stan Z. Li, Lie Gu, Bernhard Schölkopf, HongJiang Zhang

Contact: szli@

QingDong Fu, Yimin Cheng

Dept. of Electronic Science and Technology

University of Science and Technology of China

March 2001

Technical Report

MSR-TR-2001-07

Microsoft Research

Microsoft Corporation

One Microsoft Way

Redmond, WA 98052


Abstract

Face images are subject to the effects of view changes and artifacts such as variations in illumination and facial shape. Such effects cause the distribution of data points to be highly nonlinear and complex. It is desirable to learn a nonlinear mapping from the input image to a low dimensional space such that the distribution becomes simpler, tighter and therefore more predictable for better modeling of faces.

In this paper, we present a kernel machine based approach for learning such nonlinear mappings to provide effective view-based representations for multi-view face detection and pose estimation. One view-subspace is learned for each view from a set of face images of that view, by using kernel principal component analysis (KPCA). Projections of the data onto the view-subspaces are then computed as view-based features. Multi-view face detection and pose estimation are performed by classifying each face into one of the facial views or into the nonface class, by using a multi-class kernel support vector classifier (KSVC). It is shown that fusion of evidences from all views can produce better results than using the result for a single view. Experimental results show that our approach yields high detection and low false alarm rates in face detection and good accuracy in pose estimation, and outperforms its linear counterpart composed of linear principal component analysis (PCA) feature extraction and Fisher linear discriminant based classification (FLDC).

1 Introduction

In the past, most research on face detection focused on frontal faces (see e.g. [21, 10, 14, 16, 20]). However, approximately 75% of the faces in home photos are non-frontal [8], and therefore a system for frontal faces only is very limiting.

Multi-view face detection and pose estimation require modeling faces seen from various viewpoints, under variations in illumination and facial shape. Appearance based methods [11, 12, 7] avoid difficulties in 3D modeling by using images, or appearances, of the object viewed from possible viewpoints. The appearance of an object in a 2D image depends on its shape, reflectance property, pose as seen from the viewing point, and the external illumination conditions. The object is modeled by a collection of appearances parameterized by pose and illumination. Object detection and recognition are performed by comparing the appearances of the object in the image and in the model.

In view-based representation, the view is quantized into a set of discrete values such as the view angles. A view subspace defines the manifold of possible appearances of the object at that view, subject to illumination. One may use one of the following two methods when constructing subspaces: (1) Quantize the pose into several discrete ranges and partition the data set into several subsets, each composed of data at a particular view; then construct a subspace from each subset [15]. (2) With training data labeled and sorted according to the view value (and perhaps also illumination values), one may be able to construct a manifold describing the distribution across views [11, 5, 1].

Linear principal component analysis (PCA) is a powerful technique for data reduction and feature extraction from high dimensional data. It has been used widely in appearance based applications such as face detection and recognition [21, 10, 20]. The theory of PCA is based on the assumption that the data has a Gaussian distribution. PCA gives accurate density models when the assumption is valid. This is, however, not the case in many real-world applications.

The distribution of face images under a perceivable variation in viewpoint, illumination or expression is highly nonconvex and complex [2], and can hardly be well described by using linear PCA. To obtain a better description of the variations, nonlinear methods, such as principal curves [6] and splines [11], may be used. Over the last years, progress has been made for non-frontal faces.

Wiskott et al. [24] build elastic bunch graph templates to describe some key facial feature points and their relationships, and use them for multi-view face detection and recognition.

Gong and colleagues study the trajectories of faces in linear PCA feature spaces as they rotate [5], and use kernel support vector machines for multi-pose face detection and pose estimation [13, 9]. Their systems use not only information on the face appearance but also constraints from color and motion.

Schneiderman and Kanade [17] use a statistical model to represent an object's appearance over a small range of views, to capture variation that cannot be modeled explicitly. This includes variation in the object itself, variation due to lighting, and small variations in pose. Another statistical model is used to describe non-objects-of-interest. Each detector is designed for a specific view of the object, and multiple detectors that span a range of the object's orientations are used. The results of these individual detectors are then combined.

In the present paper, we present a kernel machine learning based approach for extracting nonlinear features of face images and using them for multi-view face detection and pose estimation. Kernel PCA [19] is applied on a set of view-labeled face images to learn nonlinear view-subspaces. Nonlinear features are the projections of the data onto these nonlinear view-subspaces.

KPCA feature extraction effectively acts as a nonlinear mapping from the input space to an implicit high dimensional feature space. It is hoped that the mapped data in the implicit feature space has a simple distribution, so that a simple classifier (which need not be a linear one) in the high dimensional space could work well.

Face detection and pose estimation are jointly performed by using kernel support vector classifiers (KSVCs), based on the nonlinear features. The main operation here is to classify a windowed pattern into one of the view classes plus the nonface class. In this multi-class classification task, evidences from different view channels are effectively fused to yield a better result than can be produced by any single channel.

Results show that the proposed approach yields high detection and low false alarm rates in face detection, and good accuracy in pose estimation. These are compared with results obtained by using a linear counterpart of our system, i.e., a system building on linear PCA and linear classification methods.

The remainder of the paper is organized as follows: Section 2 introduces basic concepts of kernel learning methods, that is, KSVC and KPCA. Section 3 describes the proposed approach for face detection and pose estimation and presents a system implementing our methods. Section 4 presents experimental results.

2 Kernel Learning Methods

The kernel methods generalize linear SVC and PCA to nonlinear ones. The trick of kernel methods is to compute dot products in the high dimensional feature space by evaluating a kernel function in the input space, so that the nonlinear mapping never needs to be carried out explicitly [22, 19].
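To make the kernel trick concrete, the following sketch (not from the paper; plain NumPy, using the homogeneous degree-2 polynomial kernel on 2-D inputs) checks numerically that the kernel evaluated in input space equals a dot product under an explicit feature map:

```python
import numpy as np

def phi(x):
    """Explicit degree-2 monomial feature map for a 2-D input."""
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

def poly_kernel(x, y):
    """Homogeneous polynomial kernel of degree 2, computed in input space."""
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

# The kernel in input space equals the dot product in feature space.
assert np.isclose(poly_kernel(x, y), np.dot(phi(x), phi(y)))
print(poly_kernel(x, y))  # (1*3 + 2*0.5)^2 = 16.0
```

The same identity is what lets both the SVC of Section 2.1 and the KPCA of Section 2.2 operate in a feature space they never represent explicitly.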

2.1 Support Vector Classifier

Consider the problem of separating a set of training vectors belonging to two classes, given training data $\{(\mathbf{x}_i, y_i)\}_{i=1}^{\ell}$, where $\mathbf{x}_i \in \mathbb{R}^n$ is a feature vector and $y_i \in \{-1, +1\}$ its class label.

Assume (1) that the two classes can be separated by a hyperplane $\mathbf{w} \cdot \mathbf{x} + b = 0$, and (2) that no knowledge about the data distribution is available. From the point of view of statistical learning theory, of all the boundaries determined by $\mathbf{w}$ and $b$, the one that maximizes the margin is preferable, due to a bound on its expected generalization error. The optimal values for $\mathbf{w}$ and $b$ can be found by solving the following constrained minimization problem

$$\min_{\mathbf{w}, b} \; \frac{1}{2} \|\mathbf{w}\|^2 \tag{1}$$

$$\text{subject to} \quad y_i (\mathbf{w} \cdot \mathbf{x}_i + b) \ge 1, \quad i = 1, \ldots, \ell \tag{2}$$

Solving it requires the construction of a so-called dual problem, using Lagrange multipliers $\alpha_i \ge 0$, and results in a classification function

$$f(\mathbf{x}) = \operatorname{sign}\left( \sum_{i=1}^{\ell} \alpha_i y_i \, (\mathbf{x}_i \cdot \mathbf{x}) + b \right) \tag{3}$$

Most of the $\alpha_i$ take the value of zero; those $\mathbf{x}_i$ with nonzero $\alpha_i$ are the "support vectors".

In non-separable cases, slack variables $\xi_i \ge 0$ measuring the misclassification errors can be introduced, and a penalty function $C \sum_i \xi_i$ added to the objective function [4]. The optimization problem is now treated as minimizing an upper bound on the total classification error as well as, approximately, a bound on the VC dimension of the classifier. The solution is identical to the separable case except for a modification of the constraints on the Lagrange multipliers into $0 \le \alpha_i \le C$ [22].

A linearly non-separable but nonlinearly (better) separable case may be tackled as follows. First, map the data from the input space $\mathbb{R}^n$ to a high dimensional feature space $\mathcal{F}$ by $\Phi : \mathbb{R}^n \to \mathcal{F}$, such that the mapped data is linearly separable in the feature space. Assuming there exists a kernel function $k$ such that $k(\mathbf{x}_i, \mathbf{x}_j) = \Phi(\mathbf{x}_i) \cdot \Phi(\mathbf{x}_j)$, a nonlinear SVM can be constructed by replacing the inner product in the linear SVM by the kernel function

$$f(\mathbf{x}) = \operatorname{sign}\left( \sum_{i=1}^{\ell} \alpha_i y_i \, k(\mathbf{x}_i, \mathbf{x}) + b \right) \tag{4}$$

This corresponds to constructing an optimal separating hyperplane in the feature space.

In solving the $c$-class problem, the one-against-the-rest method [18, 3, 23] is used to construct $c$ classifiers. The $k$-th one separates class $k$ from the other $c - 1$ classes. A maximum selection across the $c$ classifiers, or some other measure, is used for the final decision.

2.2 Kernel PCA

We begin by describing linear PCA. Given a set of $\ell$ examples $\mathbf{x}_1, \ldots, \mathbf{x}_\ell$ in $\mathbb{R}^n$, represented by column vectors, subtract from them their mean vector $\bar{\mathbf{x}}$ to obtain the centered examples $\tilde{\mathbf{x}}_i = \mathbf{x}_i - \bar{\mathbf{x}}$. The covariance matrix is

$$C = \frac{1}{\ell} \sum_{i=1}^{\ell} \tilde{\mathbf{x}}_i \tilde{\mathbf{x}}_i^{\top} \tag{5}$$

Linear PCA is an algorithm which diagonalizes the covariance matrix by performing a linear transformation. The corresponding transformation matrix is constructed by solving the following eigenvalue problem

$$\lambda \mathbf{v} = C \mathbf{v} \tag{6}$$

for eigenvalues $\lambda \ge 0$ and nonzero eigenvectors $\mathbf{v}$. The above is equivalent to

$$\lambda \, (\tilde{\mathbf{x}}_i \cdot \mathbf{v}) = \tilde{\mathbf{x}}_i \cdot C \mathbf{v}, \quad i = 1, \ldots, \ell \tag{7}$$

Sort the eigenvalues in descending order and use the first $M$ principal components as the basis vectors of a lower dimensional subspace. The transformation matrix $V$ can be formed by using $\mathbf{v}_k$, normalized to unit length, as the $k$-th column of the matrix. The projection of a point $\mathbf{x}$ into the $M$-dimensional subspace can be calculated as

$$\mathbf{z} = V^{\top} (\mathbf{x} - \bar{\mathbf{x}}) \tag{8}$$

Its reconstruction from $\mathbf{z}$ is

$$\hat{\mathbf{x}} = V \mathbf{z} + \bar{\mathbf{x}} \tag{9}$$

This is the best approximation of $\mathbf{x}$ in any $M$-dimensional subspace in the sense of minimum overall squared error.
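Equations (5)–(9) can be sketched in a few lines of NumPy (a toy illustration on synthetic data, not the paper's code); the final check uses the standard fact that the mean squared reconstruction error equals the sum of the discarded eigenvalues:

```python
import numpy as np

rng = np.random.RandomState(1)
X = rng.randn(100, 5) @ rng.randn(5, 5)    # 100 examples in R^5, correlated dims

mean = X.mean(axis=0)
Xc = X - mean                              # centered examples
C = Xc.T @ Xc / len(X)                     # covariance matrix, Eq. (5)

eigvals, eigvecs = np.linalg.eigh(C)       # Eq. (6); eigh returns ascending order
order = np.argsort(eigvals)[::-1]          # sort eigenvalues in descending order
V = eigvecs[:, order[:2]]                  # first M = 2 principal components

z = Xc @ V                                 # projection, Eq. (8)
X_hat = z @ V.T + mean                     # reconstruction, Eq. (9)

# Mean squared reconstruction error equals the sum of discarded eigenvalues,
# which is what "best M-dimensional approximation" means in Eq. (9).
err = np.mean(np.sum((X - X_hat) ** 2, axis=1))
assert np.isclose(err, eigvals[order[2:]].sum())
```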

Let us now generalize classic PCA to kernel PCA. Let $\Phi : \mathbb{R}^n \to \mathcal{F}$ be a mapping from the input space to a high dimensional feature space. The idea of KPCA is to obtain a nonlinear PCA representation in the input space by performing a linear PCA in $\mathcal{F}$. Assuming the mapped data is centered, the covariance matrix in $\mathcal{F}$ is

$$\bar{C} = \frac{1}{\ell} \sum_{i=1}^{\ell} \Phi(\mathbf{x}_i) \Phi(\mathbf{x}_i)^{\top} \tag{10}$$

and the eigenvalue problem is $\lambda V = \bar{C} V$. Corresponding to Eq. (7) are the equations in $\mathcal{F}$

$$\lambda \, (\Phi(\mathbf{x}_i) \cdot V) = \Phi(\mathbf{x}_i) \cdot \bar{C} V, \quad i = 1, \ldots, \ell \tag{11}$$

Because all solutions $V$ for nonzero $\lambda$ must lie in the span of the $\Phi(\mathbf{x}_i)$'s, there exist coefficients $\alpha_i$ such that

$$V = \sum_{i=1}^{\ell} \alpha_i \Phi(\mathbf{x}_i) \tag{12}$$

Defining the $\ell \times \ell$ kernel matrix $K_{ij} = k(\mathbf{x}_i, \mathbf{x}_j) = \Phi(\mathbf{x}_i) \cdot \Phi(\mathbf{x}_j)$, the eigenvalue problem can be converted into the following [19]

$$\ell \lambda \, \boldsymbol{\alpha} = K \boldsymbol{\alpha} \tag{13}$$

for nonzero eigenvalues, where $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_\ell)^{\top}$.

Sort the eigenvalues in descending order and use the first $M$ principal components as the basis vectors in $\mathcal{F}$ (in fact, there are usually some zero eigenvalues, in which case $M < \ell$). The vectors $V^1, \ldots, V^M$ span a linear subspace, called the KPCA subspace, in $\mathcal{F}$. The projection of a point $\mathbf{x}$ onto the $k$-th kernel principal component is calculated as

$$z_k = V^k \cdot \Phi(\mathbf{x}) = \sum_{i=1}^{\ell} \alpha_i^k \, k(\mathbf{x}_i, \mathbf{x}) \tag{14}$$
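Equations (13) and (14) translate into a short NumPy sketch (illustrative only; for brevity it assumes the mapped data is centered in feature space, whereas a practical implementation would first center the kernel matrix as in [19]):

```python
import numpy as np

def kpca_fit(X, kernel, M):
    """Solve Eq. (13), ell * lam * alpha = K alpha, and keep M components.

    Assumes the mapped data is centered in feature space, as in the
    derivation above; in practice the kernel matrix should be centered.
    """
    ell = len(X)
    K = np.array([[kernel(a, b) for b in X] for a in X])
    lam, alpha = np.linalg.eigh(K)                 # eigenvectors of K
    order = np.argsort(lam)[::-1][:M]
    lam, alpha = lam[order] / ell, alpha[:, order]
    # Normalize so each V^k has unit length in feature space:
    # ||V^k||^2 = (lam_k * ell) * ||alpha^k||^2, and eigh gives ||alpha^k|| = 1.
    alpha /= np.sqrt(lam * ell)
    return alpha

def kpca_project(X, alpha, kernel, x):
    """Eq. (14): z_k = sum_i alpha_i^k k(x_i, x)."""
    k_vec = np.array([kernel(xi, x) for xi in X])
    return alpha.T @ k_vec

poly = lambda a, b: (a @ b) ** 2                   # degree-2 polynomial kernel
X = np.random.RandomState(2).randn(30, 4)
alpha = kpca_fit(X, poly, M=3)
z = kpca_project(X, alpha, poly, X[0])
print(z.shape)  # (3,)
```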

3 Multi-View Face Detection and Pose Estimation

In the following, the problem of multi-view face detection and pose estimation is formulated, and the proposed kernel approach for solving the problem is described.

3.1 Problem Description

Let $I$ be a windowed grey-level image, or appearance, of a face, possibly preprocessed. Assume that all left rotated faces (those with view angles between $-90°$ and $0°$) are mirrored to right rotated, so that every view angle is between $0°$ and $90°$; this does not cause any loss of generality. Quantize the pose into a set of $L = 10$ discrete values. We choose 10 equally spaced angles $0°, 10°, \ldots, 90°$, with $90°$ corresponding to the right side view and $0°$ to the frontal view.

Assume that a set of training face images is provided for the learning; see Fig. 1 for some examples. The images are subject to changes not only in the view, but also in illumination. The training set is view-labeled in that each face image is manually labeled with its view value as close to the truth as possible, and then assigned into one of the $L = 10$ groups according to the nearest view value. This produces 10 view-labeled face image subsets for learning the view-subspaces of faces. Another training set of nonface images is also needed for training face detection.

Now, there are $L + 1 = 11$ classes, indexed in the following by $c \in \{0, 1, \ldots, 10\}$, with $c = 0, \ldots, 9$ corresponding to the 10 views of faces and $c = 10$ corresponding to the nonface class. The two tasks, face detection and pose estimation, are performed jointly by classifying the input $I$ into one of the 11 classes. If the input is classified into one of the 10 face classes, a face is detected and the corresponding view is the estimated pose; otherwise the input pattern is considered as a nonface pattern (without a view).
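The mirroring and quantization above can be sketched as a small helper (a hypothetical function, not from the paper; it assumes the 10° spacing described in the text and only maps an angle to a class index):

```python
def view_class(theta):
    """Map a face view angle in degrees to one of the 10 view classes.

    Left-rotated faces (negative angles) are mirrored to right-rotated,
    then the angle is quantized to the nearest of 0, 10, ..., 90 degrees;
    class index i corresponds to the 10*i degree view.  Index 10 is
    reserved for the nonface class and is never produced here.
    """
    theta = abs(theta)            # mirror left-rotated to right-rotated
    return round(theta / 10)      # nearest discrete view, index 0..9
```

For example, `view_class(-37)` mirrors to 37° and quantizes to class 4 (the 40° view), while `view_class(88)` gives class 9 (the 90° view).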

3.2 Kernel Machine Based Learning

The learning for face detection and pose estimation using kernel machines is carried out in two stages: one for KPCA view-subspace learning, and one for KSVC classifier training. This is illustrated by the system structure shown in Fig. 2.

Figure 1. Multi-view face examples.

Figure 2. The structure of the multiple KPCAs and KSVCs, and the composite face detector and pose estimator.

Stage 1 training aims to learn the KPCA view-subspaces from the 10 face view subsets. One set of kernel principal components (KPCs) is learned from each view subset. The first $M$ most significant components are used as the basis vectors to construct the view-subspace. The learning in this stage yields 10 view-subspaces, each determined by a set of support vectors and the corresponding coefficients. The KPCA in each view channel effectively performs a nonlinear mapping from the input image space (possibly preprocessed) to the $M$-dimensional output KPCA feature space.

Stage 2 aims to train the KSVCs to differentiate between face and nonface patterns for face detection. This requires a training set consisting of a nonface subset as well as the 10 view face subsets, as mentioned earlier. One KSVC is trained for each view to perform the 11-class classification based on the features in the corresponding KPCA subspace. The projection onto the KPCA subspace of the corresponding view is used as the feature vector. The one-against-the-rest method [18, 3, 23] is used for solving the multi-class problem in a KSVC. This stage gives 10 KSVCs.
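The two training stages might be sketched as follows (a toy pipeline on synthetic data, using scikit-learn's KernelPCA and a one-vs-rest SVC as stand-ins for the paper's KPCA and KSVC; the data, sizes, and kernel parameters are illustrative assumptions, not the paper's):

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Toy stand-in data (real inputs would be preprocessed windowed
# grey-level images): 10 face view subsets plus a nonface subset.
rng = np.random.RandomState(0)
view_subsets = [rng.randn(50, 64) + v for v in range(10)]  # classes 0..9
nonfaces = rng.randn(100, 64) * 3                          # class 10

X_all = np.vstack(view_subsets + [nonfaces])
y_all = np.concatenate([np.full(50, c) for c in range(10)] + [np.full(100, 10)])

channels = []
for faces in view_subsets:
    # Stage 1: learn this channel's KPCA view-subspace from its faces only.
    kpca = KernelPCA(n_components=8, kernel="poly", degree=2).fit(faces)
    # Stage 2: train a one-against-the-rest 11-class KSVC on all training
    # data projected through this channel's KPCA subspace.
    ksvc = OneVsRestClassifier(SVC(kernel="rbf")).fit(kpca.transform(X_all), y_all)
    channels.append((kpca, ksvc))

assert len(channels) == 10
```

Each `(kpca, ksvc)` pair is one view channel; the testing stage of Section 3.3 runs a sample through all 10 channels and fuses their outputs.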

3.3 Face Detection and Pose Estimation

In the testing stage, a test sample $\mathbf{x}$ is presented to the KPCA feature extractor for each view to obtain the feature vector for that view. The corresponding KSVC of that view calculates an output vector $\mathbf{y}^{(v)} = (y_0^{(v)}, \ldots, y_{10}^{(v)})$ as the responses of the 11 classes to the input. This is done for all 10 view channels, so that 10 such output vectors are produced.

The value $y_c^{(v)}$ is the evidence for the judgement that the input $\mathbf{x}$ belongs to class $c$, in terms of the features in the $v$-th view KPCA subspace. The final classification decision is made by fusing the evidences from all the view channels. A simple way of fusing is to sum the evidences; that is, for each class $c$, the quantity

$$E_c(\mathbf{x}) = \sum_{v=1}^{10} y_c^{(v)}(\mathbf{x}) \tag{15}$$

is calculated to give the overall evidence for classifying $\mathbf{x}$ into class $c$. The final decision is made by maximizing the evidence: $\mathbf{x}$ belongs to class $c^*$ if $c^* = \arg\max_c E_c(\mathbf{x})$.
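Equation (15) and the maximum-evidence decision amount to a sum and an argmax over the per-channel output vectors; a sketch with hypothetical channel outputs (random numbers standing in for the per-view KSVC responses):

```python
import numpy as np

# Hypothetical responses y_c^(v): one 11-dimensional output vector per
# view channel (10 view classes + nonface), e.g. from the per-view KSVCs.
rng = np.random.RandomState(3)
outputs = rng.randn(10, 11)            # outputs[v, c] = evidence from channel v

# Eq. (15): sum the evidence for each class over all view channels ...
evidence = outputs.sum(axis=0)

# ... and decide by maximum evidence: class 10 means "nonface", while
# classes 0..9 detect a face and simultaneously estimate its pose.
c_star = int(np.argmax(evidence))
pose = None if c_star == 10 else 10 * c_star   # degrees (assumed 10° spacing)
```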

4 Experimental Results

4.1 Data Description

A data set consisting of 10 face view subsets and a nonface subset is given. It is randomly partitioned into three data sets for use in different stages. Table 1 shows the partition and the sizes of the three data sets. Set 1 is used for learning the KPCA view-subspaces, Sets 1 and 2 together are used for training the multi-class KSVCs, and Set 3 is used for testing.

Table 1. Composition of the three data sets

View        Set 1   Set 2   Set 3
0°          500     2000    2209
10°         500     2000    1709
20°         500     2000    1394
30°         500     2000    1137
40°         500     2000    1189
50°         500     2000    1143
60°         500     2000    1304
70°         500     2000    1627
80°         500     2000    1553
90°         500     2000    1309
Tot. faces  5000    20000   14574
Nonfaces    0       10*******

4.2 Training

For the KPCA, a polynomial kernel is selected, $k(\mathbf{x}, \mathbf{y}) = (\mathbf{x} \cdot \mathbf{y})^d$. For the KSVC, an RBF kernel is selected, $k(\mathbf{x}, \mathbf{y}) = \exp(-\|\mathbf{x} - \mathbf{y}\|^2 / (2\sigma^2))$. The kernel parameters are selected empirically.

The quality of the KPCA subspace modeling depends on the size of the training data. A larger training set normally leads to better generalization quality, but also increases the computational costs in both learning and projection; KPCA learning and projection are where most computational expenses occur in our system. Table 2 shows the error rates for various sample sizes. We use 500 examples per view for the KPCA learning of the view-subspaces to balance the tradeoff.


Table 2. Error rates with different numbers of training examples per view

Num. used   Missing (%)   False A. (%)
300         8.16          0.30
500         6.82          0.31
2000        5.13          0.21
3000        5.03          0.12

4.3 Test Results

Comparisons are made between the linear PCA based and the KPCA based approaches. The KSVC is used for multi-class classification based on KPCA features, and a linear classifier, i.e. a Fisher linear discriminant based classifier (FLDC), is used for classification based on linear PCA features. The reason for using the linear FLDC rather than a linear SVC is that a linear SVC would have nearly half of the training samples as its support vectors. The FLD classifier is combined with the one-against-the-rest strategy for the multi-class problem.

The classification results are demonstrated through classification matrices (c-matrices); see Figs. 3 and 4. The entry $(i, j)$ of a c-matrix gives the number of examples whose ground truth (manually labeled) class label is $i$ (the row) and which are classified into class $j$ (the column) by the system. The first 10 rows and 10 columns correspond to the 10 views ($0°$–$90°$) of the ground truth and the classification result, respectively, whereas the last row and column correspond to the nonface class.

From these c-matrices, the corresponding missing and false alarm rates for face detection can be calculated, as shown in Table 3, as well as the accuracy for pose estimation, shown in Table 4. One accuracy is defined as the percentage of examples whose pose estimates are within $\pm 10°$ of the true view (i.e. the elements on the diagonal and on one off-diagonal line on each side), and the other as the percentage whose pose estimates are within $\pm 20°$ (i.e. the elements on the diagonal and on the two off-diagonal lines on each side). From these we can see that the kernel PCA approach produces much better results than linear PCA.
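The metrics in Tables 3 and 4 can be derived from an 11×11 c-matrix as follows (a sketch on a toy matrix; the definitions follow the text, with band 1 corresponding to ±10° and band 2 to ±20°):

```python
import numpy as np

def detection_rates(cmat):
    """Derive face-detection error rates from an 11x11 c-matrix.

    Rows are ground-truth classes, columns predicted; index 10 is nonface.
    Missing rate: faces classified as nonface.  False alarm rate:
    nonfaces classified into any face class.
    """
    faces_total = cmat[:10].sum()
    missed = cmat[:10, 10].sum()
    nonface_total = cmat[10].sum()
    false_alarms = cmat[10, :10].sum()
    return missed / faces_total, false_alarms / nonface_total

def pose_accuracy(cmat, band):
    """Fraction of faces classified as faces whose pose estimate lies within
    `band` off-diagonals (band=1 -> within 10 degrees, band=2 -> within 20)."""
    face_block = cmat[:10, :10]
    ok = sum(face_block[i, j] for i in range(10) for j in range(10)
             if abs(i - j) <= band)
    return ok / face_block.sum()

# A toy c-matrix: mostly correct, a few near-diagonal confusions.
cm = np.eye(11, dtype=int) * 100
cm[0, 1] = cm[3, 4] = 5      # off-by-one pose errors (within 10 degrees)
cm[2, 10] = 2                # two missed faces
cm[10, 5] = 1                # one false alarm
miss, fa = detection_rates(cm)
assert pose_accuracy(cm, band=1) == 1.0
```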

Finally, face detection and pose estimation are performed on real images. Testing images collected from VCD movies are used for the evaluation. The images are scanned at different scales and locations. The pattern in each sub-window is classified into face/nonface; if it is a face, the pose is estimated. Fig. 5 shows some examples.

Figure 3. Classification statistics for the kernel PCA approach, as demonstrated by c-matrices for the 90, 70, 40 and 0 degree channels and for the result with all channels fused (see text for the interpretation of c-matrices).

Figure 4. Classification statistics for the linear PCA approach, as demonstrated by c-matrices.

Table 3. Face detection error rates

Method               Missing (%)   False A. (%)
KPCA-90° + KSVC      2.16          3.27
KPCA-70° + KSVC      2.20          3.81
KPCA-40° + KSVC      2.43          3.38
KPCA-0° + KSVC       2.13          3.73
KPCA + KSVCs fused   2.15          2.50
PCA-90° + FLD        22.26         44.40
PCA-70° + FLD        24.66         42.53
PCA-40° + FLD        21.34         46.55
PCA-0° + FLD         24.65         54.00
PCA + FLDCs fused    19.41         50.18

Table 4. Face pose estimation accuracy

Method               Acc. ±20° (%)   Acc. ±10° (%)
KPCA-90° + KSVC      99.14           96.80
KPCA-70° + KSVC      99.27           96.84
KPCA-40° + KSVC      99.35           97.00
KPCA-0° + KSVC       99.11           97.06
KPCA + KSVCs fused   99.46           97.52
PCA-90° + FLD        92.10           81.40
PCA-70° + FLD        93.15           83.34
PCA-40° + FLD        93.68           84.25
PCA-0° + FLD         91.47           83.38
PCA + FLDCs fused    94.06           84.56

5 Conclusion

A kernel machine based approach has been presented for learning view-based representations for multi-view face detection and pose estimation. The main part of the work is the use of KPCA for extracting nonlinear features for each view by learning the nonlinear view-subspace. This constructs a mapping from the input image space, in which the distribution of data points is highly nonlinear and complex, to a lower dimensional space in which the distribution becomes simpler, tighter and therefore more predictable, for better modeling of faces.

The kernel learning approach leads to an architecture composed of an array of KPCA feature extractors, one for each view, and an array of corresponding KSVC multi-class classifiers for face detection and pose estimation. Evidences from all views are fused to produce better results than the result from any single view. Results show that the kernel learning approach outperforms its linear counterpart, yielding high detection and low false alarm rates in face detection, and good accuracy in pose estimation.


Figure 5. Multi-view face detection results. The estimated views are as follows: from left to right, in the two images on the top, the estimated angles are 10, 0, 60, 50 degrees, respectively; in the bottom image, they are 80, 60, 0 degrees.

References

[1] S. Baker, S. Nayar, and H. Murase. Parametric feature detection. IJCV, 27(1):27–50, March 1998.

[2] M. Bichsel and A. P. Pentland. "Human face recognition and the face image set's topology". CVGIP: Image Understanding, 59:254–261, 1994.

[3] V. Blanz, B. Schölkopf, H. Bülthoff, C. Burges, V. Vapnik, and T. Vetter. Comparison of view-based object recognition algorithms using realistic 3D models. In C. von der Malsburg, W. von Seelen, J. C. Vorbrüggen, and B. Sendhoff, editors, Artificial Neural Networks — ICANN'96, pages 251–256, Berlin, 1996. Springer Lecture Notes in Computer Science, Vol. 1112.

[4] C. Cortes and V. Vapnik. Support vector networks. Machine Learning, 20:273–297, 1995.

[5] S. Gong, S. McKenna, and J. Collins. "An investigation into face pose distribution". In Proc. IEEE International Conference on Face and Gesture Recognition, Vermont, 1996.

[6] T. Hastie and W. Stuetzle. "Principal curves". Journal of the American Statistical Association, 84(406):502–516, 1989.

[7] J. Hornegger, H. Niemann, and R. Risack. Appearance-based object recognition using optimal feature transforms. PR, 33(2):209–224, February 2000.

[8] A. Kuchinsky, C. Pering, M. L. Creech, D. Freeze, B. Serra, and J. Gwizdka. "FotoFile: A consumer multimedia organization and retrieval system". In Proc. ACM HCI'99 Conference, 1999.

[9] Y. M. Li, S. G. Gong, and H. Liddell. "Support vector regression and classification based multi-view face detection and recognition". In IEEE Int. Conf. on Face & Gesture Recognition, pages 300–305, France, 2000.

[10] B. Moghaddam and A. Pentland. "Probabilistic visual learning for object representation". IEEE Transactions on Pattern Analysis and Machine Intelligence, 7:696–710, July 1997.

[11] H. Murase and S. K. Nayar. "Visual learning and recognition of 3-D objects from appearance". International Journal of Computer Vision, 14:5–24, 1995.

[12] S. Nayar, S. Nene, and H. Murase. Subspace methods for robot vision. RA, 12(5):750–758, October 1996.

[13] J. Ng and S. Gong. "Performing multi-view face detection and pose estimation using a composite support vector machine across the view sphere". In Proc. IEEE International Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, pages 14–21, Corfu, Greece, September 1999.

[14] E. Osuna, R. Freund, and F. Girosi. "Training support vector machines: An application to face detection". In CVPR, pages 130–136, 1997.

[15] A. P. Pentland, B. Moghaddam, and T. Starner. "View-based and modular eigenspaces for face recognition". In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 84–91, 1994.

[16] H. A. Rowley, S. Baluja, and T. Kanade. "Neural network-based face detection". IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1):23–38, 1998.

[17] H. Schneiderman and T. Kanade. "A statistical method for 3D object detection applied to faces and cars". In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2000.

[18] B. Schölkopf, C. Burges, and V. Vapnik. Extracting support data for a given task. In U. M. Fayyad and R. Uthurusamy, editors, Proceedings, First International Conference on Knowledge Discovery & Data Mining, Menlo Park, 1995. AAAI Press.

[19] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299–1319, 1998. Technical Report No. 44, 1996, Max Planck Institut für biologische Kybernetik, Tübingen.

[20] K.-K. Sung and T. Poggio. "Example-based learning for view-based human face detection". IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1):39–51, 1998.

[21] M. A. Turk and A. P. Pentland. "Face recognition using eigenfaces". In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 586–591, Hawaii, June 1991.

[22] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, New York, 1998.

[23] J. Weston and C. Watkins. Multi-class support vector machines. Technical Report CSD-TR-98-04, Department of Computer Science, Royal Holloway, University of London, Egham, UK, 1998.

[24] L. Wiskott, J.-M. Fellous, N. Krüger, and C. von der Malsburg. "Face recognition by elastic bunch graph matching". IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):775–779, 1997.
