Face Recognition Based on Fitting a 3D Morphable Model

Volker Blanz and Thomas Vetter, Member, IEEE

Abstract—This paper presents a method for face recognition across variations in pose, ranging from frontal to profile views, and across a wide range of illuminations, including cast shadows and specular reflections. To account for these variations, the algorithm simulates the process of image formation in 3D space, using computer graphics, and it estimates 3D shape and texture of faces from single images. The estimate is achieved by fitting a statistical, morphable model of 3D faces to images. The model is learned from a set of textured 3D scans of heads. We describe the construction of the morphable model, an algorithm to fit the model to images, and a framework for face identification. In this framework, faces are represented by model parameters for 3D shape and texture. We present results obtained with 4,488 images from the publicly available CMU-PIE database and 1,940 images from the FERET database.

Index Terms—Face recognition, shape estimation, deformable model, 3D faces, pose invariance, illumination invariance.


1 INTRODUCTION

In face recognition from images, the gray-level or color values provided to the recognition system depend not only on the identity of the person, but also on parameters such as head pose and illumination. Variations in pose and illumination, which may produce changes larger than the differences between different people's images, are the main challenge for face recognition [39]. The goal of recognition algorithms is to separate the characteristics of a face, which are determined by the intrinsic shape and color (texture) of the facial surface, from the random conditions of image generation. Unlike pixel noise, these conditions may be described consistently across the entire image by a relatively small set of extrinsic parameters, such as camera and scene geometry, illumination direction, and intensity. Methods in face recognition fall into two fundamental strategies: One approach is to treat these parameters as separate variables and model their functional role explicitly. The other approach does not formally distinguish between intrinsic and extrinsic parameters, and the fact that extrinsic parameters are not diagnostic for faces is only captured statistically.

The latter strategy is taken in algorithms that analyze intensity images directly using statistical methods or neural networks (for an overview, see Section 3.2 in [39]).

To obtain a separate parameter for orientation, some methods parameterize the manifold formed by different views of an individual within the eigenspace of images [16], or define separate view-based eigenspaces [28]. Another way of capturing the viewpoint dependency is to represent faces by eigen-lightfields [17].

Two-dimensional face models represent gray values and their image locations independently [3], [4], [18], [23], [13], [22]. These models, however, do not distinguish between rotation angle and shape, and only some of them separate illumination from texture [18]. Since large rotations cannot be generated easily by the 2D warping used in these algorithms due to occlusions, multiple view-based 2D models have to be combined [36], [11]. Another approach that separates the image locations of facial features from their appearance uses an approximation of how features deform during rotations [26].

Complete separation of shape and orientation is achieved by fitting a deformable 3D model to images. Some algorithms match a small number of feature vertices to image positions, and interpolate deformations of the surface in between [21]. Others use restricted, but class-specific deformations, which can be defined manually [24], or learned from images [10], from nontextured [1] or textured 3D scans of heads [8].

In order to separate texture (albedo) from illumination conditions, some algorithms, which are derived from shape-from-shading, use models of illumination that explicitly consider illumination direction and intensity for Lambertian [15], [38] or non-Lambertian shading [34]. After analyzing images with shape-from-shading, some algorithms use a 3D head model to synthesize images at novel orientations [15], [38].

The face recognition system presented in this paper combines deformable 3D models with a computer graphics simulation of projection and illumination. This makes intrinsic shape and texture fully independent of extrinsic parameters [8], [7]. Given a single image of a person, the algorithm automatically estimates 3D shape, texture, and all relevant 3D scene parameters. In our framework, rotations in depth or changes of illumination are very simple operations, and all poses and illuminations are covered by a single model. Illumination is not restricted to Lambertian reflection, but takes into account specular reflections and cast shadows, which have considerable influence on the appearance of human skin.

V. Blanz is with the Max-Planck-Institut für Informatik, Stuhlsatzenhausweg 85, 66123 Saarbrücken, Germany. E-mail: blanz@mpi-sb.mpg.de.
T. Vetter is with the University of Basel, Departement Informatik, Bernoullistrasse 16, 4057 Basel, Switzerland. E-mail: thomas.vetter@unibas.ch.
Manuscript received 9 Aug. 2002; accepted 10 Mar. 2003. Recommended for acceptance by P. Belhumeur.
For information on obtaining reprints of this article, please send e-mail to: tpami@computer.org, and reference IEEECS Log Number 117108.

Our approach is based on a morphable model of 3D faces that captures the class-specific properties of faces. These properties are learned automatically from a data set of 3D scans. The morphable model represents shapes and textures of faces as vectors in a high-dimensional face space, and involves a probability density function of natural faces within face space.

Unlike previous systems [8], [7], the algorithm presented in this paper estimates all 3D scene parameters automatically, including head position and orientation, focal length of the camera, and illumination direction. This is achieved by a new initialization procedure that also increases robustness and reliability of the system considerably. The new initialization uses image coordinates of between six and eight feature points. Currently, most face recognition algorithms require either some initialization, or they are, unlike our system, restricted to front views or to faces that are cut out from images.

In this paper, we give a comprehensive description of the algorithms involved in 1) constructing the morphable model from 3D scans (Section 3), 2) fitting the model to images for 3D shape reconstruction (Section 4), which includes a novel algorithm for parameter optimization (Appendix B), and 3) measuring similarity of faces for recognition (Section 5). Recognition results for the image databases of CMU-PIE [33] and FERET [29] are presented in Section 5. We start in Section 2 by describing two general strategies for face recognition with 3D morphable models.

2 PARADIGMS FOR MODEL-BASED RECOGNITION

In face recognition, the set of images that shows all individuals who are known to the system is often referred to as gallery [39], [30]. In this paper, one gallery image per person is provided to the system. Recognition is then performed on novel probe images. We consider two particular recognition tasks: For identification, the system reports which person from the gallery is shown on the probe image. For verification, a person claims to be a particular member of the gallery. The system decides if the probe and the gallery image show the same person (cf. [30]).

Fitting the 3D morphable model to images can be used in two ways for recognition across different viewing conditions:

Paradigm 1. After fitting the model, recognition can be based on model coefficients, which represent intrinsic shape and texture of faces, and are independent of the imaging conditions. For identification, all gallery images are analyzed by the fitting algorithm, and the shape and texture coefficients are stored (Fig. 1). Given a probe image, the fitting algorithm computes coefficients which are then compared with all gallery data in order to find the nearest neighbor. Paradigm 1 is the approach taken in this paper (Section 5).

Paradigm 2. Three-dimensional face reconstruction can also be employed to generate synthetic views from gallery or probe images [3], [35], [15], [38]. The synthetic views are then transferred to a second, viewpoint-dependent recognition system. This paradigm has been evaluated with 10 face recognition systems in the Face Recognition Vendor Test 2002 [30]: For 9 out of 10 systems, our morphable model and fitting procedure (Sections 3 and 4) improved performance on nonfrontal faces substantially.

In many applications, synthetic views have to meet standard imaging conditions, which may be defined by the properties of the recognition algorithm, by the way the gallery images are taken (mug shots), or by a fixed camera setup for probe images. Standard conditions can be estimated from an example image by our system (Fig. 2). If more than one image is required for the second system or no standard conditions are defined, it may be useful to synthesize a set of different views of each person.

3 A MORPHABLE MODEL OF 3D FACES

The morphable face model is based on a vector space representation of faces [36] that is constructed such that any convex combination^1 of shape and texture vectors $S_i$ and $T_i$ of a set of examples describes a realistic human face:

$$S = \sum_{i=1}^{m} a_i S_i, \qquad T = \sum_{i=1}^{m} b_i T_i. \qquad (1)$$

Continuous changes in the model parameters $a_i$ generate a smooth transition such that each point of the initial surface moves toward a point on the final surface. Just as in morphing, artifacts in intermediate states of the morph are avoided only if the initial and final points are corresponding structures in the face, such as the tip of the nose. Therefore, dense point-to-point correspondence is crucial for defining shape and texture vectors. We describe an automated method to establish this correspondence in Section 3.2, and give a definition of $S$ and $T$ in Section 3.3.
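As an illustration of (1), the following is a minimal sketch, assuming the example shapes and textures are already stored as NumPy arrays in dense correspondence; the array and file names are hypothetical, not part of the original system:

```python
import numpy as np

def morph(shapes, textures, a, b):
    """Linear/convex combination of example shape and texture vectors, cf. (1).

    shapes, textures: (m, 3n) arrays holding the m example faces.
    a, b: coefficient vectors of length m; for a convex combination they
    should be nonnegative and sum to 1 (see footnote 1).
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    S = a @ shapes      # weighted sum of shape vectors
    T = b @ textures    # weighted sum of texture vectors
    return S, T

# Example: halfway morph between the first two example faces.
# shapes, textures = np.load("shapes.npy"), np.load("textures.npy")  # hypothetical files
# w = np.zeros(shapes.shape[0]); w[:2] = 0.5
# S, T = morph(shapes, textures, w, w)
```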

3.1 Database of Three-Dimensional Laser Scans

The morphable model was derived from 3D scans of 100 males and 100 females, aged between 18 and 45 years. One person is Asian, all others are Caucasian. Applied to image databases that cover a much larger ethnic variety (Section 5), the model seemed to generalize well beyond ethnic boundaries. Still, a more diverse set of examples would certainly improve performance.

Fig. 1. Derived from a database of laser scans, the 3D morphable face model is used to encode gallery and probe images. For identification, the model coefficients $\alpha_i$, $\beta_i$ of the probe image are compared with the stored coefficients of all gallery images.

1. To avoid changes in overall size and brightness, $a_i$ and $b_i$ should sum to 1. The additional constraints $a_i, b_i \in [0, 1]$ imposed on convex combinations will be replaced by a probabilistic criterion in Section 3.4.

Recorded with a Cyberware™ 3030PS laser scanner, the scans represent face shape in cylindrical coordinates relative to a vertical axis centered with respect to the head. In 512 angular steps $\phi$ covering 360° and 512 vertical steps $h$ at a spacing of 0.615 mm, the device measures radius $r$, along with red, green, and blue components of surface texture $R, G, B$. We combine radius and texture data:
$$I(h, \phi) = (r(h,\phi), R(h,\phi), G(h,\phi), B(h,\phi))^T, \qquad h, \phi \in \{0, \ldots, 511\}. \qquad (2)$$

Preprocessing of raw scans involves:
1. filling holes and removing spikes in the surface with an interactive tool,
2. automated 3D alignment of the faces with the method of 3D-3D Absolute Orientation [19],
3. semiautomatic trimming along the edge of a bathing cap, and
4. a vertical, planar cut behind the ears and a horizontal cut at the neck, to remove the back of the head and the shoulders.

3.2 Correspondence Based on Optic Flow

The core step of building a morphable face model is to establish dense point-to-point correspondence between each face and a reference face. The representation in cylindrical coordinates provides a parameterization of the two-dimensional manifold of the facial surface by parameters $h$ and $\phi$. Correspondence is given by a dense vector field $v(h, \phi) = (\Delta h(h,\phi), \Delta\phi(h,\phi))^T$ such that each point $I_1(h, \phi)$ on the first scan corresponds to the point $I_2(h + \Delta h, \phi + \Delta\phi)$ on the second scan. We employ a modified optic flow algorithm to determine this vector field. The following two sections describe the original algorithm and our modifications.

Optic Flow on Gray-Level Images. Many optic flow algorithms (e.g., [20], [25], [2]) are based on the assumption that objects in image sequences $I(x, y, t)$ retain their brightnesses as they move across the image at a velocity $(v_x, v_y)^T$. This implies
$$\frac{dI}{dt} = v_x \frac{\partial I}{\partial x} + v_y \frac{\partial I}{\partial y} + \frac{\partial I}{\partial t} = 0. \qquad (3)$$

For pairs of images $I_1, I_2$ taken at two discrete moments, the quantities $v_x$, $v_y$, and $\partial I / \partial t$ in (3) are approximated by finite differences $\Delta x$, $\Delta y$, and $\Delta I = I_2 - I_1$. If the images are not from a temporal sequence, but show two different objects, corresponding points can no longer be assumed to have equal brightnesses. Still, optic flow algorithms may be applied successfully.

A unique solution for both components of $v = (v_x, v_y)^T$ from (3) can be obtained if $v$ is assumed to be constant on each neighborhood $R(x_0, y_0)$, and the following expression [25], [2] is minimized in each point $(x_0, y_0)$:
$$E(x_0, y_0) = \sum_{x,y \in R(x_0, y_0)} \left[ v_x \frac{\partial I(x,y)}{\partial x} + v_y \frac{\partial I(x,y)}{\partial y} + \Delta I(x,y) \right]^2. \qquad (4)$$
We use a 5×5 pixel neighborhood $R(x_0, y_0)$. In each point $(x_0, y_0)$, $v(x_0, y_0)$ can be found by solving a 2×2 linear system (Appendix A).

In order to deal with large displacements $v$, the algorithm of Bergen and Hingorani [2] employs a coarse-to-fine strategy using Gaussian pyramids of downsampled images: With the gradient-based method described above, the algorithm computes the flow field on the lowest level of resolution and refines it on each subsequent level.
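A minimal single-scale sketch of the gradient-based step of (4), assuming gray-level images stored as NumPy arrays; the full algorithm wraps this per-point solve in a coarse-to-fine pyramid [2]:

```python
import numpy as np

def flow_at_point(I1, I2, x0, y0, half=2):
    """Solve (4) for the flow v at one point using a (2*half+1)^2 neighborhood."""
    ys, xs = np.mgrid[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    Ix = 0.5 * (I1[ys, xs + 1] - I1[ys, xs - 1])   # spatial derivatives of I1
    Iy = 0.5 * (I1[ys + 1, xs] - I1[ys - 1, xs])
    dI = I2[ys, xs] - I1[ys, xs]                   # finite difference Delta I
    W = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = np.array([np.sum(Ix * dI), np.sum(Iy * dI)])
    # W v = -b, cf. (24) in Appendix A; a pseudo-inverse covers the singular case.
    return -np.linalg.pinv(W) @ b
```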

Generalization to three-dimensional surfaces. For processing 3D laser scans $I(h, \phi)$, (4) is replaced by
$$E = \sum_{(h,\phi) \in R} \left\| v_h \frac{\partial I(h,\phi)}{\partial h} + v_\phi \frac{\partial I(h,\phi)}{\partial \phi} + \Delta I \right\|^2, \qquad (5)$$
with a norm
$$\| I \|^2 = w_r r^2 + w_R R^2 + w_G G^2 + w_B B^2. \qquad (6)$$
Weights $w_r$, $w_R$, $w_G$, and $w_B$ compensate for different variations within the radius data and the red, green, and blue texture components, and control the overall weighting of shape versus texture information. The weights are chosen heuristically. The minimum of (5) is again given by a 2×2 linear system (Appendix A).

Fig. 2. In 3D model fitting, light direction and intensity are estimated automatically, and cast shadows are taken into account. The figure shows original PIE images (top), reconstructions rendered into the originals (second row), and the same reconstructions rendered with standard illumination (third row) taken from the top right image.

Correspondence between scans of different individuals, who may differ in overall brightness and size, is improved by using Laplacian pyramids (band-pass filtering) rather than Gaussian pyramids (low-pass filtering). Additional quantities, such as Gaussian curvature, mean curvature, or surface normals, may be incorporated in $I(h, \phi)$. To obtain reliable results even in regions of the face with no salient structures, a specifically designed smoothing and interpolation algorithm (Appendix A.1) is added to the matching procedure on each level of resolution.

3.3 Definition of Face Vectors

The definition of shape and texture vectors is based on a reference face $I_0$, which can be any three-dimensional face model. Our reference face is a triangular mesh with 75,972 vertices derived from a laser scan. Let the vertices $k \in \{1, \ldots, n\}$ of this mesh be located at $(h_k, \phi_k, r(h_k, \phi_k))$ in cylindrical and at $(x_k, y_k, z_k)$ in Cartesian coordinates and have colors $(R_k, G_k, B_k)$. Reference shape and texture vectors are then defined by
$$S_0 = (x_1, y_1, z_1, x_2, \ldots, x_n, y_n, z_n)^T, \qquad (7)$$
$$T_0 = (R_1, G_1, B_1, R_2, \ldots, R_n, G_n, B_n)^T. \qquad (8)$$

To encode a novel scan $I$ (Fig. 3, bottom), we compute the flow field from $I_0$ to $I$, and convert $I(h', \phi')$ to Cartesian coordinates $x(h', \phi')$, $y(h', \phi')$, $z(h', \phi')$. Coordinates $(x_k, y_k, z_k)$ and color values $(R_k, G_k, B_k)$ for the shape and texture vectors $S$ and $T$ are then sampled at $h'_k = h_k + \Delta h(h_k, \phi_k)$, $\phi'_k = \phi_k + \Delta\phi(h_k, \phi_k)$.

3.4 Principal Component Analysis

We perform a Principal Component Analysis (PCA, see [12]) on the set of shape and texture vectors $S_i$ and $T_i$ of example faces $i = 1 \ldots m$. Ignoring the correlation between shape and texture data, we analyze shape and texture separately. For shape, we subtract the average $\bar{s} = \frac{1}{m}\sum_{i=1}^{m} S_i$ from each shape vector, $a_i = S_i - \bar{s}$, and define a data matrix $A = (a_1, a_2, \ldots, a_m)$.

The essential step of PCA is to compute the eigenvectors $s_1, s_2, \ldots$ of the covariance matrix $C = \frac{1}{m} A A^T = \frac{1}{m}\sum_{i=1}^{m} a_i a_i^T$, which can be achieved by a Singular Value Decomposition [31] of $A$. The eigenvalues of $C$, $\sigma_{S,1}^2 \geq \sigma_{S,2}^2 \geq \ldots$, are the variances of the data along each eigenvector. By the same procedure, we obtain texture eigenvectors $t_i$ and variances $\sigma_{T,i}^2$. Results are visualized in Fig. 4. The eigenvectors form an orthogonal basis,
$$S = \bar{s} + \sum_{i=1}^{m-1} \alpha_i s_i, \qquad T = \bar{t} + \sum_{i=1}^{m-1} \beta_i t_i, \qquad (9)$$
and PCA provides an estimate of the probability density within face space:
$$p_S(S) \sim e^{-\frac{1}{2}\sum_i \alpha_i^2 / \sigma_{S,i}^2}, \qquad p_T(T) \sim e^{-\frac{1}{2}\sum_i \beta_i^2 / \sigma_{T,i}^2}. \qquad (10)$$
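A short sketch of this PCA step via SVD, assuming the shape vectors are stacked in a NumPy array (the same routine applies to the texture vectors); names are illustrative:

```python
import numpy as np

def face_pca(S):
    """PCA of shape (or texture) vectors, cf. (9) and (10).

    S: (m, d) array of m example vectors of dimension d.
    Returns the mean, the eigenvectors (one per row), and the variances
    sigma^2 of the data along each eigenvector.
    """
    m = S.shape[0]
    s_bar = S.mean(axis=0)
    A = (S - s_bar).T                    # d x m data matrix of centered examples
    U, singular_values, _ = np.linalg.svd(A, full_matrices=False)
    variances = singular_values**2 / m   # eigenvalues of C = (1/m) A A^T
    return s_bar, U.T, variances

# Reconstructing a face from shape coefficients alpha, cf. (9):
# s_bar, eigvecs, variances = face_pca(S_examples)
# S_new = s_bar + alpha @ eigvecs
```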

3.5 Segments

From a given set of examples, a larger variety of different faces can be generated if linear combinations of shape and texture are formed separately for different regions of the face. In our system, these regions are the eyes, nose, mouth, and the surrounding area [8]. Once manually defined on the reference face, the segmentation applies to the entire morphable model.

For continuous transitions between the segments, we apply a modification of the image blending technique of [9]: $x, y, z$ coordinates and colors $R, G, B$ are stored in arrays $x(h, \phi), \ldots$ based on the mapping $i \to (h_i, \phi_i)$ of the reference face. The blending technique interpolates $x, y, z$ and $R, G, B$ across an overlap in the $(h, \phi)$-domain, which is large for low spatial frequencies and small for high frequencies.

Fig. 3. For 3D laser scans parameterized by cylindrical coordinates $(h, \phi)$, the flow field that maps each point of the reference face (top) to the corresponding point of the example (bottom) is used to form shape and texture vectors $S$ and $T$.

Fig. 4. The average and the first two principal components of a data set of 200 3D face scans, visualized by adding $\pm 3\sigma_{S,i} s_i$ and $\pm 3\sigma_{T,i} t_i$ to the average face.

4 MODEL-BASED IMAGE ANALYSIS

The goal of model-based image analysis is to represent a novel face in an image by model coefficients $\alpha_i$ and $\beta_i$ (9) and provide a reconstruction of 3D shape. Moreover, it automatically estimates all relevant parameters of the three-dimensional scene, such as pose, focal length of the camera, light intensity, color, and direction.

In an analysis-by-synthesis loop, the algorithm finds model parameters and scene parameters such that the model, rendered by computer graphics algorithms, produces an image as similar as possible to the input image $I_{input}$ (Fig. 5).^2 The iterative optimization starts from the average face and standard rendering conditions (front view, frontal illumination, cf. Fig. 6).

For initialization, the system currently requires image coordinates of about seven facial feature points, such as the corners of the eyes or the tip of the nose (Fig. 6). With an interactive tool, the user defines these points $j = 1 \ldots 7$ by alternately clicking on a point of the reference head to select a vertex $k_j$ of the morphable model and on the corresponding point $q_{x,j}, q_{y,j}$ in the image. Depending on what part of the face is visible in the image, different vertices $k_j$ may be selected for each image. Some salient features in images, such as the contour line of the cheek, cannot be attributed to a single vertex of the model, but depend on the particular viewpoint and shape of the face. The user can define such points in the image and label them as contours. During the fitting procedure, the algorithm determines potential contour points of the 3D model based on the angle between surface normal and viewing direction and selects the closest contour point of the model as $k_j$ in each iteration.

The following section summarizes the image synthesis from the model, and Section 4.2 describes the analysis-by-synthesis loop for parameter estimation.

4.1 Image Synthesis

The three-dimensional positions and the color values of the model's vertices are given by the coefficients $\alpha_i$ and $\beta_i$ and (9). Rendering an image includes the following steps.

4.1.1 Image Positions of Vertices

A rigid transformation maps the object-centered coordinates $\mathbf{x}_k = (x_k, y_k, z_k)^T$ of each vertex $k$ to a position relative to the camera:
$$(w_{x,k}, w_{y,k}, w_{z,k})^T = R_\gamma R_\theta R_\phi \mathbf{x}_k + t_w. \qquad (11)$$
The angles $\theta$ and $\phi$ control in-depth rotations around the vertical and horizontal axis, and $\gamma$ defines a rotation around the camera axis. $t_w$ is a spatial shift.

A perspective projection then maps vertex $k$ to image plane coordinates $p_{x,k}, p_{y,k}$:
$$p_{x,k} = P_x + f \frac{w_{x,k}}{w_{z,k}}, \qquad p_{y,k} = P_y - f \frac{w_{y,k}}{w_{z,k}}. \qquad (12)$$
$f$ is the focal length of the camera, which is located in the origin, and $(P_x, P_y)$ defines the image-plane position of the optical axis (principal point).
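A minimal sketch of (11) and (12) for a whole vertex array. The exact axis assignment of the three rotation matrices is an assumption of this sketch; the paper only states which angle is an in-depth rotation and which is a rotation around the camera axis:

```python
import numpy as np

def project_vertices(X, theta, phi, gamma, t_w, f, Px, Py):
    """Rigid transform (11) and perspective projection (12) of mesh vertices.

    X: (n, 3) array of object-centered vertex coordinates; t_w: 3D translation.
    """
    c, s = np.cos, np.sin
    R_phi = np.array([[1, 0, 0], [0, c(phi), -s(phi)], [0, s(phi), c(phi)]])            # horizontal axis (assumed)
    R_theta = np.array([[c(theta), 0, s(theta)], [0, 1, 0], [-s(theta), 0, c(theta)]])  # vertical axis (assumed)
    R_gamma = np.array([[c(gamma), -s(gamma), 0], [s(gamma), c(gamma), 0], [0, 0, 1]])  # camera axis
    W = X @ (R_gamma @ R_theta @ R_phi).T + t_w   # camera-centered coordinates, (11)
    px = Px + f * W[:, 0] / W[:, 2]               # perspective projection, (12)
    py = Py - f * W[:, 1] / W[:, 2]
    return np.stack([px, py], axis=1)
```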

4.1.2 Illumination and Color

Shading of surfaces depends on the direction of the surface normals $\mathbf{n}$. The normal vector to a triangle $k_1 k_2 k_3$ of the face mesh is given by a vector product of the edges, $(\mathbf{x}_{k_1} - \mathbf{x}_{k_2}) \times (\mathbf{x}_{k_1} - \mathbf{x}_{k_3})$, which is normalized to unit length and rotated along with the head (11). For fitting the model to an image, it is sufficient to consider the centers of triangles only, most of which are about 0.2 mm² in size. The three-dimensional coordinate and color of the center are the arithmetic means of the corners' values. In the following, we do not formally distinguish between triangle centers and vertices $k$.

2. Fig. 5 is illustrated with linear combinations of example faces according to (1) rather than principal components (9) for visualization.

Fig. 5. The goal of the fitting process is to find shape and texture coefficients $\alpha_i$ and $\beta_i$ describing a three-dimensional face model such that rendering $R$ produces an image $I_{model}$ that is as similar as possible to $I_{input}$.

Fig. 6. Face reconstruction from a single image (top, left) and a set of feature points (top, center): Starting from standard pose and illumination (top, right), the algorithm computes a rigid transformation and a slight deformation to fit the features. Subsequently, illumination is estimated. Shape, texture, transformation, and illumination are then optimized for the entire face and refined for each segment (second row). From the reconstructed face, novel views can be generated (bottom row).

The face is illuminated by ambient light with red, green, and blue intensities $L_{r,amb}$, $L_{g,amb}$, $L_{b,amb}$ and by directed, parallel light with intensities $L_{r,dir}$, $L_{g,dir}$, $L_{b,dir}$ from a direction $\mathbf{l}$ defined by two angles $\theta_l$ and $\phi_l$:
$$\mathbf{l} = (\cos(\theta_l)\sin(\phi_l),\ \sin(\theta_l),\ \cos(\theta_l)\cos(\phi_l))^T. \qquad (13)$$
The illumination model of Phong (see [14]) approximately describes the diffuse and specular reflection of a surface. In each vertex $k$, the red channel is
$$L_{r,k} = R_k \cdot L_{r,amb} + R_k \cdot L_{r,dir} \cdot \langle \mathbf{n}_k, \mathbf{l} \rangle + k_s \cdot L_{r,dir} \cdot \langle \mathbf{r}_k, \hat{\mathbf{v}}_k \rangle^{\nu}, \qquad (14)$$
where $R_k$ is the red component of the diffuse reflection coefficient stored in the texture vector $T$, $k_s$ is the specular reflectance, $\nu$ defines the angular distribution of the specular reflections, $\hat{\mathbf{v}}_k$ is the viewing direction, and $\mathbf{r}_k = 2 \langle \mathbf{n}_k, \mathbf{l} \rangle \mathbf{n}_k - \mathbf{l}$ is the direction of maximum specular reflection [14].

Input images may vary a lot with respect to the overall tone of color. In order to be able to handle a variety of color images as well as gray-level images and even paintings, we apply gains $g_r, g_g, g_b$, offsets $o_r, o_g, o_b$, and a color contrast $c$ to each channel. The overall luminance $L$ of a colored point is [14]
$$L = 0.3 \cdot L_r + 0.59 \cdot L_g + 0.11 \cdot L_b. \qquad (15)$$
Color contrast interpolates between the original color value and this luminance, so, for the red channel, we set
$$I_r = g_r \cdot (c L_r + (1 - c) L) + o_r. \qquad (16)$$
Green and blue channels are computed in the same way. The colors $I_r$, $I_g$, and $I_b$ are drawn at a position $(p_x, p_y)$ in the final image $I_{model}$.

Visibility of each point is tested with a z-buffer algorithm, and cast shadows are calculated with another z-buffer pass relative to the illumination direction (see, for example, [14]).
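A sketch of (13)-(16) for one color channel, assuming per-vertex normals and viewing directions are already available; clamping the dot products at zero is an assumption of this sketch (standard in rendering) rather than something stated in the text:

```python
import numpy as np

def shade_channel(albedo, normals, l, view_dirs, L_amb, L_dir, k_s, nu):
    """Per-vertex Phong shading of one color channel, cf. (14).

    albedo: (n,) diffuse reflection coefficients from the texture vector.
    normals, view_dirs: (n, 3) unit surface normals and viewing directions.
    l: unit light direction from (13); L_amb, L_dir: channel intensities.
    """
    n_dot_l = np.clip(normals @ l, 0.0, None)               # diffuse term (clamp assumed)
    r = 2.0 * n_dot_l[:, None] * normals - l                 # direction of maximum specular reflection
    r_dot_v = np.clip(np.sum(r * view_dirs, axis=1), 0.0, None)
    return albedo * L_amb + albedo * L_dir * n_dot_l + k_s * L_dir * r_dot_v**nu

def color_transform(L_r, L_g, L_b, gains, offsets, c):
    """Color contrast and per-channel gain/offset, cf. (15) and (16)."""
    L = 0.3 * L_r + 0.59 * L_g + 0.11 * L_b                  # overall luminance (15)
    return [g * (c * Lx + (1.0 - c) * L) + o                  # (16) per channel
            for Lx, g, o in zip((L_r, L_g, L_b), gains, offsets)]
```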

4.2 Fitting the Model to an Image

The fitting algorithm optimizes shape coefficients $\alpha = (\alpha_1, \alpha_2, \ldots)^T$ and texture coefficients $\beta = (\beta_1, \beta_2, \ldots)^T$ along with 22 rendering parameters, concatenated into a vector $\rho$: pose angles $\theta$, $\phi$, and $\gamma$, 3D translation $t_w$, focal length $f$, ambient light intensities $L_{r,amb}, L_{g,amb}, L_{b,amb}$, directed light intensities $L_{r,dir}, L_{g,dir}, L_{b,dir}$, the angles $\theta_l$ and $\phi_l$ of the directed light, color contrast $c$, and gains and offsets of color channels $g_r, g_g, g_b$, $o_r, o_g, o_b$.

4.2.1 Cost Function

Given an input image
$$I_{input}(x, y) = (I_r(x,y), I_g(x,y), I_b(x,y))^T,$$
the primary goal in analyzing a face is to minimize the sum of square differences over all color channels and all pixels between this image and the synthetic reconstruction,
$$E_I = \sum_{x,y} \| I_{input}(x,y) - I_{model}(x,y) \|^2. \qquad (17)$$

The first iterations exploit the manually defined feature points $(q_{x,j}, q_{y,j})$ and the positions $(p_{x,k_j}, p_{y,k_j})$ of the corresponding vertices $k_j$ in an additional function
$$E_F = \sum_j \left\| \begin{pmatrix} q_{x,j} \\ q_{y,j} \end{pmatrix} - \begin{pmatrix} p_{x,k_j} \\ p_{y,k_j} \end{pmatrix} \right\|^2. \qquad (18)$$

Minimization of these functions with respect to $\alpha$, $\beta$, $\rho$ may cause overfitting effects similar to those observed in regression problems (see, for example, [12]). We therefore employ a maximum a posteriori estimator (MAP): Given the input image $I_{input}$ and the feature points $F$, the task is to find model parameters with maximum posterior probability $p(\alpha, \beta, \rho \mid I_{input}, F)$. According to Bayes rule,
$$p(\alpha, \beta, \rho \mid I_{input}, F) \sim p(I_{input}, F \mid \alpha, \beta, \rho) \cdot P(\alpha, \beta, \rho). \qquad (19)$$
If we neglect correlations between some of the variables, the right-hand side is
$$p(I_{input} \mid \alpha, \beta, \rho) \cdot p(F \mid \alpha, \beta, \rho) \cdot P(\alpha) \cdot P(\beta) \cdot P(\rho). \qquad (20)$$
The prior probabilities $P(\alpha)$ and $P(\beta)$ were estimated with PCA (10). We assume that $P(\rho)$ is a normal distribution and use the starting values for $\bar{\rho}_i$ and ad hoc values for $\sigma_{R,i}$.

For Gaussian pixel noise with a standard deviation $\sigma_I$, the likelihood of observing $I_{input}$, given $\alpha, \beta, \rho$, is a product of one-dimensional normal distributions, with one distribution for each pixel and each color channel. This can be rewritten as $p(I_{input} \mid \alpha, \beta, \rho) \sim \exp(-\frac{1}{2\sigma_I^2} \cdot E_I)$. In the same way, feature point coordinates may be subject to noise, so $p(F \mid \alpha, \beta, \rho) \sim \exp(-\frac{1}{2\sigma_F^2} \cdot E_F)$.

Posterior probability is then maximized by minimizing
$$E = -2 \log p(\alpha, \beta, \rho \mid I_{input}, F),$$
$$E = \frac{1}{\sigma_I^2} E_I + \frac{1}{\sigma_F^2} E_F + \sum_i \frac{\alpha_i^2}{\sigma_{S,i}^2} + \sum_i \frac{\beta_i^2}{\sigma_{T,i}^2} + \sum_i \frac{(\rho_i - \bar{\rho}_i)^2}{\sigma_{R,i}^2}. \qquad (21)$$
Ad hoc choices of $\sigma_I$ and $\sigma_F$ are used to control the relative weights of $E_I$, $E_F$, and the prior probability terms in (21). At the beginning, prior probability and $E_F$ are weighted high. The final iterations put more weight on $E_I$ and no longer rely on $E_F$.
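A direct transcription of (21) as a sketch, assuming the current image error $E_I$ and feature error $E_F$ have already been evaluated; parameter names are illustrative:

```python
import numpy as np

def fitting_cost(E_I, E_F, alpha, beta, rho, rho_bar,
                 sigma_I, sigma_F, sigma_S, sigma_T, sigma_R):
    """MAP cost function (21): image error, feature error, and prior terms.

    sigma_S, sigma_T, sigma_R: per-coefficient standard deviations of the priors.
    """
    prior_shape = np.sum(alpha**2 / sigma_S**2)
    prior_texture = np.sum(beta**2 / sigma_T**2)
    prior_rendering = np.sum((rho - rho_bar)**2 / sigma_R**2)
    return (E_I / sigma_I**2 + E_F / sigma_F**2
            + prior_shape + prior_texture + prior_rendering)
```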

4.2.2 Optimization Procedure

The core of the fitting procedure is a minimization of the cost function (21) with a stochastic version of Newton's method (Appendix B). The stochastic optimization avoids local minima by searching a larger portion of parameter space and reduces computation time: In $E_I$, contributions of the pixels of the entire image would be redundant. Therefore, the algorithm selects a set $K$ of 40 random triangles in each iteration and evaluates $E_I$ and its gradient only at their centers:

$$E_{I,approx.} = \sum_{k \in K} \| I_{input}(p_{x,k}, p_{y,k}) - I_{model,k} \|^2. \qquad (22)$$

To make the expectation value of $E_{I,approx.}$ equal to $E_I$, we set the probability of selecting a particular triangle proportional to its area in the image. Areas are calculated along with occlusions and cast shadows at the beginning of the process and once every 1,000 iterations by rendering the entire face model.
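A minimal sketch of this area-proportional sampling of the random triangle set $K$ used in (22); the function name and interface are illustrative:

```python
import numpy as np

def sample_triangles(areas, num_samples=40, rng=None):
    """Draw the random triangle set K for (22).

    Selecting each triangle with probability proportional to its current
    image area makes the expectation of E_I,approx. equal to E_I.
    'areas' holds the image areas of all triangles (zero for occluded ones).
    """
    rng = rng or np.random.default_rng()
    p = areas / areas.sum()
    return rng.choice(len(areas), size=num_samples, p=p)
```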

The fitting algorithm computes the gradient of the cost function (21), (22) analytically using the chain rule. Texture coefficients $\beta_i$ and illumination parameters only influence the color values $I_{model,k}$ of a vertex. Shape coefficients $\alpha_i$ and rigid transformation, however, influence both the image coordinates $(p_{x,k}, p_{y,k})$ and color values $I_{model,k}$ due to the effect of geometry on surface normals and shading (14).

The first iterations only optimize the first parameters $\alpha_i, \beta_i, i \in \{1, \ldots, 10\}$, and all parameters $\rho_i$. Subsequent iterations consider more and more coefficients. From the principal components of a database of 200 faces, we only use the most relevant 99 coefficients $\alpha_i$, $\beta_i$. After fitting the entire face model to the image, the eyes, nose, mouth, and the surrounding region (Section 3.5) are optimized separately. The fitting process takes 4.5 minutes on a workstation with a 2 GHz Pentium 4 processor.

5 RESULTS

Model fitting and identification were tested on two publicly available databases of images. The individuals in these databases are not contained in the set of 3D scans that form the morphable face model (Section 3.1).

The colored images in the PIE database from CMU [33] vary in pose and illumination. We selected the portion of this database where each of 68 individuals is photographed from three viewpoints (front, side, and profile, labeled as camera 27, 05, 22) and at 22 different illuminations (66 images per individual). Illuminations include flashes from different directions and one condition with ambient light only.

From the gray-level images of the FERET database [29], we selected a portion that contains 11 poses (labeled ba-bk) per individual. We discarded pose bj, where participants have various facial expressions. The remaining 10 views, most of them at a neutral expression, are available for 194 individuals (labeled 01013-01206). While illumination in images ba-bj is fixed, bk is recorded at a different illumination.

Both databases cover a wide ethnic variety. Some of the faces are partially occluded by hair and some individuals wear glasses (28 in the CMU-PIE database, none in the FERET database). We do not explicitly compensate for these effects. Optimizing the overall appearance, the algorithm tends to ignore image structures that are not represented by the morphable model.

5.1 Results of Model Fitting

The reconstruction algorithm was run on all 4,488 PIE and 1,940 FERET images. For all images, the starting condition was the average face at a front view, with frontal illumination, rendered in color from a viewing distance of two meters (Fig. 6).

On each image, we manually defined between six and eight feature points (Fig. 7). For each viewing direction, there was a standard set of feature points, such as the corners of the eyes, the tip of the nose, corners of the mouth, ears, and up to three points on the contour (cheeks, chin, and forehead). If any of these were not visible in an image, the fitting algorithm was provided with fewer point coordinates.

Results of 3D face reconstruction are shown in Figs. 8 and 9. The algorithm had to cope with a large variety of illuminations. In the third column of Fig. 9, part of the specular reflections were attributed to texture by the algorithm. This may be due to shortcomings of the Phong illumination model for reflection at grazing angles or to a prior probability that penalizes illumination from behind too much.

The influence of different illuminations is shown in a comparison in Fig. 2. The fitting algorithm adapts to different illuminations, and we can generate standard images with fixed illumination from the reconstructions. In Fig. 2, the standard illumination conditions are the estimates obtained from a photograph (top right).

For each image, the fitting algorithm provides an estimate of pose angle. Heads in the CMU-PIE database are not fully aligned in space, but, since front, side, and profile images are taken simultaneously, the relative angles between views should be constant. Table 1 shows that the error of pose estimates is within a few degrees.

5.2 Recognition From Model Coefficients

For face recognition according to Paradigm 1 described in Section 2, we represent shape and texture by a set of coefficients $\alpha = (\alpha_1, \ldots, \alpha_{99})^T$ and $\beta = (\beta_1, \ldots, \beta_{99})^T$ for the entire face and one set $\alpha$, $\beta$ for each of the four segments of the face (Section 3.5). Rescaled according to the standard deviations $\sigma_{S,i}$, $\sigma_{T,i}$ of the 3D examples (Section 3.4), we combine all of these $5 \cdot 2 \cdot 99 = 990$ coefficients $\frac{\alpha_i}{\sigma_{S,i}}$, $\frac{\beta_i}{\sigma_{T,i}}$ into a vector $\mathbf{c} \in \mathbb{R}^{990}$.

Comparing two faces $\mathbf{c}_1$ and $\mathbf{c}_2$, we can use the sum of the Mahalanobis distances [12] of the segments' shapes and textures, $d_M = \| \mathbf{c}_1 - \mathbf{c}_2 \|^2$. An alternative measure for similarity is the cosine of the angle between two vectors [6], [27]: $d_A = \frac{\langle \mathbf{c}_1, \mathbf{c}_2 \rangle}{\| \mathbf{c}_1 \| \cdot \| \mathbf{c}_2 \|}$.

Fig. 7. Up to seven feature points were manually labeled in front and side views, up to eight were labeled in profile views.

Another similarity measure that is evaluated in the following section takes into account variations of model coefficients obtained from different images of the same person. These variations may be due to ambiguities of the fitting problem, such as skin complexion versus intensity of illumination, and residual errors of optimization. Estimated from the CMU-PIE database, we apply these variations to the FERET images and vice versa, using a method motivated by Maximum-Likelihood Classifiers and Linear Discriminant Analysis (see [12]): Deviations of each person's coefficients $\mathbf{c}$ from their individual average are pooled and analyzed by PCA. The covariance matrix $C_W$ of this within-subject variation then defines
$$d_W = \frac{\langle \mathbf{c}_1, \mathbf{c}_2 \rangle_W}{\| \mathbf{c}_1 \|_W \cdot \| \mathbf{c}_2 \|_W}, \quad \text{with} \quad \langle \mathbf{c}_1, \mathbf{c}_2 \rangle_W = \langle \mathbf{c}_1, C_W^{-1} \mathbf{c}_2 \rangle. \qquad (23)$$

5.3 Recognition Performance

For evaluation on the CMU-PIE data set, we used a front, side, and profile gallery, respectively. Each gallery contained one view per person, at illumination number 13. The gallery for the FERET set was formed by one front view (pose ba) per person. The gallery and probe sets are always disjoint, but show the same individuals.

Fig. 8. Reconstructions of 3D shape and texture from FERET images (top row). In the second row, results are rendered into the original images with pose and illumination recovered by the algorithm. The third row shows novel views.

Fig. 9. Three-dimensional reconstructions from CMU-PIE images. Top: originals, middle: reconstructions rendered into originals, bottom: novel views. The pictures shown here are difficult due to harsh illumination, profile views, or eye glasses. Illumination in the third image is not fully recovered, so part of the reflections are attributed to texture.

Table 2 provides a comparison of $d_M$, $d_A$, and $d_W$ for identification (Section 2). $d_W$ is clearly superior to $d_M$ and $d_A$. All subsequent data are therefore based on $d_W$. The higher performance of angular measures ($d_W$ and $d_A$) compared to $d_M$ indicates that directions of coefficient vectors $\mathbf{c}$, relative to the average face $\mathbf{c} = 0$, are diagnostic for faces, while distances from the average may vary, causing variations in $d_M$. In our MAP approach, this may be due to the trade-off between likelihood and prior probability ((19) and (21)): Depending on image quality, this may produce distinctive or conservative estimates.

A detailed comparison of different probe and gallery views for the PIE database is given in Table 3. In an identification task, performance is measured on probe sets of 68·21 images if probe and gallery viewpoint is equal (yet illumination differs; diagonal cells in the table) and 68·22 images otherwise (off-diagonal cells). Overall performance is best for the side-view gallery (95.0 percent correct). Table 4 lists the percentages of correct identifications on the FERET set, based on front view gallery images ba, along with the estimated head poses obtained from fitting. In total, identification was correct in 95.9 percent of the trials.

Fig. 10 shows face recognition ROC curves [12] for a verification task (Section 2): Given pairs of images of the same person (one probe and one gallery image), hit rate is the percentage of correct verifications. Given pairs of images of different persons, false alarm rate is the percentage that is falsely accepted as the same person. For the CMU-PIE database, gallery images were side views (camera 05, light 13), the probe set was all 4,420 other images. For FERET, front views ba were gallery, and all other 1,746 images were probe images. At 1 percent false alarm rate, the hit rate is 77.5 percent for CMU-PIE and 87.9 percent for FERET.

TABLE 1
The Precision of Pose Estimates in Terms of the Rotation Angle between Two Views for Each Individual in the CMU-PIE Database
Angles are a 3D combination of $\theta$, $\phi$, and $\gamma$. The table lists averages and standard deviations, based on 68 individuals, for illumination number 13. True angles are computed from the 3D coordinates provided with the database.

TABLE 2
Overall Percentage of Successful Identifications for Different Criteria of Comparing Faces
For CMU-PIE images, data were computed for the side view gallery.

TABLE 3
Mean Percentages of Correct Identification on the CMU-PIE Data Set, Averaged over All Lighting Conditions, for Front, Side, and Profile View Galleries
In brackets are percentages for the worst and best illumination within each probe set.

TABLE 4
Percentages of Correct Identification on the FERET Data Set
Gallery images were front views ba. $\bar{\phi}$ is the average estimated pose angle.

Fig. 10. ROC curves of verification across pose and illumination from a single side view for the CMU-PIE data set (a) and from a front view for FERET (b). At 1 percent false alarm rate, hit rate is 77.5 percent for CMU-PIE and 87.9 percent for FERET.

6 CONCLUSIONS

In this paper, we have addressed three issues: 1) learning class-specific information about human faces from a data set of examples, 2) estimating 3D shape and texture, along with all relevant 3D scene parameters, from a single image at any pose and illumination, and 3) representing and comparing faces for recognition tasks. Tested on two databases of images covering large variations in pose and illumination, our algorithm achieved promising results (95.0 and 95.9 percent correct identifications, respectively). This indicates that the 3D morphable model is a powerful and versatile representation for human faces. In image analysis, our explicit modeling of imaging parameters, such as head orientation and illumination, may help to achieve an invariant description of the identity of faces.

It is straightforward to extend our morphable model to different ages, ethnic groups, and facial expressions by including face vectors from more 3D scans. Our system currently ignores glasses, beards, or strands of hair covering part of the face, which are found in many images of the CMU-PIE and FERET sets. Considering these effects in the algorithm may improve 3D reconstructions and identification.

Future work will also concentrate on automated initialization and a faster fitting procedure. In applications that require a fully automated system, our algorithm may be combined with an additional feature detector. For applications where manual interaction is permissible, we have presented a complete image analysis system.

APPENDIX A
OPTIC FLOW CALCULATION

Optic flow $v$ between gray-level images at a given point $(x_0, y_0)$ can be defined as the minimum $v$ of a quadratic function (4). This minimum is given by [25], [2]
$$W v = -b, \qquad (24)$$
$$W = \begin{pmatrix} \sum (\partial_x I)^2 & \sum \partial_x I \cdot \partial_y I \\ \sum \partial_x I \cdot \partial_y I & \sum (\partial_y I)^2 \end{pmatrix}, \qquad b = \begin{pmatrix} \sum \partial_x I \cdot \Delta I \\ \sum \partial_y I \cdot \Delta I \end{pmatrix}.$$
$v$ is easy to find by means of a diagonalization of the 2×2 symmetrical matrix $W$.

For 3D laser scans, the minimum of (5) is again given by (24), but now
$$W = \begin{pmatrix} \sum \| \partial_h I \|^2 & \sum \langle \partial_h I, \partial_\phi I \rangle \\ \sum \langle \partial_h I, \partial_\phi I \rangle & \sum \| \partial_\phi I \|^2 \end{pmatrix}, \qquad b = \begin{pmatrix} \sum \langle \partial_h I, \Delta I \rangle \\ \sum \langle \partial_\phi I, \Delta I \rangle \end{pmatrix}, \qquad (25)$$
using the scalar product related to (6). $v$ is found by diagonalizing $W$.

A.1 Smoothing and Interpolation of Flow Fields

On regions of the face where both shape and texture are almost uniform, optic flow produces noisy and unreliable results. The desired flow field would be a smooth interpolation between the flow vectors of more reliable regions, such as the eyes and the mouth. We therefore apply a method that is motivated by a set of connected springs or a continuous membrane that is fixed to reliable landmark points, sliding along reliably matched edges, and free to assume a minimum energy state everywhere else. Adjacent flow vectors of the smooth flow field $v_s(h, \phi)$ are connected by a potential
$$E_c = \sum_h \sum_\phi \| v_s(h+1, \phi) - v_s(h, \phi) \|^2 + \sum_h \sum_\phi \| v_s(h, \phi+1) - v_s(h, \phi) \|^2. \qquad (26)$$

The coupling of $v_s(h, \phi)$ to the original flow field $v_0(h, \phi)$ depends on the rank of the 2×2 matrix $W$ in (25), which determines if (24) has a unique solution or not: Let $\lambda_1 \geq \lambda_2$ be the two eigenvalues of $W$ and $a_1$, $a_2$ be the eigenvectors. Choosing a threshold $s \geq 0$, we set
$$E_0(h, \phi) = \begin{cases} 0 & \text{if } \lambda_1, \lambda_2 < s \\ \langle a_1, v_s(h,\phi) - v_0(h,\phi) \rangle^2 & \text{if } \lambda_1 \geq s > \lambda_2 \\ \| v_s(h,\phi) - v_0(h,\phi) \|^2 & \text{if } \lambda_1, \lambda_2 \geq s. \end{cases}$$

In the first case, which occurs if $W \approx 0$ and $\partial_h I, \partial_\phi I \approx 0$ in $R$, the output $v_s$ will only be controlled by its neighbors. The second case occurs if (24) restricts $v_0$ only in one direction $a_1$. This happens if there is a consistent edge structure within $R$, and the derivatives of $I$ are linearly dependent in $R$. $v_s$ is then free to slide along the edge. In the third case, $v_0$ is uniquely defined by (24) and, therefore, $v_s$ is restricted in all directions. To compute $v_s$, we apply Conjugate Gradient Descent [31] to minimize the energy
$$E = \eta E_c + \sum_{h, \phi} E_0(h, \phi).$$
Both the weight factor $\eta$ and the threshold $s$ are chosen heuristically. During optimization, flow vectors from reliable, high-contrast regions propagate to low-contrast regions, producing a smooth interpolation. Smoothing is performed at each level of resolution after the gradient-based estimation of correspondence.
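A sketch of the per-point data term $E_0(h, \phi)$, assuming the 2×2 matrix $W$ from (25) is available at each point; the eigenvalue ordering returned by `eigh` is used to pick the case:

```python
import numpy as np

def flow_reliability_term(W, v_s, v_0, s):
    """Data term E_0(h, phi) at one point, based on the eigenvalues of W (25).

    Returns 0 where the scan is featureless, a 1D constraint along the
    dominant eigenvector where only an edge is present, and the full
    squared deviation where the flow is uniquely determined.
    """
    lam, vecs = np.linalg.eigh(W)      # eigenvalues in ascending order
    lam2, lam1 = lam[0], lam[1]        # lam1 >= lam2
    a1 = vecs[:, 1]                    # eigenvector of the larger eigenvalue
    d = v_s - v_0
    if lam1 < s and lam2 < s:
        return 0.0
    if lam1 >= s > lam2:
        return float(a1 @ d) ** 2
    return float(d @ d)
```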

APPENDIX B
STOCHASTIC NEWTON ALGORITHM

For the optimization of the cost function (21), we developed a stochastic version of Newton's algorithm [5] similar to stochastic gradient descent [32], [37], [22]. In each iteration, the algorithm computes $E_I$ only at 40 random surface points (Section 4.2). The first derivatives of $E_I$ are computed analytically on these random points.

Newton's method optimizes a cost function $E$ with respect to parameters $\rho_j$ based on the gradient $\nabla E$ and the Hessian $H$, $H_{i,j} = \frac{\partial^2 E}{\partial \rho_i \partial \rho_j}$. The optimum is
$$\rho^* = \rho - H^{-1} \nabla E. \qquad (27)$$

For simplification, we consider $\rho_i$ as a general set of model parameters here and suppress $\alpha$, $\beta$. Equation (21) is then
$$E(\rho) = \frac{1}{\sigma_I^2} E_I(\rho) + \frac{1}{\sigma_F^2} E_F(\rho) + \sum_i \frac{(\rho_i - \bar{\rho}_i)^2}{\sigma_{S,i}^2} \qquad (28)$$
and
$$\nabla E = \frac{1}{\sigma_I^2} \frac{\partial E_I}{\partial \rho_i} + \frac{1}{\sigma_F^2} \frac{\partial E_F}{\partial \rho_i} + \mathrm{diag}\!\left(\frac{2}{\sigma_{S,i}^2}\right)(\rho - \bar{\rho}). \qquad (29)$$
The diagonal elements of $H$ are
$$H_{i,i} = \frac{1}{\sigma_I^2} \frac{\partial^2 E_I}{\partial \rho_i^2} + \frac{1}{\sigma_F^2} \frac{\partial^2 E_F}{\partial \rho_i^2} + \frac{2}{\sigma_{S,i}^2}. \qquad (30)$$

These second derivatives are computed by numerical differentiation from the analytically calculated first derivatives, based on 300 random vertices, at the beginning of the optimization and once every 1,000 iterations. The Hessian captures information about an appropriate order of magnitude of updates in each coefficient. In the stochastic Newton algorithm, gradients are estimated from 40 points and the updates in each iteration do not need to be precise. We therefore ignore off-diagonal elements (see [5]) of $H$ and set $H^{-1} \approx \mathrm{diag}(1 / H_{i,i})$. With (27), the estimated optimum is
$$\rho_i^* = \frac{\frac{1}{\sigma_I^2} \frac{\partial^2 E_I}{\partial \rho_i^2} \rho_i + \frac{1}{\sigma_F^2} \frac{\partial^2 E_F}{\partial \rho_i^2} \rho_i - \frac{1}{\sigma_I^2} \frac{\partial E_I}{\partial \rho_i} - \frac{1}{\sigma_F^2} \frac{\partial E_F}{\partial \rho_i} + \frac{2}{\sigma_{S,i}^2} \bar{\rho}_i}{\frac{1}{\sigma_I^2} \frac{\partial^2 E_I}{\partial \rho_i^2} + \frac{1}{\sigma_F^2} \frac{\partial^2 E_F}{\partial \rho_i^2} + \frac{2}{\sigma_{S,i}^2}}. \qquad (31)$$
In each iteration, we perform small steps $\rho \to \rho + \lambda (\rho^* - \rho)$ with a factor $\lambda \ll 1$.
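A minimal sketch of one damped, diagonal-Hessian update, cf. (27) and (31); the gradient and diagonal Hessian are assumed to have been estimated from the random surface points, and the value of the damping factor is an arbitrary placeholder:

```python
import numpy as np

def stochastic_newton_step(rho, grad_E, H_diag, lam=0.1):
    """One diagonal-Hessian Newton update of the parameter vector rho.

    grad_E: gradient of (21)/(22) estimated from the sampled triangles.
    H_diag: diagonal Hessian entries from (30).
    lam (<< 1) damps the step, as described after (31).
    """
    rho_star = rho - grad_E / H_diag        # H^-1 approximated by diag(1 / H_ii)
    return rho + lam * (rho_star - rho)     # small step toward the estimated optimum
```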

ACKNOWLEDGMENTS

The database of laser scans was recorded by N. Troje in the group of H.H. Bülthoff at MPI for Biological Cybernetics, Tübingen. Portions of the research in this paper use the FERET database of facial images collected under the FERET program, and the CMU-PIE database. The authors wish to thank everyone involved in collecting these data. The authors thank T. Poggio and S. Romdhani for many discussions and the reviewers for useful suggestions, including the title of the paper. This work was partially funded by the DARPA HumanID project.

REFERENCES

[1] J.J. Atick, P.A. Griffin, and A.N. Redlich, "Statistical Approach to Shape from Shading: Reconstruction of 3D Face Surfaces from Single 2D Images," Computation in Neurological Systems, vol. 7, no. 1, 1996.
[2] J.R. Bergen and R. Hingorani, "Hierarchical Motion-Based Frame Rate Conversion," technical report, David Sarnoff Research Center, Princeton, N.J., 1990.
[3] D. Beymer and T. Poggio, "Face Recognition from One Model View," Proc. Fifth Int'l Conf. Computer Vision, 1995.
[4] D. Beymer and T. Poggio, "Image Representations for Visual Learning," Science, vol. 272, pp. 1905-1909, 1996.
[5] C.M. Bishop, Neural Networks for Pattern Recognition. Oxford Univ. Press, 1995.
[6] V. Blanz, "Automatische Rekonstruktion der dreidimensionalen Form von Gesichtern aus einem Einzelbild," PhD thesis, Tübingen, Germany, 2000.
[7] V. Blanz, S. Romdhani, and T. Vetter, "Face Identification across Different Poses and Illuminations with a 3D Morphable Model," Proc. Fifth Int'l Conf. Automatic Face and Gesture Recognition, pp. 202-207, 2002.
[8] V. Blanz and T. Vetter, "A Morphable Model for the Synthesis of 3D Faces," Computer Graphics Proc. SIGGRAPH '99, pp. 187-194, 1999.
[9] P.J. Burt and E.H. Adelson, "Merging Images through Pattern Decomposition," Proc. Applications of Digital Image Processing VIII, no. 575, pp. 173-181, 1985.
[10] C.S. Choi, T. Okazaki, H. Harashima, and T. Takebe, "A System of Analyzing and Synthesizing Facial Images," Proc. IEEE Int'l Symp. Circuits and Systems (ISCAS '91), pp. 2665-2668, 1991.
[11] T.F. Cootes, K. Walker, and C.J. Taylor, "View-Based Active Appearance Models," Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 227-232, 2000.
[12] R.O. Duda, P.E. Hart, and D.G. Stork, Pattern Classification, second ed. John Wiley & Sons, 2001.
[13] G.J. Edwards, T.F. Cootes, and C.J. Taylor, "Face Recognition Using Active Appearance Models," Proc. European Conf. Computer Vision (ECCV '98), 1998.
[14] J.D. Foley, A. van Dam, S.K. Feiner, and J.F. Hughes, Computer Graphics: Principles and Practice, second ed. Addison-Wesley, 1996.
[15] A.S. Georghiades, P.N. Belhumeur, and D.J. Kriegman, "From Few to Many: Illumination Cone Models for Face Recognition Under Variable Lighting and Pose," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643-660, 2001.
[16] D.B. Graham and N.M. Allison, "Face Recognition from Unfamiliar Views: Subspace Methods and Pose Dependency," Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 348-353, 1998.
[17] R. Gross, I. Matthews, and S. Baker, "Eigen Light-Fields and Face Recognition Across Pose," Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 3-9, 2002.
[18] P.W. Hallinan, "A Deformable Model for the Recognition of Human Faces under Arbitrary Illumination," PhD thesis, Harvard Univ., Cambridge, Mass., 1995.
[19] R.M. Haralick and L.G. Shapiro, Computer and Robot Vision, vol. 2. Addison-Wesley, 1992.
[20] B.K.P. Horn and B.G. Schunck, "Determining Optical Flow," Artificial Intelligence, vol. 17, pp. 185-203, 1981.
[21] T.S. Huang and L.A. Tang, "3D Face Modeling and Its Applications," Int'l J. Pattern Recognition and Artificial Intelligence, vol. 10, no. 5, pp. 491-519, 1996.
[22] M. Jones and T. Poggio, "Multidimensional Morphable Models: A Framework for Representing and Matching Object Classes," Int'l J. Computer Vision, vol. 29, no. 2, pp. 107-131, 1998.
[23] A. Lanitis, C.J. Taylor, and T.F. Cootes, "Automatic Face Identification System Using Flexible Appearance Models," Image and Vision Computing, vol. 13, no. 5, pp. 393-401, 1995.
[24] D.G. Lowe, "Fitting Parameterized Three-Dimensional Models to Images," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 13, no. 5, pp. 441-450, May 1991.
[25] B.D. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision," Proc. Int'l Joint Conf. Artificial Intelligence, pp. 674-679, 1981.
[26] T. Maurer and C. von der Malsburg, "Single-View Based Recognition of Faces Rotated in Depth," Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 248-253, 1995.
[27] H. Moon and P.J. Phillips, "Computational and Performance Aspects of PCA-Based Face-Recognition Algorithms," Perception, vol. 30, pp. 303-321, 2001.
[28] A. Pentland, B. Moghaddam, and T. Starner, "View-Based and Modular Eigenspaces for Face Recognition," Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 84-91, 1994.
[29] P.J. Phillips, H. Wechsler, J. Huang, and P. Rauss, "The FERET Database and Evaluation Procedure for Face Recognition Algorithms," Image and Vision Computing J., vol. 16, no. 5, pp. 295-306, 1998.
[30] P.J. Phillips, P. Grother, R.J. Michaels, D.M. Blackburn, E. Tabassi, and M. Bone, "Face Recognition Vendor Test 2002: Evaluation Report," NISTIR 6965, Nat'l Inst. of Standards and Technology, 2003.
[31] W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery, Numerical Recipes in C. Cambridge Univ. Press, 1992.
[32] H. Robbins and S. Munroe, "A Stochastic Approximation Method," Annals of Math. Statistics, vol. 22, pp. 400-407, 1951.
[33] T. Sim, S. Baker, and M. Bsat, "The CMU Pose, Illumination, and Expression (PIE) Database," Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 53-58, 2002.
[34] T. Sim and T. Kanade, "Illuminating the Face," Technical Report CMU-RI-TR-01-31, The Robotics Inst., Carnegie Mellon Univ., Sept. 2001.
[35] T. Vetter and V. Blanz, "Estimating Coloured 3D Face Models from Single Images: An Example Based Approach," Proc. European Conf. Computer Vision (ECCV '98), vol. II, 1998.
[36] T. Vetter and T. Poggio, "Linear Object Classes and Image Synthesis from a Single Example Image," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 733-742, July 1997.
[37] P. Viola, "Alignment by Maximization of Mutual Information," A.I. Memo No. 1548, MIT Artificial Intelligence Laboratory, 1995.
[38] W. Zhao and R. Chellappa, "SFS Based View Synthesis for Robust Face Recognition," Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 285-292, 2000.
[39] W. Zhao, R. Chellappa, A. Rosenfeld, and P.J. Phillips, "Face Recognition: A Literature Survey," UMD CfAR Technical Report CAR-TR-948, 2000.

Volker Blanz received the diploma degree from the University of Tübingen, Germany, in 1995. He then worked on a project on multiclass support vector machines at AT&T Bell Labs in Holmdel, New Jersey. He received the PhD degree in physics from the University of Tübingen in 2000 for his thesis on reconstructing 3D shape from images, written at the Max-Planck-Institute for Biological Cybernetics, Tübingen. He was a visiting researcher at the Center for Biological and Computational Learning at MIT and a research assistant at the University of Freiburg. In 2003, he joined the Max-Planck-Institute for Computer Science, Saarbrücken, Germany. His research interests are in the fields of face recognition, machine learning, facial modeling, and animation.

Thomas Vetter studied mathematics and physics and received the PhD degree in biophysics from the University of Ulm, Germany. As a postdoctoral researcher at the Center for Biological and Computational Learning at MIT, he started his research on computer vision. In 1993, he moved to the Max-Planck-Institut in Tübingen and, in 1999, he became a professor of computer graphics at the University of Freiburg. Since 2002, he has been a professor of applied computer science at the University of Basel in Switzerland. His current research is on image understanding, graphics, and automated model building. He is a member of the IEEE and the IEEE Computer Society.



(完整版)主谓造句

主语+谓语 1. 理解主谓结构 1) The students arrived. The students arrived at the park. 2) They are listening. They are listening to the music. 3) The disaster happened. 2.体会状语的位置 1) Tom always works hard. 2) Sometimes I go to the park at weekends.. 3) The girl cries very often. 4) We seldom come here. The disaster happened to the poor family. 3. 多个状语的排列次序 1) He works. 2) He works hard. 3) He always works hard. 4) He always works hard in the company. 5) He always works hard in the company recently. 6) He always works hard in the company recently because he wants to get promoted. 4. 写作常用不及物动词 1. ache My head aches. I’m aching all over. 2. agree agree with sb. about sth. agree to do sth. 3. apologize to sb. for sth. 4. appear (at the meeting, on the screen) 5. arrive at / in 6. belong to 7. chat with sb. about sth. 8. come (to …) 9. cry 10. dance 11. depend on /upon 12. die 13. fall 14. go to … 15. graduate from 16. … happen 17. laugh 18. listen to... 19. live 20. rise 21. sit 22. smile 23. swim 24. stay (at home / in a hotel) 25. work 26. wait for 汉译英: 1.昨天我去了电影院。 2.我能用英语跟外国人自由交谈。 3.晚上7点我们到达了机场。 4.暑假就要到了。 5.现在很多老人独自居住。 6.老师同意了。 7.刚才发生了一场车祸。 8.课上我们应该认真听讲。9. 我们的态度很重要。 10. 能否成功取决于你的态度。 11. 能取得多大进步取决于你付出多少努力。 12. 这个木桶能盛多少水取决于最短的一块板子的长度。

初中英语造句

【it's time to和it's time for】 ——————这其实是一个句型,只不过后面要跟不同的东西. ——————It's time to跟的是不定式(to do).也就是说,要跟一个动词,意思是“到做某事的时候了”.如: It's time to go home. It's time to tell him the truth. ——————It's time for 跟的是名词.也就是说,不能跟动词.如: It's time for lunch.(没必要说It's time to have lunch) It's time for class.(没必要说It's time to begin the class.) They can't wait to see you Please ask liming to study tonight. Please ask liming not to play computer games tonight. Don’t make/let me to smoke I can hear/see you dance at the stage You had better go to bed early. You had better not watch tv It’s better to go to bed early It’s best to run in the morning I am enjoy running with music. With 表伴随听音乐 I already finish studying You should keep working. You should keep on studying English Keep calm and carry on 保持冷静继续前行二战开始前英国皇家政府制造的海报名字 I have to go on studying I feel like I am flying I have to stop playing computer games and stop to go home now I forget/remember to finish my homework. I forget/remember cleaning the classroom We keep/percent/stop him from eating more chips I prefer orange to apple I prefer to walk rather than run I used to sing when I was young What’s wrong with you There have nothing to do with you I am so busy studying You are too young to na?ve I am so tired that I have to go to bed early

The Kite Runner-美句摘抄及造句

《The Kite Runner》追风筝的人--------------------------------美句摘抄 1.I can still see Hassan up on that tree, sunlight flickering through the leaves on his almost perfectly round face, a face like a Chinese doll chiseled from hardwood: his flat, broad nose and slanting, narrow eyes like bamboo leaves, eyes that looked, depending on the light, gold, green even sapphire 翻译:我依然能记得哈桑坐在树上的样子,阳光穿过叶子,照着他那浑圆的脸庞。他的脸很像木头刻成的中国娃娃,鼻子大而扁平,双眼眯斜如同竹叶,在不同光线下会显现出金色、绿色,甚至是宝石蓝。 E.g.: A shadow of disquiet flickering over his face. 2.Never told that the mirror, like shooting walnuts at the neighbor's dog, was always my idea. 翻译:从来不提镜子、用胡桃射狗其实都是我的鬼主意。E.g.:His secret died with him, for he never told anyone. 3.We would sit across from each other on a pair of high

翻译加造句

一、翻译 1. The idea of consciously seeking out a special title was new to me., but not without appeal. 让我自己挑选自己最喜欢的书籍这个有意思的想法真的对我具有吸引力。 2.I was plunged into the aching tragedy of the Holocaust, the extraordinary clash of good, represented by the one decent man, and evil. 我陷入到大屠杀悲剧的痛苦之中,一个体面的人所代表的善与恶的猛烈冲击之中。 3.I was astonished by the the great power a novel could contain. I lacked the vocabulary to translate my feelings into words. 我被这部小说所包含的巨大能量感到震惊。我无法用语言来表达我的感情(心情)。 4,make sth. long to short长话短说 5.I learned that summer that reading was not the innocent(简单的) pastime(消遣) I have assumed it to be., not a breezy, instantly forgettable escape in the hammock(吊床),( though I’ ve enjoyed many of those too ). I discovered that a book, if it arrives at the right moment, in the proper season, will change the course of all that follows. 那年夏天,我懂得了读书不是我认为的简单的娱乐消遣,也不只是躺在吊床上,一阵风吹过就忘记的消遣。我发现如果在适宜的时间、合适的季节读一本书的话,他将能改变一个人以后的人生道路。 二、词组造句 1. on purpose 特意,故意 This is especially true here, and it was ~. (这一点在这里尤其准确,并且他是故意的) 2.think up 虚构,编造,想出 She has thought up a good idea. 她想出了一个好的主意。 His story was thought up. 他的故事是编出来的。 3. in the meantime 与此同时 助记:in advance 事前in the meantime 与此同时in place 适当地... In the meantime, what can you do? 在这期间您能做什么呢? In the meantime, we may not know how it works, but we know that it works. 在此期间,我们不知道它是如何工作的,但我们知道,它的确在发挥作用。 4.as though 好像,仿佛 It sounds as though you enjoyed Great wall. 这听起来好像你喜欢长城。 5. plunge into 使陷入 He plunged the room into darkness by switching off the light. 他把灯一关,房

改写句子练习2标准答案

The effective sentences:(improve the sentences!) 1.She hopes to spend this holiday either in Shanghai or in Suzhou. 2.Showing/to show sincerity and to keep/keeping promises are the basic requirements of a real friend. 3.I want to know the space of this house and when it was built. I want to know how big this house is and when it was built. I want to know the space of this house and the building time of the house. 4.In the past ten years,Mr.Smith has been a waiter,a tour guide,and taught English. In the past ten years,Mr.Smith has been a waiter,a tour guide,and an English teacher. 5.They are sweeping the floor wearing masks. They are sweeping the floor by wearing masks. wearing masks,They are sweeping the floor. 6.the drivers are told to drive carefully on the radio. the drivers are told on the radio to drive carefully 7.I almost spent two hours on this exercises. I spent almost two hours on this exercises. 8.Checking carefully,a serious mistake was found in the design. Checking carefully,I found a serious mistake in the design.

用以下短语造句

M1 U1 一. 把下列短语填入每个句子的空白处(注意所填短语的形式变化): add up (to) be concerned about go through set down a series of on purpose in order to according to get along with fall in love (with) join in have got to hide away face to face 1 We’ve chatted online for some time but we have never met ___________. 2 It is nearly 11 o’clock yet he is not back. His mother ____________ him. 3 The Lius ___________ hard times before liberation. 4 ____________ get a good mark I worked very hard before the exam. 5 I think the window was broken ___________ by someone. 6 You should ___________ the language points on the blackboard. They are useful. 7 They met at Tom’s party and later on ____________ with each other. 8 You can find ____________ English reading materials in the school library. 9 I am easy to be with and _____________my classmates pretty well. 10 They __________ in a small village so that they might not be found. 11 Which of the following statements is not right ____________ the above passage? 12 It’s getting dark. I ___________ be off now. 13 More than 1,000 workers ___________ the general strike last week. 14 All her earnings _____________ about 3,000 yuan per month. 二.用以下短语造句: 1.go through 2. no longer/ not… any longer 3. on purpose 4. calm… down 5. happen to 6. set down 7. wonder if 三. 翻译: 1.曾经有段时间,我对学习丧失了兴趣。(there was a time when…) 2. 这是我第一次和她交流。(It is/was the first time that …注意时态) 3.他昨天公园里遇到的是他的一个老朋友。(强调句) 4. 他是在知道真相之后才意识到错怪女儿了。(强调句) M 1 U 2 一. 把下列短语填入每个句子的空白处(注意所填短语的形式变化): play a …role (in) because of come up such as even if play a …part (in) 1 Dujiangyan(都江堰) is still ___________in irrigation(灌溉) today. 2 That question ___________ at yesterday’s meeting. 3 Karl Marx could speak a few foreign languages, _________Russian and English. 4 You must ask for leave first __________ you have something very important. 5 The media _________ major ________ in influencing people’s opinion s. 6 _________ years of hard work she looked like a woman in her fifties. 二.用以下短语造句: 1.make (good/full) use of 2. play a(n) important role in 3. even if 4. believe it or not 5. such as 6. because of

英语造句

English sentence 1、(1)、able adj. 能 句子:We are able to live under the sea in the future. (2)、ability n. 能力 句子:Most school care for children of different abilities. (3)、enable v. 使。。。能 句子:This pass enables me to travel half-price on trains. 2、(1)、accurate adj. 精确的 句子:We must have the accurate calculation. (2)、accurately adv. 精确地 句子:His calculation is accurately. 3、(1)、act v. 扮演 句子:He act the interesting character.(2)、actor n. 演员 句子:He was a famous actor. (3)、actress n. 女演员 句子:She was a famous actress. (4)、active adj. 积极的 句子:He is an active boy. 4、add v. 加 句子:He adds a little sugar in the milk. 5、advantage n. 优势 句子:His advantage is fight. 6、age 年龄n. 句子:His age is 15. 7、amusing 娱人的adj. 句子:This story is amusing. 8、angry 生气的adj. 句子:He is angry. 9、America 美国n. 句子:He is in America. 10、appear 出现v. He appears in this place. 11. artist 艺术家n. He is an artist. 12. attract 吸引 He attracts the dog. 13. Australia 澳大利亚 He is in Australia. 14.base 基地 She is in the base now. 15.basket 篮子 His basket is nice. 16.beautiful 美丽的 She is very beautiful. 17.begin 开始 He begins writing. 18.black 黑色的 He is black. 19.bright 明亮的 His eyes are bright. 20.good 好的 He is good at basketball. 21.British 英国人 He is British. 22.building 建造物 The building is highest in this city 23.busy 忙的 He is busy now. 24.calculate 计算 He calculates this test well. 25.Canada 加拿大 He borns in Canada. 26.care 照顾 He cared she yesterday. 27.certain 无疑的 They are certain to succeed. 28.change 改变 He changes the system. 29.chemical 化学药品

相关文档