Regularized Regression on Image Manifold for Retrieval*

Deng Cai
Dept. of Computer Science, UIUC

Xiaofei He
Yahoo! Inc.

Jiawei Han
Dept. of Computer Science, UIUC

ABSTRACT

Recently, there has been considerable interest in geometry-based methods for image retrieval. These methods consider the image space as a smooth manifold and apply manifold learning techniques to find a Euclidean embedding. Thus, the Euclidean distances in the embedding space can be used as approximations to the geodesic distances on the manifold. A main advantage of these methods is that the relevance feedbacks gathered during retrieval can be naturally incorporated into the system as prior information. In this paper, we consider the retrieval problem as a classification problem on a manifold. Instead of learning a distance measure, we aim to learn a classification function on the image manifold. Since efficiency is a key issue in image retrieval, especially at the Web scale, we propose a novel approach for image retrieval on a manifold. This approach is based on a regularized linear regression framework. The local manifold structure and user-provided relevance feedbacks are incorporated into the image retrieval system through a Locality Preserving Regularizer. Extensive experiments carried out on a large image database demonstrate the efficiency and effectiveness of the proposed approach.

Categories and Subject Descriptors

H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval—Relevance feedback

General Terms

Algorithms, Performance, Theory

Keywords

Image Retrieval, Relevance Feedback, Regression

*The work was supported in part by the U.S. National Science Foundation grants NSF IIS-05-13678/06-42771 and NSF BDI-05-15813. Any opinions, findings, and conclusions or recommendations expressed here are those of the authors and do not necessarily reflect the views of the funding agencies.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

MIR'07, September 28–29, 2007, Augsburg, Bavaria, Germany.
Copyright 2007 ACM 978-1-59593-778-0/07/0009 ...$5.00.

1. INTRODUCTION

Content-Based Image Retrieval (CBIR) has been a long-standing problem in multimedia. Unlike text-based search, which has achieved enormous success, the state-of-the-art CBIR systems are still far from satisfactory [24], [4], [21]. The difficulty of CBIR is essentially due to the difficulty of representing an image. The components of a text document, i.e., terms, can faithfully describe the topics of the document, whereas the components of an image, i.e., pixels, can hardly describe any semantics of the image. Let us compare the space of text documents with that of images. The document space is usually linear or close to linear. That is, if we add two document vectors together, the new document vector can still describe the topics of the two original documents to some extent. However, if we add two image vectors (pixel-wise) together, the resulting image is not a naturally generated image and can no longer describe the semantics of the original two images. Mathematically, this is because of the non-linearity and disconnectivity of the pixel-based space of natural images.

Instead of pixels, visual features like color, texture, and shape have been proposed to represent images. Such a representation is hereafter referred to as a feature vector, and the space of feature vectors is referred to as the feature space. Since visual features cannot always describe the semantics of the images, relevance feedback was introduced as a powerful tool for soliciting information from the users [23]. During the last decade, relevance feedback has been at the core of CBIR research. Most of the previous research on relevance feedback falls into the following three categories: (1) retrieval based on query point movement [22], (2) retrieval based on re-weighting of different feature dimensions [19], and (3) retrieval based on updating the probability distribution of images in the database [7]. However, all of these methods fail to take into account the geometrical structure of the feature space.

Recently there has been considerable interest in geometrically motivated approaches for image retrieval [11], [12], [13], [20], [31]. These methods consider the image feature space as a nonlinear space, in particular, a manifold. He et al. proposed a method that finds a Euclidean embedding of the image manifold and performs image retrieval in the embedding space [13]. Lin et al. proposed an Augmented Relation Embedding method that maps the feature space into a semantic manifold which grasps the user's preferences. They construct two feedback relational graphs to incorporate the user-provided positive and negative examples [20]. Moreover, a manifold ranking-based method has been proposed to explore the relationships among all the data points in the feature space and to measure the relevance between the query and all the images in the database accordingly, which is different from traditional similarity metrics based on pair-wise distance [11]. These methods can successfully discover the intrinsic image manifold structure. However, they aim to learn a distance measure on the image manifold rather than a classification function on the image manifold. Therefore, they fail to make full use of the user-provided relevance feedbacks and thus may not be optimal in the sense of discriminating relevant images from irrelevant ones.

In this paper, we propose a novel approach for image retrieval on manifold. Our approach is fundamentally based on a regularized linear regression framework. There are many classification methods in the literature, such as Support Vector Machines (SVM, [29]), boosting [26], Linear Discriminant Analysis (LDA, [8]), and regression [10]. Previous work has demonstrated that SVM can significantly improve retrieval performance [28], [16]. However, one of the main disadvantages of SVM is its high computational complexity. It can hardly be scaled to the Web environment, where the image search system needs to respond to tens of thousands of queries simultaneously. Therefore, we consider using the regression framework in this study. Besides its computational efficiency, another advantage of regression is that prior information can be easily incorporated as a regularization term. Specifically, we first build an adjacency graph to model the local image manifold structure. The user's relevance feedbacks are used to update the graph structure. In this way, the classifier obtained minimizes the least square error and simultaneously respects the graph structure.

The following highlights the major contributions of the paper:

1. The problem of relevance feedback image retrieval is a typical small-sample learning problem. Therefore, when a regression framework is considered, a key factor is how to regularize it. The traditional solution is to apply an L2 norm penalty on the classification function, which leads to the minimum-norm classifier, generally referred to as ridge regression. Such a regularization term is data independent. The regularization term used in our approach explicitly takes into account the geometrical structure of the data space and makes use of the user's relevance feedbacks.

2. Most previous manifold-based image retrieval methods consider image retrieval as a distance-based ranking problem. They suffer from the problem of how to combine the user-provided positive and negative examples. Different from them, we consider the image retrieval problem as a classification problem on manifolds, which allows us to make efficient use of relevance feedbacks.

The rest of this paper is organized as follows. In Section 2, we provide a brief review of regression. We introduce our image retrieval approach on manifold in Section 3. The experimental results are presented in Section 4. Finally, we conclude the paper and provide suggestions for future work in Section 5.

2. A BRIEF REVIEW OF REGRESSION AND LOCALITY PRESERVING PROJECTION

In this section, we provide a brief review of Linear Regression and Locality Preserving Projection [14], [12].

2.1 Linear Regression

Suppose we have m labeled data points {(x_i, y_i)}, i = 1, ···, m, x_i ∈ R^n, y_i ∈ {1, −1}. Let X = [x_1, ···, x_m]. Linear regression aims to fit a function

f(x) = a^T x + b

such that the residual sum of squares is minimized:

RSS(a) = Σ_{i=1}^{m} (f(x_i) − y_i)^2    (1)

For the sake of simplicity, we append a new element "1" to each x_i. Thus, the coefficient b can be absorbed into a and we have f(x) = a^T x. Let y = [y_1, ···, y_m]^T. We have

RSS(a) = (y − X^T a)^T (y − X^T a)

Requiring ∂RSS(a)/∂a = 0, we obtain:

a = (X X^T)^{−1} X y    (2)

It is important to note that, in image retrieval, the number of labeled samples is often smaller than the number of features. Thus, the matrix X X^T is singular and the problem (1) is ill posed. A possible solution is to impose a penalty on the norm of a:

RSS_ridge(a) = Σ_{i=1}^{m} (y_i − a^T x_i)^2 + λ ||a||^2    (3)

The solution to (3) is given below:

a = (X X^T + λI)^{−1} X y    (4)

where I is the n×n identity matrix. It is clear that X X^T + λI is no longer singular. The term ||a||^2 in Eq. (3) is called the Tikhonov regularizer [27]. In statistics, such regression is called ridge regression [10]. The Tikhonov regularizer ||a||^2 is data independent. It fails to discover the intrinsic geometrical structure of the feature space and the semantic relationship between images.
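For concreteness, the closed-form ridge solution in Eq. (4) can be computed directly. The sketch below is our own NumPy illustration, not the paper's code; the toy dimensions and λ = 0.1 are assumptions.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression, Eq. (4): a = (X X^T + lam I)^{-1} X y.

    X: n x m matrix whose columns are the m training points (n features).
    y: length-m label vector.
    """
    n = X.shape[0]
    # X X^T + lam I is nonsingular for lam > 0, even when m < n.
    return np.linalg.solve(X @ X.T + lam * np.eye(n), X @ y)

# Toy example: 3 labeled points in R^5 (fewer samples than features,
# so X X^T alone would be singular).
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
y = np.array([1.0, 1.0, -1.0])
a = ridge_fit(X, y, lam=0.1)
```

Using `numpy.linalg.solve` on the regularized system is preferable to forming the explicit inverse, which mirrors how one would typically solve Eq. (4) in practice.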

2.2 Locality Preserving Projection

Given m data points {x_i}, i = 1, ···, m ⊂ R^n, LPP uses a p-nearest neighbor graph to model the local geometrical structure in the data. Specifically, we put an edge between nodes i and j if x_i and x_j are "close", i.e., x_i and x_j are among the p nearest neighbors of each other. Let W denote the corresponding weight matrix. The objective function of LPP is as follows [14]:

a_opt = arg min_{a^T X D X^T a = 1} Σ_{ij} (a^T x_i − a^T x_j)^2 W_ij
      = arg min_{a^T X D X^T a = 1} a^T X L X^T a    (5)

where L = D − W is the graph Laplacian [5] and D_ii = Σ_j W_ij. The constraint a^T X D X^T a = 1 removes an arbitrary scaling factor in the embedding [1]. Such an objective function incurs a heavy penalty if neighboring points x_i and x_j are mapped far apart. The projection vector a that minimizes the objective function is given by the minimum eigenvalue solution to the generalized eigenvalue problem:

X L X^T a = λ X D X^T a    (6)

Unlike ISOMAP, which preserves global geometrical properties like geodesics, LPP preserves local geometrical structure. Specifically, LPP aims to preserve local distances and find a linear approximation to the manifold which can best preserve the local isometry. LPP has been successfully used in relevance feedback image retrieval.
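As an illustration of Eqs. (5)–(6), the following sketch (our own, not from the paper) builds a 0/1-weighted p-nearest-neighbor graph and solves the generalized eigenvalue problem with SciPy. The either-direction symmetrization of the neighbor rule and the small ridge added for numerical stability are our assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def lpp_projection(X, p=5):
    """One LPP projection vector from Eqs. (5)-(6).

    X: n x m data matrix (columns are points).
    """
    n, m = X.shape
    # 0/1 weights on a symmetrized p-nearest-neighbor graph.
    d = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)  # m x m distances
    W = np.zeros((m, m))
    for i in range(m):
        for j in np.argsort(d[i])[1:p + 1]:  # position 0 is the point itself
            W[i, j] = W[j, i] = 1.0
    D = np.diag(W.sum(axis=1))
    L = D - W
    # Generalized eigenproblem X L X^T a = lambda X D X^T a, Eq. (6).
    A = X @ L @ X.T
    B = X @ D @ X.T + 1e-9 * np.eye(n)  # tiny ridge keeps B positive definite
    vals, vecs = eigh(A, B)
    return vecs[:, 0]  # eigenvector of the minimum eigenvalue
```

`scipy.linalg.eigh` returns eigenvalues in ascending order, so the first eigenvector is the minimum-eigenvalue solution required by Eq. (6).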

3. REGULARIZED REGRESSION ON IMAGE MANIFOLD

In this section, we introduce our image retrieval approach on manifold. We begin with a formal description of the problem.

3.1 The Problem

The generic problem of image retrieval is the following. Given a query image q and an image database¹ {x_i}, i = 2, ···, m, find a function f such that f(x) reflects the semantic relationship between x and q. The typical relevance feedback based retrieval process is outlined as follows.

1. The user provides his relevance feedback to the system by labeling images as "relevant" or "irrelevant".

2. The system modifies f using the feedbacks.

3. The system re-ranks the images and presents the top ones to the user.

The most typical f is defined using a distance measure [13]:

f(x) = dist(x, q)

Another possible choice is to consider f as a classification function [28]. Specifically, f(x) > 0 if x is relevant to q and f(x) < 0 otherwise.

When we consider f as a classification function, the user-provided positive and negative examples are used as training samples. Suppose we have a set of feedback samples {(x_i, y_i)}, i = 2, ···, l, where y_i is the label of x_i marked by the user:

y_i = 1, if x_i is relevant to the query image;
y_i = −1, if x_i is irrelevant to the query image.

The label of the query image q can be regarded as 1. For simplicity, we denote the query image as (x_1, y_1). Thus, one tries to find a linear function f(x) = a^T x such that some predefined loss function L(a) is minimized. The most typical loss function is the least square error in Eq. (1). Other popular loss functions include the hinge loss used in SVM [29]:

L(a) = Σ_{i=1}^{l} [1 − y_i a^T x_i]_+

where [1 − y_i a^T x_i]_+ equals 1 − y_i a^T x_i if y_i a^T x_i ≤ 1, and 0 otherwise, and the logistic loss used in Logistic Regression [10]:

L(a) = Σ_{i=1}^{l} log(1 + exp(−y_i a^T x_i))

In this paper, we adopt the least square loss function for its simplicity and effectiveness.
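The three loss functions can be compared numerically. The sketch below is our own illustration with hypothetical inputs; it evaluates the least square, hinge, and logistic losses for a given weight vector a on the labeled set.

```python
import numpy as np

def losses(a, X1, y):
    """Total least-square, hinge, and logistic losses on labeled data.

    X1: n x l matrix of labeled points; y: length-l vector in {+1, -1}.
    """
    s = X1.T @ a        # scores a^T x_i
    margin = y * s      # y_i a^T x_i
    square = np.sum((y - s) ** 2)
    hinge = np.sum(np.maximum(0.0, 1.0 - margin))
    logistic = np.sum(np.log1p(np.exp(-margin)))
    return square, hinge, logistic
```

When every labeled point is scored exactly at its label, the least square and hinge losses vanish while the logistic loss stays strictly positive, which illustrates the differing behavior of the three losses.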

Most previous classification based methods consider image retrieval as a supervised learning problem, such that f is obtained by using only the training samples. However, image retrieval is intrinsically a semi-supervised learning problem in that the testing samples (images in the database) are available during the training process [6], [17], [25]. Naturally, an optimal classification function should take into account the distribution of the testing samples. In the next subsection, we introduce a novel image retrieval approach based on semi-supervised learning on the image manifold. Our approach is fundamentally based on previous work on manifold learning [1], [14] and manifold regularization [2].

¹For convenience, we will treat q as x_1 in the later description.

3.2 Learning A Retrieval Function with Locality Preserving Regularizer

Given the image retrieval problem as described in the last subsection, we define:

y = [y_1, y_2, ···, y_l]^T
X_1 = [x_1, x_2, ···, x_l]
X_2 = [x_{l+1}, x_{l+2}, ···, x_m]
X = [X_1, X_2]

where X_1 contains the query image and the feedback images, y is the label vector, and X_2 contains the remaining images in the database.

We aim to learn a function f such that f(x) > 0 if x is relevant to q and f(x) < 0 otherwise. In situations where there are not sufficient training samples compared to the number of features, such as image retrieval, one is often confronted with the overfitting problem. In order to overcome this problem and increase the generalization capability of the classifier, one hopes to make the function f as smooth as possible, based on the assumption that close points should have similar semantics. This suggests the following two general principles for learning an image retrieval function:

Principle 1: f(x_i) = y_i, i = 1, ···, l.

Principle 2: If x_i and x_j are close to each other, f(x_i) and f(x_j) are also close to each other.

Principle 1 can be formulated as a least square cost function:

φ_1(f) = Σ_{i=1}^{l} (y_i − f(x_i))^2    (7)

For Principle 2, we use a p-nearest neighbor graph G to capture the local geometrical structure in the data. Specifically, we put an edge between nodes i and j if x_i and x_j are "close", i.e., x_i and x_j are among the p nearest neighbors of each other. Thus, Principle 2 can be formulated as a locality preserving cost function:

φ_2(f) = Σ_{i,j=1}^{m} (f(x_i) − f(x_j))^2 W_ij    (8)

where W is the weight matrix of the p-nearest neighbor graph G. Such a cost function incurs a heavy penalty if neighboring points x_i and x_j are mapped far apart. The cost function (8) and its linearization were originally introduced in [1], [14]. The optimal f can be obtained by solving the following optimization problem [2]:

f* = arg min_f φ(f) = arg min_f [φ_1(f) + λ φ_2(f)]
   = arg min_f [ Σ_{i=1}^{l} (y_i − f(x_i))^2 + λ Σ_{i,j=1}^{m} (f(x_i) − f(x_j))^2 W_ij ]    (9)

We will describe in detail how to construct the weight matrix W in the next subsection. The locality preserving cost function φ_2(f) can also be regarded as a regularization term. Thus, regression with such a regularization term can be called Locality Preserving Regularized Regression (LPR Regression).

Considering a linear map, i.e., f(x) = a^T x, we have:

φ_1(f) = Σ_{i=1}^{l} (y_i − a^T x_i)^2
       = (y − X_1^T a)^T (y − X_1^T a)
       = y^T y − 2 a^T X_1 y + a^T X_1 X_1^T a

Likewise, φ_2(f) can be reduced to:

φ_2(f) = Σ_{i,j=1}^{m} (a^T x_i − a^T x_j)^2 W_ij
       = 2 a^T ( Σ_i D_ii x_i x_i^T − Σ_{ij} W_ij x_i x_j^T ) a
       = 2 a^T (X D X^T − X W X^T) a
       = 2 a^T X L X^T a

where L = D − W is the Laplacian matrix² and D is a diagonal matrix whose entries are the column (or row, since W is symmetric) sums of W, D_ii = Σ_j W_ji.

Note that y^T y is a constant. Thus, the final cost function φ(f) can be written as follows:

φ(a) = −2 a^T X_1 y + a^T X_1 X_1^T a + 2λ a^T X L X^T a    (10)

Requiring the derivative of φ(a) with respect to a to vanish, we get:

∂φ(a)/∂a = 0    (11)
⇒ −X_1 y + X_1 X_1^T a + 2λ X L X^T a = 0    (12)
⇒ a = (X_1 X_1^T + 2λ X L X^T)^{−1} X_1 y    (13)

3.3 The Algorithm

Given a query image q and an image database {x_2, ···, x_m} ⊂ R^n, suppose the user provides the label information of the top l − 1 images; we get (x_2, y_2), ···, (x_l, y_l), where y_i is the label of x_i marked by the user:

y_i = 1, if x_i is relevant to the query image;
y_i = −1, if x_i is irrelevant to the query image.

The label of q can be regarded as 1. For simplicity, we denote the query image as (x_1, y_1). Let X_1 = (x_1, ···, x_l) ∈ R^{n×l}, y = (y_1, ···, y_l)^T ∈ R^l and X = (x_1, ···, x_m) ∈ R^{n×m}.

The algorithmic procedure of Regression with Locality Preserving Regularizer (LPR Regression) is stated below:

1. Constructing the adjacency graph: Let G denote a graph with m nodes. The i-th node corresponds to the image x_i. We construct the graph G through the following three steps to model the local structure as well as the label information (user's feedback):

(a) Put an edge between nodes i and j if x_i is among the p nearest neighbors of x_j or x_j is among the p nearest neighbors of x_i.

(b) Put an edge between nodes i and j if x_i shares the same label with x_j.

(c) Remove the edge between nodes i and j if the label of x_i is different from that of x_j.

²The Laplacian matrix L (= D − W) for a finite graph, or graph Laplacian [5], [9], is analogous to the Laplace Beltrami operator on compact Riemannian manifolds. While the Laplace Beltrami operator for a manifold is generated by the Riemannian metric, for a graph it comes from the adjacency relation. In manifold learning, the graph Laplacian is very important since the graph can be built to model the local structure of the data, and the Laplacian (a discrete approximation to the Laplace Beltrami operator) can capture the intrinsic geometric structure of the data. Many algorithms have been developed based on the graph Laplacian. In fact, both PCA and LDA can be interpreted as spectral dimensionality reduction methods with different graph Laplacians (different graph structures). Please see [15] for more details.

Figure 1: A comparison of Ridge Regression and LPR Regression for a two-class problem where the data are linearly separable. There is only one labeled example for each class. LPR Regression considers the geometrical structure of the whole dataset (labeled and unlabeled) and produces a function with high generalization capability. Ridge Regression only considers the labeled information and fails to produce a satisfactory function.

2. Choosing the weights: W is a sparse symmetric m×m matrix with W_ij being the weight of the edge joining vertices i and j.

(a) If there is no edge between i and j, W_ij = 0.

(b) Otherwise,

W_ij = 1, if x_i shares the same label with x_j;
W_ij = x_i^T x_j / (||x_i|| ||x_j||), otherwise.

The weight matrix W of graph G models the local structure of the image space as well as the relevance information provided by the user.

3. Solving the linear equations:

(X_1 X_1^T + λ X L X^T) a = X_1 y    (14)

which gives us the classification function:

a = (X_1 X_1^T + λ X L X^T)^{−1} X_1 y    (15)

where L = D − W is the Laplacian matrix and D is a diagonal matrix whose entries are the column (or row, since W is symmetric) sums of W, D_ii = Σ_j W_ji.

4. Re-ranking the image database {x_2, ···, x_m} by the scores y_i = a^T x_i.
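The four-step procedure above can be sketched in a few lines of Python/NumPy. This is our own minimal re-implementation under stated assumptions, not the authors' Matlab code: the graph uses an either-direction p-nearest-neighbor rule, cosine weights for pairs that do not share a label, and `numpy.linalg.solve` in place of Matlab's backslash operator.

```python
import numpy as np

def lpr_regression(X, y, l, p=5, lam=0.1):
    """Locality Preserving Regularized Regression, Eqs. (14)-(15).

    X: n x m matrix; columns 0..l-1 are the query plus labeled feedback
       images, columns l..m-1 are unlabeled database images.
    y: length-l label vector (query label 1, feedback labels +1/-1).
    Returns the classification vector a and the scores a^T x_i.
    """
    n, m = X.shape
    X1 = X[:, :l]

    # Step 1: adjacency from p-nearest neighbors (either direction).
    d = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)
    W = np.zeros((m, m))
    for i in range(m):
        for j in np.argsort(d[i])[1:p + 1]:
            # Step 2(b): cosine weight for neighbor pairs.
            w = X[:, i] @ X[:, j] / (np.linalg.norm(X[:, i]) * np.linalg.norm(X[:, j]))
            W[i, j] = W[j, i] = w
    # Steps 1(b)-(c) and 2(b): same-label pairs get weight 1,
    # different-label pairs have their edge removed.
    for i in range(l):
        for j in range(l):
            if i != j:
                W[i, j] = 1.0 if y[i] == y[j] else 0.0

    # Step 3: solve (X1 X1^T + lam X L X^T) a = X1 y, Eq. (14).
    D = np.diag(W.sum(axis=1))
    L = D - W
    a = np.linalg.solve(X1 @ X1.T + lam * X @ L @ X.T, X1 @ y)

    # Step 4: re-rank all images by the score a^T x_i.
    return a, X.T @ a
```

The sketch assumes the system matrix is nonsingular, which holds generically since X_1 X_1^T and X L X^T are both positive semi-definite.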

Figure 2: Sample images from categories (7) bird, (23) dinosaur, and (66) ship.

4. EXPERIMENTS AND DISCUSSIONS

We performed several experiments to evaluate the effectiveness of the proposed algorithm on a large image database.

4.1 Image Dataset and Features

The image database we used consists of 7,900 images from 79 semantic categories of the COREL data set. It is a large and heterogeneous image set. Figure 2 shows some sample images.

We combined a 64-dimensional color histogram and a 64-dimensional Color Texture Moment (CTM) [30] to represent each image. The color histogram is calculated using 4×4×4 bins in HSI space. The Color Texture Moment (CTM) was proposed by Yu et al. [30]; it integrates the color and texture characteristics of an image in a compact form. CTM adopts the local Fourier transform as a texture representation scheme and derives eight characteristic maps describing different aspects of the co-occurrence relations of image pixels in each channel of the (SV cos H, SV sin H, V) color space. CTM then calculates the first and second moments of these maps as a representation of the natural color image pixel distribution; see [30] for details.
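As an illustration of the 4×4×4 color histogram feature, the minimal sketch below is our own; it assumes the HSI channel values have already been scaled to [0, 1), which the paper does not specify.

```python
import numpy as np

def color_histogram(hsi_pixels, bins=4):
    """64-bin color histogram from a 4x4x4 quantization of HSI values.

    hsi_pixels: (num_pixels, 3) array with each channel scaled to [0, 1).
    Returns a normalized length-64 feature vector.
    """
    # Map each channel value to a bin index in {0, ..., bins-1}.
    idx = np.clip((hsi_pixels * bins).astype(int), 0, bins - 1)
    # Combine the three per-channel indices into one of 64 cells.
    cells = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    hist = np.bincount(cells, minlength=bins ** 3).astype(float)
    return hist / hist.sum()
```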

4.2 Experimental Design

To exhibit the advantages of our algorithm, we need a reliable way of evaluating the retrieval performance and comparing it with that of other algorithms. We list the different aspects of the experimental design below.

4.2.1 Evaluation Metrics

To evaluate the effectiveness of an algorithm, we use both the precision-scope curve and the precision rate [18]. In our context, the scope is specified by the number N of top-ranked images returned in response to the user's query. The precision is the ratio of the number of relevant images to the scope N. A precision-scope curve records the precision over a range of scopes and can evaluate the overall performance of an algorithm. On the other hand, the precision rate emphasizes the precision at a particular value of the scope. In general, it is appropriate to present 20 images on a screen; putting more images on a screen might affect the quality of the presentation. Therefore, the retrieval performance at top 20 (the precision rate at scope 20) is especially important.
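The precision-at-scope metric is straightforward to compute; a small sketch (our own, with a hypothetical ranked list of relevance flags) follows.

```python
def precision_at_scope(ranked_labels, scope):
    """Precision at scope N: the fraction of the top-N returned images
    that are relevant. `ranked_labels` is the ranked list of booleans
    (True = relevant), best-ranked first."""
    top = ranked_labels[:scope]
    return sum(top) / len(top)

# Example: 3 of the top 4 returned images are relevant.
print(precision_at_scope([True, True, False, True, False], 4))  # 0.75
```

Evaluating this over a range of scopes yields the precision-scope curve.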

Besides precision, efficiency is also a key issue in image retrieval, especially when the web scale is concerned. Thus we record and compare the running times of the different algorithms.

4.2.2 Five-fold Cross Validation

In a real image retrieval system, a query image is usually not in the image database. To simulate such an environment, we use five-fold cross validation to evaluate all the algorithms. More precisely, we divide the whole image database into five equal-size sets, so there are 20 images per category in each set. In each run of cross validation, one set is picked as the query set, and the other four sets serve as the database set. The precision-scope curve and precision rate are derived by averaging the results from the five runs of cross validation.
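The five-fold split described above can be sketched as follows. This is our own illustration; it assumes images are stored contiguously by category, 100 per category.

```python
import numpy as np

def five_fold_splits(num_categories=79, images_per_category=100, folds=5):
    """Split image indices so that each fold holds 20 images per category.

    Returns a list of `folds` index arrays; in each cross validation run
    one fold serves as the query set and the rest form the database set.
    """
    per_fold = images_per_category // folds
    splits = [[] for _ in range(folds)]
    for c in range(num_categories):
        start = c * images_per_category
        for f in range(folds):
            lo = start + f * per_fold
            splits[f].extend(range(lo, lo + per_fold))
    return [np.array(s) for s in splits]
```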

4.2.3 Relevance Feedback Scheme

For each submitted query, our system retrieves and ranks the images in the database set. The top 10 ranked images are selected as the feedback images, and their label information (relevant or irrelevant) is used for re-ranking. Note that images that have been selected in previous iterations are excluded from later selections. For each query, the feedback mechanism is carried out for four iterations.

It is important to note that the relevance feedback scheme used here is different from the automatic feedback scheme described in [12], [20]. In [12], [20], the top four relevant and irrelevant images were selected as the feedback images. In reality, such a scheme might be impossible. It is more reasonable for the users to provide feedback information on the first screen shot (10 or 20 images). However, the system cannot guarantee that there are at least four relevant images among these top ranked images.

4.2.4 Compared Algorithms

To demonstrate the effectiveness of our proposed algorithm, we compared the following four algorithms.

RidgeReg: The ridge regression algorithm described in Section 2. The classification function w is the solution of the linear equations (X_1 X_1^T + λI) w = X_1 y, where X_1 = [x_1, x_2, ···, x_l] contains the query image vector x_1 and the l − 1 feedback image vectors, and y = [y_1, y_2, ···, y_l]^T encodes the feedback information, y_i = 1 for relevant images and y_i = 0 for irrelevant images. The backslash operator in Matlab is used in our system to solve the equation³. The parameter λ was set to 0.1.

LPRReg: Different from the ridge regularizer, which is data independent, the locality preserving regularizer can capture the intrinsic geometrical structure of the feature space and the semantic relationship between images. Similarly, the classification function a is the solution of the linear equations (X_1 X_1^T + λ X L X^T) a = X_1 y, where X = (x_1, ···, x_l, x_{l+1}, ···, x_m) includes both labeled (feedback) images and unlabeled images in the database. L is the graph Laplacian defined on X as described in Section 3, and we set the parameter p = 5. In our implementation, the value of m is set to 300, i.e., the top-ranked 300 images (labeled and unlabeled) plus the query image are used to estimate the image manifold for a particular query. Also, the backslash operator in Matlab is used in our system to solve the equation. The parameter λ is set to 0.1, and the effect of parameter selection will be discussed later.

³The backslash operator in Matlab solves the equation by Gaussian elimination with partial pivoting, which is very efficient.

Table 1: Precision at top 20 returns of the four algorithms after the first feedback iteration (P@20, %)

Cat  Baseline RidgeReg LPRReg  SVM    ARE    | Cat  Baseline RidgeReg LPRReg  SVM    ARE
 1   15.25    21.00    25.15   21.05  19.10  | 41   74.10    76.25    84.95   78.30  79.85
 2   41.20    38.70    61.40   41.95  48.30  | 42   44.55    32.90    57.35   34.80  52.65
 3   12.45    20.00    20.45   20.00  14.05  | 43    6.60    11.10    10.15   11.55   6.85
 4   21.35    33.90    35.90   34.15  23.45  | 44   93.20    90.70    94.70   93.55  97.70
 5   11.35    16.80    17.85   17.90  13.05  | 45   27.70    23.15    33.50   25.25  32.55
 6   14.55    18.10    23.85   18.85  18.25  | 46    8.75    12.10    13.10   12.65  12.15
 7    5.30     8.50     9.55    8.20   7.15  | 47   27.90    29.80    37.95   30.10  34.10
 8   27.75    33.75    43.90   36.50  33.55  | 48   23.40    24.80    32.00   26.20  28.85
 9   29.65    38.60    50.05   39.95  36.25  | 49   16.25    33.10    36.60   34.40  23.15
10    7.60    11.75    15.15   11.70  11.30  | 50   42.85    39.35    52.95   39.75  48.70
11   31.35    41.15    47.75   41.90  35.90  | 51    8.75    18.10    18.10   17.85  12.40
12   22.15    38.00    41.30   38.05  25.70  | 52   15.45    20.60    23.85   21.45  21.70
13    9.30    15.45    16.55   16.50  12.20  | 53   59.40    61.05    70.80   63.25  64.45
14   33.05    43.70    51.35   46.15  41.60  | 54   32.00    32.85    44.60   34.30  38.55
15   85.55    92.50    93.55   91.80  89.65  | 55   36.25    46.00    58.30   46.00  47.45
16   10.70    16.55    19.55   17.25  15.00  | 56   20.35    36.85    35.20   37.60  20.95
17   21.10    33.45    40.20   34.00  24.30  | 57   73.65    70.05    73.85   71.05  77.65
18   15.30    18.00    24.35   18.20  18.80  | 58   16.30    25.00    27.55   25.25  23.85
19   36.05    48.90    51.95   49.50  37.70  | 59    8.25    16.90    16.25   17.00   9.25
20   12.60    15.95    19.65   16.95  17.30  | 60   73.85    86.80    92.80   88.15  77.20
21    7.50    10.75    12.25   11.35   9.00  | 61   38.15    54.30    62.70   55.85  50.65
22   55.80    53.40    62.50   55.80  60.65  | 62   31.30    39.95    49.60   41.45  38.65
23   88.20    92.85    97.45   95.75  94.30  | 63   10.90    26.85    26.05   26.30  16.05
24   69.20    66.85    82.40   71.60  79.70  | 64   42.50    45.80    61.30   50.20  49.70
25    7.95    12.70    14.70   13.05  10.15  | 65   12.65    21.00    22.35   21.15  17.60
26   46.70    41.55    51.50   42.95  49.60  | 66   30.60    38.05    41.85   38.50  38.20
27   31.25    48.85    55.90   49.70  39.50  | 67   17.25    27.75    27.05   27.10  20.75
28   24.90    40.90    46.65   41.45  31.50  | 68   19.42    38.12    39.48   38.18  25.97
29   82.04    69.37    80.99   72.11  82.61  | 69   44.35    41.50    59.20   49.30  47.70
30   32.30    33.00    43.45   34.55  38.60  | 70   25.20    52.30    54.75   55.20  34.15
31   42.00    54.55    63.05   54.00  47.90  | 71   31.80    37.80    48.05   40.60  38.15
32   83.45    77.50    88.65   80.65  88.90  | 72   36.00    26.75    34.10   29.35  42.60
33   68.30    72.15    87.45   73.90  71.75  | 73   38.75    52.70    62.30   56.65  51.75
34   25.85    41.60    44.35   44.60  25.95  | 74   19.90    24.30    27.85   25.60  24.40
35   10.85    16.35    17.75   16.20  13.50  | 75   33.20    34.75    46.00   38.20  36.30
36    9.65    11.60    15.05   11.95  12.50  | 76   13.30    25.90    26.95   26.15  16.55
37   34.20    44.75    56.90   48.00  39.95  | 77   18.00    20.55    26.90   22.95  25.10
38   11.00    14.40    17.95   14.70  15.50  | 78   27.00    38.10    44.40   38.70  35.25
39   15.60    20.85    23.10   21.65  20.85  | 79   20.90    17.40    25.25   20.70  26.90
40   48.20    37.25    52.60   41.05  54.90  |

SVM: The Support Vector Machines approach. The labeled images {(x_i, y_i)}, i = 1, ···, l, are used to learn the classification function by SVM. The LIBSVM system [3] was used in our system to solve the SVM optimization problem. Cross-validation on the labeled images is used to select the parameters in SVM.

ARE: Augmented Relation Embedding, which was proposed by Lin et al. [20]. ARE tries to map the feature space into a semantic manifold that grasps the user's preferences. ARE constructs two feedback relational graphs to incorporate the user-provided positive and negative examples. In the comparison experiment reported in [20], ARE is superior to Locality Preserving Projection in the incremental semi-supervised mode [12]. For efficiency, the top 300 ranked images plus the query image are used to estimate the image manifold for a particular query.

Different from the previous three methods, which learn a classification function, ARE tries to learn a subspace in which the Euclidean distance can better reflect the semantic structure of images. A crucial problem of ARE is how to determine the dimensionality of the subspace. In our experiments, we iterate over all the dimensions and select the dimension with the best performance. However, in real-world image retrieval applications, one has to estimate the dimensionality.

There are two interesting points in these four compared algorithms.

1. The first three (RidgeReg, LPRReg and SVM) are classification based methods. These algorithms try to learn a classification function f and re-rank the image database by this function. The last one (ARE) is a subspace learning method. The images in the database are re-ranked by their distances to the query image in the subspace. If we want to re-rank the whole image database, the latter approach will apparently be much more time consuming and can hardly scale to a very large image database.

2. Two of the four algorithms (RidgeReg and SVM) are supervised and only consider the labeled images. In image retrieval, the number of labeled images is usually very small, especially in the first round of feedback. The other two algorithms (LPRReg and ARE) are semi-supervised. They consider both labeled and unlabeled images. The large amount of unlabeled images can help reveal the intrinsic geometrical structure of the feature space and the semantic relationship between images. Thus, these two semi-supervised algorithms are expected to achieve reasonable performance even with a small number of labeled images.

Figure 3: The average precision-scope curves of the different algorithms for the first two feedback iterations: (a) feedback iteration 1, (b) feedback iteration 2. LPRReg is the best algorithm over the entire scope.

Figure 4: Performance evaluation of the four learning algorithms in learning the semantic concepts from the feedbacks: (a) precision at top 10, (b) precision at top 20, and (c) precision at top 30.

4.3 Image Retrieval Performance

Table 1 shows the precision at top 20 of the first feedback iteration for all the 79 categories. The Baseline indicates the initial result without feedback information. The retrieval performance of all the algorithms varies with the categories. There are some easy categories on which all the algorithms perform well and some hard categories on which all the algorithms perform poorly. Since the features used in our experiments are color and texture features, categories with consistent color and texture (e.g., category 23 in Figure 2) have good retrieval performance, while categories with varying color and texture (e.g., category 7 in Figure 2) tend to have poor retrieval performance. Among all the 79 categories, our LPRReg is the best for 64 categories and ranked No. 2 for the remaining 15 categories.

Figure 3 shows the average precision-scope curves of the different algorithms for the first two feedback iterations. As the user's feedback is iteratively added, the corresponding precision results (at top 10, top 20 and top 30) of the four algorithms are shown in Figure 4. The running times of the different algorithms for each query are shown in Table 2. We would like to highlight several points about these results.

• Our algorithm LPRReg achieves the highest precision in all the feedback iterations and over the whole scope. The reason is that LPRReg learns a classification function which is powerful in the sense of discriminating between relevant and irrelevant images. Meanwhile, LPRReg explicitly considers the intrinsic geometrical structure of the image manifold, which guarantees the best performance even with only a very small number of labeled images.

Table 2: Average running time of different algorithms on processing one query

Algorithm   Time per feedback iteration (s)
              1      2      3      4      5
RidgeReg    0.005  0.005  0.005  0.006  0.006
LPRReg      0.052  0.056  0.059  0.059  0.062
SVM         0.058  0.068  0.077  0.084  0.091
ARE         0.094  0.095  0.096  0.097  0.098

Figure 5: The performance of LPR Regression vs. the parameter λ. LPR Regression is very stable with respect to the parameter λ; the performance is almost the same for λ from 10^−2 to 10^7.

• The discriminating power between relevant and irrelevant images is the key property of a classification function for image retrieval. A classifier with good discriminating power on unlabeled images is desired. However, with a small number of labeled images, the learned classifier might not perform well on the test (unlabeled) images; this is the question of the generalization capability of a classifier. In such a situation, regularization is needed. SVM can be thought of as a classifier with a large-margin regularizer [10]. Although such a regularizer is quite powerful with a small number of training examples, it fails to consider the intrinsic image manifold structure that can be revealed by the unlabeled images.

• At the first feedback iteration, the result of ARE is quite good, especially the precision at top 5 and top 10. The reason is that the number of labeled images is very small in the first feedback round. ARE considers both labeled and unlabeled images, while the supervised algorithms (RidgeReg and SVM) only consider the labeled images. In the later feedback iterations, the feedback information accumulates and the classification approaches give better results. ARE aims at learning a distance measure on the image manifold, and thus may not be optimal in the sense of discriminating between relevant and irrelevant images.

Figure 6: The performance of ARE vs. the dimensionality. The best performance in different feedback iterations appears at different dimensions, which makes it hard to estimate the intrinsic dimensionality in reality.

• Considering that efficiency is a key issue in image retrieval, especially at web scale, a classification framework rather than a subspace learning framework might be a better choice. Also, a regression framework could be more competitive than SVM.
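Reading LPRReg as regularized least squares with a Locality Preserving Regularizer, one plausible closed form is sketched below: squared loss on the labeled images plus λ times a graph-Laplacian smoothness penalty over all images. The exact objective and normalization in the paper may differ; treat this as an illustration, not the authors' implementation.

```python
import numpy as np

def lpr_regression(X_lab, y, X_all, L, lam=0.1):
    """Sketch of locality-preserving regularized regression:
    minimize ||X_lab w - y||^2 + lam * w^T X_all^T L X_all w,
    where L is the graph Laplacian over all (labeled + unlabeled)
    images.  Returns the closed-form solution of the normal equations."""
    d = X_lab.shape[1]
    A = X_lab.T @ X_lab + lam * (X_all.T @ L @ X_all)
    A += 1e-8 * np.eye(d)   # tiny ridge keeps A invertible
    return np.linalg.solve(A, X_lab.T @ y)

# toy example: 3 labeled + 2 unlabeled images with 2-D features,
# with a chain graph 0-1-2-3-4 as the (assumed) neighborhood graph
X_all = np.array([[1., 0.], [0., 1.], [1., 1.], [0.5, 0.5], [2., 0.]])
X_lab, y = X_all[:3], np.array([1., -1., 0.])
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W
w = lpr_regression(X_lab, y, X_all, L, lam=0.1)
print(w.shape)   # -> (2,)
```

In a retrieval round, relevant feedback images would be labeled +1 and irrelevant ones -1, and all database images ranked by the learned score w^T x.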

4.4 Parameter Selection

In the LPR regression, there is a parameter λ that adjusts the weight of the regularization term. In the previous experimental results, we empirically set λ to 0.1. In this subsection, we examine the stability of LPR regression with respect to λ. Figure 5 shows how the performance of LPR regression at the first feedback iteration changes with λ. We can see that the LPR regression is very stable: the performance is almost the same for λ from 10^-2 to 10^7.
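The stability study in Figure 5 amounts to sweeping λ over a logarithmic grid; a minimal sketch of that loop is below, with `evaluate` standing in as a hypothetical placeholder for "run one feedback iteration with this λ and return the precision at top 20".

```python
# Sweep lambda over the range examined in Figure 5 (10^-2 up to 10^7).
def evaluate(lam):
    return 0.5  # placeholder: a real run would return measured precision

lambdas = [10.0 ** p for p in range(-2, 8)]   # 10 logarithmically spaced values
scores = {lam: evaluate(lam) for lam in lambdas}
best = max(scores, key=scores.get)            # lambda with the highest precision
```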

Different from regression or SVM, the ARE algorithm is a subspace learning algorithm. For ARE, there is the problem of how to determine the dimensionality of the subspace. Figure 6 shows how the performance of ARE at different feedback iterations changes with the reduced dimensionality. We see that the best performances at different feedback iterations appear in subspaces of different dimensionality, which makes it hard to estimate the dimensionality of the reduced space in reality.

5. CONCLUSIONS AND FUTURE WORK

A novel image retrieval approach on manifold is proposed in this paper. Our retrieval method is based on learning a classification function on the image manifold. Our method has two major advantages: first, as a classification-based method, it is more efficient than subspace learning based methods; second, it explicitly takes into consideration the geometrical structure of the image manifold by making use of both labeled and unlabeled images. Several experiments demonstrate the efficiency and effectiveness of our method.

Due to efficiency considerations, the algorithm proposed in our paper is linear. Thus it may fail to capture complex image manifolds that are highly non-linear. However, it is easy to extend our algorithm to a Reproducing Kernel Hilbert Space (RKHS), which leads to Locality Preserving Regularized Kernel Regression.

We have noticed that there is a trend in many areas, including multimedia information retrieval, computer vision, pattern recognition, and data mining, of considering the data points as drawn from a probability distribution that has support on or near a submanifold of Euclidean space. Manifold-based techniques have been shown superior to Euclidean-based techniques for CBIR by several researchers [13], [20], [11]. All of these algorithms (including the one presented in this paper) make an implicit assumption that the data points are uniformly sampled from the image manifold. If the data points are not uniformly sampled from the manifold, the graph Laplacian may not converge to the true Laplace-Beltrami operator on the manifold. However, real-world image databases might be much more complicated and far from uniform. We are currently exploring this problem in theory and practice.
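The RKHS extension mentioned above can be sketched by replacing the linear function w^T x with a kernel expansion f(x) = Σ_i α_i k(x_i, x), in the spirit of manifold regularization [2]. The objective below (squared loss on the labeled images plus a Laplacian penalty on the function values over all images) is an assumed form of Locality Preserving Regularized Kernel Regression, not the authors' exact formulation.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    d2 = (np.sum(X ** 2, 1)[:, None] + np.sum(Y ** 2, 1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * d2)

def kernel_lpr(X_all, y_lab, lab_idx, L, lam=0.1, gamma=1.0):
    """Sketch of a kernelized locality-preserving regression:
    with f = K a, minimize ||J K a - y||^2 + lam * a^T K L K a,
    where K is the kernel matrix on all images and J selects the
    labeled rows.  Returns the scores f(x_i) for every image."""
    n = X_all.shape[0]
    K = rbf_kernel(X_all, X_all, gamma)
    J = np.zeros((len(lab_idx), n))
    J[np.arange(len(lab_idx)), lab_idx] = 1.0
    JK = J @ K
    A = JK.T @ JK + lam * (K @ L @ K) + 1e-8 * np.eye(n)
    a = np.linalg.solve(A, JK.T @ y_lab)
    return K @ a

# toy run: two clusters, one labeled image per cluster
X_all = np.array([[0., 0.], [0.1, 0.], [5., 5.], [5.1, 5.]])
W = np.array([[0., 1., 0., 0.], [1., 0., 0., 0.],
              [0., 0., 0., 1.], [0., 0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W
scores = kernel_lpr(X_all, np.array([1., -1.]), [0, 2], L)
print(scores[1] > scores[3])   # -> True: the unlabeled neighbor of the
                               # relevant image outranks the other one
```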

6. REFERENCES

[1] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems 14, pages 585–591. MIT Press, Cambridge, MA, 2001.
[2] M. Belkin, P. Niyogi, and V. Sindhwani. On manifold regularization. In Tenth International Workshop on Artificial Intelligence and Statistics, 2005.
[3] C.-C. Chang and C.-J. Lin. LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[4] E. Chang, K. Goh, G. Sychay, and G. Wu. CBSA: Content-based soft annotation for multimodal image retrieval using Bayes point machines. IEEE Transactions on Circuits and Systems for Video Technology, 13(1):26–38, January 2003.
[5] F. R. K. Chung. Spectral Graph Theory, volume 92 of Regional Conference Series in Mathematics. AMS, 1997.
[6] I. Cohen, F. G. Cozman, N. Sebe, M. C. Cirelo, and T. S. Huang. Semisupervised learning of classifiers: Theory, algorithms, and their application to human-computer interaction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(12):1553–1567, 2004.
[7] I. J. Cox, T. P. Minka, T. V. Papathomas, and P. N. Yianilos. The Bayesian image retrieval system, PicHunter: Theory, implementation, and psychophysical experiments. IEEE Transactions on Image Processing, 9:20–37, 2000.
[8] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. Wiley-Interscience, Hoboken, NJ, 2nd edition, 2000.
[9] S. Guattery and G. L. Miller. Graph embeddings and Laplacian eigenvalues. SIAM Journal on Matrix Analysis and Applications, 21(3):703–723, 2000.
[10] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer-Verlag, 2001.
[11] J. He, M. Li, H.-J. Zhang, H. Tong, and C. Zhang. Manifold-ranking based image retrieval. In Proceedings of the ACM Conference on Multimedia, New York, October 2004.
[12] X. He. Incremental semi-supervised subspace learning for image retrieval. In Proceedings of the ACM Conference on Multimedia, New York, October 2004.
[13] X. He, W.-Y. Ma, and H.-J. Zhang. Learning an image manifold for retrieval. In Proceedings of the ACM Conference on Multimedia, New York, October 2004.
[14] X. He and P. Niyogi. Locality preserving projections. In Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2003.
[15] X. He, S. Yan, Y. Hu, P. Niyogi, and H.-J. Zhang. Face recognition using Laplacianfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(3):328–340, 2005.
[16] C.-H. Hoi and M. R. Lyu. A novel log-based relevance feedback technique in content-based image retrieval. In Proceedings of the ACM Conference on Multimedia, New York, October 2004.
[17] S. Hoi and M. Lyu. A semi-supervised active learning framework for image retrieval. In Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR'05), 2005.
[18] D. P. Huijsmans and N. Sebe. How to complete performance graphs in content-based image retrieval: Add generality and normalize scope. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(2):245–251, 2005.
[19] Y. Ishikawa, R. Subramanya, and C. Faloutsos. MindReader: Querying databases through multiple examples. In Proc. 24th International Conference on Very Large Databases, pages 218–227, New York, 1998.
[20] Y.-Y. Lin, T.-L. Liu, and H.-T. Chen. Semantic manifold learning for image retrieval. In Proceedings of the ACM Conference on Multimedia, Singapore, November 2005.
[21] W.-Y. Ma and B. S. Manjunath. NeTra: a toolbox for navigating large image databases. Multimedia Systems, 7(3), May 1999.
[22] Y. Rui, T. S. Huang, and S. Mehrotra. Content-based image retrieval with relevance feedback in MARS. In IEEE Conference on Image Processing, pages 815–818, Santa Barbara, CA, Oct. 1997.
[23] Y. Rui, T. S. Huang, M. Ortega, and S. Mehrotra. Relevance feedback: A power tool in interactive content-based image retrieval. IEEE Trans. on Circuits and Systems for Video Technology, 8(5):644–655, 1998.
[24] A. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain. Content-based image retrieval at the end of the early years. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12):1349–1380, 2000.
[25] Q. Tian, J. Yu, Q. Xue, and N. Sebe. A new analysis of the value of unlabeled data in semi-supervised learning for image retrieval. In IEEE Int. Conf. on Multimedia and Expo (ICME'04), 2004.
[26] K. Tieu and P. Viola. Boosting image retrieval. In Proceedings of the ACM Conference on Multimedia, Hilton Head Island, SC, June 2000.
[27] A. N. Tikhonov. Regularization of incorrectly posed problems. Soviet Math., (4), 1963 (English translation).
[28] S. Tong and E. Chang. Support vector machine active learning for image retrieval. In Proceedings of the ninth ACM international conference on Multimedia, pages 107–118, 2001.
[29] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1995.
[30] H. Yu, M. Li, H.-J. Zhang, and J. Feng. Color texture moments for content-based image retrieval. In International Conference on Image Processing, pages 24–28, 2002.
[31] J. Yu and Q. Tian. Learning image manifolds by semantic subspace projection. In Proceedings of the ACM Conference on Multimedia, Santa Barbara, October 2006.
