
LETTER — Communicated by Klaus-Robert Müller

SVDD-Based Pattern Denoising

Jooyoung Park

parkj@korea.ac.kr

Daesung Kang

mpkds@korea.ac.kr

Department of Control and Instrumentation Engineering, Korea University, Jochiwon, Chungnam 339-700, Korea

Jongho Kim

jongho6270.kim@samsung.com

Mechatronics and Manufacturing Technology Center, Samsung Electronics Co., Ltd., Suwon, Gyeonggi 443-742, Korea

James T. Kwok

jamesk@cse.ust.hk

Ivor W. Tsang

ivor@cse.ust.hk

Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong

The support vector data description (SVDD) is one of the best-known one-class support vector learning methods, in which one tries the strategy of using balls defined on the feature space in order to distinguish a set of normal data from all other possible abnormal objects. The major concern of this letter is to extend the main idea of SVDD to pattern denoising. Combining the geodesic projection to the spherical decision boundary resulting from the SVDD, together with solving the preimage problem, we propose a new method for pattern denoising. We first solve SVDD for the training data and then, for each noisy test pattern, obtain its denoised feature by moving its feature vector along the geodesic on the manifold to the nearest decision boundary of the SVDD ball. Finally, we find the location of the denoised pattern by obtaining the preimage of the denoised feature. The applicability of the proposed method is illustrated by a number of toy and real-world data sets.

1 Introduction

Recently, the support vector learning method has become a viable tool in the area of intelligent systems (Cristianini & Shawe-Taylor, 2000; Schölkopf & Smola, 2002). Among the important application areas for support vector learning, we have the one-class classification problems (Campbell & Bennett, 2001; Crammer & Chechik, 2004; Lanckriet, El Ghaoui, & Jordan, 2003; Laskov, Schäfer, & Kotenko, 2004; Müller, Mika, Rätsch, Tsuda, & Schölkopf, 2001; Pekalska, Tax, & Duin, 2003; Rätsch, Mika, Schölkopf, & Müller, 2002; Schölkopf, Platt, & Smola, 2000; Schölkopf, Platt, Shawe-Taylor, Smola, & Williamson, 2001; Schölkopf & Smola, 2002; Tax, 2001; Tax & Duin, 1999). In one-class classification problems, we are given only the training data for the normal class, and after the training phase is finished, we are required to decide whether each test vector belongs to the normal or the abnormal class. One-class classification problems are often called outlier detection problems or novelty detection problems. Obvious examples of this class include fault detection for machines and the intrusion detection system for computers (Schölkopf & Smola, 2002).

Neural Computation 19, 1919–1938 (2007). © 2007 Massachusetts Institute of Technology

One of the best-known support vector learning methods for the one-class problems is the SVDD (support vector data description) (Tax, 2001; Tax & Duin, 1999). In the SVDD, balls are used for expressing the region for the normal class. Among the methods having the same purpose as the SVDD are the so-called one-class SVM of Schölkopf and others (Rätsch et al., 2002; Schölkopf et al., 2001; Schölkopf et al., 2000), the linear programming method of Campbell and Bennett (2001), the information-bottleneck-principle-based optimization approach of Crammer and Chechik (2004), and the single-class minimax probability machine of Lanckriet et al. (2003). Since balls on the input domain can express only a limited class of regions, the SVDD in general enhances its expressing power by utilizing balls on the feature space instead of balls on the input domain.

In this letter, we extend the main idea of the SVDD toward its use for the problem of pattern denoising (Kwok & Tsang, 2004; Mika et al., 1999; Schölkopf et al., 1999). Combining the movement to the spherical decision boundary resulting from the SVDD together with a solver for the preimage problem, we propose a new method for pattern denoising that consists of the following steps. First, we solve the SVDD for the training data consisting of the prototype patterns. Second, for each noisy test pattern, we obtain its denoised feature by moving its feature vector along the geodesic to the spherical decision boundary of the SVDD ball on the feature space. Finally, in the third step, we recover the location of the denoised pattern by obtaining the preimage of the denoised feature, following the strategy of Kwok and Tsang (2004).

The remaining parts of this letter are organized as follows. In section 2, preliminaries are provided regarding the SVDD. Our main results on pattern denoising based on the SVDD are presented in section 3. In section 4, the applicability of the proposed method is illustrated by a number of toy and real-world data sets. Finally, in section 5, concluding remarks are given.

2 Preliminaries

The SVDD method, which approximates the support of objects belonging to the normal class, is derived as follows (Tax, 2001; Tax & Duin, 1999). Consider a ball $B$ with center $a \in \mathbb{R}^d$ and radius $R$, and the training data set $D$ consisting of objects $x_i \in \mathbb{R}^d$, $i = 1, \ldots, N$. The main idea of SVDD is to find a ball that can achieve two conflicting goals (it should be as small as possible and contain as many training data as possible) simultaneously by solving

$$
\min \; L_0(R^2, a, \xi) = R^2 + C \sum_{i=1}^{N} \xi_i
\quad \text{s.t.} \quad \|x_i - a\|^2 \le R^2 + \xi_i, \; \xi_i \ge 0, \; i = 1, \ldots, N. \tag{2.1}
$$

Here, the slack variable $\xi_i$ represents the penalty associated with the deviation of the $i$th training pattern outside the ball, and $C$ is a trade-off constant controlling the relative importance of each term. The dual problem of equation 2.1 is

$$
\max_{\alpha} \; \sum_{i=1}^{N} \alpha_i \langle x_i, x_i \rangle - \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j \langle x_i, x_j \rangle
\quad \text{s.t.} \quad \sum_{i=1}^{N} \alpha_i = 1, \; \alpha_i \in [0, C], \; i = 1, \ldots, N. \tag{2.2}
$$

In order to express more complex decision regions in $\mathbb{R}^d$, one can use the so-called feature map $\phi: \mathbb{R}^d \to F$ and balls defined on the feature space $F$. Proceeding similarly to the above and utilizing the kernel trick $\langle \phi(x), \phi(z) \rangle = k(x, z)$, one can find the corresponding feature-space SVDD ball $B_F$ in $F$. Moreover, from the Kuhn-Tucker condition, its center can be expressed as

$$
a_F = \sum_{i=1}^{N} \alpha_i \phi(x_i), \tag{2.3}
$$

and its radius $R_F$ can be computed by utilizing the distance between $a_F$ and any support vector $x$ on the ball boundary:

$$
R_F^2 = k(x, x) - 2 \sum_{i=1}^{N} \alpha_i k(x_i, x) + \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j k(x_i, x_j). \tag{2.4}
$$

In this letter, we always use the gaussian kernel $k(x, z) = \exp(-\|x - z\|^2 / s^2)$, and so $k(x, x) = 1$ for each $x \in \mathbb{R}^d$. Finally, note that in this case, the SVDD formulation is equivalent to

$$
\min_{\alpha} \; \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j k(x_i, x_j)
\quad \text{s.t.} \quad \sum_{i=1}^{N} \alpha_i = 1, \; \alpha_i \in [0, C], \; i = 1, \ldots, N, \tag{2.5}
$$

and the resulting criterion for normality is

$$
f_F(x) \triangleq R_F^2 - \|\phi(x) - a_F\|^2 \ge 0. \tag{2.6}
$$

3 Main Results

In SVDD, the objective is to find the support of the normal objects; anything outside the support is viewed as abnormal. In the feature space, the support is expressed by a reasonably small ball containing a reasonably large portion of the $\phi(x_i)$'s. The main idea of this letter is to utilize the ball-shaped support on the feature space for correcting test inputs distorted by noise. More precisely, with the trade-off constant $C$ set appropriately,¹ we can find a region where the normal objects without noise generally reside. When an object (which was originally normal) is given as a test input $x$ in a distorted form, the network resulting from the SVDD is supposed to judge that the distorted object $x$ does not belong to the normal class. The role of the SVDD has been conventional up to this point, and the problem of curing the distortion might be thought of as beyond the scope of the SVDD.

In this letter, we go one step further and move the feature vector $\phi(x)$ of the distorted test input $x$ to the point $Q\phi(x)$ lying on the surface of the SVDD ball $B_F$ so that it can be tailored enough to be normal (see Figure 1). Given that all the points in the input space are mapped to a manifold in the kernel-induced feature space (Burges, 1999), the movement is along the geodesic on this manifold, and so the point $Q\phi(x)$ can be considered the geodesic projection of $\phi(x)$ onto the SVDD ball. Of course, since the movement starts from the distorted feature $\phi(x)$, there are plenty of reasons to believe that the tailored feature $Q\phi(x)$ still contains essential information about the original pattern. We claim that the tailored feature $Q\phi(x)$ is the denoised version of the feature vector $\phi(x)$. Pertinent to this claim is the discussion of Ben-Hur, Horn, Siegelmann, and Vapnik (2001) on support vector clustering, in which the SVDD is shown to be a very efficient tool for clustering since the SVDD ball, when mapped back to the input space, can separate into several components, each enclosing a separate cluster of normal data points, and can generate cluster boundaries of arbitrary shapes. These arguments, together with an additional step for finding the preimage of $Q\phi(x)$, comprise our proposal for a new denoising strategy. In the following, we present the proposed method more precisely with mathematical details.

¹ In our experiments for noisy handwritten digits, $C = 1/(N \times 0.2)$ was used for the purpose of denoising.

Figure 1: Basic idea for finding the denoised feature vector $Q\phi(x)$ by moving along the geodesic.

The proposed method consists of three steps. First, we solve the SVDD, equation 2.5, for the given prototype patterns $D \triangleq \{x_i \in \mathbb{R}^d \mid i = 1, \ldots, N\}$. As a result, we find the optimal $\alpha_i$'s along with $a_F$ and $R_F^2$ obtained via equations 2.3 and 2.4. Second, we consider each test pattern $x$. When the decision function $f_F$ of equation 2.6 yields a nonnegative value for $x$, the test input is accepted as normal, and the denoising process is bypassed with $Q\phi(x)$ set equal to $\phi(x)$. Otherwise, the test input $x$ is considered to be abnormal and distorted by noise. To recover the denoised pattern, we move its feature vector $\phi(x)$ along the geodesic defined on the manifold in the feature space, toward the SVDD ball $B_F$, up to the point where it touches the ball. In principle, any kernel can be used here. However, as we will show, closed-form solutions can be obtained when stationary kernels (such as the gaussian kernel) are used.² In this case, it is obvious that all the points are mapped onto the surface of a ball in the feature space, and we can see from Figure 2 that the point $Q\phi(x)$ is the ultimate destination of this movement. For readers' convenience, we also include a three-dimensional drawing (see Figure 3) to clarify Figure 2.
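The first step (solving the dual, equation 2.5, and recovering $R_F^2$ via equation 2.4) can be carried out with any quadratic programming routine. The following is a minimal Python/numpy sketch, not the authors' Matlab implementation; the function names and the use of scipy's SLSQP solver are our own choices.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_kernel(X, Z, s2):
    # Pairwise gaussian kernel k(x, z) = exp(-||x - z||^2 / s^2)
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / s2)

def svdd_fit(X, C, s2):
    """Solve the SVDD dual (eq. 2.5): min alpha' K alpha
    s.t. sum(alpha) = 1, 0 <= alpha_i <= C."""
    N = len(X)
    K = gaussian_kernel(X, X, s2)
    obj = lambda a: a @ K @ a
    grad = lambda a: 2 * K @ a
    cons = {"type": "eq", "fun": lambda a: a.sum() - 1.0}
    res = minimize(obj, np.full(N, 1.0 / N), jac=grad,
                   bounds=[(0.0, C)] * N, constraints=cons, method="SLSQP")
    alpha = res.x
    # A support vector strictly inside (0, C) lies on the ball boundary;
    # use it for the radius (eq. 2.4, with k(x, x) = 1 for the gaussian kernel).
    sv = np.argmax((alpha > 1e-6) & (alpha < C - 1e-6))
    R2 = 1.0 - 2 * K[sv] @ alpha + alpha @ K @ alpha
    return alpha, R2

def svdd_decision(x, X, alpha, R2, s2):
    # f_F(x) = R_F^2 - ||phi(x) - a_F||^2 (eq. 2.6); >= 0 means "normal"
    kx = gaussian_kernel(x[None, :], X, s2)[0]
    K = gaussian_kernel(X, X, s2)
    return R2 - (1.0 - 2 * kx @ alpha + alpha @ K @ alpha)
```

A point far from all prototypes has kernel values near zero, so its decision value is negative and the denoising branch of the second step is triggered.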

In the following, the proposed method will be presented only for the gaussian kernel, where all the points are mapped to the unit ball in the feature space. Extension to other stationary kernels is straightforward. In order to find $Q\phi(x)$, it is necessary to solve the following series of subproblems:

² A stationary kernel $k(x, x')$ is a kernel that depends only on $x - x'$.

Figure 2: Proposed denoising procedure when a stationary kernel is used.

• To find the separating hyperplane $H_F$. From Figure 2, it is clear that for the SVDD problems utilizing stationary kernels, the center $a_F$ of the SVDD ball has the same direction as the weight vector of the separating hyperplane $H_F$. In particular, when the gaussian kernel is used, the hyperplane $H_F$ can be represented by

$$
2\langle a_F, \phi(x) \rangle = 1 + \|a_F\|^2 - R_F^2. \tag{3.1}
$$

Further information needed for identifying the location of $Q\phi(x)$ includes the vectors $\beta a_F$, $\delta a_F$, and the distance $\gamma$ shown in Figure 2.

• To find the vector $\beta a_F$. As shown in Figure 2, the vector $\beta a_F$ lies on the hyperplane $H_F$. Thus, it should satisfy equation 3.1; that is,

$$
2\langle a_F, \beta a_F \rangle = 1 + \|a_F\|^2 - R_F^2. \tag{3.2}
$$

Therefore, we have

$$
\beta = \frac{1 + \|a_F\|^2 - R_F^2}{2\|a_F\|^2} = \frac{1 + \alpha^T K \alpha - R_F^2}{2\,\alpha^T K \alpha}, \tag{3.3}
$$

where $\alpha \triangleq [\alpha_1 \cdots \alpha_N]^T$, and $K$ is the kernel matrix with entries $K_{ij} = k(x_i, x_j)$.

Figure 3: Denoised feature vector $Q\phi(x)$ shown in a (hypothetical) three-dimensional feature space. Here, since the chosen kernel is stationary, the projected feature vector $Q\phi(x)$, as well as the feature vector $\phi(x)$, should lie on a ball centered at the origin of the feature space. Also note that the location of $Q\phi(x)$ should be at the boundary of the intersection of the ball surface and the SVDD ball, which is colored black.

• To find the distance $\gamma$. Since $Q\phi(x)$ is on the surface of the unit ball, we have $\|Q\phi(x)\|^2 = 1$. Also, from the Pythagorean theorem, $\|\beta a_F\|^2 + \gamma^2 = \|Q\phi(x)\|^2$ holds. Hence, we have

$$
\gamma = \sqrt{1 - \beta^2 \|a_F\|^2} = \sqrt{1 - \beta^2\, \alpha^T K \alpha}. \tag{3.4}
$$

• To find the vector $\delta a_F$. Since $P\phi(x) \triangleq \phi(x) + \delta a_F$ should lie on the hyperplane $H_F$, it should satisfy equation 3.1. Thus, the following holds:

$$
2\langle a_F, \phi(x) + \delta a_F \rangle = 1 + \|a_F\|^2 - R_F^2. \tag{3.5}
$$

Hence, we have

$$
\delta = \frac{1 + \|a_F\|^2 - R_F^2 - 2\langle a_F, \phi(x) \rangle}{2\|a_F\|^2} = \frac{1 + \alpha^T K \alpha - R_F^2 - 2\, k_x^T \alpha}{2\,\alpha^T K \alpha}, \tag{3.6}
$$

where $k_x \triangleq [k(x, x_1), \ldots, k(x, x_N)]^T$.


• To find the denoised feature vector $Q\phi(x)$. From Figure 2, we see that

$$
Q\phi(x) = \beta a_F + \frac{\gamma}{\|\phi(x) + (\delta - \beta) a_F\|} \left( \phi(x) + (\delta - \beta) a_F \right). \tag{3.7}
$$

Note that with

$$
\lambda_1 \triangleq \frac{\gamma}{\|\phi(x) + (\delta - \beta) a_F\|} \tag{3.8}
$$

and

$$
\lambda_2 \triangleq \beta + \frac{\gamma (\delta - \beta)}{\|\phi(x) + (\delta - \beta) a_F\|}, \tag{3.9}
$$

the above expression for $Q\phi(x)$ can be further simplified into

$$
Q\phi(x) = \lambda_1 \phi(x) + \lambda_2 a_F, \tag{3.10}
$$

where $\lambda_1$ and $\lambda_2$ can be computed from

$$
\lambda_1 = \frac{\gamma}{\sqrt{1 + 2(\delta - \beta)\, k_x^T \alpha + (\delta - \beta)^2\, \alpha^T K \alpha}}, \tag{3.11}
$$

$$
\lambda_2 = \beta + \frac{\gamma (\delta - \beta)}{\sqrt{1 + 2(\delta - \beta)\, k_x^T \alpha + (\delta - \beta)^2\, \alpha^T K \alpha}}. \tag{3.12}
$$
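Given the optimal $\alpha$'s, the quantities $\beta$, $\gamma$, $\delta$ and then $\lambda_1$, $\lambda_2$ of equations 3.3 to 3.12 involve only the kernel matrix $K$ and the kernel vector $k_x$. A minimal numpy sketch (the function name is ours; the gaussian kernel is assumed, so $\|\phi(x)\|^2 = 1$):

```python
import numpy as np

def denoised_feature_coeffs(alpha, K, kx, R2):
    """Compute lambda_1, lambda_2 of eq. 3.10 so that
    Q phi(x) = lambda_1 phi(x) + lambda_2 a_F."""
    aKa = alpha @ K @ alpha                      # ||a_F||^2
    beta = (1.0 + aKa - R2) / (2.0 * aKa)        # eq. 3.3
    gamma = np.sqrt(1.0 - beta**2 * aKa)         # eq. 3.4
    delta = (1.0 + aKa - R2 - 2.0 * kx @ alpha) / (2.0 * aKa)  # eq. 3.6
    # ||phi(x) + (delta - beta) a_F|| via the kernel trick
    norm = np.sqrt(1.0 + 2.0 * (delta - beta) * (kx @ alpha)
                   + (delta - beta)**2 * aKa)
    lam1 = gamma / norm                          # eq. 3.11
    lam2 = beta + gamma * (delta - beta) / norm  # eq. 3.12
    return lam1, lam2
```

A quick sanity check is that $\|Q\phi(x)\|^2 = \lambda_1^2 + 2\lambda_1\lambda_2\, k_x^T\alpha + \lambda_2^2\, \alpha^T K \alpha$ must equal 1, since $Q\phi(x)$ lies on the unit sphere.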

Obviously, the movement from the feature $\phi(x)$ to $Q\phi(x)$ is along the geodesic toward the noise-free normal class and thus can be interpreted as performing denoising in the feature space. With this interpretation in mind, the feature vector $Q\phi(x)$ will be called the denoised feature of $x$ in this letter. In the third and final step, we try to find the preimage of the denoised feature $Q\phi(x)$. If the inverse map $\phi^{-1}: F \to \mathbb{R}^d$ were well defined and available, this final step, attempting to get the denoised pattern via $\hat{x} \triangleq \phi^{-1}(Q\phi(x))$, would be trivial. However, the exact preimage typically does not exist (Mika et al., 1999). Thus, we need to seek an approximate solution instead. For this, we follow the strategy of Kwok and Tsang (2004), which uses a simple relationship between feature-space distance and input-space distance (Williams, 2002) together with MDS (multidimensional scaling) (Cox & Cox, 2001). Using the kernel trick and the simple relation, equation 3.10, we see that $\langle Q\phi(x), \phi(x_i) \rangle$ can be easily computed as follows:

$$
\langle Q\phi(x), \phi(x_i) \rangle = \lambda_1 k(x_i, x) + \lambda_2 \sum_{j=1}^{N} \alpha_j k(x_i, x_j). \tag{3.13}
$$

Thus, the feature-space distance between $Q\phi(x)$ and $\phi(x_i)$ can be obtained by plugging equation 3.13 into

$$
\tilde{d}^2(Q\phi(x), \phi(x_i)) = \|Q\phi(x) - \phi(x_i)\|^2 = 2 - 2\langle Q\phi(x), \phi(x_i) \rangle. \tag{3.14}
$$


Now, note that for the gaussian kernel, the following simple relationship holds between $d(x_i, x_j) \triangleq \|x_i - x_j\|$ and $\tilde{d}(\phi(x_i), \phi(x_j)) \triangleq \|\phi(x_i) - \phi(x_j)\|$ (Williams, 2002):

$$
\tilde{d}^2(\phi(x_i), \phi(x_j)) = \|\phi(x_i) - \phi(x_j)\|^2 = 2 - 2k(x_i, x_j) = 2 - 2\exp(-\|x_i - x_j\|^2 / s^2) = 2 - 2\exp(-d^2(x_i, x_j)/s^2). \tag{3.15}
$$
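Equations 3.13 to 3.15 turn the denoised feature into input-space distance information: compute the feature-space squared distances, then invert equation 3.15. A small sketch under the same gaussian-kernel assumption (function names ours):

```python
import numpy as np

def denoised_to_train_sq_dists(lam1, lam2, kx, K, alpha):
    """Squared feature-space distances ||Q phi(x) - phi(x_i)||^2 (eq. 3.14),
    using <Q phi(x), phi(x_i)> = lam1 k(x_i, x) + lam2 sum_j alpha_j k(x_i, x_j)
    (eq. 3.13); all feature vectors have unit norm for the gaussian kernel."""
    inner = lam1 * kx + lam2 * (K @ alpha)
    return 2.0 - 2.0 * inner

def feat_to_input_sq_dist(d2_feat, s2):
    """Invert eq. 3.15: recover the input-space squared distance from a
    feature-space squared distance (gaussian kernel with width s^2)."""
    return -s2 * np.log(1.0 - d2_feat / 2.0)
```

With $\lambda_1 = 1$, $\lambda_2 = 0$ (no denoising, $Q\phi(x) = \phi(x)$) the distance to the pattern equal to $x$ itself comes out as zero, as it should.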

Since the feature-space distance $\tilde{d}^2(Q\phi(x), \phi(x_i))$ is now available from equation 3.14 for each training pattern $x_i$, we can easily obtain the corresponding input-space distance between the desired approximate preimage $\hat{x}$ of $Q\phi(x)$ and each $x_i$. Generally, the distances to neighbors are the most important in determining the location of any point. Hence, here we consider only the squared input-space distances between $Q\phi(x)$ and its $n$ nearest neighbors $\{\phi(x_{(1)}), \ldots, \phi(x_{(n)})\} \subset D_F$, and define

$$
d^2 \triangleq [d_1^2, d_2^2, \ldots, d_n^2]^T, \tag{3.16}
$$

where $d_i$ is the input-space distance between the desired preimage of $Q\phi(x)$ and $x_{(i)}$. In MDS (Cox & Cox, 2001), one attempts to find a representation of the objects that preserves the dissimilarities between each pair of them. Thus, we can use the MDS idea to embed $Q\phi(x)$ back into the input space. For this, we first take the average of the training data $\{x_{(1)}, \ldots, x_{(n)}\} \subset D$ to get their centroid $\bar{x} = (1/n)\sum_{i=1}^{n} x_{(i)}$, and construct the $d \times n$ matrix

$$
X \triangleq [x_{(1)}, x_{(2)}, \ldots, x_{(n)}]. \tag{3.17}
$$

Here, we note that by defining the $n \times n$ centering matrix $H \triangleq I_n - (1/n)\mathbf{1}_n \mathbf{1}_n^T$, where $I_n = \mathrm{diag}[1, \ldots, 1] \in \mathbb{R}^{n \times n}$ and $\mathbf{1}_n = [1, \ldots, 1]^T \in \mathbb{R}^{n \times 1}$, the matrix $XH$ centers the $x_{(i)}$'s at their centroid:

$$
XH = [x_{(1)} - \bar{x}, \ldots, x_{(n)} - \bar{x}]. \tag{3.18}
$$

The next step is to define a coordinate system in the column space of $XH$. When $XH$ is of rank $q$, we can obtain the SVD (singular value decomposition) (Moon & Stirling, 2000) of the $d \times n$ matrix $XH$ as

$$
XH = [U_1 \; U_2] \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_1^T \\ V_2^T \end{bmatrix} = U_1 \Sigma_1 V_1^T = U_1 Z, \tag{3.19}
$$

where $U_1 = [e_1, \ldots, e_q]$ is the $d \times q$ matrix with orthonormal columns $e_i$, and $Z \triangleq \Sigma_1 V_1^T = [z_1, \ldots, z_n]$ is a $q \times n$ matrix with columns $z_i$ being the projections of $x_{(i)} - \bar{x}$ onto the $e_j$'s. Note that

$$
\|x_{(i)} - \bar{x}\|^2 = \|z_i\|^2, \quad i = 1, \ldots, n, \tag{3.20}
$$

and collect these into an $n$-dimensional vector:

$$
d_0^2 \triangleq [\|z_1\|^2, \ldots, \|z_n\|^2]^T. \tag{3.21}
$$

The location of the preimage $\hat{x}$ is obtained by requiring $d^2(\hat{x}, x_{(i)})$, $i = 1, \ldots, n$, to be as close to the values in equation 3.16 as possible; thus, we need to solve the following LS (least squares) problem to find $\hat{x}$:

$$
d^2(\hat{x}, x_{(i)}) \approx d_i^2, \quad i = 1, \ldots, n. \tag{3.22}
$$

Now, following the steps of Kwok and Tsang (2004) and Gower (1968), $\hat{z} \in \mathbb{R}^{q \times 1}$ defined by $\hat{x} - \bar{x} = U_1 \hat{z}$ can be shown to satisfy

$$
\hat{z} = -\frac{1}{2} \Sigma_1^{-1} V_1^T \left( d^2 - d_0^2 \right). \tag{3.23}
$$
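The whole MDS embedding of equations 3.17 to 3.23 (plus the final shift back by the centroid) reduces to a centroid, an SVD, and one matrix-vector product. A minimal numpy sketch (function name ours):

```python
import numpy as np

def preimage_mds(d2_target, X_nbrs):
    """Approximate preimage via MDS (Kwok & Tsang, 2004): given the squared
    input-space distances d2_target (length n) from the denoised feature to
    its n nearest training neighbours (the columns of X_nbrs, shape d x n),
    embed the point back into the input space."""
    xbar = X_nbrs.mean(axis=1, keepdims=True)
    Xc = X_nbrs - xbar                          # XH: neighbours centred (eq. 3.18)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    q = np.sum(S > 1e-10)                       # rank of the centred matrix
    U1, S1, V1t = U[:, :q], S[:q], Vt[:q, :]
    Z = S1[:, None] * V1t                       # projections z_i (eq. 3.19)
    d2_0 = (Z ** 2).sum(axis=0)                 # ||z_i||^2 (eq. 3.21)
    zhat = -0.5 * np.diag(1.0 / S1) @ V1t @ (d2_target - d2_0)  # eq. 3.23
    return U1 @ zhat + xbar.ravel()             # eq. 3.24
```

When the target distances are exactly consistent with a point in the affine span of the neighbors, the least-squares solution recovers that point exactly.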

Therefore, by transforming equation 3.23 back to the original coordinate system in the input space, the location of the recovered denoised pattern turns out to be

$$
\hat{x} = U_1 \hat{z} + \bar{x}. \tag{3.24}
$$

4 Experiments

In this section, we compare the performance of the proposed method with other denoising methods on toy and real-world data sets. For simplicity, we denote the proposed method by SVDD.


4.1 Toy Data Set. We first use a toy example to illustrate the proposed method and compare its reconstruction performance with PCA. The setup is similar to that in Mika et al. (1999). Eleven clusters of samples are generated by first choosing 11 independent sources randomly in $[-1, 1]^{10}$ and then drawing samples uniformly from translations of $[-\sigma_0, \sigma_0]^{10}$ centered at each source. For each source, 30 points are generated to form the training data and 5 points to form the clean test data. Normally distributed noise, with variance $\sigma_0^2$ in each component, is then added to each clean test data point to form the corrupted test data.
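The sampling protocol just described can be sketched as follows; this is our own reading of the setup, and the identification of the noise variance with $\sigma_0^2$ is an assumption where the extracted text is ambiguous.

```python
import numpy as np

def make_toy_data(rng, n_sources=11, dim=10, sigma0=0.1,
                  n_train=30, n_test=5):
    """Toy data as in section 4.1 (after Mika et al., 1999): sources uniform
    in [-1, 1]^dim; training and clean test points uniform in
    [-sigma0, sigma0]^dim around each source; corrupted test points get
    additive N(0, sigma0^2) noise per component (assumed)."""
    sources = rng.uniform(-1, 1, (n_sources, dim))
    train = np.concatenate([s + rng.uniform(-sigma0, sigma0, (n_train, dim))
                            for s in sources])
    clean = np.concatenate([s + rng.uniform(-sigma0, sigma0, (n_test, dim))
                            for s in sources])
    corrupted = clean + rng.normal(0.0, sigma0, clean.shape)
    return train, clean, corrupted
```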

We carried out SVDD (with $C = 1/(N \times 0.6)$) and PCA for the training set, and then performed reconstructions of each corrupted test point using both the proposed SVDD-based method (with neighborhood size $n = 10$) and the standard PCA method.³ The procedure was repeated for different numbers of principal components in PCA and for different values of $\sigma_0$. For the width $s$ of the gaussian kernel, we used $s^2 = 2 \times 10 \times \sigma_0^2$, as in Mika et al. (1999). From the simulations, we found that when the input-space dimensionality $d$ is low (as in this example, where $d = 10$), applying the proposed method iteratively (i.e., recursively applying the denoising to the previous denoised results) can improve the performance.

We compared the results of our method (with 100 iterations) to those of the PCA-based method using the mean squared distance (MSE), which is defined as

$$
\mathrm{MSE} \triangleq \frac{1}{M} \sum_{k=1}^{M} \|t_k - \hat{t}_k\|^2, \tag{4.1}
$$

where $M$ is the number of test patterns, $t_k$ is the $k$th clean test pattern, and $\hat{t}_k$ is the denoised result for the $k$th noisy test pattern. Table 1 shows the ratio $\mathrm{MSE}_{PCA}/\mathrm{MSE}_{SVDD}$. Note that ratios larger than one indicate that the proposed SVDD-based method performs better than the other one.
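Equation 4.1 is just a per-pattern squared error averaged over the test set; as a one-line sketch (function name ours):

```python
import numpy as np

def mse(clean, denoised):
    """Eq. 4.1: mean squared distance between clean test patterns t_k
    (rows of `clean`) and their denoised reconstructions (rows of `denoised`)."""
    return np.mean(((clean - denoised) ** 2).sum(axis=1))
```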

Simulations were also performed for a two-dimensional version of the toy example (see Figure 4a), and the denoised results are shown in Figures 4b and 4c. For PCA, we used only one eigenvector (if two eigenvectors were used, the result is just a change of basis and thus not useful). The observed MSE values for the reconstructions using the proposed and PCA-based methods were 0.0192 and 0.1902, respectively.

From Table 1 and Figures 4b and 4c, one can see that in the considered examples, the proposed method yielded better performance than the PCA-based method. The reason seems to be that here, the examples basically deal

³ The corresponding Matlab program is posted online at http://cie.korea.ac.kr/ac lab/pro01.html.


Table 1: Comparison of MSE Ratios After Reconstructing the Corrupted Test Points in $\mathbb{R}^{10}$.

                #EV=1    #EV=3    #EV=5    #EV=7    #EV=9
σ0 = 0.05       193.6    91.90    37.37    12.44    2.528
σ0 = 0.10       49.36    23.85    10.51    4.667    2.421
σ0 = 0.15       22.71    11.34    5.613    3.241    2.419
σ0 = 0.20       13.13    6.853    3.854    2.705    2.392

Note: Performance ratios $\mathrm{MSE}_{PCA}/\mathrm{MSE}_{SVDD}$ larger than one indicate how much better SVDD did compared to PCA, for different choices of $\sigma_0$ and different numbers of principal components (#EV) used in the PCA reconstruction.

Figure 4: A two-dimensional version of the toy example (with $\sigma_0 = 0.15$) and its denoised results. Lines join each corrupted point (denoted +) with its reconstruction (denoted o). For SVDD, $s^2 = 2 \times 2 \times \sigma_0^2$ and $C = 1/(N \times 0.6)$. (a) Training data (denoted ·) and corrupted test data (denoted +). (b) Reconstruction using the proposed method (with 100 iterations), along with the resultant SVDD balls. (c) Reconstruction using the PCA-based method, where one principal component was used.

Figure 5: Sample USPS digit images. (a) Clean. (b) With gaussian noise ($\sigma^2 = 0.3$). (c) With gaussian noise ($\sigma^2 = 0.6$). (d) With salt-and-pepper noise ($p = 0.3$). (e) With salt-and-pepper noise ($p = 0.6$).

with clustering-type tasks, so any reconstruction method directly utilizing projection onto low-dimensional linear manifolds would be inefficient.

4.2 Handwritten Digit Data. In this section, we report the denoising results on the USPS digit database, which consists of 16×16 handwritten digits of 0 to 9. We first normalized each feature value to the range [0, 1]. For each digit, we randomly chose 60 examples to form the training set and 100 examples as the test set (see Figure 5). Two types of additive noise were added to the test set.

The first is the gaussian noise $N(0, \sigma^2)$ with variance $\sigma^2$, and the second is the so-called salt-and-pepper noise with noise level $p$, where $p/2$ is the probability that a pixel flips to black or white. Denoising was applied to each digit separately. The width $s$ of the gaussian kernel is set to

$$
s_0^2 = \frac{1}{N(N-1)} \sum_{i=1}^{N} \sum_{j=1}^{N} \|x_i - x_j\|^2, \tag{4.2}
$$

the average squared distance between training patterns. Here, the value of $C$ was set such that the support for the normal class resulting from the SVDD covers approximately 80% (= 100% − 20%) of the training data. Finally, in the third step, we used $n = 10$ neighbors to recover the denoised pattern $\hat{x}$ by solving the preimage problem.
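Equation 4.2 can be computed directly from the pairwise distances; a small sketch (function name ours):

```python
import numpy as np

def avg_sq_dist(X):
    """s0^2 of eq. 4.2: average squared distance over all ordered pairs
    i != j of training patterns (the i = j terms contribute zero)."""
    N = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return d2.sum() / (N * (N - 1))
```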

Figure 6: SNRs of the denoised USPS images. (Top) Gaussian noise with variance $\sigma^2$. (a) $\sigma^2 = 0.6$. (b) $\sigma^2 = 0.5$. (c) $\sigma^2 = 0.4$. (Bottom) Salt-and-pepper noise with noise level $p$. (d) $p = 0.6$. (e) $p = 0.5$. (f) $p = 0.4$.

The proposed approach is compared with the following standard methods:

• Kernel PCA denoising, using the preimage-finding method in Mika et al. (1999)

• Kernel PCA denoising, using the preimage-finding method in Kwok and Tsang (2004)

• Standard (linear) PCA

• Wavelet denoising (using the Wavelet Toolbox in Matlab)

For wavelet denoising, the image is first decomposed into wavelet coefficients using the discrete wavelet transform (Mallat, 1999). These wavelet coefficients are then compared with a given threshold value, and those that are close to zero are shrunk so as to remove the effect of noise in the data. The denoised image is then reconstructed from the shrunken wavelet coefficients by using the inverse discrete wavelet transform. The choice of the threshold value can be important to denoising performance. In the experiments, we use two standard methods to determine the threshold: VisuShrink (Donoho, 1995) and SureShrink (Donoho & Johnstone, 1995). Moreover, the Symlet-6 wavelet basis, with two levels of decomposition, is used. The methods of Mika et al. (1999) and Kwok and Tsang (2004) are both based on kernel PCA and require the number of eigenvectors as a

SVDD-Based Pattern Denoising 1933

(a)(b)(c)

(d)(e)(f)

(g)(h)(i)

(j)(k)(l)

Figure 7:Sample denoised USPS images.(Top two rows)Gaussian noise (σ2=0.6).(a)SVDD.(b)KPCA (Mika et al.,1999).(c)KPCA (Kwok &Tsang,2004).(d)PCA.(e)Wavelet (VisuShrink).(f)Wavelet (SureShrink).(Bottom two rows)Salt-and-pepper noise (p =0.6).(g)SVDD.(h)KPCA (Mika et al.,1999).(i)KPCA (Kwok &Tang,2004).(j)PCA.(k)Wavelet (VisuShrink).(l)Wavelet (SureShrink).

predetermined parameter. In the experiments, the number of principal components is varied from 5 to 60 (the maximum number of PCA components that can be obtained on this data set). For SVDD, we set $C = \frac{1}{N\nu}$ with $\nu$ set to 0.2. For denoising using MDS and SVDD, 10 nearest neighbors are used to perform preimaging.

Figure 8: SNRs of the denoised USPS images when 300 samples are chosen from each digit for training. (Top) Gaussian noise with variance $\sigma^2$. (a) $\sigma^2 = 0.6$. (b) $\sigma^2 = 0.5$. (c) $\sigma^2 = 0.4$. (Bottom) Salt-and-pepper noise with noise level $p$. (d) $p = 0.6$. (e) $p = 0.5$. (f) $p = 0.4$.

To quantitatively evaluate the denoising performance, we used the average signal-to-noise ratio (SNR) over the test set images, where the SNR is defined as

$$
10 \log_{10} \frac{\mathrm{var}(\text{clean image})}{\mathrm{var}(\text{clean image} - \text{new image})}
$$

in decibels (dB). Figure 6 shows the (average) SNR values obtained for the various methods. SVDD always achieves the best performance. When more PCs are used, the performance of denoising using kernel PCA increases, while the performance of PCA first increases and then decreases as some noisy PCs are included, which corrupts the resultant images. Note that one advantage of the wavelet denoising methods is that they do not require training. But a subsequent disadvantage is that they cannot utilize the training set, and so both do not perform well here. Samples of the denoised images that correspond to the best setting of each method are shown in Figure 7.
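The SNR score just defined can be sketched as (function name ours):

```python
import numpy as np

def snr_db(clean, denoised):
    """Average SNR (in dB) used to score the denoised images:
    10 log10( var(clean) / var(clean - denoised) )."""
    return 10.0 * np.log10(np.var(clean) / np.var(clean - denoised))
```

For instance, a residual whose variance is 1/100 of the clean image's variance scores 20 dB.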

As the performance of denoising using kernel PCA appears to improve with the number of PCs, we also experimented with a larger data set so that even more PCs can be used. Here, we followed the same experimental setup except that 300 (instead of 60) examples were randomly chosen from


Figure 9: SNR results of the proposed method when varying each of $\nu$, the width of the gaussian kernel ($s$), and the neighborhood size for MDS ($n$). (Top) Gaussian noise with different $\sigma^2$'s. (a) Varying $\nu$. (b) Varying $s$ as a factor of $s_0$ in equation 4.2. (c) Varying $n$. (Bottom) Salt-and-pepper noise with different $p$'s. (d) Varying $\nu$. (e) Varying $s$ as a factor of $s_0$ in equation 4.2. (f) Varying $n$.

each digit to form the training set. Figure 8 shows the SNR values for the various methods. On this larger data set, denoising using kernel PCA does perform better than the others when a suitable number of PCs is chosen. This demonstrates that the proposed denoising procedure is comparatively more effective on small training sets.

In order to investigate the robustness of the proposed method, we also performed experiments using the 60-example training set for a wide range of $\nu$, the width of the gaussian kernel ($s$), and the neighborhood size for MDS ($n$). Results are reported in Figure 9. The proposed method shows robust performance around the range of parameters used.

In the previous experiments, denoising was applied to each digit separately, which means one must know what the digit is before applying denoising. To investigate how well the proposed method denoises when the true digit is unknown, we follow the same setup but combine all the digits (with a total of 600 digits) for training. Results are shown in Figures 10 and 11. From visual inspection, one can see that its performance is slightly inferior to that of the separate-digit case. Again, SVDD is still the best, though kernel PCA using the preimage method in Kwok and Tsang (2004) sometimes achieves better results as more PCs are included.

Figure 10: SNRs of the denoised USPS images. Here, the 10 digits are combined during training. (Top) Gaussian noise with variance $\sigma^2$. (a) $\sigma^2 = 0.6$. (b) $\sigma^2 = 0.5$. (c) $\sigma^2 = 0.4$. (Bottom) Salt-and-pepper noise with noise level $p$. (d) $p = 0.6$. (e) $p = 0.5$. (f) $p = 0.4$.

Figure 11: Sample denoised USPS images. The 10 digits are combined during training. Recall that wavelet denoising does not use the training set, and so its denoising results are the same as those in Figure 7 and are not shown here. (Top) Gaussian noise (with $\sigma^2 = 0.6$). (a) SVDD. (b) KPCA (Mika et al., 1999). (c) KPCA (Kwok & Tsang, 2004). (d) PCA. (Bottom) Salt-and-pepper noise (with $p = 0.6$). (e) SVDD. (f) KPCA (Mika et al., 1999). (g) KPCA (Kwok & Tsang, 2004). (h) PCA.

5 Conclusion

We have addressed the problem of pattern denoising based on the SVDD. Along with a brief review of the SVDD, we presented a new denoising method that uses the SVDD, the geodesic projection of the noisy point to the surface of the SVDD ball in the feature space, and a method for finding the preimage of the denoised feature vectors. Work yet to be done includes more extensive comparative studies, which will reveal the strengths and weaknesses of the proposed method, and refinement of the method for better denoising.

References

Ben-Hur,A.,Horn D.,Siegelmann H.T.,&Vapnik V.(2001).Support vector cluster-ing.Journal of Machine Learning Research,2,125–137.

Burges,C.J.C.(1999).Geometry and invariance in kernel based methods.In A.J.

Smola,P.L.Bartlett,B.Sch¨olkopf,&D.Schuurmons(Eds.),Advances in kernel methods—support vector learning.Cambridge,MA:MIT Press.

Campbell,C.,&Bennett,K.P.(2001).A linear programming approach to nov-elty detection.In T.K.Leen,T.G.Dietterich,&V.Tresp(Eds.),Advances in neural information processing systems,13,(pp.395–401).Cambridge,MA:MIT Press.

Cox,T.F.,&Cox,M.A.A.(2001).Multidimensional scaling(2nd ed.).London:Chap-man&Hall.

Crammer,K.,&Chechik,G.(2004).A needle in a haystack:Local one-class optimiza-tion.In Proceedings of the Twenty-First International Conference on Machine Learning.

Banff,Alberta,Canada.

Cristianini,N.,&Shawe-Taylor,J.(2000).An introduction to support vector machines and other kernel-based learning methods.Cambridge:Cambridge University Press. Donoho,D.L.(1995).De-noising by soft-thresholding.IEEE Transactions on Informa-tion Theory,41(3),613–627.

Donoho,D.L.,&Johnstone,I.M.(1995).Adapting to unknown smoothness via wavelet shrinkage.Journal of the American Statistical Association,90(432),1200–1224.

Gower,J.C.(1968).Adding a point to vector diagrams in multivariate analysis.

Biometrika,55(3),582–585.

Kwok,J.T.,&Tsang,I.W.(2004).The pre-image problem in kernel methods.IEEE Transactions on Neural Networks,15(6),1517–1525.

Lanckriet,G.R.G.,El Ghaoui,L.,&Jordan,M.I.(2003).Robust novelty detection with single-class MPM.In S.Becker,S.Thron,&K.Obermayer(Eds.),Advances in neural information processing systems,15,(pp.905–912).Cambridge,MA:MIT Press.

Laskov,P.,Sch¨a fer,C.,&Kotenko,I.(2004).Intrusion detection in unlabeled data with quarter-sphere support vector machines.In Proceeding of Detection of Intrusions and Malware and Vulnerability Assessment(DIMV A)2004(pp.71–82).Dortmund, Germany.

1938J.Park et al. Mallat,S.G.(1999).A wavelet tour of signal processing(2nd ed.).San Diego,CA: Academic Press.

Mika, S., Schölkopf, B., Smola, A. J., Müller, K.-R., Scholz, M., & Rätsch, G. (1999). Kernel PCA and de-noising in feature space. In M. S. Kearns, S. Solla, & D. Cohn (Eds.), Advances in neural information processing systems, 11 (pp. 536–542). Cambridge, MA: MIT Press.

Moon, T. K., & Stirling, W. C. (2000). Mathematical methods and algorithms for signal processing. Upper Saddle River, NJ: Prentice Hall.

Müller, K.-R., Mika, S., Rätsch, G., Tsuda, K., & Schölkopf, B. (2001). An introduction to kernel-based learning algorithms. IEEE Transactions on Neural Networks, 12(2), 181–201.

Pekalska, E., Tax, D. M. J., & Duin, R. P. W. (2003). One-class LP classifiers for dissimilarity representations. In S. Becker, S. Thrun, & K. Obermayer (Eds.), Advances in neural information processing systems, 15 (pp. 761–768). Cambridge, MA: MIT Press.

Rätsch, G., Mika, S., Schölkopf, B., & Müller, K.-R. (2002). Constructing boosting algorithms from SVMs: An application to one-class classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(9), 1184–1199.

Schölkopf, B., Mika, S., Burges, C., Knirsch, P., Müller, K.-R., Rätsch, G., & Smola, A. J. (1999). Input space vs. feature space in kernel-based methods. IEEE Transactions on Neural Networks, 10(5), 1000–1017.

Schölkopf, B., Platt, J. C., Shawe-Taylor, J., Smola, A. J., & Williamson, R. C. (2001). Estimating the support of a high-dimensional distribution. Neural Computation, 13(7), 1443–1471.

Schölkopf, B., Platt, J. C., & Smola, A. J. (2000). Kernel method for percentile feature extraction (Tech. Rep. MSR-TR-2000-22). Redmond, WA: Microsoft Research.

Schölkopf, B., & Smola, A. J. (2002). Learning with kernels. Cambridge, MA: MIT Press.

Tax, D. (2001). One-class classification. Unpublished doctoral dissertation, Delft University of Technology.

Tax, D., & Duin, R. (1999). Support vector data description. Pattern Recognition Letters, 20(11–13), 1191–1199.

Williams, C. K. I. (2002). On a connection between kernel PCA and metric multidimensional scaling. Machine Learning, 46(1–3), 11–19.

Received August 31, 2005; accepted October 30, 2006.
