

A Survey of First-Order Probabilistic Models

Rodrigo de Salvo Braz, Eyal Amir, and Dan Roth

Department of Computer Science

University of Illinois at Urbana-Champaign

Urbana, IL 61801

Summary. There has been a long-standing division in Artificial Intelligence between logical and probabilistic reasoning approaches. While probabilistic models can deal well with inherent uncertainty in many real-world domains, they operate on a mostly propositional level. Logic systems, on the other hand, can deal with much richer representations, especially first-order ones, but treat uncertainty only in limited ways. Therefore, an integration of these types of inference is highly desirable, and many approaches have been proposed, especially from the 1990s on. These solutions come from many different subfields and vary greatly in language, features and (when available at all) inference algorithms. Therefore their relation to each other, as well as their semantics, is not always clear. In this survey, we present the main aspects of the solutions proposed and group them according to language, semantics and inference algorithm. In doing so, we draw relations between them and discuss particularly important choices and tradeoffs.

For decades after the field of Artificial Intelligence (AI) was established, its most prevalent form of representation and inference was logical, or at least based on symbolic representations that were in a deeper sense equivalent to a fragment of logic. While highly expressive, this type of model lacked a sophisticated treatment of degrees of uncertainty, which permeates real-world domains, especially the ones usually associated with intelligence, such as language, perception and common sense reasoning.

In time, probabilistic models became an important part of the field, incorporating probability theory into reasoning and learning AI models. Since the 1980s the field has seen a surge of successful solutions involving large amounts of data processed from a probabilistic point of view, applied especially to Natural Language Processing and Pattern Recognition.1

Currently at the Computer Science Division of the University of California, Berkeley.
1 Strictly speaking, this tendency has not been only probabilistic, including machine learning methods such as neural networks that did not claim to be modeling probabilities. However, a link to probabilities can usually be found and the methods are used in similar ways.

D.E. Holmes and L.C. Jain (Eds.): Innovations in Bayesian Networks, SCI 156, pp. 289–317, 2008.
© Springer-Verlag Berlin Heidelberg 2008


This success, however, came with a price. Typically, probabilistic models are less expressive and flexible than logical or symbolic models. Usually, they involve propositional, rather than first-order, representations. When required, more expressive, higher-level representations are obtained by ad hoc manipulations of lower-level, propositional systems.

Starting in the 1970s but having greatly increased from the 1990s on, a line of research sought to integrate those two important modes of reasoning. In this chapter we give a survey of this research, and try to show some general lines separating different approaches.

We have roughly divided this research in different stages. The 1970s and 1980s saw great interest in expert systems [1,2]. As these systems were applied to real-world domains, coping with uncertainty became more desirable, giving rise to the certainty factors approach, which uses rules with attached numbers (representing degrees of certainty) that get propagated to conclusions during inference.

Certainty factors systems did not have clear semantics, and often produced surprising and nonintuitive results [3]. The search for clearer semantics for rules with varying certainty gave rise, among other things, to approaches such as Bayesian Networks. These however were essentially propositional, and thus had much less expressivity than logic systems.

The search for clear semantics of probabilities in logic systems resulted in works such as Nilsson [4], Bacchus [5] and Halpern [6], which laid out the basic theoretic principles supporting probabilistic logic. These works, however, did not include efficient inference algorithms.

Works aiming at efficient inference algorithms for first-order probabilistic inference (FOPI) can be divided in two groups, which Pearl [3] calls extensional and intensional systems. In the first one, statements in the language are more procedural in nature, standing for licenses for propagating truth values that have been generalized from true or false to a gray scale of varying degrees of certainty. In the second group, statements place restrictions on a probability distribution on possible worlds. They do not directly correspond to computing operations, nor can they typically be taken into account without regard to other rules (that is, inference is not completely modular). Efficient algorithms have to be devised for these languages that preserve their semantics while doing better than considering the entire model at every step.

Among intensional models, there are further divisions regarding the type of algorithm proposed. One group proposes inference rules similar to the ones used in first-order logic inference (for example, modus ponens). A second one computes, in more or less efficient manners, the possible derivations of a query given a model. A third one uses sampling to answer queries about a model. A fourth and more prevalent group constructs a (propositional) graphical model (Bayesian or Markov networks, for example) that answers queries, and uses general graphical model inference algorithms for solving them. Finally, a fifth one proposes lifted algorithms that directly operate on first-order representations in order to derive answers to queries.

We now present these stages in more detail.

12.1 Expert Systems and Certainty Factors

Expert systems are based on rules meant to be applied to existing facts, producing new facts as conclusions [1]. Typically, the context is a deterministic one in which facts and rules are assumed to be certain. Uncertainties from real-world applications are dealt with during the modeling stage where necessary (and often heavy-handed) simplifications are performed.

Certainty factors were introduced for the purpose of allowing uncertain rules and facts, making for more direct and accurate modeling. A rule (A ← B) : c1, with c1 ∈ [0,1], indicates that we can conclude A with a degree of certainty of c1 × c2, if B is known to be true with a degree of certainty c2 ∈ [0,1]. Given a collection of rules and facts, inference is performed by propagating certainties in this fashion. There are also combination rules for the cases when more than one rule provides certainty factors for the same literal.
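To make the propagation concrete, here is a minimal sketch of certainty-factor chaining (invented facts and rules; the parallel-combination formula c1 + c2 − c1·c2 is the MYCIN-style choice for two positive factors, shown here as one possible combination rule rather than the only one).

```python
# Minimal sketch of certainty-factor propagation (hypothetical example).
# A rule (A <- B): c1 yields A with certainty c1 * c2 when B holds with certainty c2.

facts = {"fever": 0.9, "rash": 0.6}          # observed facts with certainties
rules = [                                     # (conclusion, premise, rule certainty)
    ("measles", "fever", 0.5),
    ("measles", "rash", 0.8),
]

def combine(c1, c2):
    """MYCIN-style combination of two positive certainty factors for the same literal."""
    return c1 + c2 - c1 * c2

conclusions = {}
for head, body, rule_cf in rules:
    derived = rule_cf * facts.get(body, 0.0)                      # propagate through the rule
    if head in conclusions:
        conclusions[head] = combine(conclusions[head], derived)   # combine parallel evidence
    else:
        conclusions[head] = derived

print(conclusions)   # {'measles': 0.45 + 0.48 - 0.45*0.48 = 0.714}
```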

A paradigmatic application of certainty factors is the system MYCIN [7], an expert system dedicated to diagnosing diseases based on observed symptoms. Clark & McCabe [8] describe how to use Prolog with predicates containing an extra argument representing its certainty and being propagated accordingly. Shapiro [9] describes a Prolog interpreter that does the same but in a way implicit in the interpreter and language, rather than as an extra argument.

One can see that certainty factors have a probabilistic flavor to them, but formally they are not taken to be probabilistic. This is for good reason: should we interpret them as probabilities, results would be inconsistent with probability theory. Heckerman [10] and Lucas [11] discuss situations in which certainty factor computations can and cannot be correctly interpreted probabilistically. One reason they cannot is the incorrect treatment of bidirectional inference: two certainty factor rules (A ← B) : c1 and B : c2 imply nothing about inference from A to B, while P(A|B) and P(B) do place constraints on P(B|A). These problems are further discussed in Pearl [3].

12.2 Probabilistic Logic Semantics

The semantic limitations of certainty factors are among the motivations for defining precise semantics for probabilistic logics, but such investigations date from at least as far back as Carnap [12].

One of the most influential AI works in this regard is Nilsson [4] (a similar approach is given by Hailperin [13]). Nilsson establishes a systematic way of determining the probabilities of logic sentences in a query set, given the probabilities of logical sentences in an evidence set. To be more precise, the method determines intervals of probabilities for the query sentences, since in principle the evidence set may be consistent with an entire range of point probabilities for them. For example, knowing that A is true with probability 0.2 and B with probability 0.6 means that A ∧ B is true with probability in [0, 0.2], depending on whether A and B are mutually exclusive, or A → B, or anything in between.


Given a set of sentences L, Nilsson considers the equivalence classes of possible worlds that assign the same truth values to the sentences in L (that is, as far as L is concerned, all possible worlds in the same class are the same). Formally, Nilsson's system is based on the following linear problem:

Π = V P
0 ≤ Π_j ≤ 1
0 ≤ P_i ≤ 1
Σ_i P_i = 1

where Π is the vector of probabilities of sentences in both query and evidence sets, P the vector of probabilities of possible-world equivalence classes, and V is a matrix with V_ij = 1 if sentence j is true in possible-world set i, and 0 otherwise. The probabilities of sentences in the knowledge base are incorporated as constraints in this system as well, and linear programming techniques can be used to determine the probability of novel sentences. However, as Nilsson points out, the problem becomes intractable even with a modest number of sentences, since all possible-world equivalence classes need to be enumerated and this is an intractable problem. Therefore this framework cannot be directly used in practice.
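As an illustration of this linear system, the following sketch (assuming scipy is available; it is not part of the original paper) recovers the [0, 0.2] bounds on P(A ∧ B) from P(A) = 0.2 and P(B) = 0.6 by enumerating the four possible-world classes over A and B and minimizing/maximizing P(A ∧ B) with linear programming.

```python
# Sketch: Nilsson-style probability bounds via linear programming.
# Possible-world classes over sentences A and B: (A,B) in {TT, TF, FT, FF}.
import numpy as np
from scipy.optimize import linprog

worlds = [(1, 1), (1, 0), (0, 1), (0, 0)]          # truth values of (A, B) per class
A_eq = np.array([
    [a for a, b in worlds],                        # sum of P over worlds where A holds = 0.2
    [b for a, b in worlds],                        # sum of P over worlds where B holds = 0.6
    [1, 1, 1, 1],                                  # probabilities sum to 1
])
b_eq = np.array([0.2, 0.6, 1.0])
c = np.array([a * b for a, b in worlds])           # objective: P(A and B)

lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)    # minimize P(A and B)
hi = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)   # maximize P(A and B)
print(lo.fun, -hi.fun)   # approximately 0.0 and 0.2
```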

Placing the probabilities on the possible worlds, as does Nilsson, makes it easy to express subjective probabilities such as "Tweety flies with probability 0.9" (that is, the sum of probabilities of all possible worlds in which Tweety flies is 0.9). However, probabilistic knowledge can also express statistical facts about the domain such as "90% of birds fly" (which says that, in each possible world, 90% of birds fly). Bacchus [5] provides an elaborate probabilistic logic semantics that includes both types of probabilistic knowledge, making it possible to use both statements above, as well as statements mixing them, such as "There is a probability of 0.8 that 90% of birds fly." He also discusses the interplay between the two types, namely the question of when it is correct to use the fact that "90% of birds fly" in order to assume that "a randomly chosen bird flies with probability 0.9," a topic that has both formal and philosophical aspects. Halpern [6] elaborates on the axiomatization of Bacchus, taking probabilities to be real numbers (Bacchus did not), and is often cited as a reference for this semantics with two types of probabilities. In subsequent work, the subjective type of probability has been much more developed and used, and is also the type involved in propositional graphical models.

Fagin, Halpern, and Megiddo [14] present a logic to reason about probabilities, including their addition and multiplication by scalars. Other works discussing the semantics of probabilities on first-order structures are [15,16,17].

12.3 Extensional Approaches

Somewhat parallel to the works on the semantics of probabilistic logic, a different line of research proposed logic reasoning systems incorporating uncertainty in the explicit form of probabilities (as opposed to certainty factors). These systems often stem from the fields of logic programming and deductive databases, and fit into the category described by [3] as extensional systems, that is, systems in which rules work as "procedural licenses" for a computation step instead of a constraint on possible probability distributions. Most of these systems operate on a collection of rules or clauses that propagate generalized truth values (typically, a value or interval in [0,1]).

Kifer and Li [18] provide a probabilistic interpretation and a fixpoint semantics to Shapiro [9]. Wüthrich [19] elaborates on their work, taking into account partial dependencies between clauses. For example, if each of the atoms a, b and c has a prior probability of 0.5 and we have two rules p ← a ∧ b and p ← b ∧ c, Kifer and Li will assume the rules independent and assign a probability 0.25 + 0.25 − 0.25 × 0.25 = 0.4375 to p. Wüthrich's system, however, takes into account the fact that b is shared by the clauses and computes instead 0.25 + 0.25 − 0.5³ = 0.375 (that is, it avoids double counting of the case where the two rules fire at the same time, which occurs only when the three atoms are true at once).
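The two numbers can be checked by brute force; the sketch below (an illustration only, not either system's algorithm) enumerates the eight equally likely truth assignments to a, b and c and computes the exact probability that at least one rule body holds.

```python
# Brute-force check of the example: a, b, c independent, each true with probability 0.5.
# Rules p <- a ∧ b and p <- b ∧ c; exact P(p) = P((a ∧ b) ∨ (b ∧ c)).
from itertools import product

p_true = 0.0
for a, b, c in product([True, False], repeat=3):
    weight = 0.5 ** 3                      # each assignment is equally likely
    if (a and b) or (b and c):             # some rule body fires
        p_true += weight

print(p_true)                              # 0.375, the value computed by Wüthrich's method
# Treating the two rules as independent instead gives 0.25 + 0.25 - 0.25 * 0.25 = 0.4375.
```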

One of the most influential works within the extensional approach is Ng and Subrahmanian [20]. Here, a logic programming system uses generalized truth values in the form of intervals of probabilities. They define probabilistic logic programs as sets of p-clauses of the form

A : μ ← F_1 : μ_1 ∧ ... ∧ F_n : μ_n,

where A is an atom, F_1, ..., F_n are basic formulas (conjunctions or disjunctions) and μ, μ_1, ..., μ_n are probability intervals. A clause states that if the probability of each formula F_i is in μ_i, then the probability of A is in μ. For example, the clause

path(X,Y) : [0.8, 0.95] ← a(X,Z) : [1, 1] ∧ path(Z,Y) : [0.85, 1]

states that, if a(X,Z) is certain (probability in the interval [1,1] and therefore 1) and path(Z,Y) has probability in [0.85, 1], then path(X,Y) has probability in [0.8, 0.95]. Probabilities of basic formulas F_i are determined from the probability intervals of their conjuncts (or disjuncts) by taking into account the possible correlations between them (similarly to what Nilsson does). The authors present a fixpoint semantics where clauses are repeatedly applied and probability intervals successively narrowed up to convergence. They also develop a model theory determining what models (sets of distributions on possible worlds) satisfy a probabilistic logic program, and a refutation procedure for querying a program.

Lakshmanan and Sadri [21] propose a system similar to Ng and Subrahmanian, while keeping track of both the probability of each atom as well as of its negation. Additionally, it uses configurable independence assumptions for different clauses, allowing the user to declare whether two atoms are independent, mutually exclusive, or even the lack of an assumption (as in Nilsson). Lakshmanan [22] separates the qualitative and quantitative aspects of probabilistic logic. Dependencies between atoms are declared in terms of the boolean truth values of a set of support atoms. Only later is a distribution assigned to the support atoms, consequently defining distributions on the remaining atoms as well. The main advantage of the approach is the possibility of investigating different total distributions, based on distributions on the support set, without having to recalculate the relationship between atoms and support set. The algorithm works in ways similar to Ngo and Haddawy [23] and Lakshmanan and Sadri [21], but propagates support set conditions rather than probabilities. Support sets are also a concept very similar to the hypotheses used in Probabilistic Abduction by Poole [24] (see next section).

12.4 Intensional Approaches

We now discuss intensional approaches to probabilistic logic languages, where statements (often in the form of rules) are interpreted as restrictions on a globally defined probability distribution. This probability distribution is over all possible worlds or, in other words, on assignments to the set of all possible random variables in the language. Statements typically pose constraints in the form of conditional probabilities, and often also as conditional independence relations. (As mentioned in Sect. 12.2, another possibility would be statistical constraints, but this has not been explored in any works to our knowledge.)

The algorithms in intensional approaches, when available, are arguably more complex than extensional approaches, since their steps do not directly correspond to the application of rules in the language and need to be consistent with the global distribution while being as local as possible (for efficiency reasons).

We cover five different types of intensional approaches: deduction rules, exhaustive computation of derivations, sampling, Knowledge Based Model Construction (KBMC) and lifted inference.

12.4.1 Deduction Rules

Classical logic deduction systems often work by receiving a model specified in a particular language and using deduction rules to derive new statements (guaranteed to be true) from subsets of previous statements. Some work has been devoted to devising similar systems when the language is that of probabilistic logic.

This method is particularly challenging in probabilistic systems because probabilistic inference is not as modular as classical logical inference. For example, while the logical knowledge of A → B allows us to deduce B given that A ∧ φ is true for any formula φ, knowing P(B|A) in itself does not tell us anything about P(B|A ∧ φ). In principle, one needs to consider all available knowledge when establishing the conditional probability of B. Classical logic reasoning shows a modularity that is harder to achieve in a probabilistic setting.

One way of making probabilistic inference more modular is to use knowledge about conditional independencies between random variables. If we know that B is independent of any other random variable given A, then we know that P(B|A ∧ φ) is equal to P(B|A) for any φ. This has been the approach of graphical models such as Bayesian and Markov networks [3], where independencies are represented by the structure of a graph over the set of random variables.

The computation steps of specific inference algorithms for graphical models (such as Variable Elimination [25]) could be cast as deduction rules, much like in classical logic. However this is not traditionally done, mostly because inference rules are typically described in a logic-like language and graphical models are not. When dealing with a first-order probabilistic logic language, however, this approach becomes more natural.

Lukasiewicz [26] uses inference rules for solving trees of probabilistic conditional constraints over basic events. These trees are similar to Bayesian networks, with each node being a random variable and each edge being labeled by a conditional probability table. However, these trees are not meant to encode independence assumptions. Besides, conditional probabilities can also be specified in intervals.

Frisch and Haddawy [27] present a set of inference rules for probabilistic propositional logic with interval probabilities. They characterize it as an anytime system since inference rules will increasingly narrow those intervals. They also provide more modular inference by allowing statements on conditional independencies of random variables, which are used by certain rules to derive statements based on local information.

Koller and Halpern [28] investigate the use of independence information for FOPI based on inference rules. They use this notion to discuss the issue of substitution in probabilistic inference. While substitution is fundamental to classical logic inference, it is not sound in general in a probabilistic context. For example, inferring P(q(A)) = 1/3 given ∀X P(q(X)) = 1/3 is not sound. Consider three possible worlds w1, w2, w3 containing the three objects o1, o2, o3 each, where q(o_i) is 1 in w_i and 0 otherwise. If each possible world has a probability 1/3 of being the actual world, then ∀X P(q(X)) = 1/3 holds. However, if A refers to o_i in each w_i, then P(q(A)) = 1. While this problem can be solved by requiring constants to be rigid designators (that is, each of them refers to the same object in all worlds), the authors argue that this is too restrictive. Their solution is to use information on independence. They show that when the statements ∀X P(q(X)) = 1/3 and X = A are independent, one can derive P(q(A)) = 1/3. Finally, they discuss the topic of using statistical probabilities as a basis for subjective ones (the two types discussed by Bacchus [5] and Halpern [6]) based on independencies.

12.4.2 Exhaustive Computation of Derivations

Another type of intensional system is the one in which the available algorithms exhaustively compute the set of derivations or proofs for a query, in the same way proofs are found for queries in logic programming. However, while in logic programming it is often only necessary to find one proof for a certain query, in probabilistic models all proofs will typically influence the query's result, and therefore need to be computed.


Riezler [29] presents a probabilistic account of Constraint Logic Programs (CLPs) [30]. In regular logic programming, the only constraints over logical variables are equational constraints coming from unification. CLPs generalize this by allowing other constraints to be stated over those variables. These constraints are managed by special-purpose constraint solvers as the derivation proceeds, and failure in satisfying a constraint determines failure of the derivation. Probabilistic Constraint Logic Programs (PCLPs) are a stochastic generalization of CLPs, where clauses are annotated with a probability and chosen for the expansion of a literal according to that probability, among the available clauses with matching heads. The probability of a derivation is determined by the product of probabilities associated to the stochastic choices. In fact, PCLPs are a generalization of Stochastic Context-Free Grammars (SCFGs) [31], the difference between them being that PCLP symbols have arguments in the form of logical variables with associated constraints while grammar symbols do not. For this reason, PCLP derivations can fail while SCFG derivations will always succeed. This presents a complication for PCLP algorithms because the probability has to be normalized with respect to the sum of successful derivations only. It also makes the use of efficient dynamic programming techniques such as the inside-outside algorithm [32] not adequate for PCLPs, forcing us to compute all possible derivations of a query. Riezler focuses on presenting an algorithm for learning the parameters of a PCLP from incomplete data, in what is a generalization of the Baum-Welch algorithm for HMMs [33].

Stochastic Logic Programs [34,35] are very similar to PCLPs, restricting themselves to regular logic programming (e.g., Prolog). This line of work is more focused on the development of an actual system on top of a Prolog interpreter and to be used with Inductive Logic Programming techniques such as Progol [36]. Like Riezler, in [35] Cussens develops methods for learning parameters of SLPs using Improved Iterative Scaling [37] and the EM algorithm [38].

Lukasiewicz [39] presents a form of Probabilistic Logic Programming that complements Nilsson's [4] approach. Nilsson considers all equivalence classes of possible worlds with respect to the given knowledge and builds a linear program in order to assign probabilities to sentences. Lukasiewicz essentially does the same by using logic programming for determining both the equivalence classes and the linear program.

Baral et al. [40] use answer set logic programming to implement a powerful probabilistic logic language. Its distinguishing feature is the possibility of specifying observations and actions, with their corresponding implications with respect to causality, as studied by Pearl [41]. However, the implementation, using answer set Prolog, depends on determining all answer sets.

12.4.3 Sampling Approaches

Because building all derivations of a query given a program is very expensive, approximate solutions become an attractive alternative.

Sato [42] presents PRISM, a full-featured Prolog interpreter extended with probabilistic switches that can be used to encode probabilistic rules and facts.

These switches are special built-in predicates that randomly succeed or not, following a specific probability distribution. They can be placed in the body of a clause, which in consequence will succeed or not with the same distribution (when the rest of the body succeeds). Therefore, multiple executions of the program will yield different random results that can be used as samples. A query can then be answered by multiple executions which sample its possible outcomes. Sato also provides a way of learning the parameters of the switches by using a form of the EM algorithm [43].
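A rough Python analogue of this scheme (illustrative only; it is not PRISM syntax nor its implementation, and the rule and numbers are invented) treats a switch as a biased coin consulted inside a rule and estimates a query probability from many independent runs.

```python
# Sketch of PRISM-style inference by sampling (hypothetical rule, not PRISM syntax).
import random

def switch(p):
    """A probabilistic switch: succeeds with probability p."""
    return random.random() < p

def infected(contact_with_sick):
    # Rule: infected <- contact_with_sick, msw(transmission).
    return contact_with_sick and switch(0.3)

def sample_query(n=100_000):
    """Estimate P(infected | contact_with_sick) by repeated execution."""
    hits = sum(infected(True) for _ in range(n))
    return hits / n

print(sample_query())   # approximately 0.3
```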

BLOG [44] is a first-order language allowing the specification of a generative model on a first-order structure. It is similar in form to BUGS [45], a specification language for propositional generative models. The main distinction of BLOG is its open world assumption; it does not require that the number of objects in the world be set a priori, using instead a prior on this number and also keeping track of identities of objects with different names. BLOG computes queries by sampling over possible worlds.

12.4.4 Knowledge Based Model Construction

We now present the most prominent family of models in the field of FOPI models, Knowledge Based Model Construction (KBMC). These approaches work by generating a propositional graphical model from a first-order language specification that answers the query at hand. This construction is usually done in a way specific to the query, ruling out irrelevant portions of the graph so as to increase efficiency.

Many KBMC approaches use a first-order logic-like specification language, but some use different languages such as frame systems, parameterized fragments of Bayesian networks, and description logics. Some build Bayesian networks while others prefer Markov networks (and in one case, Dependency Networks [46]).

While KBMC approaches try to prune sections of underlying graphical models which are irrelevant to the current query, there is still potentially much wasted computation because they may replicate portions of the graph which require essentially identical computations. For example, a problem may involve many employees in a company, and the underlying graphical model will contain a distinct section with its own set of random variables for each of them (representing their properties), even though all these sections have essentially the same structure. Often the same computation will be repeated for each of those sections, while it is possible to perform it only once in a generalized form. Avoiding this waste is the object of Lifted First-Order Probabilistic Inference [47,48], discussed in Sect. 12.4.5.

The most commonly referenced KBMC approach is that of Breese [49,50], although Horsch and Poole [51] had presented a similar solution a year before. [49] defines a probabilistic logic programming language, with Horn clauses annotated by probabilistic dependencies between the clause's head and body. Once a query is presented, clauses are applied to it in order to determine the probabilistic dependencies relevant to it. These dependencies are then used to form a Bayesian network. Backward inference will generate the causal portion of the network relative to the query; forward inference creates the diagnostic part. The construction algorithm uses the evidence in order to decide when to stop expanding the network – there is no need to generate portions that are d-separated from the query by the evidence. In fact, this work covers not only Bayesian networks, but influence diagrams as well, including decision and utility value nodes.

There are many works similar in spirit to [49] and differing only in some details; for example, the already mentioned Horsch and Poole [51], which also uses mostly Horn clauses (it does allow for universal and existential quantifiers over the entire clause body though) as the first-order language. One distinction in this work, however, is the more explicit treatment of the issue of combination functions, used to combine distributions coming from distinct clauses with the same head. One example of a combination function is noisy-or [3], which assumes that the probability provided by a single clause is the probability of it making the consequent true regardless of the other clauses. Suppose we have clauses A ← B and A ← C in the knowledge base, the first one dictating a probability 0.8 for A when B is true and the second one dictating a probability 0.7 for A when C is true. Then the combination function noisy-or builds a Conditional Probability Table (CPT) with B and C as parents of A, with entries P(A|B,C) = {1 − 0.2 × 0.3, 1 − 1 × 0.3, 1 − 0.2 × 1, 1 − 1 × 1} = {0.94, 0.7, 0.8, 0} for {(B = ⊤, C = ⊤), (B = ⊥, C = ⊤), (B = ⊤, C = ⊥), (B = ⊥, C = ⊥)}, respectively.

In first-order models, noisy-or and other combination functions are especially useful when a random variable has a varying number of parents, which makes its CPT impossible to represent by a fixed-dimensions table. A clause p ← q(X), for example, determines that p depends on all instantiations of q(X), that is, all instantiations of q(X) are parents of p. However, the number of such instantiations depends on how many values X can take. Without knowing this number, the only way of having a general specification of p's CPT is to have a combination function on the instantiations of q(X). In fact, even when this number is known it may be convenient to represent the CPT with a combination function for compactness sake.
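The sketch below (reusing the 0.8 and 0.7 from the earlier A ← B, A ← C example) shows how noisy-or serves as such a combination function: one per-parent probability is stored, and any CPT entry, for any number of parents, is computed on demand from the parents that are currently true.

```python
# Sketch: noisy-or as a combination function for a varying number of parents.
# Each clause/parent i makes the child true with probability p[i], independently.

def noisy_or(parent_probs, parent_values):
    """P(child = true | parents), where parent_values[i] says whether parent i is true."""
    failure = 1.0
    for p, v in zip(parent_probs, parent_values):
        if v:
            failure *= (1.0 - p)           # parent i fails to cause the child
    return 1.0 - failure

# Two parents, as in the A <- B, A <- C example (0.8 and 0.7):
print(noisy_or([0.8, 0.7], [True, True]))    # 0.94
print(noisy_or([0.8, 0.7], [False, True]))   # 0.7
# The same function works unchanged for p <- q(X) with any number of instantiations of q(X):
print(noisy_or([0.5] * 10, [True] * 10))     # 1 - 0.5**10
```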

Charniak and Goldman [52] expand a deductive database and truth maintenance system (TMS) in order to define a language for constructing Bayesian networks. The Bayesian networks come from the data-dependency network maintained by the TMS system, which is annotated with probabilities. There is also a notion of combination functions. The authors choose not to expand logical languages, justifying this choice by arguing that logic and probability do not correspond perfectly, the first being based on implication while the second on conditioning.

Poole [24] defines Probabilistic Abduction, a probabilistic logic language aimed at performing abduction reasoning. Probabilities are defined only for a set of predicates, called hypotheses (which is reminiscent of the support set in [22]), while the clauses themselves are deterministic. When a problem has naturally dependent hypotheses, one can redefine them as regular predicates and invent a new hypothesis to explain that dependence. While deterministic clauses can seem too restrictive, one can always get the effect of probabilistic rules by using hypotheses as a condition of the rule (like switches in Sato's PRISM [42]). The language also assumes that the bodies of clauses with the same head are mutually exclusive, and again this is not as restrictive as it might seem since clauses with non-mutually exclusive bodies can be rewritten as a different set of clauses satisfying this. As in other works in this section, the actual computation of probabilities is based on the construction of a Bayesian network. In [53], Poole extends Probabilistic Abduction for decision theory, including both utility and decision variables, as well as negation as failure.

Glesner and Koller [54] present a Prolog-like language that allows the declaration of facts about a Bayesian network to be constructed by the inference process. The computing mechanisms of Prolog are used to define the CPTs as well, so they are not restricted to tables, but can be computed on the fly. This allows CPTs to be defined as decision trees, for example, which provides a means of doing automatic pruning of the resulting Bayesian network – if the evidence provides information enough to make a CPT decision at a certain tree node, the descendants of that node, along with parts of the network relevant to those descendants only, do not need to be considered or built. The authors focus on flexible dynamic Bayesian networks that do not necessarily have the same structure at every time slice.

Haddawy [55] presents a language and construction method very similar to [51,49]. However, he focuses on defining the semantics of the first-order probabilistic logic language directly, and independently of the Bayesian network construction, and proceeds to use it to prove the correctness of the construction method. Breese [49] had done something similar by defining the semantics of the knowledge base as an abstract Bayesian network which does not usually get built itself in the presence of evidence, and by showing that the Bayesian network actually built will give the same result as the abstract one.

Koller and Pfeffer [56] present an algorithm for learning the probabilities of noisy first-order rules used for KBMC. They use the EM algorithm applied to the Bayesian networks generated by the model, using incomplete data. This works in the same way as the regular Bayesian network parameter learning with EM, with the difference that many of the parameters in the generated networks are in fact instances of the same parameter in a first-order rule. Therefore, all updates on these parameters must be accumulated in the original parameter.

Jaeger [57] defines a language for specifying a Bayesian network whose nodes are the extensions of first-order predicates. In other words, each node is the assignment to the set of all atoms of a certain predicate. Needless to say, inference in such a network would be extremely inefficient since each node would have an extremely large number of values. However, it offers the advantage of making the semantics of the language very clear (it is just the usual propositional Bayesian network semantics – the extension of a predicate is just a propositional variable with a very large number of values). The author proposes, like other approaches here, to build a regular Bayesian network (with a random variable per ground atom) for the purpose of answering specific queries. He also presents a sophisticated scheme for combination functions, including the possibility of their nesting.

Koller et al. [58,59] define Probabilistic Relational Models (PRMs), a sharp departure from the logical-probabilistic models that had been proposed until then as solutions for FOPI models. Instead of adding probabilities to some logic-like language, the authors use the formalism of Frame Systems [60] as a starting point. The language of frames, similar also to relational databases, is less expressive than first-order logic, which is to the authors one of its main advantages since first-order logic inference is known to be intractable (which only gets worse when probabilities are added to the mix). By using a language that limits its expressivity to what is most needed in practical applications, one hopes to obtain more tractable inference, an argument commonly held in the Knowledge Representation community [61]. In fact, Pfeffer and Koller had already investigated adding probabilities to restricted languages in [62]. In that case, the language in question was that of description logics.

The language of Frame Systems consists of defining a set of objects described by attributes – binary predicates relating an object to a simple scalar value – or relations – binary predicates relating an object to another (or even itself). PRMs add probabilities to frame systems by establishing distributions on attributes conditioned on other attributes (in the same object, or a related object). In order to avoid declaring these dependencies for each object, this is done at a scheme level where classes, or template objects, stand for all instances of a class. This scheme describes the attributes of classes and the relations between them. Conditional probabilities are defined for attributes and can name the conditioning attributes via the relations needed to reach them.

As in the previous approaches, queries to PRMs are computed by generating an underlying Bayesian network. Given a collection of objects (a database skeleton) and the relationships between them, a Bayesian network is built with a random variable for each attribute in each object. The parents of these random variables in the network are the ones determined by the relations in the particular database, and the CPTs are filled with the values specified at the template level. An example of this process is shown in Fig. 12.1.
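A highly simplified sketch of this construction step follows (with an invented restaurant/chef schema, not the authors' actual system): given a skeleton of objects and fixed relation assignments, it creates one node per attribute per object and wires parents by following the relation named in the template-level dependency; the CPT attached to each node would simply be copied from the template.

```python
# Sketch: instantiating a ground Bayesian network from a PRM-like template (hypothetical schema).

# Template level: attribute 'rating' of a Restaurant depends on attribute 'training'
# of the Cook reached through the relation 'chef'.
template_parents = {("Restaurant", "rating"): [("chef", "training")]}

# Skeleton: objects, their classes, and known relation assignments.
objects = {"timpone": "Restaurant", "joe": "Cook", "chez_luc": "Restaurant", "marie": "Cook"}
relations = {("chef", "timpone"): "joe", ("chef", "chez_luc"): "marie"}

def ground_network():
    """Return a dict mapping each ground attribute node to its list of parent nodes."""
    network = {}
    for obj, cls in objects.items():
        for (tcls, attr), deps in template_parents.items():
            if cls != tcls:
                continue
            parents = []
            for relation, parent_attr in deps:
                related = relations[(relation, obj)]       # follow the (fixed) relation
                parents.append((related, parent_attr))
            network[(obj, attr)] = parents                 # CPT copied from the template
    return network

print(ground_network())
# {('timpone', 'rating'): [('joe', 'training')], ('chez_luc', 'rating'): [('marie', 'training')]}
```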

Note that the set of ancestors of attributes in the underlying network is determined by the relations from one object to another. One could imagine an attribute rating of an object representing a restaurant that depends on the attribute training of the object representing its chef (related to it by the relationship chef). In approaches following first-order representations, chef would be a binary predicate, and each of its instances a random variable. As a result, the ancestors of rating would be the attributes training of all objects potentially linked to the restaurant by the relationship chef, plus the random variables standing for possible pairs in the relationship chef itself, resulting in a large (and thus expensive) CPT. PRMs avoid this when they take data with a defined structure where the assignment to relations such as chef is known; in this case, the random variables in the relationship chef would not even be included in the network, and the attribute rating of each object would have a single ancestor. When relationships are not fixed in advance, we have structural uncertainty, which was addressed by the authors in [63]. These papers have presented PRM learning of both parameters and structure (that is, the learning of the scheme level).

Fig. 12.1. (a) A PRM scheme showing classes of objects (rectangles), probabilistic dependencies between their attributes (full arrows) and relationships (dashed arrows). (b) A database skeleton showing a collection of objects, their classes and relationships. (c) The corresponding generated Bayesian network.

PRMs make use of Bayesian networks, a directed graphical model that brings a notion of causality. In relational domains it is often the case that random variables depend on each other without a clear notion of causality. Take for example a network of people linked by friendship relationships, with the attribute smoker for each person. We might want to state the first-order causal relationship P(smoker(X) | friends(X,Y), smoker(Y)) in such a model, but it would create cycles in the underlying Bayesian network (between each pair of smoker attributes such as smoker(john) and smoker(mary)). For this reason, Relational Markov Networks (RMNs) [64] recast PRMs so they generate undirected graphical models (Markov networks) instead of Bayesian networks. In RMNs, dependencies are stated as first-order features that get instantiated into potential functions on cliques of random variables, without a notion of causality or conditional probabilities. The disadvantage of this, however, is that learning in undirected graphical models is harder than in directed ones, involving a full inference step at each expectation step of the EM algorithm.

Relational Dependency Networks (RDNs) [65] provide yet another alternative to this problem. They are the first-order version of Dependency Networks (DNs) (Heckerman, [46]), which use conditional probabilities but do not require acyclicity. Using directed conditional probabilities avoids the expensive learning of undirected models. However, DNs have the downside of conditional probabilities being no longer guaranteed consistent with the joint probability defined by their normalized product. Heckerman shows that, as the amount of training data increases, conditional probabilities in a DN will asymptotically converge to consistency. RDNs are sets of first-order conditional probabilities which are used to generate an underlying regular dependency network. These first-order conditional probabilities are typically learned from data by relational learners (Sect. 12.5). RDNs are implemented in Proximity, a well-developed, publicly available software package.

Kersting and De Raedt [66] introduce Bayesian Logic Programs. This work's motivation is to provide a language which is as syntactically and conceptually simple as possible while preserving the expressive power of works such as Ngo and Haddawy [23], Jaeger [57] and PRMs [58]. According to the authors, this is necessary so one understands the relationship between all these approaches, and also the fundamental aspects of FOPI models.

Fierens et al. [67] define Logical Bayesian Networks (LBNs). LBNs are very similar to Bayesian Logic Programs, with the difference of having both random variables and deterministic logical literals in their language. A logic programming inference process is run for the construction of the Bayesian network, during which logical literals are used, but since they are not random variables, they are not included in the Bayesian network. This addresses the same issue of fixed relationships discussed in the presentation of PRMs, that is, when a set of relationships is deterministically known, we can create random variable nodes in the Bayesian network with significantly fewer ancestors. In the BLP and LBN frameworks, this is exemplified by a rule such as:

rating(X) ← cook(X,Y), training(Y).

which has an associated probability, declaring that a restaurant X's rating depends on its cook Y's training. In Bayesian Logic Programs, the instantiations of cook(X,Y) are random variables (just like the instantiations of rating(X) and training(Y)). Therefore, since we do not know a priori which Y makes cook(timpone,Y) true, rating(timpone) depends on all instantiations of cook(timpone,Y) and training(Y) and has all of them as parents in the underlying Bayesian network. If in the domain at hand the information of cook is deterministic, then this would be wasteful. We could instead determine Y such that cook(timpone,Y), say Y = joe, and build the Bayesian network with only the relevant random variable training(joe) as parent of rating(timpone). This is precisely what LBNs do. In LBNs, one would define cook as a deterministic literal that would be reasoned about, but not included in the Bayesian network as a random variable. This in fact is even more powerful than the PRMs approach since it deals even with the situation where relationships are not directly given as data, but have to be reasoned about in a deterministic manner.

Santos Costa et al. [68] propose an elegant KBMC approach that smoothly leverages an already existing framework, Constraint Logic Programming (CLP). In regular logic programming, the only constraints over logical variables are equational constraints coming from unification. As explained in Sect. 12.4.2, CLP programs generalize this by allowing other constraints to be stated over those variables. These constraints are managed by special-purpose constraint solvers as the derivation proceeds, and failure in satisfying a constraint determines failure of the derivation. The authors leverage CLP by developing a constraint solver on probabilistic constraints expressed as CPTs, and simply plug it into an already existing CLP system. The resulting system can also use available logic programming mechanisms in the CPT specification, making it possible to calculate it dynamically, based on the context, rather than by fixed tables. The probabilistic constraint solver uses a Bayesian network internally in order to solve the posed constraints, so this system is also using an underlying propositional Bayesian network for answering queries. Santos Costa et al. indicate [69] as the closest approach to theirs, with the difference that the latter keeps hard constraints on Bayesian variables separate from probabilistic constraints. This allows hard constraints to be solved separately. It is also different in that it does not use conditional independencies (like Bayesian networks do), and therefore its inference is exponential in the number of random variables.

Markov Logic Networks (MLNs) [70] is a recent and rapidly evolving framework for probabilistic logic. Its main distinctions are that it is based on undirected models and has a very simple semantics while keeping the expressive power of first-order logic. The downside to this is that its inference can become quite slow if complex constructs are present.

Fig. 12.2. A ground Markov network generated from a Markov Logic Network for objects Anna (A) and Bob (B) (example presented in [70]). The network is built from two weighted formulas: ∀X Smokes(X) ⇒ Cancer(X) ("smoking causes cancer", weight 1.5) and ∀X ∀Y Friends(X,Y) ⇒ (Smokes(X) ⇔ Smokes(Y)) ("if two people are friends, either both smoke or neither does", weight 1.1).

An MLN consists of a set of weighted first-order formulas and a universe of objects. Its semantics is simply that of a Markov network whose features are the instantiations of all these formulas given the universe of objects. The potential of a feature is defined as the exponential of its weight in case it is true. Figure 12.2 shows an example.

Formulas can be arbitrary first-order logic formulas, which are converted to clausal form for inference. Converting existentially quantified formulas to clausal form usually involves Skolemization, which requires uninterpreted functions in the language. Since MLNs do not include such functions, existentially quantified formulas are replaced by the disjunction of their groundings (this is possible because the domain is finite). The great expressivity of MLNs allows them to easily subsume other proposed FOPI languages. They are also a generalization of first-order logic, to which they reduce when weights are infinite.
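A small sketch of this semantics follows, using the two formulas of Fig. 12.2 over a two-person domain (the weights 1.5 and 1.1 are taken from that example; the particular world is made up): the unnormalized weight of a possible world is the exponential of the summed weights of the ground formula instances it satisfies, and normalizing over all worlds yields the Markov network distribution.

```python
# Sketch: unnormalized weight of a possible world under a small MLN
# (two formulas as in Fig. 12.2; weights 1.5 and 1.1 from that example).
import math
from itertools import product

PEOPLE = ["anna", "bob"]
W_SMOKING_CANCER = 1.5   # forall X: Smokes(X) => Cancer(X)
W_FRIENDS_SMOKE = 1.1    # forall X, Y: Friends(X,Y) => (Smokes(X) <=> Smokes(Y))

def world_weight(smokes, cancer, friends):
    """exp(sum of the weights of the satisfied ground formula instances)."""
    total = 0.0
    for x in PEOPLE:
        if (not smokes[x]) or cancer[x]:                        # Smokes(x) => Cancer(x)
            total += W_SMOKING_CANCER
    for x, y in product(PEOPLE, repeat=2):
        if (not friends[(x, y)]) or (smokes[x] == smokes[y]):   # friends smoke together or not at all
            total += W_FRIENDS_SMOKE
    return math.exp(total)

# Weight of one particular possible world; the normalized probability would divide
# this by the sum of weights over all possible worlds.
w = world_weight(
    smokes={"anna": True, "bob": False},
    cancer={"anna": True, "bob": False},
    friends={("anna", "bob"): True, ("bob", "anna"): True,
             ("anna", "anna"): False, ("bob", "bob"): False},
)
print(w)   # exp(1.5 + 1.5 + 1.1 + 1.1): the two Friends(anna,bob)-type groundings are violated
```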

Learning algorithms for MLNs have been presented from the beginning. Because learning in undirected models is hard, MLNs use the notion of pseudo-likelihood [71], an approximate but efficient method. When data is incomplete, EM is used.

MLNs are a powerful language and framework accompanied by well-supported software (called Alchemy) and have been applied to real domains. The drawback of their expressivity is potentially very large underlying networks (for example, when existential quantification is used).

Laskey [72] presents multi-entity Bayesian networks (MEBNs), a first-order version of Bayesian networks, which rely on generalizing typical Bayesian network representations rather than a logic-like language. A MEBN is a collection of Bayesian network fragments involving parameterized random variables. As in the other approaches, the semantics of the model is the Bayesian network resulting from instantiating these fragments. Once they are instantiated, they are put together according to the random variables they share. A MEBN is shown in Fig. 12.3.

Fig. 12.3. An example of a MEBN, as shown in [72].

Laskey's language is indeed quite rich, allowing infinite models, function symbols and distributions on the parameters of random variables themselves. The work focuses on defining this language rather than on the actual implementation, which is based on instantiating a Bayesian network containing the parts relevant to the query at hand. It does not provide a detailed account of this process, which can be especially tricky in the case of infinite models.

12.4.5 Lifted Inference

One of the major difficulties in KBMC approaches is that they must propositionalize the model in order to perform inference. This does not preserve the rich first-order structure present in the original model; the propositionalized version does not indicate anymore that CPTs are instantiations of the same original one, or that random variables are instantiations of an original parameterized random variable. In other words, it creates a potentially large propositional model with a great amount of redundancy that cannot be readily exploited.

Recent research on lifted inference [73,47,48] has addressed this point. A lifted inference algorithm receives a first-order specification of a probabilistic model and performs inference directly on it, without propositionalization. This can potentially yield an enormous gain in efficiency.

For example, a possible model can be formed by parameterized factors (or parfactors) φ1(epidemic(D)) and φ2(sick(P,D)) and a set of typed objects flu, rubella, and john, mary etc. The model is equivalent to a propositional graphical model formed by all possible instantiations of parfactors by the given objects, which is the set of regular factors φ1(epidemic(flu)), φ1(epidemic(rubella)), ..., and φ2(sick(john,flu)), φ2(sick(john,rubella)), φ2(sick(mary,flu)), φ2(sick(mary,rubella)), etc.

What lifted inference does, instead of actually generating these instantiations, is to operate directly on the parfactors and obtain the same answer as the one obtained by instantiating and solving by a propositional algorithm. By operating directly on parfactors, the lifted algorithm can potentially be much more efficient, since the first-order structure is explicitly available to it. For example, suppose we want to compute the marginal of P(epidemic(flu)). Then we have to sum out all the other random variables in the model. While a regular KBMC algorithm would instantiate them and then sum them out, a lifted inference algorithm will directly sum out the parameterized epidemic(D), for D ≠ flu, and sick(P,D). The lifted elimination operation may not depend on the number of objects in the domain at all, greatly speeding up the process. The step in which an entire class of random variables is eliminated at once is possible because they all share the same structure, and this structure is explicitly available to the algorithm. Figure 12.4 presents a simplified diagram of a lifted inference operation.
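The source of the savings can be illustrated with a toy computation in the spirit of this example (a sketch only, with made-up potentials and a single parfactor connecting epidemic(D) and sick(P,D) so that there is something to sum out; it is not the FOVE algorithm described next): for a fixed assignment to the epidemic variables, summing out all the sick(P,D) variables reduces to raising a per-instance sum to the power of the number of people, which gives exactly the same value as brute-force grounding.

```python
# Sketch: eliminating sick(P, D) in a lifted vs. a grounded way (toy numbers, not FOVE itself).
from itertools import product

PEOPLE = ["p1", "p2", "p3"]
DISEASES = ["flu", "rubella"]

def phi(epidemic, sick):
    """Shared parfactor phi(epidemic(d), sick(p, d)); arbitrary toy potentials."""
    table = {(False, False): 0.9, (False, True): 0.2,
             (True, False): 0.4, (True, True): 0.8}
    return table[(epidemic, sick)]

def grounded_sum(epidemic):
    """Brute force: instantiate every sick(p, d) and sum over all 2^(|P|*|D|) assignments."""
    cells = list(product(PEOPLE, DISEASES))
    total = 0.0
    for assignment in product([False, True], repeat=len(cells)):
        prod = 1.0
        for (p, d), s in zip(cells, assignment):
            prod *= phi(epidemic[d], s)
        total += prod
    return total

def lifted_sum(epidemic):
    """Lifted: sum out one representative sick(p, d) per disease, then raise to |PEOPLE|."""
    total = 1.0
    for d in DISEASES:
        per_instance = phi(epidemic[d], False) + phi(epidemic[d], True)
        total *= per_instance ** len(PEOPLE)
    return total

e = {"flu": True, "rubella": False}
print(grounded_sum(e), lifted_sum(e))   # identical results; the lifted version never grounds sick(P, D)
```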

Poole [73] proposes a lifted algorithm that generalizes propositional Variable Elimination [25] but covers only some specific cases. de Salvo Braz et al. [47,48] present a broader algorithm, called First-Order Variable Elimination (FOVE). FOVE includes Poole's operation (which this work calls Inversion Elimination), a generalized version of it, called simply Inversion, and a second elimination operation called Counting Elimination. While Inversion does not depend on the domain size, Counting Elimination does, but still only exponentially less than propositionalization. The work also presents rigorous proofs of the correctness of these operations and shows how to solve the lifted version of the Most Probable Explanation (MPE) problem. While more general, FOVE still does not cover all possible cases, when it too must resort to propositionalization. When this happens, the propositionalization will be localized to the parfactors involved in the uncovered case.

Fig. 12.4. A diagram of several possible operations involving first-order and propositional probabilistic models. The figure uses the notation of factor graphs, which explicitly shows potential functions as squares connected to their arguments. Parameterized factors are shown as piled up squares, since they compactly stand for multiple factors. Lifted inference operates solely on the first-order representation and can be much faster than propositional inference, while producing the same results.

Lifted FOPI is a further step towards closing the gap between logic and probabilistic inference, bringing to the latter the type of inference that does not require binding of parameters (which would be the logical variables in atoms, in logical terms), often seen in the former. However, it has its own disadvantages. It is relatively more complicated to implement, and requires a normalizing pre-processing of the model (called shattering) that can be very expensive. Further methods are being developed to circumvent these difficulties.

12.5 Relational Learning

In this section we discuss some first-order models developed from a machine learning perspective.

Machine learning algorithms have traditionally been defined as classification of attribute-value vectors [74]. In many applications, it is more natural and convenient to represent data as graphs, where each vertex represents an object and each edge represents a relation between objects. Vertices can be labeled with attributes of their corresponding objects (unary predicates or binary predicates where the second argument is a simple value; this is similar to PRMs in Sect. 12.4.4), and edges can be labeled (the label can be interpreted as a binary predicate holding between the objects). This provides the typical data structure representations such as trees, lists and collections of objects in general. When learning from graphs, we usually want to form hypotheses that explain one or more of the attributes and (or) relations (the targets) of objects in terms of their neighbors. Machine learning algorithms which were developed to benefit from this type of representation have often been called relational. This is closely associated to probabilistic first-order models, since graph data can be interpreted as a set of ground literals using unary and binary predicates. Because the hypotheses explaining target attributes and relations apply to several objects, it is also convenient to represent the learned hypotheses as quantified (first-order) rules. And because most learners involve probabilities or at least some measure of uncertainty, probabilistic first-order rules provide a natural representation option. Figure 12.5 illustrates these concepts.

We now discuss three forms of relational learning: propositionalization (flattening), Inductive Logic Programming (ILP), and FOPI learning, which can be seen as a synthesis of the two.

12.5.1 Propositionalization

A possible approach to relational machine learning is that of using a relational structure for generating propositional attribute-value vectors for each of its objects. For this reason, the approach has been called propositionalization. Because it transforms graph-like data into vector-like data, it is also often called flattening.

Fig. 12.5. A fragment of a graph structure used as input for relational learning. The same information can be represented as a set of ground literals (right). The hypotheses learned to explain either relations or attributes can be represented as weighted first-order clauses over those literals (below).

Cumby & Roth [75] provide a language for transforming relational data into attribute-value vectors. Their concern is not forming a first-order hypothesis, however. They instead keep the attribute-value hypothesis and transform novel data to that representation in order to classify it with propositional learners such as Perceptron. For example, in the case of Fig. 12.5, a classifier seeking to learn the relation acqt would go through the instances of that predicate and generate suitable attribute-value vectors. The literal acqt(paul,larry) would generate an example with label acqt(X,Y) and features male(paul), male(larry), interest(paul,music), school(paul,fdr), interest(larry,basketball), school(larry,fdr) etc, as well as non-ground ones such as male(X), male(Y), school(X,fdr), school(Y,fdr), school(X,Z), school(Y,Z), interest(X,music) etc. The literal acqt(joe,lissa) would generate an example with label acqt(X,Y) and features male(joe), female(lissa), interest(joe,baseball), interest(joe,computers), school(joe,jfk), etc, as well as non-ground ones such as male(X), female(Y), school(X,jfk), school(Y,jfk), school(X,Z), school(Y,Z) etc. Note how this reveals abstractions – the examples above share the features school(X,Z) and school(Y,Z), which may be one reason for people being acquaintances in this domain. Should a target depend on specific objects (say, it is much more likely for people at the FDR school to be acquainted with each other) not completely abstracted features such as school(X,fdr) would be preferred by the classifier.
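The feature-generation step just described can be sketched roughly as follows (a simplification for illustration, not Cumby & Roth's actual feature description language): for a target literal such as acqt(paul, larry), emit the ground literals mentioning either argument together with abstracted versions in which the arguments are replaced by the placeholders X and Y, and attribute values by Z.

```python
# Sketch of propositionalization for a target literal acqt(a, b) (simplified; not the language of [75]).

literals = [
    ("male", "paul"), ("male", "larry"),
    ("interest", "paul", "music"), ("interest", "larry", "basketball"),
    ("school", "paul", "fdr"), ("school", "larry", "fdr"),
]

def features_for(a, b):
    """Ground features about a and b, plus abstracted versions with X/Y (and Z for values)."""
    feats = set()
    role = {a: "X", b: "Y"}
    for lit in literals:
        pred, subj, *rest = lit
        if subj not in role:
            continue
        feats.add(f"{pred}({subj}{',' + rest[0] if rest else ''})")         # ground feature
        feats.add(f"{pred}({role[subj]}{',' + rest[0] if rest else ''})")   # abstract the subject
        if rest:
            feats.add(f"{pred}({role[subj]},Z)")                            # abstract the value too
    return sorted(feats)

print(features_for("paul", "larry"))
# Includes school(paul,fdr), school(X,fdr), school(X,Z), school(Y,Z), male(X), male(Y), ...
```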
