
arXiv:cs/0207097v2 [cs.AI] 23 Dec 2002. Technical Report IDSIA-12-02, Version 2.0 (arXiv:cs.AI/0207097v2), pp. 1-43, December 23, 2002

Optimal Ordered Problem Solver

Jürgen Schmidhuber

juergen@idsia.ch - www.idsia.ch/~juergen

IDSIA, Galleria 2, 6928 Manno-Lugano, Switzerland

Abstract

In a quite pragmatic sense, OOPS is the fastest general way of solving one task after another, always optimally exploiting solutions to earlier tasks when possible. It can be used for increasingly hard problems of optimization or prediction. Suppose there is only one task and a bias in the form of a probability distribution P on programs for a universal computer. In the i-th phase (i = 1, 2, 3, ...) of asymptotically optimal nonincremental universal search (Levin, 1973, 1984) we test all programs p with runtime ≤ 2^i P(p) until the task is solved. Now suppose there is a sequence of tasks, e.g., the n-th task is to find a shorter path through a maze than the best found so far. To reduce the search time for new tasks, previous incremental extensions of universal search tried to modify P through experience with earlier tasks, but in a heuristic, non-general, and suboptimal way prone to overfitting. OOPS, however, does it right. Tested self-delimiting program prefixes (beginnings of code that may continue) are immediately executed while being generated. They grow by one instruction whenever they request this. The storage for the first found program computing a solution to the current task becomes non-writeable. Programs tested during the search for solutions to later tasks may copy non-writeable code into separate modifiable storage, to edit it and execute the modified result. Prefixes may also recompute the probability distribution on their suffixes in arbitrary computable ways. To solve the n-th task we sacrifice half the total search time for testing (via universal search) programs that have the most recent successful program as a prefix. The other half remains for testing fresh programs starting at the address right above the top non-writeable address. When we are searching for a universal solver for all tasks in the sequence we have to time-share the second half (but not the first!) among all tasks 1..n.

For realistic limited computers we need efficient backtracking in program space to reset storage contents modified by tested programs. We introduce a recursive procedure for doing this in time-optimal fashion. OOPS can solve tasks unsolvable by traditional reinforcement learners and AI planners,

such as Towers of Hanoi with 30 disks (minimal solution size > 10^9). In our experiments OOPS demonstrates incremental learning by reusing previous solutions to discover a prefix that temporarily rewrites the distribution on its suffixes, such that universal search is accelerated by a factor of 1000. This illustrates how OOPS can benefit from self-improvement and metasearching, that is, searching for faster search procedures.

We mention several OOPS variants and outline OOPS-based reinforcement learners. Since OOPS will scale to larger problems in essentially unbeatable fashion, we also examine its physical limitations.

Keywords: OOPS, bias-optimality, incremental optimal universal search, efficient planning & backtracking in program space, metasearching & metalearning, self-improvement

Based on arXiv:cs.AI/0207097v1 (TR-IDSIA-12-02 version 1.0, July 2002) (Schmidhuber, 2002d,a). All sections are illustrated by Figures 1 and 2 at the end of this paper. Frequently used symbols are collected in reference Table 3 (general OOPS-related symbols) and Table A.1 (less important implementation-specific symbols, explained in the appendix, Section A).

Contents

1 Introduction
  1.1 Overview
2 Survey of Universal Search and Suboptimal Incremental Extensions
  2.1 Bias-Optimality
  2.2 Near-Bias-Optimal Nonincremental Universal Search
  2.3 Asymptotically Fastest Nonincremental Problem Solver
  2.4 Previous Work on Incremental Extensions of Universal Search
  2.5 Other Work on Incremental Learning
3 OOPS on Universal Computers
  3.1 Formal Setup and Notation
  3.2 Basic Principles of OOPS
  3.3 Essential Properties of OOPS
  3.4 Summary
4 OOPS on Realistic Computers
  4.1 Multitasking & Prefix Tracking By Recursive Procedure "Try"
    4.1.1 Overview of "Try"
    4.1.2 Details of "Try": Bias-Optimal Depth-First Planning in Program Space
  4.2 Realistic OOPS for Finding Universal Solvers
  4.3 Near-Bias-Optimality of Realistic OOPS
  4.4 Realistic OOPS Variants for Optimization etc.
  4.5 Illustrated Informal Recipe for OOPS Initialization
  4.6 Example Initial Programming Language
5 Limitations and Possible Extensions of OOPS
  5.1 How Often Can We Expect to Profit from Earlier Tasks?
  5.2 Fundamental Limitations of OOPS
  5.3 Outline of OOPS-based Reinforcement Learning (OOPS-RL)
6 Experiments
  6.1 On Task-Specific Initialization
  6.2 Towers of Hanoi: the Problem
  6.3 Task Representation and Domain-Specific Primitives
  6.4 Incremental Learning: First Solve Simpler Context Free Language Tasks
  6.5 C-Code
  6.6 Experimental Results for Both Task Sets
  6.7 Analysis of the Results
  6.8 Physical Limitations of OOPS
A Example Programming Language
  A.1 Data Structures on Tapes
  A.2 Primitive Instructions
    A.2.1 Basic Data Stack-Related Instructions
    A.2.2 Control-Related Instructions
    A.2.3 Bias-Shifting Instructions to Modify Suffix Probabilities
  A.3 Initial User-Defined Programs: Examples

1. Introduction

We train children and most machine learning systems on sequences of harder and harder tasks. This makes sense since new problems often are more easily solved by reusing or adapting solutions to previous problems.

Often new tasks depend on solutions for earlier tasks. For example, given an NP-hard optimization problem, the n-th task in a sequence of tasks may be to find an approximation to the unknown optimal solution such that the new approximation is at least 1% better (according to some measurable performance criterion) than the best found so far.

Alternatively, we may want to find a strategy for solving all tasks in a given sequence of more and more complex tasks. For example, we might want to teach our learner a program that computes fac(n) = 1 × 2 × ... × n for any given positive integer n. Naturally, the n-th task in the "training sequence" will be to compute fac(n).
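The factorial training sequence can be written down directly; the following Python fragment is an illustrative sketch (the list-of-pairs task encoding is our assumption, not the paper's):

```python
def fac(n):
    """Compute fac(n) = 1 * 2 * ... * n for a positive integer n."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# The n-th task of the "training sequence" is simply: compute fac(n).
training_sequence = [(n, fac(n)) for n in range(1, 6)]  # tasks 1..5
```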

In general we would like our learner to continually profit from useful information conveyed by solutions to earlier tasks. To do this in an optimal fashion, the learner may also have to improve the way it exploits earlier solutions. Is there a general yet time-optimal way of achieving such a feat? Indeed, there is. The Optimal Ordered Problem Solver (OOPS) is a simple, general, theoretically sound way of solving one task after another, efficiently searching the space of programs that compute solution candidates, including programs that organize and manage and adapt and reuse earlier acquired knowledge.

1.1 Overview

Section 2 will survey previous relevant work on general optimal search algorithms. Section 3 will use the framework of universal computers to explain OOPS and how it benefits from incrementally extracting useful knowledge hidden in training sequences. The remainder of the paper is devoted to "Realistic" OOPS, which uses a recursive procedure for time-optimal planning and backtracking in program space to perform efficient storage management (Section 4) on realistic, limited computers. Appendix A describes a pilot implementation of Realistic OOPS based on a stack-based universal programming language inspired by Forth (Moore and Leach, 1970), with initial primitives for defining and calling recursive functions, iterative loops, arithmetic operations, domain-specific behavior, and even for rewriting the search procedure itself. Experiments in Section 6 use the language of Appendix A to solve 60 tasks in a row: we first teach OOPS something about recursion, by training it to construct samples of the simple context free language {1^k 2^k} (k 1's followed by k 2's), for k up to 30. This takes roughly 0.3 days on a standard personal computer (PC). Thereafter, within a few additional days, OOPS demonstrates the benefits of incremental knowledge transfer: it exploits certain properties of its previously discovered universal 1^k 2^k-solver to greatly accelerate the search for a universal solver for all k disk Towers of Hanoi problems, solving all instances up to k = 30 (solution size 2^k − 1). Previous, less general reinforcement learners and non-learning AI planners tend to fail for much smaller instances.

2. Survey of Universal Search and Suboptimal Incremental Extensions

Let us start by briefly reviewing general, asymptotically optimal search methods by Levin (1973, 1984) and Hutter (2002a). These methods are nonincremental in the sense that they do not attempt to accelerate the search for solutions to new problems through experience with previous problems. We will point out drawbacks of existing heuristic extensions for incremental search. The remainder of the paper will describe OOPS, which overcomes these drawbacks.

2.1 Bias-Optimality

For the purposes of this paper, a problem r is defined by a recursive procedure f_r that takes as an input any potential solution (a finite symbol string y ∈ Y, where Y represents a search space of solution candidates) and outputs 1 if y is a solution to r, and 0 otherwise. Typically the goal is to find as quickly as possible some y that solves r.
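Read operationally, a problem r is just a 0/1-valued test procedure f_r. A minimal Python sketch (the string-matching problem is an invented example, not one from the paper):

```python
def make_problem(target):
    """Return f_r for a toy problem r: a candidate string y solves r
    iff y equals `target` (a stand-in for an arbitrary recursive test)."""
    def f_r(y):
        return 1 if y == target else 0
    return f_r
```

Here `make_problem("101")("101")` yields 1, while any other candidate yields 0.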

Define a probability distribution P on a finite or infinite set of programs for a given computer. P represents the searcher's initial bias (e.g., P could be based on program length, or on a probabilistic syntax diagram). A bias-optimal searcher will not spend more time on any solution candidate than it deserves, namely, not more than the candidate's probability times the total search time:

Definition 1 (Bias-Optimal Searchers) Given is a problem class R, a search space C of solution candidates (where any problem r ∈ R should have a solution in C), a task-dependent bias in form of conditional probability distributions P(q | r) on the candidates q ∈ C, and a predefined procedure that creates and tests any given q on any r ∈ R within time t(q, r) (typically unknown in advance). A searcher is n-bias-optimal (n ≥ 1) if for any maximal total search time T_max > 0 it is guaranteed to solve any problem r ∈ R if it has a solution p ∈ C satisfying t(p, r) ≤ P(p | r) T_max / n. It is bias-optimal if n = 1.

This definition makes intuitive sense: the most probable candidates should get the lion's share of the total search time, in a way that precisely reflects the initial bias.

2.2 Near-Bias-Optimal Nonincremental Universal Search

The following straightforward method (sometimes referred to as Levin Search or Lsearch) is near-bias-optimal. For simplicity, we notationally suppress conditional dependencies on the current problem. Compare Levin (1973, 1984), Solomonoff (1986), Schmidhuber et al. (1997b), Li and Vitányi (1997), Hutter (2002a) (Levin also attributes similar ideas to Allender):

Method 2.1 (Lsearch) Set current time limit T = 1. While problem not solved do:

    Test all programs q such that t(q), the maximal time spent on creating and running and testing q, satisfies t(q) < P(q) T. Set T := 2T.
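A minimal sketch of this doubling scheme, assuming a finite candidate set, known probabilities P, and a `run(q, steps)` interface that executes q for a bounded number of abstract steps (all simplifying assumptions; real Lsearch enumerates programs for a universal computer):

```python
def lsearch(programs, P, run, max_phases=30):
    """Method 2.1 sketch: in phase T = 1, 2, 4, ... give each candidate q
    a budget of P[q] * T steps; return the first (q, solution) found."""
    T = 1
    for _ in range(max_phases):
        for q in programs:
            budget = int(P[q] * T)  # time allotted to q in this phase
            if budget >= 1:
                solution = run(q, budget)
                if solution is not None:
                    return q, solution
        T *= 2  # Set T := 2T and repeat
    return None
```

Note how the per-candidate budget realizes the bias: a candidate with twice the prior probability gets twice the runtime in every phase.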


Note that Lsearch has the optimal order of computational complexity: given some problem class, if some unknown optimal program p requires f(k) steps to solve a problem instance of size k, then Lsearch will need at most O(f(k)/P(p)) = O(f(k)) steps; the constant factor 1/P(p) may be huge but does not depend on k.

The near-bias-optimality of Lsearch is hardly affected by the fact that for each value of T we repeat certain computations for the previous value. Roughly half the total search time is still spent on T's maximal value (ignoring hardware-specific overhead for parallelization and nonessential speed-ups due to halting programs, if there are any). Note also that the time for testing is properly taken into account here: any result whose validity is hard to test is automatically penalized.

Universal Lsearch provides inspiration for nonuniversal but very practical methods which are optimal with respect to a limited search space, while suffering only from very small slowdown factors. For example, designers of planning procedures often just face a binary choice between two options such as depth-first and breadth-first search. The latter is often preferable, but its greater demand for storage may eventually require moving data from on-chip memory to disk. This can slow down the search by a factor of 10,000 or more. A straightforward solution in the spirit of Lsearch is to start with a 50% bias towards either technique, and use both depth-first and breadth-first search in parallel; this will cause a slowdown factor of at most 2 with respect to the best of the two options (ignoring a bit of overhead for parallelization). Such methods have presumably been used long before Levin's 1973 paper. Wiering and Schmidhuber (1996) and Schmidhuber et al. (1997b) used rather general but nonuniversal variants of Lsearch to solve machine learning toy problems unsolvable by traditional methods. Probabilistic alternatives based on probabilistically chosen maximal program runtimes in Speed-Prior style (Schmidhuber, 2000, 2002e) also outperformed traditional methods on toy problems (Schmidhuber, 1995, 1997).
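The depth-first/breadth-first example can be sketched as a lockstep race between generators; the generator protocol and the unit-step granularity are our assumptions:

```python
def race(*strategies):
    """Interleave search strategies, one step per strategy per round.
    Each strategy is a generator yielding None per unit step until it
    yields a solution.  With two strategies this realizes the 50% bias:
    the winner costs at most twice its solo step count."""
    iterators = [iter(s) for s in strategies]
    while iterators:
        for it in list(iterators):
            try:
                result = next(it)
            except StopIteration:
                iterators.remove(it)  # strategy gave up; drop it
                continue
            if result is not None:
                return result
    return None  # no strategy found a solution
```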

2.3 Asymptotically Fastest Nonincremental Problem Solver

Recently my postdoc Hutter (2002a) developed a more complex asymptotically optimal search algorithm for all well-defined problems. Hsearch (or Hutter Search) cleverly allocates part of the total search time to searching the space of proofs for provably correct candidate programs with provable upper runtime bounds; at any given time it focuses resources on those programs with the currently best proven time bounds. Unexpectedly, Hsearch manages to reduce the constant slowdown factor to a value smaller than 5. In fact, it can be made smaller than 1 + ε, where ε is an arbitrary positive constant (M. Hutter, personal communication, 2002).

Unfortunately, however, Hsearch is not yet the final word in computer science, since the search in proof space introduces an unknown additive problem class-specific constant slowdown, which again may be huge. While additive constants generally are preferable to multiplicative ones, both types may make universal search methods practically infeasible; in the real world constants do matter. For example, the last to cross the finish line in the Olympic 100 m dash may be only a constant factor slower than the winner, but this will not comfort him. And since constants beyond 2^500 do not even make sense within this universe, both Lsearch and Hsearch may be viewed as academic exercises demonstrating that the O() notation can sometimes be practically irrelevant despite its wide use in theoretical computer science.

2.4 Previous Work on Incremental Extensions of Universal Search

"Only math nerds would consider 2^500 finite." (Leonid Levin)

Hsearch and Lsearch (Sections 2.2, 2.3) neglect one potential source of speed-up: they are nonincremental in the sense that they do not attempt to minimize their constant slowdowns by exploiting experience collected in previous searches for solutions to earlier tasks. They simply ignore the constants; from an asymptotic point of view, incremental search does not buy anything.

A heuristic attempt (Schmidhuber et al., 1997b) to greatly reduce the constants through experience was called Adaptive Lsearch or Als; compare related ideas by Solomonoff (1986, 1989). Essentially Als works as follows: whenever Lsearch finds a program q that computes a solution for the current problem, q's probability P(q) is substantially increased using a "learning rate," while probabilities of alternative programs decrease appropriately. Subsequent Lsearches for new problems then use the adjusted P, etc. Schmidhuber et al. (1997b) and Wiering and Schmidhuber (1996) used a nonuniversal variant of this approach to solve reinforcement learning (RL) tasks in partially observable environments unsolvable by traditional RL algorithms.

Each Lsearch invoked by Als is bias-optimal with respect to the most recent adjustment of P. On the other hand, the rather arbitrary P-modifications themselves are not necessarily optimal. They might lead to overfitting in the following sense: modifications of P after the discovery of a solution to problem 1 could actually be harmful and slow down the search for a solution to problem 2, etc. This may provoke a loss of near-bias-optimality with respect to the initial bias during exposure to subsequent tasks. Furthermore, Als has a fixed prewired method for changing P and cannot improve this method by experience. The main contribution of this paper is to overcome all such drawbacks in a principled way.

2.5 Other Work on Incremental Learning

Since the early attempts of Newell and Simon (1963) at building a "General Problem Solver" (see also Rosenbloom et al., 1993), much work has been done to develop mostly heuristic machine learning algorithms that solve new problems based on experience with previous problems, by incrementally shifting the inductive bias in the sense of Utgoff (1986). Many pointers to learning by chunking, learning by macros, hierarchical learning, learning by analogy, etc. can be found in the book by Mitchell (1997). Relatively recent general attempts include program evolvers such as Adate (Olsson, 1995) and simpler heuristics such as Genetic Programming (GP) (Cramer, 1985, Banzhaf et al., 1998). Unlike logic-based program synthesizers (Green, 1969, Waldinger and Lee, 1969, Deville and Lau, 1994), program evolvers use biology-inspired concepts of Evolutionary Computation (Rechenberg, 1971, Schwefel, 1974) or Genetic Algorithms (Holland, 1975) to evolve better and better computer programs. Most existing GP implementations, however, do not even allow for programs with loops and recursion, thus ignoring a main motivation for search in program space. They either have very limited search spaces (where solution candidate runtime is not even an issue), or are far from bias-optimal, or both. Similarly, traditional reinforcement learners (Kaelbling et al., 1996) are neither general nor close to being bias-optimal.

A first step to make GP-like methods bias-optimal would be to allocate runtime to tested programs in proportion to the probabilities of the mutations or "crossover operations" that generated them. Even then there would still be room for improvement, however, since GP has quite limited ways of making new programs from previous ones; it does not learn better program-making strategies.

This brings us to several previous publications on learning to learn or metalearning (Schmidhuber, 1987), where the goal is to learn better learning algorithms through self-improvement without human intervention; compare the human-assisted self-improver by Lenat (1983). We introduced the concept of incremental search for improved, probabilistically generated code that modifies the probability distribution on the possible code continuations: incremental self-improvers (Schmidhuber et al., 1997a) use the success-story algorithm SSA to undo those self-generated probability modifications that in the long run do not contribute to increasing the learner's cumulative reward per time interval. An earlier meta-GP algorithm (Schmidhuber, 1987) was designed to learn better GP-like strategies; Schmidhuber (1987) also combined principles of reinforcement learning economies (Holland, 1985) with a "self-referential" metalearning approach. A gradient-based metalearning technique (Schmidhuber, 1993) for continuous program spaces of differentiable recurrent neural networks (RNNs) was also designed to favor better learning algorithms; compare the remarkable recent success of the related but technically improved RNN-based metalearner by Hochreiter et al. (2001).

The algorithms above generally are not near-bias-optimal, though. The method discussed in this paper, however, combines optimal search and incremental self-improvement, and will be n-bias-optimal, where n is a small and practically acceptable number, such as 8.

3. OOPS on Universal Computers

An informed reader familiar with concepts such as universal computers (Turing, 1936) and self-delimiting programs (Levin, 1974, Chaitin, 1975) will probably understand the simple basic principles of OOPS by just reading the abstract. For the others, Subsection 3.1 will start the formal description of OOPS by introducing notation and explaining program sets that are prefix codes. Subsection 3.2 will provide OOPS pseudocode and point out its essential properties and a few essential differences to previous work. The remainder of the paper is about practical implementations of the basic principles on realistic computers with limited storage.

3.1 Formal Setup and Notation

Unless stated otherwise or obvious, to simplify notation, throughout the paper newly introduced variables are assumed to be integer-valued and to cover the range implicit in the context. Given some finite or countably infinite alphabet Q = {Q_1, Q_2, ...}, let Q* denote the set of finite sequences or strings over Q, where λ is the empty string. Then let q, q1, q2, ... ∈ Q* be (possibly variable) strings. l(q) denotes the number of symbols in string q, where l(λ) = 0; q_n is the n-th symbol of string q; q_{m:n} = λ if m > n, and q_{m:n} = q_m q_{m+1} ... q_n otherwise.

[Table 3 (reference table of frequently used symbols) appears here; the extraction preserved only its symbol column: Q, Q_i, n_Q, Q*, q, q_n, q^n, qp, a_last, a_frozen, q_{1:a_frozen}, R, S, S*, s_i, s(r), s_i(r), l(s), z(i)(r), ip(r), p(r), p_i(r). See Table 3 for the corresponding descriptions.]

We define the content of address i as z(i)(r) := q_i if 0 < i ≤ l(q), and as the corresponding component of the task-specific storage s(r) for the remaining valid (non-positive) addresses. Each task r also maintains a variable probability distribution p(r) = {p_1(r), ..., p_{n_Q}(r)} (Σ_i p_i(r) = 1) on Q.

Code is executed in a way inspired by self-delimiting binary programs (Levin, 1974, Chaitin, 1975) studied in the theory of Kolmogorov complexity and algorithmic probability (Solomonoff, 1964, Kolmogorov, 1965). Section 4.1 will present details of a practically useful variant of this approach. Code execution is time-shared sequentially among all current tasks. Whenever any ip(r) has been initialized or changed such that its new value points to a valid address ≥ −l(s(r)) but ≤ l(q), and this address contains some executable token Q_i, then Q_i will define task r's next instruction to be executed. The execution may change s(r) including ip(r). Whenever the time-sharing process works on task r and ip(r) points to the smallest positive currently unused address l(q) + 1, q will grow by one token (so l(q) will increase by 1), and the current value of p_i(r) will define the current probability of selecting Q_i as the next token, to be stored at new address l(q) and to be executed immediately. That is, executed program beginnings or prefixes define the probabilities of their possible suffixes. (Programs will be interrupted through errors or halt instructions or when all current tasks are solved or when certain search time limits are reached; see Section 3.2.)

To summarize and exemplify: programs are grown incrementally, token by token; their prefixes are immediately executed while being created; this may modify some task-specific internal state or memory, and may transfer control back to previously selected tokens (e.g., loops). To add a new token to some program prefix, we first have to wait until the execution of the prefix so far explicitly requests such a prolongation, by setting an appropriate signal in the internal state. Prefixes that cease to request any further tokens are called self-delimiting programs or simply programs (programs are their own prefixes). So our procedure yields task-specific prefix codes on program space: with any given task, programs that halt because they have found a solution or encountered some error cannot request any more tokens. Given a single task and the current task-specific inputs, no program can be the prefix of another one. On a different task, however, the same program may continue to request additional tokens.
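The token-by-token growth described above can be sketched as follows; the `next_distribution` / `execute_token` interface and the dictionary state are our assumptions, standing in for the paper's interpreter and task-specific tapes:

```python
import random

def grow_program(next_distribution, execute_token, max_len=100, rng=random):
    """Grow a self-delimiting program prefix token by token.

    next_distribution(state) -> (tokens, probs): the suffix distribution,
        which the executed prefix may recompute at every step.
    execute_token(state, token) -> (new_state, wants_more): executes the
        freshly selected token immediately; wants_more=False means the
        prefix stops requesting prolongations, i.e. it is a program.
    """
    state, program = {}, []
    wants_more = True
    while wants_more and len(program) < max_len:
        tokens, probs = next_distribution(state)
        token = rng.choices(tokens, weights=probs, k=1)[0]
        program.append(token)  # stored at the next free address l(q) + 1
        state, wants_more = execute_token(state, token)
    return program, state
```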

a_frozen ≥ 0 is a variable address that can increase but not decrease. Once chosen, the code bias q_{0:a_frozen} will remain unchangeable forever; it is a (possibly empty) sequence of programs q1 q2 ..., some of them prewired by the user, others frozen after previous successful searches for solutions to previous task sets (possibly completely unrelated to the current task set R).

To allow for programs that exploit previous solutions, the instruction set Q should contain instructions for invoking or calling code found below a_frozen, for copying such code into some s(r), and for editing the copies and executing the results. Examples of such instructions will be given in the appendix (Section A).


3.2 Basic Principles of OOPS

Given a sequence of tasks, we solve one task after another in the given order. The solver of the n-th task (n ≥ 1) will be a program q^i (i ≤ n) stored such that it occupies successive addresses somewhere between 1 and l(q). The solver of the 1st task will start at address 1. The solver of the n-th task (n > 1) will either start at the same address as the solver of the (n−1)-th task, or right after its end address. To find a universal solver for all tasks in a given task sequence, do:

Method 3.1 (OOPS) FOR task index n = 1, 2, ... DO:

1. Initialize current time limit T := 2.

2. Spend at most T/2 on a variant of Lsearch that searches for a program solving task n and starting at the start address a_last of the most recent successful code (1 if there is none). That is, the problem-solving program either must be equal to q_{a_last:a_frozen} or must have q_{a_last:a_frozen} as a prefix. If solution found, go to 5.

3. Spend at most T/2 on Lsearch for a fresh program that starts at the first writeable address and solves all tasks 1..n. If solution found, go to 5.

4. Set T := 2T, and go to 2.

5. Let the top non-writeable address a_frozen point to the end of the just discovered problem-solving program.
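The control flow of Method 3.1 can be sketched as below; the two `search_*` callables abstract away the Lsearch variants of steps 2 and 3, and the solver representation is our assumption. The sketch presumes every task is eventually solvable (otherwise the inner loop never exits):

```python
def oops(tasks, search_prolongations, search_fresh):
    """Sketch of Method 3.1 (OOPS).

    search_prolongations(frozen, task, budget): search restricted to
        programs extending the most recent frozen solver (step 2).
    search_fresh(task_list, budget): search over fresh programs that
        must solve all tasks 1..n (step 3).
    Both return a solver or None within the given time budget."""
    frozen = []  # solvers below a_frozen, in discovery order
    for n, task in enumerate(tasks, start=1):
        T = 2.0  # step 1: initialize the time limit
        while True:
            solver = search_prolongations(frozen, task, T / 2)  # step 2
            if solver is None:
                solver = search_fresh(tasks[:n], T / 2)         # step 3
            if solver is not None:
                break
            T *= 2.0  # step 4: double the limit and retry
        frozen.append(solver)  # step 5: freeze the successful code
    return frozen
```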

3.3 Essential Properties of OOPS

The following observations highlight important aspects of OOPS and point out in which sense OOPS is optimal.

Observation 3.1 A program starting at a_last and solving task n will also solve all tasks up to n.

Proof (exploits the nature of self-delimiting programs): Obvious for n = 1. For n > 1: By induction, the code between a_last and a_frozen, which cannot be altered any more, already solves all tasks up to n−1. During its application to task n it cannot request any additional tokens that could harm its performance on these previous tasks. So those of its prolongations that solve task n will also solve tasks 1, ..., n−1.

Observation 3.2 a_last does not increase if task n can be more quickly solved by testing prolongations of q_{a_last:a_frozen} on task n, than by testing fresh programs starting above a_frozen on all tasks up to n.

Observation 3.3 Once we have found an optimal solver for all tasks in the sequence, at most half of the total future time will be wasted on searching for faster alternatives.

Observation 3.4 Unlike the learning rate-based bias shifts of Als (Section 2.4), those of OOPS do not reduce the probabilities of programs that were meaningful and executable before the addition of any new q^i.


But consider formerly meaningless program prefixes trying to access code for earlier solutions when there weren't any: such prefixes may suddenly become prolongable and successful, once some solutions to earlier tasks have been stored. That is, unlike with Als the acceleration potential of OOPS is not bought at the risk of an unknown slowdown due to nonoptimal changes of the underlying probability distribution through a heuristically chosen learning rate. As new tasks come along, OOPS remains near-bias-optimal with respect to the initial bias, while still being able to profit from subsequent code bias shifts in an optimal way.

Observation 3.5 Given the initial bias and subsequent code bias shifts due to q^1, q^2, ..., no bias-optimal searcher with the same initial bias will solve the current task set substantially faster than OOPS.

Ignoring hardware-specific overhead (e.g., for measuring time and switching between tasks), OOPS will lose at most a factor 2 through allocating half the search time to prolongations of q_{a_last:a_frozen}, and another factor 2 through the incremental doubling of time limits in Lsearch (necessary because we do not know in advance the final time limit).

Observation 3.6 If the current task (say, task n) can already be solved by some previously frozen program q^k, then the probability of a solver for task n is at least equal to the probability of the most probable program computing the start address a(q^k) of q^k and setting instruction pointer ip(n) := a(q^k).

Observation 3.7 As we solve more and more tasks, thus collecting and freezing more and more q^i, it will generally become harder and harder to identify and address and copy-edit useful code segments within the earlier solutions.

As a consequence we expect that much of the knowledge embodied by certain q^j actually will be about how to access and copy-edit and otherwise use programs q^i (i < j).

Observation 3.8 Tested program prefixes may rewrite the probability distribution on their suffixes in computable ways (based on previously frozen q^i), thus temporarily redefining the search space structure of Lsearch, essentially rewriting the search procedure. If this type of metasearching for faster search algorithms is useful to accelerate the search for a solution to the current problem, then OOPS will automatically exploit this.

Since there is no fundamental difference between domain-specific problem-solving programs and programs that manipulate probability distributions and rewrite the search procedure itself, we collapse both learning and metalearning in the same time-optimal framework.

Observation 3.9 If the overall goal is just to solve one task after another, as opposed to finding a universal solver for all tasks, it suffices to test only on task n in step 3.

For example, in an optimization context the n-th task usually is not to find a solver for all tasks in the sequence, but just to find an approximation to some unknown optimal solution such that the new approximation is better than the best found so far.


3.4 Summary

Lsearch is about optimal time-sharing, given one problem. OOPS is about optimal time-sharing, given a sequence of problems. The basic principles of Lsearch can be explained in one line: time-share all program tests such that each program gets a constant fraction of the total search time. Those of OOPS require just a few more lines: use self-delimiting programs and freeze those that were successful; given a new task, spend a fixed fraction of the total search time on programs starting with the most recently frozen prefix (test only on the new task, never on previous tasks); spend the rest of the time on fresh programs (when looking for a universal solver, test them on all previous tasks).

OOPS spends part of the total search time for a new problem on programs that exploit previous solutions in computable ways. If the new problem can be solved faster by copy-editing/invoking previous code than by solving the new problem from scratch, then OOPS will find this out. If not, then at least it will not suffer from the previous solutions.

If OOPS is so simple indeed, then why does the paper not end here but have 31 additional pages? The answer is: to describe the additional efforts required to make OOPS work on realistic limited computers, as opposed to universal machines.

4. OOPS on Realistic Computers

Unlike the Turing machines originally used to describe Lsearch and Hsearch, realistic computers have limited storage. So we need to efficiently reset storage modifications computed by the numerous programs OOPS is testing. Furthermore, our programs typically will be composed from more complex primitive instructions than those of typical Turing machines. In what follows we will address such issues in detail.

4.1 Multitasking & Prefix Tracking By Recursive Procedure "Try"

Hsearch and Lsearch assume potentially infinite storage. Hence they may largely ignore questions of storage management. In any practical system, however, we have to efficiently reuse limited storage. Therefore, in both subsearches of Method 3.1 (steps 2 and 3), Realistic OOPS evaluates alternative prefix continuations by a practical, token-oriented backtracking procedure that can deal with several tasks in parallel, given some code bias in the form of previously found code.

The novel recursive method Try below essentially conducts a depth-first search in program space, where the branches of the search tree are program prefixes (each modifying a bunch of task-specific states), and backtracking (partial resets of partially solved task sets and modifications of internal states and continuation probabilities) is triggered once the sum of the runtimes of the current prefix on all current tasks exceeds the current time limit multiplied by the prefix probability (the product of the history-dependent probabilities of the previously selected prefix components in Q). This ensures near-bias-optimality (Def. 1), given some initial probabilistic bias on program space Q.
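This budget-pruned depth-first search can be sketched in a simplified single-task form, assuming one time step per executed token and a fixed token distribution (the full Method 4.1 below additionally handles task rings and state undoing; `try_prefix`, `search`, and `solves` are illustrative names, and the loop assumes some program does solve the task):

```python
# Simplified sketch of the depth-first prefix search behind Try: a branch is
# abandoned as soon as its runtime exceeds its probability times the limit T.
def try_prefix(prefix, t, P, T, Q, solves):
    """Depth-first search over program prolongations; Q maps tokens to
    probabilities, t is runtime so far, P the prefix probability."""
    if solves(prefix):
        return prefix
    for token, p_tok in Q.items():
        P2, t2 = P * p_tok, t + 1        # extend prefix by one token
        if t2 > P2 * T:                  # branch exhausted its time share
            continue
        found = try_prefix(prefix + [token], t2, P2, T, Q, solves)
        if found is not None:
            return found
    return None

def search(Q, solves):
    T = 2
    while True:                          # double the limit until success
        found = try_prefix([], 0, 1.0, T, Q, solves)
        if found is not None:
            return found, T
        T *= 2

Q = {'a': 0.5, 'b': 0.5}
print(search(Q, lambda p: p == ['a', 'b']))
```

Because the prefix probability shrinks geometrically with depth while runtime grows linearly, the pruning condition bounds the search depth for any fixed T, so each call to `try_prefix` terminates.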

Given task set R, the current goal is to solve all tasks r ∈ R, by a single program that either appropriately uses or extends the current code q_{0:a_frozen} (no additional freezing will take place before all tasks in R are solved).

Optimal Ordered Problem Solver

4.1.1 Overview of “Try”

We assume an initial set of user-defined primitive behaviors reflecting prior knowledge and assumptions of the user. Primitives may be assembler-like instructions or time-consuming software, such as, say, theorem provers, or matrix operators for neural network-like parallel architectures, or trajectory generators for robot simulations, or state update procedures for multiagent systems, etc. Each primitive is represented by a token ∈ Q. It is essential that those primitives whose runtimes are not known in advance can be interrupted by oops at any time.

The searcher’s initial bias is also affected by initial, user-defined, task-dependent probability distributions on the finite or infinite search space of possible self-delimiting program prefixes. In the simplest case we start with a maximum entropy distribution on the tokens, and define prefix probabilities as the products of the probabilities of their tokens. But prefix continuation probabilities may also depend on previous tokens in context-sensitive fashion defined by a probabilistic syntax diagram. In fact, we even permit that any executed prefix assigns a task-dependent, self-computed probability distribution to its own possible suffixes (compare Section 3.1).

Consider the left-hand side of Figure 1. All instruction pointers ip(r) of all current tasks r are initialized by some address, typically below the topmost code address, thus accessing the code bias common to all tasks, and/or using task-specific code fragments written into tapes. All tasks keep executing their instructions in parallel until interrupted, or all tasks are solved, or some task’s instruction pointer points to the yet unused address right after the topmost code address. The latter case is interpreted as a request for code prolongation through a new token, where each token has a probability according to the present task’s current state-encoded distribution on the possible next tokens. The deterministic method Try systematically examines all possible code extensions in a depth-first fashion (probabilities of prefixes are just used to order them for runtime allocation). Interrupts and backtracking to previously selected tokens (with yet untested alternatives) and the corresponding partial resets of states and task sets take place whenever one of the tasks encounters an error, or the product of the task-dependent probabilities of the currently selected tokens multiplied by the sum of the runtimes on all tasks exceeds a given total search time limit T.

To allow for efficient backtracking, Try tracks effects of tested program prefixes, such as task-specific state modifications (including probability distribution changes) and partially solved task sets, to reset conditions for subsequent tests of alternative, yet untested prefix continuations in an optimally efficient fashion (at most as expensive as the prefix tests themselves).

Since programs are created online while they are being executed, Try will never create impossible programs that halt before all their tokens are read. No program that halts on a given task can be the prefix of another program halting on the same task. It is important to see, however, that in our setup a given prefix that has solved one task (to be removed from the current task set) may continue to demand tokens as it tries to solve other tasks.

Schmidhuber

4.1.2 Details of “Try”: Bias-Optimal Depth-First Planning in Program Space

To allow us to efficiently undo state changes, we use global Boolean variables mark_i(r) (initially False) for all possible state components s_i(r). We initialize time t_0 := 0; probability P := 1; q-pointer qp := a_frozen; and state s(r) (including ip(r) and p(r)) with task-specific information for all task names r in a so-called ring R_0 of tasks, where the expression “ring” indicates that the tasks are ordered in cyclic fashion; |R| denotes the number of tasks in ring R. Given a global search time limit T, we Try to solve all tasks in R_0, by using existing code in q = q_{1:qp} and/or by discovering an appropriate prolongation of q:

—————————————————————————————–

Method 4.1 (Boolean Try (qp, r_0, R_0, t_0, P)) (r_0 ∈ R_0; returns True or False; may have the side effect of increasing a_frozen and thus prolonging the frozen code q_{1:a_frozen}):

1. Make an empty stack S; set local variables r := r_0; R := R_0; t := t_0; Done := False. While there are unsolved tasks (|R| > 0), and there is enough time left (t ≤ P T), and the instruction pointer is valid (−l(s(r)) ≤ ip(r) ≤ qp), and the instruction is valid (1 ≤ z(ip(r))(r) ≤ n_Q), and no halt condition is encountered (e.g., an error such as division by 0, or the robot bumps into an obstacle; evaluate the conditions in the above order until the first is satisfied, if any) Do:

Interpret/execute token z(ip(r))(r) according to the rules of the given programming language, continually increasing t by the consumed time. This may modify s(r), including instruction pointer ip(r) and distribution p(r), but not code q. Whenever the execution changes some state component s_i(r) whose mark_i(r) = False, set mark_i(r) := True and save the previous value s̄_i(r) by pushing the triple (i, r, s̄_i(r)) onto S. Remove r from R if solved. If |R| > 0, set r equal to the next task in ring R (as in the round-robin method of standard operating systems). Else set Done := True; a_frozen := qp (all tasks solved; new code frozen, if any).

2. Use S to efficiently reset only the modified mark_i(k) to False (the global mark variables will be needed again in step 3), but do not pop S yet.

3. If ip(r) = qp + 1 (i.e., if there is an online request for prolongation of the current prefix through a new token): While Done = False and there is some yet untested token Z ∈ Q (untried since t_0 as value for q_{qp+1}) Do:

Set q_{qp+1} := Z and Done := Try(qp + 1, r, R, t, P · p(r)(Z)), where p(r)(Z) is Z’s probability according to current distribution p(r).

4. Use S to efficiently restore only those s_i(k) changed since t_0, thus restoring all tapes to their states at the beginning of the current invocation of Try. This will also restore instruction pointer ip(r_0) and original search distribution p(r_0). Return the value of Done.

—————————————————————————————–
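The two-stage undo of steps 2 and 4 can be sketched for a single tape (the actual implementation keeps one stack S per recursive invocation of Try; the class and method names below are illustrative, not from the paper):

```python
# Sketch of Try's state tracking: each state component's old value is saved
# at most once (guarded by a mark), so undoing a tested prefix costs no more
# than executing it did.
class Tape:
    def __init__(self, cells):
        self.s = list(cells)
        self.mark = [False] * len(cells)
        self.stack = []                  # triples (index, old value)

    def write(self, i, value):
        if not self.mark[i]:             # first modification since t0:
            self.mark[i] = True          # remember the original value once
            self.stack.append((i, self.s[i]))
        self.s[i] = value

    def reset_marks(self):               # step 2: clear marks, keep stack
        for i, _ in self.stack:
            self.mark[i] = False

    def restore(self):                   # step 4: pop stack, undo all writes
        while self.stack:
            i, old = self.stack.pop()
            self.s[i] = old
            self.mark[i] = False

tape = Tape([0, 0, 0])
tape.write(1, 7); tape.write(1, 9); tape.write(2, 5)
tape.restore()
print(tape.s)                            # original contents restored
```

Repeated writes to the same cell push only one triple, which is why the undo cost is bounded by the execution cost rather than by the number of writes times the tape size.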


A successful Try will solve all tasks, possibly increasing a_frozen and prolonging total code q. In any case Try will completely restore all states of all tasks. It never wastes time on recomputing previously computed results of prefixes, or on restoring unmodified state components and marks, or on already solved tasks: tracking/undoing effects of prefixes essentially does not cost more than their execution. So the n in Def. 1 of n-bias-optimality is not greatly affected by the undoing procedure: we lose at most a factor 2, ignoring hardware-specific overhead such as the costs of single push and pop operations on a given computer, or the costs of measuring time, etc.

Since the distributions p(r) are modifiable, we speak of self-generated continuation probabilities. As the variable suffix q′ := q_{a_frozen+1:qp} of the total code q = q_{1:qp} is growing, its probability can be readily updated:

P(q′ | s_0) = ∏_{i = a_frozen+1}^{qp} P_i(q_i | s_i),   (1)

where s_0 is an initial state, and P_i(q_i | s_i) is the probability of q_i, given the state s_i of the task r whose variable distribution p(r) (as a part of s_i) was used to determine the probability of token q_i at the moment it was selected. So we allow the probability of q_{qp+1} to depend on q_{0:qp} and initial state s_0 in a fairly arbitrary computable fashion. Note that unlike the traditional Turing machine-based setup of Levin (1974) and Chaitin (1975) (always yielding binary programs q with probability 2^{−l(q)}), this framework of self-generated continuation probabilities allows for token selection probabilities close to 1.0; that is, even long programs may have high probability.
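Eq. (1) can be illustrated numerically with a hypothetical history-dependent distribution (the function names and the particular probabilities below are made up for illustration):

```python
# Numerical sketch of Eq. (1): the probability of the growing suffix is the
# product of history-dependent token probabilities. Here each token repeats
# its predecessor with probability 0.9, switches with probability 0.1.
def token_prob(token, prev):
    if prev is None:
        return 0.5                       # two equiprobable initial tokens
    return 0.9 if token == prev else 0.1

def prefix_prob(tokens):
    P, prev = 1.0, None
    for tok in tokens:
        P *= token_prob(tok, prev)       # one factor P_i(q_i | s_i) per token
        prev = tok
    return P

# a long but regular program keeps probability near 0.5 * 0.9**9 ~ 0.19,
# far above the 2**-10 it would get in the binary Turing machine setup
print(prefix_prob(['a'] * 10))
```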

Example. In many programming languages the probability of token “(”, given a previous token “While”, equals 1. Having observed the “(” there is not a lot of new code to execute yet; in such cases the rules of the programming language will typically demand another increment of instruction pointer ip(r), which will lead to the request of another token through subsequent increment of the topmost code address. However, once we have observed a complete expression of the form “While (condition) Do (action),” it may take a long time until the conditional loop (interpreted via ip(r)) is exited and the top address is incremented again, thus asking for a new token.

The round-robin Try variant above keeps circling through all unsolved tasks, executing one instruction at a time. Alternative Try variants could also sequentially work on each task until it is solved, then try to prolong the resulting q on the next task, and so on, appropriately restoring previous tasks once it turns out that the current task cannot be solved through prolongation of the prefix solving the earlier tasks. One potential advantage of round-robin Try is that it will quickly discover whether the currently studied prefix causes an error for at least one task, in which case it can be discarded immediately.
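The round-robin interleaving can be sketched as follows, with one instruction per task per turn and solved tasks leaving the ring (`step` and `solved` are hypothetical stand-ins for token execution and the task's success test; error handling and backtracking are omitted):

```python
# Sketch of round-robin execution over a ring of tasks, as in Try's step 1.
from collections import deque

def round_robin(tasks, step, solved, max_steps=1000):
    """tasks: dict name -> state; step advances a state by one instruction."""
    ring = deque(tasks)
    t = 0
    while ring and t < max_steps:
        r = ring[0]
        tasks[r] = step(tasks[r]); t += 1
        if solved(tasks[r]):
            ring.popleft()               # drop the solved task from the ring
        else:
            ring.rotate(-1)              # move on: one instruction per task
        # an error in any task would trigger immediate backtracking here
    return t, list(ring)

# toy tasks: count up to a target, one increment per turn
tasks = {'r1': 0, 'r2': 0}
print(round_robin(tasks, lambda s: s + 1, lambda s: s >= 3))
```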

Nonrecursive C Code. An efficient iterative (nonrecursive) version of Try for a broad variety of initial programming languages was implemented in C. Instead of local stacks S, a single global stack is used to save and restore old contents of modified cells of all tapes/tasks.


4.2 Realistic OOPS for Finding Universal Solvers

Recall that the instruction set Q should contain instructions for invoking or calling code found below a_frozen, for copying such code into s(r), and for editing the copies and executing the results (examples in Appendix A).

Now suppose there is an ordered sequence of tasks r_1, r_2, .... Inductively suppose we have solved the first n tasks through programs stored below address a_frozen, and that the most recently discovered program starting at address a_last ≤ a_frozen actually solves all of them, possibly using information conveyed by earlier programs q_1, q_2, .... To find a program solving the first n+1 tasks, Realistic oops invokes Try as follows (using set notation for task rings, where the tasks are ordered in cyclic fashion; compare basic Method 3.1):

—————————————————————————————–

Method 4.2 (Realistic oops(n+1)) Initialize current time limit T := 2 and q-pointer qp := a_frozen (top frozen address).

1. Set instruction pointer ip(r_{n+1}) := a_last (start address of code solving all tasks up to n). If Try(qp, r_{n+1}, {r_{n+1}}, 0, 1/2) then exit. (This means that half the time is assigned to prolongations of the most recent code.)

2. Set a := a_frozen + 1 and ip(r_i) := a for all tasks r_i (i ≤ n+1). If Try(qp, r_{n+1}, {r_1, ..., r_{n+1}}, 0, 1/2) set a_last := a and exit. (This means that half the time is assigned to all new programs with fresh starts, tested on all tasks so far.)

3. Set T := 2T, and go to 1.

—————————————————————————————–

Therefore, given tasks r_1, r_2, ..., first initialize a_last; then for i := 1, 2, ... invoke Realistic oops(i) to find programs starting at (possibly increasing) address a_last, each solving all tasks so far, possibly eventually discovering a universal solver for all tasks in the sequence.
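The outer doubling loop of this scheme can be sketched abstractly, with hypothetical predicates standing in for the two Try calls of Method 4.2 (each receives half the current budget; `realistic_oops` and both lambdas are illustrative names, not the paper's API):

```python
# Sketch of the time allocation of Realistic oops for one new task: double
# the limit T until either the prolongation search or the fresh-program
# search succeeds within its half of the budget.
def realistic_oops(try_prolong, try_fresh, T0=2, max_T=2**20):
    T = T0
    while T <= max_T:
        if try_prolong(T / 2):           # half the time: most recent code
            return 'prolonged', T
        if try_fresh(T / 2):             # other half: fresh programs
            return 'fresh', T
        T *= 2                           # no success: double the limit
    return None

# toy stand-ins: prolongation needs budget 12, a fresh start only needs 5
print(realistic_oops(lambda b: b >= 12, lambda b: b >= 5))
```

Since each round doubles T, the total time over all failed rounds is at most the time of the successful round, which is where the factor 2 for incremental doubling in Section 4.3 comes from.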

As address a_last increases for the n-th time, q_n is defined as the program starting at a_last’s old value and ending right before its new value. Program q_m (m > i, j) may exploit q_i by calling it as a subprogram, or by copying q_i into some state s(r), then editing it there, e.g., by inserting parts of another q_j somewhere, then executing the edited variant.

4.3 Near-Bias-Optimality of Realistic OOPS

oops for realistic computers is not only asymptotically optimal in the sense of Levin (1973) (see Method 2.1), but also near bias-optimal (compare Def. 1, Observation 3.5). To see this, consider a program p solving the current task set within k steps, given current code bias q_{0:a_frozen} and a_last. Denote p’s probability by P(p) (compare Eq. (1) and Method 4.2; for simplicity we omit the obvious conditions). A bias-optimal solver would find a solution within at most k/P(p) steps. We observe that oops will find a solution within at most 2^3 k/P(p) steps, ignoring a bit of hardware-specific overhead (for marking changed tape components, measuring time, switching between tasks, etc.; compare Section 4.1): at most a factor 2 might be lost through allocating half the search time to prolongations of the most recent code, another factor 2 through the incremental doubling of T (necessary because we do not know in advance the best value of T), and another factor 2 through Try’s resets of states and tasks. So the method is essentially 8-bias-optimal (ignoring hardware issues) with respect to the current task. If we do not want to ignore hardware issues: on currently widely used computers we can realistically expect to suffer from slowdown factors smaller than acceptable values such as, say, 100.

The advantages of oops materialize when P(p) ≫ P(p′), where p′ is among the most probable fast solvers of the current task set that do not use previously found code. Ideally, p is already identical to the most recently frozen code. Alternatively, p may be rather short and thus likely because it uses information conveyed by earlier found programs stored below a_frozen. For example, p may call an earlier stored q_i as a subprogram. Or maybe p is a short and fast program that copies a large q_i into state s(r_j), then modifies the copy just a little bit to obtain q̄_i, then successfully applies q̄_i to r_j. Clearly, if p′ is not many times faster than p, then oops will in general suffer from a much smaller constant slowdown factor than non-incremental asymptotically optimal search, precisely reflecting the extent to which solutions to successive tasks do share useful mutual information, given the set of primitives for copy-editing them.

Given an optimal problem solver, problem r, current code bias q_{0:a_frozen}, the most recent start address a_last, and information about the starts and ends of previously frozen programs q_1, q_2, ..., q_k, the total search time T(r, q_1, q_2, ..., q_k, a_last, a_frozen) for solving r can be used to define the degree of bias

B(r, q_1, q_2, ..., q_k, a_last, a_frozen) := 1 / T(r, q_1, q_2, ..., q_k, a_last, a_frozen).

Compare the concept of conceptual jump size (Solomonoff, 1986, 1989).

4.4 Realistic OOPS Variants for Optimization etc.

Sometimes we are not searching for a universal solver, but just intend to solve the most recent task r_{n+1}. E.g., for problems of fitness function maximization or optimization, the n-th task typically is just to find a program that outperforms the most recently found program. In such cases we should use a reduced variant of oops which replaces step 2 of Method 4.2 by:

2. Set a := a_frozen + 1; set ip(r_{n+1}) := a. If Try(qp, r_{n+1}, {r_{n+1}}, 0, 1/2) set a_last := a and exit.

Alternative oops variants may also try to solve only the most recent m tasks, where m is an integer constant, etc. Yet other oops variants will assign more (or less) than half of the total time to the most recent code and prolongations thereof. We may also consider probabilistic oops variants in Speed-Prior style (Schmidhuber, 2000, 2002e).

One not necessarily useful idea: Suppose the number of tasks to be solved by a single program is known in advance. Now we might think of an OOPS variant that works on all tasks in parallel, again spending half the search time on programs starting at a_last, half on programs starting at a_frozen + 1; whenever one of the tasks is solved by a prolongation of q_{a_last:a_frozen} (usually we cannot know in advance which task), we remove it from the current task ring and freeze the code generated so far, thus increasing a_frozen (in contrast to Try, which does not freeze programs before the entire current task set is solved). If it turns out, however, that not all tasks can be solved by a program starting at a_last, we have to start from scratch by searching only among programs starting at a_frozen + 1. Unfortunately, in general we cannot guarantee that this approach of early freezing will converge.

4.5 Illustrated Informal Recipe for OOPS Initialization

Given some application, before we can switch on oops we have to specify our initial bias.

1. Given a problem sequence, collect primitives that embody the prior knowledge. Make sure one can interrupt any primitive at any time, and that one can undo the effects of (partially) executing it.

For example, if the task is path planning in a robot simulation, one of the primitives might be a program that stretches the virtual robot’s arm until its touch sensors encounter an obstacle. Other primitives may include various traditional AI path planners (Russell and Norvig, 1994), artificial neural networks (Werbos, 1974; Rumelhart et al., 1986; Bishop, 1995) or support vector machines (Vapnik, 1992) for classifying sensory data written into temporary internal storage, as well as instructions for repeating the most recent action until some sensory condition is met, etc.

2. Insert additional prior bias by defining the rules of an initial probabilistic programming language for combining primitives into complex sequential programs.

For example, a probabilistic syntax diagram may specify high probability for executing the robot’s stretch-arm primitive, given some classification of a sensory input that was written into temporary, task-specific memory by some previously invoked classifier primitive.

3. To complete the bias initialization, add primitives for addressing/calling/copying & editing previously frozen programs, and for temporarily modifying the probabilistic rules of the language (that is, these rules should be represented in modifiable task-specific memory as well). Extend the initial rules of the language to accommodate the additional primitives.

For example, there may be a primitive that counts the frequency of certain primitive combinations in previously frozen programs, and temporarily increases the probability of the most frequent ones. Another primitive may conduct a more sophisticated but also more time-consuming Bayesian analysis, and write its result into task-specific storage such that it can be read by subsequent primitives. Primitives for editing code may invoke variants of Evolutionary Computation (Rechenberg, 1971; Schwefel, 1974), Genetic Algorithms (Holland, 1975), Genetic Programming (Cramer, 1985; Banzhaf et al., 1998), Ant Colony Optimization (Gambardella and Dorigo, 2000; Dorigo et al., 1999), etc.

4. Use oops, which invokes Try, to bias-optimally spend your limited computation time on solving your problem sequence.

The experiments (Section 6) will use assembler-like primitives that are much simpler (and thus in a certain sense less biased) than those mentioned in the robot example above. They will suffice, however, to illustrate the basic principles.

4.6 Example Initial Programming Language

“If it isn’t 100 times smaller than ’C’ it isn’t Forth.” (Charles Moore)

The efficient search and backtracking mechanism described in Section 4.1 is designed for a broad variety of possible programming languages, possibly list-oriented such as LISP, or based on matrix operations for recurrent neural network-like parallel architectures. Many other alternatives are possible.

A given language is represented by Q, the set of initial tokens. Each token corresponds to a primitive instruction. Primitive instructions are computer programs that manipulate tape contents, to be composed by oops such that more complex programs result. In principle, the “primitives” themselves could be large and time-consuming software, such as, say, traditional AI planners, or theorem provers, or multiagent update procedures, or learning algorithms for neural networks represented on tapes. Compare Section 4.5.

For each instruction there is a unique number between 1 and n_Q, such that all such numbers are associated with exactly one instruction. Initial knowledge or bias is introduced by writing appropriate primitives and adding them to Q. Step 1 of procedure Try (see Section 4.1) translates any instruction number back into the corresponding executable code (in our particular implementation: a pointer to a C function). If the presently executed instruction does not directly affect instruction pointer ip(r), e.g., through a conditional jump, or the call of a function, or the return from a function call, then ip(r) is simply incremented.
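This dispatch scheme can be sketched as follows, with a dict of numbered primitives standing in for the C function-pointer table (the primitives and the `run` helper are invented for illustration); the default increment is skipped whenever a primitive changed ip itself:

```python
# Sketch of token dispatch: code is a list of instruction numbers, each
# mapped to a primitive acting on the task state; ip advances by default
# unless the instruction (e.g., a conditional jump) moved it.
def run(code, state, primitives, max_steps=100):
    steps = 0
    while 0 <= state['ip'] < len(code) and steps < max_steps:
        ip_before = state['ip']
        primitives[code[state['ip']]](state)   # execute primitive by number
        if state['ip'] == ip_before:
            state['ip'] += 1                   # default: advance the pointer
        steps += 1
    return state

primitives = {
    1: lambda s: s.__setitem__('acc', s['acc'] + 1),                    # inc
    2: lambda s: s.__setitem__('acc', s['acc'] * 2),                    # dbl
    3: lambda s: s.__setitem__('ip', 0) if s['acc'] < 10 else None,     # jump
}

def run_demo(code):
    return run(code, {'ip': 0, 'acc': 0}, primitives)

print(run_demo([1, 2, 3])['acc'])   # loops (inc, dbl) until acc >= 10
```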

Given some choice of programming language/initial primitives, we typically have to write a new interpreter from scratch, instead of using an existing one. Why? Because procedure Try (Section 4.1) needs total control over all (usually hidden and inaccessible) aspects of storage management, including garbage collection etc. Otherwise the storage clean-up in the wake of executed and tested prefixes could become suboptimal.

For the experiments (Section 6) we wrote (in C) an interpreter for an example stack-based, universal programming language inspired by Forth (Moore and Leach, 1970), whose disciples praise its beauty and the compactness of its programs.

The appendix (Section A) describes the details. Data structures on tapes (Section A.1) can be manipulated by primitive instructions listed in Sections A.2.1, A.2.2, A.2.3. Section A.3 shows how the user may compose complex programs from primitive ones, and insert them into total code q. Once the user has declared his programs, n_Q will remain fixed.


5. Limitations and Possible Extensions of OOPS

In what follows we will discuss to which extent “no free lunch theorems” are relevant to oops (Section 5.1), the essential limitations of oops (Section 5.2), and how to use oops for reinforcement learning (Section 5.3).

5.1 How Often Can We Expect to Profit from Earlier Tasks?

How likely is it that any learner can indeed profit from earlier solutions? At first naive glance this seems unlikely, since it has been well known for many decades that most possible pairs of symbol strings (such as problem-solving programs) do not share any algorithmic information; e.g., Li and Vitányi (1997). Why not? Most possible combinations of strings x, y are algorithmically incompressible, that is, the shortest algorithm computing y, given x, has the size of the shortest algorithm computing y, given nothing (typically a bit more than l(y) symbols), which means that x usually does not tell us anything about y. Papers in evolutionary computation often mention no free lunch theorems (Wolpert and Macready, 1997), which are variations of this ancient insight of theoretical computer science.

Such at first glance discouraging theorems, however, have a quite limited scope: they refer to the very special case of problems sampled from i.i.d. uniform distributions on finite problem spaces. But of course there are infinitely many distributions besides the uniform one. In fact, the uniform one is not only extremely unnatural from any computational perspective (although most objects are random, computing random objects is much harder than computing nonrandom ones), but does not even make sense as we increase data set size and let it go to infinity: there is no such thing as a uniform distribution on infinitely many things, such as the integers.

Typically, successive real-world problems are not sampled from uniform distributions. Instead they tend to be closely related. In particular, teachers usually provide sequences of more and more complex tasks with very similar solutions, and in optimization the next task typically is just to outstrip the best approximative solution found so far, given some basic setup that does not change from one task to the next. Problem sequences that humans consider to be interesting are atypical when compared to arbitrary sequences of well-defined problems (Schmidhuber, 1997). In fact, it is no exaggeration to claim that almost the entire field of computer science is focused on comparatively few atypical problem sets with exploitable regularities. For all interesting problems the consideration of previous work is justified, to the extent that interestingness implies relatedness to what’s already known (Schmidhuber, 2002b). Obviously, oops-like procedures are advantageous only where such relatedness does exist. In any case, however, they will at least not do much harm.

5.2 Fundamental Limitations of OOPS

An appropriate task sequence may help oops to reduce the slowdown factor of plain Lsearch through experience. Given a single task, however, oops does not by itself invent an appropriate series of easier subtasks whose solutions should be frozen first. Of course, since both Lsearch and oops may search in general algorithm space, some of the programs they execute may be viewed as self-generated subgoal-definers and subtask solvers. But with a single given task there is no incentive to freeze intermediate solutions before the entire task is solved.

声律启蒙十五删

xīnɡduìfèi,fùduìpān 兴对废,附对攀 lùcǎo duìshuānɡjiān 露草对霜菅 ɡēlián duìjièkòu 歌廉对借寇 xíkǒnɡduìxīyán 习孔对希颜 shān lěi lěi,shuǐchán chán 山垒垒,水潺潺 fènɡbìduìtàn huán 奉璧对探镮 lǐyóu ɡōnɡ dàn zuò 礼由公旦作 shīběn zhònɡníshān 诗本仲尼删lǘkùn kèfānɡjīnɡbàshu ǐ 驴困客方经灞水 jīmínɡrén yǐchūhánɡuān 鸡鸣人已出函关 jǐyèshuānɡfēi 几夜霜飞 yǐyǒu cānɡhónɡcíběi sài 已有苍鸿辞北塞 shùzhāo wùàn 数朝雾暗 qǐwúxuán bào yǐn nán shān 岂无玄豹隐南山 【解析】 兴对废,附对攀,露草对霜菅 兴废,兴盛和衰废。 [南朝梁] 刘勰《文心雕龙.史传》云:”表微盛衰,殷鉴兴废。” 《大宋宣和遗事.元集》云:”上下三千余年,兴废百千万事。” 攀,向上爬;附,靠近,依从。有成语“攀龙附凤”比喻依附权贵以成就功业。亦比喻依附有声望的人以立名。 [汉] 扬雄《法言·渊骞》:“攀龙鳞,附凤翼,巽以扬之,勃勃乎其不可及也。”唐·杜甫《洗兵马》:攀龙附凤势莫当,天下尽化为侯王。 露草:沾露的草。 [唐] 李华《木兰赋》:“露草白兮山凄凄,鹤既唳兮猿復啼。”[清] 谭嗣同《武昌夜泊》诗之二:“露草逼蛩语,霜花凋雁翎。”

霜菅:霜后枯萎的菅草。用以比喻白发。[宋] 苏轼《再用前韵(追饯正辅表兄至博罗赋诗为别)》:“乐天双鬢如霜菅,始知谢遣素与蛮。” [宋] 陆游《怀昔》诗:“岂知堕老境,槁木蒙霜菅。” 歌廉对借寇,习孔对希颜 歌廉歌颂廉范。 《后汉书》记载,东汉名臣廉范,字叔度,任蜀郡太守时为官清廉,更改禁民夜作旧令,让百姓储水以防火,百姓掌灯夜作,日渐丰裕。百姓歌曰:“廉叔度,来何暮,不禁火,民安作,昔无襦,今五衿”。 借寇挽留寇恂。 汉名臣寇恂,字子翼,历任河内、颍川、汝南太守。治理颍川期间颇有政绩,升迁离任后,次年随光武帝再至颍川平寇,所到之处群寇望风而降,百姓们纷纷于帝驾之前拦道,请求再借寇恂在颍川任职一年。后就用“借寇”表示挽留地方官,含有对政绩的称美之意。 习孔希颜:学习孔子,效仿颜回。习、希:都是学习和效仿的意思。 山垒垒,水潺潺 山垒垒垒垒:重叠的样子。《文选·曹丕·善哉行》:“还望故乡,鬱何垒垒。”[明]何景明《雁门太守行》诗云:“垒垒高山,莽莽代谷。” 水潺潺溪水徐徐流动。[三国]曹丕《丹霞蔽日行》云:“谷水潺潺,木落翩翩。” [唐] 杜牧《中秋日拜起居表晨渡天津桥即事十六韵献》诗云:“楼齐云漠漠,桥束水潺潺”。[唐] 李涉《竹枝词》诗云:“荆门滩急水潺潺,两岸猿啼烟满山”。 奉壁对探镮 奉璧即蔺相如“完璧归赵”典故。(参见本系列第十六讲:《作赋观书双雄事,回文锦字几华章?》中“奉璧蔺相如”一句之详解。) 探镮亦作“探环”。《晋书·羊祜传》载,西晋大臣羊祜(此前“羊公德大,邑人竖堕泪之碑”以及“叔子带”都曾讲到他)五岁时,叫乳母把他玩过的金环取来,乳母说:“你没有这种玩具呀!”羊祜就自己爬到邻居李家的树上,

汽车用胶管基本参数

汽车用胶管基本常识 1.胶管按结构通常分为如下3类: 1.1有加强层结构的胶管 1.1.1织物加强结构胶管 1.1.2金属加强结构胶管 1.1.3按增强层结构分为 1.1.3.1夹布胶管:以涂胶织物(或胶布)作为骨架层材料制成的胶管,可在外面加钢线进行固定。 特点:夹布耐压胶管主要是由平纹交织的布料制成的胶布(其经纬密度和强度基本相等),经45°裁断、拼接,并包贴而成。制造工艺简单,对产品规格、层数范围等适应性较强,并具有 管体挺性好等优点。但效率低。 1.1.3.2编织胶管:以各种线材(纤维或金属线)作为骨架层材料,经编织而制成的胶管,称为编织胶管。 特点:编织胶管的编织层通常都按平衡角(54°44′)进行交织而成的,因此这种结构的胶管具有承压性能好、弯曲性能好、与夹布胶管相比材料利用率高。 1.1.3.3缠绕胶管: 以各种线材(纤维或金属线)作为骨架层材料,经缠绕而制成的胶管,称为缠绕胶管。 特点:其特点与编织胶管相似,其承压强度高、耐冲击及屈挠性能好。生产效率高。 1.1.3.4针织胶管: 以棉线或其它纤维作为骨架层材料,经针织而制成的胶管,称为针织胶管。 特点:由针织线沿着与轴成一定的角度交织在内管坯上,其交差点比较稀疏,一般都以单层结构组成 1.1.4常用增强层材料 汽车胶管所用的增强层材料分纤维、金属两大类。 1.1.4.1纤维又分为:天然纤维和合成纤维两类 1.1.4.1.1天然纤维:棉、麻、玻璃纤维、石棉纤维 1.1.4.1.2合成纤维:人造丝、维纶、涤纶、锦纶、芳纶、树脂。 1.1.4.2 纺织纤维的单位通常用D或tex 表示: D是纤维的一种单位,称“丹尼尔”,意思是:单位长度的纤维的重量。 Tex也是纤维的一种单位称“特克斯”与D互为倒数,意思是:单位重量的纤维的长度。 例如:1100D 的芳纶纤维是指1000米的芳纶纤维的重量是1100 克,即每米纤维的重量是1.1克; 同样的纤维用特克斯表示则为0.909 tex,即每克重量的纤维的是0.909米 2.汽车胶管常用的橡胶材料 聚合物名称 英文缩写 性能 应用范围 丙烯酸酯橡胶 ACM 有优越的耐矿物油及耐高温氧化性能,耐油性仅次于氟 胶与中高档的丁腈胶相似,在石油系、动植物油中体积 变化很小,拉伸强度,扯断伸长率,硬度的变化较通用 橡胶小。在175℃下可长期使用抗臭氧性、气密性佳; 不耐水,对水蒸气有机酸、无机酸、碱几乎不能抵抗。 用于现代汽车液压传动系统,承受150℃润滑油的冷却 管 乙烯丙烯酸酯橡胶 AEM 改善了丙烯酸酯的低温性能工作温度可以达到‐40℃ ~175℃其它性能同丙烯酸酯橡胶 氯化丁基橡胶 CIIR 氯化聚乙烯橡胶 CM 环氧氯丙烷橡胶 CO 氯丁橡胶 CR 氯丁胶的强伸性与天然胶相似,属自补强性橡胶,有良 好的耐老化性能,优异的耐燃性,耐油性低于丁腈优于 其它通用橡胶,有导电性,耐水性佳,有良好的粘合性;低温性,贮存稳定性差 氯磺化聚乙烯橡胶 CSM 抗臭氧性优异,耐热性可达150℃,耐化学药品,耐候性,低温特点物理性能,耐燃性好,宜在干燥环境中贮

(完整版)声律启蒙十四寒(详细注解及典故来历)

duō duìshǎo,yì duì nán 多对少,易对难 hǔ jù duì lónɡ pán 虎踞对龙蟠 lónɡzhōu duìfènɡniǎn 龙舟对凤辇 bái hè duìqīnɡ luán 白鹤对青鸾 fēnɡxīxī,lù tuán tuán 风淅淅,露漙漙 xiùɡǔ duìdiāoān 绣毂对雕鞍 yú yóu hé yèzhǎo 鱼游荷叶沼lù lìliǎo huātān 鹭立蓼花滩 yǒu jiǔruǎn diāo xī yònɡjiě 有酒阮貂奚用解 wú yú fénɡ jiá bìxū tán 无鱼冯铗必须弹 dīnɡɡùmènɡsōnɡ 丁固梦松 kē yèhū rán shēnɡ fùshànɡ 柯叶忽然生腹上 wén lánɡ huà zhú 文郎画竹 zhīshāo shūěr zhǎnɡ háo duān 枝梢倏尔长毫端

hán duìshǔ,shī duìgān 寒对暑,湿对干 lǔyǐn duì qí huán 鲁隐对齐桓 hán zhān duìnuǎn xí 寒毡对暖席 yèyǐn duì chén cān 夜饮对晨餐 shūzǐ dài,zhònɡ yóu ɡuān 叔子带,仲由冠 jiárǔ duì hán dān 郏鄏对邯郸 jiā héyōu xià hàn 嘉禾忧夏旱shuāi liǔ nài qiū hán 衰柳耐秋寒 yánɡliǔlǜzhē yuán liànɡ zhái 杨柳绿遮元亮宅 xìnɡhuāhónɡyìnɡzhònɡ ní tán 杏花红映仲尼坛 jiānɡshuǐ liúchánɡ 江水流长 huán rào sìqīnɡ luó dài 环绕似青罗带 hǎi chán lún mǎn 海蟾轮满 chénɡmínɡ rú bái yù pán 澄明如白玉盘 【解析】 寒对暑,湿对干,鲁隐对齐桓。 鲁隐:春秋鲁国第十四代君主,隐公姬息姑。孔子所作之《春秋》就起于鲁隐公元年(前722)。由于春秋以鲁国国史为基础而编,故当时的国际大事都是以鲁国纪年来记录。鲁隐公也因为其纪年年号常被提及而出名。 齐桓:春秋齐国桓公姜小白,是春秋五霸之首。是历史上第一个代替周天子充当盟主的诸侯。齐桓公晚年昏庸,管仲去世后,任用易牙、竖刁等小人,最终在内乱中饿死。 寒毡对暖席 寒毡:唐代画家郑虔,享有“诗书画三绝”之誉,与李白、杜甫为诗酒朋友,却生活清贫。杜甫曾经赠以诗曰:‘才名四十年,坐客寒无毡’云。”后以“寒毡”形容寒士清苦的生活。

声律启蒙全文详解

《声律启蒙》全文详解 一东1 ————————注释———————— 1一东:“东”指“东韵”,是宋金时期的“平水韵”(也叫“诗韵”)中的一个韵部。“东”叫韵目,即这个韵部的代表字。东韵中包含有许多字,它们的共同点便是韵母相同(当然是指隋唐五代两宋时期的读音),像下面的三段文字中,每个句号之前的那个字,即风、空、虫、弓、东、宫、红、翁、同、童、穷、铜、通、融、虹等15字,尽管在现代汉语中的韵母并不完全相同,但都同属于东韵,如果是作格律诗,这些字就可以互相押韵。“一”,是指东韵在平水韵中的次序。平水韵按照平、上、去、人四个声调分为106个韵部,其中因为平声的字较多,故分为上下两个部分,东韵是上平声中的第一个韵部。后面的“二冬”、“三江”等情况也相同,不再一一说明。 云对雨,雪对风。晚照对晴空。来鸿对去燕,宿鸟对鸣虫。 三尺剑,六钧弓1。岭北对江东。人间清暑殿,天上广寒宫2。 两岸晓烟杨柳绿,一园春雨杏花红。 两鬓风霜,途次早行之客;一蓑烟雨,溪边晚钓之翁3。 ————————注释———————— 1这一联是两个典故。上联出自《史记·高祖本纪》。汉朝的开国君主刘邦曾经说:我以普通百姓的身份提着三尺长的宝剑而夺取了天下。下联出自《左传》,鲁国有个勇士叫颜高,他使用的弓为六钧(钧为古代重量单位,一钧三十斤),要用180斤的力气才能拉开。2清暑殿:洛阳的一座宫殿。广寒宫:《明皇杂录》说,唐明皇于中秋之夜游月宫,看见大门上悬挂着“广寒清虚之府”的匾额,后代便以广寒宫代指月宫。3次:军队临时驻扎,引申为一

般的短暂停留。途次,旅途的意思。 沿对革,异对同1。白吏对黄童2。江风对海雾,牧子对渔翁。 颜巷陋,阮途穷3。冀北对辽东。池中濯足水,门外打头风4。 梁帝讲经同泰寺,汉皇置酒未央宫5。 尘虑萦心,懒抚七弦绿绮;霜华满鬓,羞看百炼青铜6 ————————注释———————— 1沿:沿袭、遵照原样去做。革:变化、变革。2黄童:黄口之童,即儿童。黄,黄口,雏鸟的喙边有一圈黄色的边,长大就消失,故以黄口喻指年龄幼小的。3这是两个典故。上联出自《论语·雍也》,颜指颜回(字子渊),孔子的学生。孔子称赞他说:“一箪食、一瓢饮、在陋巷,人不堪其忧,回也不改其乐。贤哉,回也!”(吃一竹筐饭食,喝一瓢凉水,住在偏僻的巷子里,别人忍受不了这种贫穷,颜回不改变他快乐的心情。颜回呀,真是个贤人!)下联出自《晋书·阮籍传》。阮指阮籍(字嗣宗),魏晋时代人,博览群书,好老庄之学,为竹林七贤之一。《晋书》记载,阮籍经常驾车信马由缰地乱走,走到无路可走的时候便大哭而返。穷,到……的尽头,此处指无路可走之处。4濯(音zhuó)足水:屈原《渔父》中有“沧浪之水清兮,可以濯我缨;沧浪之水浊兮,可以濯我足”的句子,故濯足水指污水。打头风:行船时所遇到的逆风。5梁帝:南朝的梁武帝萧衍。他笃信佛教,经常和高僧们在同泰寺研讨佛经。汉皇:汉朝的开国之君刘邦。他曾宴请群臣于长安的未央宫,接受群臣的朝贺。6尘虑:对尘世间琐碎小事的忧虑。萦:缠绕。绿绮:琴名,据说汉代的司马相如曾弹琴向卓文君求爱,卓文君就用绿绮琴应和他。霜华:即霜花(“华”为“花”的古字),借指白发。百炼青铜:借指镜子,古人用青铜镜照面。 贫对富,塞对通。野叟对溪童。鬓皤对眉绿,齿皓对唇红1。 天浩浩,日融融2。佩剑对弯弓3。半溪流水绿,千树落花红。

(完整版)《声律启蒙》最全注解与译文(五微)
