Diverge-Merge Processor (DMP): Dynamic Predicated Execution of Complex Control-Flow Graphs Based on Frequently Executed Paths

Hyesoon Kim    José A. Joao    Onur Mutlu§    Yale N. Patt

Department of Electrical and Computer Engineering, The University of Texas at Austin
{hyesoon, joao, patt}@ece.utexas.edu

§Microsoft Research
onur@microsoft.com

Abstract

This paper proposes a new processor architecture for handling hard-to-predict branches, the diverge-merge processor (DMP). The goal of this paradigm is to eliminate branch mispredictions due to hard-to-predict dynamic branches by dynamically predicating them without requiring ISA support for predicate registers and predicated instructions. To achieve this without incurring large hardware cost and complexity, the compiler provides control-flow information by hints and the processor dynamically predicates instructions only on frequently executed program paths. The key insight behind DMP is that most control-flow graphs look and behave like simple hammock (if-else) structures when only frequently executed paths in the graphs are considered. Therefore, DMP can dynamically predicate a much larger set of branches than simple hammock branches.

Our evaluations show that DMP outperforms a baseline processor with an aggressive branch predictor by 19.3% on average over the SPEC integer 95 and 2000 benchmarks, through a 38% reduction in pipeline flushes due to branch mispredictions, while consuming 9.0% less energy. We also compare DMP with previously proposed predication and dual-path/multipath execution paradigms in terms of performance, complexity, and energy consumption, and find that DMP is the highest performance and also the most energy-efficient design.

1. Introduction

State-of-the-art high performance processors employ deep pipelines to extract instruction level parallelism (ILP) and to support high clock frequencies. In the near future, processors are expected to support a large number of in-flight instructions [30, 42, 10, 7, 13] to extract both ILP and memory-level parallelism (MLP). As shown by previous research [27, 40, 41, 30, 42], the performance improvement provided by both pipelining and large instruction windows critically depends on the accuracy of the processor's branch predictor. Branch predictors still remain imperfect despite decades of intensive research in branch prediction. Hard-to-predict branches not only limit processor performance but also result in wasted energy consumption.

Predication has been used to avoid pipeline flushes due to branch mispredictions by converting control dependencies into data dependencies [2]. With predication, the processor fetches instructions from both paths of a branch but commits only results from the correct path, effectively avoiding the pipeline flush associated with a branch misprediction. However, predication has the following problems/limitations:

1. It requires significant support (i.e. predicate registers and predicated instructions) in the instruction set architecture (ISA).

2. Statically predicated code incurs the performance overhead of predicated execution regardless of whether a branch is easy to predict or hard to predict at run-time. The overhead of predicated code is twofold: (i) the processor always has to fetch instructions from both paths of an if-converted branch, (ii) the processor cannot execute predicated instructions or instructions that are dependent on them until the predicate value is resolved, causing additional execution delays.

1 If the compiler does not predicate all basic blocks between A and H because one of the branches is easy to predict, then the remaining easy-to-predict branch is likely to become a hard-to-predict branch after if-conversion. This problem is called misprediction migration [3, 39]. Therefore, the compiler (e.g. ORC [31]) usually predicates all control-flow dependent basic blocks inside a region (the region is A, B, C, D, E, F, G and H in this example). This problem can be mitigated with reverse if-conversion [46, 4] or by incorporating predicate information into the branch history register [3].

B, C, and E. To simplify the hardware, DMP uses some control-flow information provided by the compiler. The compiler identifies and marks suitable branches as candidates for dynamic predication. These branches are called diverge branches. The compiler also selects a control-flow merge (or reconvergence) point corresponding to each diverge branch. In this example, the compiler marks the branch at block A as a diverge branch and the entry of block H as a control-flow merge (CFM) point. Instead of the compiler specifying which blocks are predicated (and thus fetched), the processor decides what to fetch/predicate at run-time. If a diverge branch is estimated to be low-confidence at run-time, the processor follows and dynamically predicates both paths after the branch until the CFM point. The processor follows the branch predictor outcomes on the two paths to fetch only the frequently executed blocks between a diverge branch and a CFM point.

Figure 1. Control-flow graph (CFG) example: (a) source code (b) CFG (c) possible paths (hammocks) that can be predicated by DMP

The compiler could predicate only blocks B, C, and E based on profiling [29] rather than predicating all control-dependent blocks. Unfortunately, frequently executed paths change at run-time (depending on the input data set and program phase), and code predicated for only a few paths can hurt performance if other paths turn out to be frequently executed. In contrast, DMP determines and follows frequently executed paths at run-time and therefore it can flexibly adapt its dynamic predication to run-time changes (Figure 1c shows the possible hammock-shaped paths that can be predicated by DMP for the example control-flow graph). Thus, DMP can dynamically predicate hard-to-predict instances of a branch with less overhead than static predication and with minimal support from the compiler. Furthermore, DMP can predicate a much wider range of control-flow graphs than dynamic-hammock-predication [23] because a control-flow graph does not have to be a simple if-else structure to be dynamically predicated; it just needs to look like a simple hammock when only frequently executed paths are considered.

Our evaluation shows that DMP improves performance by 19.3% over a baseline processor that uses an aggressive 64KB branch predictor, without significantly increasing maximum power requirements. DMP reduces the number of pipeline flushes by 38%, which results in a 23% reduction in the number of fetched instructions and a 9.0% reduction in dynamic energy consumption. This paper provides a detailed description and analysis of DMP as well as a comparison of its performance, hardware complexity, and power/energy consumption with several previously published branch processing paradigms.

2. The Diverge-Merge Concept

2.1. The Basic Idea

The compiler identifies conditional branches with control flow suitable for dynamic predication as diverge branches. A diverge branch is a branch instruction after which the execution of the program usually reconverges at a control-independent point in the control-flow graph, a point we call the control-flow merge (CFM) point. In other words, diverge branches result in hammock-shaped control flow based on frequently executed paths in the control-flow graph of the program, but they are not necessarily simple hammock branches that require the control-flow graph to be hammock-shaped. The compiler also identifies a CFM point associated with the diverge branch. Diverge branches and CFM points are conveyed to the microarchitecture through modifications in the ISA, which are described in Section 3.11.

When the processor fetches a diverge branch, it estimates whether or not the branch is hard to predict using a branch confidence estimator. If the diverge branch has low confidence, the processor enters dynamic predication mode (dpred-mode). In this mode, the processor fetches both paths after the diverge branch and dynamically predicates instructions between the diverge branch and the CFM point. On each path, the processor follows the branch predictor outcomes until it reaches the CFM point. After the processor reaches the CFM point on both paths, it exits dpred-mode and starts to fetch from only one path. If the diverge branch is actually mispredicted, then the processor does not need to flush its pipeline since instructions on both paths of the branch are already fetched and the instructions on the wrong path will become NOPs through dynamic predication.

In this section, we describe the basic concepts of the three major mechanisms to support diverge-merge processing: instruction fetch support, select-μops, and loop branches. A detailed implementation of DMP is described in Section 3.

2.1.1. Instruction Fetch Support  In dpred-mode, the processor fetches instructions from both directions (taken and not-taken paths) of a diverge branch using two program counter (PC) registers and a round-robin scheme to fetch from the two paths in alternate cycles. On each path, the processor follows the outcomes of the branch predictor. Note that the outcomes of the branch predictor favor the frequently executed basic blocks in the control-flow graph. The processor uses a separate global branch history register (GHR) to predict the next fetch address on each path, and it checks whether the predicted next fetch address is the CFM point of the diverge branch.2 If the processor reaches the CFM point on one path, it stops fetching from that path and fetches from only the other path. When the processor reaches the CFM point on both paths, it exits dpred-mode.
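To make the mechanism concrete, the following C++ sketch shows one way the dual-PC, round-robin fetch of dpred-mode could be organized. It is an illustration of the description above, not the paper's implementation; the types and the helper functions (predict_next_pc, fetch_block) are assumptions.

```cpp
#include <array>
#include <cstdint>

// Per-path fetch context: one PC and one GHR for each side of the diverge branch.
struct PathState {
    uint64_t pc;           // next fetch address on this path
    uint64_t ghr;          // per-path global branch history register
    bool     reached_cfm;  // set once the predicted next fetch address hits the CFM point
};

// Placeholder predictor/fetch hooks (assumed interfaces, not the paper's).
uint64_t predict_next_pc(uint64_t pc, uint64_t &ghr) { ghr <<= 1; return pc + 4; }
void     fetch_block(uint64_t /*pc*/, int /*path_id*/) {}

// One fetch cycle in dpred-mode: alternate between the two paths, follow the
// branch predictor on each path, and stop a path when it reaches the CFM point.
void dpred_fetch_cycle(std::array<PathState, 2> &paths, uint64_t cfm_point, int cycle) {
    int p = cycle & 1;                 // round-robin between taken and not-taken paths
    if (paths[p].reached_cfm) p ^= 1;  // one path already at the CFM: fetch the other
    if (paths[p].reached_cfm) return;  // both paths at the CFM: dpred-mode exits

    fetch_block(paths[p].pc, p);
    uint64_t next = predict_next_pc(paths[p].pc, paths[p].ghr);
    if (next == cfm_point)
        paths[p].reached_cfm = true;   // stop fetching from this path
    paths[p].pc = next;
}
```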

2.1.2. Select-μops  Instructions after the CFM point should have data dependencies on instructions from only the correct path of a diverge branch. Before the diverge branch is executed, the processor does not know which path is correct. Instead of waiting for the resolution of the diverge branch, the processor inserts select-μops to continue renaming/execution after exiting dpred-mode. Select-μops are similar to the φ-functions in the static single-assignment (SSA) form [14] in that they "merge" the register values produced on both sides of the hammock.3 Select-μops ensure that instructions dependent on the register values produced on either side of the hammock are supplied with the correct data values that depend on the correct direction of the diverge branch. After inserting select-μops, the processor can continue fetching and renaming instructions. If an instruction fetched after the CFM point is dependent on a register produced on either side of the hammock, it sources (i.e. depends on) the output of a select-μop. Such an instruction will be executed after the diverge branch is resolved. However, instructions that are not dependent on select-μops are executed as soon as their sources are ready, without waiting for the resolution of the diverge branch. Figure 2 illustrates the dynamic predication process. Note that instructions in blocks C, B, and E, which are fetched during dpred-mode, are also executed before the resolution of the diverge branch.

Figure 2. An example of how the instruction stream in Figure 1b is dynamically predicated: (a) fetched blocks (b) fetched assembly instructions (c) instructions after register renaming

2.1.3. Loop Branches  DMP can dynamically predicate loop branches. The benefit of dynamically predicating loop branches using DMP is very similar to the benefit of wish loops [22]. The key mechanism to predicate a loop-type diverge branch is that the processor needs to predicate each loop iteration separately. This is accomplished by using a different predicate register for each iteration and inserting select-μops after each iteration. Select-μops choose between live-out register values before and after the execution of a loop iteration, based on the outcome of each dynamic instance of the loop branch. Instructions that are executed in later iterations and that are dependent on live-outs of previous predicated iterations source the outputs of select-μops. Similarly, instructions that are fetched after the processor exits the loop and that are dependent on registers produced within the loop source the outputs of select-μops so that they receive the correct source values even though the loop branch may be mispredicted. The pipeline does not need to be flushed if a predicated loop is iterated more times than it should be, because the predicated instructions in the extra loop iterations will become NOPs and the live-out values from the correct last iteration will be propagated to dependent instructions via select-μops. Figure 3 illustrates the dynamic predication process of a loop-type diverge branch (the processor enters dpred-mode after the first iteration and exits after the third iteration).

Figure 3. An example of how a loop-type diverge branch is dynamically predicated: (a) CFG (b) fetched assembly instructions (c) instructions after register renaming

There is a negative effect of predicating loops: instructions that source the results of a previous loop iteration (i.e. loop-carried dependencies) cannot be executed until the loop-type diverge branch is resolved because such instructions are dependent on select-μops. However, we found that the negative effect of this execution delay is much less than the benefit of reducing pipeline flushes due to loop branch mispredictions. Note that the dynamic predication of a loop does not provide any performance benefit if the branch predictor iterates the loop fewer times than required by correct execution, or if the predictor has not exited the loop by the time the loop branch is resolved.
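A software analogy may help clarify how the per-iteration select-μops preserve correctness when the loop is over-iterated. The sketch below (C++, purely illustrative and not the hardware mechanism) treats each predicated iteration i as having its own predicate p[i]; the chained selects keep the live-out from the last truly executed iteration, so extra predicated iterations have no architectural effect.

```cpp
#include <vector>

// body_results[i]: the live-out value the i-th predicated iteration would produce.
// p[i]: resolved predicate of the i-th iteration (true = the loop really iterated).
// The chained selects mirror the select-uops inserted after each iteration:
// if an iteration was over-speculated (p[i] == false), its value is discarded
// and the live-out from the previous iteration is kept.
int loop_live_out(int live_out_before_loop,
                  const std::vector<int> &body_results,
                  const std::vector<bool> &p) {
    int live_out = live_out_before_loop;
    for (size_t i = 0; i < p.size(); ++i)
        live_out = p[i] ? body_results[i] : live_out;  // select-uop for iteration i
    return live_out;  // value seen by instructions fetched after the loop exit
}
```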

2.2. DMP vs. Other Branch Processing Paradigms

We compare DMP with five previously proposed mechanisms in the predication and multipath execution paradigms: dynamic-hammock-predication [23], software predication [2, 32], wish branches [22], selective/limited dual-path execution (dual-path) [18, 15], and multipath/PolyPath execution (multipath) [34, 25]. First, we classify control-flow graphs (CFGs) into five different categories to illustrate the differences between these mechanisms more clearly.

Figure 4 shows examples of the five different CFG types. Simple hammock (Figure 4a) is an if or if-else structure that does not have any nested branches inside the hammock. Nested hammock (Figure 4b) is an if-else structure that has multiple levels of nested branches. Frequently-hammock (Figure 4c) is a CFG that becomes a simple hammock if we consider only frequently executed paths. Loop (Figure 4d) is a cyclic CFG (for, do-while, or while structure). Non-merging control-flow (Figure 4e) is a CFG that does not have a control-flow merge point even if we consider only frequently executed paths.4 Figure 5 shows the frequency of branch mispredictions due to each CFG type. Table 1 summarizes which blocks are fetched/predicated in different processing models for each CFG type, assuming that the branch in block A is hard to predict.

Figure 4. Control-flow graphs: (a) simple hammock (b) nested hammock (c) frequently-hammock (d) loop (e) non-merging control flow

Dynamic-hammock-predication can predicate only simple hammocks, which account for 12% of all mispredicted branches. Simple hammocks by themselves account for a significant percentage of mispredictions in only two benchmarks: vpr (40%) and twolf (36%). We expect dynamic-hammock-predication to improve the performance of these two benchmarks.

Software predication can predicate both simple and nested hammocks, which in total account for 16% of all mispredicted branches. Software predication fetches all basic blocks between an if-converted branch and the corresponding control-flow merge point. For example, in the nested hammock case (Figure 4b), software predication fetches blocks B, C, D, E, F, G, H, and I, whereas DMP fetches blocks B, C, D, G, H, and I. Current compilers usually do not predicate frequently-hammocks since the overhead of predicated code would be too high if

Table 1. Fetched instructions in different processing models (after the branch at A is estimated to be low-confidence). We assume that the loop branch in block A (Figure 4d) is predicted taken twice after it is estimated to be low-confidence.

Processing model      | simple hammock                   | frequently-hammock             | non-merging
DMP                   | B,C,D,E,F                        | B,C,D,E,H                      | can't predicate
Software predication  | B,C,D,E,F                        | usually don't/can't predicate  | can't predicate
Dual-path             | path1: B,D,E,F  path2: C,D,E,F   | path1: B,D,E,H  path2: C,E,H   | path1: B...  path2: C...

Figure 5. Distribution of mispredicted branches based on CFG type

these CFGs include function calls, cyclic control-flow, too many exit points, or too many instructions [2, 32, 44, 28, 9, 31]. Note that hyperblock formation [29] can predicate frequently-hammocks at the cost of increased code size, but it is not an adaptive technique because frequently executed basic blocks change at run-time. Even if we assume that software predication can predicate all frequently-hammocks, it could predicate up to 56% of all mispredicted branches.

Wish branches can predicate even loops, which account for 10% of all mispredicted branches, in addition to what software predication can do. The main difference between wish branches and software predication is that the wish branch mechanism can selectively predicate each dynamic instance of a branch. With wish branches, a branch is predicated only if it is hard to predict at run-time, whereas with software predication a branch is predicated for all its dynamic instances. Thus, wish branches reduce the overhead of software predication. However, even with wish branches, all basic blocks between an if-converted branch and the corresponding CFM point are fetched/predicated. Therefore, wish branches also have higher performance overhead for nested hammocks than DMP.

Note that software predication (and wish branches) can eliminate a branch misprediction due to a branch that is control-dependent on another hard-to-predict branch (e.g. the branch at B is control-dependent on the branch at A in Figure 4b), since it predicates all the basic blocks within a nested hammock. This benefit is not possible with any of the other paradigms except multipath, but we found that it provides significant performance benefit only in two benchmarks (3% in twolf, 2% in go).

Selective/limited dual-path execution fetches from two paths after a hard-to-predict branch. The instructions on the wrong path are selectively flushed when the branch is resolved. Dual-path execution is applicable to any kind of CFG because the control-flow does not have to reconverge. Hence, dual-path can potentially eliminate the branch misprediction penalty for all five CFG types. However, the dual-path mechanism needs to fetch a larger number of instructions than any of the other mechanisms (except multipath) because it continues fetching two paths until the hard-to-predict branch is resolved, even though the processor may have already reached a control-independent point in the CFG. For example, in the simple hammock case (Figure 4a), DMP fetches blocks D, E, and F only once, but dual-path fetches D, E, and F twice (once for each path). Therefore, the overhead of dual-path is much higher than that of DMP. Detailed comparisons of the overhead and performance of different processing models are provided in Section 5.

Multipath execution is a generalized form of dual-path execution in that it fetches both paths after every low-confidence branch, and it can execute along many (more than two) different paths at the same time. This increases the probability of having the correct path in the processor's instruction window. However, only one of the outstanding paths is the correct path, and instructions on every other path have to be flushed. Furthermore, instructions after a control-flow independent point have to be fetched/executed separately for each path (like dual-path but unlike DMP), which causes the processing resources to be wasted for instructions on all paths but one. For example, if the number of outstanding paths is 8, then a multipath processor wastes 87.5% of its fetch/execution resources for wrong-path/useless instructions even after a control-independent point. Hence, the overhead of multipath is much higher than that of DMP. In the example of Table 1, the behavior of multipath is the same as that of dual-path because the example assumes there is only one hard-to-predict branch, to simplify the explanation.

DMP can predicate simple hammocks, nested hammocks, frequently-hammocks, and loops. On average, these four CFG types account for 66% of all branch mispredictions. The number of fetched instructions in DMP is less than or equal to that of the other mechanisms for all CFG types, as shown in Table 1. Hence, we expect DMP to eliminate branch mispredictions more efficiently (i.e. with less overhead) than the other processing paradigms.

3. Implementation of DMP

3.1. Entering Dynamic Predication Mode

The diverge-merge processor enters dynamic predication mode (dpred-mode) if a diverge branch is estimated to be low-confidence at run-time.5 When the processor enters dpred-mode, it needs to do the following:

1. The front-end stores the address of the CFM point associated with the diverge branch into a buffer called the CFM register. The processor also marks the diverge branch as the branch that caused entry into dpred-mode.

2. The front-end forks (i.e. creates a copy of) the return address stack (RAS) and the GHR when the processor enters dpred-mode. In dpred-mode, the processor accesses the same branch predictor table with two different GHRs (one for each path), but only correct-path instructions update the table after they commit. A separate RAS is needed for each path. The processor forks the register alias table (RAT) when the diverge branch is renamed so that each path uses a separate RAT for register renaming in dpred-mode. This hardware support is similar to the dual-path execution mechanisms [1].

3. The front-end allocates a predicate register for the initiated dpred-mode. An instruction fetched in dpred-mode carries the predicate register identifier (id) with an extra bit indicating whether the instruction is on the taken or the not-taken path of the diverge branch.
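The following C++ sketch summarizes the state that is set up on entry into dpred-mode, as listed above. It is a simplified illustration; the structure and field names are assumptions, not the paper's design.

```cpp
#include <cstdint>
#include <vector>

struct FetchState {
    std::vector<uint64_t> ras;  // return address stack
    uint64_t              ghr;  // global branch history register
};

struct DpredContext {
    uint64_t   cfm_point;       // CFM register: address where the two paths remerge
    int        predicate_id;    // predicate register allocated for this dpred-mode
    FetchState path[2];         // forked RAS/GHR, one copy per path
    // The register alias table (RAT) is forked later, when the diverge branch
    // reaches the rename stage, so that each path renames with its own RAT copy.
};

DpredContext enter_dpred_mode(const FetchState &current, uint64_t cfm_point,
                              int free_predicate_id) {
    DpredContext ctx;
    ctx.cfm_point    = cfm_point;          // step 1: remember the CFM point
    ctx.path[0]      = current;            // step 2: fork RAS and GHR for the taken
    ctx.path[1]      = current;            //         and not-taken paths
    ctx.predicate_id = free_predicate_id;  // step 3: allocate a predicate register
    return ctx;
}
```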

3.2. Multiple CFM points

DMP can support more than one CFM point for a diverge branch to enable the predication of dynamic hammocks that start from the same branch but end at different control-independent points. The compiler provides multiple CFM points. At run-time, the processor chooses the CFM point reached first on any path of the diverge branch and uses it to end dpred-mode. To support multiple CFM points, the CFM register is extended to hold multiple CFM-point addresses.

3.3. Exiting Dynamic Predication Mode

DMP exits dpred-mode when either (1) both paths of a diverge branch have reached the corresponding CFM point or (2) a diverge branch is resolved. The processor marks the last instruction fetched in dpred-mode (i.e. the last predicated instruction). The last predicated instruction triggers the insertion of select-μops after it is renamed.

DMP employs two policies to exit dpred-mode early to increase the benefit and reduce the overhead of dynamic predication:

1. Counter Policy: CFM points are chosen based on frequently executed paths determined through compile-time profiling. At run-time, the processor might not reach a CFM point if the branch predictor predicts that a different path should be executed. For example, in Figure 4c, the processor could fetch blocks C and F. In that case, the processor never reaches the CFM point and hence continuing dynamic predication is less likely to provide benefit. To stop dynamic predication early (before the diverge branch is resolved) in such cases, we use a heuristic. If the processor does not reach the CFM point until a certain number of instructions (N) are fetched on either of the two paths, it exits dpred-mode. N can be a single global threshold or it can be chosen by the compiler for each diverge branch. We found that a per-branch threshold provides 2.3% higher performance than a global threshold because the number of instructions executed to reach the CFM point varies across diverge branches. After exiting dpred-mode early, the processor continues to fetch from only the predicted direction of the diverge branch.

2. Yield Policy: DMP fetches only two paths at the same time. If the processor encounters another low-confidence diverge branch during dpred-mode, it has two choices: it either treats the branch as a normal (non-diverge) branch or exits dpred-mode for the earlier diverge branch and enters dpred-mode for the later branch. We found that a low-confidence diverge branch seen on the predicted path of a dpred-mode-causing diverge branch usually has a higher probability of being mispredicted than the dpred-mode-causing diverge branch. Moreover, dynamically predicating the later control-flow dependent diverge branch usually has less overhead than predicating the earlier diverge branch because the number of instructions inside the CFG of the later branch is smaller (since the later branch is usually a nested branch of the previous diverge branch). Therefore, our DMP implementation exits dpred-mode for the earlier diverge branch and enters dpred-mode for the later diverge branch.
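The two early-exit policies amount to simple checks performed while in dpred-mode. The sketch below (C++) is a hedged illustration of those checks; the structure and the threshold name N are assumptions.

```cpp
struct DpredStatus {
    int  insts_fetched[2];  // instructions fetched so far on each path
    bool reached_cfm[2];    // whether each path has reached the CFM point
};

// Counter policy: exit early if either path has fetched more than N instructions
// without reaching the CFM point. N may be one global threshold or a per-branch
// threshold selected by the compiler.
bool counter_policy_exit(const DpredStatus &s, int N) {
    for (int p = 0; p < 2; ++p)
        if (!s.reached_cfm[p] && s.insts_fetched[p] > N)
            return true;   // continue fetching only the predicted direction
    return false;
}

// Yield policy: if another low-confidence diverge branch is fetched while in
// dpred-mode, abandon the current dpred-mode and start one for the new branch.
bool yield_policy_exit(bool fetched_low_conf_diverge_branch) {
    return fetched_low_conf_diverge_branch;
}
```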

3.4. Select-μop Mechanism

Select-μops are inserted when the processor reaches the CFM point on both paths. Select-μops choose data values that were produced from the two paths of a diverge branch so that instructions after the CFM point receive correct data values from select-μops. Our select-μop generation mechanism is similar to Wang et al.'s [45]. However, our scheme is simpler than theirs because it needs to compare only two RATs to generate the select-μops. A possible implementation of our scheme is explained below.

When a diverge branch that caused entry into dpred-mode reaches the renaming stage, the processor forks the RAT. The processor uses two different RATs, one for each path of the diverge branch. We extend the RAT with one extra bit (M - modified -) per entry to indicate that the corresponding architectural register has been renamed in dpred-mode. Upon entering dpred-mode, all M bits are cleared. When an architectural register is renamed in dpred-mode, its M bit is set.

When the last predicated instruction reaches the register renaming stage, the select-μop insertion logic compares the two RATs.6 If the M bit is set for an architectural register in either of the two RATs, a select-μop is inserted to choose, according to the predicate register value, between the two physical registers assigned to that architectural register in the two RATs. A select-μop allocates a new physical register (PRnew) for the architectural register. Conceptually, the operation of a select-μop can be summarized as PRnew = (predicate value) ? PRT : PRNT, where PRT (PRNT) is the physical register assigned to the architectural register in the RAT of the taken (not-taken) path.

A select-μop is executed when the predicate value and the selected source operand are ready. As a performance optimization, a select-μop does not wait for a source register that will not be selected. Note that the select-μop generation logic operates in parallel with work done in other pipeline stages and its implementation does not increase the pipeline depth of the processor.
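A possible rendering of the RAT-comparison step in C++ is shown below; it follows the description above, but the data structures and the free-list helper are assumptions.

```cpp
#include <vector>

struct RatEntry {
    int  phys_reg;    // current architectural-to-physical mapping
    bool modified;    // M bit: register was renamed during dpred-mode
};

struct SelectUop {
    int dest;         // PRnew: newly allocated physical register
    int src_taken;    // PRT: mapping in the taken-path RAT
    int src_nottaken; // PRNT: mapping in the not-taken-path RAT
    int predicate_id;
};

// Placeholder for the physical register free list (assumed, for illustration).
int allocate_phys_reg() { static int next_free = 256; return next_free++; }

// Compare the two RATs when the last predicated instruction reaches rename:
// one select-uop per architectural register whose M bit is set in either RAT.
std::vector<SelectUop> insert_select_uops(const std::vector<RatEntry> &rat_taken,
                                          const std::vector<RatEntry> &rat_nottaken,
                                          std::vector<RatEntry> &merged_rat,
                                          int predicate_id) {
    std::vector<SelectUop> uops;
    for (size_t r = 0; r < rat_taken.size(); ++r) {
        if (!rat_taken[r].modified && !rat_nottaken[r].modified)
            continue;                               // untouched on both paths
        SelectUop u;
        u.dest         = allocate_phys_reg();       // PRnew = predicate ? PRT : PRNT
        u.src_taken    = rat_taken[r].phys_reg;
        u.src_nottaken = rat_nottaken[r].phys_reg;
        u.predicate_id = predicate_id;
        merged_rat[r]  = {u.dest, false};           // post-CFM instructions read PRnew
        uops.push_back(u);
    }
    return uops;
}
```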

3.5. Handling Loop Branches

Loop branches are treated differently from non-loop branches. One direction of a loop branch is the exit of the loop and the other direction is one more iteration of the loop. When the processor enters dpred-mode for a loop branch, only one path (the loop iteration direction) is executed and the processor will fetch the same static loop branch again. Entering dpred-mode for a loop branch always implies the execution of one more loop iteration.

The processor enters dpred-mode for a loop if the loop-type diverge branch is low confidence. When the processor fetches the same static loop branch again during dpred-mode, it exits dpred-mode and inserts select-μops. If the branch is predicted to iterate the loop once more, the processor enters dpred-mode again with a different predicate register id,7 regardless of the confidence of the branch prediction. In other words, once the processor dynamically predicates one iteration of the loop, it continues to dynamically predicate the iterations until the loop is exited by the branch predictor. The processor stores the predicate register ids associated with the same static loop branch in a small buffer and these are later used when the branch is resolved, as we will describe in Section 3.6. If the branch is predicted to exit the loop, the processor does not enter dpred-mode again but starts to fetch from the exit of the loop after inserting select-μops.

3.6. Resolution of Diverge Branches

When a diverge branch that caused entry into dpred-mode is resolved, the processor does the following:

1. It broadcasts the predicate register id of the diverge branch with the correct branch direction (taken or not-taken). Instructions with the same predicate id and the same direction are said to be predicated-TRUE, and those with the same predicate id but different direction are said to be predicated-FALSE.

2. If the processor is still in dpred-mode for that predicate register id, it simply exits dpred-mode and continues fetching only from the correct path as determined by the resolved branch. If the processor has already exited dpred-mode, it does not need to take any special action. In either case, the pipeline is not flushed.

3. If a loop-type diverge branch exits the loop (i.e. is resolved as not-taken in a backward loop), the processor also broadcasts the predicate ids that were assigned for later loop iterations along with the correct branch direction in consecutive cycles.8 This ensures that the select-μops after each later loop iteration choose the correct live-out values.

DMP flushes its pipeline for any mispredicted branch that did not cause entry into dpred-mode, such as a mispredicted branch that was fetched in dpred-mode and turned out to be predicated-TRUE.
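The resolution steps above can be summarized by a small broadcast routine. The C++ sketch below is illustrative only; the in-flight-instruction representation is an assumption.

```cpp
#include <vector>

struct PredInst {
    int  predicate_id;   // dpred-mode episode this instruction belongs to
    bool on_taken_path;  // which side of the diverge branch it was fetched on
    bool pred_true;      // resolved predicated-TRUE: results may update state
    bool pred_false;     // resolved predicated-FALSE: becomes a NOP at retirement
};

// Broadcast the resolved direction of the diverge branch to all instructions
// carrying its predicate id. No pipeline flush happens in either case.
void resolve_diverge_branch(std::vector<PredInst> &window,
                            int predicate_id, bool branch_taken) {
    for (PredInst &i : window) {
        if (i.predicate_id != predicate_id)
            continue;
        if (i.on_taken_path == branch_taken)
            i.pred_true = true;
        else
            i.pred_false = true;
    }
    // For a loop-type diverge branch that exits the loop, the ids assigned to
    // later predicated iterations are broadcast (with the correct direction)
    // in consecutive cycles, so their select-uops pick the right live-outs.
}
```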

3.7. Instruction Execution and Retirement

Dynamically predicated instructions are executed just like other instructions (except for store-load forwarding, described in Section 3.8). Since these instructions depend on the predicate value only for retirement purposes, they can be executed before the predicate value (i.e. the diverge branch) is resolved. If the predicate value is known to be FALSE, the processor does not need to execute the instructions or allocate resources for them. Nonetheless, all predicated instructions consume retirement bandwidth. When a predicated-FALSE instruction is ready to be retired, the processor simply frees the physical register (along with other resources) allocated for that instruction and does not update the architectural state with its results.9 The predicate register associated with dpred-mode is released when the last predicated instruction is retired.

3.8. Load and Store Instructions

Dynamically predicated load instructions are executed like normal load instructions. Dynamically predicated store instructions are sent to the store buffer with their predicate register id. However, a predicated store instruction is not sent further down the memory system (i.e. into the caches) until it is known to be predicated-TRUE. The processor drops all predicated-FALSE store requests. Thus, DMP requires the store buffer logic to check the predicate register value before sending a store request to the memory system.

DMP requires support in the store-load forwarding logic. The forwarding logic should check not only the addresses but also the predicate register ids. The logic can forward from: (1) a non-predicated store to any later load, (2) a predicated store whose predicate register value is known to be TRUE to any later load, or (3) a predicated store whose predicate register is not ready to a later load with the same predicate register id (i.e. on the same dynamically predicated path).
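The three forwarding cases translate directly into a predicate-aware check in the store buffer. The C++ sketch below illustrates that check; the enum and field names are assumptions, not the paper's structures.

```cpp
// Predicate state of a store buffer entry.
enum class PredState { NOT_PREDICATED, UNRESOLVED, PRED_TRUE, PRED_FALSE };

struct StoreEntry {
    unsigned long addr;
    PredState     pred;
    int           predicate_id;  // valid only when the store is predicated
};

// Returns true if this store may forward its data to a later load at load_addr.
bool can_forward(const StoreEntry &st, unsigned long load_addr,
                 bool load_is_predicated, int load_predicate_id) {
    if (st.addr != load_addr) return false;
    switch (st.pred) {
    case PredState::NOT_PREDICATED: return true;   // case (1)
    case PredState::PRED_TRUE:      return true;   // case (2)
    case PredState::PRED_FALSE:     return false;  // dropped store, never forwarded
    case PredState::UNRESOLVED:                    // case (3): same predicated path
        return load_is_predicated && load_predicate_id == st.predicate_id;
    }
    return false;
}
```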

10 Gcc, vortex, and perl in SPEC95 are not included because later versions of these benchmarks are included in SPEC CPU2000.

Table 2. Hardware support required for different branch processing paradigms. (m+1) is the maximum number of outstanding paths in multipath.

Baseline processor configuration:
- Front end: 64KB, 2-way, 2-cycle I-cache; fetches up to 3 conditional branches but fetch ends at the first predicted-taken branch; 8 RAT ports
- Branch predictors: 64KB (64-bit history, 1021-entry) perceptron branch predictor [20]; 4K-entry BTB
- Execution core: scheduling window is partitioned into 8 sub-windows of 64 entries each; 4-cycle pipelined wake-up and selection logic
- Caches: L1 D-cache: 64KB, 4-way, 2-cycle, 2 ld/st ports; L2 cache: 1MB, 8-way, 8 banks, 10-cycle, 1 port; LRU replacement and 64B line size
- Memory: 300-cycle minimum memory latency; 32 banks; 32B-wide core-to-memory bus at 4:1 frequency ratio; bus latency: 40-cycle round-trip
- Prefetcher: stream prefetcher with 32 streams and 16-cache-line prefetch distance (lookahead) [43]

Less aggressive processor configuration:
- Front end: fetches up to 2 conditional branches but fetch ends at the first predicted-taken branch; 4 RAT ports
- Branch predictors: 16KB (31-bit history, 511-entry) perceptron branch predictor [20]; 1K-entry BTB
- Execution core: 128 physical registers; 3-cycle pipelined wake-up and selection logic
- Memory: 200-cycle minimum memory latency; bus latency: 20-cycle round-trip

binaries are compiled for the Alpha ISA with the -fast optimizations. We use a binary instrumentation tool that marks diverge branches and their respective CFM points after profiling. The benchmarks are run to completion with a reduced input set [26] to reduce simulation time. In all the IPC (retired Instructions Per Cycle) performance results shown in the rest of the paper for DMP, instructions whose predicate values are FALSE and select-μops inserted to support dynamic predication do not contribute to the instruction count. A detailed description of how we model different branch processing paradigms in our simulations is provided in an extended version of this paper [21].

4.2. Power Model

We incorporated the Wattch infrastructure [5] into our cycle-accurate simulator. The power model is based on 100nm technology. The frequency we assume is 4GHz for the baseline processor and 1.5GHz for the less aggressive processor. We use the aggressive CC3 clock-gating model in Wattch: unused units dissipate only 10% of their maximum power when they are not accessed [5]. All additional structures and instructions required by DMP are faithfully accounted for in the power model: the confidence estimator, one more RAT/RAS/GHR, select-μop generation/execution logic, additional microcode fields to support select-μops, additional fields in the BTB to mark diverge branches and to cache CFM points, predicate and CFM registers, and modifications to handle load-store forwarding and instruction retirement. Forking of tables and insertion of select-μops are modeled by increasing the dynamic access counters for every relevant structure.

4.3. Compiler Support for Diverge Branch and CFM Point Selection

Diverge branch and CFM point candidates are determined based on a combination of CFG analysis and profiling. Simple hammocks, nested hammocks, and loops are found by the compiler using CFG analysis. To determine frequently-hammocks, the compiler finds CFM point candidates (i.e. post-dominators) considering the portions of a program's control-flow graph that are executed during the profiling run. A branch in a suitable CFG is marked as a possible diverge branch if it is responsible for at least 0.1% of the total number of mispredictions during profiling. A CFM point candidate is selected as a CFM point if it is reached from a diverge branch for at least 30% of the dynamic instances of the branch during the profiling run and if it is within 120 static instructions of the diverge branch. The thresholds used in the compiler heuristics are determined experimentally. We used the train input sets to collect profiling information.
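The selection heuristic can be read as a simple filter over profile statistics. The C++ sketch below restates it with the thresholds given above; the profile record itself is an assumed structure.

```cpp
struct BranchProfile {
    bool   cfg_suitable;     // simple/nested/frequently-hammock or loop shape (CFG analysis)
    double mispred_share;    // fraction of all profiled mispredictions due to this branch
    double cfm_reach_rate;   // fraction of dynamic instances that reach the CFM candidate
    int    insts_to_cfm;     // static instructions from the branch to the CFM candidate
};

// A branch is marked as a diverge branch, with the given CFM candidate, only if
// it is worth predicating (misprediction share), the CFM point is usually reached,
// and the dynamic hammock is short enough to predicate with low overhead.
bool select_diverge_branch(const BranchProfile &b) {
    return b.cfg_suitable &&
           b.mispred_share  >= 0.001 &&  // >= 0.1% of total mispredictions
           b.cfm_reach_rate >= 0.30  &&  // CFM reached for >= 30% of instances
           b.insts_to_cfm   <= 120;      // CFM within 120 static instructions
}
```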

5. Results

5.1. Performance of the Diverge-Merge Processor

Figure 6 shows the performance improvement of dynamic-hammock-predication, dual-path, multipath, and DMP over the baseline processor. The average IPC improvement over all benchmarks is 3.5% for dynamic-hammock-predication, 4.8% for dual-path, 8.8% for multipath,11 and 19.3% for DMP. DMP improves the IPC by more

Table 5. Characteristics of the benchmarks: baseline IPC, potential IPC improvement with perfect branch prediction (PBP IPC Δ), total number of retired instructions (Insts), number of static diverge branches (Diverge Br.), number of all static branches (All br.), increase in code size with diverge branch and CFM information (Code size Δ), base2 processor IPC (IPC base2), potential IPC improvement with perfect branch prediction on the base2 processor (PBP IPC Δ base2). perl, comp, m88 are the abbreviations for perlbmk, compress, and m88ksim respectively.

than 20% on vpr (58%), mcf (47%), parser (26%), twolf (31%), compress (23%), and ijpeg (25%). A significant portion (more than 60%) of branch mispredictions in these benchmarks is due to branches that can be dynamically predicated by DMP, as was shown in Figure 5. Mcf shows additional performance benefit due to the prefetching effect caused by predicated-FALSE instructions. In bzip2, even though 87% of mispredictions are due to frequently-hammocks, DMP improves IPC by only 12.2% over the baseline. Most frequently-hammocks in bzip2 have more than one CFM point and the run-time heuristic used by DMP to decide which CFM point to use for dynamic predication (Section 3.2) does not work well for bzip2.

Figure 6. Performance improvement provided by DMP vs. dynamic-hammock-predication, dual-path, and multipath execution

Dynamic-hammock-predication provides over 10% performance improvement on vpr and twolf because a relatively large portion of mispredictions is due to simple hammocks. The performance benefit of dual-path is higher than that of dynamic-hammock-predication but much less than that of DMP, even though dual-path is applicable to any kind of CFG. This is due to two reasons. First, dual-path fetches a larger number of instructions from the wrong path compared to dynamic-hammock-predication and DMP, as was shown in Table 1. Figure 7 shows the average number of fetched wrong-path instructions per each entry into dynamic-predication/dual-path mode in the different processors. On average, dual-path fetches 134 wrong-path instructions, which is much higher than 4 for dynamic-hammock-predication and 20 for DMP (note that this overhead is incurred even if the low-confidence branch turns out to be correctly predicted). Second, dual-path is applicable to one low-confidence branch at a time. While a

5.2. Comparisons with Software Predication and Wish Branches

Figure 9 shows the execution time reduction over the baseline for limited software predication12 and wish branches. Since the number of executed instructions is different in limited software predication and wish branches, we use the execution time metric for performance comparisons. Overall, limited software predication reduces execution time by 3.8%, wish branches by 6.4%, and DMP by 13.0%. In most benchmarks, wish branches perform better than predication because they can selectively enable predicated execution at run-time, thereby reducing the overhead of predication. Wish branches perform significantly better than limited software predication on vpr, parser, and ijpeg because they can be applied to loop branches.

Figure 9. DMP vs. limited software predication and wish branches

There are some differences between previous results [22] and our results in the benefit of software predication and wish branches. The differences are due to the following: (1) our baseline processor already employs CMOVs, which provide the performance benefit of predication for very small basic blocks, (2) ISA differences (Alpha vs. IA-64), (3) in our model of software predication, there is no benefit due to compiler optimizations that can be enabled with larger basic blocks in predicated code, (4) since wish branches dynamically reduce the overhead of software predication, they allow larger code blocks to be predicated, but we could not model this effect because the Alpha ISA/compiler does not support predication.

Even though wish branches perform better than limited software predication, there is a large performance difference between wish branches and DMP. The main reason is that DMP can predicate frequently-hammocks, the majority of mispredicted branches in many benchmarks, as shown in Figure 5. Only parser does not have many frequently-hammocks, so wish branches and DMP perform similarly for this benchmark. Figure 10 shows the performance improvement of DMP over the baseline if DMP is allowed to dynamically predicate: (1) only simple hammocks, (2) simple and nested hammocks, (3) simple, nested, and frequently-hammocks, and (4) simple, nested, frequently-hammocks and loops. There is a large performance benefit provided by the predication of frequently-hammocks as they are the single largest cause of branch mispredictions. Hence, DMP provides large performance improvements by enabling the predication of a wider range of CFGs than limited software predication and wish branches.

5.3. Analysis of the Performance Impact of Enhanced DMP Mechanisms

Figure 11 shows the performance improvement provided by the enhanced mechanisms in DMP. Single-cfm supports only a single CFM point for each diverge branch without any enhancements. Single-cfm by itself provides 11.4% IPC improvement over the baseline processor. Multiple-cfm supports more than one CFM point for each diverge branch.

Figure 12. Performance comparison of DMP versus other paradigms on the less aggressive processor

The O-GEHL predictor requires a complex hashing mechanism to index the branch predictor tables, but it effectively increases the global branch history length. As Figure 13 shows, replacing the baseline processor's perceptron predictor with a more complex, 64KB O-GEHL branch predictor (OGEHL-base) provides 13.8% performance improvement, which is smaller than the 19.3% performance improvement provided by implementing diverge-merge processing (perceptron-DMP). Furthermore, using DMP with an O-GEHL predictor (OGEHL-DMP) improves the average IPC by 13.3% over OGEHL-base and by 29% over our baseline processor. Hence, DMP still provides large performance benefits when the baseline processor's branch predictor is more complex and more accurate.

Figure 13. DMP performance with different branch predictors

5.4.3. Effect of Confidence Estimator Size  Figure 14 shows the performance of dynamic-hammock-predication, dual-path, multipath and DMP with 512B, 2KB, 4KB, and 16KB confidence estimators and a perfect confidence estimator. Our baseline employs a 2KB enhanced JRS confidence estimator [19], which has 14% PVN (accuracy) and 70% SPEC (coverage) [17].13 Even with a 512-byte estimator, DMP still provides 18.4% performance improvement. The benefit of dual-path/multipath increases significantly with a perfect estimator because dual-path/multipath has very high overhead as shown in Figure 7, and a perfect confidence estimator eliminates the incurrence of this large overhead for correctly-predicted branches. However, even with a perfect estimator, dual-path/multipath has less potential than DMP because (1) dual-path is applicable to one low-confidence branch at a time (as explained previously in Section 5.1), (2) the overhead of dual-path/multipath is still much higher than that of DMP for a low-confidence branch because dual-path/multipath executes the same instructions twice/multiple times after a control-independent point in the program.
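For reference, a JRS-style confidence estimator [19] can be sketched as a table of resetting counters indexed by a hash of the branch address and the global history; a branch is treated as low-confidence (and thus a dpred-mode candidate) until its counter saturates. The C++ sketch below is a generic illustration of that idea, not the exact enhanced-JRS configuration used in the paper; table size, hash, and threshold are assumptions.

```cpp
#include <cstdint>
#include <vector>

class ConfidenceEstimator {
    std::vector<uint8_t> table;   // small resetting counters ("miss distance counters")
    uint8_t saturation;
public:
    explicit ConfidenceEstimator(size_t entries = 4096, uint8_t sat = 15)
        : table(entries, 0), saturation(sat) {}

    size_t index(uint64_t pc, uint64_t ghr) const {
        return (pc ^ ghr) & (table.size() - 1);   // assumes a power-of-two table
    }
    // Low confidence until the branch has been predicted correctly 'saturation'
    // times in a row since its last misprediction.
    bool low_confidence(uint64_t pc, uint64_t ghr) const {
        return table[index(pc, ghr)] < saturation;
    }
    void update(uint64_t pc, uint64_t ghr, bool mispredicted) {
        uint8_t &c = table[index(pc, ghr)];
        if (mispredicted)        c = 0;    // reset on a misprediction
        else if (c < saturation) ++c;      // saturating increment on correct prediction
    }
};
```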

Table 6. Power and energy comparison of different branch processing paradigms

presented in this paper are based on our initial implementation of DMP using relatively simple compiler and hardware heuristics/algorithms. The performance improvement provided by DMP can be increased further by future research aimed at improving these techniques. On the compiler side, better heuristics and profiling techniques can be developed to select diverge branches and CFM points. On the hardware side, better confidence estimators are worthy of research since they critically affect the performance benefit of dynamic predication.

Acknowledgments

Special thanks to Chang Joo Lee for the support he provided in power modeling. We thank Paul Racunas, Veynu Narasiman, Nhon Quach, Derek Chiou, Eric Sprangle, Jared Stark, other members of the HPS research group, and the anonymous reviewers for their comments and suggestions. We gratefully acknowledge the support of the Cockrell Foundation, Intel Corporation and the Advanced Technology Program of the Texas Higher Education Coordinating Board.

References

[1] P. S. Ahuja, K. Skadron, M. Martonosi, and D. W. Clark. Multipath execution: opportunities and limits. In ICS-12, 1998.
[2] J. R. Allen, K. Kennedy, C. Porterfield, and J. Warren. Conversion of control dependence to data dependence. In POPL-10, 1983.
[3] D. I. August, D. A. Connors, J. C. Gyllenhaal, and W. W. Hwu. Architectural support for compiler-synthesized dynamic branch prediction strategies: Rationale and initial results. In HPCA-3, 1997.
[4] D. I. August, W. W. Hwu, and S. A. Mahlke. A framework for balancing control flow and predication. In MICRO-30, 1997.
[5] D. Brooks, V. Tiwari, and M. Martonosi. Wattch: a framework for architectural-level power analysis and optimizations. In ISCA-27, 2000.
[6] P.-Y. Chang, E. Hao, Y. N. Patt, and P. P. Chang. Using predicated execution to improve the performance of a dynamically scheduled machine with speculative execution. In PACT, 1995.
[7] S. Chaudhry, P. Caprioli, S. Yip, and M. Tremblay. High-performance throughput computing. IEEE Micro, 25(3):32–45, May 2005.
[8] C.-Y. Cher and T. N. Vijaykumar. Skipper: a microarchitecture for exploiting control-flow independence. In MICRO-34, 2001.
[9] Y. Choi, A. Knies, L. Gerke, and T.-F. Ngai. The impact of if-conversion and branch prediction on program execution on the Intel Itanium processor. In MICRO-34, 2001.
[10] Y. Chou, B. Fahs, and S. Abraham. Microarchitecture optimizations for exploiting memory-level parallelism. In ISCA-31, 2004.
[11] Y. Chou, J. Fung, and J. P. Shen. Reducing branch misprediction penalties via dynamic control independence detection. In ICS-13, 1999.
[12] J. D. Collins, D. M. Tullsen, and H. Wang. Control flow optimization via dynamic reconvergence prediction. In MICRO-37, 2004.
[13] A. Cristal, O. J. Santana, F. Cazorla, M. Galluzzi, T. Ramirez, M. Pericas, and M. Valero. Kilo-instruction processors: Overcoming the memory wall. IEEE Micro, 25(3):48–57, May 2005.
[14] R. Cytron, J. Ferrante, B. K. Rosen, M. N. Wegman, and F. K. Zadeck. Efficiently computing static single assignment form and the control dependence graph. ACM Transactions on Programming Languages and Systems, 13(4):451–490, Oct. 1991.
[15] M. Farrens, T. Heil, J. E. Smith, and G. Tyson. Restricted dual path execution. Technical Report CSE-97-18, University of California at Davis, Nov. 1997.
[16] A. Gandhi, H. Akkary, and S. T. Srinivasan. Reducing branch misprediction penalty via selective recovery. In HPCA-10, 2004.
[17] D. Grunwald, A. Klauser, S. Manne, and A. Pleszkun. Confidence estimation for speculation control. In ISCA-25, 1998.
[18] T. Heil and J. E. Smith. Selective dual path execution. Technical report, University of Wisconsin-Madison, Nov. 1996.
[19] E. Jacobsen, E. Rotenberg, and J. E. Smith. Assigning confidence to conditional branch predictions. In MICRO-29, 1996.
[20] D. A. Jiménez and C. Lin. Dynamic branch prediction with perceptrons. In HPCA-7, 2001.
[21] H. Kim, J. A. Joao, O. Mutlu, and Y. N. Patt. Diverge-merge processor (DMP): Dynamic predicated execution of complex control-flow graphs based on frequently executed paths. Technical Report TR-HPS-2006-008, The University of Texas at Austin, Sept. 2006.
[22] H. Kim, O. Mutlu, J. Stark, and Y. N. Patt. Wish branches: Combining conditional branching and predication for adaptive predicated execution. In MICRO-38, 2005.
[23] A. Klauser, T. Austin, D. Grunwald, and B. Calder. Dynamic hammock predication for non-predicated instruction set architectures. In PACT, 1998.
[24] A. Klauser and D. Grunwald. Instruction fetch mechanisms for multipath execution processors. In MICRO-32, 1999.
[25] A. Klauser, A. Paithankar, and D. Grunwald. Selective eager execution on the polypath architecture. In ISCA-25, 1998.
[26] A. KleinOsowski and D. J. Lilja. MinneSPEC: A new SPEC benchmark workload for simulation-based computer architecture research. Computer Architecture Letters, 1, June 2002.
[27] M. S. Lam and R. P. Wilson. Limits of control flow on parallelism. In ISCA-19, 1992.
[28] S. A. Mahlke, R. E. Hank, R. A. Bringmann, J. C. Gyllenhaal, D. M. Gallagher, and W. W. Hwu. Characterizing the impact of predicated execution on branch prediction. In MICRO-27, 1994.
[29] S. A. Mahlke, D. C. Lin, W. Y. Chen, R. E. Hank, and R. A. Bringmann. Effective compiler support for predicated execution using the hyperblock. In MICRO-25, 1992.
[30] O. Mutlu, J. Stark, C. Wilkerson, and Y. N. Patt. Runahead execution: An alternative to very large instruction windows for out-of-order processors. In HPCA-9, 2003.
[31] ORC. Open research compiler for Itanium processor family. http://ipf-orc.sourceforge.net/.
[32] J. C. H. Park and M. Schlansker. On predicated execution. Technical Report HPL-91-58, Hewlett-Packard Labs, Palo Alto, CA, May 1991.
[33] D. N. Pnevmatikatos and G. S. Sohi. Guarded execution and dynamic branch prediction in dynamic ILP processors. In ISCA-21, 1994.
[34] E. M. Riseman and C. C. Foster. The inhibition of potential parallelism by conditional jumps. IEEE Transactions on Computers, C-21(12):1405–1411, 1972.
[35] E. Rotenberg, Q. Jacobson, and J. E. Smith. A study of control independence in superscalar processors. In HPCA-5, 1999.
[36] E. Rotenberg and J. Smith. Control independence in trace processors. In MICRO-32, 1999.
[37] A. Seznec. Analysis of the O-GEometric History Length branch predictor. In ISCA-32, 2005.
[38] J. W. Sias, S. Ueng, G. A. Kent, I. M. Steiner, E. M. Nystrom, and W. W. Hwu. Field-testing IMPACT EPIC research results in Itanium 2. In ISCA-31, 2004.
[39] B. Simon, B. Calder, and J. Ferrante. Incorporating predicate information into branch predictors. In HPCA-9, 2003.
[40] K. Skadron, P. S. Ahuja, M. Martonosi, and D. W. Clark. Branch prediction, instruction-window size, and cache size: Performance trade-offs and simulation techniques. ACM Transactions on Computer Systems, 48(11):1260–1281, Nov. 1999.
[41] E. Sprangle and D. Carmean. Increasing processor performance by implementing deeper pipelines. In ISCA-29, 2002.
[42] S. T. Srinivasan, R. Rajwar, H. Akkary, A. Gandhi, and M. Upton. Continual flow pipelines. In ASPLOS-XI, 2004.
[43] J. M. Tendler, J. S. Dodson, J. S. Fields, H. Le, and B. Sinharoy. POWER4 system microarchitecture. IBM Technical White Paper, Oct. 2001.
[44] G. S. Tyson. The effects of predication on branch prediction. In MICRO-27, 1994.
[45] P. H. Wang, H. Wang, R. M. Kling, K. Ramakrishnan, and J. P. Shen. Register renaming and scheduling for dynamic execution of predicated code. In HPCA-7, 2001.
[46] N. J. Warter, S. A. Mahlke, W. W. Hwu, and B. R. Rau. Reverse if-conversion. In PLDI, 1993.

imp和exp命令导入和导出.dmp文件

Oracle数据库文件中的导入\导出(imp/exp命令) Oracle数据导入导出imp/exp就相当于oracle数据还原与备份。exp命令可以把数据从远程数据库服务器导出到本地的dmp文件,imp命令可以把dmp文件从本地导入到远处的数据库服务器中。 执行环境:可以在SQLPLUS.EXE或者DOS(命令行)中执行,DOS中可以执行时由于在oracle 8i 中安装目录ora81BIN被设置为全局路径,该目录下有EXP.EXE与IMP.EXE文件被用来执行导入导出。 下面介绍的是导入导出的实例。 数据导出: 1 将数据库TEST完全导出,用户名system密码manager 导出到D:daochu.dmp中 exp system/manager@TEST file=d:daochu.dmp full=y 2 将数据库中system用户与sys用户的表导出 exp system/manager@TEST file=d:daochu.dmp owner=(system,sys) 3 将数据库中的表inner_notify、notify_staff_relat导出 exp aichannel/aichannel@TESTDB2 file= d:datanewsmgnt.dmp tables=(inner_notify,notify_staff_relat) 4 将数据库中的表table1中的字段filed1以"00"打头的数据导出 exp system/manager@TEST file=d:daochu.dmp tables=(table1) query=" where filed1 like '00%'" 上面是常用的导出,对于压缩,既用winzip把dmp文件可以很好的压缩。 也可以在上面命令后面加上com press=y 来实现。 数据的导入 1 将D:daochu.dmp 中的数据导入TEST数据库中。 im p system/manager@TEST file=d:daochu.dmp im p aichannel/aichannel@HUST full=y file=d:datanewsmgnt.dmp ignore=y

Linux下Oracle导入dmp文件

Linux下向oracle数据库倒入dmp包的方式 1、登录linux,以oracle用户登录(如果是root用户登录的,登录后用 su - oracle命令切换成oracle用户) 2、以sysdba方式来打开sqlplus,命令如下: sqlplus "/as sysdba" 3、查看常规将用户表空间放置位置:执行如下sql: select name from v$datafile; 上边的sql一般就将你的用户表空间文件位置查出来了。 4、创建用户表空间: CREATE TABLESPACE 表空间名DATAFILE '/oracle/oradata/test/notifydb.dbf(表空间位置)' SIZE 200M AUTOEXTEND ON EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO; 5、创建用户,指定密码和上边创建的用户表空间 CREATE USER 用户名 IDENTIFIED BY 密码 DEFAULT TABLESPACE 表空间名; 6、赋予权限 grant connect,resource to 用户名; grant unlimited tablespace to用户名; grant create database link to用户名; grant select any sequence,create materialized view to用户名; 经过以上操作,我们就可以使用用户名/密码登录指定的实例,创

建我们自己的表了续: 创建临时表空间: create temporary tablespace test_temp tempfile 'F:\app\think\oradata\orcl\test_temp01.dbf' size 32m autoextend on next 32m maxsize 2048m extent management local; 创建表空间: create tablespace test_data logging datafile 'F:\app\think\oradata\orcl\test_data01.dbf' size 32m autoextend on next 32m maxsize 2048m extent management local; 创建用户: create user jack identified by jack default tablespace test_data temporary tablespace test_temp; 为用户赋予权限: GRANT create any table TO jack; GRANT resource,dba TO jack; GRANT select any table TO jack; 第一个是授予所有table有create权限, 第二个就是赋予DBA的权限,这才是最重要的,其实只要第二就可以了. 第三是授予所有table有select权限. 四:删除用户表空间的步骤: Alter tablespace 表空间名称 offline;

Oracle数据导入导出imp,exp命令

Oracle数据导入导出imp/exp命令10g以上expdp/impdp命令 Oracle数据导入导出imp/exp就相当于oracle数据还原与备份。exp命令可以 把数据从远程数据库服务器导出到本地的dmp文件,imp命令可以把dmp文件从本地导入到远处的数据库服务器中。利用这个功能可以构建两个相同的数据库,一个用来测试,一个用来正式使用。 执行环境:可以在SQLPLUS.EXE或者DOS(命令行)中执行, DOS中可以执行时由于在oracle 8i 中安装目录ora81BIN被设置为全局路径, 该目录下有EXP.EXE与IMP.EXE文件被用来执行导入导出。 oracle用java编写,SQLPLUS.EXE、EXP.EXE、IMP.EXE这两个文件有可能是被包装后的类文件。 SQLPLUS.EXE调用EXP.EXE、IMP.EXE所包裹的类,完成导入导出功能。 下面介绍的是导入导出的实例。 数据导出: 1 将数据库TEST完全导出,用户名system 密码manager 导出到 D:\daochu.dmp中 exp system/manager@TEST file=d:\daochu.dmp full=y 2 将数据库中system用户与sys用户的表导出 exp system/manager@TEST file=d:\daochu.dmp owner=(system,sys) 3 将数据库中的表inner_notify、notify_staff_relat导出 exp aichannel/aichannel@TESTDB2 file= d:\datanewsmgnt.dmp tables=(inner_notify,notify_staff_relat) 4 将数据库中的表table1中的字段filed1以"00"打头的数据导出 exp system/manager@TEST file=d:\daochu.dmp tables=(table1) query=" where filed1 like '00%'" 上面是常用的导出,对于压缩,既用winzip把dmp文件可以很好的压缩。 也可以在上面命令后面加上compress=y 来实现。 数据的导入 1 将D:\daochu.dmp 中的数据导入TEST数据库中。 imp system/manager@TEST file=d:\daochu.dmp imp aichannel/aichannel@TEST full=y file=d:\datanewsmgnt.dmp ignore=y 上面可能有点问题,因为有的表已经存在,然后它就报错,对该表就不进行导入。

PLSQLDeveloper导出导入数据库

一、 二、 1 导出存储过程,触发器,序列等所有用户对象。(备份) 在PL/SQL Developer的菜单Tools(工具) => Export User Objects(导出用户对象)中出来一个对话框界面

附上中文版:

备注: 建议红色框住部分都不选,这样执行这个sql 时,就根据当前你的登录账户来进行创建。在对象列表中ctrl+a 全选所有(如果你只导出部分,可单独选择) 设置输出文件地址,文件名。点击导出完成。 2 导出数据。(备份) 在PL/SQL Developer的菜单Tools(工具) => 导出表中出来一个对话框界面

If the data volume is large, choose Oracle Export, tick the compress option, and set the output file path. To export only part of the data, add a condition in the Where field, e.g. rownum<=1000 (export 1000 rows); this method produces a dmp file. If the data volume is small, you can choose SQL Inserts instead, which produces a .sql file. Among the options shown in the dialog, tick constraints, indexes, rows and triggers.
Note: if a table contains clob or nclob columns it can only be exported and imported in dmp format; a small number of tables without such columns can use the SQL Inserts method. When restoring, note that the dmp file must be restored first.
3. Restore table data.
   a. For a dmp file, the PL/SQL Developer menu Tools => Import Tables opens a dialog; in the "To user" field you can select the account you are logged in with.
   b. For a .sql file, open a new command window in PL/SQL Developer, paste (Ctrl+V) the SQL you copied, and the restore starts executing automatically.
4. Restore the other objects (stored procedures, triggers, sequences, functions, etc.).
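For step 3a, the same dmp restore can also be done outside PL/SQL Developer with the ordinary imp utility; a sketch with placeholder user, service and paths (not values from the original text):
   imp <user>/<password>@<service> file=d:\backup.dmp fromuser=<export_user> touser=<user> ignore=y log=d:\imp.log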

Oracle: How to Import a dmp File into a Specified Tablespace

Oracle: how to import a dmp file into a specified tablespace.
1. Open Oracle SQL Plus and log in to the sys user as DBA: user: sys, password: sys, host string: orcl as sysdba.
2. Create the user and assign its tablespace:
   -- create user <user_name> identified by <password> default tablespace <default_tablespace> temporary tablespace <temp_tablespace>;
   drop user jandardb cascade;
   create user jandardb identified by jandardb;
   alter user jandardb default tablespace jandardb;
   grant connect,resource,dba to jandardb;           -- grant connect,resource,dba to <user_name>;
   revoke unlimited tablespace from jandardb;        -- revoke unlimited tablespace from <user_name>;
   alter user jandardb quota 0 on users;             -- alter user <user_name> quota 0 on Users;
   alter user jandardb quota unlimited on jandardb;  -- alter user <user_name> quota unlimited on the user's default tablespace;
3. Import the dmp data file with the imp tool:
   imp jandardb/jandardb@orcl file=c:\jandardb.dmp fromuser=jandardb touser=jandardb log=c:\log.txt

A case: the data of user try had always been stored in the system tablespace. All of that user's data was exported to try.dmp, to be imported into the test user of another test database and placed in the test tablespace.
1. Export the data from the first database: exp try/try owner=try file=/try.dmp log=try.log
2. FTP try.dmp to the host of the second database.
3. Import the data into the second database: imp test/test fromuser=try touser=test file=/try.dmp log=test.log
After the import, however, the data still ended up in the system tablespace. Further investigation showed that to import into a different tablespace you first need to:
1. Revoke the test user's UNLIMITED TABLESPACE privilege in the system tablespace: REVOKE UNLIMITED TABLESPACE FROM test
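The excerpt breaks off after that first step. Judging from the quota commands shown earlier in this same section, the remaining steps would typically look like this (assuming the target tablespace is also named test; the drop-and-reimport is an assumption):
   alter user test quota 0 on system;
   alter user test quota unlimited on test;
Then drop the objects that ended up in system and re-run the import:
   imp test/test fromuser=try touser=test file=/try.dmp log=test.log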

Importing and Exporting Oracle Database Data

Oracle database import/export commands (backup and restore). Toad is a very good Oracle database operation and management tool; with it you can conveniently import and export tables, users and whole databases. Here, however, we focus on doing Oracle data import and export from the command line.

Backing up data:
1. Get help: exp help=y
2. Export a complete database:
   exp user/pwd@instance file=path full=y
   Example: exp system/system@xc file=c:/hehe full=y
   (matching full import: imp tax/test@tax file=d:/dbbak.dmp full=y)
3. Export all tables, indexes and other objects owned by one or more specified users:
   exp system/manager file=seapark log=seapark owner=seapark
   exp system/manager file=seapark log=seapark owner=(seapark,amy,amyc,harold)
   Example: exp system/system@xc file=c:/hehe owner=uep
4. Export one or more specified tables:
   exp system/manager file=tank log=tank tables=(seapark.tank,amy.artist)
   Example: exp system/system@xc file=c:/heh tables=(ueppm.ne_table)

Restoring data:
1. Get help: imp help=y
2. Import a complete database:
   imp system/manager file=bible_db log=bible_db full=y ignore=y
3. Import all tables, indexes and other objects owned by one or more specified users:
   imp system/manager file=seapark log=seapark fromuser=seapark
   imp system/manager file=seapark log=seapark fromuser=(seapark,amy,amyc,harold)
4. Import one user's data into another user:
   imp system/manager file=tank log=tank fromuser=seapark touser=seapark_copy
   imp system/manager file=tank log=tank fromuser=(seapark,amy) touser=(seapark1,amy1)
5. Import a table:
   imp system/manager file=tank log=tank fromuser=seapark TABLES=(a,b)

Export extracts data out of the database, and Import sends the extracted data back into an Oracle database.
1. Simple data export (Export) and import (Import). Oracle supports three types of export:
   (1) Table mode (T mode): exports the data of specified tables.
   (2) User mode (U mode): exports all objects and data of a specified user.
   (3) Full mode: exports all objects in the database.
The import (Import) process is the reverse of the export (Export) process; only the direction of the data flow differs.
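To make the three modes concrete, a minimal sketch using the classic scott/tiger demo account and placeholder file names (all assumptions, not taken from the text above):
   Table mode:  exp scott/tiger file=tab.dmp tables=(emp,dept)
   User mode:   exp scott/tiger file=usr.dmp owner=scott
   Full mode:   exp system/manager file=full.dmp full=y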

Importing and Exporting Oracle Tables

Oracle data import/export with imp/exp is essentially Oracle restore and backup. The exp command exports data from a remote database server into a local dmp file; the imp command imports a local dmp file into a remote database server. This can be used to build two identical databases, one for testing and one for production.

Execution environment: the commands can be run from SQLPLUS.EXE or from the DOS command line. They work from DOS because the Oracle 8i installation directory \ora81\BIN is added to the global path; that directory contains EXP.EXE and IMP.EXE, which perform the export and import. Oracle is written in Java, and SQLPLUS.EXE, EXP.EXE and IMP.EXE may be wrapped class files; SQLPLUS.EXE calls the classes wrapped by EXP.EXE and IMP.EXE to perform the import/export. The following are import/export examples.

Data export:
1. Export the database TEST completely (user name system, password manager) to D:\daochu.dmp:
   exp system/manager@TEST file=d:\daochu.dmp full=y
2. Export the tables owned by the system and sys users:
   exp system/manager@TEST file=d:\daochu.dmp owner=(system,sys)
3. Export the tables inner_notify and notify_staff_relat:
   exp aichannel/aichannel@TESTDB2 file=d:\data\newsmgnt.dmp tables=(inner_notify,notify_staff_relat)
4. Export the rows of table table1 whose column filed1 starts with "00":
   exp system/manager@TEST file=d:\daochu.dmp tables=(table1) query=\" where filed1 like '00%'\"
These are the common exports. For compression, the dmp file compresses very well with WinZip; alternatively, append compress=y to the commands above.

Data import:
1. Import the data in D:\daochu.dmp into the TEST database:
   imp system/manager@TEST file=d:\daochu.dmp

Method for Importing a dmp File into an Oracle Database

Importing a DMP file into Oracle with IMP. During an audit we received XXX.DMP, a backup file produced by EXP from the audited organization's Oracle database, which needed to be imported into an Oracle database for querying.
1. Preparation
   1) Copy XXX.DMP to E:\.
   2) Open XXX.DMP with a large-file text viewer (LogViewer). Near the beginning of the file, find the exporting user name, then use the search function to look for the word TABLESPACE and note the tablespace names that follow it.
   Example: we received IFMIS2012_CJ20121229.DMP, an EXP backup of a fiscal-budget Oracle database. Inspection showed the user name IFMIS2012_CJ and six tablespaces: LTSYSDATA01, LTSYSDATA02, LTSYSDATA03, LTINXDATA01, LTLOBDATA01 and USERS. USERS is the system users tablespace and does not need to be created again.
2. Install Oracle
   Install according to the Oracle 11g illustrated installation guide; the Enterprise Edition desktop class is recommended. Start the Oracle services, create an instance, and use a single unified password. I installed the Enterprise Edition server class, created the instance ORCL, and used the unified password SQ.
3. Create the tablespaces
   There are two methods: SQLPLUS under DOS, or Oracle EM.

It is recommended to create the tablespaces through EM:
   1) Start the services: My Computer -> right-click -> Manage -> Services -> start the three ORACLE services.
   2) Start EM: Start -> Programs -> ORACLE - oradb11g_home1 -> Database Control - orcl.
   3) Log in: user name sys, password SQ (the unified installation password), connect as SYSDBA.
   4) Create the tablespaces: choose 'Server' -> 'Tablespaces' -> 'Create' -> enter the tablespace name -> add the physical datafile name -> set autoextend, unlimited -> 'OK'. Repeat 'Create' ... 'OK' for any further tablespaces; for example, create the five tablespaces from the example above one by one. Note: choose a suitable tablespace size, and autoextend must be selected.
4. Create the user and grant privileges (SQLPLUS under DOS is recommended):
   1) Start -> Accessories -> DOS prompt;
   2) Type CD\ and press Enter;
   3) Log in as the super user with DBA rights: c:\>SQLPLUS SYS/SQ AS SYSDBA, then press Enter;
   4) Create the user (using the example values): sql>CREATE USER ifmis2012_cj IDENTIFIED BY sq;  (ifmis2012_cj is the user name, sq the password).
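The excerpt ends before the grants and the import itself. A rough sketch of how this walkthrough would typically be finished, reusing the example names above (the specific grants and the log path are assumptions):
   sql>GRANT CONNECT,RESOURCE,DBA,IMP_FULL_DATABASE TO ifmis2012_cj;
   Then exit SQLPLUS and, back at the DOS prompt, run:
   imp ifmis2012_cj/sq@ORCL file=E:\IFMIS2012_CJ20121229.DMP fromuser=IFMIS2012_CJ touser=ifmis2012_cj log=E:\imp.log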

Importing a dmp File into an Oracle Database

Importing a DMP file into an Oracle database. Note: a dmp file is an Oracle database backup file. Commands: imp = import, exp = export.
Oracle data import/export with imp/exp is essentially Oracle restore and backup. The exp command exports data from a remote database server into a local dmp file; the imp command imports a local dmp file into a remote database server. This can be used to build two identical databases, one for testing and one for production.
Execution environment: the commands can be run from SQLPLUS.EXE or from the DOS command line. They work from DOS because the Oracle 8i installation directory \ora81\BIN is added to the global path; that directory contains EXP.EXE and IMP.EXE, which perform the export and import. Oracle is written in Java, and SQLPLUS.EXE, EXP.EXE and IMP.EXE may be wrapped class files; SQLPLUS.EXE calls the classes wrapped by EXP.EXE and IMP.EXE to perform the import/export.
The following are import/export examples.

Data export:
1. Export the database TEST completely (user name system, password sql) to D:\daochu.dmp:
   exp system/sql@TEST file=d:\daochu.dmp full=y
2. Export the tables owned by the system and sys users:
   exp system/sql@TEST file=d:\daochu.dmp owner=(system,sys)
3. Export the tables table1 and table2:
   exp aichannel/aichannel@TESTDB2 file=d:\data\newsmgnt.dmp tables=(table1,table2)
4. Export the rows of table table1 whose column filed1 starts with "00":
   exp system/sql@TEST file=d:\daochu.dmp tables=(table1) query=\" where filed1 like '00%'\"
These are the common exports. For compression, the dmp file compresses very well with WinZip; alternatively, append compress=y to the commands above.

Data import:
1. Import the data in D:\daochu.dmp into the TEST database:
   imp system/sql@TEST file=d:\daochu.dmp
   imp aichannel/aichannel@HUST full=y file=d:\data\newsmgnt.dmp ignore=y
   The first form may run into problems: if some tables already exist, imp reports an error and skips those tables; appending ignore=y fixes this.
2. Import the table table1 from d:\daochu.dmp:
   imp system/sql@TEST file=d:\daochu.dmp tables=(table1)
The commands above cover most import/export needs. In many cases the existing tables have to be dropped completely first and then imported.

Oracle Database Import/Export Commands

Oracle data import/export with imp/exp.
Function: Oracle data import/export with imp/exp is essentially Oracle restore and backup. In most cases data backup and restore can be done with imp/exp without any data loss. One nice thing about Oracle is that even if your machine is not the server, as long as the Oracle client is installed and a connection has been set up (by adding the correct service name through the Net Configuration Assistant; think of it as a road built between client and server over which the data is pulled), you can export data to your local machine even though the server is far away, and likewise import a local dmp file into the remote database server. This can be used to build two identical databases, one for testing and one for production.
Execution environment: the commands can be run from SQLPLUS.EXE or from the DOS command line. They work from DOS because the Oracle installation's BIN directory (\$ora10g\BIN here) is set as a global path; that directory contains EXP.EXE and IMP.EXE, which perform the export and import. Oracle is written in Java, and SQLPLUS.EXE, EXP.EXE and IMP.EXE are presumably wrapped class files; SQLPLUS.EXE calls the classes they wrap to perform the import/export.
The following are import/export examples; the examples are basically all you need, because import/export is simple.

Data export:
1. Export the database TEST completely (user name system, password manager) to D:\daochu.dmp:
   exp system/manager@TEST file=d:\daochu.dmp full=y
2. Export the tables owned by the system and sys users:
   exp system/manager@TEST file=d:\daochu.dmp owner=(system,sys)
3. Export the tables table1 and table2:
   exp system/manager@TEST file=d:\daochu.dmp tables=(table1,table2)
4. Export the rows of table table1 whose column filed1 starts with "00":
   exp system/manager@TEST file=d:\daochu.dmp tables=(table1) query=\" where filed1 like '00%'\"
These are the common exports. Compression is not much of a concern, since WinZip compresses the dmp file very well, but appending compress=y to the commands above also works.

Data import:
1. Import the data in D:\daochu.dmp into the TEST database:
   imp system/manager@TEST file=d:\daochu.dmp
   This may run into problems: if some tables already exist, imp reports an error and skips those tables.

Oracle: Importing a dmp with the impdp Command

Importing a dmp file with impdp.
Preface: I have previously written about import/export with imp and exp; keeping up with the times, newer Oracle versions make import/export with Data Pump more convenient. Its main advantages are speed, a high compression ratio and a small footprint on disk. This article mainly covers usage under Linux; usage under Windows is the same.
1. Connect to Linux. Here the Xshell tool is used; the connected session is shown in the screenshot (not included).
2. Switch to the oracle user:
   su - oracle
   Note the spaces before and after the "-".

3. Connect to Oracle.
   Format: sqlplus [username/password@url:port/instance_name] or / as sysdba
   sqlplus / as sysdba   (connects to the local Oracle instance; note the spaces before and after the "/")
   sqlplus SDP_CMS_HRB/SDP_CMS_HRB@10.9.219.24/orcl   (works the same on Linux and Windows)
   (Linux screenshot not included.)

(Windows screenshot not included.) P.S.: if you cannot connect to Oracle under Windows, check whether the installed client supports the Oracle version installed on the server.
4. Create the user:
   create user SDP_CMS_HRB identified by SDP_CMS_HRB;
   See the Windows screenshot in the previous step; the user has already been created there.
5. Create the tablespace. Sometimes you receive a dmp file and nobody has told you the tablespace name, so you have to find it yourself: open the file with the UE (UltraEdit) text editor and search for "tablespace", or simply skip creating the tablespace, run the import once, and read the tablespace name out of the import log.
   Format: Create tablespace <tablespace_name> datafile <tablespace_file_path> size 32m autoextend on next 32m maxsize 1024m extent management local;
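The excerpt stops before the actual data pump import. A minimal sketch of how it would typically continue (the directory object, dump file name and source schema are placeholders; note that impdp reads only dumps made with expdp, while classic exp dumps must be imported with imp):
   In sqlplus, as a DBA:
      create or replace directory dump_dir as '/home/oracle/dump';
      grant read, write on directory dump_dir to SDP_CMS_HRB;
   Back at the shell, as the oracle user:
      impdp SDP_CMS_HRB/SDP_CMS_HRB@orcl directory=dump_dir dumpfile=sdp_cms_hrb.dmp logfile=imp.log remap_schema=<source_user>:SDP_CMS_HRB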

Importing and Exporting an Oracle Database with PL/SQL Developer

PL/SQL Developer is one of the main tools used for importing and exporting Oracle databases; this article describes the process of importing and exporting an Oracle database with PL/SQL Developer.
1. Oracle database export steps
   1.1 Tools → Export User Objects... exports a .sql file. Note: this step exports the DDL (including storage clauses) for creating the objects.
   1.2 Tools → Export Tables... exports table structure and data. PL/SQL Developer offers three ways to export Oracle table structure and data: Oracle Export, SQL Inserts, and PL/SQL Developer. Briefly, the differences are:
   The first produces a .dmp file; .dmp is a binary format, is cross-platform, can include grants, performs well, and is the most widely used.

The second produces a .sql file, which can be viewed with a text editor and is quite portable, but is less efficient than the first and is suited to small data volumes. Note in particular that the tables must not contain large columns (blob, clob, long); if they do, export is refused with a message like: "table contains one or more LONG columns cannot export in sql format, use Pl/sql developer format instead".
The third produces a .pde file; .pde is PL/SQL Developer's own format, can only be imported and exported with PL/SQL Developer, and cannot be viewed with a text editor.

2. Import steps (Tools → Import Tables...). Before importing, it is best to drop the old tables first, except of course when you are importing data from a different database.
   2.1 Oracle Import: imports .dmp Oracle files.
   2.2 SQL Inserts

Importing and Exporting Oracle dmp Files (Detailed Steps)

Oracle 10g: importing a dmp file. The dmp being imported can contain "a database under some user" or "a single table"; here, importing a database is used as the example.
Method 1: using the client Enterprise Manager Console.
1. Log in to the database in IE with the SYS user name, as DBA (Oracle client Enterprise Manager Console).
2. Under Schema -> Users & Privileges -> Users, create a new user and grant it the roles CONNECT, DBA, EXP_FULL_DATABASE, IMP_FULL_DATABASE, RESOURCE, and the system privileges ALTER USER, COMMENT ANY TABLE, CREATE ANY VIEW, CREATE SESSION, CREATE USER, DELETE ANY TABLE, DROP ANY VIEW, DROP USER, UNLIMITED TABLESPACE.
3. Run at the command line:
4. imp pg/pg@pgfs110
   imp <username>/<password>, then press Enter.
   Import file path: EXPDAT.DMP > c:\a.dmp
   Insert buffer size: leave at default, press Enter.
   List contents of import file only: press Enter.
   Ignore create errors: yes
   Import grants: yes
   Import table data: yes
   Import entire export file: yes
   Wait... The import terminates successfully, though with warnings. For example: (screenshot of the warnings not included)

5. Open PL/SQL Developer and log in with the newly created user name and password as normal.
6. The imported tables can be seen under Tables.
7. Done (this problem cost me two days).
Method 2: using PL/SQL from the command line.
Export: exp username/password@<service name> file=<file path and name>
Example: my database pcms has user name and password mmis, and the service name is pcms. To export to pcms.dmp on drive D, write:
   exp mmis/mmis@pcms file=d:\pcms.dmp
as shown in the screenshot (not included).

Oracle 10g: Importing and Exporting dmp Files

Oracle 10g: importing and exporting dmp files. The dmp being imported can contain "a database under some user" or "a single table"; here, importing a database is used as the example.
Method 1: using the client Enterprise Manager Console.
1. Log in to the database in IE with the SYS user name, as DBA (Oracle client Enterprise Manager Console).
2. Under Schema -> Users & Privileges -> Users, create a new user and grant it the roles CONNECT, DBA, EXP_FULL_DATABASE, IMP_FULL_DATABASE, RESOURCE, and the system privileges ALTER USER, COMMENT ANY TABLE, CREATE ANY VIEW, CREATE SESSION, CREATE USER, DELETE ANY TABLE, DROP ANY VIEW, DROP USER, UNLIMITED TABLESPACE.
3. At the command line (follow the steps below exactly):
   $imp <username>/<password>, then press Enter.
   Import file path: EXPDAT.DMP > c:\a.dmp
   Insert buffer size: leave at default, press Enter.

   List contents of import file only: press Enter.
   Ignore create errors: yes
   Import grants: yes
   Import table data: yes
   Import entire export file: yes
   Wait... The import terminates successfully, though with warnings. For example: (screenshot of the warnings not included)
4. Open PL/SQL Developer and log in with the newly created user name and password as normal.
5. The imported tables can be seen under Tables.
6. Done (this problem cost a whole morning).
Export: exp username/password@<service name> file=<file path and name>
Obtaining the service name:
1. Log in first: conn <username>/<password>
2. Run the SQL command: select name from v$database
Example: my database pcms has user name and password mmis, and the service name is pcms. To export to pcms.dmp on drive D, write:
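The excerpt cuts off before the command itself; based on the near-identical section earlier in this document, it would be:
   exp mmis/mmis@pcms file=d:\pcms.dmp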

Chinese in Stored Procedures Shows as Question Marks After Importing a dmp Locally

I first imported the dmp files exported from the source database's three users into the three corresponding users of the target database. The imp statements (importing local dmps; make sure the dmp's version number is not higher than the local imp):
   imp jd_a1/jd_a1@orcl full=y ignore=y file=E:\jd_a1.dmp log=E:\jd_a1.log grants=n
   imp edmadmin/edmadmin@orcl full=y ignore=y file=E:\jd_a2.dmp log=E:\jd_a2.log grants=n
   imp jd_synuser/jd_synuser@orcl full=y ignore=y file=E:\jd_synuser.dmp log=E:\jd_synuser.log grants=n
After that I ran a SQL script exported by sqlplus (I judged it came from sqlplus because it contains spool directives; it may of course also have come from PL/SQL Developer, since that script contains spool directives too).
Because the DMP files were not large, I simply opened them in UE and checked the exp version number: V09.02. I then downloaded win64_11g from the Oracle website (the local machine is 64-bit Windows 7), version 11.2.1.0, so its imp version is also 11.2.1.0.
I chose a full import and, by reading the log (no need to worry here: a real import will give you complete and accurate information about the source database, including the exp version, the character set, the users' tablespaces, and so on), performed the necessary creations, such as the users. The import then completed; errors that did not matter for this exercise, such as explicit-grant errors (grant select on ... to some unimportant user), were ignored.
After the import finished I started browsing the imported content, such as tables and table data. When I looked at the user-defined procedures and functions, all the Chinese text in them showed up as question marks. Could it be a display problem in the PL/SQL window?
I verified with create table aa as select '乱码' as 乱码 from dual that Chinese displayed fine in both the workspace and the result set.
So could it be a character-set problem?
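The excerpt ends on that question. The usual next step would be to compare the database character set with the client's NLS_LANG before re-importing; a sketch (the GBK value shown is only an example, the correct value depends on the source database):
   In sqlplus:
      select * from nls_database_parameters where parameter in ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');
   On the Windows client, before running imp, set NLS_LANG to match the source, e.g.:
      set NLS_LANG=SIMPLIFIED CHINESE_CHINA.ZHS16GBK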

Oracle: Exporting and Importing dmp Files

Prerequisite: run the commands in a CMD window.
Export command: exp <username>/<password>@<database> owner=<username> file=<storage path> (e.g. F:\abcd.dmp)
Test example: exp ZM/sql123@ORCL owner=ZM file=F:\abcd.dmp

Import command: imp <username>/<password>@<database> fromuser=<username> touser=<username> file=d:\cu.dmp ignore=y
   imp: the command
   cu/mycu@db: the login for the target database (username/password@database)
   fromuser: the user in the file that the data belongs to

   touser: the user in the currently connected database to import into
   file: the data file to import
   ignore: whether to ignore creation errors
Test example: imp ZM/sql123@ORCL fromuser=ZM touser=SZZM file=F:\test.dmp ignore=y

Examples from my own machine:
::================== Examples of exporting/importing Oracle dmp files under cmd ================
::=================== Export
exp gxkj_mdm/mdm123456@127.0.0.1:1521/orcl file=E:\WtdWorkspace\mdm20170614_V2.0\db\系统管理模块表结构和数据0712_1456.dmp tables=(T_SYS_AREA, T_SYS_DATA_INTERFACE, T_SYS_DATA_INTERFACE_DETAIL, T_SYS_DICT, T_SYS_LOG, T_SYS_MDICT, T_SYS_MENU, T_SYS_OFFICE, T_SYS_OFFICE_USER, T_SYS_ROLE, T_SYS_ROLE_MENU, T_SYS_ROLE_OFFICE, T_SYS_USER, T_SYS_USER_ROLE)
exp portal/gxkj123@192.168.1.12:1521/orcl file=E:\WtdWorkspace\统一门户产品\db\192_168_1_12导出内容管理表.dmp tables=(T_CMS_ARTICLE,T_CMS_ARTICLE_DATA,T_CMS_CATEGORY,T_CMS_COMMENT,T_CMS_GUESTBOOK,T_CMS_LINK,T_CMS_SITE)
exp portal/gxkj123@192.168.1.12:1521/orcl owner=common file=E:\WtdWorkspace\统一门户产品\db\非本机的common用户的所有数据0721_1128.dmp
exp gxkj_mdm/mdm123456@127.0.0.1:1521/orcl file=E:\WtdWorkspace\mdm20170614_V2.0\db\代码生成模块表结构和数据0724_0947.dmp tables=(T_GEN_TEMPLATE, T_GEN_SCHEME, T_GEN_TABLE, T_GEN_TABLE_COLUMN)
::==================== Import
imp common/mdm123456@127.0.0.1:1521/orcl file=E:\WtdWorkspace\mdm20170614_V2.0\db\系统管理模块表结构和数据0712_1456.dmp full=y

Importing Oracle dmp Data on Linux

1.2 Using the imp tool for database backup and restore.
Import modes fall into full (full-file import), owner (user import) and table (table import).
- full (full-file import): imports all data in the file. This does not mean a whole-database import; if the file contains the data of only one table, a full import can only import that one table's data.
- fromuser,touser (user import): imports all data in the file belonging to that user; again, if the file contains only one table's data, a user import can only import that one table.
- tables (table import): imports the data of the named tables from the file.
What can be imported therefore depends largely on the export file. For instance, to import all of a user's data, the export file must contain all of that user's data, i.e. the export must have been a full-database export or a user export. In the CAMS system, to back up the entire cams user's data, choose a full-database or user export when exporting.
Note: before importing, the user to be imported into must already exist, with all of its privileges; so before running the import examples below, the cams user must be created first. The script for creating the CAMS user is in the appendix.
1.2.1 Typical usage
1. Interactive user import:
   [oracle@localhost script]$ imp
   Import: Release 8.1.7.4.0 - Production on Mon Feb 9 13:59:02 2004
   (c) Copyright 2000 Oracle Corporation. All rights reserved.
   Username: cams                  <- enter the user that runs the import
   Password:                       <- enter the corresponding password
   Connected to: Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
   JServer Release 8.1.7.4.0 - Production
   Import file: expdat.dmp > /tmp/2004020601.dmp    <- enter the file to import; if the data was exported into multiple files, imp will prompt for the next file name. Most parameters have defaults in interactive mode; to accept a default, just press Enter.
   Enter insert buffer size (minimum is 8192) 30720>    <- the buffer size; usually accept the default by pressing Enter.
   Export file created by EXPORT:V08.01.07 via conventional path
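For reference, the same three modes can also be run non-interactively; a minimal sketch using the cams user from this section (passwords, file names and table names are placeholders):
   Full-file import:  imp cams/<password> file=/tmp/cams_full.dmp full=y ignore=y log=/tmp/imp_full.log
   User import:       imp cams/<password> file=/tmp/cams_user.dmp fromuser=cams touser=cams log=/tmp/imp_user.log
   Table import:      imp cams/<password> file=/tmp/cams_tab.dmp tables=(t1,t2) log=/tmp/imp_tab.log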

Examples of Importing and Exporting Oracle Tables

Examples of importing and exporting Oracle tables.

---- User level ----
1. Export a (user's) database to C:\db_backup.dmp:
   exp <username>/<password> file=C:\db_backup.dmp
2. Import the database from the file xxxx.dmp:
   imp <username>/<password> fromuser=<original user> touser=<current user> file=xxxx.dmp ignore=y
   Notes:
   1) fromuser is the owner of the exported data.
   2) For exp, owner and tables cannot be used together; they conflict.
   3) For imp, fromuser and tables can be specified together.
3. Export the tables owned by the system and sys users:
   exp <username>/<password>@<database> file=d:\daochu.dmp owner=(system,sys)
   Prerequisite: the user running this must have higher privileges than system and sys.

---- Table level ----
4. Export a specific table:
   exp <username>/<password>@<database> file=d:\daochu.dmp tables=(system,sys)
5. Import the table table1 from the dmp file:
   imp user/pass@database file=d:\daoru.dmp tables=(table1)

---- Data level ----
6. Export the rows of table table1 whose column filed1 starts with "00":
   exp user/pass@database file=d:\daochu.dmp tables=(table1) query=\" where filed1 like '00%'\"

exp parameters (keyword - description (default)):
   userid   username/password
   full     export the entire file (n)
   buffer   size of the data buffer
   owner    list of owner user names
