Human Conversation as a System Framework: Designing Embodied Conversational Agents

Justine Cassell, Tim Bickmore, Lee Campbell, Hannes Vilhjálmsson, and Hao Yan

X.1 Introduction

Embodied conversational agents (ECAs) are not just computer interfaces represented by way of human or animal bodies. And they are not just interfaces where those human or animal bodies are lifelike or believable in their actions and their reactions to human users.

Embodied conversational agents are specifically conversational in their behaviors, and specifically humanlike in the way they use their bodies in conversation. That is, embodied conversational agents may be defined as those that have the same properties as humans in face-to-face conversation, including:

• the ability to recognize and respond to verbal and nonverbal input

• the ability to generate verbal and nonverbal output

• the ability to deal with conversational functions such as turn taking, feedback, and repair mechanisms

• the ability to give signals that indicate the state of the conversation, as well as to contribute new propositions to the discourse

The design of embodied conversational agents puts many demands on system architecture. In this chapter, we describe a conversational framework expressed as a list of conversational properties and abilities and then demonstrate how it can lead to a set of architectural design constraints. We describe an architecture that meets the constraints, and an implementation of the architecture that therefore exhibits many of the properties and abilities required for real-time natural conversation.

Research in computational linguistics, multimodal interfaces, computer graphics, and autonomous agents has led to the development of increasingly sophisticated autonomous or semi-autonomous virtual humans over the last five years. Autonomous self-animating characters of this sort are important for use in production animation, interfaces, and computer games.

Increasingly, their autonomy comes from underlying models of behavior and intelligence rather than simple physical models of human motion. Intelligence also refers increasingly not just to the ability to reason, but also to “social smarts”—the ability to engage a human in an interesting, relevant conversation with appropriate speech and body behaviors. Our own research concentrates on social and linguistic intelligence—“conversational smarts”—and how to implement the type of virtual human that has the social and linguistic abilities to carry on a face-to-face conversation. This is what we call embodied conversational agents.

Our current work grows out of experience developing two prior systems—Animated Conversation (Cassell et al. 1994) and Ymir (Thórisson 1996). Animated Conversation was the first system to automatically produce context-appropriate gestures, facial movements, and intonational patterns for animated agents based on deep semantic representations of information, but it did not provide for real-time interaction with a user. The Ymir system focused on integrating multimodal input from a human user, including gesture, gaze, speech, and intonation, but was only capable of limited multimodal output in real time.

We are currently developing an embodied conversational agent architecture that integrates the real-time multimodal aspects of Ymir with the deep semantic generation and multimodal synthesis capability of Animated Conversation. We believe the resulting system provides a reactive character with enough of the nuances of human face-to-face conversation to make it both intuitive and robust. We also believe that such a system provides a strong platform on which to continue development of embodied conversational agents. And we believe that the conversational framework that we have developed as the underpinnings of this system is general enough to inform development of many different kinds of embodied conversational agents.

X.2 Motivation

A number of motivations exist for relying on research in human face-to-face conversation in developing embodied conversational agent interfaces. Our most general motivation arises from the fact that conversation is a primary skill for humans, and a very early-learned skill (practiced, in fact, between infants and mothers who take turns cooing and burbling at one another (Trevarthen 1986)), and from the fact that the body is so well equipped to support conversation. These facts lead us to believe that embodied conversational agents may turn out to be powerful ways for humans to interact with their computers. However, an essential part of this belief is that in order for embodied conversational agents to live up to their promise, their implementations must be based on actual study of human-human conversation, and their architectures must reflect some of the intrinsic properties found there.

Our second motivation for basing the design of architectures for ECAs on the study of human-human conversation arises from an examination of some of the particular needs that are not met in current interfaces: for example, ways to make dialogue systems robust in the face of imperfect speech recognition, to increase bandwidth at low cost, and to support efficient collaboration between humans and machines and between humans mediated by machines. We believe that embodied conversational agents are likely to fulfill these needs because these functions are exactly what bodies bring to conversation. But these functions, then, must be carefully modeled in the interface.

Our motivations are expressed in the form of “beliefs” because, to date, no adequate embodied conversational agent platform has existed to test these claims. It is only now that implementations of “conversationally smart” ECAs exist that we can turn to the evaluation of their abilities (see, for example, Nass, Isbister, and Lee, chap. X; Oviatt and Adams, chap. X; Sanders and Scholtz, chap. X).

In the remainder of this chapter, we first present our conversational framework. We then discuss how this framework can drive the design of an architecture to control an animated character who participates effectively in conversational interaction with a human. We present an architecture that we have been developing to meet these design requirements and describe our first conversational character constructed using the architecture—Rea. We end by outlining some of the future challenges that our endeavor faces, including the evaluation of this design claim.

X.3 Human Face-to-Face Conversation

To address the issues and motivations outlined above, we have developed the Functions, Modalities, Timing, Behaviors (FMTB) conversational framework for structuring conversational interaction between an embodied conversational agent and a human user. In general terms, all conversational behaviors in the FMTB conversational framework must support conversational functions, and any conversational action in any modality may convey several communicative goals. In this section, we motivate and describe this framework with a discussion of human face-to-face conversation. Face-to-face conversation is about the exchange of information, but in order for that exchange to proceed in an orderly and efficient fashion, participants engage in an elaborate social act that involves behaviors beyond mere recital of information-bearing words. This spontaneous performance, which so seamlessly integrates a number of modalities, is given unselfconsciously and without much effort. Some of the key features that allow conversation to function so well are

• the distinction between propositional and interactional functions of conversation

• the use of several conversational modalities

• the importance of timing among conversational behaviors (and the increasing co-temporality or synchrony among conversational participants)

• the distinction between conversational behaviors and conversational functions

X.3.1 Interactional and Propositional Functions of Conversation

Although a good portion of what goes on in conversation can be said to represent the actual thought being conveyed, or propositional content, many behaviors serve the sole purpose of regulating the interaction (Goodwin 1981; Kendon 1990). We can refer to these two types of contribution to the conversation as behaviors that have a propositional function and behaviors that have an interactional function, respectively. Propositional information includes meaningful speech as well as hand gestures and intonation used to complement or elaborate upon the speech content. Interactional information, likewise, can include speech or non-speech behaviors.

Both the production and interpretation of propositional content rely on knowledge about what one wishes to say and on a dynamic model of the discourse context that includes the information previously conveyed and the kinds of reasons one has for conveying new information. Interactional content includes a number of cues that indicate the state of the conversation. They range from nonverbal behaviors such as head nods to regulatory speech such as "huh?" or "do go on."
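To make this distinction concrete, the following sketch tags individual communicative acts with the function or functions they serve. It is purely illustrative; the labels and data structures are our own shorthand, not part of any implemented system:

```python
# A minimal sketch (illustrative labels, not a published API):
# each communicative act is tagged as contributing propositional
# content, regulating the interaction, or both.
from dataclasses import dataclass
from enum import Flag, auto

class Function(Flag):
    PROPOSITIONAL = auto()  # conveys content
    INTERACTIONAL = auto()  # regulates the exchange

@dataclass
class Act:
    modality: str       # "speech", "gesture", "gaze", "head", ...
    form: str           # the surface behavior
    function: Function

dialogue = [
    Act("speech", "the house has three bedrooms", Function.PROPOSITIONAL),
    Act("gesture", "hands sketch the room layout", Function.PROPOSITIONAL),
    Act("head", "nod", Function.INTERACTIONAL),     # "I'm following"
    Act("speech", "huh?", Function.INTERACTIONAL),  # request for repair
]

# A single act can serve both functions at once:
query = Act("speech", "three BEDrooms?",
            Function.PROPOSITIONAL | Function.INTERACTIONAL)
```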

One primary role of interactional information is to negotiate speaking turns. Listeners can indicate that they would like to receive the turn, for example, by raising their hands into the space in front of their bodies or by nodding excessively before a speaker reaches the end of a phrase. Speakers can indicate they want to keep the turn, for example, by keeping their hands raised or by gazing away from the listener. These cues are particularly useful for the speaker when pauses in speech may tempt the listener to jump in.

Turn-taking behavior, along with listener feedback such as signs of agreement or simple "I am following" cues, is a good example of the kind of parallel activity that occurs during face-to-face conversation. Speakers and listeners monitor each other’s behavior continuously throughout the interaction and are simultaneously producing and receiving information (Argyle and Cook 1976) and simultaneously conveying content and regulating the process of conveying content.

X.3.2 Multimodality

We can convey multiple communicative goals via the same communicative behaviors or by different communicative behaviors carried out at the same time. What makes this possible is the fact that we have at our disposal a number of modalities that can overlap without disruption. For example, a speaker can add a certain tone to the voice while raising the eyebrows to elicit feedback in the form of a head nod from the listener, all without interrupting the production of content. The use of several different modalities of communication—such as hand gestures, facial displays, eye gaze, and so forth—is what allows us to pursue multiple goals in parallel, some of a propositional nature and some of an interactional nature. It is important to realize that even though speech is prominent in conveying content in face-to-face conversation, spontaneous gesture is also integral to conveying propositional content. In fact, speech and gesture are produced simultaneously and take on a form that arises from one underlying representation (Cassell, chap. X; McNeill 1992). What gets conveyed through speech and what gets conveyed through gesture are therefore a matter of a particular surface structure taking shape. For interactional communicative goals, the modality chosen may be more a function of what modality is free—for example, is the head currently engaged in looking at the task, or is it free to give a feedback nod?

X.3.3 Timing

The existence of such quick behaviors as head nods, which nonetheless have such an immediate effect on the other conversational participant, emphasizes the range of time scales involved in conversation. While we have to be able to interpret full utterances to produce meaningful responses, we are also sensitive to instantaneous feedback that may modify our production as we go.

Figure X.1 here

Figure X.1. A wide variety of time scales in human face-to-face conversation. Circles indicate gaze moving toward the other; lines indicate fixation on the other; squares are withdrawal of gaze from the other; a question mark shows rising intonation (from Thórisson 1996, adapted from Goodwin 1981).

In addition, the synchrony among events, or lack thereof, is meaningful in conversation. Even the slightest delay in responding to conversational events may be taken to indicate unwillingness to cooperate or a strong disagreement (Rosenfeld 1987). As demonstrated in figure X.1, speakers and listeners attend to and produce behaviors with a wide variety of time scales. It is remarkable how, over the course of a conversation, participants increasingly synchronize their behaviors to one another. This phenomenon, known as entrainment, ensures that conversation will proceed efficiently.

X.3.4 Conversational Functions Are Carried Out by Conversational Behaviors

Even though conversation is an orderly event, governed by rules, no two conversations look exactly the same, and the set of behaviors exhibited differs from person to person and from conversation to conversation. It is the functions referred to above that guide a conversation. Typical discourse functions include conversation invitation, turn taking, providing feedback, contrast and emphasis, and breaking away. Therefore, to successfully build a model of how conversation works, one cannot refer to surface features, or conversational behaviors, alone. Instead, the emphasis has to be on identifying the fundamental phases and high-level structural elements that make up a conversation. These elements are then described in terms of their role or function in the exchange.

Table X.1 here

This is especially important because particular behaviors, such as the raising of eyebrows, can be employed in a variety of circumstances to produce different communicative effects, and the same communicative function may be realized through different sets of behaviors. The form we give to a particular discourse function depends on, among other things, current availability of modalities such as the face and the hands, type of conversation, cultural patterns, and personal style. For example, feedback can be given by a head nod, but instead of nodding, one could also say "uh huh" or "I see," and in a different context a head nod can indicate emphasis or a salutation rather than feedback. Table X.1 shows some important conversational functions and the behaviors that realize them.
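This many-to-many relation between functions and behaviors can be rendered as a small sketch. The function names, behaviors, and modality assignments below are invented for exposition and do not reproduce Table X.1 or any implemented system; the point is that a function is realized by whichever candidate behavior finds its modality free:

```python
# Hypothetical mapping from discourse functions to candidate
# behaviors, with selection conditioned on which modalities are
# currently free (all names are illustrative).
FUNCTION_TO_BEHAVIORS = {
    "give_feedback": ["head_nod", "say 'uh huh'", "say 'I see'"],
    "take_turn":     ["raise_hands", "gaze_at_speaker"],
    "hold_turn":     ["keep_hands_raised", "gaze_away"],
    "emphasize":     ["head_nod", "raise_eyebrows"],
}

MODALITY = {
    "head_nod": "head", "raise_eyebrows": "face",
    "say 'uh huh'": "voice", "say 'I see'": "voice",
    "raise_hands": "hands", "keep_hands_raised": "hands",
    "gaze_at_speaker": "eyes", "gaze_away": "eyes",
}

def realize(function, free_modalities):
    """Pick one behavior for a discourse function, preferring a
    behavior whose modality is currently free."""
    candidates = FUNCTION_TO_BEHAVIORS[function]
    for behavior in candidates:
        if MODALITY[behavior] in free_modalities:
            return behavior
    return candidates[0]  # fall back to the default realization

# The head is engaged in looking at the task, so feedback is
# realized in speech rather than as a nod:
print(realize("give_feedback", {"voice"}))  # say 'uh huh'
# The same behavior serves a different function in another context:
print(realize("emphasize", {"head"}))       # head_nod
```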

From the discussion above, it should be clear that we make extensive use of the body when engaged in face-to-face conversation. This is natural to us and has evolved along with language use and social competence. Given that this elaborate system of behaviors requires minimal conscious effort, and that no other type of real-time human-to-human interaction, such as phone conversation, can rival face-to-face interaction when it comes to “user satisfaction,” one has to conclude that the affordances of the body in conversation are unique.

The ability to handle natural conversational interaction is particularly critical for real-time embodied conversational agents. Our FMTB conversational framework, then, relies on the interaction among the four properties of conversation described above (the co-pursuit of interactional and propositional functions, multimodality, timing, and the distinction between conversational behaviors and conversational functions). Below, we review some related work before turning to a demonstration of how this model provides a natural design framework for embodied conversational architectures.

X.4 Related Work

We have argued that embodied conversational agents must be designed from research on the use and function of the verbal and nonverbal modalities in human-human conversation. Other authors in this volume adhere to this principle to a greater or lesser extent. Other work in interface design has also followed this path in the past, in particular, work in the domain of multimodal interfaces. Research on multimodal interfaces has concentrated more on the question of understanding the verbal and nonverbal modalities, whereas embodied conversational agents must both understand and generate behaviors in different conversational modalities. In the sections that follow, we review some previous research in the fields of conversational interfaces and multimodal interfaces before turning to other embodied conversational agent work that resembles our own.

X.4.1 Synthetic Multimodal Conversation

"Animated Conversation" (Cassell et al. 1994) was a system that automatically generated context-appropriate gestures, facial movements, and intonational patterns. In this case, the domain was an interaction between a bank teller and customer. In order to avoid the issues involved with understanding human behavior, the interaction took place between two autonomous graphical agents, and the emphasis was on the production of nonverbal behaviors that emphasized and reinforced the content of speech.

In “Animated Conversation,” although both turn-taking conversational behaviors and content-conveying conversational behaviors were implemented, no distinction was made between conversational behaviors and the functions they fulfilled. Each function was filled by only one behavior. Because there was no notion of conversational function, the interactional and propositional distinction could not be explicitly made. This was not a problem for the system, since it did not run in real time and there was no interaction with a real user, but it made it impossible to extend the work to actual human-computer interaction.

André et al. (chap. X) also implement a system for conversation between synthetic characters for the purpose of presenting information to a human, motivated by the engaging effect of teams of newscasters or sportscasters. Two domains are explored: car sales and "RoboCup Soccer," with an emphasis on conveying character traits as well as domain information. In the car domain, they use goal decomposition to break a presentation into speech acts, and personality and interest profiles in combination with multi-attribute utility theory to organize the presentation of automotive features and values. The result is a sequence of questions, answers, and comments between a seller and one or two buyers. The modalities explored are primarily speech and intonation, although there are some pointing hand gestures. The conversational behaviors generated by this system either fulfill a propositional goal or convey personality or emotional traits; interactional goals are not considered.

X.4.2 Conversational Interfaces

Nickerson (1976) was one of the pioneers of modeling the computer interface on the basis of human conversation. He provided a list of features of face-to-face conversation that could be fruitfully applied to human-computer interaction, including mixed initiative, nonverbal communication, sense of presence, and rules for transfer of control. His concern was not even necessarily systems that carried on conversations with humans, but rather a model that allowed management and explicit representation of turn taking so the user’s expectations could be harnessed in service of clearer interaction with the computer.

Brennan (1990) argues that the human-computer interaction literature promulgates a false dichotomy between direct manipulation and conversation. From observations of human-human conversation, Brennan develops guidelines for designers of both WIMP and conversational interfaces. Key guidelines include modeling shared understandings and provisions for feedback and for repair sequences. The work of both Nickerson and Brennan was essential to our FMTB model.

Badler et al. (chap. X) present a conversational interface to an avatar control task. Avatars interact in the Jack-MOO virtual world, controlled by natural language commands such as “walk to the door and turn the handle slowly.” They developed a Parameterized Action Representation to map high-level action labels into low-level sequences of avatar activity. Humans give orders to their avatars to act and speak, and the avatars may converse with some fully automated characters in the virtual world. Thus, the human interface is effectively command and control, while the multimodal conversation occurs between avatars and automatic characters. No interactional functions such as turn taking are considered in this system. In addition, there is a hard mapping between conversational behaviors and conversational functions, making the use of the different modalities somewhat inflexible.

X.4.3 Multimodal Interfaces

One of the first multimodal systems based on the study of nonverbal modalities in conversation was Put-That-There (Bolt 1980). Put-That-There used speech recognition and a six-degree-of-freedom space-sensing device to gather the user's gestural input and allow the user to manipulate a wall-sized information display. It used a simple architecture that combined speech and deictic gesture input into a single command that was then resolved by the system. For example, the system could understand the sentence "Move that over there" to mean move the sofa depicted on the wall display to a position near the table by analyzing the position of the user's pointing gestures. In each case, however, the speech drove the analysis of the user input. Spoken commands were recognized first, and the gesture input was used only if the user’s command could not be resolved by speech analysis alone. Certain words in the speech grammar (such as "that") were tagged to indicate that they usually co-occurred with a deictic (pointing) gesture. When these words were encountered, the system analyzed the user’s pointing gestures to resolve deictic references.
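A toy reconstruction of this speech-driven strategy might look as follows. The time-stamped word list, the gesture representation, and the scene lookup are our own simplifications for exposition, not details of the original system:

```python
# Toy reconstruction of speech-driven deictic resolution: spoken
# commands are parsed first, and pointing gestures are consulted
# only for words tagged as deictic.
DEICTIC_WORDS = {"this", "that", "here", "there"}

def resolve_command(words, word_times, pointing_events, scene):
    """words: spoken tokens; word_times: one time stamp per token;
    pointing_events: list of (time, (x, y)) gestures; scene: maps
    a screen position to the object displayed there."""
    resolved = []
    for word, t in zip(words, word_times):
        if word in DEICTIC_WORDS:
            # Use the pointing gesture closest in time to the word.
            nearest = min(pointing_events, key=lambda ev: abs(ev[0] - t))
            resolved.append(scene(nearest[1]))
        else:
            resolved.append(word)
    return resolved

# A trivial scene: objects are looked up by screen position.
scene = lambda pos: "sofa" if pos[0] < 50 else "spot_near_table"
print(resolve_command(
    ["move", "that", "over", "there"],
    [0.0, 0.4, 0.8, 1.2],
    [(0.45, (30, 40)), (1.25, (80, 60))],
    scene,
))  # ['move', 'sofa', 'over', 'spot_near_table']
```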

Koons extended this work by allowing users to maneuver objects around a two-dimensional map using spoken commands, deictic hand gestures, and eye gaze (Koons, Sparrell, and Thórisson 1993). In his system, nested frames were employed to gather and combine information from the different modalities. As in Put-That-There, speech drove the analysis of gesture: if information was missing from speech, the system would search for the missing information in the gestures and/or gaze. Time stamps united the actions in the different modalities into a coherent picture. Wahlster used a similar method, depending on typed text input to guide the interpretation of pointing gestures (Wahlster 1991).

These examples exhibit several features common to command-and-control-type multimodal interfaces. They are speech-driven, so the other input modalities are only used when the speech recognition produces ambiguous or incomplete results. Input interpretation is not carried out until the user has finished an utterance, meaning that the phrase level is the shortest time scale at which events can occur. The interface only responds to complete, well-formed input, and there is no attempt to use nonverbal behavior as interactional information to control the pace of the user-computer interaction.

These limitations were partially overcome by Johnston (1998), who described an approach to understanding user input based on unification with strongly typed multimodal grammars. In his pen-and-speech interface, either gesture or voice could be used to produce input, and either one could drive the recognition process. Multimodal input was represented in typed semantic frames with empty slots for missing information. These slots were then filled by considering input events of the correct type that occurred at about the same time.
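The slot-filling idea can be sketched roughly as below. The frame format, the type names, and the time window are invented for illustration and should not be read as Johnston's actual formalism:

```python
# Rough sketch of typed slot filling: an empty slot is filled by
# the first input event of the required type that occurred close
# in time to the frame (all names are invented for illustration).
def fill_frame(frame, slot_types, events, window=1.0):
    for slot, required_type in slot_types.items():
        if frame.get(slot) is None:
            for ev in events:
                if (ev["type"] == required_type
                        and abs(ev["time"] - frame["time"]) <= window):
                    frame[slot] = ev["value"]
                    break
    return frame

# Speech yields a "move" frame at t=2.0 with the object missing;
# a pen gesture at t=2.3 supplies a value of the required type.
frame = {"action": "move", "object": None, "time": 2.0}
events = [{"type": "selection", "value": "unit_7", "time": 2.3}]
print(fill_frame(frame, {"object": "selection"}, events))
# {'action': 'move', 'object': 'unit_7', 'time': 2.0}
```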

On a different tack, Massaro et al. (chap. X) use nonverbal behavior in Baldi, an embodied character face, to increase the intelligibility of synthetic speech; they demonstrate efficacy by testing speech readers’ recognition rate with Baldi mouthing monosyllables. The output demonstrates improved intelligibility when lip shapes are correct, and the authors have also shown the utility of such a system for teaching spoken conversation to deaf children.

Missing from all these systems, however, is a distinction between conversational behavior and conversational function. This means, in addition, that there can be no notion of why a particular modality might be used rather than another, or of what goals are achieved by the congruence of different modalities. The case of multiple communicative goals (propositional and interactional, for example) is not considered. Therefore, the role of gesture and voice input cannot be analyzed at more than a sentence-constituent replacement level.

X.4.4 Embodied Conversational Interfaces

Lester et al. (chap. X) do rely on a notion of semantic function (reference) in order to generate verbal and nonverbal behavior, producing deictic gestures and choosing referring expressions as a function of the potential ambiguity of the objects referred to and the proximity of those objects to the animated agent. This system is based on an understanding of how reference is achieved to objects in the physical space around an animated agent and of the utility of deictic gestures in reducing potential ambiguity of reference. However, the generation of gestures and the choice of referring expressions (from a library of voice clips) are accomplished in two entirely independent (additive) processes, without a description of the interaction between the two modalities or of the function filled by each.

Rickel and Johnson (1999; chap. X) have designed a pedagogical agent, Steve, that can travel about a virtual ship, guiding a student to equipment and then using gaze and deictic gesture during a verbal lesson about that equipment. The agent handles verbal interruption and provides verbal and nonverbal feedback (in the form of nods and headshakes) on the student’s performance. Although Steve does use both verbal and nonverbal conversational behaviors, there is no way to time those behaviors to one another at the level of the word or syllable. Nonverbal behaviors are hardwired for function: Steve cannot reason about which modalities might be better suited to serve particular functions at particular places in the conversation.

In contrast to these other systems, our current approach handles both multimodal input and output and is based on conversational functions that may be either interactional or propositional in nature. The basic modules of the architecture described in the next section were developed in conjunction with Churchill et al. (chap. X). The architecture grows out of previous work in our research group on the Ymir architecture (Thórisson 1996). In this work, the main emphasis was on the development of a multilayer multimodal architecture that could support fluid face-to-face dialogue between a human and a graphical agent. The agent, Gandalf, recognized and displayed interactional information such as gaze and simple gesture and also produced propositional information, in the form of canned speech events. In this way, it was able to perceive and generate turn-taking and back-channel behaviors that led to a very natural conversational interaction. This work provided a good first example of how verbal and nonverbal function might be paired in a conversational multimodal interface. However, Gandalf had limited ability to recognize and generate propositional information, such as providing correct intonation for speech emphasis on output, or a gesture co-occurring with speech. The approach we use with Rea combines lessons learned from both the Gandalf and Animated Conversation projects.

X.5 Embodied Conversational Agent Architecture

The FMTB model described above can be summarized as follows: multiple (interactional and propositional) communicative goals are conveyed by conversational functions that are expressed by conversational behaviors in one or several modalities. This model, which also serves as a strong framework for system design, is lacking in other embodied conversational agents. We have therefore designed a generic architecture for ECAs that derives directly from the FMTB conversational framework. We feel that it is crucial that ECAs be capable of employing the same repertoire of conversational skills as their human interactants, both to obviate the need for users to learn how to interact with the agent and to maximize the naturalness and fluidity of the interaction. We believe that in order to enable the use of conversational skills, even the very architecture of the system must be designed according to the affordances and necessities of conversation. Thus, in our design we draw directly from the rich literature in linguistics, sociology, and human ethnography described in the previous section to derive our requirements, based on our FMTB conversational framework.
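Schematically, the FMTB chain runs from communicative goals through conversational functions to behaviors in particular modalities. The mappings below are toy examples of our own devising, not the contents of the Rea implementation:

```python
# Schematic rendering of the FMTB chain: communicative goals map
# to conversational functions, which are realized as behaviors in
# particular modalities (all entries are toy examples).
GOAL_TO_FUNCTION = {
    "convey_price":   "assert_proposition",  # propositional goal
    "check_listener": "request_feedback",    # interactional goal
}
FUNCTION_TO_REALIZATION = {
    "assert_proposition": ("speak: 'it costs ninety dollars'", "voice"),
    "request_feedback":   ("raise eyebrows", "face"),
}

def fmtb_plan(goals):
    """Expand each goal into a (goal, function, behavior, modality) step."""
    plan = []
    for goal in goals:
        function = GOAL_TO_FUNCTION[goal]
        behavior, modality = FUNCTION_TO_REALIZATION[function]
        plan.append((goal, function, behavior, modality))
    return plan

# Both goals are pursued in parallel, in non-conflicting modalities:
for step in fmtb_plan(["convey_price", "check_listener"]):
    print(step)
```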

In general terms, the conversational model that we have described leads to the following set of ECA architectural design requirements:

• Understanding and Synthesis of Propositional and Interactional Information. Dealing with both propositional and interactional functions of conversation requires models of the user's needs and knowledge and the user’s conversational process and states. Producing propositional information requires a planning module to plan how to present multisentence output and manage the order of presentation of interdependent facts.
