
Proceedings of the 38th Hawaii International Conference on System Sciences - 2005

A User Controlled Approach to Adjustable Autonomy

N. E. Reed

Department of Information and Computer Sciences

University of Hawai’i at Manoa

Email: nreed@hawaii.edu

Abstract

This paper describes a framework for collaboration between a user and a multi-agent system to achieve adjustable autonomy. Adjustable autonomy (AA) is when the levels of autonomy of the agent system (its control over its reasoning) change during execution due to interaction with a user or other systems. We describe a prototype agent development environment that allows users flexible on-line control over an otherwise completely autonomous agent system.

AA can improve productivity during system design, allow earlier deployment, and create a more flexible system. AA can reduce the load on instructors when used in training situations, and allow the user to aid the system when faced with unforeseen situations or problems. Examples are given of AA in simulated pilots for fighter aircraft. We found that AA (the user's ability to modify the behavior of the agents during execution) resulted in a better, more flexible system.

1 Introduction

Virtual environments inhabited by intelligent actors are proving to be useful tools with a variety of purposes, including training [22, 9, 6], testing [7, 14] and entertainment [10]. Intelligent actors often play the roles of humans or other intelligent entities in the virtual environment. In these complex environments, it is often difficult or impossible to define an agent's behavior in advance for every situation that might occur. Allowing the user to modify the behavior of the agent during execution is one strategy that can be used to overcome this problem.

The autonomy of a system can be described as the amount of control the system has over which goals it pursues and how it pursues those goals. Autonomy can be described as a spectrum, ranging from completely autonomous at one end to completely command driven at the other [2, 3]. Adjustable autonomy (AA) means that a system's level of autonomy changes over time, i.e., the system's autonomy level increases or decreases for some goals it is trying to achieve. The change in autonomy can be due to interactions with human operator(s), the system itself, or other systems [13, 8, 12].

AA is often a desirable capability. It may be necessary to have some way for a user to "override" a system after it is deployed. If an unforeseen situation occurs that the system does not recognize, a disastrous failure might result from the system's action (or inaction). For example, if a sensor malfunctions, the system might "blindly" trust the sensor's value and not recognize a critical situation.

We describe an agent creation environment and the user's flexible on-line control over the behavior defined by the system designer. Actors are created and defined to perform tasks within a simulation environment.

AA provides several benefits, including easier development of actors, facilitated debugging of actions, flexibility while using the system in training situations, and recovery from unforeseen problems. Adjustable autonomy may also increase the usefulness or applicability of a system.

The rest of this paper is organized as follows. The next section introduces the simulated pilot application domain. Section 3 describes the EASE agent development environment. User interaction with agents in EASE is described in Section 4, with specific examples of adjustable autonomy in Section 5. Section 6 discusses the work. The last section gives a summary and directions for future work.

2 Applications

Prototype actors have been created in two simulation environments: simulated pilots for military aircraft and soccer players. This section introduces the simulated pilot domain.

Saab Aerospace designed and produces the JAS Gripen fifth generation fighter aircraft. In addition, Saab has developed TACSI [16], a commercial high-fidelity military flight simulator. Figure 1 shows TACSI's display with three aircraft in view. Aircraft in TACSI can be controlled by pilots or trainees sitting at cockpit consoles (including a dome simulator) or programmed completely in software (agents). Any combination of humans and agents can control the aircraft in each scenario. TACSI is used for pilot training, aircraft development and marketing.

Figure 1: The TACSI simulator display showing one agent-controlled aircraft (upper) and two enemy aircraft flying over water (blue/dark) near an island.

Pilot actors need to appear to be intelligent and act realistically in this very complex environment [15]. If the simulated pilots are too "predictable", pilot trainees don't have a realistic experience in the simulator, and are thus less prepared for real situations. Using the online control enables instructors to create scenarios for several trainees and to perform the exact same maneuvers with each one, with less effort than controlling the entire plane. They may also customize scenarios and actions for each trainee, giving a much more realistic experience to each, with far less burden on the instructors.

3 EASE overview

EASE (End-user Actor Specification Environment) is an environment for creating actors in simulation environments. EASE was designed for building intelligent actors for interactive simulation environments [17, 19]. EASE is written in Java with the aim of enabling end users (domain experts) to specify the behavior of actors without the need to continuously rely on expert programmers.

EASE interfaces with a simulation environment as shown in Figure 2. EASE and the simulation environment may be on the same machine or on multiple networked machines (during development and testing, both ran concurrently on the same Sun workstation).

Figure 2: Interaction between EASE, the user and a simulation environment. EASE contains facilities for actor development as well as a run-time engine that interfaces with the user and the simulation environment. Components shown include the interface to the simulator (TACSI), the agent runtime engine, the agent system, the debugger, and the user interface.

An EASE actor is composed of a multi-agent system (MAS). The system designer specifies the actor's behavior using the tools in EASE. An actor specification consists of a hierarchy of agents where each agent is responsible for some aspect of the overall actor's behavior. Each agent tries to accomplish only one specific task and is hence fairly simple. Agents at lower levels in the hierarchy perform parts of behaviors for agents above them.

At runtime an actor's specification is turned into a multi-agent system where overall actor behavior is determined by a continuous process of contract making and negotiation between agents. Agents at the bottom of the hierarchy negotiate amongst each other to select the actual action the actor should take.

An EASE actor's MAS is a forest of trees, as shown in Figure 3. Non-leaf agents are called managers; their job is to find and contract other agents to achieve their goals. Leaf agents are called engineers, as they negotiate with each other in factories over the actions (behavior) of the actor. The contracts correspond to goal/sub-goal relationships.

As the current goals change during a scenario, contracts are made and broken, agents are added to and deleted from the trees, and therefore the structure of the forest changes. At each point in time, the trees reflect the current task breakdown of the overall actor.
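To make the organization concrete, the following is a minimal Java sketch (hypothetical class names, not the actual EASE code) of such a forest of manager and engineer agents, printed as an indented tree in the style of The Boss display described later.

```java
// Illustrative sketch only: a minimal forest-of-trees agent organization
// in the spirit of an EASE actor. Class and method names are hypothetical,
// not the actual EASE API.
import java.util.ArrayList;
import java.util.List;

abstract class Agent {
    final String name;
    final List<Agent> contractees = new ArrayList<>(); // children via contracts
    Agent(String name) { this.name = name; }

    // Render this agent and its contractees as an indented tree,
    // similar to the collapsible view in The Boss.
    void print(String indent) {
        System.out.println(indent + name + " (" + kind() + ")");
        for (Agent a : contractees) a.print(indent + "  ");
    }
    abstract String kind();
}

// Non-leaf agents: find and contract other agents to achieve their goals.
class Manager extends Agent {
    Manager(String name) { super(name); }
    void contract(Agent contractee) { contractees.add(contractee); }
    String kind() { return "manager"; }
}

// Leaf agents: negotiate in factories over the actor's actual actions.
class Engineer extends Agent {
    Engineer(String name) { super(name); }
    String kind() { return "engineer"; }
}

public class ActorForest {
    public static void main(String[] args) {
        // Top-level trees roughly matching the scenario in Section 5.
        Manager avoidance = new Manager("AC Avoidance list manager");
        Engineer hardDeck = new Engineer("Hard deck");
        Manager smooth = new Manager("Smooth Manager");
        smooth.contract(new Engineer("Limit turn rate"));

        List<Agent> forest = List.of(avoidance, hardDeck, smooth);
        for (Agent root : forest) root.print("");
    }
}
```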

The sequence of behaviors within an agent (one node in a tree) is specified with a state machine using an intuitive GUI in EASE. In each state, a manager may contract agents to accomplish its goals. During runtime, any newly created (contracted) agents are added to the MAS in the tree below the contracting agent.

Figure 3: The structure of an EASE actor's multi-agent system. A forest of trees contains manager agents (non-leaf nodes) and engineer agents (leaf nodes). Lines between agents represent contracts.

For example, we might want to specify a simple patrol agent for a pilot in TACSI. Suppose we want a plan like the one shown in Figure 4. Using EASE, this patrol mission would be created using the agent specification interface, resulting in a state machine as shown in Figure 5. The "start" state of the agent is identified with a short line (upper left circle in the figure). When the agent is created, execution of the state machine starts in that state.

During execution, each agent must keep its contractor informed of its progress. The agent reports whether its goal is currently being achieved, its goal cannot be achieved, or normal progress is being made towards the achievement of its goal (i.e., there is no reason to believe the goal cannot be achieved but it has not yet been achieved). The messages allow managers to make decisions based on the status of their sub-goals. The simplicity of these messages makes adding agents to and removing agents from the organization relatively easy, and is used for online control, as described below.
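A minimal sketch of this three-valued status report, in Java with hypothetical names (the actual EASE message format is not given in the paper):

```java
// Illustrative sketch only (hypothetical names): the three-valued progress
// report each contracted agent sends to its contractor.
enum GoalStatus {
    ACHIEVED,      // the goal is currently being achieved
    UNACHIEVABLE,  // the goal cannot be achieved
    IN_PROGRESS    // no reason to believe it will fail, but not yet achieved
}

interface Contractor {
    void reportStatus(String contractee, GoalStatus status);
}

class LoggingContractor implements Contractor {
    @Override
    public void reportStatus(String contractee, GoalStatus status) {
        // A real manager would use this to drive its state machine transitions.
        System.out.println(contractee + " -> " + status);
    }
}

public class StatusDemo {
    public static void main(String[] args) {
        Contractor manager = new LoggingContractor();
        manager.reportStatus("GoToWaypointA", GoalStatus.IN_PROGRESS);
        manager.reportStatus("GoToWaypointA", GoalStatus.ACHIEVED);
    }
}
```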

Figure 4: A simple mission flight plan for our actor showing 3 waypoints to fly between. This corresponds to the state machine shown in Figure 5. In the example to follow, 2 other (enemy) aircraft happen to appear during our patrol mission.

Agents can be selected for contracts specifically or based on their capabilities, using a capability matcher and descriptions of the capabilities of each type of agent. A pool of generic agents exists, each of which can be contracted to perform its specialty by an agent already part of the MAS. The generic pool effectively represents the abilities of the actor. If there is an agent in the generic pool capable of achieving some particular goal then the actor can achieve that goal (barring unforeseen problems in execution). If there is no agent with the capability to pursue a certain goal, then the actor cannot perform that goal.
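One way such capability-based selection could look, as a Java sketch under the assumption that capabilities are simple string descriptions (the real matcher and its capability descriptions are not specified here):

```java
// Illustrative sketch only: matching a requested capability against a pool
// of generic agent types. Names are hypothetical, not the EASE capability matcher.
import java.util.List;
import java.util.Optional;

record AgentType(String name, List<String> capabilities) {}

class CapabilityMatcher {
    private final List<AgentType> genericPool;
    CapabilityMatcher(List<AgentType> genericPool) { this.genericPool = genericPool; }

    // Return an agent type able to pursue the goal, if any exists in the pool.
    Optional<AgentType> findFor(String capability) {
        return genericPool.stream()
                .filter(t -> t.capabilities().contains(capability))
                .findFirst();
    }
}

public class MatcherDemo {
    public static void main(String[] args) {
        CapabilityMatcher matcher = new CapabilityMatcher(List.of(
                new AgentType("WaypointAgent", List.of("fly-to-waypoint")),
                new AgentType("AvoidAircraftAgent", List.of("avoid-aircraft"))));

        // The actor can achieve goals only if some generic agent covers them.
        System.out.println(matcher.findFor("fly-to-waypoint")); // present
        System.out.println(matcher.findFor("refuel-in-air"));   // empty: goal unachievable
    }
}
```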

EASE enforces a methodology for actor development that covers all stages of development, from design through to reuse [19]. The tools in EASE have been designed to make development easy by providing a completely graphical environment, and reuse easy by enforcing strict modularity. Integrated tool support exists for quickly inspecting and debugging actors at runtime. The development aids in EASE combined with an underlying powerful agent runtime engine allow relatively inexperienced users to create useful actors for complex simulation environments.

4 Online Control

In addition to the behavior specification tools used to design agents and their behavior, EASE includes a set of graphical tools that enable the user to monitor the state of and interact with an actor during simulations. The tools were designed to be readily understandable and usable by most users, even those without much training in agents or programming [20]. This is typically the case for domain experts: they are expert at their specialty, not at programming.

Figure 5: The state machine specification tool, showing a state machine for a simple patrol agent. The goal of the state is shown within each circle. The lines show transitions between states. This specification shows the agent repeatedly flying a triangular pattern through the 3 waypoints.

The interfaces implemented for this project are prototypes, built to demonstrate the ideas, not for practical use. The current interfaces, albeit simple, provide the basic functionality required for the changes in autonomy desired by the user.

4.1 The Boss

The run-time support environment allows the user to monitor and modify agent behavior at run-time without stopping the simulation or taking over complete control of the agent. The primary interaction window in EASE is called The Boss.

The Boss is a tool that (among other things) shows the status of the agent hierarchy within the actor. A snapshot of The Boss is shown in Figure 6. The tool is nicknamed The Boss because it provides most of the functionality for controlling an actor. The hierarchy (or hierarchies) of agents are displayed using a collapsible tree structure. More information about each agent is shown: its type (manager/engineer) in different colors, its name (text), and its current state (text in parentheses).

Figure 6: The Boss display window showing the hierarchy of active agents (top) and control buttons (bottom). This screen shows an aircraft during a patrol mission, with agents to avoid any aircraft that are observed, avoid crashing into the ground (hard deck), fly in a smooth manner, and go to a waypoint. The 3 control buttons in the upper row activate additional windows for the user to monitor the underlying calculations producing the pilot's behavior. The lower 4 buttons are used to add, remove, and suspend agents in the hierarchy.

The visualization gives an accurate and comprehensive picture of the agent organization. The user can extract information such as "the actor is attempting to go to waypoint A (or do X) because the mission agent (goal Y) is currently active and in a state trying to reach A (X)".

The information for The Boss comes directly from the structure of the organization, which is explicitly represented in the actor. Notice that the interface makes virtually no translation from the underlying situation to the view presented. Firstly, this means the interface was (more or less) trivial to build. Secondly, it means that under a very wide range of circumstances the interface presents an accurate picture of the underlying situation. A process that needs to do a non-trivial translation will sometimes lose or misinterpret information. Hence, the simplicity of this visualization gives us increased confidence in its accuracy.

Actor behavior can be monitored and dynamically modified by adding, removing, or suspending agents in the multi-agent system controlling the pilot/actor, as explained in more detail below. The agent structure shown in The Boss (Figure 6) reflects the structure of agents in the forest of trees described in Figure 3. The agents aligned at the leftmost edge are the top nodes of trees. The agents contracted by each top node are shown indented underneath.

When a scenario is started, an actor is created from the currently loaded specification file. The actor executes autonomously, unless the user makes changes.

4.2 Adding agents

To add an agent to the organization the user clicks the "Create contract" button (Figure 6), then uses a dialog box to select which of the existing agent types to add. If the agent type has parameters (for example waypoint coordinates), a dialog box will appear to ask the user to enter their values; the parameters are then instantiated in the agent's contract.

A new agent added to the MAS by the user creates a new root node (tree) in the forest. Each added agent is an added goal for the actor to pursue. The newly added goals/agents are handled in the same way as any other goals of the actor. It is possible that a whole hierarchy of agents is eventually contracted to achieve the added goal, e.g., if the user adds a manager to the organization, that manager will contract other agents in the normal manner.

In the usual case a newly added agent, or its contractees, will result in some change in actor behavior due to its involvement in negotiations. For example, if an agent for flying at some particular altitude is added to the agent organization of a simulated pilot, the aircraft might climb (or dive) to that altitude while continuing to pursue whatever other goals it has.

If the added agent's priority is low and there are agents with higher priority and conflicting goals, the addition of the agent may have no effect on the actions of the actor because the higher priority agents "win out" in the negotiation.

If an added agent (or its contractees) has sufficiently high priority so that it has a significant say in negotiations, it is possible that the achievement of other goals already in the system is delayed or never accomplished.
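The effect of priority on the negotiated outcome might be sketched as follows in Java. This is only an illustration under the assumption that the highest-priority proposal wins outright; the actual EASE negotiation protocol is richer than a single comparison.

```java
// Illustrative sketch only: priority-weighted negotiation over a single
// control value (here, commanded altitude). This is not the EASE negotiation
// protocol, just a sketch of how a high-priority addition can "win out".
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

record Proposal(String engineer, double altitudeMeters, int priority) {}

public class NegotiationDemo {
    // Pick the proposal from the highest-priority engineer.
    static Proposal negotiate(List<Proposal> proposals) {
        return proposals.stream()
                .max(Comparator.comparingInt(Proposal::priority))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<Proposal> proposals = new ArrayList<>();
        proposals.add(new Proposal("Hard deck", 500, 90));  // stay above the hard deck
        proposals.add(new Proposal("Patrol",    800, 50));  // mission altitude

        System.out.println("Before: " + negotiate(proposals));

        // The user adds a new altitude agent online; with high priority it wins,
        // with low priority it would have no visible effect on the actor.
        proposals.add(new Proposal("User altitude", 2000, 95));
        System.out.println("After:  " + negotiate(proposals));
    }
}
```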

4.3 Removing Agents

To remove an agent from the organization, the user selects the agent in The Boss window and clicks either the "End Success" or "End Failure" button. The removal of agents from the actor has the opposite effect of adding them, as expected. When an agent is deleted, the entire subtree rooted at that agent is removed; the goal it was pursuing is removed from the set of goals the actor is pursuing.

If the removed agent is not a top-level agent (i.e., it has a manager), it is necessary for the agent to inform its contractor that it will no longer be pursuing its assigned goal. The user decides which message is sent depending on how they want the contractor to react to the removal of the agent. In both the success and failure cases the messages that are sent are the same as would have been sent if the agent detected its own success or failure (rather than being removed by the user). The two options give the user the ability to choose the success or failure transitions of the manager (in cases where there is a difference).

The contracting agent uses the success/failure information to guide its future actions. Clearly, success messages indicate that the contractor should act assuming that the goal assigned to the (now defunct) contractee has been achieved. Conversely, failure messages imply that the contractor should assume that the contractee has failed and act accordingly. Precisely what the contractor will do has been specified by the designer via the use of success and failure transitions in the contractor's state machine.
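A minimal Java sketch of this removal step, with hypothetical names: the selected agent's subtree is pruned and the user-chosen end message is delivered to its contractor, which would then follow the designer's success or failure transition.

```java
// Illustrative sketch only (hypothetical names): removing an agent's subtree
// and telling its contractor whether to follow the success or failure transition.
import java.util.ArrayList;
import java.util.List;

enum EndStatus { SUCCESS, FAILURE }

class Node {
    final String name;
    Node contractor;                       // null for a top-level agent
    final List<Node> contractees = new ArrayList<>();
    Node(String name) { this.name = name; }

    Node contract(String childName) {
        Node child = new Node(childName);
        child.contractor = this;
        contractees.add(child);
        return child;
    }

    // Called when a contractee ends; the designer's state machine decides what
    // the contractor actually does with this information.
    void onContracteeEnded(Node child, EndStatus status) {
        System.out.println(name + ": " + child.name + " ended with " + status);
    }
}

public class RemovalDemo {
    // Remove the agent and its whole subtree; send the user-chosen end message.
    static void remove(Node agent, EndStatus status) {
        if (agent.contractor != null) {
            agent.contractor.contractees.remove(agent);
            agent.contractor.onContracteeEnded(agent, status);
        }
        agent.contractees.clear(); // the subtree goes with it
    }

    public static void main(String[] args) {
        Node mission = new Node("Patrol Mission");
        Node waypoint = mission.contract("GoToWaypointA");
        remove(waypoint, EndStatus.SUCCESS); // as if the waypoint had been reached
    }
}
```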

Analogous to the e&ect of adding of goals,the re-moval of goals could potentially allow previously fail-ing goals to succeed,i.e.,the removal of some engi-neers from a negotiation may allow other(previously lower priority)engineers to succeed where they were previously failing.

4.4 Suspending agents

Agents can also be suspended. The result is as if they had been removed, with the exception that the entire sub-tree can be reactivated at a later time if the user so chooses. While an agent is suspended, all agents contracted by it are also suspended. During the suspension, they do not participate in negotiations and thus have no effect on the behavior of the actor. Agents can be restarted, in which case they resume from the same state they were in when they were suspended. We will not discuss this facility in detail because it is functionally equivalent to removing and then adding the same agent to the actor.

4.5 Changing constants

Changing constants used in calculations, such as an agent's priority, can be done both before and during execution (online). One limitation of this type of change is that only those constants explicitly designed in at design time can be changed at run-time. For example, if there was no low fuel level constant defined, the user cannot use that constant to change an aspect of the behavior at runtime. The constant could, however, be included in a new agent specification and would then be available at runtime in the future.
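A sketch of that limitation in Java (hypothetical names, not the EASE implementation): constants must be declared at design time before they can be adjusted online.

```java
// Illustrative sketch only: a registry of named constants declared at design
// time that the user may change online. Only constants registered here can be
// adjusted at runtime, mirroring the limitation described above.
import java.util.HashMap;
import java.util.Map;

class ConstantRegistry {
    private final Map<String, Double> constants = new HashMap<>();

    // Design time: declare the constant so it exists at runtime.
    void declare(String name, double initialValue) { constants.put(name, initialValue); }

    // Runtime: the user may only modify constants that were declared.
    void set(String name, double value) {
        if (!constants.containsKey(name))
            throw new IllegalArgumentException("Unknown constant: " + name);
        constants.put(name, value);
    }

    double get(String name) { return constants.get(name); }
}

public class ConstantsDemo {
    public static void main(String[] args) {
        ConstantRegistry registry = new ConstantRegistry();
        registry.declare("smoothManagerPriority", 40.0);

        registry.set("smoothManagerPriority", 70.0);   // fine: declared at design time
        System.out.println(registry.get("smoothManagerPriority"));

        try {
            registry.set("lowFuelLevel", 0.2);          // fails: never declared
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```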

The condition specification window, shown in Figure 7, allows the user to graphically specify constants and complex functions used in the state machines to specify behavior. Additions, deletions and changes may be made online or offline and can be saved for later use.

Figure 7: A snapshot of the condition specification sub-system in EASE. Functions and constants used in agent calculations can be specified and modified using this interface.

5 Examples of User Interaction

In this section we present an extended example of an EASE actor for the simulated air-combat domain using Saab's tactical air-combat simulator TACSI [16]. The actor's behavior is fairly simple so we can focus on the adjustable autonomy capabilities available and the resulting changes in behavior.

First, a scenario file is loaded into TACSI and an agent specification file is loaded into EASE. Next, the simulation and EASE actor(s) are started. Figure 8 shows the central control panel in EASE with a start/stop (toggle) button. The button currently shows "Stop" since the actor has been started. The other EASE interface panels are launched by clicking the other buttons in this control panel. The name of the actor specification being used is displayed in the text field at the bottom of the window. In this case the specification is named Example.act.

Figure 1 shows the starting positions of the aircraft in this scenario: they are all in the air and at the same altitude. An EASE actor is controlling the aircraft labeled with a "1" in the upper left part of the screen. The other 2 (enemy) aircraft will fly from right (East) to left (West) at a fixed altitude and speed (they are not controlled by EASE actors in this scenario). The scenario occurs over the water near an island off the east coast of Sweden.

Figure 8: A snapshot of the "Start control" tool from which the actor and other tools are started.

The actor in this example has control over the altitude, heading and speed of the aircraft. The TACSI simulator includes sub-simulators to perform the very low level details of keeping the aircraft in the air. The dynamics model built into TACSI is realistic, restricting what the aircraft can do. For example, if the actor asks for an immediate 180 degree turn, the aircraft would not be able to turn that quickly; instead it will turn as quickly as the flight dynamics and fuel use allow.
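The idea that commanded and achieved behavior differ can be illustrated with a small Java sketch of a rate-limited heading command. The turn-rate value is made up for the example; TACSI's actual dynamics model is far more detailed.

```java
// Illustrative sketch only: the actor commands a heading, but the achieved
// heading is limited by a maximum turn rate, standing in for TACSI's
// realistic flight dynamics (the numbers here are made up).
public class TurnRateDemo {
    static final double MAX_TURN_RATE_DEG_PER_SEC = 9.0; // assumed limit

    // Move the current heading toward the commanded heading by at most the
    // turn rate times the elapsed time, taking the shorter way around.
    static double step(double currentDeg, double commandedDeg, double dtSeconds) {
        double error = ((commandedDeg - currentDeg + 540) % 360) - 180;
        double maxChange = MAX_TURN_RATE_DEG_PER_SEC * dtSeconds;
        double change = Math.max(-maxChange, Math.min(maxChange, error));
        return (currentDeg + change + 360) % 360;
    }

    public static void main(String[] args) {
        double heading = 0;      // flying north
        double commanded = 180;  // actor requests an immediate 180 degree turn
        for (int t = 0; t < 5; t++) {
            heading = step(heading, commanded, 1.0);
            System.out.printf("t=%ds heading=%.1f deg%n", t + 1, heading);
        }
    }
}
```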

The "Watch Agents" button in the start window launches "The Boss" control interface. Figure 9 shows the initial agent organization for this scenario, consisting of three agent hierarchies (trees). At the top is the AC Avoidance list manager. This manager will contract agents to avoid each aircraft the actor detects.

Second from the top is an agent called Hard deck, which is responsible for ensuring that the aircraft stays above a certain altitude. This agent has a high satisfaction value when the aircraft is above a fixed altitude. The Hard deck agent's priority is high.

The final hierarchy, headed by the Smooth Manager, is for maintaining a "smooth" trajectory for the aircraft, i.e., ensuring that turns are not too tight and the aircraft does not try to change speed or altitude too quickly. This agent results in much smoother changes than are enforced by the low-level aircraft dynamics routines in TACSI.

5.1 Adding an Agent/Goal

Figure 9: The Boss display showing the starting agent organization of the actor in the scenario.

The agents currently active (avoid aircraft, hard deck, and smooth) do not specify a mission for the actor, although they provide necessary safety-related goals. Figure 10 shows the user selecting a Patrol Mission agent after clicking on the "Create Contract" button on The Boss. The mission implemented by the Patrol Mission agent is to fly a triangular pattern through 3 waypoints (as shown in Figure 4). The patrol is flown indefinitely. The new Patrol Mission (PM) agent contracts parameterized agents to get to each of the mission's waypoints. When a waypoint is reached, a "success" message is sent to the PM agent, which then cancels the current contract and creates a new contract with a new waypoint agent for the next waypoint.
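The waypoint-cycling behavior of the Patrol Mission agent could be sketched in Java as follows (hypothetical names; the real agent is specified as a state machine in EASE rather than hand-written code):

```java
// Illustrative sketch only (hypothetical names): a patrol manager that
// contracts one waypoint agent at a time and, on a "success" message, cancels
// that contract and contracts an agent for the next waypoint, indefinitely.
import java.util.List;

class PatrolMission {
    private final List<String> waypoints;
    private int current = 0;

    PatrolMission(List<String> waypoints) { this.waypoints = waypoints; }

    String currentContract() { return "GoToWaypoint(" + waypoints.get(current) + ")"; }

    // Called when the contracted waypoint agent reports success.
    void onWaypointReached() {
        current = (current + 1) % waypoints.size(); // triangular pattern, flown forever
    }
}

public class PatrolDemo {
    public static void main(String[] args) {
        PatrolMission pm = new PatrolMission(List.of("A", "B", "C"));
        for (int step = 0; step < 5; step++) {
            System.out.println("Contracted: " + pm.currentContract());
            pm.onWaypointReached(); // simulate the "success" message
        }
    }
}
```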

Figure 11 shows The Boss after the Patrol Mission agent has been created. Now the actor has a reason to change course: to reach the first waypoint. After some negotiation, the actor turns the aircraft towards the waypoint.

At this point, an enemy aircraft is detected and agents to avoid a collision are created automatically within EASE. This does not involve user interaction, so we will skip over that part of the scenario.

5.2 Deleting an Agent/Goal

This section describes how an agent is deleted from an actor. Suppose the user now wants to take control of the mission; they can remove the existing Patrol Mission agent. This is achieved by selecting the agent and clicking on either the "End Fail" or "End Success" button. In this case it does not matter which button is used, since the agent has no contractor to send a message to anyway. Figure 12 shows a snapshot of The Boss after the Patrol Mission agent has been stopped.

Figure 10: The user has selected a Patrol Mission agent for addition to the agent organization.

Figure 11: A snapshot of The Boss after the Patrol Mission agent has been contracted.

We observe that the user did not need to plan the low level details of the aircraft's behavior. The AC Avoidance, Hard Deck and Smooth agents kept performing their functions so the user could focus only on the aspects of behavior that interested them. For example, if another aircraft had been detected it would have been avoided by the AC Avoidance manager without further involvement from the user.

5.3 Suspending Agents

In this section we show the effect of a particular agent on the overall actor behavior by suspending that agent. We will suspend the Smooth Manager. This is accomplished by selecting the Smooth Manager (SM) in The Boss window and clicking "Pause Agent". The influence of the SM agent is then removed. Figure 13 shows that the whole hierarchy headed by the SM has been suspended. A trace of the aircraft, starting when the SM was active, then with the SM suspended and more turns performed, is shown in Figure 14. The longer, smoother turn at the bottom was made while the SM was active, while the tighter turns at the top were made when the SM was suspended. In this case the effect of the SM is evident but not extremely strong. The designer might choose to modify the specification in the future by increasing the priority of the SM to make turns smoother.

Figure 12: A snapshot of The Boss after the Patrol Mission agent is deleted. The Smooth Manager's tree has been "collapsed", thus its contractees are not visible.

The examples above demonstrate the primary ways that a user can change the actor's behavior on-line: by adding, deleting, or suspending agents in the actor's MAS. A user may also modify behavior by changing the constants and equations that are used in the negotiations and specifications. The interface for changing these is shown in Figure 7.

6 Discussion

EASE actors have been created and tested flying aircraft in the TACSI environment as well as playing soccer in the RoboCup simulation league [21, 18]. For a good overview of RoboCup research, see Burkhard et al. [5].

Figure 13: A snapshot of The Boss with the Smooth Manager agent suspended.

Figure 14: A trace of the aircraft's path during the patrol. The turn at the bottom (one long turn) was made while the Smooth Manager was active and the tighter turn at the top was made while the Smooth Manager was paused.

During development, EASE controlled pilots were demonstrated in several scenarios to engineers and project managers at Saab. Their feedback was extremely helpful and encouraging during both development and testing. Adjustable autonomy proved useful in this domain for debugging the behavior of pilots and also for use in training scenarios.

In EASE there is no change in autonomy within the agent organization, i.e., a particular agent's autonomy relative to the other agents is constant. The autonomy of the actor relative to the user changes dynamically. Other architectures have AA between agents, e.g., [11, 1], but for EASE the AA is only between the actor and the user. Changing the priority of agents in EASE would be similar to changing the number of votes as in Barber et al. [1].

The Boss interface enables the user to modify the actor's behavior during execution, by adding, removing and suspending agents in the hierarchy. Several other information display windows are also available. It is the user's responsibility to determine what information is relevant to them at the current time. For example, The Boss display shows the entire agent hierarchy, but only some parts will be important for any particular purpose and the user needs to determine which are the important ones. Better interfaces could be designed that would call the user's attention to more important items, e.g., agents that are failing. User modeling could be used to enhance the interface and further assist the user.

The user's ability to define an agent's behavior with state machines and goal hierarchies proved to be relatively intuitive for domain experts, e.g., pilots. The ability to re-use parts of actors and modify the agent's goals during runtime proved to be useful for debugging behavior as well as creating more variety in scenarios. The online control in EASE was useful in both domains for debugging actor behavior and also for use in pilot training scenarios.

Several limitations of this approach to on-line control are evident. EASE gives the user an intuitive way to specify a broad range of behaviors, but it is not necessarily easy to specify all types of behaviors in this way. In particular, the current approach does not include planning (long) sequences of actions; it is similar to a behavior-based approach.

The user must both understand the actor's current behavior and be able to respond with modifications quickly enough for them to be of use during a simulation. Because human response times are significantly slower than software, this limits the possible applications, depending on how dynamic the environment is.

7 Summary and Future Work

This paper describes the on-line interaction and control available to the user of a multi-agent system acting within a complex, dynamic simulation environment. The user has broad control over the autonomy of the agent at run-time. Experimental results with simulated pilots for military aircraft demonstrated the ideas.

Future work involves refining and enhancing the human-computer interaction in the system. Enhanced user interaction has the potential to enable users to more quickly understand the actor's behavior and thus be able to modify it more easily and quickly. Additional testing with other simulations, for example search and rescue, would give further insight into the approach. Increasing the modes of adjustable autonomy, for example having an agent sometimes initiate a change in autonomy, would also be an interesting addition.

Acknowledgments

This work was supported by The Network for Real-Time Research and Education in Sweden (ARTES) project 0055-22, Saab Corporation's Operational Analysis Division, the Swedish National Board for Industrial and Technical Development (NUTEK) under grants 97-09677, 98-06280 and 99-6166, and the Center for Industrial Information Technology (CENIIT) under grant 99.7. Tack så mycket (thank you very much) to the engineers and managers at Saab Aerospace in Linköping.

References

[1] K. S. Barber, C. Martin, and R. McKay. A communication protocol supporting dynamic autonomy agreements. In Proceedings of the PRICAI 2000 Workshop on Teams with Adjustable Autonomy, pages 1-10, Melbourne, Australia, 2000.

[2] K. S. Barber and C. E. Martin. Agent autonomy: Specification, measurement, and dynamic adjustment. In Autonomy Control Software Workshop, Autonomous Agents '99, pages 8-15, 1999.

[3] K. Suzanne Barber, Cheryl E. Martin, Nancy E. Reed, and David Kortenkamp. Dimensions of adjustable autonomy. In Advances in Artificial Intelligence: PRICAI 2000 Workshop Reader, volume 2112 of Lecture Notes in Artificial Intelligence, pages 353-361. Springer Verlag, Berlin, 2001.

[4] Bruce Blumberg. Go with the flow: Synthetic vision for autonomous animated creatures. In Proceedings of the First International Conference on Autonomous Agents (Agents '97), pages 538-539, Marina del Rey, CA, Feb 1997.

[5] H.-D. Burkhard, D. Duhaut, M. Fujita, P. Lima, R. Murphy, and R. Rojas. The road to RoboCup 2050. IEEE Robotics and Automation Magazine, pages 31-38, June 2002.

[6] Paul Cohen, Michael Greenberg, David Hart, and Adele Howe. Trial by fire: Understanding the design requirements for agents in complex environments. AI Magazine, 10(3):32-48, 1989.

[7] M. Craft and C. Karr. Testing future weapons systems using CGF systems. In Proceedings of the Sixth Conference on Computer Generated Forces and Behavioral Representation, pages 141-150, Orlando, Florida, July 1996.

[8] G. A. Dorais, R. P. Bonasso, D. Kortenkamp, B. Pell, and D. Schreckenghost. Adjustable autonomy for human-centered autonomous systems. In Workshop on Adjustable Autonomy Systems at IJCAI-99, pages 16-35, August 1999.

[9] R. Dörner, Paul Grimm, and Christian Seiler. Agents and virtual environments for communication and decision training in emergencies. In Proceedings of the Fourth International Conference on Autonomous Agents (Agents 2000), pages 50-51, 2000.

[10] P. Doyle and B. Hayes-Roth. Agents in annotated worlds. In Proceedings of the Second International Conference on Autonomous Agents, pages 173-180, 1998.

[11] Christian Gerber, Jörg Siekmann, and Gero Vierke. Holonic multi-agent systems. Research Report RR-99-03, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, 1999.

[12] David Kortenkamp, Robert Burridge, Peter Bonasso, Debra Schreckenghost, and Mary Beth Hudson. An intelligent software architecture for semi-autonomous robot control. In Autonomy Control Software Workshop, Autonomous Agents '99, pages 36-43, 1999.

[13] D. Musliner and K. Krebsbach. Adjustable autonomy in procedural control for refineries. In AAAI Spring Symposium on Agents with Adjustable Autonomy, pages 81-87, Stanford, California, 1999.

[14] Richard Pew and Anne Mavor, editors. Modeling Human and Organizational Behavior. National Academy Press, Washington, D.C., 1998. National Research Council.

[15] Nancy E. Reed and Paul Scerri. Adjustable autonomy in simulated pilots. In International Joint Conference on Artificial Intelligence, Adjustable Autonomy Systems Workshop, pages 56-59, August 1999.

[16] Saab. TACSI - User Guide. Gripen, Operational Analysis, Modeling and Simulation, 5.2 edition, September 1998. In Swedish.

[17] Paul Scerri. Designing Agents for Systems with Adjustable Autonomy. PhD thesis, Computer and Information Sciences Department, Linköping University, December 2001.

[18] Paul Scerri, Nancy Reed, Tobias Wiren, Mikael Lönneberg, and Pelle Nilsson. Headless Chickens IV, volume 2019 of Lecture Notes in Artificial Intelligence, pages 493-496. Springer Verlag, Berlin, 2001.

[19] Paul Scerri and Nancy E. Reed. Creating complex actors with EASE. In Carles Sierra, Maria Gini, and Jeffrey S. Rosenschein, editors, Fourth International Conference on Autonomous Agents (Agents 2000), pages 142-143. ACM Press, June 2000.

[20] Paul Scerri and Nancy E. Reed. Engineering characteristics of autonomous agent architectures. Journal of Experimental and Theoretical Artificial Intelligence, 12(2):191-212, April 2000.

[21] Paul Scerri and Nancy E. Reed. Online control of agents using EASE: Implementing adjustable autonomy using teams. In Nancy E. Reed, editor, Workshop on Teams with Adjustable Autonomy, Sixth Pacific Rim International Conference on Artificial Intelligence (PRICAI 2000), pages 25-34, August 2000.

[22] Milind Tambe, W. Lewis Johnson, Randolph Jones, Frank Koss, John Laird, Paul Rosenbloom, and Karl Schwamb. Intelligent agents for interactive simulation environments. AI Magazine, 16(1):15-39, Spring 1995.
