
Optical Versus Video See-Through Head-Mounted Displays in Medical Visualization


Jannick P. Rolland

School of Optics/CREOL, and School of Electrical Engineering and Computer Science

University of Central Florida, Orlando, FL 32816-2700

Henry Fuchs

Department of Computer Science, University of North Carolina, Chapel Hill, NC 27599-3175

Abstract

We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide a context in which to discuss the technology by reviewing several medical applications of augmented-reality research efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and human-factors point of view. Finally, we point to potentially promising future developments of such devices, including eye tracking and multifocal-plane capabilities, as well as hybrid optical/video technology.

1 Introduction

One of the most promising and challenging future uses of head-mounted displays (HMDs) is in applications in which virtual environments enhance rather than replace real environments. This is referred to as augmented reality (Bajura, Fuchs, & Ohbuchi, 1992). To obtain an enhanced view of the real environment, users wear see-through HMDs to see 3-D computer-generated objects superimposed on their real-world view. This see-through capability can be accomplished using either an optical see-through HMD, as shown in figure 1, or a video see-through HMD, as shown in figure 2. We shall discuss the tradeoffs between optical and video see-through HMDs with respect to technological and human-factors issues, and discuss our experience designing, building, and testing these HMDs in medical visualization.

With optical see-through HMDs, the real world is seen through half-transparent mirrors placed in front of the user's eyes, as shown in figure 1. These mirrors are also used to reflect the computer-generated images into the user's eyes, thereby optically combining the real- and virtual-world views. With a video see-through HMD, the real-world view is captured with two miniature video cameras mounted on the headgear, as shown in figure 2, and the computer-generated images are electronically combined with the video representation of the real world (Edwards, Rolland, & Keller, 1993; State et al., 1994).

See-through HMDs were first developed in the 1960s. Ivan Sutherland's 1965 and 1968 optical see-through and stereo HMDs were the first computer-graphics-based HMDs; they used miniature CRTs as display devices, a mechanical tracker to provide head position and orientation in real time, and a hand-tracking device (Sutherland, 1965, 1968). While most of the developments in see-through HMDs aimed at military applications (Buchroeder, Seeley, & Vukobratovich, 1981; Furness, 1986; Droessler & Rotier, 1990; Barrette, 1992; Kandebo, 1988; Desplat, 1997), developments in 3-D scientific and medical visualization were initiated in the 1980s at the University of North Carolina at Chapel Hill (Brooks, 1992).

Presence, Vol. 9, No. 3, June 2000, 287-309

© 2000 by the Massachusetts Institute of Technology

Rolland and Fuchs 287

In this paper, we shall first review several medical-visualization applications developed using optical and video see-through technologies. We shall then discuss technological, human-factors, and perceptual issues related to see-through devices, some of which are employed in the various applications surveyed. Finally, we shall discuss what the technology may evolve to become.

2 Some Past and Current Applications of Optical and Video See-Through HMDs

The need for accurate visualization and diagnosis in health care is crucial. One of the main developments of medical care has been imaging. Since the discovery of X-rays in 1895 by Wilhelm Roentgen, and the first clinical X-ray application a year later by two Birmingham (UK) doctors, X-ray imaging and other medical imaging modalities (such as CT, ultrasound, and NMR) have emerged. Medical imaging allows doctors to view aspects of the interior architecture of living beings that were unseen before. With the advent of imaging technologies, opportunities for minimally invasive surgical procedures have arisen. Imaging and visualization can be used to guide needle-biopsy, laparoscopic, endoscopic, and catheter procedures. Such procedures do require additional training, because the physicians cannot see the natural structures that are visible in open surgery. For example, natural eye-hand coordination is not available during laparoscopic surgery. Visualization techniques associated with see-through HMDs promise to help restore some of the lost benefits of open surgery (for example, by projecting a virtual image directly on the patient, eliminating the need for a remote monitor). The following paragraphs briefly discuss examples of recent and current research conducted with optical see-through HMDs at the University of North Carolina at Chapel Hill (UNC-CH), the University of Central Florida (UCF), and the United Medical and Dental Schools of Guy's and Saint Thomas's Hospitals in England; video see-through at UNC-CH; and hybrid optical/video see-through at the University of Blaise Pascal in France.

Figure 1. Optical see-through head-mounted display (photo courtesy of Kaiser Electro-Optics).

Figure 2. A custom-optics video see-through head-mounted display developed at UNC-CH. Edwards et al. (1993) designed the miniature video cameras. The viewer was a large-FOV opaque HMD from Virtual Research.

A rigorous error analysis of an optical see-through HMD, targeted toward the application of optical see-through HMDs to craniofacial reconstruction, was conducted at UNC-CH (Holloway, 1995). The superimposition of CT skull data onto the head of the real patient would give the surgeons ''X-ray vision.'' The premise of that system was that viewing the data in situ allows surgeons to make better surgical plans, because they can see the complex relationships between bone and soft tissue more clearly. Holloway found that the largest registration error between real and virtual objects in optical see-through HMDs was caused by delays in presenting updated information associated with tracking. Extensive research in tracking has been pursued since at UNC-CH (Welch & Bishop, 1997).
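The impact of such tracking delay can be illustrated with a simple back-of-the-envelope calculation (our sketch, not taken from Holloway's analysis; the numbers below are illustrative):

```python
import math

def registration_error_mm(head_speed_deg_s, latency_s, object_distance_mm):
    """Approximate lateral misregistration caused by end-to-end latency.

    A head rotating at head_speed_deg_s during a latency of latency_s
    produces an angular error of (speed * latency) degrees; at
    object_distance_mm this subtends a lateral offset on the object.
    """
    angular_error_rad = math.radians(head_speed_deg_s * latency_s)
    return object_distance_mm * math.tan(angular_error_rad)

# A moderate 50 deg/s head turn with 100 ms of latency, object at 500 mm:
err = registration_error_mm(50.0, 0.100, 500.0)
# roughly 44 mm of apparent "swim" -- enormous at surgical scales
```

Even modest head motion thus dominates the error budget, which is why the later sections treat latency as the central technological issue.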

One of the authors and colleagues are currently developing an augmented-reality tool for the visualization of human anatomical joints in motion (Wright et al., 1995; Kancherla et al., 1995; Rolland & Arthur, 1997; Parsons & Rolland, 1998; Baillot & Rolland, 1998; Baillot et al., 1999). An illustration of the tool using an optical see-through HMD for visualization of anatomy is shown in figure 3. In the first prototype, we have concentrated on the positioning of the leg around the knee joint. The joint is accurately tracked optically by using three infrared video cameras to locate active infrared markers placed around the joint. Figure 4 shows the results of the optical superimposition of the graphical knee joint on a leg model, seen through one of the lenses of our stereoscopic bench-prototype display.

An optical see-through HMD coupled with optical tracking devices positioned along the knee joint of a model patient is used to visualize the 3-D computer-rendered anatomy directly superimposed on the real leg in motion. The user may further manipulate the joint and investigate the joint motions. From a technological standpoint, the field of view (FOV) of the HMD must be sufficient to capture the knee-joint region, and the tracking devices and image-generation system must be fast enough to track typical knee-joint motions during manipulation at interactive speed. The challenge of capturing accurate knee-joint motion using optical markers located on the external surface of the joint was addressed by Rolland and Arthur (1997). The application aims at developing a more advanced tool for teaching dynamic anatomy (advanced in the sense that the tool allows combination of the senses of touch and vision). We aim this tool specifically at imparting a better understanding of bone motions during radiographic positioning for the radiological sciences (Wright et al., 1995).

Figure 3. (a) The VRDA tool will allow superimposition of virtual anatomy on a model patient. (b) An illustration of the view of the HMD user (courtesy of Andrei State). (c) A rendered frame of the knee-joint bone structures, animated based on a kinematic model of motion developed by Baillot and Rolland (1998) that will be integrated in the tool.

Figure 4. First demonstration of the superimposition of a graphical knee joint on a leg model for use in the VRDA tool (1999): (a) a picture of the bench-prototype setup; a snapshot of the superimposition through one lens of the setup in (b) a diagonal view and (c) a side view.

To support the need for accurate motions of the knee joint in the Virtual Reality Dynamic Anatomy (VRDA) tool, an accurate kinematic model of joint motion, based on the geometry of the bones and collision-detection algorithms, was developed (Baillot & Rolland, 1998; Baillot et al., 1999). This component of the research is described in another paper of this special issue (Baillot et al., 2000). The dynamic registration of the leg with the simulated bones is reported elsewhere (Outters et al., 1999). High-accuracy optical tracking methods, carefully designed and calibrated HMD technology, and appropriate computer-graphics models for stereo-pair generation play an important role in achieving accurate registration (Vaissie & Rolland, 2000; Rolland et al., 2000).

At the United Medical and Dental Schools of Guy's and Saint Thomas's Hospitals in England, researchers are projecting simple image features derived from preoperative magnetic-resonance and computed-tomography images into the light path of a stereo operating microscope, with the goal of eventually allowing surgeons to visualize underlying structures during surgery. The first prototype used low-contrast color displays (Edwards et al., 1995). The current prototype uses high-contrast monochrome displays. The microscope is tracked intraoperatively, and the optics are calibrated (including zoom and focus) using a pinhole camera model. The intraoperative coordinate frame is registered using anatomical features and fiducial markers. The image features used in the display are currently segmented by hand. These include the outline of a lesion, the track of key nerves and blood vessels, and bone landmarks. This computer-guided surgery system can be said to be equivalent to an optical see-through system operating on a microscopic scale. In this case, the real scene is seen through magnifying optics, but the eye of the observer is still the direct detecting device, as in optical see-through.

One of the authors and colleagues at UNC-CH are currently developing techniques that merge video and graphical images for augmented reality. The goal is to develop a system displaying live, real-time ultrasound data properly registered in 3-D space on a scanned subject. This would be a powerful and intuitive visualization tool as well. The first application developed was the visualization of a human fetus during ultrasound echography. Figure 5 shows the real-time ultrasound images, which appear to be pasted in front of the patient's body rather than fixed within it (Bajura et al., 1992). Real-time imaging and visualization remains a challenge. Figure 6 shows a more recent, non-real-time implementation of the visualization, in which the fetus is rendered more convincingly within the body (State et al., 1994). Recently, knowledge from this video and ultrasound technology has also been applied to developing a visualization method for ultrasound-guided biopsies of breast lesions detected during mammography screening procedures (figure 7) (State et al., 1996). This application was motivated by the challenges we observed during a biopsy procedure while collaborating on research with Etta Pisano, head of the Mammography Research Group at UNC-CH. The goal was to be able to locate any tumor within the breast as quickly and accurately as possible. The technology of video see-through already developed was thus applied to this problem. The conventional approach to biopsy is to follow the insertion of a needle in the breast tissue on a remote monitor displaying real-time, 2-D ultrasound depth images. Such a procedure typically requires five insertions of the needle to maximize the chances of biopsying the lesion. In cases in which the lesion is located fairly deep in the breast tissue, the procedure is difficult and can be lengthy (one to two hours is not atypical for deep lesions). Several challenges remain to be overcome before the technology developed can actually be tested in the clinic, including accurate and precise tracking and a technically reliable HMD. The technology may have applications in guiding laparoscopy, endoscopy, or catheterization as well.

Figure 5. Real-time acquisition and superimposition of ultrasound slice images on a pregnant woman (1992).

At the University of Blaise Pascal in Clermont-Ferrand, France, researchers developed several augmented-reality visualization tools, based on hybrid optical and video see-through, to assist in surgery to correct scoliosis (abnormal curvature of the spinal column) (Peuchot, Tanguy, & Eude, 1994, 1995). This application was developed in collaboration with a surgeon of infantile scoliosis. The visualization system shown in figure 8 is, from an optics point of view, the simplest see-through system one may conceive. It is first of all fixed on a stand, and it is designed as a viewbox positioned above the patient.

Figure 6. Improved rendering of the fetus inside the abdomen (1994).

Figure 7. Ultrasound-guided biopsy: (a) laboratory setup during evaluation of the technology with Etta Pisano and Henry Fuchs; (b) a view through the HMD (1996).

Figure 8. Laboratory prototype of the hybrid optical/video see-through AR tool for guided scoliosis surgery developed by Peuchot at the University of Blaise Pascal, France (1995).


The surgeon positions himself above the viewbox to see the patient, and the graphical information is superimposed on the patient as illustrated in figure 9. The system includes a large monitor on which a stereo pair of images is displayed, as well as half-silvered mirrors that allow the superimposition of the real and virtual objects. The monitor is optically imaged onto a plane through the semi-transparent mirrors, and the spine under surgery is located within a small volume around that plane. An optical layout of the system is shown in figure 10.

In the above hybrid optical/video system, vertebrae are located in space by automatic analysis of the perspective view of the vertebrae from a single video camera. A standard algorithm, such as the inverse-perspective algorithm, is used to extract the 3-D information from the projections observed in the detector plane (Dhome et al., 1989). The method relies heavily on accurate video tracking of vertebral displacements. High-accuracy algorithms were developed to support the application, including the development of subpixel detectors and calibration techniques. The method has been validated on vertebral specimens, and accuracy of submillimeters in depth has been demonstrated (Peuchot, 1993, 1994).
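The flavor of such a 3-D-from-projections computation can be sketched with a Direct Linear Transform (DLT), which recovers a camera projection matrix from known 3-D model points and their observed 2-D images. This is a generic textbook stand-in, not Peuchot's algorithm (which, as noted below, avoids the pinhole model assumed here):

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """Estimate a 3x4 projection matrix P from n >= 6 correspondences.

    X : (n, 3) known 3-D points (e.g., markers on a vertebra model)
    x : (n, 2) their observed 2-D projections in the detector plane
    Solves x ~ P X in the least-squares sense via SVD.
    """
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        Xh = [Xw, Yw, Zw, 1.0]
        # Two linear constraints per correspondence (standard DLT rows).
        A.append([0.0, 0.0, 0.0, 0.0] + [-c for c in Xh] + [v * c for c in Xh])
        A.append(Xh + [0.0, 0.0, 0.0, 0.0] + [-u * c for c in Xh])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)   # null-space vector, up to scale

def project(P, X):
    """Apply projection matrix P to 3-D points X, returning 2-D points."""
    Xh = np.hstack([np.asarray(X, float), np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]
```

Given six or more non-coplanar marker points, the recovered P reprojects the model onto the image; pose (rotation and translation) can then be factored out of P if needed.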

The success of the method can be attributed to the fine calibration of the system, which, contrary to most systems, does not assume a pinhole camera model for the video camera. Moreover, having a fixed viewer with no optical magnification (contrary to typical HMDs) and a constant average plane of surgical operation reduces the complexity of problems such as registration and visualization. It can be shown, for example, that rendered depth errors are minimized when the virtual image plane formed through the optics (a simple semi-transparent mirror in Peuchot's case) is located in the average plane of the 3-D virtual object visualized (Rolland et al., 1995). Furthermore, the system avoids the challenging problems of tracking, optical-distortion compensation, and conflicts of accommodation and convergence related to HMDs (Robinett & Rolland, 1992; Rolland & Hopkins, 1993). Some tracking and distortion issues will be further discussed in sections 3.1 and 3.2, respectively. However, good registration of real and virtual objects in a static framework is a first step toward good calibration in a dynamic framework, and Peuchot's results are state of the art in this regard.

It is important to note that the method developed for this application employs a hybrid optical/video technology. In this case, video is essentially used to localize real objects in the surgical field, and optical see-through is used as the visualization tool for the surgeon. While the first system developed used one video camera, the methods have been extended to include multiple cameras, with demonstrated accuracy and precision of 0.01 mm (Peuchot, 1998). Peuchot chose the hybrid system over a video see-through approach because ''it allows the operator to work in his real environment with a perception space that is real.'' Peuchot judged this point to be critical in a medical application like surgery.

Figure 9. Graphics illustration of current and future use of computer-guided surgery according to Bernard Peuchot.

Figure 10. Optical scheme of the hybrid optical/video see-through AR tool shown in figure 8.

3 A Comparison of Optical and Video See-Through Technology

As suggested by the descriptions of the applications above, the main goal of augmented-reality systems is to merge virtual objects into the view of the real scene so that the user's visual system suspends disbelief and perceives the virtual objects as part of the real environment. Current systems are far from perfect, and system designers typically end up making a number of application-dependent tradeoffs. We shall list and discuss these tradeoffs in order to guide the choice of technology depending upon the type of application considered.

Both systems, optical and video, have two image sources: the real world and the computer-generated world. These image sources are to be merged. Optical see-through HMDs take what might be called a ''minimally obtrusive'' approach; that is, they leave the view of the real world nearly intact and attempt to augment it by merging a reflected image of the computer-generated scene into the view of the real world. Video see-through HMDs are typically more obtrusive in the sense that they block out the real-world view in exchange for the ability to merge the two views more convincingly. In recent developments, narrow-field-of-view video see-through HMDs have replaced large-field-of-view HMDs, thus reducing the area where the real world (captured through video) and the computer-generated images are merged to a small part of the visual scene. In any case, a fundamental consideration is whether the additional features afforded by video see-through HMDs justify the loss of the unobstructed real-world view.

Our experience indicates that there are many tradeoffs between optical and video see-through HMDs with respect to the technological and human-factors issues that affect designing, building, and assessing these HMDs. The specific issues are laid out in figure 11. While most of these issues could be discussed from both a technological and a human-factors standpoint (because the two are closely interrelated in HMD systems), we have chosen to classify each issue where it is most adequately addressed at this time, given the present state of the technology. For example, delays in HMD systems are addressed under technology because technological improvements are actively being pursued to minimize delays. Delays also certainly have an impact on various human-factors issues (such as the perceived location of objects in depth and user acceptance). Therefore, the multiple arrows shown in figure 11 indicate that the technological and human-factors categories are highly interrelated.

Figure 11. Outline of sections 3.1 and 3.2 of this paper.

3.1 Technological Issues

The technological issues for HMDs include latency of the system, resolution and distortion of the real scene, field of view (FOV), eyepoint matching of the see-through device, and engineering and cost factors. While we shall discuss properties of both optical and video see-through HMDs, it must be noted that, contrary to optical see-through HMDs, there are no commercially available products for video see-through HMDs. Therefore, discussions of such systems should be considered carefully, as findings may be particular to only a few current systems. Nevertheless, we shall provide as much insight as possible into what we have learned with such systems as well.

3.1.1 System Latency. An essential capability of see-through HMDs is properly registering a user's surroundings and the synthetic space. A geometric calibration between the tracking devices and the HMD optics must be performed. The major impediment to achieving registration is the gap in time, referred to as lag, between the moment when the HMD position is measured and the moment when the synthetic image for that position is fully rendered and presented to the user.

Lag is the largest source of registration error in most current HMD systems (Holloway, 1995). This lag in typical systems is between 60 ms and 180 ms. The head of a user can move during such a period of time, and the discrepancy between the perceived scene and the superimposed scene can destroy the illusion of the synthetic objects being fixed in the environment. The synthetic objects can ''swim'' around significantly, in such a way that they may not even seem to be part of the real object to which they belong. For example, in the case of ultrasound-guided biopsy, the computer-generated tumor may appear to be located outside the breast while tracking the head of the user. This swimming effect has been demonstrated and minimized by predicting HMD position instead of simply measuring positions (Azuma & Bishop, 1994). Current HMD systems are lag-limited as a consequence of tracker lag, the complexity of rendering, and the displaying of images. Tracker lag is often not the limiting factor in performance. If displaying the image is the limiting factor, novel display architectures supporting frameless rendering can help solve the problem (Bishop et al., 1994). Frameless rendering is a procedure for continuously updating a displayed image as information becomes available, instead of updating entire frames at a time. The tradeoffs between lag and image quality are currently being investigated (Scher-Zagier, 1997). If we assume that we are limited by the speed of rendering an image, eye-tracking capability may be useful for quickly updating information only around the gaze point of the user (Thomas et al., 1989; Rolland, Yoshida, et al., 1998; Vaissie & Rolland, 1999).
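As a minimal illustration of position prediction, consider a crude linear extrapolation from the two most recent tracker samples (our sketch only; Azuma and Bishop's predictor is far more sophisticated, incorporating inertial sensing and filtering):

```python
def predict_pose(t_query, samples):
    """Linearly extrapolate head position to the expected display time.

    samples : list of (timestamp, position_tuple) pairs from the
    tracker, newest last. Estimates velocity from the last two
    samples and extrapolates to t_query (the time the frame will
    actually be shown), rather than rendering for a stale pose.
    """
    (t0, p0), (t1, p1) = samples[-2], samples[-1]
    velocity = [(b - a) / (t1 - t0) for a, b in zip(p0, p1)]
    dt = t_query - t1
    return [p + v * dt for p, v in zip(p1, velocity)]

# Tracker reports x = 10.0 at t = 0.00 s and x = 10.5 at t = 0.01 s;
# predict the pose for a frame displayed at t = 0.05 s:
pred = predict_pose(0.05, [(0.00, (10.0,)), (0.01, (10.5,))])
# -> [12.5]
```

Pure extrapolation amplifies tracker noise, which is one reason practical predictors filter the measurements rather than differencing raw samples.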

One of the major advantages of video see-through HMDs is the potential capability of reducing the relative latency between the 2-D real and synthetic images, as a consequence of both types of images being digital (Jacobs et al., 1997). Manipulation of the images in space and in time is applied to register them. Three-dimensional registration is computationally intensive, if at all robust, and challenging at interactive speed. The spatial approach to forcing registration in video see-through systems is to correct registration errors by imaging landmark points in the real world and registering virtual objects with respect to them (State et al., 1996). One approach to eliminating temporal delays between the real and computer-generated images in such a case is to capture a video image and draw the graphics on top of the video image. Then the buffer is swapped, and the combined image is presented to the HMD user. In such a configuration, no delay apparently exists between the real and computer-generated images. If the actual latency of the computer-generated image is large with respect to the video image, however, it may cause sensory conflicts between vision and proprioception, because the video images no longer correspond to the real-world scene. Any manual interactions with real objects could suffer as a result.
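The capture-then-overdraw scheme amounts to a per-pixel composite in which any graphics pixel not marked transparent replaces the corresponding video pixel. A minimal sketch, with frames as nested lists of pixel values (real systems do this in the frame buffer or with chroma keying):

```python
def composite(video_frame, graphics_frame, transparent=0):
    """Overlay rendered graphics on a captured video frame.

    Pixels of graphics_frame equal to `transparent` let the video
    show through; all other graphics pixels replace the video,
    mimicking the draw-graphics-on-top-then-swap scheme: the video
    frame is held until the graphics for it are ready, so the two
    streams leave the compositor with zero relative delay.
    """
    return [
        [g if g != transparent else v for v, g in zip(vrow, grow)]
        for vrow, grow in zip(video_frame, graphics_frame)
    ]

# A 2x2 video frame with one synthetic pixel (value 9) drawn over it:
merged = composite([[1, 2], [3, 4]], [[0, 9], [0, 0]])
# -> [[1, 9], [3, 4]]
```

Note that the zero *relative* delay comes at the cost of added *absolute* delay of the video stream, which is exactly the vision/proprioception conflict discussed above.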

294PRESENCE:VOLU ME9,NUM BER3

Another approach to minimizing delays in video see-through HMDs is to delay the video image until the computer-generated image is rendered. Bajura and Neumann (1995) applied chroma keying, for example, to dynamically image a pair of red LEDs placed on two real objects (one stream) and then registered two virtual objects with respect to them (second stream). By tracking more landmarks, better registration of real and virtual objects may be achieved (Tomasi & Kanade, 1991). The limitation of this approach is the attempt to register 3-D scenes using 2-D constraints. If the user rotates his head rapidly, or if a real-world object moves, there may be no ''correct'' transformation for the virtual-scene image. To align all the landmarks, one must either allow errors in the registration of some of the landmarks or perform a nonlinear warping of the virtual scene that may create undesirable distortions of the virtual objects. The nontrivial solution to this problem is to increase the speed of the system until scene changes between frames are small and can be approximated with simple 2-D transformations.
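Registering via a simple 2-D transformation over tracked landmarks can be sketched as a least-squares similarity fit (scale, rotation, translation); with more than two landmarks the fit distributes residual error among them, which is precisely the compromise described above. This is an illustrative sketch, not the cited authors' implementation:

```python
import numpy as np

def fit_similarity_2d(src, dst):
    """Least-squares 2-D similarity transform mapping src points
    onto dst points: dst ~ scale * R @ p + t.

    Uses the complex-number formulation: each point (x, y) becomes
    x + iy, and the best scale*rotation is a single complex ratio.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    s, d = src - mu_s, dst - mu_d          # centered point sets
    zs = s[:, 0] + 1j * s[:, 1]
    zd = d[:, 0] + 1j * d[:, 1]
    a = (zs.conj() @ zd) / (zs.conj() @ zs)  # scale * e^{i*theta}
    scale, theta = abs(a), np.angle(a)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = mu_d - scale * R @ mu_s
    return scale, R, t
```

When the head turns quickly, no single (scale, R, t) fits all landmarks, and the residuals of this fit are exactly the registration errors one must either accept or remove with nonlinear warping.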

In a similar vein, it is also important to note that the video view of the real scene will normally have some lag due to the time it takes to acquire and display the video images. Thus, the image in a video see-through HMD will normally be slightly delayed with respect to the real world, even without adding delay to match the synthetic images. This delay may increase if an image-processing step is applied to either enforce registration or perform occlusion. The key issue is whether the delay in the system is too great for the user to adapt to it (Held & Durlach, 1987).

Systems using optical see-through HMDs have no means of introducing artificial delays into the real scene. Therefore, the system may need to be optimized for low latency, perhaps less than 60 ms, where predictive tracking can be effective (Azuma & Bishop, 1994). For any remaining lag, the user may have to limit his actions to slow head motions. Applications in which the speed of movement can be readily controlled, such as the VRDA tool described earlier, can benefit from optical see-through technology (Rolland & Arthur, 1997). The advantage of having no artificial delays is that real objects will always be where they are perceived to be, and this may be crucial for a broad range of applications.

3.1.2 Real-Scene Resolution and Distortion. If real-scene resolution refers to the resolution of the real-scene object, the best real-scene resolution that a see-through device can provide is that perceived with the naked eye under unit magnification of the real scene. Certainly, under microscopic observation as described by Hill (Edwards et al., 1995), the best scene resolution goes beyond that obtained with the naked eye. It is also assumed that the see-through device has no image-processing capability.

A resolution extremely close to that obtained with the naked eye is easily achieved with a nonmicroscopic optical see-through HMD, because the optical interface to the real world is simply a thin parallel plate (such as a glass plate) positioned between the eyes and the real scene. Such an interface typically introduces only very small amounts of optical aberration into the real scene. For example, for a real point object seen through a 2 mm planar parallel plate placed in front of a 4 mm dia. eye pupil, the diffusion spot due to spherical aberration would subtend a 2 x 10^-7 arc-minute visual angle for a point object located 500 mm away. Spherical aberration is one of the most common and simple aberrations in optical systems that lead to blurring of the images. Such a degradation of image quality is negligible compared to the ability of the human eye to resolve a visual angle of 1 minute of arc. Similarly, planar plates introduce low distortion of the real scene, typically below 1%. There is no distortion only for the chief rays that pass through the plate parallel to its normal.¹
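For comparison against the eye's 1 arc-minute limit, the visual angle subtended by a blur spot of a given size follows directly from geometry (a small illustrative helper; the 0.15 mm spot size below is our example, not a figure from the text):

```python
import math

def visual_angle_arcmin(spot_size_mm, distance_mm):
    """Visual angle subtended by a spot of spot_size_mm viewed at
    distance_mm, in minutes of arc: 2 * atan(size / (2 * distance))."""
    angle_rad = 2.0 * math.atan(spot_size_mm / (2.0 * distance_mm))
    return math.degrees(angle_rad) * 60.0

# A 0.15 mm blur spot at 500 mm subtends about 1 arc-minute,
# i.e., right at the approximate resolution limit of the eye:
angle = visual_angle_arcmin(0.15, 500.0)
```

By this measure, the plate-induced blur quoted above (on the order of 10^-7 arc-minute) is many orders of magnitude below anything the eye could detect.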

In the case of a video see-through HMD, real-scene images are digitized by miniature cameras (Edwards et al., 1993) and converted into an analog signal that is fed to the HMD. The images are then viewed through the HMD viewing optics, which typically use an eyepiece design. The perceived resolution of the real scene can thus be limited by the resolution of the video cameras or of the HMD viewing optics. Currently available miniature video cameras typically have a resolution of 640 x 480, which is also near the resolution limit of the miniature displays currently used in HMDs.² Depending upon the magnification and the field of view of the viewing optics, various effective visual resolutions may be reached. While the miniature displays and the video cameras currently seem to limit the resolution of most systems, such performance may improve with higher-resolution detectors and displays.

1. A chief ray is defined as a ray that emanates from a point in the FOV and passes through the center of the pupils of the system. The exit pupil in an HMD is the entrance pupil of the human eye.
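The effective visual resolution mentioned above can be estimated by spreading the display's pixels across the FOV (a uniform-sampling approximation of our own; real viewing optics distribute pixels non-uniformly):

```python
def arcmin_per_pixel(fov_deg, pixels):
    """Average angular subtense of one display pixel, in arc-minutes.

    Useful for comparing an HMD's effective resolution against the
    ~1 arc-minute resolving power of the human eye.
    """
    return fov_deg * 60.0 / pixels

# A 640-pixel-wide display magnified to span a 50 deg horizontal FOV:
res = arcmin_per_pixel(50.0, 640)
# about 4.7 arcmin per pixel -- several times coarser than the eye
```

The same display spread over only a 15 deg FOV yields about 1.4 arcmin per pixel, which is why narrower-FOV eyepieces deliver a visibly sharper scene from the same 640 x 480 source.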

In assessing video see-through systems, one must distinguish between narrow- and wide-FOV systems. Large-FOV (>50 deg.) eyepiece designs are known to be extremely limited in optical quality as a consequence of factors such as the optical aberrations that accompany large FOVs, pixelization that may become more apparent under large magnification, and the exit-pupil size that must accommodate the size of the pupils of a person's eyes. Thus, even with higher-resolution cameras and displays, video see-through HMDs may remain limited in their ability to provide a real-scene view of high resolution if conventional eyepiece designs continue to be used. In the case of small-to-moderate-FOV (10 deg. to 20 deg.) video see-through HMDs, the resolution is still typically much less than the resolving power of the human eye.

A new technology, referred to as tiling, may overcome some of the current limitations of conventional eyepiece designs for large FOVs (Kaiser, 1994). The idea is to use multiple narrow-FOV eyepieces coupled with miniature displays to completely cover (or tile) the user's FOV. Because the individual eyepieces have a fairly narrow FOV, higher resolution (nevertheless currently less than that of the human visual system) can be achieved. One of the few demonstrations of high-resolution, large-FOV displays is the tiled display. Challenges include the minimization of seams in assembling the tiles and the rendering of multiple images at interactive speed. The tiled displays certainly bring new practical and computational challenges that need to be confronted. If a see-through capability is desired (for example, to display virtual furniture in an empty room), it is currently unclear whether the technical problems associated with providing overlay can be solved.

Theoretically, distortion is not a problem in video see-through systems, because the cameras can be designed to compensate for the distortion of the optical viewer, as demonstrated by Edwards et al. (1993). However, if the goal is to merge real and virtual information, as in ultrasound echography, having a warped real scene significantly increases the complexity of the synthetic-image generation (State et al., 1994). Real-time video correction can be used at the expense of an additional delay in the image-generation sequence. An alternative is to use low-distortion video cameras at the expense of a narrower FOV, merge unprocessed real scenes with virtual scenes, and warp the merged images. Warping can be done using (for example) real-time texture mapping to compensate for the distortion of the HMD viewing optics as a last step (Rolland & Hopkins, 1993; Watson & Hodges, 1995).
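Such a texture-mapping warp can be sketched with the common first-order radial-distortion model (the model and coefficient below are illustrative assumptions on our part, not the cited systems' calibration):

```python
def predistort(x, y, k1):
    """Radially pre-warp a normalized image coordinate so that the
    viewing optics' opposite distortion cancels it on the way out.

    Uses the first-order radial model r' = r * (1 + k1 * r^2),
    applied here as the final texture-mapping pass on the merged
    (real + virtual) image before it reaches the eyepiece.
    """
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2
    return x * factor, y * factor

# A point halfway to the edge of the normalized image, k1 = 0.2:
xw, yw = predistort(0.5, 0.0, 0.2)
# -> (0.525, 0.0): pushed outward to pre-cancel barrel distortion
```

Because the warp is applied after merging, the real and virtual content stay registered with each other; the delay cost noted above is the extra per-frame warping pass.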

The need for high real-scene resolution is highly task dependent. Demanding tasks such as surgery or engineering training, for example, may not be able to tolerate much loss in real-scene resolution. Because the large-FOV video see-through systems that we have experienced are seriously limited in terms of resolution, narrow-FOV video see-through HMDs are currently preferred. Independently of resolution, an additional critical issue in aiming toward narrow-FOV video see-through HMDs is the need to match the viewpoint of the video cameras with the viewpoint of the user. Matching is challenging with large-FOV systems. Also, methods for matching video and real scenes for large-FOV tiled displays must be developed. At this time, considering the growing availability of high-resolution flat-panel displays, we foresee that the resolution of see-through HMDs could gradually increase for both small- and large-FOV systems. The development and marketing of miniature high-resolution technology must be undertaken to achieve resolutions that match that of human vision.

3.1.3 Field of View. A generally challenging issue of HMDs is providing the user with an adequate FOV for a given application. For most applications, increasing

2. The number of physical elements is typically 640 × 480. One can use signal processing to interpolate between lines to get higher resolutions.

296 PRESENCE: VOLUME 9, NUMBER 3

the binocular FOV means that fewer head movements are required to perceive an equivalently large scene. We believe that a large FOV is especially important for tasks that require grabbing and moving objects, and that it provides increased situation awareness when compared to narrow-FOV devices (Slater & Wilbur, 1997). The situation with see-through devices is somewhat different from that of fully opaque HMDs in that the aim of using the technology is different from that of immersing the user in a virtual environment.

3.1.3.1 Overlay and Peripheral FOV. The term overlay FOV is defined as the region of the FOV where graphical information and real information are superimposed. The peripheral FOV is the real-world FOV beyond the overlay FOV. For immersive opaque HMDs, no such distinction is made; one refers simply to the FOV. It is important to note that the overlay FOV may need to be narrow only for certain augmented-reality applications. For example, in a visualization tool such as the VRDA tool, only the knee-joint region is needed in the overlay FOV. In the case of video HMD-guided breast biopsy, the overlay FOV could be as narrow as the synthesized tumor; the real scene need not necessarily be synthesized. The available peripheral FOV, however, is critical for situation awareness and is most often required for various applications, whether it is provided as part of the overlay or around the overlay. If provided around the overlay, the transition from real to virtual imagery must be made as seamless as possible. This is an investigation that has not yet been addressed in video see-through HMDs.

Optical see-through HMDs typically provide from 20 deg. to 60 deg. of overlay FOV via the half-transparent mirrors placed in front of the eyes, a characteristic that may seem somewhat limited but is promising for a variety of medical applications whose working visualization distance is within arm's reach. Larger FOVs have been obtained, up to 82.5 × 67 deg., at the expense of reduced brightness, increased complexity, and massive, expensive technology (Welch & Shenker, 1984). Such FOVs may have been required for performing navigation tasks in real and virtual environments but are likely not required in most augmented-reality applications. Optical see-through HMDs, however, whether or not they have a large overlay FOV, have typically been designed open enough that users can use their peripheral vision around the device, thus increasing the total real-world FOV to closely match one's natural FOV. An annulus of obstruction usually results from the mounts of the thin see-through mirror, similar to the way that our vision may be partially occluded by a frame when wearing eyeglasses. In the design of video see-through HMDs, a difficult engineering task is matching the frustum of the eye with that of the camera (as we shall discuss in section 3.1.4).
While such matching is not so critical for far-field viewing, it is important for near-field visualization, as in various medical visualizations. This difficult matching problem has led to the consideration of narrower-FOV systems. A compact, 40 × 30 deg. FOV design, intended for an optical see-through HMD but adaptable to video see-through, was proposed by Manhart, Malcolm, & Frazee (1993). Video see-through HMDs, on the other hand, can provide (in terms of a see-through FOV) the FOV displayed with opaque-type viewing optics, which typically ranges from 20 deg. to 90 deg. In such systems, where the peripheral FOV of the user is occluded, the effective real-world FOV is often smaller than in optical see-through systems. When using a video see-through HMD in a hand-eye coordination task, we found in a recent human-factor study that users needed to perform larger head movements to scan an active field of vision than when performing the task with the unaided eye (Biocca & Rolland, 1998). We predict that the need to make larger head movements would not arise as much with see-through HMDs with equivalent overlay FOVs but larger peripheral FOVs, because users are provided with increased peripheral vision, and thus additional information, to more naturally perform the task.

3.1.3.2 Increasing Peripheral FOV in Video See-Through HMDs. An increase in peripheral FOV in video see-through systems can be accomplished in two ways: in a folded optical design, as used for optical see-through HMDs but with an opaque mirror instead of a half-transparent mirror, or in a nonfolded design with nonenclosed mounts. The latter calls for innovative optomechanical design because heavier optics


have to be supported than in either optical or folded video see-through designs. Folded systems require only a thin mirror in front of the eyes, with the heavier optical components placed around the head. However, the tradeoff with folded systems is a significant reduction in the overlay FOV.

3.1.3.3 Tradeoff of Resolution and FOV. While the resolution of a display in an HMD is defined in the graphics community by the number of pixels, the relevant measure of resolution is the number of pixels per angular FOV, which is referred to as angular resolution. Indeed, what matters for usability is the angular subtense of a pixel at the eye of the HMD user. Most current high-resolution HMDs achieve higher resolution at the expense of a reduced FOV. That is, they use the same miniature, high-resolution CRTs but with optics of lower magnification in order to achieve higher angular resolution. This results in a FOV that is often narrow. The approach that employs large high-resolution displays, or light valves, and transports the high-resolution images to the eyes by imaging optics coupled to a bundle of optical fibers achieves high resolution at fairly large FOVs (Thomas et al., 1989). The currently proposed solutions that improve resolution without trading FOV are tiling techniques, high-resolution inset displays (Fernie, 1995; Rolland, Yoshida, et al., 1998), and projection HMDs (Hua et al., 2000).

Projective HMDs differ from conventional HMDs in that projection optics are used instead of eyepiece optics to project real images of miniature displays into the environment. A screen placed in the environment reflects the images back to the eyes of the user. Projective HMDs have been designed and demonstrated, for example, by Kijima and Ojika (1997) and Parsons and Rolland (1998). Kijima used a conventional projection screen in his prototype. Parsons and Rolland developed a first-prototype projection HMD system to demonstrate that an undistorted virtual 3-D image could be rendered when projecting a stereo pair of images on a bent sheet of microretroreflector cubes. The first proof-of-concept system is shown in figure 12. A comprehensive investigation of the optical characteristics of projective HMDs is given by Hua et al. (2000). We are also developing the

next-generation prototypes of the technology using custom-made miniature lightweight optics. The system presents various advantages over conventional HMDs, including distortion-free images, correct occlusion of virtual objects by interposed real objects, no image crosstalk for multiple participants in the virtual world, and the potential for a wide FOV (up to 120 deg.).

3.1.4 Viewpoint Matching. In video see-through HMDs, the camera viewpoint (that is, the entrance pupil) must be matched to the viewpoint of the observer (the entrance pupil of the eye). The viewpoint of a camera or eye is equivalent to the center of projection used in the computer graphics model that computes the stereo images, and is taken here to be the center of the entrance pupil of the eye or camera (Vaissie & Rolland, 2000). In earlier video see-through designs, Edwards et al. (1993) investigated ways to mount the cameras to minimize errors in viewpoint matching. Minimizing, rather than exactly matching, the viewpoints was a consequence of working with wide-FOV systems. If the viewpoints of the cameras do not match the viewpoints of the eyes, the user experiences a spatial shift in the perceived scene that may lead to perceptual anomalies (as further

Figure 12. Proof-of-concept prototype of a projection head-mounted display with microretroreflector sheeting (1998).


discussed under human-factors issues; Biocca & Rolland, 1998). Error analysis should then be conducted in such cases to match the needs of the application.

For cases in which the FOV is small (less than approximately 20 deg.), exact matching of viewpoints is possible. Because the cameras cannot be physically placed at the actual eyepoints, mirrors can be used to fold the optical path (much like a periscope) to make the cameras' viewpoints correspond to the real eyepoints, as shown in figure 13 (Edwards et al., 1993). While such a geometry solves the problem of the shift in viewpoint, it increases the length of the optical path, which reduces the field of view, for the same reason that optical see-through HMDs tend to have smaller fields of view. Thus, video see-through HMDs must either trade their large FOVs for correct real-world viewpoints or require the user to adapt to the shifted viewpoints, as further discussed in section 3.2.3.
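To get a feel for the size of the spatial shift an unmatched viewpoint produces, the toy geometry below computes the difference in bearing to a scene point as seen from the eye versus from a camera mounted a fixed distance in front of it. All values (the 50 mm forward offset, the point positions) are illustrative assumptions, not measurements from any particular HMD.

```python
import math

def angular_shift_deg(offset_forward_m: float,
                      lateral_m: float,
                      depth_m: float) -> float:
    """Difference between the direction to a point as seen from the eye
    and from a camera displaced forward by offset_forward_m
    (illustrative pinhole geometry)."""
    from_eye = math.atan2(lateral_m, depth_m)
    from_camera = math.atan2(lateral_m, depth_m - offset_forward_m)
    return math.degrees(from_camera - from_eye)

# A camera 50 mm in front of the eye, point 0.2 m off-axis:
for depth in (0.5, 1.0, 5.0):
    print(f"depth {depth} m: "
          f"{angular_shift_deg(0.05, 0.2, depth):.2f} deg shift")
# The shift is largest at near-field working distances, which is why
# exact matching matters most for close-range medical visualization.
```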

Finally, correctly mounting the video cameras in a video see-through HMD requires that the HMD have an interpupillary distance (IPD) adjustment. Given the IPD of a user, the lateral separation of the video cameras must then be adjusted to that value in order for the views obtained by the video cameras to match those that would have been obtained with the naked eyes. If one were to account for eye movements in video see-through HMDs, the level of complexity in slaving the camera viewpoint to the user viewpoint would be greatly increased. To our knowledge, such complexity has not yet been considered.

3.1.5 Engineering and Cost Factors. HMD designs often suffer from fairly low resolution, limited FOV, poor ergonomic design, and excessive weight. A good ergonomic design requires an HMD whose weight is similar to that of a pair of eyeglasses, or which folds around the user's head so that the device's center of gravity falls near the center of rotation of the head (Rolland, 1994). The goal here is maximum comfort and usability. Reasonably lightweight HMD designs currently suffer from narrow FOVs, on the order of 20 deg. To our knowledge, at present, no large-FOV stereo see-through HMD of any type is comparable in weight to a pair of eyeglasses. Rolland predicts that this could be achieved with the emerging technology of projection HMDs (Rolland, Parsons, et al., 1998). However, it must be noted that such technology may not be well suited to all visualization schemes, as it requires a projection screen somewhere in front of the user that is not necessarily attached to the user's head.

With optical see-through HMDs, the folding can be accomplished with either an on-axis or an off-axis design. Off-axis designs are more elegant and also far more attractive because they eliminate the ghost images that currently plague users of on-axis HMDs (Rolland, 2000). Off-axis designs are not commercially available because very few prototypes have been built (and those that have been built are classified) (Shenker, 1998).

Figure 13. A 10-degree-FOV video see-through HMD: Dglasses developed at UNC-CH. Lipstick cameras and a double-fold mirror arrangement were used to match the viewpoints of the camera and user (1997).

Moreover, off-axis systems are difficult to design and are thus expensive to build as a result of their off-axis components (Shenker, 1994). A nonclassified off-axis design has been produced by Rolland (1994, 2000). Several factors


(including cost) have also hindered the construction of a first prototype. New generations of computer-controlled fabrication and testing are expected to change this trend.

Since their beginning, high-resolution HMDs have been CRT based. Early systems were even monochrome, but color CRTs using color wheels or frame-sequential color have been fabricated and incorporated into HMDs (Allen, 1993). Five years ago, we might have thought that, by today, high-resolution color flat-panel displays would be the first choice for HMDs. While this is slowly happening, miniature CRTs are not fully obsolete. The current optimism, however, is prompted by new technologies such as reflective LCDs, microelectromechanical systems (MEMS)-based displays, laser-based displays, and nanotechnology-based displays.

3.2 Human-Factor and Perceptual Issues

Assuming that many of the technological challenges described have been addressed and high-performance HMDs can be built, a key human-factor issue for see-through HMDs is that of user acceptance and safety, which will be discussed first. We shall then discuss the technicalities of perception in such displays. The ultimate see-through display is one that provides quantitative and qualitative visual representations of scenes that conform to a predictive model (for example, conform to that given by the real world, if that is the intention). Issues include the accuracy and precision of the rendered and perceived locations of objects in depth, the accuracy and precision of the rendered and perceived sizes of real and virtual objects in a scene, and the need for an unobstructed peripheral FOV (which is important for many tasks that require situation awareness and the simple manipulation of objects and accessories).

3.2.1 User Acceptance and Safety. A fair question for either type of technology is "will anyone actually wear one of these devices for extended periods?" The answer will doubtless be specific to the application and the technology included, but it will probably center upon whether the advanced capabilities afforded by the technology offset the problems induced by the encumbrance and sensory conflicts associated with it. In particular, one of us thinks that video see-through HMDs may be met with resistance in the workplace because they remove the direct, real-world view in order to augment it. This issue of trust may be difficult to overcome for some users. If wide-angle-FOV video see-through HMDs are used, this problem is exacerbated in safety-critical applications. A key difference in such applications may turn out to be the failure mode of each technology. A technology failure in the case of optical see-through HMDs may leave the subject without any computer-generated images but still with the real-world view. In the case of video see-through, it may leave the user with the complete suppression of both the real-world view and the computer-generated view. However, the issue may have been greatly lessened because the video view occupies such a small fraction (approximately 10 deg. of visual angle) of the scene in recent developments of the technology. This is especially true of flip-up and flip-down devices such as that developed at UNC-CH and shown in figure 13. Image quality and its tradeoffs are definitely critical issues related to user acceptance for all types of technology. In a personal communication, Martin Shenker, a senior optical engineer with more than twenty years of experience designing HMDs, pointed out that there are currently no standards of image quality and technology specifications for the design, calibration, and maintenance of HMDs. This is a current concern at a time when the technology may be adopted in various medical visualizations.

3.2.2 Perceived Depth. 3.2.2.1 Occlusion. The ability to perform occlusion in see-through HMDs is an important point of comparison between optical and video see-through HMDs. One of the most important differences between these two technologies is how they handle the depth cue known as occlusion (or interposition). In real life, an opaque object can block the view of another object so that part or all of it is not visible. While there is no problem in making computer-generated objects occlude each other in either system, it is considerably more difficult to make real objects occlude


virtual objects (and vice versa) unless the real world for an application is predefined and has been modeled in the computer. Even then, one would need to know the exact location of the user with respect to that real environment. This is not the case in most augmented-reality applications, in which the real world is constantly changing and on-the-fly acquisition is all the information one will ever have of the real world. Occlusion is a strong monocular cue to depth perception and may be required in certain applications (Cutting & Vishton, 1995).

In both systems, computing occlusion between the real and virtual scenes requires a depth map of both scenes. A depth map of the virtual scene is usually available (for z-buffered image generators), but a depth map of the real scene is a much more difficult problem. While one could create a depth map in advance from a static real environment, many applications require on-the-fly image acquisition of the real scene. Assuming the system has a depth map of the real environment, video see-through HMDs are perfectly positioned to take advantage of this information. They can, on a pixel-by-pixel basis, selectively block the view of either scene or even blend them to minimize edge artifacts. One of the chief advantages of video see-through HMDs is that they handle this problem so well.
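The pixel-by-pixel decision reduces to a comparison of the two z-buffers, as in the minimal NumPy sketch below (a real compositor would also blend at the mask edges to soften artifacts; the toy scene values are illustrative).

```python
import numpy as np

def composite(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel occlusion resolution for a video see-through view:
    keep whichever scene (real or virtual) is nearer at each pixel.
    Depths are in meters; np.inf marks pixels with no virtual object."""
    nearer_virtual = virt_depth < real_depth          # boolean mask, HxW
    return np.where(nearer_virtual[..., None], virt_rgb, real_rgb)

# 2x2 toy scene: a virtual object at 1 m everywhere, a real scene at
# 2 m in the left column and 0.5 m in the right column.
real_rgb = np.zeros((2, 2, 3), dtype=np.uint8)        # black = real
virt_rgb = np.full((2, 2, 3), 255, dtype=np.uint8)    # white = virtual
real_d = np.array([[2.0, 0.5], [2.0, 0.5]])
virt_d = np.full((2, 2), 1.0)
out = composite(real_rgb, real_d, virt_rgb, virt_d)
# Left column shows the virtual object (it is nearer); in the right
# column the nearer real scene correctly occludes the virtual one.
```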

The situation for optical see-through HMDs can be more complex. Existing optical see-through HMDs blend the two images with beam splitters, which blend the real and virtual images uniformly throughout the FOV. Normally, the only control the designer has is the ratio of reflectance to transmittance of the beam splitter, which can be chosen to match the brightness of the displays with the expected light levels in the real-world environment. If the system has a model of the real environment, it is possible to have real objects occlude virtual ones by simply not drawing the occluded parts of the virtual objects. The only light will then come from the real objects, giving the illusion that they are occluding the virtual ones. Such an effect requires a darkened room with light directed where it is needed. This technique has been used by CAE Electronics in their flight simulator. When the pilots look out the window, they see computer-generated objects. If they look inside the cockpit, however, the appropriate pixels of the computer-generated image are masked so that they can see the real instruments. The room is kept fairly dark so that this technique will work (Barrette, 1992). David Mizell (from Boeing Seattle) and Tom Caudell (University of New Mexico) are also using this technique; they refer to it as "fused reality" (Mizell, 1998).

While optical see-through HMDs can allow real objects to occlude virtual objects, the reverse is even more challenging because normal beam splitters have no way of selectively blocking out the real environment. This problem has at least two possible partial solutions. The first solution is to spatially control the light levels in the real environment and to use displays that are bright enough that the virtual objects mask the real ones by reason of contrast. (This approach is used in the flight simulator just mentioned for creating the virtual instruments.) This may be a solution for a few applications. A second possible solution would be to locally attenuate the real-world view by using an addressable filter device placed on the see-through mirror. It is possible to generate partial occlusion in this manner because the effective beam of light entering the eye from some point in the scene covers only a small area of the beam splitter, the eye pupil being typically 2 mm to 4 mm in photopic vision. A problem with this approach is that the user does not focus on the beam splitter, but rather somewhere in the scene. A point in the scene maps to a disk on the beam splitter, and various points in the scene map to overlapping disks on the beam splitter. Thus, any blocking done at the beam splitter may occlude more of the scene than expected, which might lead to odd visual effects. A final possibility is that some applications may work acceptably without properly rendered occlusion cues. That is, in some cases, the user may be able to use other depth cues, such as head-motion parallax, to resolve the ambiguity caused by the lack of occlusion cues.

3.2.2.2 Rendered Locations of Objects in Depth. We shall distinguish between errors in the rendered and the perceived locations of objects in depth; the former yields the latter. One can conceive, however, that errors in the perceived location of objects in depth can also occur


even in the absence of errors in rendered depth, as a result of an incorrect computational model for stereo pair generation or a suboptimal presentation of the stereo images. This is true for both optical and video see-through HMDs. Indeed, if the technology is adequate to support a computational model, and the model accounts for the required technology and corresponding parameters, the rendered locations of objects in depth, as well as the resulting perceived locations, will follow expectations. Vaissie recently showed some limitations of the choice of a static eyepoint in computational models for stereo pair generation for virtual environments that yield errors in rendered, and thus perceived, locations of objects in depth (Vaissie & Rolland, 2000). The ultimate goal is to derive a computational model and develop the required technology that together yield the desired perceived location of objects in depth. Errors in rendered depth typically result from inaccurate display calibration and parameter determination, such as the FOV, the frame-buffer overscan, the eyepoint locations, conflicting or noncompatible cues to depth, and remaining optical aberrations, including residual optical distortion.

3.2.2.3 FOV and Frame-Buffer Overscan. Inaccuracies of a few degrees in FOV are easily made if no calibration is conducted. Such inaccuracies can lead to significant errors in rendered depth, depending on the imaging geometry. For some medical and computer-guided surgery applications, for example, errors of several millimeters are likely to be unacceptable. The FOV and the overscan of the frame buffer, which must be measured and accounted for to yield accurate rendered depths, are critical parameters for stereo pair generation in HMDs (Rolland et al., 1995). These parameters must be set correctly regardless of whether the technology is optical or video see-through.
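To illustrate how a calibration error of this kind propagates into depth, the sketch below uses a simple symmetric pinhole stereo model in which an FOV miscalibration acts as a uniform scaling of on-screen disparities. The model and all parameter values (64 mm IPD, image plane at 1 m, a 5% scale error) are illustrative assumptions, not the calibration procedure of Rolland et al. (1995).

```python
def perceived_depth(ipd, screen_dist, true_depth, fov_scale=1.0):
    """Depth triangulated by the eyes when the rendered image is scaled
    by fov_scale (ratio of actual to modeled image magnification).
    Simple symmetric-frusta pinhole model; parameters are illustrative."""
    disparity = ipd * (1.0 - screen_dist / true_depth)   # on-screen, meters
    disparity *= fov_scale                               # miscalibration
    return ipd * screen_dist / (ipd - disparity)

ipd, screen = 0.064, 1.0   # 64 mm IPD, image plane at 1 m
for z in (0.5, 1.0, 2.0):
    z_hat = perceived_depth(ipd, screen, z, fov_scale=1.05)
    print(f"true {z} m -> rendered-as {z_hat:.3f} m "
          f"({1000 * (z_hat - z):+.1f} mm error)")
# A 5% scale error leaves points at the image plane untouched but
# shifts points away from it by millimeters to centimeters.
```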

3.2.2.4 Specification of Eyepoint Location. The locations of the user's eyepoints (which are used to render the stereo images from the correct viewpoints) must be specified for accurate rendered depth. This applies to both optical and video see-through HMDs. In addition, for video see-through HMDs, the real-scene video images must be acquired from the correct viewpoint (Biocca & Rolland, 1998). For the computer-graphics-generation component, three choices of eyepoint location within the human eye have been proposed: the nodal point of the eye3 (Robinett & Rolland, 1992; Deering, 1992), the entrance pupil of the eye (Rolland, 1994; Rolland et al., 1995), and the center of rotation of the eye (Holloway, 1995). Rolland (1995) discusses that the choice of the nodal point would in fact yield errors in rendered depth in all cases, whether the eyes are tracked or not. For a device with eye-tracking capability, the entrance pupil of the eye should be taken as the eyepoint. If eye movements are ignored, meaning that the computer-graphics eyepoints are fixed, then it was proposed that it is best to select the center of rotation of the eye as the eyepoint (Fry, 1969; Holloway, 1995). An in-depth analysis of this issue reveals that while the center of rotation yields higher accuracy in position, the center of the entrance pupil in fact yields higher angular accuracy (Vaissie & Rolland, 2000). Therefore, depending on the task involved, and on whether angular accuracy or position accuracy is most important, the center of rotation or the center of the entrance pupil may be selected as the best eyepoint location in HMDs.

3. Nodal points are conjugate points in an optical system that satisfy an angular magnification of 1. Two points are considered conjugates of each other if they are images of each other.

3.2.2.5 Residual Optical Distortions. Optical distortion is one of the few optical aberrations that do not affect image sharpness; rather, it introduces warping of the image. It occurs only for optics that include lenses. If the optics include only plane mirrors, there are no distortions (Peuchot, 1994). Warping of the images leads to errors in rendered depth. Distortion results from the locations of the user's pupils away from the nodal points of the optics. Moreover, it varies as a function of where the user looks through the optics. However, if the optics are well calibrated to account for the user's IPD, distortion will be fairly constant for typical eye movements behind the optics. Prewarping of the computer-generated image can thus be conducted to compensate for the


optical residual distortions (Robinett & Rolland, 1992; Rolland & Hopkins, 1993; Watson & Hodges, 1995).

3.2.2.6 Perceived Location of Objects in Depth. Once depths are accurately rendered according to a given computational model and the stereo images are presented according to that model, the perceived location and size of objects in depth become an important issue in the assessment of the technology and the model. Accuracy and precision can be defined only statistically. Given an ensemble of measurements of the perceived location of objects in depth, the depth percept will be accurate if objects appear, on average, at the location predicted by the computational model. The perceived location of objects in depth will be precise if objects appear within a small spatial zone around that average location. We shall distinguish between overlapping and nonoverlapping objects. In the case of nonoverlapping objects, one may resort to depth cues other than occlusion. These include familiar size, stereopsis, perspective, texture, and motion parallax. A psychophysical investigation of the perceived location of objects in depth in an optical see-through HMD, using stereopsis and perspective as the visual cues to depth, is given in Rolland et al. (1995) and Rolland et al. (1997). The HMD shown in figure 14 is mounted on a bench for calibration purposes and for flexibility in various parameter settings.

Figure 14. (a) Bench-prototype head-mounted display with head-motion parallax developed in the VGILab at UCF (1997). (b) Schematic of the optical imaging from a top view of the setup.

In a first investigation, a systematic shift of 50 mm in the perceived location of objects in depth versus predicted values was found in this first set of studies (Rolland et al., 1995). Moreover, the precision of the measures varied significantly across subjects. As we have learned more about the interface optics and the computational model used in the generation of the stereo image pairs and improved on the technology, we have demonstrated errors


on the order of 2 mm. The technology is now ready to deploy for extensive testing in specific applications, and the VRDA tool is one of the applications we are currently pursuing.

Studies of the perceived location of objects in depth for overlapping objects in an optical see-through HMD have been conducted by Ellis and Buchler (1994). They showed that the perceived location of virtual objects can be affected by the presence of a nearby opaque physical object. When a physical object was positioned in front of (or at) the initial perceived location of a 3-D virtual object, the virtual object appeared to move closer to the observer. In the case in which the opaque physical object was positioned substantially in front of the virtual object, human subjects often perceived the opaque object to be transparent. In current investigations with the VRDA tool, the opaque leg model appears transparent when a virtual knee model is projected on the leg, as seen in figure 4. The virtual anatomy subjectively appears to be inside the leg model (Baillot, 1999; Outters et al., 1999; Baillot et al., 2000).

3.2.3 Adaptation. When a system does not offer what the user ultimately wants, two paths may be taken: improving the current technology, or first studying the ability of the human system to adapt to an imperfect technological unit and then developing adaptation training when appropriate. The latter is possible because of the astonishing ability of the human visual and proprioceptive systems to adapt to new environments, as has been shown in studies on adaptation (Rock, 1966, for example).

Biocca and Rolland (1998) conducted a study of adaptation to visual displacement using a large-FOV video see-through HMD. Users see the real world through two cameras that are located 62 mm higher than and 165 mm forward from their natural eyepoints, as shown in figure 2. Subjects showed evidence of perceptual adaptation to sensory disarrangement during the course of the study. This revealed itself as improvement in performance over time while wearing the see-through HMD, and as negative aftereffects once they removed it. More precisely, the negative aftereffect manifested itself clearly as a large overshoot in a depth-pointing task, as well as an upward translation in a lateral pointing task, after wearing the HMD. Moreover, some participants experienced early signs of cybersickness.

The presence of negative aftereffects has some potentially disturbing practical implications for the diffusion of large-FOV video see-through HMDs (Kennedy & Stanney, 1997). Some of the intended early users of these HMDs are surgeons and other individuals in the medical profession. Hand-eye sensory recalibration for highly skilled users (such as surgeons) could have potentially disturbing consequences if the surgeon were to enter surgery within some period after using an HMD. It is an empirical question how long the negative aftereffects might persist and whether a program of gradual adaptation (Welch, 1994) or dual adaptation (Welch, 1993) might minimize the effect altogether. In any case, any shift in the camera eyepoints needs to be minimized as much as possible to facilitate the adaptation process. As we learn more about these issues, we will build devices with less error and more similarity between using these systems and wearing a pair of eyeglasses (so that adaptation takes less time and aftereffects decrease as well).

A remaining issue is the conflict between accommodation and convergence in such displays. The issue can be solved at some cost (Rolland et al., 2000). For lower-end systems, a question to investigate is how users adapt to various settings of the technology. For high-end systems, much research is still needed to understand the importance of perceptual conflicts and how best to minimize them.

3.2.4 Peripheral FOV. Given that peripheral vision can be provided in both optical and video see-through systems, the next question is whether it is used effectively in both systems. In optical see-through, there is almost no transition or discrepancy between the real scene captured by the see-through device and the peripheral vision seen on the side of the device.

For video see-through, the peripheral FOV has been provided by letting the user see around the device, as with optical see-through. However, it remains to be seen whether the difference in presentation of the superimposed real scene and the peripheral real scene will cause discomfort or provide conflicting cues to the user. The issue is that the virtual displays call for a different accommodation for the user than the real scene in various cases.

PRESENCE: Volume 9, Number 3

3.2.5 Depth of Field. One important property of optical systems, including the visual system, is depth of field. Depth of field refers to the range of distances from the detector (such as the eye) in which an object appears to be in focus without the need for a change in the optics focus (such as eye accommodation). For the human visual system, for example, if an object is accurately focused monocularly, other objects somewhat nearer and farther away are also seen clearly without any change in accommodation. Still nearer or farther objects are blurred. Depth of field reduces the necessity for precise accommodation and is markedly influenced by the diameter of the pupil. The larger the pupil, the smaller the depth of field. For a 2 mm and a 4 mm pupil, the depths of field are 0.06 and 0.03 diopters, respectively. For a 4 mm pupil, for example, such a depth of field translates as a clear focus from 0.94 m to 1.06 m for an object 1 m away, and from 11 m to 33 m for an object 17 m away (Campbell, 1957; Moses, 1970). An important point is that accommodation plays an important role only at close working distances, where depth of field is narrow.
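The dioptric bookkeeping above can be checked with a few lines of arithmetic. The sketch below (ours, not from the paper) converts a depth-of-field half-width in diopters into near and far focus limits, reproducing the 1 m example with a 0.06 D tolerance:

```python
def focus_range(distance_m, dof_half_width_diopters):
    """Near/far limits of apparent sharp focus for an eye focused at
    distance_m, given a depth-of-field half-width in diopters."""
    vergence = 1.0 / distance_m  # object vergence in diopters
    near = 1.0 / (vergence + dof_half_width_diopters)
    far = 1.0 / (vergence - dof_half_width_diopters)
    return near, far

# Object at 1 m with a 0.06 D half-width (figures quoted in the text):
near, far = focus_range(1.0, 0.06)
print(round(near, 2), round(far, 2))  # 0.94 1.06
```

Note how the same dioptric tolerance spans only about 12 cm at 1 m but tens of meters at 17 m, which is why accommodation matters mainly at close working distances.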

With video see-through systems, the miniature cameras that acquire the real-scene images must provide a depth of field equivalent to the required working distance for a task. For a large range of working distances, the camera may need to be focused at the middle working distance. For closer distances, the small depth of field may require an autofocus instead of a fixed-focus camera.
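One common way to pick the "middle working distance" for a fixed-focus camera (a sketch under our own assumptions, not a prescription from the paper) is the dioptric midpoint of the working range, which balances defocus at the two extremes:

```python
def dioptric_midpoint(near_m, far_m):
    """Focus distance that splits the working range evenly in diopters."""
    mid_vergence = 0.5 * (1.0 / near_m + 1.0 / far_m)
    return 1.0 / mid_vergence

# Hypothetical working range of 0.5 m to 2.0 m:
print(dioptric_midpoint(0.5, 2.0))  # 0.8
```

For this hypothetical range the best fixed focus lies at 0.8 m, noticeably closer than the geometric middle of 1.25 m, because defocus blur grows linearly in diopters rather than in meters.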

With optical see-through systems, the available depth of field for the real scene is essentially that of the human visual system, but for a larger pupil than would be accessible with unaided eyes. This can be explained by the brightness attenuation of the real scene by the half-transparent mirror. As a result, the pupils are dilated (we assume here that the real and virtual scenes are matched in brightness). Therefore, the effective depth of field is slightly less than with unaided eyes. This is a problem only if the user is working with nearby objects and the virtual images are focused outside of the depth of field that is required for nearby objects. For the virtual images, with no autofocus capability for the 2-D virtual images, the depth of field is imposed by the human visual system around the location of the displayed virtual images. When the retinal images are not sharp following some discrepancy in accommodation, the visual system is constantly processing somewhat blurred images and tends to tolerate blur up to the point at which essential detail is obscured. This tolerance for blur considerably extends the apparent depth of field, so that the eye may be as much as 0.25 diopters out of focus without stimulating accommodative change (Moses, 1970).

3.2.6 Qualitative Aspects. The representation of virtual objects, and in some cases of real objects, is altered by see-through devices. Aspects of perceptual representation include the shape of objects, their color, brightness, contrast, shading, texture, and level of detail. In the case of optical see-through HMDs, folding the optical path by using a half-transparent mirror is necessary because it is the only configuration that leaves the real scene almost unaltered. A thin folding mirror will introduce a small apparent shift in depth of real objects precisely equal to e(n − 1)/n, where e is the thickness of the plate and n is its index of refraction. This is in addition to a small amount of distortion (1%) of the scene at the edges of a 60 deg. FOV. Consequently, real objects are seen basically unaltered.
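The apparent depth shift introduced by the folding plate can be evaluated directly from the expression above. In the sketch below, the 3 mm thickness and the index n = 1.5 are hypothetical values chosen for illustration, not figures from the paper:

```python
def plate_depth_shift(thickness_mm, n):
    """Apparent longitudinal shift e(n - 1)/n caused by a plane-parallel
    plate of thickness e and refractive index n."""
    return thickness_mm * (n - 1.0) / n

# A hypothetical 3 mm glass plate with n = 1.5:
print(plate_depth_shift(3.0, 1.5))  # 1.0 (mm)
```

Even for a fairly thick plate the shift stays on the order of a millimeter, which supports the claim that real objects are seen basically unaltered.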

Virtual objects, on the other hand, are formed from the fusion of stereo images formed through magnifying optics. Each optical virtual image formed of the display associated with each eye is typically optically aberrated. For large-FOV optics such as HMDs, astigmatism and chromatic aberrations are often the limiting factors. Custom-designed HMD optics can be analyzed from a visual performance point of view (Shenker, 1994; Rolland, 2000). Such analysis allows the prediction of the expected visual performance of HMD users.

It must be noted that real and virtual objects in such systems may be seen sharply by accommodating in different planes under most visualization settings. This yields conflicts in accommodation for real and virtual imagery. For applications in which the virtual objects are presented in a small working volume around some mean display distance (such as arm-length visualization), the 2-D optical images of the miniature displays can be located at that same distance to minimize conflicts in accommodation and convergence between real and virtual objects. Another approach to minimizing conflicts in accommodation and convergence is multifocal-planes technology, as described in Rolland et al. (2000).

Besides brightness attenuation and distortion, other aspects of object representation are altered in video see-through HMDs. The authors' experience with at least one system is that the color and brightness of real objects are altered, along with a loss in texture and levels of detail due to the limited resolution of the miniature video cameras and the wide-angle optical viewer. This alteration includes spatial, luminance, and color resolution. This is perhaps resolvable with improved technology, but it currently limits the ability of the HMD user to perceive real objects as they would appear with unaided eyes. In wide-FOV video see-through HMDs, both real and virtual objects call for the same accommodation; however, conflicts of accommodation and convergence are also present. As with optical see-through HMDs, these conflicts can be minimized if objects are perceived at a relatively constant depth near the plane of the optical images. In narrow-FOV systems in which the real scene is seen in large part outside the overlay imagery, conflicts in accommodation can also result between the real and computer-generated scene.

For both technologies, a solution to these various conflicts in accommodation may be to allow autofocus of the 2-D virtual images as a function of the location of the user's gaze point in the virtual environment, or to implement multifocal planes (Rolland et al., 2000). Given eye-tracking capability, autofocus could be provided because small displacements of the miniature display near the focal plane of the optics would yield large axial displacements of the 2-D virtual images in the projected virtual space. The 2-D virtual images would move in depth according to the user's gaze point. Multifocal planes also allow autofocusing, but with no need for eye tracking.
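The leverage of small display displacements can be illustrated with a thin-lens sketch. The 30 mm focal length and the 1 mm display shift below are hypothetical values chosen for illustration, not parameters of any system described in the paper:

```python
def virtual_image_distance(display_dist_m, focal_length_m):
    """Distance of the virtual image formed by a thin magnifier lens
    when the display sits inside the focal length (1/v = 1/d - 1/f)."""
    vergence = 1.0 / display_dist_m - 1.0 / focal_length_m
    return float("inf") if vergence == 0 else 1.0 / vergence

f = 0.030  # 30 mm focal length (hypothetical)
print(round(virtual_image_distance(0.030, f), 3))  # inf: display at focal plane
print(round(virtual_image_distance(0.029, f), 3))  # 0.87: after a 1 mm shift
```

With the display exactly at the focal plane, the virtual image sits at optical infinity; shifting the display a single millimeter toward the lens pulls the image in to roughly 0.87 m, which is why tiny mechanical displacements suffice to refocus the 2-D virtual images across the whole working volume.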

4 Conclusion

We have discussed issues involving optical and video see-through HMDs. The most important issues are system latency, occlusion, the fidelity of the real-world view, and user acceptance. Optical see-through systems offer an essentially unhindered view of the real environment; they also provide an instantaneous real-world view that ensures that visual and proprioceptive information is synchronized. Video systems forfeit the unhindered view in return for improved ability to see real and synthetic imagery simultaneously.

Some of us working with optical see-through devices strongly feel that providing the real scene through optical means is important for applications such as medical visualization, in which human lives are at stake. Others, working with video see-through devices, feel that a flip-up view is adequate for the safety of the patient. Also, how to render occlusion of the real scene at given spatial locations may be important. Video see-through systems can also guarantee registration of the real and virtual scenes at the expense of a mismatch between vision and proprioception. This may or may not be perceived as a penalty if the human observer is able to adapt to such a mismatch. Hybrid solutions, such as that developed by Peuchot (1994), including optical see-through technology for visualization and video technology for tracking objects in the real environment, may play a key role in future developments of technology for 3-D medical visualization.

Clearly, there is no "right" system for all applications: each of the tradeoffs discussed in this paper must be examined with respect to specific applications and available technology to determine which type of system is most appropriate. Furthermore, additional HMD features such as multiplane focusing and eye tracking are currently being investigated at various research and development sites and may provide solutions to current perceptual conflicts.
