
A Visibility Algorithm for Hybrid Geometry- and Image-Based Modeling and Rendering

Thomas A. Funkhouser

Princeton University

Abstract

Hybrid geometry- and image-based modeling and rendering systems use photographs taken of a real-world environment and mapped onto the surfaces of a 3D model to achieve photorealism and visual complexity in synthetic images rendered from arbitrary viewpoints. A primary challenge in these systems is to develop algorithms that map the pixels of each photograph efficiently onto the appropriate surfaces of a 3D model, a classical visible surface determination problem. This paper describes an object-space algorithm for computing a visibility map for a set of polygons for a given camera viewpoint. The algorithm traces pyramidal beams from each camera viewpoint through a spatial data structure representing a polyhedral convex decomposition of space containing cell, face, edge, and vertex adjacencies. Beam intersections are computed only for the polygonal faces on the boundary of each traversed cell, and thus the algorithm is output-sensitive. The algorithm also supports efficient determination of silhouette edges, which allows an image-based modeling and rendering system to avoid mapping pixels along edges whose colors are the result of averaging over several disjoint surfaces. Results reported for several 3D models indicate the method is well-suited for large, densely-occluded virtual environments, such as building interiors.

1 Introduction

Hybrid geometry- and image-based rendering (GIBR) methods are useful for synthesizing photorealistic images of real-life environments (e.g., buildings and cities). Rather than modeling geometric details and simulating global illumination effects, as in traditional computer graphics, only a coarsely detailed 3D model is constructed, and photographs are taken from a discrete set of viewpoints within the environment. The calibrated photographic images are mapped onto the surfaces of the 3D model to construct a representation of the visual appearance of geometric details and complex illumination effects on each surface. Then, novel images can be rendered for arbitrary viewpoints during an interactive visualization session by reconstruction from these hybrid geometry- and image-based representations. This method allows photorealistic images to be generated of visually rich and large environments over a wide range of viewpoints, while avoiding the difficulties of modeling detailed geometry and simulating complex illumination effects.

Applications for GIBR systems include education, commerce, training, telepresence, and entertainment. For example, grammar school students can use such a system to "visit" historical buildings, temples, and museums. Real estate agents can show a potential buyer the interior of a home for sale interactively via the internet [20]. Distributed interactive simulation systems can train teams of soldiers, firefighters, and other people whose missions are too dangerous or too expensive to re-create in the real world [3, 26]. Entertainment applications can synthesize photorealistic imagery of real-world environments to generate immersive walkthrough experiences for virtual travel and multi-player 3D games [21].

The research challenges in implementing an effective GIBR system are to construct, store, and re-sample a 4D representation for the radiance emanating from each surface of the 3D model. Previous related methods date back to the movie-map system by Lippman [25]. Greene [18] proposed a system based on environment maps [2] in which images captured from a discrete set of viewpoints are projected onto the faces of a cube. Chen and Williams [6] described a method in which reference images are used with pixel correspondence information to interpolate views of a scene. Debevec et al. [9, 10] developed a system in which photographs were used to construct a 3D model made from parameterized building blocks and to construct a "view-dependent texture map" for each surface. Gortler et al. [17] used approximate 3D geometry to facilitate image reconstruction from their 4D Lumigraph representation. Coorg [7] constructed vertical facades from multiple photographs and mapped diffuse imagery onto the surfaces for interactive visualization. Several walkthrough applications and video games apply 2D view-independent photographic textures to coarse 3D geometry [22]. Other related image-based representations are surveyed in [8], including cylindrical panoramas [5, 28], Light Fields [24], and layered depth images [33].

An important step in constructing a GIBR representation is to map pixel samples from every photograph onto the surfaces of a 3D model. The challenge is to develop algorithms that determine which parts of which 3D surfaces are visible from the camera viewpoint of every photograph. This is a classic hidden surface removal (HSR) problem, but with a unique combination of requirements motivated by GIBR systems. First, unlike algorithms commonly used in computer graphics, the HSR algorithm must resolve visible surfaces with object-space precision. Image-space algorithms may cause small surfaces to be missed or radiance samples to be misaligned on a surface, causing artifacts and blurring in reconstructed images.

Second, the algorithm should compute a complete visibility map for each photograph, encoding not only the visible surfaces but also the visible edges and vertices with their connectivities on the view plane. From the visibility map, a GIBR system can detect pixels covering multiple disjoint surfaces (e.g., along silhouette edges) and avoid mapping them onto any single surface, which causes noticeable artifacts in resampled images. As an example, consider the situation shown in Figure 1. The image on the left shows a "photograph" taken with a synthetic camera of a simple 3D model comprising two rooms connected by a doorway. Using a standard HSR algorithm (e.g., z-buffering), pixels along the silhouette edge on the left side of the doorway might be mapped onto the wall, floor, and ceiling of the smaller room, even though their colors partially represent contributions from the edge of the door frame. The result is a "line" of incorrectly colored pixels in resampled images (shown in the image on the right). The HSR algorithm must detect silhouette edges so that these artifacts can be avoided.

Third, the HSR algorithm must scale to support very large 3D models. Typical models of interesting real-world environments contain many polygons, most of which are occluded for any given camera viewpoint. For example, consider the building shown in Figure 2. The left image shows a building (Soda Hall) from the exterior, while the right one shows the floorplan for one of seven floors. The entire 3D model contains around 10,000 polygons. Yet, for most camera viewpoints, more than 95% of them are hidden from view. To be effective for such large and densely occluded 3D models, a hidden surface removal algorithm must be output-sensitive. That is, the expected-case running time should depend only on the complexity of the visible portion of the model, not on the size of the entire model.

Figure 1: Original image (left) mapped onto surfaces of a simple 3D model without detection of silhouette edges leads to artifacts appearing as a "line" of partially yellow pixels on wall, floor, and ceiling in the reconstructed image (right).

Figure 2: A building (left) and a floorplan (right). The visibility for several camera viewpoints is shown in cross-hatch patterns in the floorplan.

Finally, the algorithm should be tuned to accelerate visible surface determination for multiple camera viewpoints in a single execution. Since GIBR systems use many photographs for constructing a GIBR representation, the cost of precomputing a spatial data structure to accelerate hidden surface removal can generally be amortized over many computations for different viewpoints.

Despite the long history of prior work in hidden surface and hidden line removal in computer graphics [34] and computational geometry [13], there are currently no algorithms that meet all the requirements of a GIBR system. Research in computer graphics has focused mostly on HSR algorithms for image synthesis at screen resolution. Example methods include priority ordering [15, 31], recursive subdivision [36, 38], and depth buffering [4]. Meanwhile, research in computational geometry has focused mostly on proving asymptotic complexities of object-space algorithms. Lower and upper complexity bounds for the hidden surface and hidden line removal problems have been proven to be quadratic in the number of polygon boundaries, and algorithms have been described with optimal performance [11, 27, 32]. Yet, there is a dearth of practical object-space algorithms with attractive expected-case performance.

Debevec et al. [10] recently described a visibility algorithm for a GIBR system using both image-space and object-space methods. First, for each camera viewpoint, polygon IDs are rendered with a z-buffer into an item buffer [37], forming an image-space representation of the visible surfaces. Then, for every front-facing polygon P in the 3D model, a uniform sampling of 3D points is projected onto the image plane and checked against the corresponding entries in the item buffer to form a list of polygons occluding P. For each such occluder, P is clipped in object-space against planes formed by the camera viewpoint and the occluder's boundary. The problems with this method are that it is not object-space precise, potentially missing occluders not found by a discrete set of samples; it is not output-sensitive, clipping every front-facing polygon against all its occluders; and it does not detect silhouette edges, potentially leading to visibility artifacts in reconstructed images.

The contribution of this paper is an algorithm that computes a visibility map for an arbitrary camera viewpoint. The basic idea is to trace pyramidal beams [19] recursively from a camera viewpoint through a precomputed spatial subdivision of cells (convex polyhedra) connected by "portals" (transparent boundaries between cells). Jones [23] has used this method to solve a hidden line removal problem for image generation, Teller [35] has used it for occlusion culling in an interactive walkthrough system, and Funkhouser et al. [16] have used it for acoustic modeling. A similar method has recently been developed by Fortune [14] for simulation of radio frequency wave propagation. We extend the recursive beam tracing method to compute a complete visibility map for a set of camera viewpoints and apply it to mapping photographs onto surfaces in a GIBR system. The algorithm executes at object-space precision, it is able to find silhouette edges efficiently, and its execution time is output-sensitive for each camera viewpoint after an initial precomputation.

2 System Organization

The organization of our complete GIBR system is shown in Figure 3. The input to the system is: 1) a 3D polygonal model of the environment, and 2) a set of photographic images calibrated with 3D camera viewpoints. The output is a sequence of synthetic images generated in real-time as a simulated observer moves through the environment.

Figure 3: Organization of the GIBR system. The polygonal 3D model and calibrated images feed off-line precomputation phases, and synthetic images are rendered for the simulated observer viewpoint.

The system is divided into four phases, three of which perform off-line computations to preprocess the input data. During the first preprocessing phase, a spatial subdivision data structure is constructed to represent the topology and geometry of the 3D model. Next, beams are traced through the spatial subdivision to compute a visibility map for each camera viewpoint. Then, during the third and final preprocessing phase, the visibility maps are used to map regions of the calibrated images onto 3D polygons to create a set of radiance maps representing the 4D radiance emanating from each surface of the 3D model. Finally, during the fourth phase, synthetic images are generated from the radiance maps at interactive rates for an arbitrary viewpoint moving through the 3D model under user control.

In this paper, we focus on the first two phases of the system: spatial subdivision and beam tracing. The goal of these two phases is to compute a visibility map for each camera viewpoint so that photographic radiance samples can be mapped onto the surfaces of the 3D model. Detailed descriptions of the last two phases are purposely omitted in this discussion. Although we currently use a view-dependent texture map to store the radiance map for each surface (as in [10]), the reader can imagine use of any GIBR representation of the 4D radiance emanating from each surface of a 3D model that can be constructed from a set of photographs augmented with corresponding visibility maps, e.g., Light Fields [24] or Lumigraphs [17].

The remainder of this paper provides detailed descriptions of our spatial subdivision and beam tracing data structures and algorithms, followed by results of experiments with several 3D virtual environments and a brief conclusion.

3 Spatial Subdivision

During the first preprocessing phase, our system builds a spatial subdivision representing a decomposition of 3D space, which we call a winged-pair data structure. The goal of this phase is to partition space into convex polyhedral cells whose boundaries are aligned with polygons of the 3D input model and to encode the topological adjacencies of the cells in a data structure enabling output-sensitive traversals of sightlines through 3D space.

The winged-pair data structure is motivated by the well-known winged-edge data structure described by Baumgart [1]. The difference is that the winged-pair describes topological structures one dimension higher than the winged-edge. While the winged-edge represents a 2-manifold, the winged-pair represents a 3-manifold. The reader can think of the winged-pair data structure as a set of "glued together" winged-edge data structures, each representing the 2-manifold boundary of a single cell.

The winged-pair representation is also similar to the facet-edge representation described by Dobkin and Laszlo [12]. Both encode face-edge, face-face, and edge-edge adjacency relationships in a set of structures corresponding to face-edge pairs. The difference is that Dobkin and Laszlo store a separate structure for each unique pair of face and edge orientations. For most practical purposes, the difference is insignificant, similar to the one between 2D quad-edge and winged-edge structures. Although storing separate structures for different orientations can eliminate a few conditional checks during traversal operations, it requires extra storage, a straightforward trade-off between time and space. We choose to store exactly one structure for each face-edge pair in our winged-pair representation to simplify code extensions and debugging.

Pseudocode declarations for the winged-pair structure are shown in Figure 4. Topological adjacencies are encoded in fixed-size records associated with vertices, edges, faces, cells, and face-edge pairs. Every vertex stores its 3D location and a reference to one attached edge, every edge stores references to its two vertices and one attached face-edge pair, each face stores references to its two cells and one attached face-edge pair, and every cell stores a reference to one attached face. The face-edge pairs store references to one edge E and to one face F along with adjacency relationships required for topological traversals. Specifically, they store references (spin) to the two face-edge pairs reached by spinning F around E clockwise and counter-clockwise, and references (clock) to the two face-edge pairs reached by moving around F in clockwise and counter-clockwise directions from E (see Figure 5). The face-edge pair also stores a bit (direction) indicating whether the orientation of the vertices on the edge is clockwise or counter-clockwise with respect to the face within the pair.

class Vertex {
    Point position;
    Edge *edge;
};

class Edge {
    Vertex *vertex[2];
    Pair *pair;
};

class Face {
    Cell *cell[2];
    Pair *pair;
};

class Cell {
    Face *face;
};

class Pair {
    Face *face;
    Edge *edge;
    Pair *clock[2];
    Pair *spin[2];
    Bit direction;
};

Figure 4: Winged-pair declarations.


Clock(Pair(F,E), CW)  = Pair(F, E1)
Clock(Pair(F,E), CCW) = Pair(F, E2)
Spin(Pair(F,E), CW)   = Pair(F1, E)
Spin(Pair(F,E), CCW)  = Pair(F2, E)

Figure 5: Pair-pair references.

These simple, fixed-size structures make it possible to execute output-sensitive topological traversals through cell, face, edge, and vertex adjacency relationships. For instance, finding all faces on the boundary of a given cell requires O(C_f + C_e) time, where C_f and C_e are the numbers of faces and edges attached to the cell, respectively. As in winged-edge traversals in 2D, simple conditional statements are often required to check the orientation of each structure before moving to the next. For instance, to find the cell adjacent to another across a given face, the C++ code looks like this:

Cell *CellAcrossFace(Face *face, Cell *cell) {
    return (cell == face->cell[0]) ? face->cell[1] : face->cell[0];
}
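In the same spirit, the boundary-face traversal whose O(C_f + C_e) cost is claimed above can be sketched as a breadth-first search over face-edge pairs. The sketch below is illustrative only, not the paper's implementation: FacesOfCell and NeighborFaceOnCell are hypothetical names, the direction-bit orientation checks a full implementation would need are elided, and the spin[0] references are assumed to link the face-edge pairs around each edge in a cycle.

#include <queue>
#include <set>
#include <vector>

struct Pair; struct Edge; struct Cell; struct Face;
struct Cell { Face *face; };
struct Face { Cell *cell[2]; Pair *pair; };
struct Pair { Face *face; Edge *edge; Pair *clock[2]; Pair *spin[2]; };

// Find the other face on cell c that shares p->edge with p->face,
// by spinning around the edge (assumes spin[0] links form a cycle).
static Face *NeighborFaceOnCell(Pair *p, Cell *c) {
    for (Pair *q = p->spin[0]; q != p; q = q->spin[0])
        if (q->face != p->face &&
            (q->face->cell[0] == c || q->face->cell[1] == c))
            return q->face;
    return nullptr;  // boundary is a 2-manifold, so this should not happen
}

// Enumerate all faces on the boundary of cell c, visiting each boundary
// face once and spinning around each boundary edge a bounded number of times.
std::vector<Face *> FacesOfCell(Cell *c) {
    std::vector<Face *> result;
    std::set<Face *> visited = { c->face };  // each cell stores one attached face
    std::queue<Face *> frontier;
    frontier.push(c->face);
    while (!frontier.empty()) {
        Face *f = frontier.front(); frontier.pop();
        result.push_back(f);
        Pair *start = f->pair;
        Pair *p = start;
        do {                                  // walk the edges bounding f
            Face *g = NeighborFaceOnCell(p, c);
            if (g && visited.insert(g).second) frontier.push(g);
            p = p->clock[0];                  // next edge around f
        } while (p != start);
    }
    return result;
}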

We build the winged-pair data structure for any 3D model using a Binary Space Partition (BSP) [15], a recursive binary split of 3D space into convex polyhedral regions (cells) separated by planes. To construct the BSP, we recursively split cells by candidate planes selected by the method described in [29]. As BSP cells are split by a polygon P, the corresponding winged-pair cells are split along the plane supporting P, and the faces and edges on the boundary of each split cell are updated to maintain a 3-manifold in which every face is convex and entirely inside or outside every input polygon. As faces are created, they are labeled according to whether they are opaque (coincide with an input polygon) or transparent (split free space). The binary splitting process continues until no input polygon intersects the interior of any BSP cell, leading to a set of convex polyhedral cells whose faces are all convex and cumulatively contain all the input polygons. The resulting winged-pair is converted to an ASCII representation and written to a file for use by later phases of the GIBR system.
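A minimal sketch of this subdivision loop appears below, assuming hypothetical helpers: ChooseSplitPlane stands in for the plane-selection heuristic of [29], SplitCell for the winged-pair cell split that maintains the 3-manifold, and SplitPolygon for clipping an input polygon to the two half-spaces (with polygons lying in the split plane absorbed as opaque faces rather than passed down). Only the recursion structure is intended to be faithful.

#include <utility>
#include <vector>

struct Polygon;                                   // input polygon (opaque)
struct Cell;                                      // winged-pair cell
struct Plane { double a, b, c, d; };              // ax + by + cz + d = 0

Plane ChooseSplitPlane(const std::vector<Polygon *> &polys);          // [29]
std::pair<Cell *, Cell *> SplitCell(Cell *cell, const Plane &split);  // updates winged-pair
void SplitPolygon(Polygon *p, const Plane &split,
                  std::vector<Polygon *> &front, std::vector<Polygon *> &back);

// Recursively split cells until no input polygon crosses a cell interior.
// Faces created by SplitCell are labeled opaque where they coincide with
// an input polygon and transparent where they split free space.
void Subdivide(Cell *cell, const std::vector<Polygon *> &polys) {
    if (polys.empty()) return;                    // leaf: interior is empty
    Plane split = ChooseSplitPlane(polys);
    auto halves = SplitCell(cell, split);
    std::vector<Polygon *> frontPolys, backPolys;
    for (Polygon *p : polys)
        SplitPolygon(p, split, frontPolys, backPolys);
    Subdivide(halves.first, frontPolys);
    Subdivide(halves.second, backPolys);
}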

4 Beam Tracing

In the second phase of our GIBR system, we use a beam tracing algorithm to compute a visibility map for every photograph. Beams containing feasible sightlines from each camera viewpoint are traced via a depth-first traversal through the winged-pair structure, while corresponding convex regions of the visibility map are partitioned recursively. The algorithm is based on recursive beam tracing methods [23, 16, 35], and it is related to recursive convex decompositions [30]. The key feature is that topological information stored in the winged-pair data structure (edge-face adjacencies) is used to construct a visibility map with topological information and explicit silhouette edges.

Pseudocode for the beam tracing algorithm is shown in Figure 6. During the traversal for each camera viewpoint, the algorithm maintains: a visibility map M (a 2-manifold representing the geometry and topology of faces and edges visible to the camera), a current region R (a convex 2D region of the visibility map), a winged-pair W (as described in Section 3), a current cell C (a reference to a cell in the winged-pair structure), and a current beam B (an infinite convex 3D pyramidal beam emanating from the camera viewpoint containing all sightlines passing through R). Initially, M and R are initialized to one rectangular region enclosing the visible portion of the view plane, W is constructed from the 3D model as described in Section 3, C is set to be the cell of W containing the camera, and B is set to the four-sided pyramid corresponding to the view frustum of the camera.

During each recursive step, the function TraceBeams partitions the current region of the visibility map into multiple convex subregions corresponding to intersections of the current beam with faces on the boundary of the current cell. For each face F_i on the current cell and intersecting the current beam, a convex region R_i is inserted into the visibility map. If F_i is transparent, R_i is recursively refined with a call to TraceBeams in which the current region of the visibility map is set to R_i, the new current cell C_i is set to be the cell adjacent to C across face F_i, and the new current beam B_i is formed by trimming B to include only rays intersecting F_i. Otherwise, F_i is opaque, the recursive search along this path terminates, and R_i is marked as a final region of the visibility map associated with face F_i. Contiguous regions of the visibility map associated with the same opaque winged-pair face are marked as one face during the process.


void TraceBeams()
begin
    // Initialization
    V = Viewpoint of camera;
    M = Visibility map;
    R = Region of M initially enclosing viewable area;
    W = Winged-pair structure;
    C = Cell of W containing camera;
    B = Beam enclosing camera view frustum;
    TraceBeams(V, M, R, W, C, B);
end

void TraceBeams(V, M, R, W, C, B)
begin
    // Consider each face on cell boundary
    foreach face F_i on boundary of C do
        // Check if beam intersects face
        if (Intersects(B, F_i)) then
            // Create new region in visibility map
            R_i = InsertRegion(M, F_i);
            // Recurse along path through transparent face
            if (Transparent(F_i)) then
                C_i = NeighborCell(W, C, F_i);
                B_i = NewBeam(B, F_i);
                TraceBeams(V, M, R_i, W, C_i, B_i);
            endif
        endif
    endfor
end

Figure 6: Pseudocode for the beam tracing algorithm.
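One way to realize the NewBeam step of Figure 6, following the usual beam tracing convention of [19], is to store a beam as the set of half-space planes bounding an infinite pyramid with apex at the camera viewpoint; trimming the beam to a convex face then appends one plane through the viewpoint and each edge of the face. The sketch below is an assumption of this kind, not the paper's actual implementation; in particular, the Vector3/Plane types and the counter-clockwise vertex ordering are conventions chosen here.

#include <vector>

struct Vector3 { double x, y, z; };

static Vector3 Sub(const Vector3 &a, const Vector3 &b) {
    return { a.x - b.x, a.y - b.y, a.z - b.z };
}
static Vector3 Cross(const Vector3 &a, const Vector3 &b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static double Dot(const Vector3 &a, const Vector3 &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Half-space n . p >= d; points satisfying every plane lie inside the beam.
struct Plane { Vector3 n; double d; };

// Trim beam to the rays that pass through convex face 'face' (vertices in
// counter-clockwise order as seen from the viewpoint): add one plane through
// the viewpoint and each face edge.
std::vector<Plane> NewBeam(const std::vector<Plane> &beam,
                           const Vector3 &viewpoint,
                           const std::vector<Vector3> &face) {
    std::vector<Plane> trimmed = beam;
    for (size_t i = 0; i < face.size(); ++i) {
        const Vector3 &a = face[i];
        const Vector3 &b = face[(i + 1) % face.size()];
        Vector3 n = Cross(Sub(a, viewpoint), Sub(b, viewpoint));
        trimmed.push_back({ n, Dot(n, viewpoint) });  // plane through apex
    }
    return trimmed;
}

A production version would also prune planes of the old beam made redundant by the new ones, keeping the plane count proportional to the complexity of the face.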

A nice feature of this algorithm is that it constructs a representation of the visibility map with both topological and geometric information. With the exception of silhouette edges, the topology of the visibility map matches the topology of corresponding vertices, edges, and faces in the winged-pair structure exactly. Silhouette edges can be found explicitly by checking the orientations of the faces attached to visible edges, which are readily available by traversing spin references stored in the winged-pair data structure.
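As a concrete illustration of that test, the hedged sketch below classifies a visible edge as a silhouette when exactly one front-facing opaque face spins around it, so the surfaces on either side of the edge in the image are disjoint. IsOpaque and FrontFacing are hypothetical helpers (the latter a plane-side test against the camera viewpoint), the Pair fields follow Figures 4 and 5, and the direction-bit orientation checks are again elided.

struct Point { double x, y, z; };
struct Face;
struct Pair { Face *face; Pair *spin[2]; /* other fields as in Figure 4 */ };

bool IsOpaque(Face *f);                             // opaque vs. transparent label
bool FrontFacing(Face *f, const Point &viewpoint);  // plane-side test

bool IsSilhouetteEdge(Pair *p, const Point &viewpoint) {
    int frontFacingOpaque = 0;
    Pair *q = p;
    do {  // spin around the edge, visiting every attached face-edge pair
        if (IsOpaque(q->face) && FrontFacing(q->face, viewpoint))
            frontFacingOpaque++;
        q = q->spin[0];  // direction-bit checks elided in this sketch
    } while (q != p);
    // Interior edges of a visible surface have two front-facing opaque
    // faces; a silhouette has only one, so geometry beyond it is disjoint.
    return frontFacingOpaque == 1;
}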

5 Experimental Results

We have implemented the algorithms described in the previous sections, and they run on SGI/Irix and PC/Windows computers. To test the effectiveness of our methods, we executed a series of tests with several 3D models (shown in Figure 7). For each one, we computed a winged-pair data structure and a visibility map for at least 100 camera viewpoints along a typical walkthrough path within the model. All tests were run on an SGI Onyx2 workstation with a 195 MHz R10000 processor.

Rooms: 2 rooms connected by door (20 polygons)
City: Small city with box-shaped buildings (1,125 polygons)
Maze: Maze with off-axis walls (310 polygons)
Floor: One floor of Soda Hall (1,772 polygons)
Arena: Video game environment (665 polygons)
Building: Five floors of Soda Hall (10,057 polygons)

Figure 7: Test models.

Results of the spatial subdivision phase are shown in Table 1. For each test model, we list how many input polygons it has (# Polys), along with the numbers of cells, faces, edges, and vertices in the resulting winged-pair structure and the wall-clock time (in seconds) taken by the spatial subdivision algorithm. We note that the complexity of the winged-pair grows linearly with the number of polygons for these models. Also, the spatial subdivision processing times are reasonably quick (less than one minute), even for complex 3D models such as a small city or a large building.

Test Model    # Polys   # Cells   # Faces
Rooms              20         6        66
Maze              310       315     2,113
Arena             665       294     3,625
City            1,125       829     7,568
Floor           1,772       808     9,623
Building       10,057     4,504    54,459

Table 1: Spatial subdivision statistics.

Results of the beam tracing phase are shown in Table 2. Visibility map computation was fastest for viewpoints inside the building models, where beams are quickly truncated by the dense occlusions of walls. Finally, we note that the maximum time required to compute the visibility map in all our tests was a little more than one second.

Table 2: Beam tracing statistics.

Figure 8 shows visualizations of our algorithms captured from an interactive program computing the visibility map for viewpoints in the 'City' model. The top two images ((a) and (b)) show the winged-pair structure constructed by our system. In these images, every face is drawn with a unique color, edges are drawn as solid white lines, and vertices are drawn as green dots. Note how few input polygons are split by binary space partitioning planes. The next three images ((c), (d), and (e)) show views from one camera flying over the city. As before, every face is drawn with a unique color, but computed silhouette edges are also shown as wide white lines in image (d), and intersections between beams and winged-pair faces are shown as yellow lines in image (e). The bottom-right image (f) shows a birds-eye view of the set of surfaces (blue polygons) visible to the viewpoint (looking from the bottom-left corner of the image towards the top-right corner) overlaid with edges of the winged-pair structure (white lines).

The images in the bottom row of Figure 8 illustrate the most significant problem with the recursive beam tracing approach: beams get fragmented by cell boundaries as they are traced through free space [14, 35]. In theory, the number of beams can be exponential in the number of winged-pair faces traversed. In practice, the number of beams traced depends on the complexity of the visible region of the model. As a result, these methods are best-suited for use in densely occluded environments, such as building interiors. In future work, we plan to pursue topological beam tracing methods in which beams are split only at silhouette edges (as in [14]).

(a) Winged-pair faces (birds-eye view); every face is drawn with a unique color.
(b) Winged-pair edges and vertices (birds-eye view); edges are white, and vertices are green.
(c) Winged-pair faces (interior view); every face is drawn with a unique color.
(d) Silhouette edges (interior view); silhouette edges are thick white lines.
(e) Visibility map computation (interior view); yellow lines show beam-face intersections.
(f) Visible surfaces (birds-eye view); winged-pair edges are white, visible surfaces are blue.

Figure 8: Visualization of winged-pair structure and visibility map computed for 'City' model.

6 Conclusion

This paper presents data structures and algorithms useful for mapping images onto surfaces in a hybrid geometry- and image-based rendering system. Our method uses a preprocessing phase to construct a winged-pair data structure encoding the topology and geometry of the 3D input model. A second phase traces beams through the winged-pair structure to find visible surfaces. The beam tracing algorithm computes visible surfaces with object-space precision, it is able to find silhouette edges efficiently, and its execution time depends only on the complexity of the visible region for each camera viewpoint. Topics for future work include investigation of how topological relationships can be used to construct, store, and sample radiance maps more efficiently.

Acknowledgements

The author would like to acknowledge Steve Fortune and David Dobkin for their helpful discussions. Thanks also go to Seth Teller for use of the 'Maze' test model, and Bruce Naylor for the 'Arena' test model. Finally, I would like to thank Wilmot Li and Wagner Correa for their contributions to the beam tracing implementation.

References

[1] Bruce G. Baumgart. Geometric modeling for computer vision. AIM-249, STAN-CS-74-463, CS Dept., Stanford U., October 1974.

[2] J. F. Blinn and M. E. Newell. Texture and reflection in computer generated images. Communications of the ACM, 19:542–546, 1976.

[3] James Calvin, Alan Dickens, Bob Gaines, Paul Metzger, Dale Miller, and Dan Owen. The SIMNET virtual world architecture. In Proceedings of the IEEE Virtual Reality Annual International Symposium, pages 450–455, September 1993.

[4] Edwin E. Catmull. A Subdivision Algorithm for Computer Display of Curved Surfaces. PhD thesis, Dept. of CS, U. of Utah, December 1974.

[5] Shenchang Eric Chen. QuickTime VR: an image-based approach to virtual environment navigation. In Robert Cook, editor, SIGGRAPH 95 Conference Proceedings, Annual Conference Series, pages 29–38. ACM SIGGRAPH, Addison Wesley, August 1995. Held in Los Angeles, California, 06-11 August 1995.

[6] Shenchang Eric Chen and Lance Williams. View interpolation for image synthesis. In James T. Kajiya, editor, Computer Graphics (SIGGRAPH '93 Proceedings), volume 27, pages 279–288, August 1993.

[7] Satyan Coorg. Pose Imagery and Automated 3-D Modeling of Urban Environments. PhD thesis, Dept. of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, September 1998.

[8] Paul Debevec and Steven Gortler. Image-based modeling and rendering. In SIGGRAPH 98 Course Notes. ACM SIGGRAPH, Addison Wesley, July 1998.

[9] Paul E. Debevec, Camillo J. Taylor, and Jitendra Malik. Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach. In Holly Rushmeier, editor, SIGGRAPH 96 Conference Proceedings, Annual Conference Series, pages 11–20. ACM SIGGRAPH, Addison Wesley, August 1996. Held in New Orleans, Louisiana, 04-09 August 1996.

[10] Paul E. Debevec, Yizhou Yu, and George D. Borshukov. Efficient view-dependent image-based rendering with projective texture-mapping. In Eurographics Rendering Workshop, pages 105–116, June 1998.

[11] F. Dévai. Quadratic bounds for hidden line elimination. In Proc. 2nd Annu. ACM Sympos. Comput. Geom., pages 269–275, 1986.

[12] D. P. Dobkin and M. J. Laszlo. Primitives for the manipulation of three-dimensional subdivisions. Algorithmica, 4:3–32, 1989.

[13] S. E. Dorward. A survey of object-space hidden surface removal. Internat. J. Comput. Geom. Appl., 4:325–362, 1994.

[14] Steven Fortune. Topological beam tracing. In Proc. ACM Symposium on Computational Geometry, 1999. To appear.

[15] H. Fuchs, Z. M. Kedem, and B. F. Naylor. On visible surface generation by a priori tree structures. In Computer Graphics (SIGGRAPH '80 Proceedings), volume 14, pages 124–133, July 1980.

[16] Thomas A. Funkhouser, Ingrid Carlbom, Gary Elko, Gopal Pingali, Mohan Sondhi, and Jim West. A beam tracing approach to acoustic modeling for interactive virtual environments. In SIGGRAPH 98 Conference Proceedings, Annual Conference Series, pages 21–32. ACM SIGGRAPH, Addison Wesley, July 1998.

[17] Steven J. Gortler, Radek Grzeszczuk, Richard Szeliski, and Michael F. Cohen. The lumigraph. In Holly Rushmeier, editor, SIGGRAPH 96 Conference Proceedings, Annual Conference Series, pages 43–54. ACM SIGGRAPH, Addison Wesley, August 1996. Held in New Orleans, Louisiana, 04-09 August 1996.

[18] Ned Greene. Environment mapping and other applications of world projections. IEEE Computer Graphics and Applications, 6(11), November 1986.

[19] Paul S. Heckbert and Pat Hanrahan. Beam tracing polygonal objects. In Hank Christiansen, editor, Computer Graphics (SIGGRAPH '84 Proceedings), volume 18, pages 119–127, July 1984.

[20] Modern https://www.wendangku.net/doc/b77850105.html/demo.html, 1999.

[21] id Software. Quake, 1996.

[22] William Jepson, Robin Liggett, and Scott Friedman. An environment for real-time urban simulation. In Pat Hanrahan and Jim Winget, editors, 1995 Symposium on Interactive 3D Graphics, pages 165–166. ACM SIGGRAPH, April 1995. ISBN 0-89791-736-7.

[23] C. B. Jones. A new approach to the 'hidden line' problem. Computer Journal, 14(3):232–237, August 1971.

[24] Marc Levoy and Pat Hanrahan. Light field rendering. In Holly Rushmeier, editor, SIGGRAPH 96 Conference Proceedings, Annual Conference Series, pages 31–42. ACM SIGGRAPH, Addison Wesley, August 1996. Held in New Orleans, Louisiana, 04-09 August 1996.

[25] A. Lippman. Movie-maps: an application of the optical videodisc to computer graphics. Computer Graphics, 14(3):32–42, July 1980.

[26] Michael R. Macedonia, Michael J. Zyda, David R. Pratt, Donald P. Brutzman, and Paul T. Barham. Exploiting reality with multicast groups. IEEE Computer Graphics and Applications, 15(5):38–45, September 1995.

[27] M. McKenna. Worst-case optimal hidden-surface removal. ACM Trans. Graph., 6:19–28, 1987.

[28] Leonard McMillan and Gary Bishop. Plenoptic modeling: An image-based rendering system. In Robert Cook, editor, SIGGRAPH 95 Conference Proceedings, Annual Conference Series, pages 39–46. ACM SIGGRAPH, Addison Wesley, August 1995. Held in Los Angeles, California, 06-11 August 1995.

[29] Bruce Naylor. Constructing good partition trees. In Proceedings of Graphics Interface '93, pages 181–191, Toronto, Ontario, Canada, May 1993. Canadian Information Processing Society.

[30] Bruce F. Naylor. Partitioning tree image representation and generation from 3D geometric models. In Proceedings of Graphics Interface '92, pages 201–212, May 1992.
