

KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera*

Shahram Izadi¹, David Kim¹,³, Otmar Hilliges¹, David Molyneaux¹,⁴, Richard Newcombe²,
Pushmeet Kohli¹, Jamie Shotton¹, Steve Hodges¹, Dustin Freeman¹,⁵,
Andrew Davison², Andrew Fitzgibbon¹

¹Microsoft Research Cambridge, UK   ²Imperial College London, UK   ³Newcastle University, UK
⁴Lancaster University, UK   ⁵University of Toronto, Canada

Figure 1: KinectFusion enables real-time detailed 3D reconstructions of indoor scenes using only the depth data from a standard Kinect camera. A) User points Kinect at coffee table scene. B) Phong shaded reconstructed 3D model (the wireframe frustum shows the current tracked 3D pose of the Kinect). C) 3D model texture mapped using Kinect RGB data, with real-time particles simulated on the 3D model as reconstruction occurs. D) Multi-touch interactions performed on any reconstructed surface. E) Real-time segmentation and 3D tracking of a physical object.

ABSTRACT

KinectFusion enables a user holding and moving a standard Kinect camera to rapidly create detailed 3D reconstructions of an indoor scene. Only the depth data from Kinect is used to track the 3D pose of the sensor and reconstruct geometrically precise 3D models of the physical scene in real-time. The capabilities of KinectFusion, as well as the novel GPU-based pipeline, are described in full. We show uses of the core system for low-cost handheld scanning, and geometry-aware augmented reality and physics-based interactions. Novel extensions to the core GPU pipeline demonstrate object segmentation and user interaction directly in front of the sensor, without degrading camera tracking or reconstruction. These extensions are used to enable real-time multi-touch interactions anywhere, allowing any planar or non-planar reconstructed physical surface to be appropriated for touch.

ACM Classification: H5.2 [Information Interfaces and Presentation]: User Interfaces. I4.5 [Image Processing and Computer Vision]: Reconstruction. I3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism.

General terms: Algorithms, Design, Human Factors.

Keywords: 3D, GPU, Surface Reconstruction, Tracking, Depth Cameras, AR, Physics, Geometry-Aware Interactions

*Research conducted at Microsoft Research Cambridge, UK.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

UIST'11, October 16–19, 2011, Santa Barbara, CA, USA. Copyright 2011 ACM 978-1-4503-0716-1/11/10...$10.00.

INTRODUCTION

While depth cameras are not conceptually new, Kinect has made such sensors accessible to all. The quality of the depth sensing, given the low-cost and real-time nature of the device, is compelling, and has made the sensor instantly popular with researchers and enthusiasts alike.

The Kinect camera uses a structured light technique [8] to generate real-time depth maps containing discrete range measurements of the physical scene. This data can be reprojected as a set of discrete 3D points (or point cloud). Even though the Kinect depth data is compelling, particularly compared to other commercially available depth cameras, it is still inherently noisy (see Figures 2B and 3 left). Depth measurements often fluctuate and depth maps contain numerous 'holes' where no readings were obtained.

To generate 3D models for use in applications such as gaming, physics, or CAD, higher-level surface geometry needs to be inferred from this noisy point-based data. One simple approach makes strong assumptions about the connectivity of neighboring points within the Kinect depth map to generate a mesh representation. This, however, leads to noisy and low-quality meshes, as shown in Figure 2C. As importantly, this approach creates an incomplete mesh, from only a single, fixed viewpoint. To create a complete (or even watertight) 3D model, different viewpoints of the physical scene must be captured and fused into a single representation.

This paper presents a novel interactive reconstruction system called KinectFusion (see Figure 1). The system takes live depth data from a moving Kinect camera and, in real-time, creates a single high-quality, geometrically accurate 3D model. A user holding a standard Kinect camera can move within any indoor space, and reconstruct a 3D model of the physical scene within seconds.

Figure 2: A) RGB image of scene. B) Normals extracted from raw Kinect depth map. C) 3D mesh created from a single depth map. D and E) 3D model generated from KinectFusion, showing surface normals (D) and rendered with Phong shading (E).

The system continuously tracks the 6 degrees-of-freedom (DOF) pose of the camera and fuses new viewpoints of the scene into a global surface-based representation. A novel GPU pipeline allows for accurate camera tracking and surface reconstruction at interactive real-time rates. This paper details the capabilities of our novel system, as well as the implementation of the GPU pipeline in full.

We demonstrate core uses of KinectFusion as a low-cost handheld scanner, and present novel interactive methods for segmenting physical objects of interest from the reconstructed scene. We show how a real-time 3D model can be leveraged for geometry-aware augmented reality (AR) and physics-based interactions, where virtual worlds more realistically merge and interact with the real.

Placing such systems into an interaction context, where users need to dynamically interact in front of the sensor, reveals a fundamental challenge: no longer can we assume a static scene for camera tracking or reconstruction. We illustrate failure cases caused by a user moving in front of the sensor. We describe new methods to overcome these limitations, allowing camera tracking and reconstruction of a static background scene, while simultaneously segmenting, reconstructing and tracking foreground objects, including the user. We use this approach to demonstrate real-time multi-touch interactions anywhere, allowing a user to appropriate any physical surface, be it planar or non-planar, for touch.

RELATED WORK

Reconstructing geometry using active sensors [16], passive cameras [11,18], online images [7], or from unordered 3D points [14,29] are well-studied areas of research in computer graphics and vision. There is also extensive literature within the AR and robotics community on Simultaneous Localization and Mapping (SLAM), aimed at tracking a user or robot while creating a map of the surrounding physical environment (see [25]). Given this broad topic, and our desire to build a system for interaction, this section is structured around specific design goals that differentiate KinectFusion from prior work. The combination of these features makes our interactive reconstruction system unique.

Interactive rates  Our primary goal with KinectFusion is to achieve real-time interactive rates for both camera tracking and 3D reconstruction. This speed is critical for permitting direct feedback and user interaction. This differentiates us from many existing reconstruction systems that support only offline reconstructions [7], real-time but non-interactive rates (e.g. the Kinect-based system of [12] reconstructs at ~2Hz), or support real-time camera tracking but non real-time reconstruction or mapping phases [15,19,20].

No explicit feature detection  Unlike structure from motion (SfM) systems (e.g. [15]) or RGB plus depth (RGBD) techniques (e.g. [12,13]), which need to robustly and continuously detect sparse scene features, our approach to camera tracking avoids an explicit detection step, and directly works on the full depth maps acquired from the Kinect sensor. Our system also avoids the reliance on RGB (used in recent Kinect RGBD systems e.g. [12]), allowing use in indoor spaces with variable lighting conditions.

High-quality reconstruction of geometry  A core goal of our work is to capture detailed (or dense) 3D models of the real scene. Many SLAM systems (e.g. [15]) focus on real-time tracking, using sparse maps for localization rather than reconstruction. Others have used simple point-based representations (such as surfels [12] or aligned point-clouds [13]) for reconstruction. KinectFusion goes beyond these point-based representations by reconstructing surfaces, which more accurately approximate real-world geometry.

Dynamic interaction assumed  We explore tracking and reconstruction in the context of user interaction. Given this requirement, it is critical that the representation we use can deal with dynamically changing scenes, where users directly interact in front of the camera. While there has been work on using mesh-based representations for live reconstruction from passive RGB [18,19,20] or active Time-of-Flight (ToF) cameras [4,28], these do not readily deal with changing, dynamic scenes.

Infrastructure-less  We aim to allow users to explore and reconstruct arbitrary indoor spaces. This suggests a level of mobility, and contrasts with systems that use fixed or large sensors (e.g. [16,23]) or are fully embedded in the environment (e.g. [26]). We also aim to perform camera tracking without the need for prior augmentation of the space, whether this is the use of infrastructure-heavy tracking systems (e.g. [2]) or fiducial markers (e.g. [27]).

Room scale  One final requirement is to support whole room reconstructions and interaction. This differentiates KinectFusion from prior dense reconstruction systems, which have either focused on smaller desktop scenes [19,20] or scanning of small physical objects [4,28].

The remainder of this paper is structured into two parts: the first provides a high-level description of the capabilities of KinectFusion; the second describes the technical aspects of the system, focusing on our novel GPU pipeline.

Figure 3: Left: Raw Kinect data (shown as surface normals). Right: Reconstruction shows hole filling and high-quality details such as keys on a keyboard, phone number pad, wires, and even a DELL logo on the side of a PC (an engraving less than 1mm deep).

Figure 4: A) User rotating object in front of fixed Kinect. B) 360° 3D reconstruction. C) 3D model imported into SolidWorks. D) 3D printout from reconstruction.

KINECTFUSION

Our system allows a user to pick up a standard Kinect camera and move rapidly within a room to reconstruct a high-quality, geometrically precise 3D model of the scene. To achieve this, our system continually tracks the 6DOF pose of the camera and fuses live depth data from the camera into a single global 3D model in real-time. As the user explores the space, new views of the physical scene are revealed and these are fused into the same model. The reconstruction therefore grows in detail as new depth measurements are added. Holes are filled, and the model becomes more complete and refined over time (see Figure 3).

Even small motions, caused for example by camera shake, result in new viewpoints of the scene and hence refinements to the model. This creates an effect similar to image super-resolution [6], adding greater detail than appears visible in the raw signal (see Figure 3). As illustrated in Figures 2 and 3, the reconstructions are high-quality, particularly given the noisy input data and speed of reconstruction. The reconstructed model can also be texture mapped using the Kinect RGB camera (see Figures 1C, 5B and 6A).

Low-cost Handheld Scanning

A basic and yet compelling use for KinectFusion is as a low-cost object scanner. Although there is a body of research focusing on object scanning using passive and active cameras [4,28], the speed, quality, and scale of reconstructions have not been demonstrated previously with such low-cost commodity hardware. The mobile and real-time nature of our system allows users to rapidly capture an object from different viewpoints, and see onscreen feedback immediately.

Figure 5: Fast and direct object segmentation. First the entire scene is scanned, including the object of interest (the teapot). 3D reconstruction shows surface normals (A) and texture mapped model (B). Bottom left to right: Teapot physically removed. System monitors real-time changes in reconstruction and colors large changes yellow. C) This achieves accurate segmentation of the teapot 3D model from the initial scan.

Reconstructed 3D models can be imported into CAD or other 3D modeling applications, or even 3D printed (see Figure 4C and D).

As also shown in Figure 4, our system can be used in 'reverse' (without any code changes), whereby the system tracks the 6DOF pose of a handheld rigid object that is rotated in front of a fixed Kinect camera (as long as the object occupies the majority of the depth map). While fingers and hands may initially form part of the 3D reconstruction, they are gradually integrated out of the 3D model, because they naturally move as the object is rotated.

Object Segmentation through Direct Interaction  Users may also wish to scan a specific smaller physical object rather than the entire scene. To support this, KinectFusion allows a user to first reconstruct the entire scene, and then accurately segment the desired object by moving it physically. The system continuously monitors the 3D reconstruction and observes changes over time. If an object is physically removed from view or moved within the scene by the user, rapid and large changes in the 3D model are observed. Such changes are detected in real-time, allowing the repositioned object to be cleanly segmented from the background model. This approach allows a user to perform segmentation rapidly and without any explicit GUI input, simply by moving the object directly (see Figure 5).

Geometry-Aware Augmented Reality  Beyond scanning, KinectFusion enables more realistic forms of AR, where a 3D virtual world is overlaid onto and interacts with the real-world representation. Figure 6 (top row) shows a virtual metallic sphere composited directly onto the 3D model, as well as the registered live RGB data from Kinect. The virtual sphere can be rendered from the same perspective as the tracked physical camera, enabling it to be spatially registered as the Kinect moves. As shown in Figure 6 (B, C and D), the live 3D model allows composited virtual graphics to be precisely occluded by the real-world, including geometrically complex objects. This quality of occlusion handling is not possible with the raw depth map (Figure 6E), especially around the edges of objects, due to significant noise along depth discontinuities. Precise occlusions are critical for truly immersive AR experiences, and have not been achieved in sparsely mapped real-time AR systems (e.g. [15]).

Figure 6: Virtual sphere composited onto texture mapped 3D model (A) and calibrated live Kinect RGB (B, C and D). Real-time 3D model used to handle precise occlusions of the virtual by complex physical geometries (B and C). Comparing occlusion handling using live depth map (E) versus 3D reconstruction (F). Note noise at depth edges, shadows and incomplete data (e.g. book) in the live depth map. Virtual sphere casts shadows on the physical scene (D) and reflects parts of the real scene (B and D).

Figure 7: Interactive simulation of physics directly on the 3D model, even during reconstruction. Thousands of particles interact with the reconstructed scene. Reconstruction, camera tracking, and physics simulation are all performed in real-time.

Raytraced rendering effects can be calculated in real-time to create realistic shadows, lighting and reflections that consider both virtual and real geometries. For example, Figure 6 (B and D) shows how the virtual can cast shadows onto the real geometry, as well as reflect parts of the real scene onto the virtual. The latter can even be parts of the scene that are occluded from the current perspective of the physical camera.

Taking Physics Beyond the 'Surface'

Taking this ability of combining and spatially aligning real and virtual worlds one step further, the virtual can also begin to interact dynamically with the reconstructed scene by simulating aspects of real-world physics. Many types of applications such as gaming and robotics can benefit from such physically precise real-time simulation. Rigid body collisions can be simulated live even as the 3D model is being reconstructed. Figure 7 shows thousands of particles interacting with the 3D model as it is reconstructed, all in real-time. This ability to both reconstruct a scene and simultaneously perform physics computations on the model is unique, and opens up the potential for more physically realistic AR applications.

Figure 8: A user moves freely in front of a fixed Kinect. Live raw data (top) and shaded reconstruction (bottom). Left: Scene without user. Middle: User enters scene, but is only partially reconstructed due to motion. Right: Continued scene motions cause tracking failure.

Reaching into the Scene

It is important to note that, like most of the related literature on SLAM and camera-based reconstruction, our core system described so far makes a fundamental assumption: that camera tracking will be performed on a static scene. Once we switch focus from reconstructing the scene towards interacting within it, these assumptions no longer hold true. Physical objects, such as the user's hands, will inevitably appear in the scene, move dynamically, and impact tracking and reconstruction. Our camera tracking is robust to transient and rapid scene motions (such as the earlier example in Figure 5). However, prolonged interactions with the scene are problematic, as illustrated in Figure 8.

While this is clearly a challenging problem within computer vision, our GPU-based pipeline is extended to disambiguate camera motion from scene motion for certain user interaction scenarios. When a user is interacting in the scene, the camera tracking 'locks' onto the background and ignores the foreground user for camera pose prediction (shown later in Figure 15). This foreground data can be tracked (in 6DOF) and reconstructed independently of camera tracking and reconstruction of the static background.

This ability to reconstruct and track the user in the scene can enable novel extensions to our physics-based simulation shown earlier. Rigid particles can now collide with the rapidly changing dynamic foreground. Figure 9 demonstrates particles interacting with a dynamically updated reconstruction of a moving user. This enables direct interaction between the user and the physics-enabled virtual objects. Furthermore, as we have captured geometry of both the background scene and foreground user (e.g. hands or potentially the full body), we can determine intersections between the two. These points of intersection indicate when the foreground 'comes into contact with' the background, and form a robust method to detect when a user is touching any arbitrarily shaped surface, including non-planar geometries. This allows direct multi-touch interactions, such as those demonstrated in the interactive tabletop community, to reach any surface in the user's environment (see Figure 10).

Figure 9: Interactive simulation of particle physics on dynamic scenes. Particles interact with the dynamically moving foreground user, whilst the physical camera and user move. The user can collide with the physics-enabled virtual objects.

Figure 10: Enabling touch input on arbitrary surfaces with a moving camera. A) Live RGB. B) Composited view with segmented hand and single finger touching curved surface. C) Rendered as surface normals. D) Single finger drawing on a curved surface. E) Multi-touch on regular planar book surface. F) Multi-touch on an arbitrarily shaped surface.


GPU IMPLEMENTATION

Our approach for real-time camera tracking and surface reconstruction is based on two well-studied algorithms [1,5,24], which have been designed from the ground-up for parallel execution on the GPU. A full formulation of our method is provided in [21], as well as a quantitative evaluation of reconstruction performance. The focus of this section is on the implementation of our novel core and extended GPU pipeline, which is critical in enabling interactive rates.

The main system pipeline consists of four main stages (labeled appropriately in Figure 11):

a) Depth Map Conversion  The live depth map is converted from image coordinates into 3D points (referred to as vertices) and normals in the coordinate space of the camera.

b) Camera Tracking  In the tracking phase, a rigid 6DOF transform is computed to closely align the current oriented points with the previous frame, using a GPU implementation of the Iterative Closest Point (ICP) algorithm [1]. Relative transforms are incrementally applied to a single transform that defines the global pose of the Kinect.

c) Volumetric Integration  Instead of fusing point clouds or creating a mesh, we use a volumetric surface representation based on [5]. Given the global pose of the camera, oriented points are converted into global coordinates, and a single 3D voxel grid is updated. Each voxel stores a running average of its distance to the assumed position of a physical surface.

d) Raycasting  Finally, the volume is raycast to extract views of the implicit surface, for rendering to the user. When using the global pose of the camera, this raycasted view of the volume also equates to a synthetic depth map, which can be used as a less noisy, more globally consistent reference frame for the next iteration of ICP. This allows tracking by aligning the current live depth map with our less noisy raycasted view of the model, as opposed to using only the live depth maps frame-to-frame.

Figure 11: Overview of the tracking and reconstruction pipeline, from raw depth map to rendered view of the 3D scene.

Each of these steps is executed in parallel on the GPU using the CUDA language. We describe each step of the pipeline in the following sections.

Depth Map Conversion

At time i, each CUDA thread operates in parallel on a separate pixel u = (x, y) in the incoming depth map D_i(u). Given the intrinsic calibration matrix K (of the Kinect infrared camera), each GPU thread reprojects a specific depth measurement as a 3D vertex in the camera's coordinate space as follows: v_i(u) = D_i(u) K^-1 [u, 1]. This results in a single vertex map V_i computed in parallel.

Corresponding normal vectors for each vertex are computed by each GPU thread using neighboring reprojected points: n_i(u) = (v_i(x+1, y) − v_i(x, y)) × (v_i(x, y+1) − v_i(x, y)), normalized to unit length n_i / ||n_i||. This results in a single normal map N_i computed in parallel.

The 6DOF camera pose at time i is a rigid body transform matrix T_i = [R_i | t_i], containing a 3x3 rotation matrix R_i and a 3D translation vector t_i. Given this transform, a vertex and normal can be converted into global coordinates: v^g_i(u) = T_i v_i(u) and n^g_i(u) = R_i n_i(u) respectively.
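For concreteness, the following minimal Python/NumPy sketch performs the same conversion on the CPU, with array operations standing in for per-pixel CUDA threads; the intrinsic parameters and the flat-wall test depth map are illustrative values only, not Kinect calibration data.

import numpy as np

def depth_to_vertex_map(depth, K):
    # Reproject each depth pixel: v_i(u) = D_i(u) K^-1 [u, 1].
    h, w = depth.shape
    K_inv = np.linalg.inv(K)
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1)   # homogeneous pixel coords (h, w, 3)
    rays = pix @ K_inv.T                                  # back-projected rays
    return rays * depth[..., None]                        # scale by depth -> vertex map V_i

def vertex_to_normal_map(V):
    # Cross product of neighbouring vertex differences, normalised to unit length.
    N = np.zeros_like(V)
    dx = V[:, 1:, :] - V[:, :-1, :]                       # v(x+1, y) - v(x, y)
    dy = V[1:, :, :] - V[:-1, :, :]                       # v(x, y+1) - v(x, y)
    n = np.cross(dx[:-1, :, :], dy[:, :-1, :])
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    N[:-1, :-1, :] = np.where(norm > 0, n / np.maximum(norm, 1e-9), 0)
    return N

if __name__ == "__main__":
    K = np.array([[575.0, 0, 320.0], [0, 575.0, 240.0], [0, 0, 1.0]])  # illustrative intrinsics
    depth = np.full((480, 640), 1.5)                                   # flat wall 1.5 m away
    V = depth_to_vertex_map(depth, K)
    N = vertex_to_normal_map(V)
    print(V.shape, N[100, 100])   # on a flat wall the normal points along +Z in this convention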

Camera Tracking

ICP is a popular and well-studied algorithm for 3D shape alignment (see [24] for a detailed study). In KinectFusion, ICP is instead leveraged to track the camera pose for each new depth frame, by estimating a single 6DOF transform that closely aligns the current oriented points with those of the previous frame. This gives a relative 6DOF transform T_rel which can be incrementally applied together to give the single global camera pose T_i.

The important first step of ICP is to find correspondences between the current oriented points at time i and those of the previous frame at i−1. In our system we use projective data association [24] to find these correspondences. This part of the GPU-based algorithm is shown as pseudocode in Listing 1. Given the previous global camera pose T_{i−1}, each GPU thread transforms a unique point from the previous frame into camera coordinate space, and perspective projects it into image coordinates. It then uses this 2D point as a lookup into the current vertex (V_i) and normal maps (N_i), finding corresponding points along the ray (i.e. projected onto the same image coordinates). Finally, each GPU thread tests the compatibility of corresponding points to reject outliers, by first converting both into global coordinates, and then testing that the Euclidean distance and angle between them are within a threshold.

Listing 1  Projective point-plane data association.

1: for each image pixel u ∈ depth map D_i in parallel do
2:   if D_i(u) > 0 then
3:     v_{i−1} ← T^{-1}_{i−1} v^g_{i−1}
4:     p ← perspective project vertex v_{i−1}
5:     if p ∈ vertex map V_i then
6:       v ← T_{i−1} V_i(p)
7:       n ← R_{i−1} N_i(p)
8:       if ||v − v^g_{i−1}|| and the angle between n and n^g_{i−1} are within thresholds then
9:         point correspondence found

Given this set of corresponding oriented points, the output of each ICP iteration is a single relative transformation matrix T_rel that minimizes the point-to-plane error metric [3], defined as the sum of squared distances between each point in the current frame and the tangent plane at its corresponding point in the previous frame:

argmin_{T_rel}  Σ_{D_i(u) > 0}  || (T_rel v_i(u) − v^g_{i−1}(u)) · n^g_{i−1}(u) ||²        (1)

We make a linear approximation to solve this system, by assuming only an incremental transformation occurs between frames [3,17]. The linear system is computed and summed in parallel on the GPU using a tree reduction. The solution to this 6x6 linear system is then solved on the CPU using a Cholesky decomposition.

One of the key novel contributions of our GPU-based camera tracking implementation is that ICP is performed on all the measurements provided in each 640×480 Kinect depth map. There is no sparse sampling of points or need to explicitly extract features (although of course ICP does implicitly require depth features to converge). This type of dense tracking is only feasible due to our novel GPU implementation, and plays a central role in enabling segmentation and user interaction in KinectFusion, as described later.
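The following Python/NumPy sketch illustrates one such ICP iteration on the CPU under the same linearisation: correspondences are assumed to be given index-to-index (standing in for projective data association), outliers are rejected with illustrative distance and angle thresholds, and the 6x6 normal equations are accumulated and solved with a Cholesky factorisation. It is an unoptimised stand-in for the GPU tree reduction, not the system's implementation.

import numpy as np

def point_to_plane_icp_step(src_pts, dst_pts, dst_nrm,
                            dist_thresh=0.1, angle_thresh_deg=20.0, src_nrm=None):
    # src_pts: (N,3) current-frame vertices (already in global coordinates).
    # dst_pts, dst_nrm: (N,3) corresponding model vertices/normals, matched index-to-index.
    # Returns a 4x4 incremental transform T_rel.
    keep = np.linalg.norm(src_pts - dst_pts, axis=1) < dist_thresh      # distance threshold
    if src_nrm is not None:
        cosang = np.sum(src_nrm * dst_nrm, axis=1)                      # optional angle threshold
        keep &= cosang > np.cos(np.deg2rad(angle_thresh_deg))
    s, d, n = src_pts[keep], dst_pts[keep], dst_nrm[keep]

    # Small-angle linearisation: residual ~ (s - d).n + w.(s x n) + t.n
    A = np.hstack([np.cross(s, n), n])                                  # (M,6) rows [s x n, n]
    b = np.sum((d - s) * n, axis=1)

    AtA = A.T @ A                                                       # 6x6 system (GPU: tree reduction)
    Atb = A.T @ b
    L = np.linalg.cholesky(AtA + 1e-9 * np.eye(6))                      # Cholesky solve on the CPU
    x = np.linalg.solve(L.T, np.linalg.solve(L, Atb))

    wx, wy, wz, tx, ty, tz = x
    T_rel = np.eye(4)
    T_rel[:3, :3] = np.array([[1, -wz,  wy],                            # first-order rotation approximation
                              [wz,  1, -wx],
                              [-wy, wx,  1]])
    T_rel[:3, 3] = [tx, ty, tz]
    return T_rel

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dst = rng.uniform(-1, 1, (2000, 3))
    nrm = rng.normal(size=(2000, 3)); nrm /= np.linalg.norm(nrm, axis=1, keepdims=True)
    src = dst + np.array([0.01, -0.005, 0.02])                          # small known offset to recover
    print(point_to_plane_icp_step(src, dst, nrm)[:3, 3])                # approx [-0.01, 0.005, -0.02]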

Volumetric Representation

By predicting the global pose of the camera using ICP, any depth measurement can be converted from image coordinates into a single consistent global coordinate space. We integrate this data using a volumetric representation based on [5]. A 3D volume of fixed resolution is predefined, which maps to specific dimensions of a 3D physical space. This volume is subdivided uniformly into a 3D grid of voxels. Global 3D vertices are integrated into voxels using a variant of Signed Distance Functions (SDFs) [22], specifying a relative distance to the actual surface. These values are positive in front of the surface, negative behind, with the surface interface defined by the zero-crossing where the values change sign. In practice, we only store a truncated region around the actual surface [5], referred to in this paper as Truncated Signed Distance Functions (TSDFs). Whilst this approach has been studied in the context of laser range finders, we have found this representation also has many advantages for the Kinect sensor data, particularly when compared to other representations such as meshes. It implicitly encodes uncertainty in the range data, efficiently deals with multiple measurements, fills holes as new measurements are added, accommodates sensor motion, and implicitly stores surface geometry.

Listing 2  Projective TSDF integration leveraging coalesced memory access.

1: for each voxel g in x,y volume slice in parallel do
2:   while sweeping from front slice to back do
3:     v^g ← convert g from grid to global 3D position
4:     v ← T^{-1}_i v^g
5:     p ← perspective project vertex v
6:     if v in camera view frustum then
7:       sdf_i ← ||t_i − v^g|| − D_i(p)
8:       if (sdf_i > 0) then
9:         tsdf_i ← min(1, sdf_i / max truncation)
10:      else
11:        tsdf_i ← max(−1, sdf_i / min truncation)
12:      w_i ← min(max weight, w_{i−1} + 1)
13:      tsdf_avg ← (tsdf_{i−1} w_{i−1} + tsdf_i w_i) / w_i
14:      store w_i and tsdf_avg at voxel g

Volumetric Integration

To achieve real-time rates, we describe a novel GPU implementation of volumetric TSDFs. The full 3D voxel grid is allocated on the GPU as aligned linear memory. Whilst clearly not memory efficient (a 512³ volume containing 32-bit voxels requires 512MB of memory), our approach is speed efficient. Given the memory is aligned, access from parallel threads can be coalesced to increase memory throughput. This allows a full sweep of a volume (reading and writing to every voxel) to be performed incredibly quickly on commodity graphics hardware (e.g. a 512³ sweep, accessing over 130 million voxels, takes ~2ms on an NVIDIA GTX 470).

Our algorithm provides three novel contributions. First, it ensures real-time coalesced access to the voxel grid, whilst integrating depth data projectively. Second, it generates TSDF values for voxels within the current camera frustum that do not contain a direct measurement in the current depth map. This allows continuous surface estimates to be discretized into the voxel grid, from the point-based Kinect depth maps. Third, it is much simpler to implement than hierarchical techniques (e.g. [29]), but with the increased available memory on commodity GPUs, can scale to modeling a whole room.

The pseudocode in Listing 2 illustrates the main steps of our algorithm. Due to the large number of voxels typically within a volume, it is not feasible to launch a GPU thread per voxel. To ensure coalesced memory access, a GPU thread is assigned to each (x,y) position on the front slice of the volume. In parallel, GPU threads then sweep through the volume, moving along each slice on the Z-axis. Given the resolution of the volume, and the physical dimensions this maps to, each discrete 3D grid location can be converted into a vertex in global coordinates. A metric distance from the camera center (the translation vector of the global camera transform) to this vertex can be calculated. This 3D vertex can also be perspective projected back into image coordinates to look up the actual depth measurement along the ray. The difference between measured and calculated distances gives a new SDF value for the voxel (line 7). This is normalized to a TSDF (lines 9 and 11) and averaged with the previous stored value using a simple running weighted average (line 13) [5]. Both the new weight and averaged TSDF are stored at the voxel.
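The following NumPy sketch mirrors this projective integration on the CPU, operating on the whole volume at once rather than sweeping slices with coalesced GPU threads. The truncation distance, maximum weight and volume dimensions are illustrative, and the sign convention follows the text above (positive in front of the surface).

import numpy as np

def integrate_tsdf(tsdf, weight, depth, K, T_cam, vol_origin, voxel_size,
                   trunc=0.05, max_weight=64.0):
    # Fuse one depth map into a TSDF volume (projective integration in the spirit of [5]).
    # tsdf, weight: (res, res, res) arrays updated in place.
    # T_cam: 4x4 camera-to-global pose; vol_origin: global position of voxel (0,0,0).
    res = tsdf.shape[0]
    R, t = T_cam[:3, :3], T_cam[:3, 3]
    grid = np.stack(np.meshgrid(*[np.arange(res)] * 3, indexing="ij"), axis=-1)
    v_g = vol_origin + (grid + 0.5) * voxel_size               # voxel centres in global space
    v_c = (v_g - t) @ R                                        # into camera frame: R^T (v_g - t)
    z = v_c[..., 2]
    proj = v_c @ K.T                                           # perspective projection
    u = np.round(proj[..., 0] / np.maximum(z, 1e-9)).astype(int)
    v = np.round(proj[..., 1] / np.maximum(z, 1e-9)).astype(int)
    h, w = depth.shape
    in_view = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros_like(z)
    d[in_view] = depth[v[in_view], u[in_view]]                 # measured depth along each voxel's ray
    sdf = d - np.linalg.norm(v_g - t, axis=-1)                 # positive in front, negative behind
    update = in_view & (d > 0) & (sdf > -trunc)                # only a truncated band near the surface
    tsdf_new = np.clip(sdf / trunc, -1.0, 1.0)
    w_old = weight[update]
    # Running weighted average of old and new TSDF values, with a capped weight.
    tsdf[update] = (tsdf[update] * w_old + tsdf_new[update]) / (w_old + 1.0)
    weight[update] = np.minimum(w_old + 1.0, max_weight)

if __name__ == "__main__":
    K = np.array([[575.0, 0, 320.0], [0, 575.0, 240.0], [0, 0, 1.0]])
    depth = np.full((480, 640), 1.0)                           # flat wall 1 m from the camera
    res, voxel_size = 64, 0.02
    tsdf = np.ones((res, res, res)); weight = np.zeros((res, res, res))
    integrate_tsdf(tsdf, weight, depth, K, np.eye(4),
                   vol_origin=np.array([-0.64, -0.64, 0.2]), voxel_size=voxel_size)
    print("zero-crossing near z =", 0.2 + (np.abs(tsdf[32, 32]).argmin() + 0.5) * voxel_size)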

Raycasting for Rendering and Tracking

A GPU-based raycaster is implemented to generate views of the implicit surface within the volume for rendering and tracking (see pseudocode Listing 3). In parallel, each GPU thread walks a single ray and renders a single pixel in the output image. Given a starting position and direction of the ray, each GPU thread traverses voxels along the ray, and extracts the position of the implicit surface by observing a zero-crossing (a change in the sign of TSDF values stored along the ray). The final surface intersection point is computed using simple linear interpolation, given the trilinearly sampled points on either side of the zero-crossing. Assuming the gradient is orthogonal to the surface interface, the surface normal is computed directly as the derivative of the TSDF at the zero-crossing [22]. Therefore each GPU thread that finds a ray/surface intersection can calculate a single interpolated vertex and normal, which can be used as parameters for lighting calculations on the output pixel, in order to render the surface.

Listing 3  Raycasting to extract the implicit surface, composite virtual 3D graphics, and perform lighting operations.

1: for each pixel u ∈ output image in parallel do
2:   ray start ← back project [u, 0]; convert to grid pos
3:   ray next ← back project [u, 1]; convert to grid pos
4:   ray dir ← normalize(ray next − ray start)
5:   ray len ← 0
6:   g ← first voxel along ray dir
7:   m ← convert global mesh vertex to grid pos
8:   m dist ← ||ray start − m||
9:   while voxel g within volume bounds do
10:    ray len ← ray len + 1
11:    g prev ← g
12:    g ← traverse next voxel along ray dir
13:    if zero crossing from g to g prev then
14:      p ← extract trilinear interpolated grid position
15:      v ← convert p from grid to global 3D position
16:      n ← extract surface gradient as ∇tsdf(p)
17:      shade pixel for oriented point (v, n) or
18:      follow secondary ray (shadows, reflections, etc)
19:    if ray len > m dist then
20:      shade pixel using input mesh maps or
21:      follow secondary ray (shadows, reflections, etc)

Our rendering pipeline, shown in Figure 12, also allows conventional polygon-based graphics to be composited on the raycasted view, enabling blending of virtual and real scenes with correct occlusion handling (see Figure 6). In the first step (labeled a), a mesh-based scene is rendered with graphics camera parameters identical to the physical global camera pose (T_i) and intrinsics (K). Instead of rendering to the framebuffer, the vertex buffer, surface normals and unshaded color data are stored in off-screen vertex, normal and color maps respectively (labeled b), and used as input during raycasting (labeled c). For each GPU thread, a distance from the associated mesh vertex to the camera center is calculated in grid coordinates (Listing 3, lines 7 and 8). This distance acts as an additional termination condition while stepping along each ray (line 19), allowing accurate occlusion testing between volumetric and mesh surface geometries.

Figure 12: Rendering pipeline combining raycasting of the volume with compositing of virtual polygon-based graphics.

Ambient, diffuse and specular lighting contributions can be calculated across reconstructed and virtual geometries (see Figure 6). More advanced shading calculations can be performed by walking along the second (and possibly further) bounce of each ray. Shadows are calculated after the first ray hits a voxel or mesh surface (Listing 3, lines 13 and 19), by walking a secondary ray from the surface to the light position (using grid coordinates). If a surface is hit before ray termination then the vertex is shadowed. For reflections, once the first ray hits a surface, a new ray direction is calculated, based on the surface normal and initial ray direction.

A novel contribution of our raycaster is the ability to view the implicit surface of the reconstructed 3D model, composite polygon geometry with correct occlusion handling, and provide advanced shading requiring raytraced operations, all in real-time, through a single algorithm. Any 6DOF graphics camera transform can be used to raycast the volume, including arbitrary third-person views allowing user navigation of the 3D model. However, another key contribution of our raycaster is in generating higher-quality data for ICP camera tracking. When the raycast camera transform equates to the physical camera pose, the extracted vertices and normals equate to depth and normal maps (from the same perspective as the physical camera) but with considerably less noise, shadows and holes than the raw Kinect data. As shown in [21], this allows us to mitigate issues of drift and reduce ICP errors, by tracking directly from the raycasted model as opposed to frame-to-frame ICP tracking.
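As a simplified, single-ray stand-in for the per-pixel GPU raycaster, the sketch below marches a ray through a NumPy TSDF grid, detects the sign change, and linearly interpolates the surface position; trilinear sampling, normal extraction and shading are omitted for brevity, and the synthetic plane volume exists only for the demonstration.

import numpy as np

def raycast_ray(tsdf, origin, direction, vol_origin, voxel_size, max_steps=1000):
    # March one ray through a TSDF grid and return the interpolated surface point, or None.
    # origin/direction are in global coordinates; the step length is one voxel.
    res = tsdf.shape[0]
    direction = direction / np.linalg.norm(direction)
    prev_val, prev_pos = None, None
    for i in range(max_steps):
        pos = origin + i * voxel_size * direction
        g = np.floor((pos - vol_origin) / voxel_size).astype(int)        # grid coordinates
        if np.any(g < 0) or np.any(g >= res):
            if prev_val is None:
                continue                                                 # not inside the volume yet
            return None                                                  # left the volume
        val = tsdf[g[0], g[1], g[2]]                                     # nearest-voxel sample
        if prev_val is not None and prev_val > 0 >= val:                 # zero-crossing found
            s = prev_val / (prev_val - val)                              # linear interpolation
            return prev_pos + s * (pos - prev_pos)
        prev_val, prev_pos = val, pos
    return None

if __name__ == "__main__":
    # Synthetic TSDF of a plane at z = 1.0 inside a volume starting at z = 0.2.
    res, voxel_size, vol_origin = 64, 0.02, np.array([-0.64, -0.64, 0.2])
    ii, jj, kk = np.meshgrid(*[np.arange(res)] * 3, indexing="ij")
    z = vol_origin[2] + (kk + 0.5) * voxel_size
    tsdf = np.clip((1.0 - z) / 0.05, -1, 1)
    hit = raycast_ray(tsdf, origin=np.zeros(3), direction=np.array([0.05, 0.0, 1.0]),
                      vol_origin=vol_origin, voxel_size=voxel_size)
    print("surface hit at", hit)    # expect a point with z close to 1.0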

Simulating Real-World Physics

Taking the merging of real and virtual geometries further, the GPU pipeline is extended to support simulation of physically realistic collisions between virtual objects and the reconstructed scene. A particle simulation is implemented on the GPU, based on [9] and [10]. Scene geometry is represented within the simulation by a set of static particles (see Figure 13). These are spheres of identical size, which remain stationary but can collide with other dynamically simulated particles. Whilst an approximation, this technique models every discrete surface voxel within the volume in real-time, achieving compelling results even for very small and arbitrarily shaped objects, such as a book's edges or a teapot's handle in Figures 7 (bottom right) and 13.

Figure 13: Simulating physics on the real-time reconstruction. Left: Surface is approximated as a series of static particles (updated per integration sweep) which interact with dynamic particles. Every surface voxel is represented by a static particle. Middle: Surface normals of static and dynamic particles. Right: Shaded scene with only dynamic particles composited.

Static particles are created during volume integration. As the volume is swept, TSDF values within an adaptive threshold close to zero (defining the surface interface or zero level set) are extracted. For each surface voxel, a static particle is instantiated. Each particle contains a 3D vertex in global (metric) space, a velocity vector (always empty for static particles), and an ID. One key challenge then becomes detecting collisions. We use a spatially subdivided uniform grid to identify neighboring particles [9]. Each cell in the grid has a unique ID. Each dynamic or static particle is assigned a grid cell ID by converting the particle's global vertex to grid coordinates. Our system then maintains two lists: one containing static particles, the other dynamic. In both, particles are binned into the grid cells by sorting them by their current grid ID (using a GPU-based radix sort). During each simulation step, a GPU thread is launched per dynamic particle. Each thread processes collisions by examining a 3x3x3 neighborhood of cells (first for dynamic-dynamic collisions and then dynamic-static). The Discrete Element Method (DEM) [10] is used to calculate a velocity vector when two particles collide. The particle's global velocity is incremented based on all neighboring collisions, gravity, and interactions with the bounding volume. Each particle is then repositioned based on the accumulated velocity per simulation step.
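The sketch below mimics this binning strategy on the CPU: particles are hashed to uniform grid cells (a dictionary standing in for the GPU radix sort), and each dynamic particle resolves collisions against the 3x3x3 neighbourhood of cells. The contact response here is a deliberately simplified projection-plus-damping rule rather than the DEM force model of [10], and the cell size, radius and damping factor are illustrative.

import numpy as np
from itertools import product
from collections import defaultdict

def build_grid(positions, cell_size):
    # Bin particles into uniform grid cells keyed by integer cell coordinates
    # (a CPU hash map standing in for the GPU radix sort over cell IDs).
    grid = defaultdict(list)
    for idx, p in enumerate(positions):
        grid[tuple(np.floor(p / cell_size).astype(int))].append(idx)
    return grid

def step(dyn_pos, dyn_vel, static_pos, radius, cell_size, dt=1.0 / 60):
    # Advance dynamic particles one step: apply gravity, then resolve collisions against
    # static particles found in the 3x3x3 cell neighbourhood. Penetrations are resolved by
    # projecting the particle out and cancelling inward normal velocity, with crude damping.
    grid = build_grid(static_pos, cell_size)
    for i in range(len(dyn_pos)):
        dyn_vel[i][2] -= 9.8 * dt                          # gravity (z-up is an assumption)
        dyn_pos[i] += dyn_vel[i] * dt
        cell = np.floor(dyn_pos[i] / cell_size).astype(int)
        contacted = False
        for offset in product((-1, 0, 1), repeat=3):
            for j in grid.get(tuple(cell + np.array(offset)), []):
                rel = dyn_pos[i] - static_pos[j]
                dist = np.linalg.norm(rel)
                overlap = 2 * radius - dist
                if dist > 1e-9 and overlap > 0:
                    n = rel / dist
                    dyn_pos[i] += overlap * n              # push out of penetration
                    v_n = np.dot(dyn_vel[i], n)
                    if v_n < 0:                            # cancel inward normal velocity
                        dyn_vel[i] -= v_n * n
                    contacted = True
        if contacted:
            dyn_vel[i] *= 0.8                              # crude energy loss on contact
    return dyn_pos, dyn_vel

if __name__ == "__main__":
    # Static particles approximating a flat floor at z = 0; one dynamic particle drops onto it.
    xs, ys = np.meshgrid(np.linspace(-0.5, 0.5, 21), np.linspace(-0.5, 0.5, 21))
    static_pos = np.stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)], axis=1)
    dyn_pos, dyn_vel = np.array([[0.01, 0.01, 0.3]]), np.zeros((1, 3))
    for _ in range(180):
        step(dyn_pos, dyn_vel, static_pos, radius=0.025, cell_size=0.05)
    print("particle comes to rest just above the floor:", dyn_pos[0])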

Figure7shows thousands of particles interacting with the re-constructed scene.A major contribution of our GPU-based pipeline is that it maintains interactive rates despite the over-head of physics simulation,whilst performing real-time cam-era tracking and reconstruction.By default,only dynamic particles are rendered during raycasting and again can be cor-rectly occluded by the reconstructed geometry(see Figure7). INTERACTING IN THE SCENE

The core system described so far makes assumptions that the scene will remain reasonably static. Clearly in an interaction context, users want to move freely in front of the sensor, and interact in the scene. This opens up two main challenges.

First, ICP tracking assumes a single rigid transform occurred per frame due to camera motion. User interaction in front of the sensor will cause scene motion independent of camera motion, which breaks this assumption. Because our ICP tracking is dense (i.e. uses all available points), our system is resilient to transient scene motions. For example, in Figure 5, even when the user moves the object, enough background points will remain for ICP to converge. However, large or longer-term scene motions will cause tracking failure.

Second, whilst our system supports real-time reconstruction, surface predictions are refined over time using a running weighted average of distance values.

Figure 14: Extended GPU pipeline for real-time foreground and background segmentation, tracking and reconstruction.

By adapting the weighting, higher precedence can be given to new TSDF values, allowing for faster model updates, but the trade-off is additional noise being introduced to the reconstruction. In practice, a weight is chosen to balance quality of reconstruction with regular updates to the reconstruction based on scene changes. However, this does not support a continuously moving scene. Typically a user freely moving in the scene leads to associated depth data being partially integrated into the volume (Figure 8, middle). As camera tracking relies directly on this model, which is now inconsistent with the live data, failures will occur (Figure 8, right).

ICP Outliers for Segmentation  To begin to explore dynamic user interaction with the reconstructed scene, a novel extension to the core GPU pipeline is provided (shown in Figure 14). The technique leverages a unique property of dense ICP tracking. When all depth measurements are used, outliers from projective data association can form a strong initial prediction as to which parts of the scene are moving independent of camera motion, provided enough rigid background points are present for ICP still to converge. Our solution robustly segments a moving foreground object from the background, allowing tracking failures to be reduced, and enabling users to interact directly in the scene.

This pipeline assumes that at least parts of a rigid scene have been initially reconstructed using the core reconstruction pipeline (labeled a). After this initial scan, a moving object entering the scene contains oriented points with significant disparity to already reconstructed surfaces. These fail ICP projective data association and are copied into an outlier map (labeled b). Next, a depth-aware connected component analysis is performed on the outlier map to cluster large connected patches and remove smaller outliers due to sensor noise (labeled c). Large connected patches, where foreground scene motion has been detected, are masked in the input depth map for core 'background' reconstruction (labeled d). This stops associated foreground depth measurements being used for reconstruction or tracking in the core pipeline. Large patches of outliers can be additionally reconstructed using a second volume (labeled e), potentially running on a separate GPU with different reconstruction settings. A final step raycasts the two separate volumes and composites the output (labeled f), using the same method as Figure 12.
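A small CPU sketch of this outlier clean-up stage (labeled b and c): given a binary map of ICP outliers and the live depth map, a depth-aware flood fill joins neighbouring outlier pixels only when their depths are similar, and only sufficiently large patches are kept as foreground; the depth tolerance and minimum patch size are illustrative.

import numpy as np
from collections import deque

def depth_aware_components(outliers, depth, depth_tol=0.05, min_size=500):
    # Label connected patches of ICP outliers, joining 4-neighbours only when their
    # depth values are within depth_tol; return a mask of patches larger than min_size.
    h, w = outliers.shape
    labels = np.zeros((h, w), dtype=int)
    keep = np.zeros((h, w), dtype=bool)
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if not outliers[sy, sx] or labels[sy, sx]:
                continue
            next_label += 1
            queue, patch = deque([(sy, sx)]), [(sy, sx)]
            labels[sy, sx] = next_label
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and outliers[ny, nx]
                            and not labels[ny, nx]
                            and abs(depth[ny, nx] - depth[y, x]) < depth_tol):
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
                        patch.append((ny, nx))
            if len(patch) >= min_size:          # small patches are treated as sensor noise
                for y, x in patch:
                    keep[y, x] = True
    return keep

if __name__ == "__main__":
    depth = np.full((120, 160), 2.0)                            # background 2 m away
    outliers = np.zeros((120, 160), dtype=bool)
    outliers[40:90, 60:110] = True; depth[40:90, 60:110] = 0.8  # a hand-sized foreground patch
    outliers[5, 5] = True                                       # isolated noisy outlier
    fg_mask = depth_aware_components(outliers, depth)
    masked_depth = np.where(fg_mask, 0.0, depth)                # foreground removed from background input
    print("foreground pixels:", fg_mask.sum())                  # 2500; the isolated outlier is dropped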

Figure 15: Moving user is segmented and reconstructed, independent of background. Left to right: 1) Live RGB. 2) ICP outliers (for initial segmentation prediction). 3) Final composited scene showing foreground shaded differently to background. 4) Composited normal maps.

Overall our technique yields compelling results in stabilizing tracking, and therefore improving reconstruction quality for a static background, even when parts of the scene continually move in front of the camera. Furthermore, it allows a foreground object to be robustly segmented, and potentially reconstructed separately from the background (see Figure 15).

Listing 4  Create touch map: testing if foreground and background vertices overlap.

1: V^g_fg ← raycasted vertex map from foreground volume
2: for each pixel u ∈ O (touch map) in parallel do
3:   cast single ray for u (as Listing 3)
4:   if zero crossing when walking ray then
5:     v^g_bg ← interpolated global zero crossing position
6:     if ||v^g_bg − V^g_fg(u)|| is within a contact threshold then
7:       O(u) ← V^g_fg(u)

Detecting Touch on Arbitrary Surfaces  The pipeline can be further extended to support multi-touch input by observing intersections between foreground and background. We extend the default raycasting of the background volume to output a touch map, as shown in pseudocode Listing 4. Using the raycasted foreground vertex map as input, each GPU thread again walks a ray through the background volume. If a zero crossing is located, the corresponding foreground vertex (along the same ray) is tested (line 6). If foreground and background are within range, the foreground position is output in the touch map. A depth-aware connected component analysis of the touch map suppresses noise and labels fingertip candidates, which are tracked over time. Examples of enabling multi-touch on both planar and non-planar surfaces are shown in Figures 10 and 16.
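The per-pixel test of Listing 4 can be mimicked directly on vertex maps, as in the sketch below: wherever the raycasted foreground surface lies within a small distance of the raycasted background surface along the same pixel ray, the foreground position is written into the touch map. The vertex maps and the contact threshold here are illustrative stand-ins for the raycaster outputs.

import numpy as np

def touch_map(fg_vertices, bg_vertices, contact_thresh=0.01):
    # fg_vertices, bg_vertices: (h, w, 3) global-space vertex maps raycast from the
    # foreground and background volumes with the same camera pose; pixels with no
    # surface are NaN. Output the foreground vertex wherever it lies within
    # contact_thresh of the background surface along the same pixel ray; NaN elsewhere.
    dist = np.linalg.norm(fg_vertices - bg_vertices, axis=-1)   # NaN where either map is empty
    touching = dist < contact_thresh                            # NaN comparisons evaluate to False
    out = np.full_like(fg_vertices, np.nan)
    out[touching] = fg_vertices[touching]
    return out

if __name__ == "__main__":
    h, w = 4, 6
    bg = np.zeros((h, w, 3)); bg[..., 2] = 1.0                  # flat background surface 1 m away
    fg = np.full((h, w, 3), np.nan)
    fg[2, 3] = [0.0, 0.0, 0.995]                                # fingertip 5 mm in front of the surface
    fg[1, 1] = [0.0, 0.0, 0.80]                                 # hand hovering 20 cm away
    touches = touch_map(fg, bg)
    print(np.argwhere(~np.isnan(touches[..., 0])))              # only pixel (2, 3) registers a touch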

Towards Modeling of Dynamic Scenes

The ability to now robustly distinguish moving foreground from background raises interesting questions regarding how best to reconstruct such moving surfaces. The key challenge becomes how to integrate foreground data into a second volume so that correspondence between surface measurements can be ensured over time.

Figure 16: Segmentation, tracking and reconstruction of the user's arm with a moving Kinect. Top left: Arm is first introduced and the reconstruction contains a great deal of noise. Top right: Surface is refined based on separate ICP-based pose prediction. Bottom left: The moving surface is rapidly reconstructed to a much higher quality than the raw Kinect signal. Bottom right: The intersection between foreground and background surfaces is used for multi-touch detection.

As an initial exploration, we have experimented with independently predicting the pose of the foreground object using another instance of ICP. Again dense ICP is performed, but only using the foreground oriented points (from the live depth map and raycasted second volume). In practice we have found that dense ICP converges even if small parts of the foreground are moving non-rigidly. A compelling example is a user's arm (Figure 15), where ICP converges on the rigid parts even if fingers are moving non-rigidly. This offers a coarse method for predicting the pose of the foreground object, relative to the global camera transform.

Using this predicted pose, depth measurements can be aligned and fused into the second volume. A surface prediction of the foreground, which becomes more refined and complete, can be built up over time. Because the foreground surface will likely be moving, we give more weight to new measurements being integrated. One simple extension uses a per-voxel weighting, adapted based on a running average of the derivative of the TSDF (prior to integration). This allows us to adapt the weight of individual surface voxels, giving higher priority to new measurements when the rate of change is high (e.g. fingers or hands), and lower if the TSDF measurements are stable (e.g. the forearm). Figures 16 and 15 show our initial results based on foreground ICP tracking and per-voxel adaptive weighting. Note there is considerably less noise than the raw Kinect data (the user's arms, hand and fingers are clearly identifiable), and that this foreground reconstruction occurs alongside camera tracking and refinement of the background reconstruction.
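One possible realisation of this per-voxel weighting is sketched below: a running average of each voxel's TSDF change drives how strongly the newest measurement is blended in. The blending formula and constants are illustrative choices for exposition, not the system's exact scheme.

import numpy as np

def adaptive_fuse(tsdf, rate, tsdf_new, valid, alpha=0.5, w_stable=20.0, w_moving=2.0):
    # tsdf:     (res,res,res) current fused TSDF values.
    # rate:     (res,res,res) running average of the per-frame TSDF change, updated here.
    # tsdf_new: new per-voxel TSDF samples from the current depth frame.
    # valid:    boolean mask of voxels that received a measurement this frame.
    delta = np.abs(tsdf_new - tsdf)
    rate[valid] = alpha * delta[valid] + (1 - alpha) * rate[valid]

    # Map the rate to an effective history weight: stable voxels (e.g. the forearm) keep a
    # long history, fast-changing voxels (e.g. fingers) are dominated by new measurements.
    w = np.where(rate > 0.1, w_moving, w_stable)
    tsdf[valid] = (tsdf[valid] * w[valid] + tsdf_new[valid]) / (w[valid] + 1.0)
    return tsdf, rate

if __name__ == "__main__":
    res = 8
    tsdf = np.zeros((res, res, res)); rate = np.zeros_like(tsdf)
    valid = np.ones_like(tsdf, dtype=bool)
    moving = np.zeros_like(tsdf); moving[0] = 1.0     # one slab of voxels changes abruptly
    tsdf, rate = adaptive_fuse(tsdf, rate, moving, valid)
    print("moving slab:", tsdf[0, 0, 0], " stable voxel:", tsdf[4, 4, 4])
    # The fast-changing slab blends quickly toward the new value; stable voxels stay put.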

For our physics simulation, we can now represent the entire foreground reconstruction as static particles, allowing collisions between the moving user and the dynamic particles to be modeled per frame (as shown in Figure 9). This approach of reconstructing a moving foreground can also be used purely to track the pose of rigid objects held in the user's hand, enabling tracking independent of camera motion and without markers or prior knowledge of the object. One example is shown in Figure 1 (far right), where an already reconstructed teapot (from Figure 5) is tracked in 6DOF and re-registered with the real physical object.

CONCLUSIONS

We have presented KinectFusion, a real-time 3D reconstruction and interaction system using a moving standard Kinect. Our contributions are threefold. First, we detailed a novel GPU pipeline that achieves 3D tracking, reconstruction, segmentation, rendering, and interaction, all in real-time using only a commodity camera and graphics hardware. Second, we have demonstrated core novel uses for our system: low-cost object scanning and advanced AR and physics-based interactions. Third, we described new methods for segmenting, tracking and reconstructing dynamic users and the background scene simultaneously, enabling multi-touch on any indoor scene with arbitrary surface geometries. We believe this is the first time that a reconstruction system has shown this level of user interaction directly in the scene.

Our hope is to scale the system further, reconstructing larger scenes where more memory efficient representations such as octrees might be needed [29]. Encouraged by our initial results, we also wish to explore more fine-grained methods for tracking and reconstruction of moving deformable surfaces, including the user. Our hope is that KinectFusion will open many new topics for research, both in terms of the underlying technology, as well as the interactive possibilities it enables.

REFERENCES

1. P. J. Besl and N. D. McKay. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell., 14:239–256, February 1992.
2. X. Cao and R. Balakrishnan. Interacting with dynamically defined information spaces using a handheld projector and a pen. In UIST, pages 225–234, 2006.
3. Y. Chen and G. Medioni. Object modeling by registration of multiple range images. Image and Vision Computing (IVC), 10(3):145–155, 1992.
4. Y. Cui et al. 3D shape scanning with a time-of-flight camera. In Computer Vision and Pattern Recognition (CVPR), pages 1173–1180, June 2010.
5. B. Curless and M. Levoy. A volumetric method for building complex models from range images. ACM Trans. Graph., 1996.
6. S. Farsiu et al. Fast and robust multiframe super resolution. IEEE Transactions on Image Processing, 13(10):1327–1344, 2004.
7. J. Frahm et al. Building Rome on a cloudless day. In Proc. Europ. Conf. on Computer Vision (ECCV), 2010.
8. B. Freedman, A. Shpunt, M. Machline, and Y. Arieli. Depth mapping using projected patterns. Patent Application, 10 2008. WO 2008/120217 A2.
9. S. Le Grand. Broad-phase collision detection with CUDA. In GPU Gems 3. Addison-Wesley, 2007.
10. T. Harada. Real-time rigid body simulation on GPUs. In GPU Gems 3. Addison-Wesley Professional, 2007.
11. R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, second edition, 2004.
12. P. Henry et al. RGB-D mapping: Using depth cameras for dense 3D modeling of indoor environments. In Proc. of the Int. Symposium on Experimental Robotics (ISER), 2010.
13. B. Huhle et al. Fusion of range and color images for denoising and resolution enhancement with a non-local filter. Computer Vision and Image Understanding, 114(12):1336–1345, 2010.
14. M. Kazhdan, M. Bolitho, and H. Hoppe. Poisson surface reconstruction. In Proc. of the Eurographics Symposium on Geometry Processing, 2006.
15. G. Klein and D. W. Murray. Parallel tracking and mapping for small AR workspaces. In ISMAR, 2007.
16. M. Levoy et al. The digital Michelangelo Project: 3D scanning of large statues. ACM Trans. Graph., 2000.
17. K. Low. Linear least-squares optimization for point-to-plane ICP surface registration. Technical Report TR04-004, University of North Carolina, 2004.
18. P. Merrell et al. Real-time visibility-based fusion of depth maps. In Proc. of the Int. Conf. on Computer Vision (ICCV), 2007.
19. R. A. Newcombe and A. J. Davison. Live dense reconstruction with a single moving camera. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2010.
20. R. A. Newcombe, S. Lovegrove, and A. J. Davison. Dense tracking and mapping in real-time. In Proc. of the Int. Conf. on Computer Vision (ICCV), 2011.
21. R. A. Newcombe et al. Real-time dense surface mapping and tracking with Kinect. In ISMAR, 2011.
22. S. Osher and R. Fedkiw. Level Set Methods and Dynamic Implicit Surfaces. Springer, 2002.
23. S. Rusinkiewicz, O. Hall-Holt, and M. Levoy. Real-time 3D model acquisition. ACM Trans. Graph., 2002.
24. S. Rusinkiewicz and M. Levoy. Efficient variants of the ICP algorithm. In Proc. of the Int. Conf. on 3-D Digital Imaging and Modeling, 2001.
25. S. Thrun. Robotic mapping: A survey. In Exploring Artificial Intelligence in the New Millennium. 2002.
26. D. Vlasic et al. Dynamic shape capture using multi-view photometric stereo. ACM Trans. Graph., 28(5), 2009.
27. D. Wagner, T. Langlotz, and D. Schmalstieg. Robust and unobtrusive marker tracking on mobile phones. In ISMAR, pages 121–124, 2008.
28. T. Weise, T. Wismer, B. Leibe, and L. Van Gool. In-hand scanning with online loop closure. In IEEE Int. Workshop on 3-D Digital Imaging and Modeling, 2009.
29. K. Zhou, M. Gong, X. Huang, and B. Guo. Data-parallel octrees for surface reconstruction. IEEE Trans. on Visualization and Computer Graphics, 17, 2011.
