3D is here: Point Cloud Library (PCL)

Radu Bogdan Rusu and Steve Cousins

Willow Garage

68 Willow Rd., Menlo Park, CA 94025, USA

{rusu,cousins}@willowgarage.com

Abstract—With the advent of new, low-cost 3D sensing hardware such as the Kinect, and continued efforts in advanced point cloud processing, 3D perception gains more and more importance in robotics, as well as other fields.

In this paper we present one of our most recent initiatives in the area of point cloud perception: PCL (Point Cloud Library – http://pointclouds.org). PCL presents an advanced and extensive approach to the subject of 3D perception, and it is meant to provide support for all the common 3D building blocks that applications need. The library contains state-of-the-art algorithms for: filtering, feature estimation, surface reconstruction, registration, model fitting, and segmentation. PCL is supported by an international community of robotics and perception researchers. We provide a brief walkthrough of PCL, including its algorithmic capabilities and implementation strategies.

I. INTRODUCTION

For robots to work in unstructured environments, they need to be able to perceive the world. Over the past 20 years, we've come a long way, from simple range sensors based on sonar or IR providing a few bytes of information about the world, to ubiquitous cameras, to laser scanners. In the past few years, sensors like the Velodyne spinning LIDAR used in the DARPA Urban Challenge and the tilting laser scanner used on the PR2 have given us high-quality 3D representations of the world: point clouds. Unfortunately, these systems are expensive, costing thousands or tens of thousands of dollars, and therefore out of the reach of many robotics projects.

Very recently, however, 3D sensors have become available that change the game. For example, the Kinect sensor for the Microsoft XBox 360 game system, based on underlying technology from PrimeSense, can be purchased for under $150, and provides real-time point clouds as well as 2D images. As a result, we can expect that most robots in the future will be able to "see" the world in 3D. All that's needed is a mechanism for handling point clouds efficiently, and that's where the open source Point Cloud Library, PCL, comes in. Figure 1 presents the logo of the project.

PCL is a comprehensive, free, BSD-licensed library for n-D point clouds and 3D geometry processing. PCL is fully integrated with ROS, the Robot Operating System (see http://ros.org), and has already been used in a variety of projects in the robotics community.

II. ARCHITECTURE AND IMPLEMENTATION

PCL is a fully templated, modern C++ library for 3D point cloud processing. Written with efficiency and performance in mind on modern CPUs, the underlying data structures in PCL make heavy use of SSE optimizations. Most mathematical operations are implemented with and based on Eigen, an open-source template library for linear algebra [1]. In addition, PCL provides support for OpenMP (see http://openmp.org) and the Intel Threading Building Blocks (TBB) library [2] for multi-core parallelization. The backbone for fast k-nearest neighbor search operations is provided by FLANN (Fast Library for Approximate Nearest Neighbors) [3]. All the modules and algorithms in PCL pass data around using Boost shared pointers (see Figure 2), thus avoiding the need to re-copy data that is already present in the system. As of version 0.6, PCL has been ported to Windows, MacOS, and Linux, and Android ports are in the works.

Fig. 1. The Point Cloud Library logo.

From an algorithmic perspective, PCL is meant to incorporate a multitude of 3D processing algorithms that operate on point cloud data, including: filtering, feature estimation, surface reconstruction, model fitting, segmentation, registration, etc. Each set of algorithms is defined via base classes that attempt to integrate all the common functionality used throughout the entire pipeline, thus keeping the implementations of the actual algorithms compact and clean. The basic interface for such a processing pipeline in PCL is:

• create the processing object (e.g., filter, feature estimator, segmentation);

• use setInputCloud to pass the input point cloud dataset to the processing module;

• set some parameters;

• call compute (or filter, segment, etc.) to get the output.

The sequence of pseudo-code presented in Figure 2 shows a standard feature estimation process in two steps, where a NormalEstimation object is first created and passed an input dataset, and the results together with the original input are then passed together to an FPFH [4] estimation object.

To further simplify development, PCL is split into a series of smaller code libraries that can be compiled separately:

• libpcl_filters: implements data filters such as downsampling, outlier removal, indices extraction, projections, etc.;

Fig. 2. An example of the PCL implementation pipeline for Fast Point Feature Histogram (FPFH) [4] estimation.

978-1-61284-380-3/11/$26.00 © 2011 IEEE

ICRA Communications

• libpcl_features: implements many 3D features such as surface normals and curvatures, boundary point estimation, moment invariants, principal curvatures, PFH and FPFH descriptors, spin images, integral images, NARF descriptors, RIFT, RSD, VFH, SIFT on intensity data, etc.;

• libpcl_io: implements I/O operations such as writing to/reading from PCD (Point Cloud Data) files;

• libpcl_segmentation: implements cluster extraction, model fitting via sample consensus methods for a variety of parametric models (planes, cylinders, spheres, lines, etc.), polygonal prism extraction, etc.;

• libpcl_surface: implements surface reconstruction techniques, meshing, convex hulls, Moving Least Squares, etc.;

• libpcl_registration: implements point cloud registration methods such as ICP, etc.;

• libpcl_keypoints: implements different keypoint extraction methods that can be used as a preprocessing step to decide where to extract feature descriptors;

• libpcl_range_image: implements support for range images created from point cloud datasets.

To ensure the correctness of operations in PCL, the methods and classes in each of the above mentioned libraries contain unit and regression tests. The suite of unit tests is compiled on demand and verified frequently by a dedicated build farm, and the respective authors of a specific component are informed immediately when that component fails to test. This ensures that any changes in the code are tested thoroughly and that any new functionality or modification will not break existing code that depends on PCL. In addition, a large number of examples and tutorials are available either as C++ source files or as step-by-step instructions on the PCL wiki web pages.

III. PCL AND ROS

One of the cornerstones of the PCL design philosophy is represented by Perception Processing Graphs (PPG). The rationale behind PPGs is that most applications that deal with point cloud processing can be formulated as a concrete set of building blocks that are parameterized to achieve different results. For example, there is no algorithmic difference between a wall detection algorithm, a door detection algorithm, or a table detection algorithm – all of them share the same building block, which is, in this case, a constrained planar segmentation algorithm. What changes in the above mentioned cases is a subset of the parameters used to run the algorithm.

With this in mind, and based on our previous experience of designing other 3D processing libraries, and most recently ROS, we decided to make each algorithm from PCL available as a standalone building block that can be easily connected with other blocks, thus creating processing graphs, in the same way that nodes connect together in a ROS ecosystem. Furthermore, because point clouds are extremely large in nature, we wanted to guarantee that there would be no unnecessary data copying or serialization/deserialization for critical applications that can afford to run in the same process. For this we created nodelets, which are dynamically loadable plugins that look and operate like ROS nodes, but in a single process (as single or multiple threads).

A concrete nodelet PPG example for the problem of identifying a set of point clusters supported by horizontal planar areas is shown in Figure 3.

Fig. 3. A ROS nodelet graph for the problem of object clustering on planar surfaces.

IV. VISUALIZATION

PCL comes with its own visualization library, based on VTK [5]. VTK offers great multi-platform support for rendering 3D point cloud and surface data, including visualization support for tensors, texturing, and volumetric methods. The PCL Visualization library is meant to integrate PCL with VTK, by providing a comprehensive visualization layer for n-D point cloud structures. Its purpose is to be able to quickly prototype and visualize the results of algorithms operating on such hyper-dimensional data. As of version 0.2, the visualization library offers:

• methods for rendering and setting visual properties (colors, point sizes, opacity, etc.) for any n-D point cloud dataset;

• methods for drawing basic 3D shapes on screen (e.g., cylinders, spheres, lines, polygons, etc.) either from sets of points or from parametric equations;


• a histogram visualization module (PCLHistogramVisualizer) for 2D plots;

• a multitude of geometry and color handlers. Here, the user can specify what dimensions are to be used for the point positions in a 3D Cartesian space (see Figure 4), or what colors should be used to render the points (see Figure 5);

• RangeImage visualization modules (see Figure 6).

The handler interactors are modules that describe how colors and the 3D geometry at each point in space are computed, displayed on screen, and how the user interacts with the data. They are designed with simplicity in mind, and are easily extendable. A code snippet that produces results similar to the ones shown in Figure 4 is presented in Algorithm 1.

Algorithm 1 Code example for the results shown in Figure 4.

using namespace pcl_visualization;
PCLVisualizer p ("Test");
PointCloudColorHandlerRandom handler (cloud);
p.addPointCloud (cloud, handler, "cloud_random");
p.spin ();
p.removePointCloud ("cloud_random");
PointCloudGeometryHandlerSurfaceNormal handler2 (cloud);
p.addPointCloud (cloud, handler2, "cloud_random");
p.spin ();

The library also offers a few general purpose tools for visualizing PCD files, as well as for visualizing streams of data from a sensor in real time in ROS.

Fig. 4. An example of two different geometry handlers applied to the same dataset. Left: the 3D Cartesian space represents XYZ data, with the arrows representing surface normals estimated at each point in the cloud; right: the Cartesian space represents the 3 dimensions of the normal vector at each point for the same dataset.

Fig. 5. An example of two different color handlers applied to the same dataset. Left: the colors represent the distance from the acquisition viewpoint; right: the colors represent the RGB texture acquired at each point.

Fig. 6. An example of a RangeImage display using PCL Visualization (bottom) for a given 3D point cloud dataset (top).

V. USAGE EXAMPLES

In this section we present two code snippets that exhibit the flexibility and simplicity of using PCL for filtering and segmentation operations, followed by three application examples that make use of PCL for solving the perception problem: i) navigation and mapping, ii) object recognition, and iii) manipulation and grasping.

Filtering constitutes one of the most important operations that any raw point cloud dataset usually goes through before any higher level operations are applied to it. Algorithm 2 and Figure 7 present a code snippet and the results obtained after running it on the point cloud dataset from the left part of the figure. The filter is based on estimating a set of statistics for the points in a given neighborhood (k = 50 here), and using them to select all points within 1.0·σ distance from the mean distance μ as inliers (see [6] for more information).

Algorithm 2 Code example for the results shown in Figure 7.

pcl::StatisticalOutlierRemoval f;
f.setInputCloud (input_cloud);
f.setMeanK (50);
f.setStddevMulThresh (1.0);
f.filter (output_cloud);

Fig. 7. Left: a raw point cloud acquired using a tilting laser scanner; middle: the resultant filtered point cloud (i.e., inliers) after a StatisticalOutlierRemoval operator was applied; right: the rejected points (i.e., outliers).

The second example constitutes a segmentation operation for planar surfaces, using a RANSAC [7] model, as shown in Algorithm 3. The input and output results are shown in Figure 8. In this example, we are using a robust RANSAC estimator to randomly select 3 non-collinear points and calculate the best possible model in terms of the overall number of inliers. The inlier thresholding criterion is set to a maximum distance of 1 cm of each point to the plane model.

Algorithm 3 Code example for the results shown in Figure 8.

pcl::SACSegmentation s;
s.setInputCloud (input_cloud);
s.setModelType (pcl::SACMODEL_PLANE);
s.setMethodType (pcl::SAC_RANSAC);
s.setDistanceThreshold (0.01);
s.segment (output_cloud);

Fig. 8. Left: the input point cloud; right: the segmented plane, represented by the inliers of the model, marked in purple.

An example of a more complex navigation and mapping application is shown in the left part of Figure 9, where the PR2 robot had to autonomously identify doors and their handles [8], in order to explore rooms and find power sockets [9]. Here, the modules used included constrained planar segmentation, region growing methods, convex hull estimation, and polygonal prism extraction. The results of these methods were then used to extract certain statistics about the shape and size of the door and the handle, in order to uniquely identify them and to reject false positives.

The right part of Figure 9 shows an experiment with real-time object identification from complex 3D scenes [10]. Here, a set of complex 3D keypoints and feature descriptors are used in a segmentation and registration framework that aims to identify previously seen objects in the world.

Figure 10 presents a grasping and manipulation application [11], where objects are first segmented from horizontal planar tables, clustered into individual units, and a registration operation is applied that attaches semantic information to each cluster found.

VI. COMMUNITY AND FUTURE PLANS

PCL is a large collaborative effort, and it would not exist without the contributions of several people. Though the community is larger, and we accept patches and improvements from many users, we would like to acknowledge the following institutions for their core contributions to the development of the library: AIST, UC Berkeley, University of Bonn, University of British Columbia, ETH Zurich, University of Freiburg, Intel Research Seattle, LAAS/CNRS, MIT, University of Osnabrück, Stanford University, University of Tokyo, TUM, Vienna University of Technology, and Washington University in St. Louis.

Fig. 9. Left: example of door and handle identification [8] during a navigation and mapping experiment [9] with the PR2 robot. Right: object recognition experiments (chair, person sitting down, cart) using Normal Aligned Radial Features (NARF) [10] with the PR2 robot.

Fig. 10. Experiments with PCL in grasping applications [11], from left to right: a visualization of the collision environment, including points associated with unrecognized objects (blue) and obstacles with semantic information (green); detail showing 3D point cloud data (grey) with 3D meshes superimposed for recognized objects; successful grasping, showing the bounding box associated with an unrecognized object (brown) attached to the gripper.

Our current plan for PCL is to improve the documentation, unit tests, and tutorials, and release a 1.0 version. We will continue to add functionality and make the system available on other platforms such as Android, and we plan to add support for GPUs using CUDA and OpenCL.

We welcome any new contributors to the project, and we hope to emphasize the importance of code sharing for 3D processing, which is becoming crucial for advancing the robotics field.

REFERENCES

[1] G. Guennebaud, B. Jacob, et al., "Eigen v3," http://eigen.tuxfamily.org, 2010.

[2] J. Reinders, Intel Threading Building Blocks: Outfitting C++ for Multi-core Processor Parallelism. O'Reilly, 2007.

[3] M. Muja and D. G. Lowe, "Fast approximate nearest neighbors with automatic algorithm configuration," in International Conference on Computer Vision Theory and Applications (VISAPP'09). INSTICC Press, 2009, pp. 331–340.

[4] R. B. Rusu, N. Blodow, and M. Beetz, "Fast Point Feature Histograms (FPFH) for 3D Registration," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, May 12–17 2009.

[5] W. Schroeder, K. Martin, and B. Lorensen, Visualization Toolkit: An Object-Oriented Approach to 3D Graphics, 4th Edition. Kitware, December 2006.

[6] R. B. Rusu, Z. C. Marton, N. Blodow, M. Dolha, and M. Beetz, "Towards 3D Point Cloud Based Object Maps for Household Environments," Robotics and Autonomous Systems Journal (Special Issue on Semantic Knowledge), 2008.

[7] M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381–395, June 1981.

[8] R. B. Rusu, W. Meeussen, S. Chitta, and M. Beetz, "Laser-based Perception for Door and Handle Identification," in International Conference on Advanced Robotics (ICAR), June 22–26 2009.

[9] W. Meeussen, M. Wise, S. Glaser, S. Chitta, C. McGann, P. Mihelich, E. Marder-Eppstein, M. Muja, V. Eruhimov, T. Foote, J. Hsu, R. Rusu, B. Marthi, G. Bradski, K. Konolige, B. Gerkey, and E. Berger, "Autonomous Door Opening and Plugging In with a Personal Robot," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Anchorage, Alaska, May 3–8 2010.

[10] B. Steder, R. B. Rusu, K. Konolige, and W. Burgard, "Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries," in Submitted to the IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, May 9–13 2011.

[11] M. Ciocarlie, K. Hsiao, E. G. Jones, S. Chitta, R. B. Rusu, and I. A. Sucan, "Towards reliable grasping and manipulation in household environments," New Delhi, India, 12/2010.

