
how to generate 3D point cloud

asked 2013-03-23 11:48:02 -0600 by Rohitdewani

updated 2014-01-28 17:15:51 -0600 by ngrennan


I need some help with 3D modeling. I have an Asus Xtion Pro Live. I have installed the OpenNI drivers on ROS Groovy on Ubuntu 12.04, and I can see the depth-registered point cloud in RViz.

Here, we have a rotating circular table on which there are 5 pots. A robotic arm near one of the pots will pluck the fruit from it. This pot sits on a DC motor, so it can rotate a full 360 degrees, which means I should be able to create a 3D model of the current state of the plant.

How do I create a 3D model of the pot (containing a plant with fruit, e.g. a strawberry) and then feed it to MoveIt! or RViz? I want to perform motion planning and simulation on the plant and the robotic arm for collision avoidance, with the end goal that the arm ends up near the fruit.

I also have a webcam mounted on top of the robotic arm.

I have tried installing the Object Recognition Kitchen, but it will not install. I have tried various other packages, such as ethzasl_icp_mapper, but I am not sure how to use them with a rotating pot. Which package should I use, and how do I use it? I have been trying all the available packages for 15 days without making progress, which is why this question is so long. Please let me know how I should proceed.

P.S.: I also have a Hokuyo laser range finder that I could use.


1 Answer


answered 2013-03-24 02:48:31 -0600

With the ASUS you can get a registered point cloud that contains a partial view of the target object. To create a full 3D model of the pot, you just need to collect several clouds from different viewpoints and align them together. The key problem is therefore obtaining the transformation between adjacent clouds. There are several ways to get this transformation. The easiest is to use a chessboard of the kind usually used in camera calibration: stick the chessboard on the table, then rotate the table (or the sensor) to capture different clouds. The transformation between the chessboard and the sensor can be obtained by camera calibration, and since the chessboard stays still, you can derive the transformation between adjacent sensor positions from it. With this method, however, the pot and the table must remain still relative to each other. Hope that helps.
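The alignment step described above can be sketched with homogeneous transforms. This is a minimal numpy illustration, not part of the original answer; the board poses below are made-up values standing in for what calibration (e.g. solvePnP against the chessboard) would return.

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical chessboard poses in the camera frame at two table positions
# (in practice these come from calibration against the chessboard).
T_cam_board_a = make_pose(rot_z(0.0), [0.0, 0.0, 1.0])
T_cam_board_b = make_pose(rot_z(np.pi / 6), [0.1, -0.05, 1.0])

# Relative motion between the two sensor viewpoints: because the board is
# fixed, it follows from the two board poses alone.
T_b_from_a = T_cam_board_b @ np.linalg.inv(T_cam_board_a)

# A physical point fixed on the board, as seen from both viewpoints.
p_board = np.array([0.2, 0.1, 0.0, 1.0])
p_cam_a = T_cam_board_a @ p_board
p_cam_b = T_cam_board_b @ p_board

# Re-expressing the first view's point with the relative transform
# aligns it with the second view -- the same idea merges whole clouds.
print(np.allclose(T_b_from_a @ p_cam_a, p_cam_b))  # True
```

Applying `T_b_from_a` to every point of the first cloud brings it into the second cloud's frame; chaining such transforms over a full table rotation stitches all partial views into one model.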




Thank you for your input. I will capture point clouds while rotating the table. After that, though, I did not understand what you meant by "The transformation between the chessboard and the sensor can be obtained by camera calibration."

How do I do this camera calibration?

Rohitdewani (2013-03-26 13:54:29 -0600)

Well, camera calibration is a basic operation to obtain the intrinsic and extrinsic parameters of a camera. Generally a chessboard is used for the calibration. You can refer to this page:

yangyangcv (2013-03-28 18:08:36 -0600)
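To illustrate what those intrinsic and extrinsic parameters are, here is a pinhole-camera sketch in numpy. The matrix values are illustrative assumptions (typical of a VGA depth camera), not from the thread; in practice they are what a calibration routine recovers.

```python
import numpy as np

# Intrinsics K: focal lengths and principal point (illustrative values).
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

# Extrinsics: pose of the chessboard in the camera frame (rotation R,
# translation t) -- this is the chessboard-to-sensor transform that
# calibration recovers.
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])  # board one metre in front of the camera

def project(p_board):
    """Project a 3D chessboard corner into pixel coordinates."""
    p_cam = R @ p_board + t          # extrinsics: board frame -> camera frame
    uvw = K @ p_cam                  # intrinsics: camera frame -> image plane
    return uvw[:2] / uvw[2]          # perspective divide

# A corner at the board origin lands on the principal point.
print(project(np.array([0.0, 0.0, 0.0])))  # [319.5 239.5]
```

Calibration works this process in reverse: given the known corner positions on the board and their detected pixel locations, it solves for K, R, and t. The (R, t) part per table position is exactly the transform needed to align the clouds.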


How do I obtain this transformation between the chessboard and the sensor, and then apply it to the rotating pots? Please explain in a little more detail. Thanks.

Rohitdewani (2013-04-03 17:04:39 -0600)
