
Concatenating 20 pointclouds in one large cloud - 3d scanner

asked 2019-01-12 15:15:19 -0500 by Wrt

Hi,

I have about 20 pictures of an object, and there is a laser line in each one. The line starts at the beginning of the object and moves about 0.5 cm further with every picture. I want to create a 3D point cloud of the object from these pictures. I can generate a single point cloud from one picture using sensor_msgs::PointCloud2 cloudOfPoints;, and I can do that for every picture, but how can I concatenate them to get the 3D object? I looked for similar problems, but they didn't help me. I tried solutions like pcl::concatenatePointCloud, but it combines the clouds in the same place, which has no effect. I need to take their positions in the 3D world into account.

Can someone help me with that?


1 Answer


answered 2019-01-13 04:27:48 -0500

You were on the right track with pcl::concatenatePointCloud, but that is only half the answer. You also need the pcl::transformPointCloud function; there is a good tutorial for it here.

The important question you'll need to answer is how you find the exact transform from one slice of points to the next. For example, if you're rotating the object being scanned on a platform, you could use an encoder to measure the angular offset and use that to generate the correct transformation matrix. If you could describe your system in a bit more detail, we could give you more specific advice about how to do this.
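
For illustration only, here is a rough sketch of how the two functions could fit together, assuming each slice is already a pcl::PointCloud<pcl::PointXYZ> and you have some way (TF, encoder, known step size, etc.) of obtaining an Eigen::Affine3f pose for each slice. The function and variable names are just placeholders:

#include <vector>
#include <Eigen/Geometry>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/transforms.h>

// Transform every slice into a common frame and append it to one big cloud.
pcl::PointCloud<pcl::PointXYZ> mergeSlices(
    const std::vector<pcl::PointCloud<pcl::PointXYZ> >& slices,
    const std::vector<Eigen::Affine3f>& slice_poses)
{
  pcl::PointCloud<pcl::PointXYZ> merged;
  for (std::size_t i = 0; i < slices.size(); ++i)
  {
    pcl::PointCloud<pcl::PointXYZ> transformed;
    // Move the slice from the sensor frame into the common frame.
    pcl::transformPointCloud(slices[i], transformed, slice_poses[i]);
    // operator+= appends the points (same idea as pcl::concatenatePointCloud).
    merged += transformed;
  }
  return merged;
}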

Hope this helps.


Comments

Ah, sorry, I forgot to describe my system: I have a camera and a laser mounted on one strip. This assembly is mounted on a robotic arm, and I only move it along one axis to scan the object, so the rotation is the same in all pictures; only the translation changes.

Wrt ( 2019-01-13 05:45:24 -0500 )

I was thinking about a matrix transformation but wasn't sure whether it was necessary. I thought the different position of the laser in the pictures would be enough, but now I know it isn't. I will check the tutorial. Thanks for your answer :)

Wrt ( 2019-01-13 05:45:26 -0500 )
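
As an aside, with a translation-only motion like that, the per-slice pose used in the sketch above could be a pure translation built from the step between pictures. The axis here is an assumption; the 0.5 cm step comes from the question and should be replaced with your measured value:

#include <Eigen/Geometry>

// Pose of the sensor for a given picture, assuming it moves 0.5 cm along X per picture.
Eigen::Affine3f sliceTransform(std::size_t slice_index)
{
  const float step = 0.005f;  // 0.5 cm between consecutive pictures
  Eigen::Affine3f pose = Eigen::Affine3f::Identity();
  pose.translation() = Eigen::Vector3f(step * static_cast<float>(slice_index), 0.0f, 0.0f);
  return pose;
}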

Okay. You should be able to get the position of the sensor on the end of the robot arm (probably from the TF system). This will give you the transformation you need.

PeteBlackerThe3rd ( 2019-01-13 07:15:06 -0500 )
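
A rough sketch of that idea, assuming a ROS 1 tf::TransformListener and a placeholder fixed frame name ("base_link"); the sensor frame is taken from the cloud's own header:

#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <pcl_ros/transforms.h>
#include <sensor_msgs/PointCloud2.h>

// Look up the sensor pose at the time the slice was captured and express the
// slice in the fixed "base_link" frame, so the slices can then be concatenated.
sensor_msgs::PointCloud2 sliceInBaseFrame(const sensor_msgs::PointCloud2& slice,
                                          tf::TransformListener& listener)
{
  sensor_msgs::PointCloud2 slice_in_base;
  listener.waitForTransform("base_link", slice.header.frame_id,
                            slice.header.stamp, ros::Duration(0.5));
  pcl_ros::transformPointCloud("base_link", slice, slice_in_base, listener);
  return slice_in_base;
}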

It works, thanks!

Wrt ( 2019-01-14 09:03:56 -0500 )
