Merging point clouds

asked 2020-03-08 06:00:23 -0600 by noodle_dial

I have a setup with 2 or more Velodyne sensors publishing their point clouds to 'velodyne_points1', 'velodyne_points2' and so on. When I publish all point clouds to the same topic, or use topic_tools relay to redirect them to the same topic, the point clouds overwrite each other, so they appear to flicker randomly when viewed in RViz. This makes it hard to run packages like Octomap that only subscribe to one topic. Is there any way to merge the outputs of the sensors into one topic in real time?


3 Answers


answered 2020-08-28 10:51:41 -0600 by dtyugin

I recommend Autoware's point cloud merging node (link).

It does what you want. Make sure you have a connected tree of frames. You should configure both of your Velodyne driver nodes to provide separate, unique frame_ids (e.g. velodyne1 and velodyne2). You should also provide the transformation between the velodyne1 and velodyne2 sensor frames. Usually this is done via a base_link frame (some point on your robot from which you can measure the relative 3D position of each sensor) and two static transforms, one from velodyne1 to base_link and one from velodyne2 to base_link. The target output frame for your merged cloud should then be base_link.

Merging points properly can only be done when you know the relative positions of the lidars, because the point coordinates in each cloud are given relative to that lidar's physical center (its frame_id). You therefore have to transform the point cloud coordinates into a common frame before merging them (a small Python sketch of this step follows the launch snippet below).

Here is an example of two lidars separated by 4 meters along the x axis, with base_link midway between them:

<node pkg="tf2_ros" type="static_transform_publisher" name="velodyne1TF" args="2.0 0.0 0.0 0.0 0.0 0.0 base_link velodyne1" />
<node pkg="tf2_ros" type="static_transform_publisher" name="velodyne2TF" args="-2.0 0.0 0.0 0.0 0.0 0.0 base_link velodyne2" />
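
For illustration only, here is a minimal Python (rospy) sketch of the transform step itself, assuming the TF tree above is being published and that one cloud arrives on a hypothetical topic /velodyne_points1. It uses tf2_sensor_msgs.do_transform_cloud to move each cloud from its sensor frame into base_link before any merging:

#!/usr/bin/env python
# Minimal sketch (not from the original answer): transform one incoming cloud into base_link.
# The topic names are assumptions; adapt them to your setup.
import rospy
import tf2_ros
from sensor_msgs.msg import PointCloud2
from tf2_sensor_msgs.tf2_sensor_msgs import do_transform_cloud

rospy.init_node('cloud_to_base_link')
tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)
pub = rospy.Publisher('/velodyne_points1_base_link', PointCloud2, queue_size=1)

def callback(cloud):
    try:
        # Look up the sensor frame -> base_link transform at the cloud's timestamp.
        t = tf_buffer.lookup_transform('base_link', cloud.header.frame_id,
                                       cloud.header.stamp, rospy.Duration(0.1))
    except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
            tf2_ros.ExtrapolationException):
        return
    pub.publish(do_transform_cloud(cloud, t))

rospy.Subscriber('/velodyne_points1', PointCloud2, callback, queue_size=1)
rospy.spin()

The Autoware node linked above does the equivalent transformation (plus concatenation) in C++, so this is just to show what happens under the hood.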

Comments

The code from autoware is great!

zkytony (2022-09-11 16:35:25 -0600)

How do I get the transform values? Should I calibrate, or physically measure them by trial and error?

Vimal Raj A (2023-02-16 05:32:25 -0600)

answered 2020-08-31 02:53:11 -0600 by TomSon

Hi,

It seems you have the same problem as me :). Check out my package and give it a try:

https://github.com/tomlogan501/pointc... (also available as Melodic binaries)

You may encounter performance issues if your point clouds are too big; I recommend transforming just one cloud into the frame of the other.

I will also try the solution suggested by @dtyugin, but it depends on the whole Autoware codebase :(


Comments

This particular node can easily be taken out of Autoware without building the whole project, since it does not depend on any Autoware-specific code.

dtyugin (2020-08-31 13:01:08 -0600)

By editing the CMakeLists.txt directly and removing autoware_health_checker? Kind of tricky, but feasible. It seems to depend on velodyne_pointcloud/point_types.h, which comes from an old version of velodyne_pointcloud, no?

TomSon (2020-09-01 01:31:05 -0600)

answered 2020-08-28 06:49:49 -0600 by Deniz (updated 2020-08-28 06:50:37 -0600)

You would have to build your own middle-man node for this:

  1. Your separate lidar nodes each publish to a different topic.
  2. Your middle node subscribes to all the topics the lidars publish to.
  3. Your middle node does the point cloud fusion and publishes the combined result (use the TF2 library); see the sketch at the end of this answer.
  4. RViz subscribes to your middle node's output topic.

Be aware though: proper synchronization is needed, and possibly time/space transforms, to make the point clouds overlap correctly:

  • Since the Velodynes produce data asynchronously, moving objects will lag/stutter/blur because the point clouds are not in sync.
  • Since the lidars cannot physically be in the same place, you need to "move" one point cloud to the proper location relative to the other.

Bear in mind that all of this can be demanding, since point cloud processing can be very resource intensive.
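
As a rough illustration of such a middle-man node, here is a hedged Python (rospy) sketch. The topic names (/velodyne_points1, /velodyne_points2, /points_merged) and the base_link target frame are assumptions, not part of the answer; it approximately time-synchronizes the two clouds, transforms both into the common frame via TF2, and concatenates the points:

#!/usr/bin/env python
# Sketch of a "middle-man" fusion node. All topic and frame names are assumptions.
import rospy
import tf2_ros
import message_filters
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2
from tf2_sensor_msgs.tf2_sensor_msgs import do_transform_cloud

TARGET_FRAME = 'base_link'

rospy.init_node('cloud_merger')
tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)
pub = rospy.Publisher('/points_merged', PointCloud2, queue_size=1)

def to_target(cloud):
    # Transform a cloud into TARGET_FRAME using the latest available TF.
    t = tf_buffer.lookup_transform(TARGET_FRAME, cloud.header.frame_id,
                                   rospy.Time(0), rospy.Duration(0.1))
    return do_transform_cloud(cloud, t)

def callback(cloud1, cloud2):
    try:
        c1, c2 = to_target(cloud1), to_target(cloud2)
    except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
            tf2_ros.ExtrapolationException):
        return
    # Concatenate the xyz points of both transformed clouds into one message.
    points = list(point_cloud2.read_points(c1, field_names=('x', 'y', 'z'))) + \
             list(point_cloud2.read_points(c2, field_names=('x', 'y', 'z')))
    header = cloud1.header
    header.frame_id = TARGET_FRAME
    pub.publish(point_cloud2.create_cloud_xyz32(header, points))

# Approximate time synchronization of the two asynchronous lidars.
sub1 = message_filters.Subscriber('/velodyne_points1', PointCloud2)
sub2 = message_filters.Subscriber('/velodyne_points2', PointCloud2)
sync = message_filters.ApproximateTimeSynchronizer([sub1, sub2],
                                                   queue_size=5, slop=0.1)
sync.registerCallback(callback)
rospy.spin()

Note that this sketch keeps only the x/y/z fields and does everything in Python; for intensity/ring fields and better performance, a C++ node (such as the Autoware one mentioned in the other answer) is the more practical choice.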

