
RGB-6D-SLAM performance on a robot

asked 2011-04-20 08:42:23 -0600 by Mac

updated 2016-10-24 08:59:23 -0600 by ngrennan

Hello folks,

I have a Kinect mounted on a Create (not a turtlebot, but you get the idea), and I was hoping to build some 3D maps of my lab with that platform and the RGBD-6D-SLAM package. Sadly (even with the newly-released version) it doesn't work very well: even when driving (slowly!) a loop of only six or so meters, I get serious object doubling and (later) a catastrophic matching failure that throws one half of the map entirely out-of-plane; the result isn't even topologically correct.

My question is this: what's the correct next step? Fiddling with RGBD-SLAM parameters? On a loop this tiny, I predict that gmapping + pointcloud_to_laserscan would have no trouble; it seems to me that graph techniques with full 3D and RGB should be able to do better.
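For anyone unfamiliar with the pointcloud_to_laserscan idea: it collapses the Kinect's 3D cloud into a planar scan that gmapping can consume. A rough sketch of that reduction (my own illustration, not the actual package's code — it assumes the ROS convention of x forward, y left, z up, and keeps the nearest return per bearing bin within a height band):

```python
import math

def cloud_to_scan(points, angle_min=-0.5, angle_max=0.5, num_bins=64,
                  z_min=-0.1, z_max=0.1):
    """Collapse 3D points into a planar scan: for each bearing bin,
    keep the nearest range among points inside the height band."""
    ranges = [float('inf')] * num_bins
    bin_width = (angle_max - angle_min) / num_bins
    for x, y, z in points:
        if not (z_min <= z <= z_max) or x <= 0.0:
            continue  # outside the height slice, or behind the sensor
        angle = math.atan2(y, x)
        if not (angle_min <= angle < angle_max):
            continue  # outside the scan's angular field of view
        i = int((angle - angle_min) / bin_width)
        ranges[i] = min(ranges[i], math.hypot(x, y))
    return ranges
```

The defaults (field of view, bin count, height band) are placeholder values, not the real package's parameters.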

I'm happy to provide the offending bagfile to anybody who would like to try this for themselves.



I figured I'd put the bagfiles up:

  1. First bag (435MB)
  2. Second bag (1GB) (Higher frame rate)
  3. Third bag (1GB) (Higher frame rate, different camera angle)

I'm curious to see what people think; it would be really handy to get this working. @felix, any thoughts?




Hi Mac, I'll have a look into it. Meanwhile, please try enabling USE_GICP_CODE at the top of the CMakeLists.txt.
Felix Endres ( 2011-04-26 05:28:25 -0600 )
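If USE_GICP_CODE is the usual CMake-style switch, enabling it is a one-line change near the top of the CMakeLists.txt (a sketch only; check the exact spelling in your checkout):

```cmake
# Near the top of rgbdslam's CMakeLists.txt; the exact form may differ.
SET(USE_GICP_CODE 1)   # build and use the GICP-based alignment code path
```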
Hi Felix. Thanks for taking a look. I've tried it with USE_GICP_CODE turned on; no improvement. (This was several days ago; IIRC, the behavior changed, but didn't get better.)
Mac ( 2011-04-26 05:32:30 -0600 )

1 Answer


answered 2011-05-02 09:14:02 -0600

I tried your files today. Performance is really bad. The first and second contain a difficult scenario: most features aren't visible for long, and sometimes there are few salient features. At other times there are objects occurring multiple times, which leads to undesired matches and transformations. The third set is similar but somewhat more benign.

I managed to get reasonable results by changing some settings (and fixing a bug or two) that were not in global_definitions.cpp yet. I will commit the changes to the freiburg_tools repository tomorrow. In addition to the new parameters and bugfixes, I suggest you increase the number of ...min/max_features... and decrease the values for ...min_translation... and ...min_rotation... However, I haven't tried running with the default parameters after the new changes, so it may well work with the defaults.
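The parameter names above are elided; purely as an illustration (these names are hypothetical, not rgbdslam's actual ones), such values could be set from a launch file:

```xml
<!-- Hypothetical parameter names for illustration only; consult
     rgbdslam's actual parameter list before copying any of these. -->
<launch>
  <node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" output="screen">
    <param name="config/max_features" value="600"/>
    <param name="config/min_translation" value="0.05"/>
    <param name="config/min_rotation" value="2.5"/>
  </node>
</launch>
```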

One thing I noticed is that your bag files contain badly synced images and point clouds. Normally the timestamps of the depth image and point cloud should be identical, and the timestamp of the corresponding monochrome image shouldn't be more than 1/30 of a second away. Otherwise, features can be assigned bad 3D locations when they are projected into the cloud. This might be a problem of the bag file recording, but to make sure, I introduced a parameter that can be set to true to block such asynchronous frames. The visualization also now overlays the monochrome image and the depth image in different colors, so you can check whether this is a problem.
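The frame-blocking check described here can be sketched roughly as follows (my own illustration of the idea, not the actual rgbdslam code; the 1/30 s tolerance is taken from the comment above):

```python
MAX_SKEW = 1.0 / 30.0  # one frame period at 30 Hz, in seconds

def frame_is_synced(depth_stamp, cloud_stamp, mono_stamp, tol=MAX_SKEW):
    """Accept a frame only if the depth image and point cloud share a
    timestamp and the monochrome image is within one frame period."""
    if depth_stamp != cloud_stamp:
        return False  # depth and cloud must come from the same capture
    return abs(mono_stamp - depth_stamp) <= tol
```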



Thanks for taking a look! Have these changes hit the repository yet? My just-updated checkout shows a last-updated date of April 26th.
Mac ( 2011-05-05 09:00:59 -0600 )
As mentioned above, I committed the changes to the freiburg tools repository. I will only update the repository for major changes. Sorry for the inconvenience. There is still a huge issue with spurious feature matches that is hard to address. Let me know whether this helps you so far.
Felix Endres ( 2011-05-06 01:22:17 -0600 )
Ah, my mistake. Your link suggests checking out from openslam; I missed the distinction. I'll give this a try (possibly after ICRA) and let you know.
Mac ( 2011-05-06 01:55:12 -0600 )
Can you explain how you used these bags to generate the map? I am trying to debug my rgbdslam -> octomap setup.
ngidgas ( 2011-05-10 20:19:17 -0600 )
Make sure openni is NOT running, then run "rosbag play third.bag" in one terminal and "rosrun rgbdslam rgbdslam" in another (press space to start processing), and then start the experimental octomap server, as described elsewhere in this forum.
Felix Endres ( 2011-05-11 03:48:24 -0600 )
Felix, your theory about synchronization was spot-on: openni_camera_unstable plus some careful bagging produced much better results. There are still issues with frame-to-frame tracking, but hey, progress!
Mac ( 2011-05-31 09:11:41 -0600 )
Cool. Could you elaborate on the "careful bagging"? What do you do differently now?
Felix Endres ( 2011-05-31 21:48:42 -0600 )
Rather than having the (underpowered) netbook on my robot generate the point clouds, I used openni_camera_unstable to bag the rgb and depth images and then generated the point clouds in an offline pass. That greatly improved the synchronization; I only drop a few frames due to sync errors now.
Mac ( 2011-06-02 05:35:49 -0600 )
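Generating the clouds offline from bagged rgb and depth images amounts to back-projecting each depth pixel through the pinhole camera model. A minimal sketch of that step (the default intrinsics are placeholder values, not a calibrated Kinect's):

```python
def depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a depth image (list of rows, metres; 0 = no return)
    into 3D camera-frame points using a pinhole model."""
    points = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d <= 0.0:
                continue  # invalid / missing depth reading
            x = (u - cx) * d / fx
            y = (v - cy) * d / fy
            points.append((x, y, d))
    return points
```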
