
How to combine HectorSLAM and RGB-D camera data to achieve 3D mapping?

asked 2020-02-03 05:18:20 -0500 by SamH

updated 2020-02-03 08:34:23 -0500

I represent a team of engineers from Lancaster University. We are attempting to combine 2D LIDAR data (preferably using HectorSLAM) and RGB-D camera data (as done by Technische Universität Darmstadt: https://youtu.be/olGZv05RLHI) for an autonomous UAV mapping application. We are using an RPLIDAR A2 scanner and a RealSense Depth Camera D415. The ROS distro is Kinetic. How could we achieve this, and could it be performed using ROS on an Nvidia Jetson Nano? Can we run two SLAM algorithms concurrently (e.g. HectorSLAM and ORB-SLAM), or do we need to combine the sensor data before applying SLAM? Is there any open-source code available to achieve this?

Many Thanks.


1 Answer


answered 2020-02-03 11:33:14 -0500 by stevemacenski

There are many ways to approach this problem. I'll outline the simplest one, but your suggestion

to combine the sensor data before applying SLAM

will probably give you a better result. That would be a tightly-coupled approach, if that's terminology you're familiar with.

The only approach I know of that's fully open-source and relatively plug-and-play is as follows:

1) Use the 2D laser scanner to build a map. This can be done with Hector like you mention, but also SLAM Toolbox, Karto, or GMapping (see the launch sketch after this list).

2) Look at Octomap. Use the positioning provided by the SLAM algorithm and odometry to project the points from your depth sensor into the global frame provided by the 2D SLAM (again, see the sketch below).

3) Rejoice!
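
For step 1, here's a minimal roslaunch sketch for hector_mapping with the RPLIDAR. The frame names and the scan topic are assumptions based on a typical setup (the rplidar_ros driver publishes sensor_msgs/LaserScan on /scan by default); adjust them to your TF tree:

    <launch>
      <node pkg="hector_mapping" type="hector_mapping" name="hector_mapping">
        <!-- frame names: adjust to match your TF tree -->
        <param name="map_frame"  value="map" />
        <param name="base_frame" value="base_link" />
        <!-- Hector can run without wheel/IMU odometry; point odom_frame at the base frame -->
        <param name="odom_frame" value="base_link" />
        <!-- laser scan topic from the RPLIDAR driver -->
        <param name="scan_topic" value="scan" />
        <param name="map_resolution" value="0.05" />
        <!-- publish the map->odom transform so downstream nodes can localize in the map frame -->
        <param name="pub_map_odom_transform" value="true" />
      </node>
    </launch>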
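
For step 2, octomap_server does the projection for you: it subscribes to a sensor_msgs/PointCloud2 on cloud_in and uses TF to transform each cloud into frame_id before inserting it into the octree. A minimal sketch, assuming the RealSense point cloud comes out on /camera/depth/color/points (the actual topic name depends on how you configure the camera driver):

    <launch>
      <node pkg="octomap_server" type="octomap_server_node" name="octomap_server">
        <!-- global frame produced by the 2D SLAM; clouds are transformed into it via TF -->
        <param name="frame_id" value="map" />
        <param name="resolution" value="0.05" />
        <!-- depth camera point cloud; topic name is an assumption, check your camera launch -->
        <remap from="cloud_in" to="/camera/depth/color/points" />
      </node>
    </launch>

As long as TF can connect the camera frame to the map frame at each cloud's timestamp, the insertion happens in the global frame automatically.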

Obvious asterisks:

  • While this is a popular method for junior developers and folks who don't want to actually create a SLAM solution, there are clear downsides.

  • If you're working with a 2D laser scanner, then you're throwing out a ton of data that could be used to build a better map and position yourself, and only using that 3D information to build the global model. Calling this 3D SLAM is a bit of a misnomer, but again, it's a popular method.

  • To increase fidelity, you may need to continuously re-post the graph to Octomap to update the positions of individual measurements if they shift around. This is necessary for loop closure and reduction of residual error. Hector doesn't do loop closures, so if you're married to Hector, that may not be an issue you have the ability to resolve.


Comments

Thank you for your fast and comprehensive response. This gives us everything we need to get started.

Can I clarify what you mean by "Use the positioning provided by the slam algorithm and odometry to project your points of your depth sensor into the global frame provided by the 2D slam"?

How do you specifically suggest we combine the odometry data with the Octomap and then combine this with the global frame provided by the 2D SLAM? Is this just a case of finding appropriate open-source code on GitHub? Thanks again for your help!

SamH  ( 2020-02-03 12:35:55 -0500 )

Awesome, please mark the answer as correct to get it off the unanswered questions queue.

Basically, Hector (or any SLAM) will give you a pose estimate in TF. You should use Octomap to take in sensor readings in their current frame (camera_frame, or something), transform them into the global frame (map, or something), and insert them into the Octomap occupancy grid.
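
If TF can't connect the camera frame to the map frame, you're usually just missing the camera extrinsics. A hedged sketch using tf's static_transform_publisher to publish a fixed base_link -> camera_link transform (the 0.10 m / 0.05 m offsets are made-up mounting values; measure your own):

    <launch>
      <!-- args: x y z yaw pitch roll parent_frame child_frame period_ms -->
      <!-- offsets below are hypothetical; replace with your measured camera mounting -->
      <node pkg="tf" type="static_transform_publisher" name="base_to_camera"
            args="0.10 0 0.05 0 0 0 base_link camera_link 100" />
    </launch>

With that in place, the SLAM's map -> base_link chain plus the static base_link -> camera_link transform gives octomap_server everything it needs to insert the clouds in the map frame.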

stevemacenski  ( 2020-02-03 13:31:43 -0500 )
