
Benchmarking between SLAM algorithms

asked 2019-03-20 03:10:52 -0600 by zeynep

Hello,

I'm trying to benchmark the GMAPPING and Cartographer SLAM algorithms. I have made 2D maps with gmapping and cartographer on ROS Kinetic (occupancy grid map format; I saved the maps with map_saver). I need a roadmap for how to compare the maps. I also have to make a ground-truth map and compare the maps against it.

My questions:

1. How can I make a ground-truth map, and which tools can be used?
2. How can I compare the .pgm files (maps)? I know that I first have to align the maps and then compare them, but how?

Thanks for your help.


4 Answers


answered 2019-11-21 23:25:16 -0600

I know it's quite late to answer this, but I recently compared the results of two SLAM packages. For the comparison I used a Python package called evo, which is meant for comparing odometry and SLAM results.

I have described my full workflow in this post, but in short it consists of the following steps:

  • Record a bag file while the robot is moving around the space
  • Replay the data while running the SLAM packages of your choice, and record the output as bag files again
  • Run a script on each bag file that converts the logged tf into a message type supported by evo (a sketch follows this list)
  • Merge the two bag files
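
A minimal sketch of such a conversion script, assuming the SLAM package publishes the map → base_link transform directly on /tf; in practice most SLAM packages publish map → odom, so you may need to chain transforms. All frame, topic, and file names here are placeholders:

```python
#!/usr/bin/env python
# Sketch: extract map -> base_link transforms from /tf in a bag and
# rewrite them as geometry_msgs/PoseStamped, a type evo can read.
# Frame and topic names are assumptions; adjust them to your setup.
import rosbag
from geometry_msgs.msg import PoseStamped

IN_BAG, OUT_BAG = 'slam_run.bag', 'slam_run_poses.bag'
PARENT, CHILD = 'map', 'base_link'  # assumed frame pair

with rosbag.Bag(IN_BAG) as inbag, rosbag.Bag(OUT_BAG, 'w') as outbag:
    for _, msg, t in inbag.read_messages(topics=['/tf']):
        for tf in msg.transforms:
            if tf.header.frame_id == PARENT and tf.child_frame_id == CHILD:
                pose = PoseStamped()
                pose.header = tf.header
                pose.pose.position.x = tf.transform.translation.x
                pose.pose.position.y = tf.transform.translation.y
                pose.pose.position.z = tf.transform.translation.z
                pose.pose.orientation = tf.transform.rotation
                outbag.write('/slam_pose', pose, t)
```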

What was missing in my experiments was ground truth, but if you have that information you will easily be able to add it.
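
If you do have ground truth in the merged bag, the comparison itself could look roughly like this with evo's Python API (an untested sketch against a recent evo version; /groundtruth_pose and /slam_pose are placeholder topic names recorded as geometry_msgs/PoseStamped):

```python
# Sketch: compute Absolute Pose Error between a ground-truth and a
# SLAM trajectory stored in one bag. Topic names are assumptions.
import rosbag
from evo.core import metrics, sync
from evo.tools import file_interface

bag = rosbag.Bag('merged.bag')
traj_ref = file_interface.read_bag_trajectory(bag, '/groundtruth_pose')
traj_est = file_interface.read_bag_trajectory(bag, '/slam_pose')
bag.close()

# Match poses by timestamp, then align the estimate to the reference.
traj_ref, traj_est = sync.associate_trajectories(traj_ref, traj_est,
                                                 max_diff=0.01)
traj_est.align(traj_ref)

ape = metrics.APE(metrics.PoseRelation.translation_part)
ape.process_data((traj_ref, traj_est))
print('APE RMSE [m]:', ape.get_statistic(metrics.StatisticsType.rmse))
```

evo's evo_ape and evo_traj command-line tools cover the same ground if you prefer not to script it.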


answered 2019-03-20 14:15:38 -0600

The best way to have ground truth is to make the ground the robot rolls on (literally) :) Build a Gazebo world for your robot to roll around in, so that you know exactly where everything is. Then run the different SLAM implementations over the _exact same_ mapping run's data of that space and compare the results.
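
To get the ground-truth trajectory out of the simulation, one option is a small logger node; a sketch, assuming the robot model is named my_robot in Gazebo (check /gazebo/model_states for the actual name), writing TUM-format poses that tools like evo accept:

```python
#!/usr/bin/env python
# Sketch: log a Gazebo model's ground-truth pose to a TUM-format file
# (timestamp x y z qx qy qz qw). The model name is an assumption.
import rospy
from gazebo_msgs.msg import ModelStates

MODEL = 'my_robot'
out = open('groundtruth.tum', 'w')

def callback(msg):
    if MODEL not in msg.name:
        return
    p = msg.pose[msg.name.index(MODEL)]
    t = rospy.Time.now().to_sec()  # ModelStates carries no header stamp
    out.write('%f %f %f %f %f %f %f %f\n' % (
        t, p.position.x, p.position.y, p.position.z,
        p.orientation.x, p.orientation.y, p.orientation.z,
        p.orientation.w))

rospy.init_node('groundtruth_logger')
rospy.Subscriber('/gazebo/model_states', ModelStates, callback)
rospy.on_shutdown(out.close)
rospy.spin()
```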

As for comparison techniques, many are available; I'd recommend consulting other SLAM benchmarking papers for their methodologies and using one of them, or a variant of one, for a formal comparison.


answered 2019-10-17 16:21:28 -0600 by Orhan

As @stevemacenski said, the easiest way to test their accuracy is a simulation environment, where you have direct access to the ground-truth data. This is what I did three years ago to compare the effects of changing the navigation stack's parameters.

But in a real testing environment, you could measure the exact positions of multiple points in a room and then manually navigate the robot to those positions, repeatedly, on the maps generated by both algorithms. After navigating between those points enough times, you will have enough data to compare the performance.

Or, if this is an academic project, you could create an environment with different-sized AR tags placed everywhere, and continuously collect the differences between the navigation's position beliefs on the maps generated by both packages and the positions reported by the AR tag detections.
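
Either way, once you have pairs of surveyed (or tag-derived) positions and the robot's believed positions, the comparison itself is only a few lines; a sketch with made-up placeholder numbers:

```python
# Sketch: per-point position error and RMSE between surveyed points
# and the robot's believed positions. All numbers are placeholders.
import numpy as np

ground_truth = np.array([[0.00, 0.00], [4.00, 0.00], [4.00, 3.00]])
believed     = np.array([[0.03, -0.02], [4.11, 0.05], [3.92, 3.08]])

errors = np.linalg.norm(believed - ground_truth, axis=1)
print('per-point error [m]:', errors)
print('RMSE [m]:', np.sqrt(np.mean(errors ** 2)))
```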


answered 2019-03-20 10:41:44 -0600 by kosmastsk

I don't really know how to answer your first question, but I would suggest running several SLAM algorithms and keeping as ground truth the one that you think performs best.

As for the second question, there is this package, https://github.com/robotics-4-all/ogm_merging, which implements the evaluation between two different OGMs.
It applies a nearest-neighbor method: for every point of the map that is considered an obstacle, it calculates the distance to the closest obstacle point in the other map and computes the overall mean squared error.
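
The idea behind that metric can be sketched in a few lines with scipy's KD-tree; this is not ogm_merging's actual implementation, and it assumes the two PGMs are already aligned and share the resolution given in their .yaml files:

```python
# Sketch of the nearest-neighbor MSE idea (not ogm_merging itself):
# for each obstacle cell in map A, find the closest obstacle cell in
# map B and average the squared distances in meters.
import numpy as np
from scipy.spatial import cKDTree
from PIL import Image

RESOLUTION = 0.05  # meters per pixel, from the map .yaml (assumed)

def obstacle_points(pgm_path, occupied_below=50):
    img = np.array(Image.open(pgm_path))
    return np.argwhere(img < occupied_below)  # map_saver draws obstacles dark

a = obstacle_points('gmapping.pgm')
b = obstacle_points('cartographer.pgm')

dists, _ = cKDTree(b).query(a)  # nearest obstacle in b for each point of a
print('NN mean squared error [m^2]:', np.mean((dists * RESOLUTION) ** 2))
```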


Comments

@kosmastsk your feedback is greatly appreciated. I was able to build and run the mentioned package (env: ROS Kinetic, OpenCV 2.4.13, cv_bridge built with OpenCV 2.4.13) in order to benchmark SLAM-generated OGM maps (cartographer vs. gmapping). Alignment and map merging are successful, but is the difference shown in the Result field of the "Feature Matching MSE" window, and is it in meters? Thanks in advance.

seamus (2019-03-25 04:11:09 -0600)

@seamus glad that this package helped you. Yes, the result is shown there, and the MSE is calculated in meters; the Quality Metric is a normalized MSE. So I would suggest taking Q into account when you evaluate maps, as a better-defined and bounded metric.

kosmastsk (2019-03-25 04:54:36 -0600)
