
An algorithm question about SLAM

asked 2012-11-25 13:57:05 -0500 by NYC

updated 2014-01-28 17:14:23 -0500 by ngrennan

Hi, this is a SLAM algorithm concept question.

There are 2 things I know:

  1. There are 2 kinds of maps we usually use: occupancy grid maps and feature maps. (An occupancy grid map divides the map into grids, while a feature map requires converting sensor readings into landmarks.)

  2. There are some algorithms we usually use in SLAM, such as the EKF and the particle filter. I have read some articles about these algorithms, and they often refer to their input data as "landmarks".

The question is: if I use these algorithms (EKF, PF), do I always have to convert my sensor data into landmarks first, even when I am using a grid map?

I mean, my sensor readings come from a range finder. If I have to convert those data, I need to build a procedure to interpret them into landmarks (such as walls, corners, etc.), right?

Then, if that procedure is not well designed, there might be some landmarks it cannot recognize, and my robot will behave abnormally. Is this correct?

Thanks in advance.



Thanks a lot for your answers. These are very helpful. Thank you.

NYC  ( 2012-11-27 22:48:14 -0500 )

3 Answers


answered 2012-11-26 22:09:12 -0500 by Claudio

updated 2013-03-20 01:46:49 -0500

Frankly speaking, I think you have to read a whole lot more to get a better idea of the problem.

A couple of links from Amazon (in no particular order, just easily spotted and cheap): Introduction-Autonomous-Mobile-Intelligent-Robotics


Now, on to your specific question. Every entity that has to navigate an environment has to accomplish multiple tasks:

  • path finding
  • obstacle avoidance
  • map construction
  • localization


There are a number of algorithms (from the well-known A* and D* to more esoteric ones), but they all depend on the way you represent the navigation map (which may be different from, say, the real-time map or the obstacle map: you may have different-resolution maps for different tasks).

The simplest theory is this: your map is a connected graph, and the algorithm tries to find the shortest path between two nodes. This way the hardest part, determining where a node effectively is, is moved outside the path planning and into the map construction.

The key point here is what and where a node is. The number of nodes heavily depends on the path-planning algorithm: some algorithms have very short expansions (meaning they wander very short distances from nodes, trying to stay near them), so you need lots of nodes to cover the environment (grid maps are often used here); other algorithms have much wider expansions, so you can have fewer nodes representing the area (Voronoi expansions and the medial axis can be used when you can wander far from nodes).
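The shortest-path-on-a-graph view above can be sketched in a few lines. This is an illustrative Dijkstra search over a tiny hand-built grid graph (the node labels and unit edge costs are invented for the example, not taken from any ROS package):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted graph given as {node: {neighbor: cost}}."""
    queue = [(0.0, start, [start])]  # (cost so far, node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, step in graph[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + step, neighbor, path + [neighbor]))
    return float("inf"), []

# A tiny 4-connected grid; nodes labelled by (row, col), unit edge costs.
grid_graph = {
    (0, 0): {(0, 1): 1, (1, 0): 1},
    (0, 1): {(0, 0): 1, (1, 1): 1},
    (1, 0): {(0, 0): 1, (1, 1): 1},
    (1, 1): {(0, 1): 1, (1, 0): 1},
}
cost, path = dijkstra(grid_graph, (0, 0), (1, 1))
```

A* is the same search with a heuristic added to the priority; the point is that the planner only sees nodes and edge costs, and everything about what a node physically is lives in the map-construction step.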


The task of avoiding obstacles can be complicated ad libitum: you can go from bumpers (like the first Roombas) up to multiple stereo cameras and laser range finders (like the autonomous cars developed by Google or for the DARPA Urban Challenge; check it out!).

For most applications a laser ranger (Hokuyo and SICK are the most famous manufacturers) is the default choice: such a sensor gives you distance readings across a circular sector of up to 270°, with angular resolutions down to 0.25°, at frequencies up to 40 Hz and ranges up to 80 m.
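To make those numbers concrete: a laser scan is essentially a list of distances at known angles. A minimal sketch (the 5-beam scan and its parameters are invented for illustration, not a real Hokuyo or SICK spec) converting one scan to Cartesian points in the sensor frame:

```python
import math

def scan_to_points(ranges, angle_min, angle_increment, range_max):
    """Convert a laser scan (list of distances) to (x, y) points in the
    sensor frame, dropping zero and out-of-range readings."""
    points = []
    for i, r in enumerate(ranges):
        if 0.0 < r < range_max:
            theta = angle_min + i * angle_increment
            points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# A fake 5-beam scan spanning -45° to +45°; the middle beam returned nothing.
pts = scan_to_points([1.0, 2.0, 0.0, 2.0, 1.0],
                     angle_min=-math.pi / 4,
                     angle_increment=math.pi / 8,
                     range_max=80.0)
```

This point cloud is the raw material both approaches start from; the difference is whether you then extract landmarks from it or rasterize it into grid cells.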

The key point, though, is how you process your data. You have to decide how you categorize an obstacle (meaning how you determine what sequence of data effectively is an obstacle, and what properties an obstacle can have), and after that, how this categorization affects path planning and/or map building.

The simplest (but still working) obstacle avoidance is the VFH (+ or *) method: it is the simplest method that still does some kind of data coalescence. An even simpler method would be: get as far away as possible from the nearest obstacle.
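The core idea behind VFH can be illustrated without any of the real algorithm's refinements: bin obstacle points into a polar histogram and steer toward the least-occupied sector. This is a toy sketch of that single idea (the bin count and weighting are arbitrary), not VFH+ itself:

```python
import math

def free_heading(points, n_bins=36, max_range=5.0):
    """Build a coarse polar histogram of obstacle density and return the
    center angle (radians, in [-pi, pi)) of the least-occupied sector."""
    hist = [0.0] * n_bins
    for x, y in points:
        r = math.hypot(x, y)
        if r < max_range:
            b = int(((math.atan2(y, x) + math.pi) / (2 * math.pi)) * n_bins) % n_bins
            hist[b] += max_range - r  # closer obstacles weigh more
    best = min(range(n_bins), key=lambda b: hist[b])
    return (best + 0.5) * (2 * math.pi / n_bins) - math.pi

# An obstacle cluster dead ahead (+x direction): the chosen heading avoids it.
heading = free_heading([(1.0, 0.0), (1.2, 0.1)])
```

Real VFH adds sector smoothing, valley selection near the goal direction, and a cost function; the histogram-of-sectors idea above is the part it shares with this sketch.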

More sophisticated methods involve determining some characteristics of the obstacle (like dimensions, shape, orientation) and placing it in the map, to allow the planner to plan a path around it. But again, this depends on the kind and resolution of the map: for example, if your planning algorithm runs on a low-resolution grid map, you may not even put obstacles in ... (more)


answered 2012-11-25 15:16:07 -0500 by Mac

Occupancy-grid mapping algorithms (GMapping, DP-SLAM) don't convert their inputs into landmarks first. From a mathematical point of view, it can be useful to use (broadly) the same constructions to talk about the grid cells as one would about the landmarks ("features") in a landmark-based map.

Note that particle filtering, EKFs, etc. are general techniques, and can apply to both kinds of maps.
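To see why no landmark extraction is needed in the grid case: each cell just accumulates evidence from the raw range readings that pass through or end in it. A minimal log-odds sketch of a single-cell update (the hit/miss probabilities are illustrative values, not GMapping's):

```python
import math

def logodds(p):
    """Log-odds representation of a probability."""
    return math.log(p / (1.0 - p))

def update_cell(l, hit, p_hit=0.7, p_miss=0.4):
    """Bayesian log-odds update of one grid cell given one beam observation:
    add evidence for occupancy on a hit, against it on a pass-through."""
    return l + (logodds(p_hit) if hit else logodds(p_miss))

def prob(l):
    """Convert log-odds back to a probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Start unknown (p = 0.5, log-odds 0); three hits push the cell toward occupied.
l = 0.0
for _ in range(3):
    l = update_cell(l, hit=True)
p = prob(l)
```

The EKF or particle filter then estimates the robot pose, and this per-cell update runs on the raw scan given that pose; at no point does anything need to be recognized as a wall or corner.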


answered 2012-11-26 17:46:28 -0500 by NYC

"particle filtering, EKFs, etc. are general techniques, and can apply to both kinds of maps"

Hi, thanks for your reply.

However, I have asked on another forum, and the answer was quite different.

Why is this?




I'm going to write a more detailed answer, but suffice it to say for the moment that filtering methods don't require any particular features in the data being filtered.

Claudio  ( 2012-11-26 21:21:26 -0500 )

@NYC: Please do not create answers for discussion or comments. This is not a forum. Instead, either edit your original post or use the comment functionality.

dornhege  ( 2012-11-26 22:26:30 -0500 )

Question Tools

Asked: 2012-11-25 13:57:05 -0500
Seen: 2,915 times
Last updated: Mar 20 '13