# Can I use a single-point laser range finder with a SLAM algorithm (not a Hokuyo-type/wide-angle type scanner)?

Hello hello,

I've seen plenty of examples of building maps with ROS using Hokuyo laser scanners that can scan at wide angles (270 degrees!). However, from what I've seen, these scanners are way out of my budget, easily weighing in at over a thousand bucks.

Can you use a single-point laser range finder such as this one ( http://www.fluke.com/fluke/m3en/Laser... ) and still get usable data for mapping? It would just take longer to scan the walls of an area. This particular range finder can be fitted with an interface board ( http://porcupineelectronics.com/Laser... ) that allows USB communication with microcontrollers like Arduinos. Should I be able to send the sensor's data to ROS this way? The range finder and the interface board together would cost about $300, in comparison to the $1000+ Hokuyo alternatives.



Well, just adding my few cents here: you do indeed need to scan the environment surrounding your robot in order to build a map, but, contrary to common sense, how fast that scanning has to happen depends on your robot's mission and expected performance.

In other words, you could try mounting the single-point range finder on a servo or stepper motor, but keep in mind you will probably have to stop your robot frequently to update the map, or find a way of building and updating it on the move. On top of that, it is not just the speed of the range finder that matters; its angular resolution matters too.

Translating this to our hypothetical case of a servo-mounted range finder, the step size of the motor becomes your angular resolution, and the farther an object is from your sensor, the greater the chance you either miss it completely or only capture it partially.
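To put a number on that, the lateral spacing between adjacent samples is just range times the angular step in radians. A minimal sketch (assuming a hypothetical stepper with a common 1.8-degree step size):

```python
import math

def lateral_spacing(range_m, step_deg):
    """Distance between adjacent samples at a given range, for a
    servo/stepper advancing step_deg per measurement."""
    return range_m * math.radians(step_deg)

# With a 1.8-degree step, samples land ~3 cm apart at 1 m,
# but ~31 cm apart at 10 m -- wide enough to miss a table leg.
for r in (1.0, 5.0, 10.0):
    print(f"{r:4.1f} m -> {lateral_spacing(r, 1.8) * 100:.1f} cm between samples")
```

So the same motor that maps nearby walls nicely can skip right over thin or distant obstacles.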

Another way would be initializing your robot, making it do a 360-degree spin in place (if its mechanics allow for it, as on a TurtleBot), mapping the initial environment and then starting to plan its path. Zig-zag it a little bit while it moves to produce a "scanning" effect. BUT that is over-thinking this solution. Let's try another approach.

You might want to consider the (in)famous Microsoft Kinect from the Xbox 360. It can be used for this as well; you will find many videos on YouTube of ROS implementations with this device, with very fair results. It has TWO cameras: an IR (monochromatic) one that works as the range finder, and a regular one for color images. (Some ideas about it on YouTube: https://www.youtube.com/watch?v=dRPEn... )

Now, there is more. Thinking outside the box (and depending on what your robot will do), you could run the regular camera through an edge-detection filter and combine that with the IR range data to fine-tune your mapped area. (Check some ideas here on YouTube: https://www.youtube.com/watch?v=yQZIS... )

The Kinect's IR range finder can only detect from about 1.2 m to 3.5 m. You can use OpenCV on the color image to estimate distances below 1.2 m and beyond 3.5 m. There are some applications using a Raspberry Pi and OpenCV (not necessarily ROS) to implement dashcams that do this. No reason you could not hack it a bit and build a great robot for less money. (Yet another example here: https://www.youtube.com/watch?v=dcm9N... )

A few hints before you buy a Kinect:

• This thing is BIG and HEAVY. It won't fly on a light quad, and if you are planning a terrestrial bot, make sure to take its dimensions into consideration. For reference, it is almost as wide as a Roomba vacuum cleaner.

• You want the "original" version, from the Xbox 360. Until a few months ago, the newer Kinect was not supported (no one had worked out its communication protocol).

• Let's call this device the Kinect 360; it DOES NEED a special ...


Wow, thanks so much for all the information and external links. I really appreciate all the effort. I saw a ROSCon 2012 presentation here ( http://www.youtube.com/watch?v=ZTR16W0DsEM ) that's all about the Kinect's hardware details and its capabilities with ROS. It can be helpful for others studying the Kinect.

( 2014-06-19 12:58:55 -0600 )

All LIDAR-based SLAM approaches require a certain scan FOV, because the 2D robot pose (x, y and orientation) has to be estimated implicitly from each scan. With a single measurement this generally is not possible. Additionally, in the SLAM case (as opposed to localization only), a map has to be learned online, and this map has to be somewhat dense to provide enough information. For these reasons, things will either not work very well, or you accept the fact that you have to run things slowly enough to generate a dense LaserScan message and "simulate" a spinning LIDAR. Slow enough here likely means painfully, unusably slow.
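To illustrate why "simulating" a spinning LIDAR ends up so slow, here is a minimal sketch in plain Python. The `read_range_m` and `set_servo_deg` callbacks are hypothetical stand-ins for your actual sensor and servo drivers; the resulting angle/range arrays are what you would copy into the fields of a `sensor_msgs/LaserScan` message:

```python
import math
import time

def sweep_scan(read_range_m, set_servo_deg,
               start_deg=-90.0, end_deg=90.0, step_deg=1.0, settle_s=0.05):
    """Sweep a single-point range finder across a servo arc and
    collect one dense 'scan': a list of angles (rad) and ranges (m)."""
    angles, ranges = [], []
    deg = start_deg
    while deg <= end_deg:
        set_servo_deg(deg)       # point the sensor
        time.sleep(settle_s)     # let the servo settle before measuring
        angles.append(math.radians(deg))
        ranges.append(read_range_m())
        deg += step_deg
    return angles, ranges
```

At 1-degree steps over a 180-degree arc with a 50 ms settle time, that is 181 measurements and roughly 9 seconds per scan, during which the robot must hold still (or you must correct each ray for its motion).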

A similar approach is to use cheap IR distance sensing (see the PML video and the Q&A about trying to use it with gmapping), but that mainly works for simple obstacle avoidance, not so much for SLAM.

A low-cost spinning LIDAR is part of the Neato XV-11 vacuum cleaning robot, and there are also new low-cost devices coming to market (Robopeak). Those are likely your best bet for actually doing SLAM.

Another option is RGB-D sensing, which is relatively cheap and can simulate a low-FOV LIDAR, as done on the Turtlebot (pointcloud_to_laserscan).
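The core idea behind pointcloud_to_laserscan can be sketched in a few lines: keep the 3D points near the sensor's horizontal plane, bin them by bearing, and take the closest return per bin. This is only a simplified illustration of the technique, not the package's actual implementation (which properly handles TF frames, height limits and ROS message types):

```python
import math

def cloud_to_scan(points, angle_min=-0.5, angle_max=0.5, n_bins=64):
    """Collapse 3D points (x forward, y left, z up, in meters) into a
    2D scan: the closest range per angular bin, inf where no return."""
    inf = float("inf")
    ranges = [inf] * n_bins
    bin_width = (angle_max - angle_min) / n_bins
    for x, y, z in points:
        if abs(z) > 0.1:                 # keep only points near the scan plane
            continue
        bearing = math.atan2(y, x)
        if not (angle_min <= bearing < angle_max):
            continue                     # outside the simulated FOV
        i = int((bearing - angle_min) / bin_width)
        ranges[i] = min(ranges[i], math.hypot(x, y))
    return ranges
```

Note that the simulated FOV is limited by the depth camera's own (the Kinect sees roughly 57 degrees horizontally), which is exactly the "low FOV" trade-off mentioned above.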


Great response! And thanks for the suggestions for alternatives; the Neato XV-11 and Robopeak look pretty promising!

( 2014-06-19 12:55:20 -0600 )