All LIDAR-based SLAM approaches require a certain scan FOV, because the 2D robot pose (x, y, and orientation) has to be estimated implicitly from a scan; with a single range measurement this is generally not possible. Additionally, in the SLAM case (as opposed to localization only), a map has to be learned online, and this map has to be somewhat dense to provide enough information. For these reasons, things will either not work very well, or you accept that you have to run slowly enough to accumulate a dense LaserScan message and "simulate" a spinning LIDAR. Slow enough here likely means painfully and unusably slow.
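To illustrate the "simulate a spinning LIDAR" idea: you would take many single-beam readings while the sensor (or robot) slowly rotates, and bin them into the dense `ranges` array a `sensor_msgs/LaserScan` expects. A minimal sketch, without ROS dependencies (the function name and binning scheme are my own, not from any package):

```python
import math

def assemble_scan(samples, angle_min=-math.pi, angle_max=math.pi,
                  angle_increment=math.radians(1.0)):
    """Bin (angle, range) samples from a slowly rotating single-beam
    sensor into a dense array, mimicking LaserScan.ranges.
    Bins with no return stay at float('inf')."""
    n = int(round((angle_max - angle_min) / angle_increment))
    ranges = [float('inf')] * n
    for angle, r in samples:
        i = int((angle - angle_min) / angle_increment)
        if 0 <= i < n:
            # keep the nearest return if a bin is hit twice
            ranges[i] = min(ranges[i], r)
    return ranges
```

Even with this, the scan only reflects the world as it was over the whole (slow) sweep, which is exactly why SLAM quality suffers when the robot moves during acquisition.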
A similar approach is to use cheap IR distance sensing (see the PML video and the Q/A about trying to use it with gmapping), but that mainly works for simple obstacle avoidance, not so much for SLAM.
A low-cost spinning LIDAR is part of the Neato XV-11 vacuum cleaning robot and there are also new low-cost devices coming to the market (Robopeak). Those are likely your best bet for actually doing SLAM.
Another option is RGB-D sensing, which is relatively cheap and can emulate a limited-FOV LIDAR, as done on the TurtleBot (pointcloud_to_laserscan).
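The idea behind that conversion is simple: take the points within a horizontal height band, project them onto the plane, and keep the closest obstacle per bearing. A rough sketch of that projection (the function and its parameters are illustrative, not the actual pointcloud_to_laserscan API, which does this in C++ on `sensor_msgs/PointCloud2`):

```python
import math

def cloud_to_scan(points, angle_min=-math.pi / 2, angle_max=math.pi / 2,
                  angle_increment=math.radians(0.5),
                  min_z=-0.1, max_z=0.5):
    """Project 3D points (x forward, y left, z up, sensor frame)
    within a height band onto a planar scan."""
    n = int(round((angle_max - angle_min) / angle_increment))
    ranges = [float('inf')] * n
    for x, y, z in points:
        if not (min_z <= z <= max_z):
            continue  # outside the height slice of interest
        angle = math.atan2(y, x)
        if not (angle_min <= angle < angle_max):
            continue  # outside the (narrow) RGB-D field of view
        i = int((angle - angle_min) / angle_increment)
        # keep the closest obstacle along each bearing
        ranges[i] = min(ranges[i], math.hypot(x, y))
    return ranges
```

Note the narrow `angle_min`/`angle_max`: an RGB-D camera only covers roughly 57 degrees horizontally, so the resulting "scan" has a much smaller FOV than a real spinning LIDAR, which is the main limitation of this approach for SLAM.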