
aarons's profile - activity

2014-10-07 00:34:32 -0500 received badge  Great Question (source)
2014-03-31 02:11:54 -0500 received badge  Stellar Question (source)
2014-03-18 01:27:36 -0500 received badge  Favorite Question (source)
2012-08-15 10:50:52 -0500 received badge  Famous Question (source)
2012-03-05 18:15:45 -0500 received badge  Notable Question (source)
2011-09-13 21:40:07 -0500 received badge  Popular Question (source)
2011-08-11 05:27:13 -0500 received badge  Good Question (source)
2011-07-19 22:12:41 -0500 received badge  Taxonomist
2011-03-18 06:30:18 -0500 marked best answer Using multiple laser scanners

I don't believe the current slam_gmapping implementation in ROS directly supports laser scans coming from lasers with different frames.

A quick solution: you could combine your two 180-degree laser scans into one 360-degree laser scan. Compute the x-y coordinates of the beam endpoints from both laser scans, then calculate the distance from each endpoint to a virtual laser scanner, and publish the result as a single LaserScan.
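That merging step can be sketched in plain Python, without any ROS dependencies. The function names, the laser poses, and the 360-bin resolution below are all illustrative assumptions, not part of any ROS package:

```python
import math

def scan_to_points(ranges, angle_min, angle_inc, pose):
    """Convert one scan's ranges to (x, y) endpoints in the virtual
    scanner's frame. pose = (x, y, yaw) of the real laser relative
    to the virtual scanner."""
    lx, ly, lyaw = pose
    pts = []
    for i, r in enumerate(ranges):
        if not math.isfinite(r):
            continue  # skip invalid/out-of-range returns
        a = angle_min + i * angle_inc + lyaw
        pts.append((lx + r * math.cos(a), ly + r * math.sin(a)))
    return pts

def points_to_scan(points, n_beams=360):
    """Re-bin Cartesian endpoints into a 360-degree virtual scan,
    keeping the closest return per angular bin."""
    inc = 2 * math.pi / n_beams
    out = [float('inf')] * n_beams
    for x, y in points:
        r = math.hypot(x, y)
        a = math.atan2(y, x) % (2 * math.pi)
        i = int(round(a / inc)) % n_beams
        out[i] = min(out[i], r)
    return out
```

A real node would fill in the remaining LaserScan fields (angle_min, angle_increment, range limits, frame_id) from the virtual scanner's configuration; this only shows the geometry.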

This works best if your sensors are aligned well, i.e., exactly 180 degrees apart from each other. Also, try to get the optical centers of the lasers as close together as possible. The LaserScan message assumes the angular gap between consecutive beams is constant, so hacking two laser scans into one might introduce some distortion in the data. Nevertheless, the distortion will probably be minimal if you are careful with the setup of your scanners.

Now, for a more correct solution: slam_gmapping (as well as other laser tools, such as canonical_scan_matcher) doesn't algorithmically require that the input data be in polar coordinates. In fact, it takes the (r, theta) readings from the laser and converts them to (x, y) coordinates internally. What would be best is to rewrite the wrappers to accept (x, y) coordinates directly. Then you would be able to combine any number of laser scans, regardless of their physical configuration. All you would have to do is convert the LaserScans to PointClouds and then merge the PointClouds together. You would also be able to use non-laser sources of data (like the Kinect) without having to go through a clunky PointCloud-to-LaserScan conversion.
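For illustration, the convert-and-merge step can be sketched without ROS. In a real node you would use laser_geometry's LaserProjection plus tf to do the projection and frame transforms; the poses and ranges below are made-up example values:

```python
import math

def scan_to_cloud(ranges, angle_min, angle_inc, laser_pose):
    """Project a LaserScan's ranges into Cartesian points in the robot's
    base frame. laser_pose = (x, y, yaw) of the scanner's static mounting."""
    lx, ly, lyaw = laser_pose
    c, s = math.cos(lyaw), math.sin(lyaw)
    cloud = []
    for i, r in enumerate(ranges):
        if not math.isfinite(r):
            continue
        a = angle_min + i * angle_inc
        # beam endpoint in the laser frame, then rotated/translated to base
        bx, by = r * math.cos(a), r * math.sin(a)
        cloud.append((lx + c * bx - s * by, ly + s * bx + c * by))
    return cloud

# Hypothetical setup: two scanners about 1 m apart, facing opposite directions.
front = scan_to_cloud([2.0], -math.pi / 2, math.pi / 180, (0.5, 0.0, 0.0))
rear = scan_to_cloud([2.0], -math.pi / 2, math.pi / 180, (-0.5, 0.0, math.pi))
merged = front + rear  # merging clouds is just concatenation in a common frame
```

Once both clouds are expressed in the same base frame, the merge is a plain concatenation, which is why this representation handles any number and placement of sensors.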

2011-03-05 06:27:11 -0500 received badge  Nice Question (source)
2011-03-05 06:22:12 -0500 received badge  Student (source)
2011-03-05 06:17:47 -0500 commented answer Using multiple laser scanners
Unfortunately my scanners are at opposite sides of the robot, about 1m apart. I already hacked together a node that merges the LaserScans into one PointCloud, and was planning to look into modifying slam_gmapping to accept a PointCloud. It looks like I'm on the right path then. Thanks.
2011-03-05 05:32:29 -0500 asked a question Using multiple laser scanners

What is the best way to use multiple laser scanners on a robot? Currently I have two Hokuyo scanners, one facing forward and one facing backward. I'd like to do SLAM as efficiently as possible using data from both scanners.