
I don't believe the current slam_gmapping implementation in ROS directly supports laser scans coming from lasers with different frames.

A quick solution: you could try to combine your two 180-deg laser scans into one 360-deg laser scan. Compute the x-y coordinates of each beam endpoint from both laser scans, then calculate the distance from each endpoint to a virtual laser scanner, and output the result as a single LaserScan.

This works best if you align your sensors well, i.e., mount them exactly 180 degrees apart from each other. Also, try to get the optical centers of the lasers as close together as possible. The LaserScan message assumes the angular gap between two consecutive beams is constant, so hacking two laser scans into one might introduce some distortion in the data. Nevertheless, the distortion will probably be minimal if you are careful with the mounting and the settings of your scanners.
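Here is a rough sketch of that quick solution, assuming the two lasers are mounted back-to-back with (roughly) coincident origins; the virtual frame id and the mounting assumption are mine, not anything slam_gmapping defines:

```python
import math
from sensor_msgs.msg import LaserScan

def merge_scans(front, rear):
    """Combine two 180-deg scans (rear laser rotated 180 deg, shared origin)
    into a single virtual 360-deg LaserScan."""
    merged = LaserScan()
    merged.header.stamp = front.header.stamp
    merged.header.frame_id = "virtual_laser"   # hypothetical frame id
    merged.angle_min = -math.pi
    merged.angle_max = math.pi
    merged.angle_increment = front.angle_increment
    merged.range_min = min(front.range_min, rear.range_min)
    merged.range_max = max(front.range_max, rear.range_max)
    n = int(round((merged.angle_max - merged.angle_min) / merged.angle_increment)) + 1
    merged.ranges = [float('inf')] * n

    for scan, yaw_offset in ((front, 0.0), (rear, math.pi)):
        for i, r in enumerate(scan.ranges):
            if not (scan.range_min <= r <= scan.range_max):
                continue
            # Beam endpoint in the virtual scanner frame (origins assumed coincident)
            theta = scan.angle_min + i * scan.angle_increment + yaw_offset
            x, y = r * math.cos(theta), r * math.sin(theta)
            # Re-express the endpoint as range/bearing from the virtual scanner
            angle = math.atan2(y, x)
            j = int(round((angle - merged.angle_min) / merged.angle_increment))
            if 0 <= j < n:
                merged.ranges[j] = min(merged.ranges[j], math.hypot(x, y))
    return merged
```

You would still need a small node that subscribes to the two scan topics (message_filters can synchronize them by timestamp) and republishes the merged scan for gmapping.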

Now, for a more correct solution: slam_gmapping (as well as other laser tools, such as canonical_scan_matcher) doesn't algorithmically require that the input data be in polar coordinates. In fact, it takes the (r, theta) readings from the laser and converts them to (x, y) coordinates internally. What would be best is to rewrite the wrappers to accept (x, y) coordinates directly. Then you would be able to combine any number of laser scans, regardless of their physical configuration. All you would have to do is convert the LaserScans to PointClouds and then merge the PointClouds together. You would also be able to use non-laser sources of data (like the Kinect) without having to go through a clunky PointCloud-to-LaserScan conversion.
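The LaserScan-to-PointCloud merging step itself is straightforward. A sketch, assuming ROS 1 with the laser_geometry, tf2_ros and tf2_sensor_msgs packages (the target frame name is hypothetical):

```python
import rospy
import tf2_ros
import sensor_msgs.point_cloud2 as pc2
from laser_geometry import LaserProjection
from tf2_sensor_msgs.tf2_sensor_msgs import do_transform_cloud

projector = LaserProjection()
tf_buffer = tf2_ros.Buffer()
# The buffer only fills once a node is running with a listener attached:
tf_listener = tf2_ros.TransformListener(tf_buffer)

def scans_to_merged_cloud(scans, target_frame="base_link"):
    """Project each LaserScan to a PointCloud2, transform it into a common
    frame, and concatenate the points into one cloud."""
    points = []
    for scan in scans:
        cloud = projector.projectLaser(scan)          # polar -> Cartesian
        tf = tf_buffer.lookup_transform(target_frame,
                                        scan.header.frame_id,
                                        scan.header.stamp,
                                        rospy.Duration(0.1))
        cloud = do_transform_cloud(cloud, tf)
        points.extend(pc2.read_points(cloud, field_names=("x", "y", "z"),
                                      skip_nans=True))
    header = scans[0].header
    header.frame_id = target_frame
    return pc2.create_cloud_xyz32(header, points)
```

This only covers the merging; the actual work in this approach is rewriting the gmapping wrapper to consume the merged cloud instead of a LaserScan.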
