
rtabmap use kinect and laser to create map

asked 2020-12-06 13:39:37 -0500

lazt omen

I tried moving from gmapping to rtabmap to be able to project 3D obstacles into the map. The robot has both a laser and a Kinect. In rtabmap.launch, Grid/FromDepth is set to true, but this only creates the grid from the Kinect. Is it possible to use both the laser and the Kinect to create the map? The idea is to have a very accurate map using the laser and have the Kinect camera project 3D obstacles on top of it. Any help would be hugely appreciated.


1 Answer


answered 2020-12-10 13:51:38 -0500

matlabbe

We cannot combine both directly: you have to generate the map either from the lidar or by projecting the depth image. It is not trivial to do both while avoiding laser scan ray tracing over obstacles that the depth camera added to previous nodes of the map. A not-so-efficient approach would be to convert the scan to a point cloud, then combine that cloud with the one created from the depth camera (the rtabmap_ros/point_cloud_aggregator nodelet could be used for that). You would then feed the combined point cloud to rtabmap (subscribe_scan_cloud) to create the grid. However, to get the same ray tracing effect as with a 2D laser scan, you would have to enable 3D ray tracing, which is quite expensive in terms of computation time (an OctoMap is created).
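A rough launch sketch of that aggregation approach, assuming the laserscan_to_pointcloud_node from the pointcloud_to_laserscan package is available for the scan conversion (any equivalent converter works) and that topic names, frames and rtabmap.launch argument names match your setup; check them against your rtabmap_ros version:

<launch>
  <!-- Sketch only: topic names and frames are placeholders for your robot. -->

  <!-- Convert the 2D laser scan to a PointCloud2. Assumes laserscan_to_pointcloud_node
       from the pointcloud_to_laserscan package is available in your distro. -->
  <node pkg="pointcloud_to_laserscan" type="laserscan_to_pointcloud_node" name="scan_to_cloud">
    <remap from="scan_in" to="/scan"/>
    <remap from="cloud"   to="/scan_cloud"/>
  </node>

  <!-- Merge the laser cloud with the depth camera cloud. -->
  <node pkg="nodelet" type="nodelet" name="point_cloud_aggregator"
        args="standalone rtabmap_ros/point_cloud_aggregator">
    <param name="count"    value="2"/>
    <param name="frame_id" value="base_link"/>
    <remap from="cloud1" to="/scan_cloud"/>
    <remap from="cloud2" to="/camera/depth_registered/points"/>
    <remap from="combined_cloud" to="/combined_cloud"/>
  </node>

  <!-- Feed the combined cloud to rtabmap and build the grid from it.
       Grid/3D and Grid/RayTracing enable the 3D ray tracing (OctoMap) mentioned above,
       which is the computationally expensive part. -->
  <include file="$(find rtabmap_ros)/launch/rtabmap.launch">
    <arg name="subscribe_scan_cloud" value="true"/>
    <arg name="scan_cloud_topic"     value="/combined_cloud"/>
    <arg name="rtabmap_args" value="--Grid/FromDepth false --Grid/3D true --Grid/RayTracing true"/>
  </include>
</launch>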

For that kind of setup, I generally use the scan for the grid map and the point cloud of the depth camera in the local costmap of move_base. That way, you get a nice global static map created from laser scans, and for safety while moving around, the local costmap makes the robot avoid obstacles that the lidar cannot see.
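A minimal sketch of that second setup: rtabmap builds the 2D grid from the laser scan only, while the depth camera cloud is used as an observation source in move_base's local costmap. Topic names, obstacle heights and the rest of the move_base configuration (planners, global costmap, footprint) are assumptions to adapt to your robot:

<launch>
  <!-- Sketch only: grid map from the laser, depth cloud only for local obstacle avoidance. -->
  <include file="$(find rtabmap_ros)/launch/rtabmap.launch">
    <arg name="subscribe_scan" value="true"/>
    <arg name="scan_topic"     value="/scan"/>
    <arg name="rtabmap_args"   value="--Grid/FromDepth false"/>
  </include>

  <node pkg="move_base" type="move_base" name="move_base">
    <!-- Only the local costmap fragment relevant to the depth camera is shown. -->
    <rosparam param="local_costmap">
      plugins:
        - {name: obstacles, type: "costmap_2d::ObstacleLayer"}
        - {name: inflation, type: "costmap_2d::InflationLayer"}
      obstacles:
        observation_sources: depth_cloud
        depth_cloud:
          topic: /camera/depth_registered/points
          data_type: PointCloud2
          marking: true
          clearing: true
          min_obstacle_height: 0.05
          max_obstacle_height: 1.5
    </rosparam>
  </node>
</launch>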



Stats

Asked: 2020-12-06 13:20:50 -0500

Seen: 231 times

Last updated: Dec 10 '20