

I see two setups that could be possible:

  1. Use the top camera for SLAM (building the global occupancy grid map) and the bottom camera to update the obstacle layer of the costmaps. That way the local and global planners will also plan according to obstacles detected by the bottom camera. To segment the point cloud produced by stereo_image_proc for the bottom camera into ground and obstacles, you can use the rtabmap_ros/obstacles_detection nodelet.
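For the first setup, a minimal launch sketch could look like this (topic names and the frame are placeholders you would adapt to your robot; in a real setup you would normally load the nodelet into an existing nodelet manager rather than standalone):

```xml
<launch>
  <!-- Segment the bottom camera's stereo cloud into ground and obstacles -->
  <node pkg="nodelet" type="nodelet" name="obstacles_detection"
        args="standalone rtabmap_ros/obstacles_detection">
    <!-- Input: point cloud from stereo_image_proc of the bottom camera (assumed topic) -->
    <remap from="cloud"     to="/bottom_camera/points2"/>
    <!-- Outputs: feed "obstacles" to the costmap obstacle layer as an observation source -->
    <remap from="obstacles" to="/bottom_camera/obstacles"/>
    <remap from="ground"    to="/bottom_camera/ground"/>
    <param name="frame_id" type="string" value="base_link"/>
  </node>
</launch>
```

You would then add /bottom_camera/obstacles as a PointCloud2 observation source in the obstacle layer configuration of your local and global costmaps.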

  2. Use both cameras for the rtabmap and visual odometry nodes. To use multiple stereo cameras with rtabmap, they should have the same resolution and should be converted into RGB-D input: use stereo_image_proc to generate the disparity image, then convert it to a depth image (the rtabmap_ros/disparity_to_depth nodelet can be used for convenience). This is similar to this setup. When you have an RGB image, a depth image and a camera info topic for each camera, see the demo_two_kinects.launch example (subscribe_rgbd=true and rgbd_cameras=2). With stereo cameras, make sure to set the approx_sync parameter to false for the rtabmap_ros/rgbd_sync nodelets. The TF between the cameras and base_link should also be accurate to get good results.
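For the second setup, a sketch of the synchronization and mapping side could look like the following (camera namespaces /cam1 and /cam2 are assumptions; the depth topics are expected to come from disparity_to_depth):

```xml
<launch>
  <!-- One rgbd_sync per camera; approx_sync=false because a stereo pair
       publishes images with exactly the same timestamp -->
  <node pkg="nodelet" type="nodelet" name="rgbd_sync1"
        args="standalone rtabmap_ros/rgbd_sync">
    <remap from="rgb/image"       to="/cam1/left/image_rect_color"/>
    <remap from="depth/image"     to="/cam1/depth"/>
    <remap from="rgb/camera_info" to="/cam1/left/camera_info"/>
    <remap from="rgbd_image"      to="/cam1/rgbd_image"/>
    <param name="approx_sync" value="false"/>
  </node>
  <!-- (repeat as rgbd_sync2 for the /cam2 topics) -->

  <!-- rtabmap subscribing to both synchronized RGB-D streams -->
  <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
    <param name="subscribe_rgbd" value="true"/>
    <param name="rgbd_cameras"   value="2"/>
    <remap from="rgbd_image0" to="/cam1/rgbd_image"/>
    <remap from="rgbd_image1" to="/cam2/rgbd_image"/>
  </node>
</launch>
```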

For odometry, rtabmap accepts only one input, so you have to choose between wheel odometry and visual odometry. To fuse two odometry sources together, see the robot_localization package. The output of robot_localization can then be used as the odometry input to rtabmap.
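A rough sketch of that fusion, assuming /wheel_odom and /rtabmap/odom as the two source topics (both placeholders), could be:

```xml
<launch>
  <!-- EKF fusing wheel odometry and visual odometry -->
  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_odom">
    <param name="frequency" value="30"/>
    <param name="odom0" value="/wheel_odom"/>
    <!-- Fuse planar velocities from the wheels (x-dot, y-dot, yaw-dot);
         the 15 booleans are x y z, roll pitch yaw, vx vy vz,
         vroll vpitch vyaw, ax ay az -->
    <rosparam param="odom0_config">
      [false, false, false,  false, false, false,
       true,  true,  false,  false, false, true,
       false, false, false]
    </rosparam>
    <param name="odom1" value="/rtabmap/odom"/>
    <rosparam param="odom1_config">
      [false, false, false,  false, false, false,
       true,  true,  false,  false, false, true,
       false, false, false]
    </rosparam>
    <!-- Fused output, to be used as rtabmap's odometry input -->
    <remap from="odometry/filtered" to="/odom_combined"/>
  </node>
</launch>
```

You would then remap rtabmap's odom topic to /odom_combined. Which state variables to fuse from each source (the odomN_config matrices above are only an example) depends on what each sensor actually measures reliably.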

Cheers,

Mathieu