
6DOF Localization with stereo camera against RGBD dense point cloud

asked 2016-11-02 01:23:47 -0500 by natejgardner

I have a point cloud of an environment generated by a Kinect One using rtabmap. I also have a stereo camera, and I'm looking for a ROS package that can perform full 6DOF localization with the stereo camera against this existing point cloud. I have found many solutions for stereo SLAM, but I'm having a hard time finding anything that does localization only with a stereo camera; what I have found seems to be depth-based only. I'm looking for something that can localize against the full RGB-D cloud and is compatible with Kinetic. Are there any packages that do this?

I would like to produce a pose estimate primarily from the stereo camera, augmented by IMUs, IR rangefinders, and other monocular cameras. Presumably, once I have a stereo localization, I could pass the other sensors through the robot_localization package to get a fused pose estimate. Any suggestions for a stereo localization package?


1 Answer


answered 2016-11-02 12:51:14 -0500 by matlabbe

Since you mentioned rtabmap, I'll describe what you can already do with it. rtabmap can do stereo localization in a map previously created by rtabmap (with any other sensor, stereo or RGB-D):

$ roslaunch rtabmap_ros stereo_mapping.launch database_path:="~/map_created_with_kinect.db" localization:=true

See stereo_mapping.launch for other remapping options (image topics or frame_id). You can also see this tutorial on stereo mapping with rtabmap.
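For example, a remapped invocation might look like the following. This is a sketch: the argument names (`frame_id`, `left_image_topic`, etc.) are those commonly found in `stereo_mapping.launch`, but they may differ in your version, so check the launch file itself before relying on them.

```shell
$ roslaunch rtabmap_ros stereo_mapping.launch \
      database_path:="~/map_created_with_kinect.db" \
      localization:=true \
      frame_id:=base_link \
      left_image_topic:=/stereo/left/image_rect_color \
      right_image_topic:=/stereo/right/image_rect \
      left_camera_info_topic:=/stereo/left/camera_info \
      right_camera_info_topic:=/stereo/right/camera_info
```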

rtabmap will not do stereo/rgbd localization on an arbitrary raw point cloud, only with its database format.




This solution only localizes based on loop closure images, right? It is not localizing against any point cloud or 3D data, only against the loop closure images and their feature vectors?

natejgardner (2016-11-02 21:16:51 -0500)

Yes, this is against the loop closure images. When a matching image is found, the 6DOF pose of the camera is computed against it (using the 3D keypoints and/or laser scans), so the camera is localized in the map.

matlabbe (2016-11-03 08:43:30 -0500)
