
What you should consider is the following: RGB-D SLAM implements the recording of a volumetric map, but neither localization in it nor navigation. Navigation is outside the scope of RGB-D SLAM. So if you have other software for navigating in a point cloud, an OctoMap, or a 2D map down-projected from the OctoMap, then you can use rgbdslam to create the map. I know that 3D navigation for this kind of map exists for the PR2, but it is based on the laser scanner.
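To illustrate the down-projection idea: a minimal sketch (not part of rgbdslam; the function name and parameters are my own) that flattens the 3D points in a height band relevant for a ground robot into a 2D occupancy grid, which a 2D planner could then consume:

```python
import numpy as np

def downproject_to_grid(points, resolution=0.05, z_min=0.1, z_max=1.5):
    """Project 3D points within a height band onto a 2D occupancy grid.

    points: (N, 3) array of x, y, z coordinates in metres.
    Returns (grid, origin): grid is a 2D uint8 array (1 = occupied),
    origin is the (x, y) world coordinate of grid cell (0, 0).
    """
    # Keep only points at heights where they are obstacles for the robot,
    # discarding the floor and anything above the robot.
    band = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]
    if band.size == 0:
        return np.zeros((1, 1), dtype=np.uint8), (0.0, 0.0)
    origin = band[:, :2].min(axis=0)
    # Convert metric x, y coordinates to integer grid indices.
    idx = np.floor((band[:, :2] - origin) / resolution).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1]] = 1  # mark cells containing obstacle points
    return grid, tuple(origin)
```

The same principle applies to an OctoMap: iterate over occupied leaves in the height band instead of raw points.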

There is also a hack to localize against the recorded landmarks (the visual features), as long as the program is not shut down in between. There is functionality to save the features and their locations to a file, but not to read them back in, and I currently have no incentive to add that functionality.

However, it would be possible to write a program that reads the features from that file and does localization with them.
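A hedged sketch of what such a program could do, assuming the saved file yields feature descriptors plus their 3D map positions (the file format and both helper functions below are my own assumptions, not rgbdslam API): match the current frame's descriptors to the map's by nearest neighbour, then estimate a rigid transform from the matched 3D-3D correspondences via the Kabsch algorithm:

```python
import numpy as np

def match_descriptors(query, map_desc):
    """Brute-force nearest-neighbour matching by Euclidean distance.
    Returns, for each query descriptor, the index of the closest map descriptor."""
    dists = np.linalg.norm(query[:, None, :] - map_desc[None, :, :], axis=2)
    return dists.argmin(axis=1)

def estimate_rigid_transform(src, dst):
    """Kabsch algorithm: find rotation R and translation t with R @ src_i + t ~ dst_i.
    src, dst: (N, 3) arrays of corresponding 3D points, N >= 3, not collinear."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Sign correction so R is a proper rotation, not a reflection.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = (U @ S @ Vt).T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

In practice the matches would be noisy, so you would wrap the transform estimation in RANSAC rather than use all correspondences at once.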

In short: I don't know of a way to do navigation on a TurtleBot based on the Kinect alone.