RGBDSLAM & Octomap for continuous realtime mapping

asked 2012-05-13 12:22:58 -0600 by jwrobbo

updated 2014-01-28 17:12:18 -0600 by ngrennan

I am trying to use RGBDSLAM for 3D navigation, using its tf output for visual odometry and its point cloud for obstacle avoidance. However, if it is run for an extended period of time, a huge amount of memory is taken up, and the point cloud has to be sent at intervals rather than being immediately available. Octomap seems ideal for the mapping side, but I'm not entirely sure of the best way to build it:

Directly feeding the point cloud from the Kinect to octomap and using the tf values from RGBDSLAM, as suggested in Felix Endres's answer to this question: http://answers.ros.org/question/33175/replacing-gmapping-with-rgbdslam, has the advantage that the point clouds don't need to be saved or periodically sent. However, errors accumulate rapidly because, as far as I can tell, no optimisation or loop closure is applied. Is there a way I could produce a cleaner octomap with this method? I can't work out whether I would be better off increasing the settings to give more accurate camera pose estimates, or decreasing them to give more frequent, and therefore more up-to-date, ones. Also, setting "save point clouds" to false doesn't seem to stop them being accumulated. Am I missing something?

It would be great to use the output of RGBDSLAM directly, as it gives a clean, accurate model. However, the cloud gets bigger and bigger (even when viewing the same scene, due to loop-closure artefacts and extra detail), so on top of the memory constraints it takes longer and longer to send the model. Increasing the model send rate in graph_manager.cpp seems to have a big effect, but it still becomes impractical after a while. Would voxel filtering keep the cloud size down, and what sort of effect would it have on performance?

Also, it seems that the octomap can be built up incrementally, so the whole cloud shouldn't need to be sent each time. Is there any way to use only the latest few frames? If filtering could keep the cloud at a manageable size, could just the latest few frames be sent to octomap_server for speed, while a larger, sparse cloud is kept to be randomly sampled for position estimation by RGBDSLAM? Failing that, rather than periodically clearing the whole cloud, could the oldest points be removed once it reaches a certain size, to give a rolling-window effect?

Happy to dig through the code but any pointers in the right direction would be much appreciated! Thanks.


1 Answer


answered 2012-05-29 07:26:13 -0600

Hi, in src/nodes.cpp there is a function void Node::clearPointCloud(). You could call it from void GraphManager::sendAllCloudsImpl() in src/graph_manager.cpp after each node's cloud has been sent. It could look like this:

    graph_[i]->publish("/openni_rgb_optical_frame", now, batch_cloud_pub_);
    graph_[i]->clearPointCloud(); //Add this line

Let me know whether that works for you.

