
Regarding the integration of semantic information, here are my thoughts:

The easiest way (though maybe not the most efficient) would be to have your recognition software subscribe to the RGB image and point cloud, run recognition, and recolor the point cloud, e.g. red points for class 1, green for class 2, black points for unclassified. Then send the RGB image and the recolored cloud to the topics rgbdslam listens to (you can change these via parameters). The point cloud colors will then be integrated into the map, where they serve as labels. To make this more efficient, you could integrate your software into rgbdslam itself and call it from the callback methods in openni_listener.cpp (e.g. kinect_callback, noCloudCallback).
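The recoloring step is straightforward. Here is a minimal self-contained sketch of the idea; the `Point` struct, `Label` enum and `recolorByLabel` function are illustrative stand-ins, not rgbdslam or PCL API (a real implementation would operate on `pcl::PointXYZRGB`, which packs the color into a single `rgb` float field):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Minimal stand-in for a colored 3D point. PCL's PointXYZRGB packs the
// color channels into one float field; here they are explicit for clarity.
struct Point {
    float x, y, z;
    uint8_t r, g, b;
};

// Hypothetical class labels produced by the recognition step.
enum class Label { Unclassified, Class1, Class2 };

// Recolor each point by its label, as suggested above:
// red = class 1, green = class 2, black = unclassified.
void recolorByLabel(std::vector<Point>& cloud, const std::vector<Label>& labels) {
    for (std::size_t i = 0; i < cloud.size() && i < labels.size(); ++i) {
        Point& p = cloud[i];
        switch (labels[i]) {
            case Label::Class1: p.r = 255; p.g = 0;   p.b = 0; break;
            case Label::Class2: p.r = 0;   p.g = 255; p.b = 0; break;
            default:            p.r = 0;   p.g = 0;   p.b = 0; break;
        }
    }
}
```

After recoloring, the cloud is republished unchanged on whatever topic rgbdslam is configured to subscribe to.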

A cleaner way is to use a point cloud type that contains semantic information. For that you would need to redefine the point cloud type in rgbdslam (src/parameter_server.cpp) to one that carries your semantic information. Note that if you omit the color (i.e. the point.rgb field), glviewer.cpp will no longer compile; those errors should be easy to fix, though.
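Such a point type might look like the sketch below. The struct name and `label` field are my own invention; a real PCL point type would additionally need to be registered with `POINT_CLOUD_REGISTER_POINT_STRUCT` so PCL's templated code can see the fields. Keeping the packed `rgb` float field avoids the glviewer.cpp compile errors mentioned above:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Sketch of a point type carrying both color and a semantic label.
// Illustrative only: a real PCL point type needs registration macros
// and padding/alignment matching PCL's conventions.
struct PointXYZRGBLabel {
    float x, y, z;
    float rgb;       // keep this field so code accessing point.rgb still compiles
    uint32_t label;  // semantic class id from your recognizer
};

// Pack 8-bit color channels into a single float, the way PCL stores rgb.
inline float packRGB(uint8_t r, uint8_t g, uint8_t b) {
    uint32_t v = (uint32_t(r) << 16) | (uint32_t(g) << 8) | uint32_t(b);
    float f;
    std::memcpy(&f, &v, sizeof(f));
    return f;
}

// Inverse of packRGB, for reading the channels back out.
inline void unpackRGB(float f, uint8_t& r, uint8_t& g, uint8_t& b) {
    uint32_t v;
    std::memcpy(&v, &f, sizeof(v));
    r = (v >> 16) & 0xFF;
    g = (v >> 8) & 0xFF;
    b = v & 0xFF;
}
```

With a type like this, the color stays available for rendering while the label travels with each point through the pipeline.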

Then you need to adapt the octomap server to use a voxel leaf that stores semantic information. This has been done before, but I don't know the details.
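Since I don't know how that existing work stores the labels, here is only a guess at the shape of such a leaf: alongside the occupancy value, keep a per-class observation count and report the majority label. In octomap you would realize this by subclassing the tree node; the `SemanticLeaf` name, the fixed class count, and majority voting are all assumptions of this sketch:

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Number of semantic classes; three to match the example above
// (unclassified, class 1, class 2). Purely illustrative.
constexpr std::size_t kNumClasses = 3;

// Hypothetical semantic voxel leaf: occupancy plus per-class vote counts.
// A real octomap extension would subclass octomap::OcTreeNode instead.
struct SemanticLeaf {
    float logOdds = 0.0f;                        // occupancy, as in octomap
    std::array<unsigned, kNumClasses> counts{};  // observations per class

    // Record one classified observation falling into this voxel.
    void observe(std::size_t classId) {
        if (classId < kNumClasses) ++counts[classId];
    }

    // The label shown in the map: the most frequently observed class.
    std::size_t majorityLabel() const {
        std::size_t best = 0;
        for (std::size_t c = 1; c < kNumClasses; ++c)
            if (counts[c] > counts[best]) best = c;
        return best;
    }
};
```

Majority voting makes the voxel label robust to occasional misclassifications from the recognition step, at the cost of a few counters per leaf.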