How should I add semantic segmentation data to my costmap?

asked 2019-09-06 23:20:06 -0600

tsbertalan

I currently have a working drivable-space semantic segmentation (and might add more classes in the future, such as pedestrians), and I'm using message_filters.TimeSynchronizer and depth_image_proc/point_cloud_xyz to overlay the segmentation with the corresponding depth image and create (decimated) point cloud data per class.
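The masking step in this pipeline can be sketched roughly as follows, assuming a single-channel label image aligned with the depth image (the function and parameter names here are illustrative, not part of any ROS API):

```python
import numpy as np

def mask_depth_by_class(depth, seg, class_id):
    """Zero out depth pixels that do not belong to class_id, so that a
    downstream projector such as depth_image_proc/point_cloud_xyz emits
    points only for that class (zero depth is treated as invalid)."""
    # depth: float32 depth image in meters; seg: per-pixel integer labels
    return np.where(seg == class_id, depth, 0.0).astype(depth.dtype)
```

Running one masked projection per class then yields a separate (decimatable) PointCloud2 per semantic class.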

Is there a recommended way to combine this with the standard navigation stack? I'm using the TEB local planner and a grid map from RTAB-Map, but I would like to somehow add the non-drivable space to the resulting costmap, not as lethal obstacles but perhaps as intermediate-cost obstacles with some inflation.

I would prefer not to add these as hard obstacles, because there will occasionally be transient holes in the segmentation, and I don't want those to cause erratic driving. I can do some morphological filtering to cut down on the holes, but the idea of using intermediate cost values to encode these segmentations is appealing, because it leaves open the possibility of driving on, for instance, grass that RTAB-Map still deems drivable.

What is the recommended way to go about integrating such data? (For that matter, how are multiple costmap layers actually combined: with addition or with max? I assume the costmap is an 8-bit unsigned image, so addition would need to be capped.)
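For reference, costmap_2d does not add layers: each layer writes into the master grid in order, and the standard layers use a max-style update (CostmapLayer::updateWithMax) or a plain overwrite, on uint8 cells where 0 is FREE_SPACE, 254 is LETHAL_OBSTACLE, and 255 is NO_INFORMATION. A rough numpy sketch of the max-style update (an approximation of the C++ behavior, not the actual implementation):

```python
import numpy as np

FREE_SPACE = 0
LETHAL_OBSTACLE = 254
NO_INFORMATION = 255

def update_with_max(master, layer):
    """Approximates costmap_2d's updateWithMax: cells where the incoming
    layer has NO_INFORMATION are left alone; otherwise the master cell
    takes the incoming value if it is larger or if the master cell was
    itself NO_INFORMATION."""
    out = master.copy()
    valid = layer != NO_INFORMATION
    overwrite = valid & ((out == NO_INFORMATION) | (layer > out))
    out[overwrite] = layer[overwrite]
    return out
```

Under this scheme an intermediate-cost layer (values between 1 and 253) survives combination as long as no other layer marks the same cell lethal, which is exactly what makes intermediate costs workable for "soft" semantic obstacles.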

I see that an intern at Nvidia created a custom layer for doing something similar, but it assumes a particular fixed perspective transform, and it only uses LETHAL_OBSTACLE, NO_INFORMATION, and FREE_SPACE to fill in the costmap. In contrast, I have corresponding depth data per pixel, so it's easy to use point_cloud_xyz to produce a PointCloud2 topic. The equivalent of the Nvidia approach would therefore be to use a regular obstacle layer to project this point cloud into the costmap, but that would likewise only admit lethal obstacles.

Should I give up on the idea of having intermediate-valued costs?


Comments

Did you end up integrating the semantic point clouds into your costmaps somehow? I saw in your gudrun repo that it seems you simply added them to your local costmap as a SpatioTemporalVoxelLayer?

jorgemia (2021-03-11 05:55:38 -0600)

@jorgemia If I remember correctly, I segmented the RGB part of the RGBD data with a roughly trained U-Net, used that as a mask on the corresponding point cloud to select obstacle points, possibly decimated that cloud, and then created a "virtual laser scan" from it (not from a single height plane, but, I think, from a range of heights). From there it was much like the basic TurtleBot tutorials on dynamic obstacles to get it into the costmap (not the gmapping parts, obviously, since mapping was covered by RTAB-Map with the full 3D data).
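The "virtual laser scan" step described above can be sketched roughly like this, assuming the masked obstacle points have already been collapsed into the sensor's x-y plane (function and parameter names are illustrative; a real node would fill a sensor_msgs/LaserScan from the resulting array):

```python
import numpy as np

def points_to_virtual_scan(points_xy, n_bins=360, max_range=10.0):
    """Collapse masked obstacle points into a LaserScan-like range array:
    one angular bin per beam, keeping the nearest return in each bin."""
    angles = np.arctan2(points_xy[:, 1], points_xy[:, 0])   # [-pi, pi)
    ranges = np.hypot(points_xy[:, 0], points_xy[:, 1])
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    scan = np.full(n_bins, max_range)                       # max_range = "no hit"
    np.minimum.at(scan, bins, ranges)                       # nearest point per bin
    return scan
```

Taking the minimum range per angular bin is what lets points from a whole range of heights behave like a single planar scan for the costmap's obstacle layer.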

tsbertalan (2021-06-02 14:21:02 -0600)