Deeper control over robot self-filtering
Good day,
For an application we are developing, we need the self-filtering feature from the MoveIt package, but there are several aspects we need to either extend or be able to control.
First, MoveIt! currently only exposes self-filtering through the sensor_plugin occupancy_map_monitor/PointCloudOctomapUpdater, but in our case we only want the self-filtering function (collision avoidance is handled differently).
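For reference, this is roughly how we configure that updater today in a `sensors_3d.yaml` (topic names are placeholders for our setup); note that `filtered_cloud_topic` does publish the self-filtered cloud, but only with x, y, z channels and tied to the octomap pipeline:

```yaml
sensors:
  - sensor_plugin: occupancy_map_monitor/PointCloudOctomapUpdater
    point_cloud_topic: /camera/depth/points   # placeholder input topic
    max_range: 5.0
    point_subsample: 1
    padding_offset: 0.1      # extra padding (m) around robot links for the self-filter
    padding_scale: 1.0
    filtered_cloud_topic: /filtered_cloud    # self-filtered output (x, y, z only)
```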
Furthermore, the output of the self-filter keeps only the x, y, z channels of the input cloud. In our case, normal and curvature information are important for the next processing stage, which we would like to run on the self-filtered cloud; that means we need either a cloud with the original channels minus the robot points, or a point mask indicating which points belong to the robot.
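To illustrate the mask-based output we are after, here is a minimal sketch (the point type and function names are hypothetical, not MoveIt API): given the per-point mask a self-filter would produce, we keep every non-robot point with all of its original channels intact. In practice this would be something like `pcl::ExtractIndices` over a `pcl::PointNormal` cloud.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical point type carrying the extra channels we want preserved.
struct PointNC {
    float x, y, z;
    float nx, ny, nz;   // surface normal
    float curvature;
};

// robotMask[i] == true means point i was classified as part of the robot.
// Returns the cloud minus robot points, with all channels untouched.
std::vector<PointNC> applySelfFilterMask(const std::vector<PointNC>& cloud,
                                         const std::vector<bool>& robotMask) {
    std::vector<PointNC> out;
    out.reserve(cloud.size());
    for (std::size_t i = 0; i < cloud.size(); ++i) {
        if (!robotMask[i]) {
            out.push_back(cloud[i]);  // copy the full point, not just x,y,z
        }
    }
    return out;
}
```

With a mask like this exposed, downstream stages could also keep the robot points if they ever needed them, rather than receiving an already-reduced cloud.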
I have seen the robot_self_filter package and its documentation, but it looks rather outdated, and since the current MoveIt code does not use it, I assume it is not the package to use.
My questions are: which packages and scripts does MoveIt use for its self-filtering (I understand this feature is not stable, or still in beta), and is there any documentation covering self-filtering on its own?
I am interested in updating this functionality to add the features mentioned above, perhaps some more, and to document it.