Robotics StackExchange | Archived questions

Dealing with robot as obstacle in octomap

My goal is to point a camera (eye-on-base style) at the pick-and-place scene and extract the table and the regions where placing happens (to avoid coding the scene statically via moveit::CollisionObjects). Both seem feasible with the Point Cloud Library (PCL).
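Something along the lines of this minimal PCL sketch (plain RANSAC plane segmentation; the distance threshold is a placeholder I'd still have to tune) is what I have in mind for the table:

    #include <pcl/ModelCoefficients.h>
    #include <pcl/PointIndices.h>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/segmentation/sac_segmentation.h>
    #include <pcl/filters/extract_indices.h>

    // Segment the dominant plane (the table) out of a captured cloud.
    pcl::PointCloud<pcl::PointXYZ>::Ptr
    extractTable(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud)
    {
      pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
      pcl::PointIndices::Ptr inliers(new pcl::PointIndices);

      pcl::SACSegmentation<pcl::PointXYZ> seg;
      seg.setOptimizeCoefficients(true);
      seg.setModelType(pcl::SACMODEL_PLANE);
      seg.setMethodType(pcl::SAC_RANSAC);
      seg.setDistanceThreshold(0.01);  // 1 cm tolerance, to be tuned for the camera noise
      seg.setInputCloud(cloud);
      seg.segment(*inliers, *coefficients);

      // Keep only the plane inliers, i.e. the table surface
      pcl::PointCloud<pcl::PointXYZ>::Ptr table(new pcl::PointCloud<pcl::PointXYZ>);
      pcl::ExtractIndices<pcl::PointXYZ> extract;
      extract.setInputCloud(cloud);
      extract.setIndices(inliers);
      extract.setNegative(false);
      extract.filter(*table);
      return table;
    }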

However, I'm having issues with the robot showing up in the octomap as an obstacle to itself: either the calibration isn't perfect (I used easy_handeye while waiting for the MoveIt-integrated assistant) or the camera isn't precise enough. I tried increasing the octomap resolution, thinking the smaller voxels might fall entirely within the robot arm, but it either doesn't help (the octomap still contains extra voxels that impede arm movement) or blocks the system completely (too much load on the CPU).

I'm thinking I'd like to:

  • find a way to exclude the robot's own body from the octomap, and
  • improve the camera calibration, for which a custom mounting component is needed.

I cannot find other ways to deal with this. In the meantime I'm investigating rtabmap for the first bullet point (even though it seems more suited to mobile robots, I think it could work) and waiting on a component manufacturing process for the second.

Does anyone in the community have a lead on how this has been done before?

Asked by aPonza on 2019-05-07 03:48:31 UTC

Comments

For rtabmap, you could use only the rtabmap node, setting the odom_frame_id parameter to "base_link" (the base frame of your robot) and the RGBD/AngularUpdate and RGBD/LinearUpdate parameters to 0 so that new data is added even if the robot doesn't move. You can also set Kp/MaxFeatures to -1 to disable loop closure detection. Make sure the TF between the camera and the base is correctly updated.
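For reference, a minimal launch sketch with these settings could look like the following (the camera topic remappings are placeholders for your setup):

    <launch>
      <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap" output="screen">
        <!-- Use the robot base as the fixed odometry frame: no visual odometry needed -->
        <param name="frame_id"      value="base_link"/>
        <param name="odom_frame_id" value="base_link"/>
        <param name="subscribe_depth" value="true"/>
        <!-- Add new data even if the robot doesn't move -->
        <param name="RGBD/AngularUpdate" value="0"/>
        <param name="RGBD/LinearUpdate"  value="0"/>
        <!-- Disable loop closure detection -->
        <param name="Kp/MaxFeatures" value="-1"/>
        <!-- Remap to your camera topics (placeholders) -->
        <remap from="rgb/image"       to="/camera/rgb/image_rect_color"/>
        <remap from="depth/image"     to="/camera/depth_registered/image_raw"/>
        <remap from="rgb/camera_info" to="/camera/rgb/camera_info"/>
      </node>
    </launch>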

Commented by matlabbe on 2019-05-15 14:01:56 UTC

Hi @aPonza, I am also interested in this. What did you end up using/doing? Thanks

Commented by ryanc on 2020-06-10 14:43:37 UTC

I've had other (business-mandated) priorities in the meantime and still haven't touched this; I'm not planning to for at least three more months.

Commented by aPonza on 2020-06-27 03:36:11 UTC

Answers

MoveIt does support self-filtering, but you have to make sure that it is set up correctly. First, make sure that the transforms can be read from the cache in time (see this question).

Second, if you are using a depth camera, have a look at sensors.yaml. Tweak the following values, as they are used by the self-filtering algorithm (a sketch of a full file follows the list). My suggestions are:

  • near_clipping_plane_distance: set it shorter than the camera datasheet value, otherwise the filter mask will be clipped and you will see OctoMap cubes in the middle of your robot
  • far_clipping_plane_distance: set it farther than the camera datasheet value
  • padding_scale: 4.0 from the MoveIt tutorial works well for me
  • padding_offset: 0.03 from the MoveIt tutorial works well for me
  • shadow_threshold: 0.2; I don't think it is used for filtering, but it could be helpful for debugging the filtered_label topic
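Putting it together, a sensors.yaml sketch along the lines of the MoveIt Perception Pipeline tutorial (the depth topic and clipping distances are placeholders for your camera) might look like:

    sensors:
      - sensor_plugin: occupancy_map_monitor/DepthImageOctomapUpdater
        image_topic: /camera/depth_registered/image_raw  # your depth topic
        queue_size: 5
        near_clipping_plane_distance: 0.3                # shorter than the datasheet minimum
        far_clipping_plane_distance: 5.0                 # farther than the datasheet maximum
        shadow_threshold: 0.2
        padding_scale: 4.0
        padding_offset: 0.03
        max_update_rate: 1.0
        filtered_cloud_topic: filtered_cloud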

Generally, you can enable debug topics by setting debug=true in move_group.launch. Then visualize the filtered_label and model_depth topics in RViz to tune your padding values.
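For example (my_robot_moveit_config is a placeholder for your own MoveIt configuration package, and the topic namespace may differ on your setup):

    roslaunch my_robot_moveit_config move_group.launch debug:=true

    # Quick look at the self-filter mask without RViz
    rosrun image_view image_view image:=/move_group/filtered_label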

Answered by Tuebel on 2021-08-04 05:05:29 UTC
