
Dealing with robot as obstacle in octomap

asked 2019-05-07 03:48:31 -0500 by aPonza

updated 2019-05-08 04:09:39 -0500

My goal is to point a camera (eye-on-base style) at the pick&place scene and extract the table and the regions where placing happens (to avoid coding the scene statically via moveit::CollisionObjects). Both seem feasible via the Point Cloud Library.

I'm however having issues with the robot appearing in the octomap as an obstacle to itself: either the calibration isn't perfect (I used easy_handeye while waiting for the MoveIt-integrated one) or the camera isn't precise enough. I tried increasing the octomap resolution, thinking the smaller voxels might end up fully inside the robot arm, but it either doesn't help (the octomap still contains extra voxels which impede arm movement) or blocks the system completely (too much load on the CPU).

I'm thinking I'd like to:

  • map the pick&place station with the robot in motion and, after a set amount of time, keep only the obstacles which never moved; this would leave me with a fixed occupancy map to load, and I could leave the eye-on-base camera to just detect people passing by and risking getting hurt, e.g. via a CNN;
  • mount an eye-on-hand camera and do something similar to the very first part of this video;
  • filter the robot out of the point cloud (seems the most unfeasible option).
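The first bullet point can be sketched independently of any mapping framework: accumulate occupancy snapshots while the arm moves, then keep only the voxels that were occupied in (nearly) every snapshot. This is only an illustration of the idea, not OctoMap or rtabmap API; the voxel keys, snapshot source, and the 0.95 threshold are placeholders.

```python
# Sketch: keep only "static" voxels, i.e. those occupied in almost every
# snapshot taken while the robot arm sweeps through its workspace.
from collections import Counter

def static_voxels(snapshots, min_fraction=0.95):
    """snapshots: iterable of sets of voxel keys, e.g. (ix, iy, iz) index tuples.
    Returns the voxels occupied in at least min_fraction of the snapshots."""
    snapshots = list(snapshots)
    counts = Counter()
    for occupied in snapshots:
        counts.update(occupied)
    needed = min_fraction * len(snapshots)
    return {voxel for voxel, n in counts.items() if n >= needed}

# Toy example: a table voxel is always occupied, an arm voxel only sometimes,
# so only the table voxel survives the filter.
table, arm = (0, 0, 0), (1, 2, 3)
frames = [{table, arm}, {table}, {table}, {table, arm}]
print(static_voxels(frames))
```

The surviving set could then be baked into a fixed occupancy map, leaving the live camera free for people detection.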

I cannot find other ways to deal with this. In the meantime I'm investigating rtabmap regarding the first bullet point (even though it seems more suited for mobile robots, I think it could work) and waiting on a component manufacturing process in regards to the second bullet point.

Is there someone from the community who has a lead on how this has been done before?




For rtabmap, you could use only the rtabmap node, setting the odom_frame_id parameter to "base_link" (your robot's base frame) and the RGBD/AngularUpdate and RGBD/LinearUpdate parameters to 0, so that new data is added even if the robot doesn't move. You can also set Kp/MaxFeatures to -1 to disable loop closure detection. Make sure the TF between the camera and the base is correctly updated.
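A minimal launch sketch of the parameters listed above; the camera topic names are placeholders to adapt to your driver:

```xml
<launch>
  <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap" output="screen">
    <!-- Use the robot base frame directly instead of visual odometry -->
    <param name="odom_frame_id" value="base_link"/>
    <!-- Add new data even when the robot does not move -->
    <param name="RGBD/AngularUpdate" value="0"/>
    <param name="RGBD/LinearUpdate" value="0"/>
    <!-- Disable loop closure detection -->
    <param name="Kp/MaxFeatures" value="-1"/>
    <remap from="rgb/image" to="/camera/rgb/image_rect_color"/>
    <remap from="depth/image" to="/camera/depth_registered/image_raw"/>
    <remap from="rgb/camera_info" to="/camera/rgb/camera_info"/>
  </node>
</launch>
```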

matlabbe ( 2019-05-15 14:01:56 -0500 )

Hi, @aPonza I am also interested in this. What did you end up using/doing? Thanks

ryanc ( 2020-06-10 14:43:37 -0500 )

I've had other (business-mandated) priorities in the meantime and still haven't touched this; I'm not planning to for at least 3 more months.

aPonza ( 2020-06-27 03:36:11 -0500 )

1 Answer


answered 2021-08-04 05:05:29 -0500 by Tuebel

MoveIt does support self-filtering, but you have to make sure that it is set up correctly. First, make sure that the transforms can be read from the cache in time; see this question.

Second, if you are using a depth camera, have a look at sensors.yaml and tweak its values, as they are used by the self-filtering algorithm. My suggestions are:

  • near_clipping_plane_distance: shorter than the minimum range in the camera datasheet, otherwise the filter mask will be clipped and you will see OctoMap cubes in the middle of your robot
  • far_clipping_plane_distance: farther than the maximum range in the camera datasheet
  • padding_scale: 4.0 from the MoveIt tutorial works well for me
  • padding_offset: 0.03 from the MoveIt tutorial works well for me
  • shadow_threshold: 0.2; I don't think it is used for filtering, but it could be helpful for debugging the filtered_label topic
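Put together, a sensors.yaml along those lines might look like the sketch below; the image topic and the clipping distances are placeholders to adapt to your camera, while the padding values are the MoveIt tutorial ones mentioned above:

```yaml
sensors:
  - sensor_plugin: occupancy_map_monitor/DepthImageOctomapUpdater
    image_topic: /camera/depth_registered/image_raw
    queue_size: 5
    near_clipping_plane_distance: 0.3
    far_clipping_plane_distance: 5.0
    shadow_threshold: 0.2
    padding_scale: 4.0
    padding_offset: 0.03
    filtered_cloud_topic: filtered_cloud
```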

Generally, you can enable debug topics by setting debug=true in move_group.launch. Then visualize the filtered_label and model_depth topics in RViz to tune your padding values.



