
using a saved octomap as collision object

asked 2019-03-07 06:31:45 -0500

ChristophBrasseur

Hi there

I am currently testing some motion planners in MoveIt. To test the planners' obstacle avoidance, I would like to insert an environment scanned by a Kinect.

I successfully managed to create an octomap of the environment I want and saved it as a .bt file. My question is how I can add this .bt file to MoveIt so my robot detects it as a collision object.

If I run rosrun octomap_server octomap_server_node my_environment.bt, I am able to visualise the saved octomap in RViz. The main problem I run into is that my robot uses fixed frame "base_link" while the octomap is published in fixed frame "map".

Am I right in concluding this is why the octomap is not detected as a collision object for my robot? If so, can anyone point me in the right direction for using this saved octomap as a collision object?

I am using Ubuntu 16.04 and ROS Kinetic.


1 Answer


answered 2019-03-10 01:47:10 -0500

Hello!

Have you started by going through the perception pipeline tutorial on the MoveIt website? It has an example that sounds very similar to what you are trying to achieve.

MoveIt! Perception Pipeline Tutorial


Comments

I already found this tutorial but somehow couldn't get it to work, I will have another look at it though.

Also, I think this tutorial uses live images from the kinect which is not exactly what I need but it might help me figure some things out.

ChristophBrasseur (2019-03-10 03:13:11 -0500)

What does the TF tree look like? Does the RViz visualization look correct? You may very well need some kind of static transform publisher between the two frames for the octomap to visualize properly. I have to admit I haven't played much with this feature before, so it may be worth experimenting with.
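[Editor's note: a minimal launch-file sketch of such a publisher, serving the saved octomap and connecting "map" to "base_link". The pose values and package path are placeholders, not values from this thread:]

```xml
<launch>
  <!-- static_transform_publisher args: x y z yaw pitch roll parent child period_ms -->
  <node pkg="tf" type="static_transform_publisher" name="map_to_base_link"
        args="0 0 0 0 0 0 map base_link 100" />
  <!-- Serve the saved .bt file so it is published in the "map" frame -->
  <node pkg="octomap_server" type="octomap_server_node" name="octomap_server"
        args="$(find my_pkg)/maps/my_environment.bt" />
</launch>
```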

atomoclast (2019-03-11 10:03:14 -0500)

I have read about TF, but I can't tell you what it looks like since I don't know where to look for it. I also read that it can be used to transform from one frame to another, although I am not sure I interpreted that correctly.

The RViz visualisation also has its issues: as shown in the pictures below, I am looking at my octomap from the bottom, but I was under the impression that this could be fixed with the TF. https://imgur.com/ZNMUBgZ https://imgur.com/l0s5UFy

ChristophBrasseur (2019-03-12 03:47:18 -0500)

Consider which coordinate system/frame your saved octomap is defined in. If it was defined relative to your camera, you will need to place its frame in a similar position (rotation and translation).

fvd (2019-03-13 03:18:27 -0500)

To build on what @fvd said, recall the images from the tutorial. The robot description and the octomap need to be related by a transform that reflects how your camera is actually positioned with respect to your robot in real life (if this is a physical system), or how your simulation is set up.
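[Editor's note: what such a transform does can be sketched in plain Python. Expressing a point from the camera frame in the robot frame is just a quaternion rotation plus a translation; the mounting pose below is made up for illustration:]

```python
import math

def quat_to_matrix(x, y, z, w):
    """Convert a unit quaternion (x, y, z, w) to a 3x3 rotation matrix."""
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ]

def transform_point(point, translation, quaternion):
    """Express a point from the source frame in the target frame."""
    r = quat_to_matrix(*quaternion)
    return tuple(
        sum(r[i][j] * point[j] for j in range(3)) + translation[i]
        for i in range(3)
    )

# Hypothetical example: camera mounted 1 m above base_link, rotated 90 deg about Z.
q = (0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4))
p = transform_point((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), q)
```

If this transform is wrong (or missing), the octomap lands in the wrong place relative to the robot, which is exactly the "viewed from the bottom" symptom described above.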

atomoclast (2019-03-13 08:14:38 -0500)

I managed to get the Perception Pipeline tutorial to work. They use a rosbag file to get the octomap into RViz, which is fine by me. I am also able to replace the Panda robot from the tutorial with my own robot, and everything works fine with their rosbag file for the octomap/pointcloud.

I have now recorded my own pointcloud in a rosbag file to use in this setting, but it isn't being visualized in RViz (with either my robot or the Panda robot).

The steps I took in order to use my own rosbag file are:

  1. In sensors_kinect_pointcloud.yaml of the Panda robot, I changed point_cloud_topic to the topic of my rosbag file

  2. In the perception pipeline package of the MoveIt tutorials, I changed the topics in bag_publisher_maintain_time

Am I missing a step to get my own rosbag to work?
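[Editor's note: for reference, the sensor configuration edited in step 1 has roughly this shape; the topic name below is a placeholder for whatever topic your rosbag publishes on:]

```yaml
sensors:
  - sensor_plugin: occupancy_map_monitor/PointCloudOctomapUpdater
    point_cloud_topic: /camera/depth_registered/points  # replace with your rosbag's topic
    max_range: 5.0
    point_subsample: 1
    padding_offset: 0.1
    padding_scale: 1.0
    filtered_cloud_topic: filtered_cloud
```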

ChristophBrasseur (2019-03-13 09:02:22 -0500)

I haven't done the tutorial or tried replaying octomaps from a rosbag, but in broad strokes, I would try this: 1) Check that RViz listens to the right topic for the PointCloud. 2) Make sure that the frame the octomap is published in exists in the scene. 3) Check whether the robot behaves correctly and the octomap is in the PlanningScene.

fvd (2019-03-13 11:04:44 -0500)

Everything works now. All this time the transform was set up between frames camera_rgb_optical_frame and base_link, but my rosbag was published in frame kinect2_rgb_optical_frame. I changed this in the transform and everything works. Thanks for the help!

ChristophBrasseur (2019-03-14 05:22:49 -0500)
