robot_localization tf tree structure

asked 2015-08-31 19:28:28 -0500 by K7

I am having trouble getting robot_localization to work.

I have read through the robot_localization wiki, the tf wiki and the REP-105/103 doc but I still don't understand how to structure the tree for robot_localization.

I have two frame_ids, one from the IMU messages and one from the PTAM messages.

Concepts I don't understand:

  1. Where do the map, odom, and base_link frames come from? Do I generate them with tf as a virtual representation of the real-world sensors and their relative positions and orientations?
  2. If so, how do I link the supplied frames to the tree I create? The docs say map -> odom -> base_link. Do I also link base_link -> imu and odom -> ptam?
  3. Finally, do I supply robot_localization with the odom and base_link frames, or with the imu and ptam frames?

The only way I have ever managed to get robot_localization to produce messages on the /odometry/filtered topic is to use the sensor frames directly, but this doesn't seem like the correct way to do it.

Any example code or step-by-step instructions would be great. The docs have not cleared up my confusion; I just don't understand how to piece it all together.


Comments

@K7 were you able to solve this problem? I'm facing the exact same issue you did but cannot fix it.

It would be great if you could share your solution!

paul3333 (2021-03-04 16:07:24 -0500)

1 Answer


answered 2015-08-31 20:20:08 -0500 by Tom Moore

I think this is really more of a tf question. Let's ignore sensors and such for a second and just focus on two coordinate frames, odom and base_link. When your robot starts out, it is at position (0, 0) in the odom frame. It then drives, say, four meters forward, turns left, and drives six meters. Its position in the odom frame is now (4, 6). This information can be expressed in two ways:

  1. A ROS message (in this case, nav_msgs/Odometry) that contains the robot's world pose (and velocity, but let's ignore that for now). We fill out this message with the robot's current pose and send it off to be consumed.
  2. A transform from the odom frame to the base_link frame. This contains the exact same information, but is sent via tf instead.

robot_localization produces both (1) and (2).
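
Here's a minimal rospy sketch (not robot_localization's actual source) of both representations for the example pose above; the node and topic names are made up for illustration:

    #!/usr/bin/env python
    # Sketch: the same pose published as (1) a nav_msgs/Odometry message
    # and (2) an odom->base_link transform. Frame names follow REP-105.
    import rospy
    import tf
    from geometry_msgs.msg import Quaternion
    from nav_msgs.msg import Odometry

    rospy.init_node('odom_example')
    odom_pub = rospy.Publisher('odom', Odometry, queue_size=10)
    broadcaster = tf.TransformBroadcaster()

    # The pose from the example above: (4, 6), after a 90-degree left turn
    x, y, yaw = 4.0, 6.0, 1.5708
    q = tf.transformations.quaternion_from_euler(0.0, 0.0, yaw)

    rate = rospy.Rate(30)
    while not rospy.is_shutdown():
        now = rospy.Time.now()

        # (1) The nav_msgs/Odometry message
        odom = Odometry()
        odom.header.stamp = now
        odom.header.frame_id = 'odom'      # the world frame
        odom.child_frame_id = 'base_link'  # the body frame
        odom.pose.pose.position.x = x
        odom.pose.pose.position.y = y
        odom.pose.pose.orientation = Quaternion(*q)
        odom_pub.publish(odom)

        # (2) The exact same information, sent as a transform via tf
        broadcaster.sendTransform((x, y, 0.0), q, now, 'base_link', 'odom')
        rate.sleep()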

Now let's say you have an IMU on your robot, but instead of facing forward, we turn it sideways. Additionally, we mount it 0.3 meters to the left of the robot's centroid. We can represent this offset as a transform, and use the static_transform_publisher node (from the tf package) to continuously send out a transform from base_link to your IMU's frame_id (e.g., imu). Now, before we can use that IMU data, we need to transform it into the correct coordinate frame. In this case, robot_localization uses the tf libraries to look up the transform between base_link and imu, "corrects" (transforms) the IMU data into the robot's body (base_link) frame, and passes it on to the filter to be fused.
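
For the mounting described above (0.3 m to the left, which is +y per REP-103, and rotated 90 degrees in yaw), the invocation might look like this; the frame names and the 100 ms publish period are just for illustration (the arguments are x y z yaw pitch roll frame_id child_frame_id period_in_ms):

    rosrun tf static_transform_publisher 0 0.3 0 1.5708 0 0 base_link imu 100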

Now let's say you have a LIDAR on your robot, and you define a base_link->lidar transform using static_transform_publisher. Your LIDAR detects an obstacle 10 meters away at 30 degrees. You have a node somewhere that needs to know where that obstacle is in the odom frame and not the lidar frame, so it uses the tf libraries to look up the lidar->odom transform and applies it to the detection, which yields the location of that obstacle in the odom frame.
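
As a sketch of what such a node might do (assuming the lidar and odom frame names above, and the older tf Python API):

    #!/usr/bin/env python
    # Sketch: expressing a LIDAR detection in the odom frame. Assumes a
    # base_link->lidar static transform is already being published.
    import math
    import rospy
    import tf
    from geometry_msgs.msg import PointStamped

    rospy.init_node('obstacle_in_odom')
    listener = tf.TransformListener()

    # The obstacle: 10 meters away at 30 degrees, in the lidar frame
    obstacle = PointStamped()
    obstacle.header.frame_id = 'lidar'
    obstacle.header.stamp = rospy.Time(0)  # use the latest available transform
    obstacle.point.x = 10.0 * math.cos(math.radians(30.0))
    obstacle.point.y = 10.0 * math.sin(math.radians(30.0))

    listener.waitForTransform('odom', 'lidar', rospy.Time(0), rospy.Duration(4.0))
    obstacle_in_odom = listener.transformPoint('odom', obstacle)
    rospy.loginfo('Obstacle in odom: (%.2f, %.2f)',
                  obstacle_in_odom.point.x, obstacle_in_odom.point.y)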

Note that you can name the frames whatever you want; REP-105 just gives suggestions that are commonly used. For PTAM, see if it lets you specify the frame_id. If it does, you can set it to odom. If not, you can either (a) set the odom_frame and world_frame parameters to ptam, or (b) publish a static identity transform between ptam and odom.
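
For option (b), the identity transform might look like this (frame names assumed; a zero transform just makes the two frames coincident):

    rosrun tf static_transform_publisher 0 0 0 0 0 0 odom ptam 100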


Comments

Yes! Thanks Tom, that has helped a lot. Now to try and put the theory to use...

K7 (2015-08-31 21:55:17 -0500)

Great, I think it's working now. Now I'm just getting some major drift, but that's another story ;) Can't thank you enough for your help; you have saved me hours of frustration.

K7 (2015-08-31 23:54:03 -0500)

No problem! Good luck.

Tom Moore (2015-09-01 05:33:20 -0500)

Please don't use an answer to leave a comment; this isn't a forum.

jayess (2021-03-06 21:48:38 -0500)

Hi Tom. IMU and LIDAR sensors are easier to understand because they are defined only relative to the robot body (base_link), so we can publish the transform between them over /tf.

What about GPS? A GPS sensor also has a transform to the robot body. How should I publish that transform? If I set the GPS sensor's frame_id to gps and publish the tf from base_link to gps, will the robot_localization package think that the GPS data on /fix is not based in the map frame? If I set the GPS sensor's frame_id to map and publish the tf from base_link to map, will that cause a conflict?

How should I configure it?

peter_feng (2023-02-24 03:52:15 -0500)

@peter_feng you should create a new question instead of commenting on a nearly 8-year-old answer. This will give you more visibility.

jayess (2023-02-24 13:47:06 -0500)
