How does robot_localization find the tf between odom and base_link?

asked 2018-01-20 08:50:30 -0500 by ZanaZak

updated 2018-01-26 03:04:21 -0500

Hello

Is there any documentation that describes how the relationship between the odom frame and the base_link frame is calculated?

It seems that some sort of self-calibration is happening there.

Edit 1: In response to Tom's answer

Thank you. I had totally misunderstood robot_localization and robot_pose_ekf. I thought that they could calculate the transformations between sensor frames automatically, which is what I meant by self-calibration.

I was trying to fuse an IMU with a camera (visual odometry). I was setting my IMU frame as base_link and the camera frame as odom, and I thought that robot_localization would calculate the transformation between the two. Now I understand that setting the IMU frame as base_link is wrong. base_link is actually a point at the middle of the robot's chassis. I have to publish a tf between imu_link and the camera frame myself.
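As a sketch of what publishing such a fixed transform might look like with tf2_ros (the frame names and offsets here are placeholders, not my actual mounting geometry):

    #!/usr/bin/env python
    # Minimal sketch: broadcast a fixed base_link -> imu_link transform.
    # Frame names and offsets are placeholders, not a real calibration.
    import rospy
    import tf2_ros
    from geometry_msgs.msg import TransformStamped

    rospy.init_node('imu_static_tf')

    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = 'base_link'   # parent frame on the robot
    t.child_frame_id = 'imu_link'     # sensor frame
    t.transform.translation.x = 0.10  # e.g., IMU mounted 10 cm ahead of base_link
    t.transform.translation.y = 0.0
    t.transform.translation.z = 0.05
    t.transform.rotation.w = 1.0      # identity rotation (same orientation as base_link)

    broadcaster = tf2_ros.StaticTransformBroadcaster()
    broadcaster.sendTransform(t)
    rospy.spin()

(The static_transform_publisher tool, or a URDF that describes the sensor mounting, accomplishes the same thing.)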

Now I can see that the power of robot_localization is multi-rate fusion of multiple sensors with known inter-sensor transformations. This is different from MSF:

http://ethz-asl.github.io/ethzasl_msf...

which estimates the inter-sensor transformations themselves, starting from initial estimates and updating them as part of the state vector.


Comments

Ah, I understand your confusion now. Yes, the EKF doesn't really know anything about the relationships between sensors, as you've stated. It just wants to know, for a given measurement, how to transform it into a frame that it "knows" about, and you provide those transforms.

Tom Moore ( 2018-01-26 03:35:20 -0500 )

1 Answer


answered 2018-01-26 02:34:31 -0500

Tom Moore

I'm not sure what you mean by calibration in this case, but I can tell you what the odom->base_link transform is.

The tf library exists to define coordinate frames and the relationships between them. REP-105 defines some of the principal coordinate frames in ROS, but we'll focus on base_link and odom.

The base_link frame is a coordinate frame that is firmly affixed to the robot. In other words, the +X axis extends straight out of the front of the robot, the +Y axis extends directly to the robot's left, and the +Z axis extends straight up in the air. No matter where the robot goes, the base_link frame is attached to the robot's origin, and so those axes remain in the same location w.r.t. the robot. Just imagine that someone 3D printed a small coordinate frame and glued it to your robot.

Now, as your robot moves around in the world, we have to know what its pose is in that world. That "world" is the odom coordinate frame. When the robot starts out, it's typically at position (0, 0) with a rotation/yaw/heading of 0. That means your pose in the odom frame is (0, 0) with a yaw of 0. If you drive forward 1 meter, then your pose in the odom frame is (1, 0) with a yaw of 0. If you then turn left pi/2 radians, your pose in the odom frame is (1, 0) with a yaw of pi/2. If you drive forward another meter, your robot's odom frame pose is (1, 1) with a yaw of pi/2.
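To make that arithmetic concrete, here is the same sequence as plain 2D pose composition (just the geometry, not anything r_l actually runs):

    import numpy as np

    def step(pose, forward, dyaw):
        # Move `forward` metres along the current heading, then rotate by `dyaw`.
        x, y, yaw = pose
        x += forward * np.cos(yaw)
        y += forward * np.sin(yaw)
        return (x, y, yaw + dyaw)

    pose = (0.0, 0.0, 0.0)             # start at the odom origin
    pose = step(pose, 1.0, 0.0)        # drive forward 1 m   -> (1, 0), yaw 0
    pose = step(pose, 0.0, np.pi / 2)  # turn left pi/2      -> (1, 0), yaw pi/2
    pose = step(pose, 1.0, 0.0)        # drive forward 1 m   -> (1, 1), yaw pi/2
    print(pose)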

The key point is this: your robot's pose in the world is equivalent to the odom->base_link transform (really, you could say it's base_link->odom, but that's not really important for this discussion). In other words, if you want to transform a point from the base_link coordinate frame to the odom (world) coordinate frame, you need that transform, and that transform is just your robot's odom frame pose.
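As a small sketch of what using that transform looks like in 2D: take a point 1 meter straight in front of the robot, i.e., (1, 0) in base_link, with the robot sitting at (1, 1) in odom with a yaw of pi/2 (as in the example above):

    import numpy as np

    # Robot's pose in the odom frame (the odom->base_link transform, 2D version)
    x_r, y_r, yaw = 1.0, 1.0, np.pi / 2

    # A point expressed in base_link: 1 m straight ahead of the robot
    p_base = np.array([1.0, 0.0])

    # Rotate into the odom frame, then translate by the robot's position
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])
    p_odom = R.dot(p_base) + np.array([x_r, y_r])
    print(p_odom)  # -> approximately (1, 2)

In practice you would let tf (or tf2) do that lookup and multiplication for you rather than writing it out by hand.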

All the state estimation nodes in r_l are doing is computing the robot's pose in the world frame (e.g., map or odom) and then doing two things:

  1. Sending out a nav_msgs/Odometry message with the robot's pose, velocity, and respective covariances
  2. Sending out a transform from the world_frame parameter (e.g., odom) to the base_link_frame parameter (e.g., base_link). Note that this data is the same as the data in (1), but just sent out as a tf message (a small sketch of checking both outputs follows this list).
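If it helps, here is a rough sketch of watching both outputs side by side and confirming they carry the same pose. The topic name /odometry/filtered and the frame names are the usual defaults, but check them against your own configuration:

    #!/usr/bin/env python
    # Sketch: compare the EKF's Odometry output with its odom->base_link transform.
    import rospy
    import tf2_ros
    from nav_msgs.msg import Odometry

    def odom_cb(msg):
        p = msg.pose.pose.position
        rospy.loginfo('Odometry pose:  (%.2f, %.2f)', p.x, p.y)

    rospy.init_node('check_ekf_outputs')
    buf = tf2_ros.Buffer()
    listener = tf2_ros.TransformListener(buf)
    rospy.Subscriber('/odometry/filtered', Odometry, odom_cb)

    rate = rospy.Rate(1.0)
    while not rospy.is_shutdown():
        try:
            # Should report the same pose as the Odometry message above.
            t = buf.lookup_transform('odom', 'base_link', rospy.Time(0))
            rospy.loginfo('tf translation: (%.2f, %.2f)',
                          t.transform.translation.x, t.transform.translation.y)
        except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
                tf2_ros.ExtrapolationException):
            pass
        rate.sleep()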

The way it computes those things is via sensor fusion. If you have wheel encoders that say you are driving at 1.0 m/s forward, then the EKF integrates those velocities over time, so after 10 seconds, it says you went 10 meters straight forward. If you also have an IMU that says you are turning with some velocity, it integrates those at the same time, so if your wheel ... (more)
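As a very rough sketch of that integration (just the idea, ignoring covariances and the full 3D state the actual filter tracks):

    import numpy as np

    dt = 0.1                    # integration time step in seconds
    x, y, yaw = 0.0, 0.0, 0.0   # start at the odom frame origin

    vx = 1.0                    # wheel encoders: 1.0 m/s forward
    wz = 0.0                    # IMU yaw rate in rad/s (0 here: driving straight)

    for _ in range(int(10.0 / dt)):      # integrate for 10 seconds
        x += vx * np.cos(yaw) * dt       # project body-frame velocity into the world frame
        y += vx * np.sin(yaw) * dt
        yaw += wz * dt                   # accumulate heading from the yaw rate

    print(x, y, yaw)  # -> roughly (10, 0, 0): 10 m straight forward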

