
Sensor Frame offset for robot_localization - Absolute odometer offset

asked 2016-07-12 06:30:29 -0500

NZNobody

updated 2016-07-25 17:56:39 -0500

EDIT - TL;DR: If I have a single source of absolute odometry providing x, y, z, roll, pitch and yaw (visual odometry), how do I account for the camera's offset from base_link? Simply setting up a base_link -> zed_frame transform (zed is my camera) doesn't work, because the odometry data is published in the odom frame. E.g. my camera is mounted at the center of my robot at a 45 degree pitch incline. If I move the robot in the x direction by 10 units, my odometry reports an offset of (10·sin(45°), 0, 10·cos(45°)), which gets fed to the filter and misplaces my odom -> base_link transform.
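To make the TL;DR concrete: the misreported offset is just the forward motion resolved into the camera-aligned axes. A plain Python sketch (my own illustration, not ROS code; which component lands on which axis depends on frame conventions, and at 45° both components are equal anyway):

```python
import math

def reported_offset(dx, pitch_deg):
    """Forward motion dx, as seen through axes rotated by pitch_deg:
    the single x translation smears across the x and z axes."""
    p = math.radians(pitch_deg)
    return (dx * math.sin(p), 0.0, dx * math.cos(p))
```

Driving 10 units forward with a 45 degree pitch gives roughly (7.07, 0, 7.07) instead of (10, 0, 0), which is exactly the misplacement fed to the filter.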

I have tried having an intermediate frame, zed_initial_frame. In this scenario my visual odometry is published in the zed_initial_frame, and a static tf with a rotation of 45 degrees pitch links odom to zed_initial_frame. Now moving the robot 10 units in x results in the odometry pose being (10, 0, 0); HOWEVER, the base_link frame now gets reported in line with zed_initial_frame, i.e. at 45 degrees pitch. This is because at time t = 0 the odometry reports a zero (Eigen identity) transform, which is then transformed into the odom frame, so the initial odometry reading shows a 45 degree pitch. This is where I possibly need to point the finger at robot_localization, because even setting the odom0_relative parameter to true doesn't fix this. Here is the potentially offending source code; my theory is that with the variable set to true, the first measurement is zeroed in section 7g and only then transformed in 7h, resulting in a non-zero initial measurement after all, due to the rotation.
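My theory about sections 7g/7h can be caricatured in one dimension (plain Python; the single-angle model, `to_odom` and `STATIC_PITCH` are my own simplification of the quaternion maths, not actual r_l code):

```python
import math

# Static transform odom -> zed_initial_frame: a pure 45 degree pitch.
STATIC_PITCH = math.radians(45)

def to_odom(measured_pitch, first_pitch):
    # Section 7g: relative mode subtracts the first measurement,
    # so the very first pose becomes zero...
    relative = measured_pitch - first_pitch
    # Section 7h: ...but the static rotation into odom is applied
    # afterwards, re-introducing the pitch.
    return relative + STATIC_PITCH
```

At t = 0 both arguments are the first measurement, yet the result is still a 45 degree pitch, which is the behaviour I am seeing.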

Original Question: Hello, I have been stuck on this for days and am sure I am fundamentally missing something. I have read:

Yet I am unable to produce the results I want. My scenario:

Physically, my setup is the ZED camera mounted on a tripod at a 45 degree downward inclination (pitch = 45°), so that it can see the floor in front of the robot. It is roughly 1.1 m off the ground.

What I am trying to achieve: get everything to line up in the visualizer in terms of transforms and frames. The ZED outputs all data in the frame zed_tracked_frame, except for the zed_tracked_frame itself, which I have ...



I think I have found the cause of the issue, but am unsure who (which package) should be responsible for correcting it. I will edit my question accordingly.

If my absolute odometry source is offset from my base_link, how, and by which package, should this offset be accounted for?

NZNobody ( 2016-07-12 19:41:42 -0500 )

2 Answers


answered 2016-07-13 02:55:25 -0500

Tom Moore

updated 2016-08-04 03:41:12 -0500

It appears your stereo camera doesn't output velocities; if it did, that would be one solution, as you could then simply use a static transform from base_link.

The short answer is that the EKF and UKF in r_l don't do a great job of handling sensors that are measuring world-frame data from an offset position. I've run into this same issue with UUVs and depth sensors.

In any case, if you define a transform directly from odom, it can't be static, as it will be dependent on your pose in the odom frame. Relative mode won't help you either, as that's simply meant to remove the first pose measurement from a given sensor so that it effectively starts at 0. Differential mode converts subsequent poses to velocities, but then doesn't (and I'm not convinced it should) apply the transform to base_link afterwards.

So there are three options:

  1. Write a quick node that computes the odom->zed transform dynamically, and run everything else as-is
  2. Modify the camera driver to output velocities in its odometry message
  3. Add some kind of parameter to r_l for pose sensors that tells it to dynamically adjust the "static" transform from odom based on the vehicle's current pose.
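Option 1 boils down to republishing odom -> zed as the composition of the filter's odom -> base_link with the static base_link -> zed mounting offset. A minimal sketch of the maths only (plain numpy; the function names are hypothetical, and a real node would wrap this in a tf broadcaster):

```python
import numpy as np

def make_tf(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation matrix
    and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def odom_to_zed(odom_T_base, base_T_zed):
    """What the helper node would broadcast at each filter update:
    the dynamic odom -> base_link composed with the static
    base_link -> zed mounting offset."""
    return odom_T_base @ base_T_zed
```

Because odom -> base_link changes every update, the resulting odom -> zed transform can't be static, which is why a small dedicated node is needed.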

EDIT in response to comments:

I think there are a couple of unknowns for me. First, you said your camera is mounted at -45 degrees pitch, which would point it at the ceiling (positive pitch is down in an ENU frame). Second, even if the camera is pointed down at 45 degrees, does that make a difference in the reported odometry? For example, if you aim the camera down 45 degrees toward the floor and drive straight forward for 10 meters, what do you get for reported odometry? Does it say you are at (10, 0, 0) or (7.07, 0, 7.07)? I think I will know how to address this once I have that information.

EDIT in response to updates:

OK, so the raw output from the camera (not passed through any transform), when you drive forward 10 meters, reports that you are at ~(7.07, 0, 7.07). There are two things to consider:

  1. Imagine your camera wasn't inclined, so it was facing straight forward. Then, when driving straight forward or backward, the pose data from the camera would be correct (though instead of going from X = 0 to N, the robot's X position would be 0 - linear_offset to N - linear_offset). However, when turning, the camera would register linear motion as a result of its offset and the rotational motion. In the most extreme case, the robot is turning in place, and the camera is registering its pose change along the circle with radius r, which is equal to the X/Y distance from the center of rotation.

    Assuming you have defined a static transform from base_link->zed_frame, since the camera is measuring pose data, the standard method of applying the transform from the zed_frame ...
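The turning-in-place case above is easy to quantify: the camera's pose change is the chord of the circle of radius r that it traces around the centre of rotation. A quick illustrative sketch (plain Python, my own helper name):

```python
import math

def camera_chord(r, theta):
    """Translation the offset camera registers when the robot turns
    in place by theta radians: the chord of the circle of radius r
    that the camera traces around the centre of rotation."""
    return 2.0 * r * math.sin(theta / 2.0)
```

So a camera mounted half a metre from the centre of rotation registers a full metre of translation during a 180 degree turn in place, even though the robot hasn't translated at all.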



  • Option 1: Sorry, I am a little confused about what this would achieve. If the odometry data is reported in the odom frame, base_link will still be offset, will it not? Since r_l computes odom -> base_link. Unless you mean report the odometry data in the dynamically computed zed frame?
NZNobody ( 2016-07-13 03:36:02 -0500 )
  • Option 2: This is an option, but wouldn't this decrease accuracy and cause drift on a sensor that is fairly drift-robust? The camera performs loop-closure detection, so if I return to my starting position it is very accurate at outputting the exact same location again. Or would it not be affected?
NZNobody ( 2016-07-13 03:38:45 -0500 )
  • Option 3: If I understand this correctly, this parameter could effectively look up the static transform from base_link -> zed_frame and apply the inverse of it to the data provided in the odom frame to correctly position the base_link. I imagine it like this: (Photo in next comment)
NZNobody ( 2016-07-13 03:58:32 -0500 )

Sketch of Option 3 - The odometry reports pose C -> A; normally r_l would place base_link at A. Use A -> B to compute the base_link offset as C -> A + A -> B, resulting in C -> B. Does this make sense?
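The composition sketched here can be written out with homogeneous transforms (plain numpy; the names are mine, and A -> B is just the inverse of the static base_link -> zed transform):

```python
import numpy as np

def invert_tf(T):
    """Analytic inverse of a 4x4 homogeneous transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def place_base_link(odom_T_zed, base_T_zed):
    """C -> B = (C -> A) composed with (A -> B), where C is odom,
    A is the zed pose reported by the odometry, B is base_link, and
    A -> B is the inverse of the static base_link -> zed transform."""
    return odom_T_zed @ invert_tf(base_T_zed)
```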

NZNobody ( 2016-07-13 04:01:21 -0500 )

answered 2017-02-21 04:20:20 -0500

NZNobody

NOTE: This is really just an answer to how I implemented Tom's answer, but I needed to post an image so couldn't just comment.

Ultimately I tried options #1 and #2, and ended up using #1. Attached is the final transform tree that resulted in usable transforms. At startup, /rex_ground_calib calculates an initial offset that makes the ground flat (using ground-plane detection), and then broadcasts a dynamic transform linking zed_tracked_frame and base_link, maintaining the offset as the zed_tracked_frame moves.
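For reference, the pitch such a ground calibration has to undo can be recovered from the detected plane normal. A minimal sketch (plain Python; my own naming and frame convention, a body frame with z up — this is not the actual /rex_ground_calib code):

```python
import math

def pitch_offset_from_normal(normal):
    """Angle between the detected ground-plane normal (a unit vector
    in a body frame with z up) and the frame's z axis: the pitch the
    calibration transform has to undo to make the ground flat."""
    nx, ny, nz = normal
    # Clamp to guard against unit vectors that are slightly off
    # due to floating-point noise from the plane fit.
    return math.acos(max(-1.0, min(1.0, nz)))
```

A perfectly level camera sees the ground normal aligned with its z axis (zero offset); a 45 degree mounting pitch shows up as a 45 degree angle between the two.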

Final TF Tree

Zombie answer inspired by Tom's comment here: Github Page



Can you please post the code for your solution?

danielsnider ( 2017-04-11 11:17:19 -0500 )

@NZNobody, I am trying to follow your solution, as I have a similar setup. However, I was wondering how you implemented /rex_ground_calib?

hdbot ( 2018-03-02 08:36:17 -0500 )

Not really a fan of the ol' copy-paste, but roughly as so:


Code is incomplete on purpose. Please don't ask for complete implementation.

NZNobody ( 2018-03-05 02:40:48 -0500 )


Seen: 2,412 times

Last updated: Feb 21 '17