
robot_localization odom wrong frame

asked 2020-02-24 03:28:46 -0500 by jnoyola

updated 2020-02-25 11:23:09 -0500

I'm having an issue with robot_localization when my sensor frames are offset from the base_link frame.

I have a robot with a RealSense T265 odometry sensor, but when I run a robot_localization UKF, it places the robot frame (base_link) at the pose from the odometry message instead of placing the sensor frame there.

TF tree:

    odom        base_link -> base_link/t265

Odometry topic:

    name: /base_link/t265/odom
    type: Odometry
    frame_id: odom
    child_frame_id: base_link/t265

UKF Config:

map_frame: map
odom_frame: odom
base_link_frame: base_link
world_frame: odom

odom0: /base_link/t265/odom
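For context, a fuller UKF parameter file for this kind of setup might look like the sketch below. The `frequency`, `two_d_mode`, and `odom0_config` values are illustrative placeholders, not taken from the question; only the frame names and topic match the config above.

```yaml
frequency: 30
two_d_mode: true

map_frame: map
odom_frame: odom
base_link_frame: base_link
world_frame: odom

odom0: /base_link/t265/odom
# x,  y,  z,  roll, pitch, yaw,
# vx, vy, vz, vroll, vpitch, vyaw,
# ax, ay, az
odom0_config: [true,  true,  false,
               false, false, true,
               true,  true,  false,
               false, false, true,
               false, false, false]
odom0_differential: false
```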

Below I have the Odometry topic and the TF tree after the UKF is run.

Issue: since the odometry message's child_frame_id is base_link/t265, the message should describe the location of that frame, not the base_link frame, right? I'm expecting the UKF to compute the odom->base_link transform such that the t265 frame ends up at the pose in its odometry message, but instead it places base_link there directly.

[screenshot: Odometry topic and TF tree after the UKF is run]



Please attach your screenshot to the question. I've given you sufficient karma.

— gvdhoorn (2020-02-24 04:32:33 -0500)

Thank you. Updated the question with the screenshot.

— jnoyola (2020-02-24 10:28:37 -0500)

What's your issue exactly? It's unclear to me.

— stevemacenski (2020-02-24 10:58:38 -0500)

Updated the last sentence; hopefully that clarifies it. This sounds like a standard use case that I've seen lots of discussion about, but no end-to-end examples.

— jnoyola (2020-02-24 11:09:46 -0500)

3 Answers


answered 2020-03-02 02:55:18 -0500 by jnoyola

updated 2020-03-02 02:57:30 -0500

I went with the manual solution: manually calculating where the odometry message should place the robot frame instead of the sensor frame.

This problem is also described in the last comment on the accepted answer here, which also suggests it must be done manually.

To do this, I post-multiply the odom message's pose by the sensor-to-base_link transform. Then, in order to make the base_link appear to start at 0 instead of the sensor, I pre-multiply this by the base_link-to-sensor transform. Thus if your base_link-to-sensor (or sensor mount pose) transform is S, I compute

    odom_msg.pose.pose = S * odom_msg.pose.pose * S.inverse()

This assumes your twists (linear and angular) are already rotated by the pose's rotation after this calculation. Also note that it's more performant to precompute S.inverse() than to recalculate it every frame.
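A minimal numpy sketch of that conjugation, using 4x4 homogeneous matrices and a hypothetical 0.2 m forward sensor mount. In a real node, S would come from your URDF or TF tree, and you would convert the geometry_msgs pose to and from a matrix; the planar helper here is just for illustration.

```python
import numpy as np

def make_tf(yaw, x, y):
    """Homogeneous transform: rotation about z by `yaw`, then translate (x, y, 0)."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = [x, y, 0.0]
    return T

# Hypothetical mount: sensor 0.2 m ahead of base_link, no rotation.
S = make_tf(0.0, 0.2, 0.0)       # base_link -> sensor
S_inv = np.linalg.inv(S)         # precompute once, as noted above

def rebase_pose(T_odom_sensor):
    """Re-express a sensor-frame odometry pose as a base_link pose that
    starts at the identity (the S * pose * S.inverse() conjugation)."""
    return S @ T_odom_sensor @ S_inv

# Sensor rotated 90 degrees in place at its own origin: base_link both
# rotates and translates, because it revolves around the sensor.
T = rebase_pose(make_tf(np.pi / 2, 0.0, 0.0))
```

The identity check `rebase_pose(np.eye(4))` returns the identity, which is exactly the "base_link appears to start at 0" property the pre-multiplication provides.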



Not sure why there are downvotes. It looks like this has been an issue for several years, and Steve's answer only works for putting all of the fused data in the T265 frame, which is not what I want. Please share a better solution if you have one.

— jnoyola (2020-03-03 11:32:20 -0500)

@jnoyola Please consider your tone and expectations when writing your responses. People on this forum are volunteering their time to help answer questions. It's not our responsibility to solve your problem. You will find that if you become confrontational, people will choose to help others instead.

You've accepted your own answer to the question, but reading your question, the other answer appears to be a better solution. I think your use case has more details than you're explaining, which external reviewers do not understand due to a partial problem statement. This especially rings true given your statement that you don't want "all" the fused data in the same frame, as that seems to be the opposite of computing a fused result.

— tfoote (2020-03-03 12:53:46 -0500)

I apologize if that came across as confrontational. I was just asking an honest question, because my solution solved my problem but received downvotes, suggesting people believe there is a better solution. In the comments on the other solution, I have tried explaining in 3 different ways why that does not appear to solve the problem, but received no response.

In response to your point here, I did not say that I don't want all the fused data in the same frame -- I simply don't want all the fused data in the T265 frame. The problem I have been trying to solve is where I have sensor A readings giving me odom-to-A and sensor B readings giving me odom-to-B and want to output odom-to-base_link. The other solution suggests I can use the T265 frame, but it seems like r_l will then interpret all sensors as providing odom-to-T265, which …

— jnoyola (2020-03-03 18:49:09 -0500)

answered 2020-09-27 07:19:22 -0500 by whoobee

I had a similar problem, and I did exactly what jnoyola did. Here is the GitHub repo if someone is interested and has a similar problem: ROS sensor_odom_manager

I use an Intel RealSense on a differential drive robot, so this might not work in all situations; feel free to modify/fork it.


answered 2020-02-24 13:57:13 -0500

Your configuration clearly shows that you want the odom frame to be odom, not /base_link/t265/odom.

The odom / IMU / etc. frames are the input frames of the data, not the output. The point of robot_localization is to fuse data from multiple sources and multiple frames and give a smooth estimate in the odom_frame you provided, which is odom.

If you want all the fused data to be in the T265 frame, you should use that.



Maybe I'm misunderstanding. If I have multiple sensors mounted at different positions on the robot, and each reports odometry for its own frame (not the robot's), how do I fuse these to get the robot's position relative to the odom frame?

— jnoyola (2020-02-24 15:09:33 -0500)

Using the robot localization package.

— stevemacenski (2020-02-24 15:30:53 -0500)

Hey jnoyola, it's not clear from your response that you know this: folks usually define the offsets between sensor frames and the base frame with a URDF file and the robot_state_publisher package. The wiki explains this well.

— johnconn (2020-02-24 16:51:46 -0500)

Does the robot_state_publisher do anything helpful here beyond publishing static TFs? I think I'm accomplishing the same task by publishing the sensor offset with a StaticTransformBroadcaster.

— jnoyola (2020-02-24 17:30:18 -0500)
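For readers comparing the two approaches: a StaticTransformBroadcaster and a static_transform_publisher launch entry publish the same kind of latched static TF. A hedged example of the launch one-liner, where the 0.1 m forward offset is a placeholder and not the asker's actual mounting:

```xml
<!-- args: x y z yaw pitch roll parent_frame child_frame (offset values are illustrative) -->
<node pkg="tf2_ros" type="static_transform_publisher" name="t265_static_tf"
      args="0.1 0 0 0 0 0 base_link base_link/t265" />
```

robot_state_publisher adds value mainly when the robot already has a URDF, since the sensor offset then lives in one place alongside the rest of the kinematics.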

@stevemacenski, I have read pages and pages of documentation and forum posts, and sadly put dozens of hours into this quite simple problem. If you could be more specific I would really appreciate it.

— jnoyola (2020-02-24 17:43:52 -0500)

I'm not sure in what way I can be more specific - what's the problem you're trying to solve? I think the answer makes it clear what you should do to change the output frame from odom to the T265 frame.

— stevemacenski (2020-02-24 17:49:20 -0500)

You say change the configuration if I want all fused data to be in the T265 frame, but I don't. I want to take the T265 data relative to the T265 frame, and any other potential data relative to its sensor frame, and fuse them to generate the position of base_link in the odom frame. Is that what you're saying?

— jnoyola (2020-02-24 17:59:08 -0500)

As a demonstration of the problem, when I hold the sensor in my hand and rotate it in place, the base_link frame turns in place and the t265 frame revolves around it. I could update the configuration as you say, but then what happens when I have 2 sensors in different frames? I don't want both of their odometry relative to the t265 frame.

— jnoyola (2020-02-24 23:34:34 -0500)
