# robot_localization how to set up tf

My understanding of a fundamental concept is way off somewhere, and I was hoping someone could set me straight :)

I am trying to use robot_localization to fuse PTAM+IMU.

I have read through the docs and tutorials for the robot_localization package, but I am still new to ROS, so I am having trouble understanding how tf works with robot_localization.

I have an IMU topic publishing from a Pixhawk via mavros: /mavros/imu/data

and I also have a ptam topic: /vslam/pose

Let's say that the orientations of both sensors are aligned, with a positional offset of 50 cm on the y-axis.

I am guessing that I am now supposed to set up a tf system in code that represents the physical model (with the 50 cm offset) and then broadcast it so that robot_localization can use it. Is that correct?

Or am I supposed to use the frame_ids provided by the sensors?

Also, if anyone knows of any step-by-step tutorials for something like this, please let me know. Thanks!

EDIT:

OK, so I tried using the frame_ids from the sensor messages and put those in the launch file for robot_localization. usb_cam is the frame_id from /vslam/pose and fcu is from /mavros/imu/data. I'm not using a map frame.

```xml
<param name="odom_frame" value="usb_cam"/>
<param name="base_link_frame" value="fcu"/>
<param name="world_frame" value="usb_cam"/>
```

Now robot_localization publishes to the /odometry/filtered topic. When I view the tf tree in RViz, it doesn't look right; am I just not aligning the axes correctly?

I've been trying to get this right, but I'm still not sure whether this is even the right way to use robot_localization.



I would start by reading through REP-105, and also the tf wiki page.

Your odometry frame_id is typically "odom" and is a world-referenced frame. In other words, if you start your robot and drive it forward 5 meters and 3 meters to the left, your position in the odom frame should be (5, 3). However, your position in this frame is subject to drift (e.g., when just fusing wheel encoders and IMU data).

Your map frame_id is typically "map" and is identical to the odom frame, except that your position in the map frame is not guaranteed to be continuous, and is not subject to drift over time. For example, if you are including GPS data, your position will probably jump around a bit, but over time, your position in that frame will be more accurate than a position estimate that only used wheel encoders and IMUs.

The base_link frame_id is typically "base_link" and is rigidly affixed to your robot. Most velocity (twist) data is reported in this frame. For example, a simple differential drive wheeled robot can only have +/- X and +/- yaw velocity.

For your problem, you need to make sure each message has a frame_id specified, and then make sure you produce a transform (e.g., via static_transform_publisher) that specifies the transform from the frame_id of the message to one of the frame_ids listed above. For IMU data, you'll need to specify a base_link-->imu transform (assuming you use those two frame_id names). For the PTAM data, the frame_id probably defaults to map or odom, but you'll have to verify that. See this tutorial for more information on preparing sensor data.
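As a sketch of that last step: assuming the IMU is mounted 50 cm from base_link along the y-axis with axes aligned (as in the question), the base_link-->fcu transform could be published from a launch file like this (the node name and 100 ms publish period are arbitrary choices):

```xml
<!-- static_transform_publisher args: x y z yaw pitch roll parent_frame child_frame period_ms -->
<node pkg="tf" type="static_transform_publisher" name="base_link_to_fcu"
      args="0 0.5 0 0 0 0 base_link fcu 100"/>
```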


Thanks for your help, Tom. I've read through those links, but I still don't understand the relationship between tf, the sensor frame_ids, and the robot_localization launch file. Feeling really dense right now. I'll have another read; maybe something will click this time.

(2015-08-31 05:26:52 -0500)

So I have 2 messages from third-party packages, and they each have a frame_id: fcu (IMU) and usb_cam (PTAM). Do I then make 2 tf links using static_transform_publisher, base_link->fcu and odom->usb_cam? Then in the robot_localization launch file, do I reference base_link and odom?
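Putting that proposal together, a minimal launch sketch might look like the following. The offsets are placeholders you'd replace with your real mounting geometry, and whether odom->usb_cam is the right transform depends on what frame PTAM actually reports in; the imu0/pose0 input parameters are the standard robot_localization topic parameters:

```xml
<!-- Sketch only: frame names and offsets are assumptions, not a verified config. -->
<node pkg="tf" type="static_transform_publisher" name="base_link_to_fcu"
      args="0 0.5 0 0 0 0 base_link fcu 100"/>
<node pkg="tf" type="static_transform_publisher" name="odom_to_usb_cam"
      args="0 0 0 0 0 0 odom usb_cam 100"/>

<node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization">
  <param name="odom_frame" value="odom"/>
  <param name="base_link_frame" value="base_link"/>
  <param name="world_frame" value="odom"/>
  <param name="imu0" value="/mavros/imu/data"/>
  <param name="pose0" value="/vslam/pose"/>
</node>
```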

(2015-08-31 06:00:23 -0500)

Still stuck. Where do these map, odom, and base_link frames come from? Do I generate them?

(2015-08-31 17:12:29 -0500)

The frame_ids are just names. The transforms between them are all managed by tf. So you can use the tf library to specify a transform from frame A to frame B, and then broadcast it. Other nodes can then consume that transform by also using the tf libraries.

(2015-08-31 19:43:55 -0500)

So, e.g., if you have a LIDAR that is mounted on its side on your robot, and it senses an obstacle 5 meters away at 30 degrees (from the LIDAR's origin), you might need to know where that point is in the robot's frame, or even in a world (e.g., odom) coordinate frame.
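A quick numerical sketch of what tf computes in that example, in plain NumPy with no ROS required. The mounting pose here (LIDAR 0.2 m ahead of base_link, yawed +90 degrees) is made up purely for illustration:

```python
import numpy as np

def make_transform(x, y, yaw):
    """2D homogeneous transform taking points from a child frame into its parent frame."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Assumed base_link -> lidar transform: 0.2 m forward, yawed 90 degrees
T_base_lidar = make_transform(0.2, 0.0, np.pi / 2)

# Obstacle seen 5 m away at 30 degrees, expressed in the lidar frame
angle = np.radians(30)
p_lidar = np.array([5 * np.cos(angle), 5 * np.sin(angle), 1.0])

# Same point expressed in the robot's base_link frame
p_base = T_base_lidar @ p_lidar
print(p_base[:2])  # approx [-2.3, 4.33]
```

Chaining an odom->base_link transform in front of T_base_lidar in the same way would give the point in the world frame; tf manages exactly these chains for you, using the frame_ids as lookup keys.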

(2015-08-31 19:48:19 -0500)

Thanks again, Tom! Hope that wasn't too painful for you. I didn't know what I was missing, so I didn't even know the right questions to ask. I think I get it now.

(2015-08-31 22:05:55 -0500)