How do I set up tf for robot_localization?
My understanding of some fundamental concept is way off somewhere, and I was hoping someone could set me straight :)
I am trying to use robot_localization to fuse PTAM and IMU data.
I have read through the docs and tutorials for the robot_localization package, but I am still new to ROS, so I am having trouble understanding how tf works with robot_localization.
I have an IMU topic published from a Pixhawk via mavros: /mavros/imu/data
I also have a PTAM topic: /vslam/pose
Let's say the orientations of the two sensors are aligned, with a positional offset of 50 cm on the y-axis.
I am guessing that I am now supposed to set up a tf tree in code that represents the physical model (with the 50 cm offset) and then broadcast those transforms so that robot_localization can use them. Is that correct?
Or am I supposed to use the frame_ids provided by the sensors?
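To make the first option concrete, this is the kind of static transform I was imagining for the 50 cm offset, published from the launch file. The frame names base_link and usb_cam are just my guesses at what the frames should be called:

```xml
<!-- static_transform_publisher args: x y z yaw pitch roll parent_frame child_frame period_ms -->
<!-- Camera mounted 0.5 m along +y from base_link, same orientation -->
<node pkg="tf" type="static_transform_publisher" name="base_to_cam_tf"
      args="0 0.5 0 0 0 0 base_link usb_cam 100" />
```

Is something like this what robot_localization expects, or is it unnecessary?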
Also, if anyone knows of any step-by-step tutorials for something like this, please let me know. Thanks!
EDIT:
OK, so I tried using the frame_ids from the sensor messages and put them in the launch file for robot_localization: usb_cam is the frame_id from /vslam/pose, and fcu is the frame_id from /mavros/imu/data. I'm not using a map frame.
<param name="odom_frame" value="usb_cam"/>
<param name="base_link_frame" value="fcu"/>
<param name="world_frame" value="usb_cam"/>
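For comparison, my reading of the robot_localization docs is that these parameters normally name the abstract frames of the tf tree (odom, base_link) rather than the sensors' own frame_ids, i.e. something like:

```xml
<param name="odom_frame" value="odom"/>
<param name="base_link_frame" value="base_link"/>
<param name="world_frame" value="odom"/>
```

I'm not sure whether substituting the sensor frame_ids, as I did above, is valid at all.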
Now robot_localization publishes to the /odometry/filtered topic. When I view the tf tree in rviz it doesn't look right, but I'm thinking I just haven't aligned the axes correctly?
I've been trying to get this working, but I'm still not sure whether this is even the right way to use robot_localization.