How does robot_localization find the tf between odom and base_link?
Hello
Is there any documentation that describes how the relation between the odom frame and the base_link frame is calculated?
It seems that some sort of self-calibration is happening there.
Edit 1: In response to Tom's answer
Thank you. I totally misunderstood robot_localization and robot_pose_ekf. I thought they could calculate the transformations between sensor frames automatically; this is called self-calibration.
I was trying to fuse an IMU with a camera (visual odometry). I was setting my IMU frame as base_link and the camera frame as odom, thinking that robot_localization would calculate the transformation between the two. Now I understand that setting the IMU frame as base_link is wrong. The base_link is actually some point in the middle of the robot's chassis. I have to publish a tf between imu_link and the camera frame (odom).
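As a rough sketch of what "publishing a tf" between a sensor frame and base_link involves (the frame names and mounting numbers below are hypothetical, not from the question): the fixed mounting offset and rotation are typically published as a static transform, and the rotation part is a quaternion. For a pure yaw mounting rotation the quaternion can be computed like this:

```python
import math

def yaw_to_quaternion(yaw):
    """Convert a rotation about the z-axis (yaw, radians) to a quaternion (x, y, z, w)."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

# Hypothetical mounting: the IMU is rotated 90 degrees about z relative to
# base_link and offset 10 cm forward. These six numbers (x y z + quaternion)
# are what would go into a static transform from base_link to imu_link.
translation = (0.10, 0.0, 0.0)
qx, qy, qz, qw = yaw_to_quaternion(math.pi / 2.0)
```

In practice this transform would be broadcast once (e.g. with a static transform publisher) rather than computed in the filter itself.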
Now I can see that the power of robot_localization is multi-rate fusion of multiple sensors, but with known inter-sensor transformations. This is different from MSF:
http://ethz-asl.github.io/ethzasl_msf...
which tries to estimate the inter-sensor transformations, given initial estimates of them, by updating them in the state vector.
Ah, I understand your confusion now. Yes, the EKF doesn't really know anything about the relationships between sensors, as you've stated. It just wants to know, for a given measurement, how to transform it into a frame that it "knows" about, and you provide those transforms.