How does robot_localization fuse pose odometry data?
Hi guys, I have a question about the working principle of robot_localization. I'm fusing odometry and IMU data. My odometry provides only absolute pose information, so I'm configuring robot_localization to fuse X, Y, and yaw:
<param name="odom0" value="/odom"/>
<rosparam param="odom0_config">[true,  true,  false,
                                false, false, true,
                                false, false, false,
                                false, false, false,
                                false, false, false]
</rosparam>
But according to my understanding of the Kalman filter, it should not be possible to supply it with pose data from odometry. The prediction step needs a velocity or acceleration to serve as the control input u:
x[k|k-1] = A*x[k-1|k-1] + B*u
Neither, as far as I can tell, can we use pose as the measurement z in the correction step:
x[k|k] = x[k|k−1] + K[k]( z[k] − H[k]x[k|k−1] )
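To make my confusion concrete, here is a minimal numpy sketch of how I think a pose measurement *could* enter the correction step, with H as an identity matrix selecting the pose states directly. All names and noise values are made up for illustration; this is not robot_localization's actual EKF code:

```python
import numpy as np

# State: [x, y, yaw]. The odometry pose is used directly as the measurement z.
A = np.eye(3)            # state transition (static model, just for illustration)
H = np.eye(3)            # measurement model: odometry reports the pose itself
Q = np.eye(3) * 0.01     # process noise covariance (made-up value)
R = np.eye(3) * 0.05     # odometry pose noise covariance (made-up value)

x = np.zeros(3)          # initial state estimate
P = np.eye(3)            # initial estimate covariance

def predict(x, P):
    # x[k|k-1] = A * x[k-1|k-1]   (no control input u in this sketch)
    x = A @ x
    P = A @ P @ A.T + Q
    return x, P

def correct(x, P, z):
    # x[k|k] = x[k|k-1] + K[k] * (z[k] - H[k] * x[k|k-1])
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(3) - K @ H) @ P
    return x, P

z = np.array([1.0, 2.0, 0.1])        # one odometry pose measurement [x, y, yaw]
x, P = predict(x, P)
x, P = correct(x, P, z)
print(x)  # estimate pulled toward the measured pose
```

If the filter can really treat the pose itself as z like this, then my question is essentially whether this is what robot_localization does internally, or whether it differentiates the pose into a velocity first.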
Can someone please clarify this? I have read nearly the entire robot_localization documentation, Thomas Moore's ROSCon presentation, and his paper, but couldn't find the answer. Thank you!
http://docs.ros.org/kinetic/api/robot... https://roscon.ros.org/2015/presentat... http://docs.ros.org/lunar/api/robot_l...