
How does robot_localization fuse pose odometry data?

Hi guys, I have a question concerning the working principle of robot_localization. I'm fusing odometry and IMU data. My odometry provides only absolute pose information, so I'm letting robot_localization fuse X, Y, and yaw:

    <param name="odom0" value="/odom"/>
    <rosparam param="odom0_config">[true,  true,  false,   # X,     Y,      Z
                                    false, false, true,    # roll,  pitch,  yaw
                                    false, false, false,   # vX,    vY,     vZ
                                    false, false, false,   # vroll, vpitch, vyaw
                                    false, false, false]   # aX,    aY,     aZ
    </rosparam>


But according to my understanding of the Kalman filter, it is not possible to supply it with pose data from odometry. In the prediction step, we need to supply a velocity or acceleration (the control input u) to carry out the prediction phase:

x[k|k-1] = A*x[k-1|k-1] + B*u
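For concreteness, here is a minimal 1-D sketch of that prediction step (hypothetical toy code, not robot_localization's actual implementation). Note that if the state itself carries a velocity, the transition matrix A alone can propagate the pose and the B*u term can simply be absent:

```python
import numpy as np

# Toy state x = [position, velocity]; prediction uses the state's own
# velocity, so no external control input u is required.
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])   # constant-velocity transition model
x = np.array([2.0, 0.5])     # current estimate [pos, vel]
P = np.eye(2) * 0.1          # estimate covariance
Q = np.eye(2) * 0.01         # process noise

x_pred = A @ x               # x[k|k-1] = A * x[k-1|k-1]  (B*u term absent)
P_pred = A @ P @ A.T + Q
```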


Neither can we use it as the measurement (z) in the correction phase:

x[k|k] = x[k|k−1] + K[k]( z[k] − H[k]x[k|k−1] )
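And in code, that correction step looks like this (again a hypothetical 1-D toy, with made-up numbers), where I would expect z to be a velocity rather than a pose:

```python
import numpy as np

# Toy correction of a [pos, vel] state with a pose-only measurement z.
# H picks out the position component of the state.
x_pred = np.array([2.05, 0.5])
P_pred = np.array([[0.11, 0.05],
                   [0.05, 0.11]])
H = np.array([[1.0, 0.0]])       # measurement model: observe position only
R = np.array([[0.04]])           # measurement noise
z = np.array([2.10])             # odometry pose reading

y = z - H @ x_pred               # innovation z[k] - H[k]*x[k|k-1]
S = H @ P_pred @ H.T + R
K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain, shape (2, 1)
x_new = x_pred + K @ y           # x[k|k] = x[k|k-1] + K[k]*y
```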


Can someone please clarify this? I have read almost the whole robot_localization manual, Thomas Moore's presentation, and his paper, but couldn't find the answer. Thank you!



A couple things:

1. You can absolutely fuse pose data from odometry. It's just a pose measurement like any other. It gets applied in the correction phase like any other measurement (i.e., z is just the [X Y theta] of your odometry measurement).
2. If you just fuse pose data from the wheel odometry, the EKF will still end up with non-zero values in its velocity estimates (the magic of matrix inversion: the Kalman gain's cross-covariance terms let a pose measurement pull on the velocity states). However, it is wise to fuse a velocity reference instead (or as well) so that the prediction is more accurate.

( 2018-04-04 09:41:06 -0600 )

No, not silly! State estimation is not trivial. Happy to help.

( 2018-04-09 03:52:05 -0600 )