How does the robot_localization EKF work?
I have looked through the paper published about the package, and I was wondering how the Kalman filter is implemented, that is, which values are used in the prediction step and which are used in the update step. I currently have 3 different sensors:
-> one for X, Y, Z
-> one for roll, pitch, yaw
-> one for angular velocity and linear acceleration
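For context on what I mean, this is roughly how I would expect to declare these sensors to the filter. This is only a sketch based on the package's YAML parameter convention; the topic names are hypothetical, and I am assuming each `*_config` is the usual 15-element boolean vector over [x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az]:

```yaml
# Hypothetical topic names, for illustration only.
pose0: /sensor_xyz          # sensor 1: fuse X, Y, Z position only
pose0_config: [true,  true,  true,
               false, false, false,
               false, false, false,
               false, false, false,
               false, false, false]

imu0: /sensor_rpy           # sensor 2: fuse roll, pitch, yaw only
imu0_config: [false, false, false,
              true,  true,  true,
              false, false, false,
              false, false, false,
              false, false, false]

imu1: /sensor_rates_accel   # sensor 3: fuse angular velocity and linear acceleration
imu1_config: [false, false, false,
              false, false, false,
              false, false, false,
              true,  true,  true,
              true,  true,  true]
```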
I would like to know whether these values are used in different steps or whether they are all used in the same step (the update step).
Also, is it possible to configure either of these steps with respect to the information they use?
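To make my question concrete, here is the textbook predict/update structure I have in mind. This is a minimal generic Kalman filter sketch, not the package's actual implementation; the toy constant-velocity model and all matrices below are my own assumptions, just to show which quantities I mean by "prediction step" and "update step":

```python
import numpy as np

def predict(x, P, F, Q):
    # Prediction step: propagate the state with the motion model F
    # and grow the covariance by the process noise Q.
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z, H, R):
    # Update (correction) step: fuse a measurement z with
    # measurement model H and measurement noise R.
    y = z - H @ x                   # innovation
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy example: state = [position, velocity], constant-velocity model.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = np.eye(2) * 1e-4
H = np.array([[1.0, 0.0]])  # this sensor only measures position
R = np.array([[0.05]])

x = np.array([0.0, 1.0])
P = np.eye(2)
x, P = predict(x, P, F, Q)                    # predicted position: 0.1
x, P = update(x, P, np.array([0.12]), H, R)   # pull estimate toward 0.12
```

So my question is essentially: do my three sensors each feed a call like `update()` above, or does any of them drive `predict()` instead?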