An EKF isn't necessarily going to clean up noise in your data. Every time the EKF processes a measurement, it's just doing fancy (optimal) weighted averaging between its current state and the measurement you gave it. How much weight is given to each is determined by their relative covariance values.
So it goes something like this: every time a measurement arrives, the filter first predicts the state forward to the time of that measurement, and during that prediction it adds the `process_noise_covariance` (scaled by the time delta) to the current state covariance.

So the two quantities that matter w.r.t. how much a given measurement is "trusted" are the `process_noise_covariance` and the covariance in the message itself.
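As a minimal sketch of that weighting (1-D, made-up variable names, not the actual `robot_localization` code), note how the correction gain is just the ratio of the two variances:

```python
# Minimal 1-D Kalman filter sketch (illustrative only, not the r_l implementation).

def predict(x, P, process_noise_var, dt):
    # Prediction: project the state forward (trivial here) and inflate the
    # state variance by the time-scaled process noise.
    return x, P + process_noise_var * dt

def correct(x, P, z, R):
    # Correction: the gain K is the state variance divided by the total
    # (state + measurement) variance, i.e. a weighted average of the two.
    K = P / (P + R)
    return x + K * (z - x), (1.0 - K) * P

# When P >> R, K -> 1 and the filter essentially follows the measurement,
# which is why a noisy-but-"confident" measurement barely gets smoothed.
x, P = 0.0, 0.0
x, P = predict(x, P, process_noise_var=0.05, dt=0.1)
x, P = correct(x, P, z=0.3, R=4.4e-06)
print(x, P)  # x ends up very close to 0.3 because R is tiny compared to P
```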
Let's look at just the `X` variance in your messages. In the raw odometry message, you have an `X` variance of `4.407756932778284e-06`, which is a standard deviation of `0.0021`. Your `X` variance in your `process_noise_covariance` is set to `0.05`, which is a standard deviation of `0.2236`. Let's say your odometry is published at 10 Hz. That means you'd likely be looking at a state with an `X` standard deviation of at least `0.02236`, which is an order of magnitude larger than your measurement's standard deviation. So when the new state is computed, you're telling the filter to trust the odometry measurement at least 10x more than its predicted state.
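To make those numbers concrete, here is the arithmetic spelled out (the 10 Hz rate, i.e. `dt = 0.1`, is the assumption from the paragraph above):

```python
import math

odom_x_var = 4.407756932778284e-06   # X variance reported in the odometry message
process_x_var = 0.05                 # X entry of process_noise_covariance
dt = 0.1                             # assumed 10 Hz odometry

print(math.sqrt(odom_x_var))         # ~0.0021: measurement standard deviation
print(math.sqrt(process_x_var))      # ~0.2236: process noise standard deviation

# One prediction step adds process_x_var * dt to the state variance, so even a
# single cycle leaves the predicted state far less certain than the measurement:
P = process_x_var * dt               # 0.005
K = P / (P + odom_x_var)             # 1-D Kalman gain for an X measurement
print(K)                             # ~0.999 -> the measurement dominates the update
```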
Secondly, you said you added noise to the measurement. The variance from your noisy measurement is `4.446595085028093e-06`, which is pretty much identical to the original odometry message. If you are adding noise to the values, then that error needs to be represented in the covariance. Otherwise, your measurement is lying.
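If you're injecting that noise in a node that republishes the odometry, inflate the covariance to match. A hedged sketch (standard `nav_msgs/Odometry` fields; the noise level is an example value):

```python
import random
from nav_msgs.msg import Odometry

NOISE_STD = 0.05  # std dev of the artificial noise you inject (example value)

def add_noise(msg: Odometry) -> Odometry:
    # Perturb the X position with zero-mean Gaussian noise...
    msg.pose.pose.position.x += random.gauss(0.0, NOISE_STD)
    # ...and report that extra uncertainty in the covariance. Index 0 of the
    # 6x6 row-major pose covariance is the X/X variance; without this, the
    # message still claims a ~0.0021 std dev while actually being much noisier.
    cov = list(msg.pose.covariance)
    cov[0] += NOISE_STD ** 2
    msg.pose.covariance = cov
    return msg
```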
Finally, your odometry data has tiny covariance values _for pose data_. But if your robot is driving around for a long time, the pose covariance from odometry should grow without bound. If it were providing velocity (twist) data, then those covariance values might be more or less static, but what you're telling the filter is that no matter how far the robot travels, the `X` standard deviation on the odometry pose is only `0.0021`.
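That unbounded growth is easy to see if you simulate integrating a noisy velocity (made-up numbers, just to illustrate the point): the spread of the integrated pose keeps increasing with distance travelled.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, vel_std, n_steps, n_runs = 0.1, 0.02, 1000, 500

# Integrate a nominal 1 m/s velocity corrupted by zero-mean noise, many runs over.
noise = rng.normal(0.0, vel_std, size=(n_runs, n_steps))
x = np.cumsum((1.0 + noise) * dt, axis=1)

# The spread across runs grows with every step, so an honest odometry pose
# covariance should grow too, rather than staying pinned at 0.0021**2.
print(np.std(x[:, 99]), np.std(x[:, 999]))   # ~0.02 after 10 s, ~0.06 after 100 s
```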
EDIT in response to comment.
> Although, I am still quite ambiguous about which input data is used for a prediction step and which one for a correction
At least in `r_l`, no specific sensor is used for prediction or correction. Every sensor measurement causes both steps to happen. Let's pretend we have an EKF with a state vector that is just `[X, Y, Z, X velocity, Y velocity, Z velocity]`.
1. Start with an initial state at time `t0`. We'll use `[0, 0, 0, 0, 0, 0]`.
2. Receive a measurement from Sensor `A` at time `t1`. Sensor `A` is configured to just give us `X` velocity. The measurement has a value of `X velocity = 2`.
3. Predict the state from `t0` to `t1` using our motion model. The predicted state will still be `[0, 0, 0, 0, 0, 0]` because the robot had 0 velocity in the previous step.
4. Correct the predicted state with the `X` velocity measurement. The corrected state might be something like `[0, 0, 0, 1.8, 0, 0]`.
5. Receive a measurement from Sensor `B` at time `t2`. Sensor `B` is configured to just give us `Z` position. The measurement has a value of `Z = -1`.
6. Predict the state from `t1` to `t2`. The predicted state is, say, `[0.6, 0, 0, 1.8, 0, 0]`. Note the non-zero `X` position value that happened as a result of our kinematic model and the `X` velocity.
7. Correct the predicted state with the `Z` position measurement. The corrected state might be something like `[0.6, 0, -0.7, 1.8, 0, 0]`.

I am glossing over a lot of things here, but the point to take away is that measurements arrive in a sequence, and for each measurement, we predict to the time of that measurement, then use that measurement to correct the prediction.