Prediction step and IMU orientation have little impact on robot_localization position
Hi, I have odometry (from wheel encoders) and IMU data available, but in every setup I try the EKF position just replicates the odometry position (see fig. xyoffsets.png). After driving several meters the difference in the x and y values is no more than 1 cm.
The odometry data only provides absolute X and Y position and orientation, all of which I fuse. The IMU in theory provides absolute orientation, angular velocity and linear acceleration. However, I want to use the ~use_control parameter so that the prediction step is based on the velocity commands (I am doing this for a research project). Therefore I cannot fuse any linear acceleration data, because that overrides the ~use_control parameter, as stated in the documentation. That's fine with me. So the IMU cannot alter the current position, but it provides additional data for the orientation of the robot (see attached picture yawoffsets.png). Looking at the data I see that the EKF yaw value lies in the middle between the odom yaw and the IMU yaw (both have similar variances).
So I would expect that over time the EKF and the odometry position drift apart, since the EKF has a more accurate orientation, which to my knowledge is used in the prediction step.
For the variances I made some measurements and calculated the following values myself:
process_x: 3.80192200351542e-05
process_y: 0.000113269151605415
process_yaw: 0.000734031281740257
odom_x: 6.59418396241594e-06
odom_y: 0.00011676237024532
odom_yaw: 0.00101425160795909
imu_yaw: 0.00117190176298063
imu_yaw_vel: 0.0154674185084969
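(For reference, a minimal sketch of how such a sample variance can be computed from logged readings; the file name is only a placeholder, with one reading per line assumed:)

#!/usr/bin/env python
# Minimal sketch: estimate a sensor variance from logged readings.
# "odom_yaw_log.txt" is only a placeholder; one reading per line is assumed.
import numpy as np

samples = np.loadtxt("odom_yaw_log.txt")
variance = np.var(samples, ddof=1)   # unbiased sample variance
print("odom_yaw variance: %e" % variance)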
I also varied the initial covariance matrix values (x, y, yaw) between 10^6 and 0.001. For low values it took more time steps until the EKF was very close to the odometry data again, but there was no other impact. I also changed some variances to see if they have a bigger impact, e.g. when increasing the odom yaw variance I see that the EKF orientation is always close to the IMU yaw value (which is expected). When increasing the odom x and y variances I can see slightly more variation; e.g. when keeping the process variance but setting the odom x and odom y variances to 0.5, the positions vary slightly more (about 5-10 cm), as can be seen in Fig. xyoffsetsodom0_5.png. However, comparing the final EKF position with the ground truth (measured with an accurate external device) I cannot say that it is more accurate than the raw odometry position.
I would appreciate any help/suggestions. Maybe I have just reached the limit of my sensor configuration and there is nothing left to improve. I also considered including the IMU linear acceleration data and commenting out parts of robot_localization (so that ~use_control stays active), but I find this rather risky. If you need more information, please tell me.
PS: As I am new to the ROS community, I cannot add pictures yet. As soon as I find out how to earn more karma I will upload the three promised pictures.
frequency: 20
sensor_timeout: 0.2
two_d_mode: true
transform_time_offset: 0
transform_timeout: 0.1
publish_tf: true
publish_acceleration: false
map_frame: map
odom_frame: odom
base_link_frame: base_link
world_frame: odom
odom0: odom_new
odom0_config: [true, true, false,
false, false, true,
false, false, false,
false, false, false,
false, false, false]
odom0_queue_size: 6
odom0_nodelay: false
odom0_differential: false
odom0_relative: false
imu0: imu_new
imu0_config: [false, false, false,
false, false, true,
false, false, false,
false, false, true,
false, false, false]
imu0_nodelay: false
imu0_differential: false
imu0_relative: true
imu0_remove_gravitational_acceleration: false
use_control: true
stamped_control: false
control_timeout: 0.2
control_config: [true, false, false, false, false, true]
permit_corrected_publication: true
history_length: 0.1
process_noise_covariance:
[6.59418396241594e-5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0.00011676237024532, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0.06, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0.00734031281740257, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0.25, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0.025, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.5]
initial_estimate_covariance: [0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, e3, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, e3, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9]
sample odom_msg:
header:
  seq: 427
  stamp:
    secs: 1660918688
    nsecs: 890906583
  frame_id: "odom"
child_frame_id: ''
pose:
  pose:
    position:
      x: 1.4626716375350952
      y: -0.07026634365320206
      z: 0.0
    orientation:
      x: 0.0
      y: 0.0
      z: 0.18179159266228068
      w: 0.9833370820005272
  covariance: [6.59418396241594e-06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00011676237024532, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.005, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.005, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.005, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00101425160795909]
twist:
  twist:
    linear:
      x: 0.0
      y: 0.0
      z: 0.0
    angular:
      x: 0.0
      y: 0.0
      z: 0.0
  covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
sample imu_msg:
header:
  seq: 190
  stamp:
    secs: 1660918723
    nsecs: 839721754
  frame_id: "imu_link"
orientation:
  x: -0.015164938755333424
  y: -0.008995688520371914
  z: -0.9820319414138794
  w: 0.1878892481327057
orientation_covariance: [0.05, 0.0, 0.0, 0.0, 0.05, 0.0, 0.0, 0.0, 0.00117190176298063]
angular_velocity:
  x: -0.07236731052398682
  y: -0.03831210732460022
  z: -0.2586067318916321
angular_velocity_covariance: [0.1, 0.0, 0.0, 0.0, 0.1, 0.0, 0.0, 0.0, 0.0154674185084969]
linear_acceleration:
  x: -0.5602431893348694
  y: -0.2059013396501541
  z: 10.864887237548828
linear_acceleration_covariance: [0.5, 0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.5]
---
Asked by Laschoking on 2022-08-19 09:11:06 UTC
Answers
So I would expect that over time the EKF and the odometry position drift apart, since the EKF has a more accurate orientation
No, they won't drift apart, at least not by much. You are fusing absolute pose data from your wheel encoders, as well as absolute yaw from your IMU. The robot's EKF position is going to very closely track the position from the wheel encoders, because that's your only position reference. Every prediction the EKF makes will just get corrected towards the position provided by the wheel encoders the next time you receive a measurement. And your covariance on your wheel encoder pose is tiny and probably isn't growing, which is incorrect. That should grow over time, and probably fairly rapidly.
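As a rough illustration of that last point (the growth factor below is an assumed placeholder, not something from your driver; the base variances are the ones you posted), the wheel-odometry node could inflate its pose covariance with distance travelled along these lines:

# Sketch: grow the wheel-odometry pose covariance with distance travelled.
# The growth factor is an assumed placeholder; the base variances come from the posted values.
from nav_msgs.msg import Odometry

def fill_pose_covariance(msg, distance_travelled):
    base_var_x = 6.59418396241594e-06
    base_var_y = 0.00011676237024532
    base_var_yaw = 0.00101425160795909
    growth = 1e-4 * distance_travelled      # assumed: variance grows linearly with distance
    cov = [0.0] * 36
    cov[0] = base_var_x + growth            # x
    cov[7] = base_var_y + growth            # y
    cov[35] = base_var_yaw + growth         # yaw
    msg.pose.covariance = cov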
Also, fusing absolute yaw from two sources is not going to work well. Even with relative mode enabled for the IMU data, over time, your robot's odometry estimate of yaw will drift from reality. So at time step A you'll get a message from the wheel encoders that tells the filter the robot is facing one direction, and in the next timestep, you will get a message from the IMU that says it's facing another. So the behavior will be that the robot moves to the X, Y positions from the wheel encoders (or very near them), but then ends up facing the direction dictated by the IMU.
Can your wheel encoders not output velocity data as well? That would be the best way to do this.
If it were me, I'd:
- Turn off the control values until you get the rest behaving
- Stop fusing absolute pose from the wheel encoders. Instead, fuse X, Y, and yaw velocity from the encoders, if you can get the driver to fill those fields out. Otherwise, I suppose you could turn on differential mode for the wheel encoders, but I'd definitely recommend trying to fill those fields out directly (see the sketch below).
- Turn off yaw from your IMU, and fuse yaw velocity instead. Magnetometers are terrible.
- Make sure the covariance values are as accurate as they can be for each sensor
Get that functioning as you like, and then try introducing the control term or the magnetometer data or whatever else. Just change one thing at a time.
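To make the velocity suggestion concrete, here is a minimal sketch of a driver filling the twist part of nav_msgs/Odometry (the topic name, how vx and vyaw are obtained, and the covariance values are assumptions for illustration):

#!/usr/bin/env python
# Sketch: publish wheel-encoder velocities in the twist part of nav_msgs/Odometry,
# so the EKF can fuse vx, vy and yaw velocity instead of absolute pose.
import rospy
from nav_msgs.msg import Odometry

rospy.init_node("wheel_odom_velocity")
pub = rospy.Publisher("odom_new", Odometry, queue_size=10)

def publish_velocity(vx, vyaw):
    msg = Odometry()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = "odom"
    msg.child_frame_id = "base_link"
    msg.twist.twist.linear.x = vx          # forward velocity from the encoders
    msg.twist.twist.angular.z = vyaw       # yaw rate from the encoders
    cov = [0.0] * 36
    cov[0] = 0.001                         # assumed vx variance
    cov[7] = 0.001                         # assumed vy variance
    cov[35] = 0.01                         # assumed yaw-rate variance
    msg.twist.covariance = cov
    pub.publish(msg)

With the twist filled in, odom0_config would then enable only the vx, vy and vyaw entries instead of x, y and yaw, and imu0_config would keep only the yaw velocity entry.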
EDIT: also, are you definitely providing a transform from base_link to imu_link? The IMU data will be ignored without that.
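If it is missing, a minimal sketch of providing it with a static broadcaster (the translation and rotation below are placeholders for the real mounting offset):

#!/usr/bin/env python
# Sketch: publish a static base_link -> imu_link transform so the IMU data can be used.
# The translation and rotation below are placeholders for the actual mounting offset.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node("imu_static_tf")
t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = "base_link"
t.child_frame_id = "imu_link"
t.transform.translation.x = 0.0
t.transform.translation.y = 0.0
t.transform.translation.z = 0.0
t.transform.rotation.w = 1.0               # identity rotation, i.e. IMU assumed aligned with base_link
broadcaster = tf2_ros.StaticTransformBroadcaster()
broadcaster.sendTransform(t)
rospy.spin()

The same transform can also come from a static_transform_publisher in a launch file or from the robot's URDF.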
Answered by Tom Moore on 2022-09-26 04:41:59 UTC
Comments