How to best integrate imprecise GPS into localization pipeline?
Hi, we've started integrating GPS into our localization pipeline using robot_localization. Reading through all of the available sources, I got the impression that it is fairly easy to set up if you have a good GPS and a good compass.
TL;DR: Neither of these is the case for our robots. We can't rely on the compass, and the GPS signal may be bad. But we want smooth Earth-referenced odometry (as smooth as the quality of the absolute measurements allows). This is a general question where I try to figure out the possible approaches. Although I do have a concrete implementation and configuration at hand, I don't think it's important to share it at this point. But of course I can if somebody finds it useful.
The magnetometer from the IMU works at least somewhat, but only after doing a 360° turn with the robot to calibrate the magnetic bias. We've already resigned ourselves to doing this from time to time and saving the calibrations. On some robots they stay valid for a long time, on others not. Importantly, the unbiased mag measurements are only valid in the X-Y plane in which we calibrated the magnetometer. When the robot has non-zero pitch/roll, the magnetometer measurements are nonsense. Doing a proper 3D calibration does not seem reasonable, as the robots are research platforms where people add and remove parts on a daily basis.
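For context, the 360° turn calibration we do essentially boils down to estimating the hard-iron offset in the X-Y plane. Here is a minimal sketch of that idea (hypothetical code, not our actual implementation): fit a circle to the mag samples collected during the turn and take its center as the bias.

```python
import numpy as np

def hard_iron_bias(mag_xy: np.ndarray) -> np.ndarray:
    """Estimate the hard-iron bias from magnetometer X-Y samples
    collected during a full 360-degree turn on flat ground.

    Fits the circle x^2 + y^2 + D*x + E*y + F = 0 by linear least
    squares; the center (-D/2, -E/2) is the bias to subtract from raw
    readings. As noted above, the result is only valid in the plane
    the robot turned in.
    """
    x, y = mag_xy[:, 0], mag_xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([-D / 2.0, -E / 2.0])
```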
With GPS, we do everything we can to get precise measurements, but we cannot use an RTK base placed nearby. We subscribe to NTRIP corrections over LTE, put the GPS antennas on a 1 m pole to get them away from the robot body, and add a metal ground plane under each antenna. Under clear sky, the precision gets close to a few centimeters. However, the robots are expected to drive through any outdoor location, so we also need to localize in forests, where the fix precision jumps between decimeters and hundreds of meters. What's worse, we've caught the receiver reporting a few centimeters of precision while being meters off.
All of this put together, running the standard dual EKF with navsat_transform_node results in very jumpy odometry. On startup, the utm frame has to converge to the odometry frame, so the fused trajectory is complete nonsense for the first few meters, even if I set the initial yaw covariance to a large value. I somehow expected the compass would help with this, but even when we supply a nicely working compass measurement, the yaw still seems to be off when GPS measurements start being integrated. Once yaw stabilizes after this initial period, all the GPS fix jumps around the robot have a very bad effect on the yaw estimate - I guess this is due to the holonomic motion model wired into robot_localization? I.e., the filter thinks the robot shifted 1 meter to the left, so it concludes it must have done so by changing its yaw and driving forward... I'm a bit confused that this happens even when integrating a GPS fix with covariances like 1000.0.
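For intuition about how badly a lateral fix jump can corrupt yaw: if the filter explains a sideways jump as forward motion along a rotated heading, the implied yaw change is roughly atan2(jump, distance travelled). This is only my back-of-the-envelope model of the effect, not the actual robot_localization update equations:

```python
import math

def apparent_yaw_shift(lateral_jump_m: float, baseline_m: float) -> float:
    """Rough intuition: yaw change (degrees) implied by explaining a
    lateral GPS jump as rotated forward motion over a given baseline.
    E.g. a 1 m jump over 10 m of travel implies roughly a 5.7 deg
    yaw shift. A simplification, not the real EKF math.
    """
    return math.degrees(math.atan2(lateral_jump_m, baseline_m))
```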
So I thought I'd discard all of the really bad GPS measurements, and maybe inflate the covariance of the rest. Right now I'm throwing away all measurements with covariance higher than 2.0 and inflating the rest with some constant, squaring it, or whatever comes to my mind. My conclusion so far: if I inflate them too much, it takes the filter very long to fix the initial yaw error, but once that happens, the odometry looks pretty nice (not sure how long it would take to correct odometry drift, though). If I inflate only a bit, yaw is still affected too much by the fix jumps to call the result a smooth odometry.
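In case it helps the discussion, the gating/inflation scheme I described is essentially this (a sketch with hypothetical thresholds, stripped of the ROS plumbing; the numbers are what I happen to use, not recommendations):

```python
from typing import Optional

def gate_and_inflate(variance_xy: float,
                     reject_above: float = 2.0,
                     inflation: float = 4.0) -> Optional[float]:
    """Drop GPS fixes whose reported X/Y variance exceeds a threshold
    and inflate the variance of the rest before feeding them to the
    EKF. Returns the variance to publish, or None to skip the fix.
    Both parameters are ad-hoc and would need tuning per receiver.
    """
    if variance_xy > reject_above:
        return None
    return variance_xy * inflation
```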
How do other people cope with this? Thanks for any hints.
Asked by peci1 on 2022-08-19 11:41:21 UTC
Answers
What's worse, we've caught the receiever reporting a few centimeter precision while being meters off.
All other issues aside, this is a problem. Your sensor is lying to the filter about its accuracy.
I'd really need to see your config and some sample sensor messages to comment much more, but just so we're on the same page, you don't have to fuse yaw from the mag into your EKF state estimate. navsat_transform_node just needs it to obtain your pose in the UTM frame. Having said that, if you lack an absolute yaw source (i.e., if you are just using wheel odom and IMU and your heading estimate is drifting without bound), then you'll get into cases where your robot will appear to be facing in a direction that differs from its direction of motion.
One thing I've wanted to do for a long time is obtain heading by differentiating GPS positions. That would eliminate the need for mag data, though it would require the robot to move before a heading was generated.
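The idea sketched above could look something like this (a hypothetical illustration, not robot_localization code): take two consecutive fixes already projected into a local east/north frame such as UTM, and only report a heading once the robot has moved far enough for the bearing to be meaningful.

```python
import math
from typing import Optional

def heading_from_fixes(e0: float, n0: float, e1: float, n1: float,
                       min_dist_m: float = 0.5) -> Optional[float]:
    """Heading (rad, CCW from east, ENU convention) from two
    consecutive GPS positions projected to a local east/north frame
    (e.g. UTM). Returns None if the robot hasn't moved at least
    min_dist_m, since the bearing would be dominated by fix noise.
    min_dist_m is an assumed tuning parameter.
    """
    de, dn = e1 - e0, n1 - n0
    if math.hypot(de, dn) < min_dist_m:
        return None
    return math.atan2(dn, de)
```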
Answered by Tom Moore on 2023-03-09 09:29:10 UTC
Comments