Robotics StackExchange | Archived questions

Transient odometry rotation error (robot_localization)

TLDR:

I'm working on a differential drive base with encoders for odometry and an IMU. I'm working on fusing these data sources using robot_localization in an effort to get a more accurate odometry estimate so that our localization will perform better. Outside of some reasonable noise, all of our data tracks desired yaw velocity pretty well:

rqt plot

However, we are experiencing a significant transient yaw error when starting or stopping a turn. There is not a corresponding error in linear motion.


The long version:

Assuming that everything is functioning as it should be, we should be able to rotate the robot with laser scan persistence set to a couple of seconds in rviz, and see the environment remain static relative to the robot.

Ideally, when rotating, the lidar scan (with persistence) should look similar to this static lidar scan: static

When rotating and at steady state, the rotating lidar scan looks like this (which is pretty close to desired): steady state

When we start or stop rotation, however, it looks like this: transient

This is significantly worse than desired. It leads to localization error when turning around corners and the like, which in turn degrades our navigation.

Here's a bagfile for your perusal:

Bag File

Topic names are namespaced by robot name (in this case, p1_001), so:

/p1_001/mobile_base_controller/cmd_vel is the commanded velocity

/p1_001/scan is the lidar scan

/p1_001/mobile_base_controller/odom is the raw odometry

/p1_001/mobile_base_controller/imu is the raw IMU data

/p1_001/odom is the fused output from robot_localization

I would greatly appreciate hearing your thoughts on the matter!

EDIT:

Here are launch and config files for relevant nodes.

ROS control launch file; launches the hardware interface, diff drive controller, and IMU controller

ROS control config, mostly sensor covariances

Robot Localization Config file

Asked by Bradley Powers on 2015-10-19 19:08:01 UTC

Comments

Any conclusion on this?

Asked by Akif on 2016-02-08 02:02:12 UTC

Answers

Can you post your launch file?

Looking at your plots, it looks as if your IMU registers the turn long before your wheel encoder odometry. Additionally, it looks as if the filtered output is tracking much more closely to your wheel encoder odometry than to your IMU. I'd have to look at their relative covariances (which I realize are available in the bag file) to determine why you're seeing a delay in the filter response, but my first checks would be to (a) make sure that the angular velocity in your controller is reported as soon as the robot starts turning (there's some lag between the two, if I'm reading your plot correctly), and (b) make sure that the relative covariances between your IMU angular velocity and odometry angular velocity are set correctly, i.e., I'd expect a much lower variance on yaw velocity for the IMU than for your wheel encoders. Finally, you might also want to try increasing your initial_estimate_covariance for yaw velocity. This will help the filter converge faster when your robot first starts.

EDIT 1

Still looking at this. A few things:

  1. Did you intentionally turn the yaw off for your IMU0 config?

    imu0_config: [
        false, false, false,  # x, y, z
        false, false, false,  # roll, pitch, yaw
        false, false, false,  # vx, vy, vz
        false, false, true,   # vroll, vpitch, vyaw
        true, true, true      # ax, ay, az
    ]
    

Just want to make sure that was right.

  2. Have you had a chance to look into the delay in your wheel encoder odometry?
  3. Regardless, here's a sample odometry message:

    header: 
      seq: 1997
      stamp: 
        secs: 1445291702
        nsecs: 645665414
      frame_id: odom
    child_frame_id: p1_001/base_link
    pose: 
      pose: 
        position: 
          x: -0.00496491246256
          y: 2.86013970781e-05
          z: 0.0
        orientation: 
          x: 0.0
          y: 0.0
          z: -0.158302775866
          w: 0.987390617311
      covariance: [0.001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001]
    twist: 
      twist: 
        linear: 
          x: 0.00780897589398
          y: 0.0
          z: 0.0
        angular: 
          x: 0.0
          y: 0.0
          z: -1.10046405246
      covariance: [0.0001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001]
    

...and here's a sample IMU message:

    header: 
      seq: 1965
      stamp: 
        secs: 1445291701
        nsecs: 25784241
      frame_id: p1_001/imu
    orientation: 
      x: -0.608557879925
      y: -0.377054542303
      z: 0.5998955369
      w: -0.357228845358
    orientation_covariance: [0.01, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.01]
    angular_velocity: 
      x: -0.00852211564779
      y: -0.00372842559591
      z: 0.0145142283291
    angular_velocity_covariance: [0.01, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.01]
    linear_acceleration: 
      x: -8.96927833557
      y: -0.289698392153
      z: -0.223259314895
    linear_acceleration_covariance: [0.2, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.2]

Right now, you're fusing yaw velocity from both the IMU and the wheel encoders. One problem is that your covariance for your wheel encoder odometry's yaw velocity is an order of magnitude smaller than the yaw velocity from your IMU. Kalman Filters really come down to weighted averaging, so if you have two data sources, and one has an error that is much smaller than the other, the one with the lower error will be trusted more by the filter. Pair this with what appears to be a delay in your wheel encoder odometry, and you might have a reason for the perceived lag.
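The weighted-averaging intuition can be sketched in a few lines of Python. This is a hypothetical one-dimensional fusion step, not robot_localization's actual implementation; the variances are taken from the sample messages above (0.001 for the encoders' yaw velocity, 0.01 for the IMU's):

```python
def fuse(z1, var1, z2, var2):
    """Minimum-variance fusion of two scalar measurements.

    Each measurement is weighted by the inverse of its variance,
    so the source with the smaller variance dominates the result.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    estimate = (w1 * z1 + w2 * z2) / (w1 + w2)
    variance = 1.0 / (w1 + w2)
    return estimate, variance

# At the start of a turn: wheel odometry still reports ~0 rad/s
# (variance 0.001, per the sample message), while the IMU already
# sees 0.5 rad/s (variance 0.01).
est, var = fuse(0.0, 0.001, 0.5, 0.01)
# The over-confident odometry pulls the estimate toward zero,
# which matches the lag observed in the filter output.
```

With the variances swapped (0.01 for the encoders, 0.001 for the IMU), the same fusion tracks the IMU instead.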

  4. On that front, I would set the initial_estimate_covariance parameter. Use the default values in the template launch file, except for the variables you're directly measuring, which I would make an order of magnitude larger than your largest error for any sensor measuring that variable.

  5. Try the version that's in the repo (just updated). The way it was before, there was a one-spin-cycle delay between measurement processing and output. There are other improvements I can make there, but give that a shot first.
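The initial_estimate_covariance suggestion above can be sketched as follows. robot_localization takes this parameter as a flattened 15x15 matrix; the specific values here (1e-9 defaults, 0.1 for yaw velocity) are illustrative assumptions, not the template's exact numbers:

```python
import numpy as np

# robot_localization state vector order:
# x, y, z, roll, pitch, yaw,
# vx, vy, vz, vroll, vpitch, vyaw,
# ax, ay, az
P0 = np.diag([1e-9] * 15)   # small default initial uncertainty (assumed)
VYAW = 11                   # index of yaw velocity in the state vector
P0[VYAW, VYAW] = 0.1        # an order of magnitude above the largest
                            # sensor variance for vyaw (0.01 in the IMU msg)

# 225 values, row-major, for the YAML/launch parameter
initial_estimate_covariance = P0.flatten().tolist()
```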

Asked by Tom Moore on 2015-10-22 07:54:04 UTC

Comments

Relevant launch/config files uploaded. Let me know if anything else would be relevant.

Asked by Bradley Powers on 2015-10-22 08:17:09 UTC

I'll try to check this out tonight. Sorry for the delay.

Asked by Tom Moore on 2015-11-10 12:30:08 UTC

Much appreciated.

Asked by Bradley Powers on 2015-11-10 14:53:36 UTC

I have a similar problem, which comes from the fact that although my robot is a differential robot, it does not have simply two wheels but four tracks instead (like a tank). As a result, depending on the floor material, we experience different amounts of error in the odometry.

Asked by mohsen1989m on 2015-11-18 08:12:16 UTC

I guess integrating IMU data is not a promising solution when your robot is slow; I tried exactly the same thing and it didn't work. A better approach could be to use laser scanner data and perform a scan match. There is a node called laser_scan_matcher, but I couldn't install it...

Asked by mohsen1989m on 2015-11-18 08:15:07 UTC

In this case, though, you can see in the MATLAB plots that the EKF output is tracking much closer to the wheel encoder odometry than to the IMU. I'd like to see what happens if that is addressed (along with the other ideas I suggested).

Asked by Tom Moore on 2015-11-18 08:30:57 UTC

You mean that you want to determine how much your EKF output should rely on the IMU data?

Asked by mohsen1989m on 2015-11-18 08:57:08 UTC

If you mean this laser_scan_matcher, you unfortunately need to build it from source. RE: IMU data: The above plots show the EKF tracks encoder data more tightly than IMU data, but the IMU data responds faster. Tom wants to see EKF data when this is swapped.

Asked by etappan on 2015-11-18 09:27:00 UTC

For linear speed, let ax and ay be fed only by the IMU, then tune the covariance matrix values for ax and ay. For angular velocity it is not that easy, since there is no such measurement as ayaw, so you may get better results by reading vyaw only from the IMU and then adjusting the corresponding covariance value.
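This suggestion could be expressed as a robot_localization config fragment like the following; the specific true/false choices are illustrative assumptions, following the 15-variable order from the package docs:

```yaml
# State order: x, y, z, roll, pitch, yaw,
#              vx, vy, vz, vroll, vpitch, vyaw,
#              ax, ay, az
imu0_config:  [false, false, false,
               false, false, false,
               false, false, false,
               false, false, true,    # vyaw taken only from the IMU
               true,  true,  false]   # ax, ay taken only from the IMU
odom0_config: [false, false, false,
               false, false, false,
               true,  true,  false,   # vx, vy from the wheel encoders
               false, false, false,
               false, false, false]
```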

Asked by mohsen1989m on 2015-11-18 10:01:30 UTC