
Dynamically assigning covariance values to Odometry node [closed]

asked 2014-08-05 12:16:46 -0600 by l0g1x

updated 2014-11-16 17:37:44 -0600

So I am working on using robot_pose_ekf as my only source of localization. Gmapping unfortunately is not applicable to my scenario, since it only localizes the robot when there is scan data (meaning objects within laser range). My odometry has a pretty big drift: if I drive straight with a joystick for 10 meters, my odom says I am 2 meters to the left of where I should be.

Obviously this is not my most accurate sensor input into robot_pose_ekf, so its covariance values should be higher than those of sensors I believe are more accurate. Now, my question is: I know you can statically assign covariance values in your odometry node to something like 0.1 or 0.001, but how do I go about assigning covariance values that vary dynamically? How do I derive the equations that tell me what the covariance is over a certain time? From my understanding, the longer the robot travels, the higher the covariance for my odometry should be, since the drift becomes greater and greater.

I know the gps_common package that converts the GPS /fix to an odometry message assigns its covariances dynamically like this, but I have no idea how to do that myself.
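My understanding (hedged, since I am going from reading the gps_common source) is that it copies the 3x3 position covariance reported in the GPS fix into the top-left block of the odometry message's 6x6 pose covariance on every update, so the odometry covariance tracks the receiver's own accuracy estimate. A minimal sketch of that idea, with the ROS message types replaced by plain arrays (all names here are illustrative):

```cpp
#include <array>
#include <cassert>

// Sketch: copy a 3x3 GPS position covariance (row-major) into the
// top-left x/y/z block of a 6x6 pose covariance (row-major, the layout
// used by nav_msgs/Odometry). Remaining entries stay zero.
std::array<double, 36> fix_to_pose_covariance(const std::array<double, 9>& fix_cov)
{
    std::array<double, 36> pose_cov{};  // zero-initialised
    for (int row = 0; row < 3; ++row)
        for (int col = 0; col < 3; ++col)
            pose_cov[row * 6 + col] = fix_cov[row * 3 + col];
    return pose_cov;
}
```

In a real node you would run this in the GPS fix callback and stamp the result into the outgoing Odometry message each time.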

My second question: let's say the robot has traveled 100 meters and the covariance values are really large. I would assume that at a certain point my wheel odometry would effectively become obsolete, because the difference in pose is so great (say GPS pose vs. odom pose) and the variance would be only minimally weighted by robot_pose_ekf when filtering all the data. Is there a way to make odometry a more reliable sensor source again after a certain period of time (i.e., make the covariance value go from 5.0 back to 0.1, just like it was in the beginning)? I know Tom Moore said he would respond to this question, but other suggestions are appreciated as well.


EDIT: Currently my covariance values are:

[0] = 0.1;
[7] = 0.1;
[14] = 1000000000000.0;
[21] = 1000000000000.0;
[28] = 1000000000000.0;
[35] = 0.01;
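For reference, those indices sit on the diagonal of the row-major 6x6 pose covariance in nav_msgs/Odometry, with the state ordered (x, y, z, roll, pitch, yaw): index i*6+i for state i. A small helper that builds such a matrix (illustrative, not part of any ROS API; off-diagonal terms stay zero, i.e. the errors are assumed uncorrelated):

```cpp
#include <array>
#include <cassert>

// Build a diagonal 6x6 pose covariance (row-major) from the six
// per-axis variances, ordered (x, y, z, roll, pitch, yaw).
std::array<double, 36> make_diagonal_pose_covariance(
    double x, double y, double z, double roll, double pitch, double yaw)
{
    std::array<double, 36> cov{};
    const double diag[6] = {x, y, z, roll, pitch, yaw};
    for (int i = 0; i < 6; ++i)
        cov[i * 6 + i] = diag[i];
    return cov;
}
```

The huge values at indices 14, 21, and 28 effectively tell the filter to ignore z, roll, and pitch from this sensor.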

But the odom_combined tf frame from the EKF is still pretty far off.

Here is the bag played back with my odom data (left) and GPS data (right); I inverted the axis for visuals. [image]

So how can I calculate what my static covariance values should be from this?


Closed for the following reason: the question is answered, right answer was accepted by l0g1x
close date 2015-04-02 21:54:32.746256

2 Answers

1

answered 2014-08-05 12:59:49 -0600 by Tom Moore

updated 2014-08-05 14:08:48 -0600

It really depends on how your state estimation software works. Ideally, you'd want to feed a filter velocity estimates and let it use its kinematic model to integrate those velocity estimates for you. In that case, the velocities can (usually) have static covariances.

However, I believe robot_pose_ekf works a bit differently, as it doesn't integrate velocities directly. For reasons I won't go into here, I believe it takes the difference between a given sensor measurement value and the last value, and adds that to the current overall state estimate, and produces a new measurement that way (robot_pose_ekf maintainers, please correct me if I'm wrong). In that case, I think what you'd really want is for that measurement to have a covariance that is equal to the last state estimate's covariance plus the small covariance generated from the current measurement delta. I'm not sure if that's what robot_pose_ekf is doing or not.

If you'd rather integrate the velocities and not worry about the covariances of absolute position measurements (at least for odometry), I would suggest using the ekf_localization node in the robot_localization package.

EDIT in response to author's edit: And what does the data look like when combined? Again, depending on how robot_pose_ekf functions (and I don't know the answer to that), using a static covariance may or may not be appropriate. If you really want to make it grow with distance traveled, you can do a rough approximation via something like

// Rough approximation: grow the X/Y variances with distance traveled,
// projected onto the current heading.
double dist_traveled = std::hypot(cur_x_pos - last_x_pos, cur_y_pos - last_y_pos);
x_covariance += dist_traveled * error_per_meter * std::cos(yaw);
y_covariance += dist_traveled * error_per_meter * std::sin(yaw);

Might wanna double-check my math there, but it ought to be something like that. Note that this assumes a robot navigating in 2D, and only really addresses the variances for X and Y.
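To make the sketch above concrete, here is a self-contained version wrapped in a small accumulator. One deliberate tweak over the sketch: taking the absolute value of the cos/sin terms so the variances never shrink when the heading makes them negative. error_per_meter is a tuning constant you would estimate from drift tests (e.g. 2 m of sideways drift over 10 m forward suggests roughly 0.2):

```cpp
#include <cmath>
#include <cassert>

// Accumulates X/Y pose variances as the robot travels, following the
// distance-based growth rule above. std::fabs keeps the variances
// monotonically increasing regardless of heading.
struct OdomCovariance
{
    double x_cov = 0.0;
    double y_cov = 0.0;
    double error_per_meter;  // tuning constant, estimated from drift tests

    explicit OdomCovariance(double epm) : error_per_meter(epm) {}

    void update(double last_x, double last_y,
                double cur_x, double cur_y, double yaw)
    {
        const double dist = std::hypot(cur_x - last_x, cur_y - last_y);
        x_cov += dist * error_per_meter * std::fabs(std::cos(yaw));
        y_cov += dist * error_per_meter * std::fabs(std::sin(yaw));
    }
};
```

Calling update() once per odometry message keeps the published covariance growing with distance rather than with time spent standing still.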


Comments

I am simultaneously trying to get the robot_localization package to work as well, except I don't understand how to get it all working even after the tutorials. How can you set one covariance matrix in the launch file for all sensor inputs? I am getting really erratic data from its output.

l0g1x ( 2014-08-05 13:32:06 -0600 )
1

You don't manually set covariances with it. Feel free to ask a new question about it, or shoot me an e-mail (see the package wiki page) and we'll sort it out and then post the question and answer.

Tom Moore ( 2014-08-05 13:47:28 -0600 )
3

answered 2014-08-05 12:39:30 -0600 by ahendrix

The real value of the odometry input to a position estimator is not the absolute position (which becomes progressively more useless over time), but rather the velocity estimates in the odometry, which give a rough difference between the previous estimated position and the current position. These can have statically assigned covariances.

If you understand your system particularly well, or if it's particularly nonlinear, it may be wise to model the velocity covariance as a function of the velocity.
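As a hedged illustration of modeling velocity covariance as a function of velocity: a floor term plus a component quadratic in speed, so the measurement is never treated as perfect and uncertainty grows at higher speeds (where wheel slip typically worsens). Both constants are hypothetical tuning values, not anything prescribed by robot_pose_ekf:

```cpp
#include <cmath>
#include <cassert>

// Illustrative model: velocity variance = floor + k * speed^2.
// floor_var keeps the variance strictly positive at standstill;
// per_speed_sq scales the speed-dependent term. Both are made-up
// tuning constants for demonstration.
double velocity_variance(double speed,
                         double floor_var = 0.001,
                         double per_speed_sq = 0.01)
{
    return floor_var + per_speed_sq * speed * speed;
}
```

The constants would be calibrated against observed drift at a few representative speeds on your own platform.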


Question Tools

2 followers

Stats

Asked: 2014-08-05 12:16:46 -0600

Seen: 1,744 times

Last updated: Nov 16 '14