
How do I compute the covariance matrix for an orientation sensor? [closed]

asked 2011-03-16 06:52:10 -0600

updated 2011-04-26 06:27:24 -0600

If I'm writing a driver for an orientation sensor that outputs quaternions, how do I calculate the orientation covariance matrix? I don't even know the units for covariance in the context of the quaternion's covariance matrix (rad^2 maybe?). Obviously, I'm lost.

The purpose of the driver (at least in my use-case) is for using it with robot_pose_ekf so I believe I need a valid cov matrix for robot_pose_ekf to do its magic...


Closed for the following reason: the question is answered, right answer was accepted by allenh1
close date 2017-08-10 12:47:26.739721

Comments

See this for hints about the limitations of robot_pose_ekf and alternatives: http://answers.ros.org/question/39790/robot_pose_ekf-and-gps/

demmeln (2014-02-11 00:39:12 -0600)

2 Answers


answered 2011-03-16 14:19:58 -0600

Chad Rockey

Shorter answer: I had to dig through the robot_pose_ekf code, and the default covariance for the 3x3 IMU orientation matrix is just (0.01 degrees)^2 converted into radians along the diagonal, i.e. about (0.00017 rad)^2 ≈ 3e-8 rad^2. You should set this field to your roll/pitch/yaw covariance in rad^2.
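To make that concrete, here's a minimal sketch (not from the original answer) of filling the row-major 3x3 orientation covariance of a sensor_msgs/Imu message with that default, assuming your per-axis standard deviation really is 0.01 degrees:

```python
import math

# robot_pose_ekf's default guess: 0.01 degrees standard deviation per axis.
# Convert to radians and square to get a variance in rad^2.
std_rad = math.radians(0.01)   # ~0.000175 rad
var = std_rad ** 2             # ~3.05e-8 rad^2

# Row-major 3x3 covariance over (roll, pitch, yaw), as expected in
# sensor_msgs/Imu.orientation_covariance. Off-diagonal terms are zero
# under the assumption that the axes are uncorrelated.
orientation_covariance = [
    var, 0.0, 0.0,
    0.0, var, 0.0,
    0.0, 0.0, var,
]
```

If your sensor doesn't report one of the axes, follow the advice in the comments below and set that diagonal entry to a very large value instead.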


Long answer: the units here don't look right. The default that robot_pose_ekf fills in is listed as 0.01 degrees/sec. This is likely because robot_pose_ekf only filters on the differences between the last message and the current message. So you may actually need to send in the covariance of your angular velocities to get the best results, since this value gets multiplied by dt^2 later. However, the code clearly reads orientation_covariance and NOT angular_velocity_covariance.


Original question answer: This thesis seems to lay out the math for getting the necessary covariance into roll/pitch/yaw from quaternions; it's on physical page 69 for reference. But I doubt you have your covariance in quaternion space. To get the quaternion covariances, you should be able to just invert the math there.
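For the forward direction (quaternion covariance to roll/pitch/yaw covariance), a common approach is first-order propagation through the Jacobian of the quaternion-to-Euler map, C_rpy = J · C_q · Jᵀ. The sketch below (my own illustration, not the thesis's derivation; `quat_to_rpy` and `quat_cov_to_rpy_cov` are hypothetical helper names) estimates J numerically:

```python
import math

def quat_to_rpy(q):
    # q = (x, y, z, w) in ROS component order; returns (roll, pitch, yaw).
    x, y, z, w = q
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2.0 * (w * y - z * x))))
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return (roll, pitch, yaw)

def quat_cov_to_rpy_cov(q, cov_q, eps=1e-6):
    # First-order propagation C_rpy = J * C_q * J^T, with the 3x4 Jacobian J
    # of the quaternion-to-Euler map estimated by central differences.
    # (Normalization effects are second order for small perturbations.)
    J = [[0.0] * 4 for _ in range(3)]
    for j in range(4):
        qp = list(q); qp[j] += eps
        qm = list(q); qm[j] -= eps
        rp, rm = quat_to_rpy(qp), quat_to_rpy(qm)
        for i in range(3):
            J[i][j] = (rp[i] - rm[i]) / (2.0 * eps)
    # C_rpy[i][k] = sum over j, l of J[i][j] * cov_q[j][l] * J[k][l]
    return [[sum(J[i][j] * cov_q[j][l] * J[k][l]
                 for j in range(4) for l in range(4))
             for k in range(3)] for i in range(3)]
```

Near the identity quaternion the Jacobian is roughly 2·I on the x/y/z components, so a quaternion variance of s per component maps to about 4s per Euler angle. Note this linearization breaks down near the pitch = ±90° singularity.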


Comments

Yeah, generally the covariance won't change on an IMU sensor very much. You can pull it from the datasheet of the individual sensors for the most part. So, in ROS, that 0.01 is a pretty solid guess. If your IMU doesn't have that axis, set it to something obscenely high (MAX_DBL works well).
mjcarroll (2011-03-16 14:27:49 -0600)
Chad, strangely enough my CH Robotics UM6 IMU sensor does in fact output a cov matrix in quaternion space. I'll attempt to implement the transform that thesis points to. Thanks!
JeffRousseau (2011-04-05 01:59:55 -0600)
Chad, for your "Long answer" I'm a bit confused... are you saying the robot_pose_ekf is using the orientation cov matrix as if it were the angular velocity cov matrix? That sounds like a bug!
JeffRousseau (2011-04-05 02:02:12 -0600)
It sure does! They really take the change in orientation (new-old) over a certain dt, resulting in an angular velocity. My guess is that you should try the node by just feeding in the approximate covariances of your angular velocities and see if it works well.
Chad Rockey (2011-04-05 14:46:24 -0600)
After much discussion with my roommate, we were a bit dubious as to how "Kalmany" ROS's EKF node really is. It actually seems like it is closer to a sensor blending using a weighted average based on the covariance values you feed in. Covariance on integrated sensors becomes tricky (and incorrect).
mjcarroll (2011-04-05 15:15:54 -0600)
Correct. It uses weighted zero-mean Gaussians to merge sensors. However, the covariance does not grow unbounded. As a design choice, the filter they implemented actually calculates dx, dy, dz, droll, dpitch, dyaw and then propagates that as "truth". It calculates the differences between messages [next]..
Chad Rockey (2011-04-05 15:20:56 -0600)
and then uses that as a relative increment from the last message. The only case where this does not hold is IMU roll and pitch. So robot_pose_ekf is really only suited to a PR2-like robot needing better odometry for SLAM/AMCL, and not useful for a robot needing absolute odometry.
Chad Rockey (2011-04-05 15:22:52 -0600)
If one were to really compute the covariances for this method, it would need to be something like [thisMsg - lastMsg]/dt with the appropriate variance propagation techniques applied. However, as Kalman filtering typically goes, you end up tuning the covariance values until you like the results.
Chad Rockey (2011-04-05 15:25:32 -0600)
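To illustrate the propagation the comment above describes (a hypothetical sketch with made-up numbers; `rate_variance` is not a function from robot_pose_ekf): treating the two orientation measurements as independent, the variance of their difference is the sum of the variances, and dividing by dt scales that by 1/dt^2:

```python
# Variance propagation for rate = (this_msg - last_msg) / dt, assuming the
# two measurements are independent so Var(a - b) = Var(a) + Var(b), and the
# division by the scalar dt scales the variance by 1/dt^2.
def rate_variance(var_this, var_last, dt):
    return (var_this + var_last) / dt ** 2

# Example: two yaw readings with 1e-4 rad^2 variance each, 0.02 s apart,
# give a rate variance of (2e-4) / (4e-4) = 0.5 rad^2/s^2.
v = rate_variance(1e-4, 1e-4, 0.02)
```

Note how small dt inflates the rate variance dramatically, which is part of why differencing orientations is a noisy way to obtain angular velocity.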

answered 2017-08-09 11:05:07 -0600

Link

I looked into this and found an implementation of a quaternion-covariance-to-RPY-covariance conversion in the MIRA project, in YawPitchRoll.h (documentation). See quaternionCovToYawPitchRollCov on line 318.


Comments

Funny... I ended up here looking for resources to help me debug exactly that code ;)

Christof Schroeter (2018-01-27 20:22:02 -0600)
