# Localizing using AR tags and wheel odometry

Hello everyone,

My team and I are working on a project where we need to localize our robots using AR tags (ar_track_alvar). Our robots will not always see tags, so we are using wheel odometry to track motion during these periods. We are trying to accomplish this through sensor fusion using the robot_pose_ekf or robot_localization package. We have tried both, but neither has given us the desired result.

We have multiple tags in our environment which all have static transforms with respect to the map frame. The robot looks for tags and finds a transform from the tag to the robot. By extension, this yields a transform from the map to the robot. This map->robot transform is converted to an odometry message using only pose measurements. The covariance for the pose is set to:

```
| 0.1  100  100  100  100  100 |
| 100  0.1  100  100  100  100 |
| 100  100  100  100  100  100 |
| 100  100  100  100  100  100 |
| 100  100  100  100  100  100 |
| 100  100  100  100  100  0.1 |
```


The twist covariances are all set to 10000. The frame_id for this tag-based odom message is map.
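To make the covariance layout concrete, here is a minimal sketch of how such a pose covariance could be built as the flat, row-major 36-element array that a nav_msgs/Odometry message expects. The helper name and the choice of variance values are illustrative, not from the original post; note that it keeps every off-diagonal term at zero, unlike the matrix above.

```python
def diagonal_covariance(var_x, var_y, var_z, var_roll, var_pitch, var_yaw):
    """Build a row-major 6x6 covariance with zeros off the diagonal."""
    cov = [0.0] * 36
    for i, var in enumerate((var_x, var_y, var_z, var_roll, var_pitch, var_yaw)):
        cov[i * 6 + i] = var
    return cov

# Variables we trust (x, y, yaw) get a small variance; the ignored ones get a
# large variance, and every off-diagonal term stays at zero.
pose_cov = diagonal_covariance(0.1, 0.1, 1e4, 1e4, 1e4, 0.1)
# tag_odom.pose.covariance = pose_cov  # assign to the Odometry message's field
```

(As discussed further down the thread, with robot_localization the preferred way to ignore a variable is the per-sensor config vector, not a large variance.)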

The wheel odometry is very standard; it is based on the turtlebot odometry. One note: when the first tag estimate is received, a transform is broadcast which places the wheel odom frame at the initial location of the robot with respect to the map.
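The math behind that one-time transform can be sketched in 2D as follows. This is a hand-rolled illustration (not from the poster's code): given the first tag fix (map->base_link) and the current wheel odometry pose (odom->base_link), the map->odom transform to broadcast is the composition of the first with the inverse of the second.

```python
import math

def compose(a, b):
    """Compose two 2D poses (x, y, theta): apply b in a's frame."""
    ax, ay, ath = a
    bx, by, bth = b
    return (ax + math.cos(ath) * bx - math.sin(ath) * by,
            ay + math.sin(ath) * bx + math.cos(ath) * by,
            ath + bth)

def invert(p):
    """Invert a 2D pose."""
    x, y, th = p
    c, s = math.cos(th), math.sin(th)
    return (-c * x - s * y, s * x - c * y, -th)

# First tag fix gives map->base_link; wheel odometry gives odom->base_link.
map_T_base = (2.0, 1.0, math.pi / 2)   # example values
odom_T_base = (0.0, 0.0, 0.0)          # odometry starts at its own origin

# map->odom = (map->base) o (odom->base)^-1; broadcast this once via tf.
map_T_odom = compose(map_T_base, invert(odom_T_base))
```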

Using both of the aforementioned packages and setups, and keeping the robot motionless, we observe that the filter estimates become increasingly erratic and tend to infinity in no particular direction. Removing the wheel odometry and keeping only the AR tag odometry yields the same effect. The AR tag odometry messages remain completely constant to three significant figures.

Thus we conclude that our consistent tag odometry measurements are somehow causing the Kalman filters to behave erratically and tend to infinity. This happens even when the wheel odometry measurements are fused as well. Can anyone explain why this might be, and offer any suggestions to fix it? I would be happy to provide any extra information I haven't included.

Side note: As a naive test, we also set all of the covariances to large values and observed that no matter how we moved the robot the differences in the filter outputs were tiny (+/- 0.01), more likely drift than an actual reading.

EDIT:

Thank you for your advice, Tom. We have already combed through the wiki, and we did see the part about inflating covariances being unhelpful. We were confused, though, because we didn't know what to set them to if not an arbitrarily large number. We did use the config to turn them off; you can see this in the posted config file.

We do not have an IMU. We are using a very ragtag group of robots (it's what our school had lying around), and we are lucky we even have wheel odometry!

Our AR ...


Just a word of caution: I'd turn off debug mode if you value disk space. :)

( 2015-04-15 14:11:59 -0600 )

Also, can you post a sample odom and ar_tag_odom message?

( 2015-04-15 14:13:03 -0600 )


My response is for robot_localization. Have a read through the wiki, but pay particular attention to this page. In particular, note that inflating covariances for variables you want to ignore is not going to produce good results for the state estimation nodes in robot_localization. Instead, use the configuration parameters to fuse only the variables you want.

Having said that, you need to make sure that you are accounting for all of your pose (position and orientation) variables. If you don't account for any one of them, you'll see the covariances explode, and your state estimate will act unpredictably. For position, you can either have absolute measurements (X, Y, and Z) or velocity measurements (X velocity, Y velocity, and Z velocity). For orientation, you will want to have at least one source of absolute measurement data.

Some questions for you:

1. Do you have an IMU on your robot?
2. Do your AR code-based measurements yield position only, or do they also produce orientation?
3. Is this robot only operating in 2D? If it is, then you should either set two_d_mode to true, or let your AR code measurements produce estimates for Z, roll, and pitch. Make them 0, and make the covariances very small.

Your setup is pretty much identical to a system with odometry and GPS, and that setup works well with robot_localization. Without knowing more, what I would do is this:

1. Run one instance of ekf_localization_node with the world_frame parameter set to the same value as the odom_frame parameter. In this instance, fuse your odometry and IMU if it's available. If you only have odometry, then don't run this instance. Instead, make sure that your odometry source is also producing an odom->base_link (or whatever your frames are) transform.
2. Run a second instance of ekf_localization_node, and fuse the odometry, IMU (again, if available), and AR measurements. Set the world_frame parameter to the same value as the map_frame parameter.
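The map-frame instance from step 2 could be configured along these lines. This is only a sketch: the topic names (/odom, /ar_tag_odom) and node name are assumptions, while the parameter names and the 15-element boolean config vector (x, y, z, roll, pitch, yaw, their velocities, then linear accelerations) follow robot_localization's documented conventions.

```xml
<node pkg="robot_localization" type="ekf_localization_node" name="ekf_map">
  <param name="map_frame"       value="map"/>
  <param name="odom_frame"      value="odom"/>
  <param name="base_link_frame" value="base_link"/>
  <param name="world_frame"     value="map"/>  <!-- map-frame instance -->
  <param name="two_d_mode"      value="true"/>

  <param name="odom0" value="/odom"/>          <!-- wheel odometry -->
  <param name="odom1" value="/ar_tag_odom"/>   <!-- tag-based pose -->

  <!-- Fuse only the variables you trust; everything else stays false. -->
  <rosparam param="odom0_config">[false, false, false, false, false, false,
                                  true,  true,  false, false, false, true,
                                  false, false, false]</rosparam>
  <rosparam param="odom1_config">[true,  true,  false, false, false, true,
                                  false, false, false, false, false, false,
                                  false, false, false]</rosparam>
</node>
```

Fusing velocities from the wheel odometry and absolute pose from the tags avoids feeding the filter two competing absolute position sources.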

How you deal with orientations will be important here, so I'll need more information before I can offer up any more advice. Please post a sample message from each input source, and also please post your ekf_localization_node launch file.


Note that in your original approach you set both the variances (on the main diagonal) and the covariances (the remaining entries) to 100. With off-diagonal entries of 100 and diagonal entries as small as 0.1, the implied correlations between variables such as x and y are far greater than 1, so the matrix is not even a valid (positive semi-definite) covariance. While a variance of 100 down-weights the corresponding measurements (as intended), the large off-diagonal terms inject misleading information into the filter.
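This is easy to check numerically. The snippet below (my own illustration, using NumPy) rebuilds the matrix from the question and shows that it has a negative eigenvalue, i.e. it is not a valid covariance, and that the implied x-y correlation is wildly out of range:

```python
import numpy as np

# The covariance from the question: 0.1 on the x, y, yaw diagonal entries,
# 100 everywhere else (including every off-diagonal term).
cov = np.full((6, 6), 100.0)
for i in (0, 1, 5):
    cov[i, i] = 0.1

# A valid covariance matrix must be positive semi-definite; this one is not.
eigvals = np.linalg.eigvalsh(cov)

# The implied x-y correlation is 100 / sqrt(0.1 * 0.1) = 1000,
# far outside the valid range [-1, 1].
corr_xy = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
```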



Yikes! I'm not sure how I missed that, but yes, the measurement covariance matrices should really only have values on the diagonal, unless you're absolutely certain that those correlations between variables exist.

( 2015-08-10 15:31:20 -0600 )