
The ar_track_alvar package publishes transforms and its own custom message type. The custom message includes a confidence metric along with the pose for each detected tag. This confidence value is extremely valuable when used with a Kalman filter, so I would encourage you to use this topic instead of the TFs.

Before you add each pose estimate to your Kalman filter, apply a unique transformation to each tag estimate which transforms the tag location to the origin of your world coordinate frame. After this step each tag estimate is an estimate of the location of the world frame in terms of the ar_track_alvar detection frame, which makes the C matrix of the Kalman filter the identity, since each measurement is estimating the same thing. The confidence value can be used to build a diagonal covariance matrix, and then you should be set up.
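The step above can be sketched with plain homogeneous transforms. This is a minimal illustration, not ar_track_alvar's API: the marker's world pose, the detected pose, and the confidence-to-variance scaling are all made-up example values you'd replace with your own.

```python
import numpy as np

# Hypothetical known pose of this tag in the world frame (assumption: you
# maintain a map of where each marker is mounted), as a 4x4 homogeneous
# transform T_world_tag.
T_world_tag = np.eye(4)
T_world_tag[:3, 3] = [2.0, 1.0, 0.5]

# Pose of the tag as reported by ar_track_alvar in its detection (camera)
# frame, again as a homogeneous transform (illustrative values).
T_cam_tag = np.eye(4)
T_cam_tag[:3, 3] = [0.0, 0.0, 1.5]

# Transform the tag measurement into a measurement of the world origin:
# T_cam_world = T_cam_tag * inv(T_world_tag).
T_cam_world = T_cam_tag @ np.linalg.inv(T_world_tag)

# Every tag now measures the same quantity (the world frame's pose in the
# detection frame), so the filter's C matrix is the identity.
C = np.eye(3)

# Diagonal measurement covariance from the confidence value (assumption:
# higher confidence means smaller variance; this scaling is illustrative).
confidence = 0.9
R = np.eye(3) / max(confidence, 1e-6)
```

Each detected tag produces its own `T_cam_world` and `R`, and they can all be fed to the filter as measurements of the same state.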

If some form of odometry is also added to the Kalman filter, then this should be a very robust localisation setup. Let us know how you get on.


Update: Getting the robot pose estimate in the world frame

The tag poses from ar_track_alvar describe the position of the tag in the frame set up in its launch file. If you store this pose in a transform object, then it's quite easy to do the manipulation you'll need.

For example, you can use the inverse method to reverse the transform; in this case this gives you the position of the ar_track_alvar frame in terms of the position of the marker, which is getting closer to what you want.
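With the pose held as a homogeneous matrix, "reversing" it is just a matrix inverse (in ROS you'd use your transform object's own inverse method; the numbers here are only illustrative):

```python
import numpy as np

# Pose of the marker in the ar_track_alvar detection frame, as a 4x4
# homogeneous transform (illustrative values).
T_detect_marker = np.eye(4)
T_detect_marker[:3, 3] = [0.5, 0.0, 2.0]

# Inverting gives the pose of the detection frame in the marker's frame.
T_marker_detect = np.linalg.inv(T_detect_marker)
```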

Now if you define another transform object which describes the pose of the marker in question in the world frame, and multiply your pose estimate from the previous step by the pose of the marker, you have a pose estimate in the world frame. This pose, along with its confidence value, can now be used by the EKF to estimate your robot's position. This is a really hard thing to describe in words, so I hope this makes sense.
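Putting the two steps together as matrix algebra may be clearer than words. This is a sketch under assumed example values: the marker's world pose would come from wherever you mounted it, and the detected pose from ar_track_alvar.

```python
import numpy as np

# Known pose of this marker in the world frame (assumption: from your map
# of marker placements).
T_world_marker = np.eye(4)
T_world_marker[:3, 3] = [3.0, 2.0, 0.0]

# Pose of the marker in the detection (camera) frame, from ar_track_alvar
# (illustrative values).
T_detect_marker = np.eye(4)
T_detect_marker[:3, 3] = [0.5, 0.0, 2.0]

# Invert to get the camera's pose in the marker frame, then compose with
# the marker's world pose to get the camera's pose in the world frame:
# T_world_detect = T_world_marker * inv(T_detect_marker).
T_world_detect = T_world_marker @ np.linalg.inv(T_detect_marker)
```

`T_world_detect` is the world-frame pose estimate that, with its confidence value, goes into the EKF.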