How to use robot_localization with ar_track_alvar?
I'm looking for a way to use visually detected markers from the ar_track_alvar package as measurements for the ekf_localization_node from the robot_localization package.
It seems that robot_localization has no message type for observed markers. Sure, I can invert an observed marker pose and use the result as a pseudo-observed robot pose. But how do I come up with realistic covariance information, which the PoseWithCovarianceStamped message requires? A marker gives me distance and direction (or x, y, z in a local coordinate frame), so the derived position and orientation will be highly correlated and the covariance matrix close to singular.
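For reference, the inversion step itself is simple; here is a minimal ROS-free numpy sketch (the helper names are my own, not from any ROS API) of turning a camera-to-marker pose into a marker-to-camera pose:

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion q = (x, y, z, w)."""
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

def invert_pose(t, q):
    """Invert a rigid transform given as translation t and quaternion q.

    If (t, R) maps marker-frame points into the camera frame,
    the inverse (t_inv, R_inv) maps camera-frame points back:
    R_inv = R^T, t_inv = -R^T t.
    """
    R = quat_to_rot(q)
    R_inv = R.T
    t_inv = -R_inv @ t
    return t_inv, R_inv
```

In a real node the quaternion and translation would come from the ar_track_alvar marker message, and the result would still need a covariance before it can be published as a PoseWithCovarianceStamped.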
Is that a reasonable approach?
If not, what is the right way to combine these nodes?
Are there any resources or tutorials you can suggest? (I googled a lot, but couldn't find one.)
Edit: I'm thinking of implementing an Unscented Transform (UT) to derive the pose uncertainty from the pixel uncertainty. I've had positive experiences with the UT in other contexts. What do you think?
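A generic UT fits in a few lines. Assuming the standard Wan/van der Merwe scaled sigma-point weighting (function names are mine), something like this would map a pixel-space mean and covariance through the full back-projection f to a pose mean and covariance:

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate (mean, cov) through a nonlinear function f via sigma points.

    With alpha=1 and kappa=0 this reduces to the classic symmetric
    sigma-point set; f maps an n-vector to an m-vector
    (e.g. detected pixel coordinates -> a 6-DoF pose parameterization).
    """
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)     # matrix square root of scaled cov
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 0.5 / (n + lam))    # mean weights
    wc = wm.copy()                              # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    Y = np.array([f(p) for p in pts])           # propagate each sigma point
    y_mean = wm @ Y
    d = Y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov
```

As a sanity check, propagating (range, bearing) through a polar-to-Cartesian conversion recovers the expected curvature-induced shrinkage of the mean. Note that the output covariance can indeed come out near-singular when the outputs are strongly correlated, which is exactly the concern above.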
There is an old package, pose_ekf_slam (http://wiki.ros.org/pose_ekf_slam), whose authors say it can integrate visual landmarks into an EKF. Maybe it could help.