ar track alvar depth data
Hi, the ar_track_alvar wiki says: "Identifying and tracking the pose of individual AR tags, optionally integrating kinect depth data (when a kinect is available) for better pose estimates." Is there any work that measures the improvement from using a depth camera compared to using the camera image alone? Do you know whether anyone has already quantified this "better pose estimation" with depth data? Thanks.