Using VICON as ground truth in robot localization
Hi All,
I am running amcl (localization) on my ground robot and I would like to evaluate how accurate amcl actually is. For that purpose, I am using a motion capture system (VICON) to generate ground-truth poses of the robot. Everything works fine, except that I don't know how to relate the localization data coming from amcl in the /map frame to the pose data coming from VICON in the /vicon_world frame. Essentially, I need to accurately find the transformation between the /map frame and the /vicon_world frame. The following sketch illustrates my problem:
The available tf trees are:
/map ---(amcl filter)----> /amcl_pose
/vicon_world---(vicon markers)---> /robot_pose_vicon
I don't know how to link the two trees so that I can compare /amcl_pose and /robot_pose_vicon in the same reference frame.
I also can't simply close the loop by linking the /amcl_pose and /robot_pose_vicon frames directly, because of the inherent error in /amcl_pose introduced by the filter, which is exactly the error I want to measure.
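One idea I had is to record a set of time-synchronized pose pairs from both sources while driving the robot around, and then estimate the rigid transform between /vicon_world and /map by least squares over all pairs, so that the filter noise in /amcl_pose averages out instead of being baked into the transform. Below is a rough, untested sketch of what I mean (the function and variable names are my own, not from any package):

```python
#!/usr/bin/env python
# Sketch: estimate the rigid transform /vicon_world -> /map from N
# time-synchronized (vicon, amcl) position pairs, so both pose streams
# can be compared in a single frame. 2D case for a ground robot.
import numpy as np

def estimate_rigid_transform(p_vicon, p_map):
    """p_vicon, p_map: (N, 2) arrays of matched x,y positions.
    Returns R (2x2) and t (2,) such that p_map ~= R @ p_vicon + t."""
    mu_v = p_vicon.mean(axis=0)
    mu_m = p_map.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (p_vicon - mu_v).T @ (p_map - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    # Guard against a reflection solution
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_v
    return R, t

# Usage idea: log matched positions while driving, estimate R and t once,
# then map every VICON pose into /map and take the difference to the
# corresponding amcl pose as the localization error.
```

Is this a sensible approach, or is there a standard ROS way to establish the /map to /vicon_world transform for this kind of evaluation?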
Your help is highly appreciated!