Using Vicon as ground truth in robot localization
Hi All,
I am running amcl (localization) on my ground robot and I would like to evaluate how accurate amcl is. For that purpose, I am using a motion capture system (Vicon) to generate ground-truth poses of the robot. Everything works fine, except that I don't know how to relate the localization data coming from amcl in the /map frame to the pose data coming from Vicon in the /viconworld frame. Essentially, I need to accurately find the transformation between the /map frame and the /viconworld frame. The following sketch illustrates my problem:
The available tf trees are:
/map ---(amcl filter)----> /amcl_pose
/viconworld---(vicon markers)---> /robotpose_vicon
I don't know how to link the two trees so that I can compare /amcl_pose and /robotpose_vicon in the same reference frame.
I also can't close the trees by simply connecting the /amcl_pose and /robotpose_vicon frames, because /amcl_pose carries the inherent estimation error of the filter.
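One idea I have considered is to estimate the /map-to-/viconworld transform by least-squares alignment of the two trajectories (Kabsch/Umeyama method), rather than trusting any single noisy amcl pose. Below is a minimal 2D sketch of that alignment, assuming I have time-synchronized (x, y) positions from both sources; the function name and array shapes are my own choices, not from any particular library:

```python
import numpy as np

def align_trajectories(vicon_xy, amcl_xy):
    """Least-squares rigid transform (R, t) mapping Vicon points onto amcl/map points.

    vicon_xy, amcl_xy: (N, 2) arrays of time-synchronized positions.
    Returns R (2x2 rotation) and t (2,) such that amcl_xy ~ (R @ vicon_xy.T).T + t.
    """
    # Center both point sets on their centroids
    mu_v = vicon_xy.mean(axis=0)
    mu_a = amcl_xy.mean(axis=0)
    Vc = vicon_xy - mu_v
    Ac = amcl_xy - mu_a

    # Kabsch: SVD of the cross-covariance gives the optimal rotation
    H = Vc.T @ Ac
    U, S, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, d])
    R = Vt.T @ D @ U.T
    t = mu_a - R @ mu_v
    return R, t
```

With the transform in hand, I could map every Vicon pose into the /map frame and compute per-sample position errors against the amcl estimates. A caveat: because the alignment is fitted to the amcl trajectory itself, some of amcl's error gets absorbed into the estimated transform, so the resulting error figures are optimistic; I would welcome a more principled way to anchor the two frames.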
Your help is highly appreciated!
Asked by Mohamed on 2018-12-11 12:25:17 UTC