robot_localization for estimating 3D pose with 3 GPS antennas
Hello,
Say I have three GPS antennas, each providing an XYZ position vector with respect to a common global frame of reference (the base station, in the case of an RTK setup).
With these three points, it should be possible to estimate the full 3D pose (position and orientation) of the rigid body.
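To show what I mean, here is a rough sketch (plain NumPy, outside of ROS) of how the pose could in principle be recovered from the three antenna positions, assuming the antenna offsets in the body frame are known. The antenna layout and the simulated fixes are made up for illustration; the rotation is found with the standard Kabsch/SVD method:

```python
import numpy as np

def pose_from_antennas(body_pts, world_pts):
    """Estimate rotation R and translation t so that world ≈ R @ body + t
    (Kabsch algorithm via SVD; least-squares fit when the fixes are noisy)."""
    B = np.asarray(body_pts, dtype=float)
    W = np.asarray(world_pts, dtype=float)
    cb, cw = B.mean(axis=0), W.mean(axis=0)   # centroids of both point sets
    H = (B - cb).T @ (W - cw)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # proper rotation, det = +1
    t = cw - R @ cb
    return R, t

# Hypothetical antenna layout in the body frame (metres); must not be collinear
body = np.array([[0.5, 0.0, 0.2], [-0.5, 0.3, 0.2], [-0.5, -0.3, 0.2]])
# Simulated RTK fixes: the true pose is a 90 deg yaw plus a (10, 5, 0) shift
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([10.0, 5.0, 0.0])
world = body @ R_true.T + t_true
R, t = pose_from_antennas(body, world)
```

One caveat I am aware of: if the three antennas are collinear, the rotation about the line through them is unobservable, so the layout has to form a proper triangle.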
Is it possible to stream the three topics (nav_msgs/Odometry messages carrying the XYZ info) into the robot_localization package and get a 3D pose out? And can those three streams also be combined with wheel odometry (which estimates x, y, and yaw, i.e., 2D information)?
Assume I have the transformations between the antennas and base_footprint, and that they are published as static transforms on /tf.
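In case it helps, this is roughly the kind of ekf configuration I had in mind (parameter names taken from the robot_localization documentation; the topic names are placeholders, and I have not tested this):

```yaml
frequency: 30
two_d_mode: false                 # want a full 3D state estimate
map_frame: map
odom_frame: odom
base_link_frame: base_footprint
world_frame: map                  # fusing globally-referenced (RTK) data

# One input per antenna, fusing absolute x/y/z only.
# The config order is [x, y, z, roll, pitch, yaw,
#                      vx, vy, vz, vroll, vpitch, vyaw,
#                      ax, ay, az].
odom0: /antenna_front/odom
odom0_config: [true,  true,  true,
               false, false, false,
               false, false, false,
               false, false, false,
               false, false, false]

odom1: /antenna_rear_left/odom
odom1_config: [true,  true,  true,
               false, false, false,
               false, false, false,
               false, false, false,
               false, false, false]

odom2: /antenna_rear_right/odom
odom2_config: [true,  true,  true,
               false, false, false,
               false, false, false,
               false, false, false,
               false, false, false]

# Wheel odometry: x, y, yaw fused differentially (relative data)
odom3: /wheel/odom
odom3_config: [true,  true,  false,
               false, false, true,
               false, false, false,
               false, false, false,
               false, false, false]
odom3_differential: true
```

What I am unsure about is whether the filter will actually derive orientation from the three absolute positions, given that each antenna's frame_id and its static transform to base_footprint are available on /tf.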
Thank you.