Adding transforms creates a non-tree
My understanding of ROS transforms is that they need to form a tree. Can the transform graph ever be a non-tree? If not, how should I structure this tf tree?
Here's my situation: I have a typical map->odom transform, plus an EKF that provides the odom->base_link transform. Attached to base_link is an AR tag, which has a static transform from base_link. An external camera, which has its own transform from map, estimates a pose for that AR tag. So now I've created a situation where the AR tag is a child of both the external camera and base_link. What's the proper way to go about this?
map-->odom-->base_link-->ar_tag
\-->camera-------------/
Asked by seanarm on 2018-09-18 10:42:32 UTC
Answers
It either has to exist separately in parallel under a different name like 'ar_tag_camera', or you need a node that fuses the camera information with the other sensor inputs to provide a higher-quality position than the EKF or camera by themselves (or equivalently, the EKF needs to take the camera pose as an input). Maybe there is a way to shoehorn the camera position into looking like some other already-supported sensor (it is roughly like GPS), but that is less likely to work than software designed for a camera looking at an AR tag or similar.
Answered by lucasw on 2018-09-18 17:33:42 UTC
Comments
Thanks @lucasw. Generally then, how are sensors that give absolute measurements (GPS, mocap, etc.), which have a static transform from map, represented alongside a robotic system that also uses proprioceptive sensors (IMU, visual odometry, etc.)?
Comment by seanarm on 2018-09-18 17:42:48 UTC
http://docs.ros.org/api/robot_localization/html/index.html takes in a PoseWithCovarianceStamped, which seems like a fine fit for the camera-derived AR tag position. The trick is to get the covariance part right...
Comment by lucasw on 2018-09-18 18:25:13 UTC
... maybe someone else more knowledgeable can jump in (and you could look at other ROS Answers questions related to covariance). It would be nice if there were a good tutorial on how to calibrate a pose sensor or an IMU and get decent starting covariances.
Comment by lucasw on 2018-09-18 18:27:16 UTC
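To illustrate what "getting the covariance part right" involves: geometry_msgs/PoseWithCovariance stores a flat, row-major 6x6 covariance over (x, y, z, rotation about X, rotation about Y, rotation about Z). A minimal sketch of filling in a diagonal covariance from per-axis standard deviations (the numbers below are invented placeholders, not calibrated values):

```python
# Build the flat 36-element, row-major covariance array used by
# geometry_msgs/PoseWithCovariance. Variable order: x, y, z,
# rotation about X, rotation about Y, rotation about Z.

def make_pose_covariance(xyz_stddev, rpy_stddev):
    """Return a diagonal covariance (variances = stddev squared)."""
    stddevs = list(xyz_stddev) + list(rpy_stddev)
    cov = [0.0] * 36
    for i, s in enumerate(stddevs):
        cov[i * 6 + i] = s * s  # diagonal entry (i, i)
    return cov

# Example: a camera-derived tag pose with ~2 cm position noise and
# ~0.05 rad orientation noise (made-up values for illustration).
cov = make_pose_covariance((0.02, 0.02, 0.02), (0.05, 0.05, 0.05))
```

In a real setup, the resulting list would be assigned to `msg.pose.covariance` on a PoseWithCovarianceStamped message; off-diagonal terms stay zero unless you have a reason to correlate the axes.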
@lucasw I'm not sure that this is the correct answer either, but after some thought, it's the answer that makes the most sense to me. It's applicable to any sensor whose measurement device is "globally-anchored".
I started treating the marker (attached to the robot) and the estimate of that marker's pose (determined by a camera mounted in the map frame) as separate entities. That makes sense because the physical link between the marker and the robot is fixed, whereas the link between the marker's pose estimate and the robot is not guaranteed to be fixed (because of localization error). This method allows comparing the global measurement against the fused odometry source the robot uses as its pose within the map.
map-->odom-->base_link-->ar_tag
\----->camera---------->ar_tag_estimate
A comparison between the physically-linked ar_tag and the ar_tag_estimate is just a transform to a common frame. Does that all sound reasonable?
Comment by seanarm on 2018-09-21 11:14:41 UTC
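To make the comparison concrete: the discrepancy between ar_tag and ar_tag_estimate is just inv(T_map_ar_tag) * T_map_ar_tag_estimate, and it would be the identity if localization were perfect. A 2D sketch with invented numbers (a real system would instead call tf2's lookup_transform between the two frames and work with 3D quaternions):

```python
import math

def mat(x, y, th):
    """Homogeneous 2D pose (x, y, yaw) as a 3x3 matrix."""
    c, s = math.cos(th), math.sin(th)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def mul(a, b):
    """3x3 matrix product (transform composition)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv(t):
    """Invert a rigid 2D transform."""
    c, s, x, y = t[0][0], t[1][0], t[0][2], t[1][2]
    return [[c, s, -(c * x + s * y)], [-s, c, s * x - c * y], [0.0, 0.0, 1.0]]

# Chain map->odom->base_link->ar_tag (where odometry thinks the tag is).
T_map_ar_tag = mul(mul(mat(0.1, 0.0, 0.0),        # map->odom
                       mat(2.0, 1.0, 0.0)),       # odom->base_link (EKF)
                   mat(0.3, 0.0, 0.0))            # base_link->ar_tag (static)

# Chain map->camera->ar_tag_estimate (where the camera sees the tag).
T_map_ar_tag_est = mul(mat(5.0, 0.0, math.pi),    # map->camera (static)
                       mat(2.5, -1.0, math.pi))   # camera->ar_tag_estimate

# Discrepancy between the two chains: the localization error.
err = mul(inv(T_map_ar_tag), T_map_ar_tag_est)
```

With these numbers the two chains disagree by 0.1 m along x, which is exactly the kind of residual you would feed back into (or monitor against) the EKF.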
Comments
That's the 'separately in parallel' approach. A cheap additional 'fusion' step would be to occasionally reset the odometry to whatever transform makes ar_tag and ar_tag_estimate coincide, then let odometry run on its own for a while (especially when the tag goes out of view).
Comment by lucasw on 2018-09-21 11:23:56 UTC
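That reset step amounts to solving T_map_odom * T_odom_ar_tag = T_map_ar_tag_estimate for a new map->odom. A 2D sketch with invented numbers (same hypothetical 3x3 helpers as a tf2-free stand-in for the real transform library):

```python
import math

def mat(x, y, th):
    """Homogeneous 2D pose (x, y, yaw) as a 3x3 matrix."""
    c, s = math.cos(th), math.sin(th)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def mul(a, b):
    """3x3 matrix product (transform composition)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv(t):
    """Invert a rigid 2D transform."""
    c, s, x, y = t[0][0], t[1][0], t[0][2], t[1][2]
    return [[c, s, -(c * x + s * y)], [-s, c, s * x - c * y], [0.0, 0.0, 1.0]]

# What we know: the odometry-side chain down to the tag...
T_odom_tag = mul(mat(2.0, 1.0, 0.0),   # odom->base_link (EKF output)
                 mat(0.3, 0.0, 0.0))   # base_link->ar_tag (static)

# ...and where the camera says the tag actually is in the map.
T_map_tag_est = mat(2.5, 1.0, 0.0)     # map->ar_tag_estimate

# Reset: choose map->odom so the two chains agree exactly.
T_map_odom = mul(T_map_tag_est, inv(T_odom_tag))
```

Publishing this corrected map->odom whenever the tag is visible, and freezing it when the tag is not, gives the cheap occasional-reset behavior described above.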