
Adding transforms creates a non-tree

asked 2018-09-18 10:42:32 -0500

seanarm

updated 2018-09-18 11:10:19 -0500

My understanding of ROS transforms is that they need to form a tree. Can it ever be a non-tree graph? If not, then how should I form this tf tree?

Here's my situation: say I have a typical map->odom transform, and an EKF that provides the odom->base_link transform. Attached to base_link is an AR tag, with a static transform from base_link. An external camera, which has its own transform from map, estimates a pose for that AR tag. So now I've created a situation where the AR tag is a child of both the external camera and base_link. What's the proper way to go about this?
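For concreteness, the tree constraint can be sketched in plain Python (this is not the tf API; frame names follow the setup above):

```python
# Plain-Python sketch of tf's tree constraint (not the tf API).
# parent_of maps child frame -> parent frame; tf requires each frame
# to have at most one parent and no cycles.

def is_tree(parent_of):
    for child in parent_of:
        seen = {child}
        node = child
        while node in parent_of:      # walk up toward the root
            node = parent_of[node]
            if node in seen:          # cycle -> not a tree
                return False
            seen.add(node)
    return True

# The intended tree from the question:
frames = {"odom": "map", "base_link": "odom",
          "ar_tag": "base_link", "external_camera": "map"}
print(is_tree(frames))  # True

# Letting the camera also parent the AR tag can't even be expressed here:
# a dict holds one parent per child, and the last writer silently wins --
# much like tf, where the most recently published parent of a frame wins.
frames["ar_tag"] = "external_camera"
```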


2 Answers


answered 2018-09-18 17:33:42 -0500

lucasw

It either has to exist separately in parallel under a different name like 'ar_tag_camera', or you need a node that fuses the camera information with the other sensor inputs to provide a higher-quality position than the EKF or camera alone (or, equivalently, the EKF needs to take the camera position as an input). There may be a way to shoehorn the camera position to look like some other already-supported sensor (it is roughly like GPS), but the likelihood of getting that to work is probably lower than with software designed for a camera looking at an AR tag or similar.
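A minimal 2D sketch of the 'parallel frame' idea, using (x, y, theta) poses instead of tf's 3D transforms; the frame names and numbers are illustrative only:

```python
import math

# 2D sketch of the "parallel frame" idea: the camera's detection is
# published as map -> ar_tag_camera, a separate frame, so the existing
# base_link -> ar_tag branch is untouched and the tree stays a tree.
# Poses are (x, y, theta); real tf is 3D. All numbers are made up.

def compose(a, b):
    """Pose b, expressed in a's frame, composed into a's parent frame."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

map_T_camera = (2.0, 1.0, math.pi / 2)  # static: camera mounted in map
camera_T_tag = (0.5, 0.0, 0.0)          # camera's AR-tag detection

# Published under a separate name so the tree stays a tree:
map_T_ar_tag_camera = compose(map_T_camera, camera_T_tag)
print(tuple(round(v, 3) for v in map_T_ar_tag_camera))  # (2.0, 1.5, 1.571)
```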



Thanks @lucasw. Generally then, how are sensors that give absolute measurements (GPS, mocap, etc.), which would have a static transform from map, represented alongside a robotic system that also uses proprioceptive sensors (IMU, visual odometry, etc.)?

seanarm  ( 2018-09-18 17:42:48 -0500 )

The EKF takes in a PoseWithCovarianceStamped, which seems like a fine fit for the camera-derived AR tag position. The trick is to get the covariance part right...

lucasw  ( 2018-09-18 18:25:13 -0500 )

... maybe someone else more knowledgeable can jump in (and you could look at other ROS Answers questions related to covariance). It would be nice if there were a good tutorial on how to calibrate a pose sensor or an IMU and get decent starting covariances.

lucasw  ( 2018-09-18 18:27:16 -0500 )

answered 2018-09-21 11:14:41 -0500

seanarm

updated 2018-09-21 11:15:20 -0500

@lucasw I'm not sure this is the correct answer either, but after some thought, it's the answer that makes the most sense to me. It's applicable to any sensor whose measurement device is "globally anchored".

I started treating the marker (attached to the robot) and the estimate of that marker's pose (determined by a camera mounted in the map frame) as separate entities. This makes sense because the physical link between the marker and the robot is fixed, whereas the link between the marker's pose estimate and the robot is not guaranteed to be fixed (because of localization error). This method allows comparing the global measurement to the fused odometry source the robot uses as its pose within the map.


A comparison between the physically-linked ar_tag and the ar_tag_estimate is just a transform to a common frame. Does that all sound reasonable?
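As a sketch of that comparison, again with simplified 2D (x, y, theta) poses and made-up numbers (real code would use tf2's lookup_transform between the two frames):

```python
import math

# Sketch of comparing the physically-linked ar_tag with ar_tag_estimate:
# express both in the common map frame and take the relative transform.
# 2D (x, y, theta) poses and illustrative numbers; real tf is 3D.

def compose(a, b):
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def inverse(p):
    x, y, t = p
    c, s = math.cos(t), math.sin(t)
    return (-c * x - s * y, s * x - c * y, -t)

map_T_ar_tag = (1.00, 2.00, 0.10)           # via map->odom->base_link->ar_tag
map_T_ar_tag_estimate = (1.05, 2.10, 0.12)  # via map->camera->detection

# The discrepancy is the localization error as seen by the global sensor.
error = compose(inverse(map_T_ar_tag), map_T_ar_tag_estimate)
print(tuple(round(v, 3) for v in error))
```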




That's the 'separately in parallel' approach. A cheap additional 'fusion' step would be to occasionally reset the odometry to the transform necessary to make ar_tag and ar_tag_estimate coincide, but then let odometry run for a while (especially when the tag goes out of view).

lucasw  ( 2018-09-21 11:23:56 -0500 )
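The reset lucasw describes amounts to solving for a new map->odom transform; a simplified 2D sketch (illustrative numbers, not the tf API):

```python
import math

# Sketch of the "reset odometry" step: solve for the map->odom transform
# that makes the odometry-derived tag pose coincide with the camera's
# ar_tag_estimate at this instant. 2D (x, y, theta) poses, made-up numbers.

def compose(a, b):
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def inverse(p):
    x, y, t = p
    c, s = math.cos(t), math.sin(t)
    return (-c * x - s * y, s * x - c * y, -t)

odom_T_base = (3.0, 0.0, 0.0)          # EKF output
base_T_tag = (0.2, 0.0, 0.0)           # static mount on the robot
map_T_tag_estimate = (3.3, 0.1, 0.0)   # camera measurement

odom_T_tag = compose(odom_T_base, base_T_tag)
# Want map_T_odom * odom_T_tag == map_T_tag_estimate, so:
map_T_odom = compose(map_T_tag_estimate, inverse(odom_T_tag))
print(tuple(round(v, 3) for v in map_T_odom))  # (0.1, 0.1, 0.0)
```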
