
# Relative tf between two cameras looking at the same AR marker

Hi everyone!

I was thinking about using tf lookup to obtain the relative transform between two cameras looking at the same AR marker (as seen in ar_pose). Looking at tf/FAQ, though, I stumbled upon this comment:

> The frames are published incorrectly. tf expects the frames to form a tree, which means each frame can only have one parent. If you publish, for example, both transforms "A (parent) to B (child)" and "C (parent) to B (child)", then B has two parents: A and C. tf cannot deal with this.

Is there a standard alternative to obtain the transform between A and C?

EDIT: What I am looking for is a way to daisy chain transforms between Kinects in order to place multiple clouds in a single reference frame. See the "tf graph" below, for instance:

Nodes represent frames and arrows represent transforms. Suppose I want all my point clouds in marker M3's frame (visible only to camera C2). I would have to figure out the transforms from C1 and C3 to C2, taking advantage of the markers commonly visible to them (i.e. M2 and M4).

What I got from tf/FAQ is that I cannot look up these transforms natively, so I wanted to know if there is a standard way to do it. Otherwise, I will just have to request the relevant transforms (e.g. C2->M4 and C3->M4), invert them as appropriate (e.g. M4->C2) and compose them into a single transform through multiplication (e.g. C3->M4->C2).
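As a sketch of that manual inverse-and-compose approach, using plain homogeneous matrices with numpy (the transform values below are made up for illustration; in practice they would come from ar_pose or a tf lookup):

```python
import numpy as np

def make_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical camera->marker transforms, as ar_pose would publish them:
# C2 sees marker M4, and C3 sees the same marker M4.
T_c2_m4 = make_transform(np.eye(3), [1.0, 0.0, 2.0])   # C2 -> M4
T_c3_m4 = make_transform(np.eye(3), [-0.5, 0.0, 2.0])  # C3 -> M4

# Invert C3 -> M4 to get M4 -> C3, then chain: C2 -> M4 -> C3
T_c2_c3 = T_c2_m4 @ np.linalg.inv(T_c3_m4)

# C3's origin expressed in C2's frame (the translation column)
print(T_c2_c3[:3, 3])
```

With this C2 -> C3 transform, any cloud in C3's frame can be brought into C2's frame by multiplying its points through T_c2_c3, and the same chaining extends to C1 via M2.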


Hello georgebrindeiro, how is your work going? Were you able to solve your problem of merging the estimates? I am working on the same problem and I'm very interested in your results. Could you please update this article?

(2015-11-16 05:25:33 -0600)


ar_pose has a reverse_transform parameter. When set to true, it will publish the tf from the marker to the camera. Let's say the marker frame is M, and you have two cameras, A and B. You will obtain two transforms:

• M2A (M to A)
• M2B (M to B)

The transform from A to B, expressed in A's frame, is:

```
M2A * A2B = M2B
A2B = M2A.inverse() * M2B
```
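A minimal sketch of this computation with numpy homogeneous matrices (the M2A and M2B values are placeholders, not real ar_pose output):

```python
import numpy as np

# Placeholder marker->camera transforms (4x4 homogeneous matrices).
M2A = np.array([[1.0, 0.0, 0.0, 0.2],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 1.5],
                [0.0, 0.0, 0.0, 1.0]])
M2B = np.array([[0.0, -1.0, 0.0, 0.8],
                [1.0,  0.0, 0.0, 0.1],
                [0.0,  0.0, 1.0, 1.5],
                [0.0,  0.0, 0.0, 1.0]])

# A2B satisfies M2A * A2B = M2B, so:
A2B = np.linalg.inv(M2A) @ M2B

# Sanity check: composing M2A with A2B recovers M2B
assert np.allclose(M2A @ A2B, M2B)
```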

EDIT

I see. Once upon a time there was a discussion about TF design decisions. I remember I brought up a similar scenario where it would be nice to support frames with multiple parents, as long as there are no conflicts in the graph. I don't believe much came of it.


Thanks for the answer, Ivan! It's not exactly what I wanted, though... What I expected was a general way to obtain that kind of transform from tf, so I could daisy-chain transforms and ultimately have clouds coming from different Kinects be placed on a single frame.

(2013-02-28 09:34:03 -0600)

You need to prevent cycles or multiple parents in your tf tree. There are a few ways to do this. The technique I would recommend is to keep multiple estimates of where each tag is from each camera (with unique frame_ids), and then compute the "best" one by combining the estimates.
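One naive way to combine several estimates into a single one (a sketch with a hypothetical helper, not anything tf provides; straight quaternion averaging is only reasonable when the estimates are close together):

```python
import numpy as np

def combine_pose_estimates(translations, quaternions):
    """Average several pose estimates of the same tag.

    translations: list of (x, y, z); quaternions: list of (x, y, z, w),
    all assumed to be estimates of the same pose that are close together.
    Returns the mean translation and a renormalized mean quaternion.
    """
    t = np.mean(np.asarray(translations, dtype=float), axis=0)
    qs = np.asarray(quaternions, dtype=float)
    # Flip quaternions into the same hemisphere as the first estimate,
    # since q and -q represent the same rotation.
    qs[np.dot(qs, qs[0]) < 0] *= -1
    q = qs.mean(axis=0)
    return t, q / np.linalg.norm(q)

# Two nearby estimates of the same tag, as seen from two cameras:
t_avg, q_avg = combine_pose_estimates(
    [(1.0, 0.0, 2.0), (1.1, 0.0, 1.9)],
    [(0.0, 0.0, 0.0, 1.0), (0.0, 0.0, 0.1, 0.995)],
)
```

The combined pose can then be broadcast under a single frame_id, keeping the tf tree free of multiple parents. For estimates that differ substantially, a proper rotation average (or an outlier-rejecting filter) would be needed instead.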


Thanks for the answer, Tully! I am actually working on something along those lines.

(2013-03-28 12:32:48 -0600)

I really think tf should somehow support this kind of use case, though. In my case, the fixed reference should be the marker itself (any one of them) and not the camera, but the only way ar_pose can support multiple marker tracking is by adding them as children to the camera frame.

(2013-03-28 12:33:35 -0600)

I realize cycles should be avoided, but IMHO a tf structure based on a directed acyclic graph would make more sense in general. I guess tf was originally designed with mostly traditional use cases in mind. I'm not sure if that's possible, but I'd be willing to contribute to a change if given some direction.

(2013-03-28 12:35:37 -0600)