How does TF determine the transforms between different sensors?

asked 2022-03-09 08:26:23 -0500 by Question

updated 2022-04-30 13:46:46 -0500 by lucasw

Hi,

I have read a lot about the transform publisher and its function; that is all clear to me, but I have a couple of questions about how the transform publisher works underneath. Let me use an example: say a lidar is mounted on top of a robot, and a radar is mounted on the front of the same robot.

Since the lidar spins and can map its surroundings in 360 degrees, it seems like a good choice to use the lidar as the reference sensor. Both the lidar and the radar have their own frame.

So my question is: if you give these sensors their own frames, with the lidar as the reference frame, do you need to specify the coordinates when defining those frames? Do you need to give the reference sensor's frame a fixed coordinate, or does the TF publisher handle it all by itself?

It isn't clear to me whether you are supposed to manually give the reference frame (the lidar) a fixed coordinate so that TF can calculate the transform to the radar, since you can't give the exact coordinates of the radar frame.
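To make the question concrete, here is my current understanding of what TF computes under the hood, sketched in plain numpy (no ROS involved; the mounting offsets below are numbers I made up for the example): each sensor frame gets a fixed transform relative to a common parent frame, and the lidar-to-radar transform falls out by chaining through that parent.

```python
import numpy as np

def make_tf(translation, yaw=0.0):
    """Homogeneous 4x4 transform from a translation and a yaw rotation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = translation
    return T

# Made-up mounting positions, both expressed in the robot's base frame:
T_base_lidar = make_tf([0.0, 0.0, 0.5])   # lidar on top of the robot
T_base_radar = make_tf([0.4, 0.0, 0.1])   # radar on the front

# Chain the two fixed transforms through the shared parent frame:
# lidar -> base, then base -> radar.
T_lidar_radar = np.linalg.inv(T_base_lidar) @ T_base_radar

print(T_lidar_radar[:3, 3])  # radar position expressed in the lidar frame
```

If this sketch is right, the radar ends up 0.4 m ahead of and 0.4 m below the lidar, which would mean neither sensor needs a "global" coordinate, only the fixed offsets to a shared parent. Is that what TF actually does?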

Any explanation would be appreciated!


Comments

Have you gone through the tf tutorials?

parzival  ( 2022-03-09 08:36:38 -0500 )

Yes I have, but I am stuck with this question.

Question  ( 2022-03-09 09:20:11 -0500 )