
Transform relative to absolute position

asked 2016-09-20 06:28:57 -0500

RichardS

updated 2016-09-21 15:55:52 -0500

lucasw

Hello everyone,

I am looking for a way to transform a tf message into an absolute (world-frame) position.

My project: I have 5 cameras which should observe a certain area in which I detect AR tags using ar_track_alvar. I have set up 3 cameras so far; their fields of view overlap to enable sensor fusion. I am working in Python with ROS Indigo on Ubuntu 14.04.

My problem: I am listening to tf tree updates to find out where the markers are detected, but the tf tree only gives me the position relative to the camera that is currently detecting the marker. I can't use lookupTransform because I won't be able to tell which camera produced that transform (they are switching back and forth quickly). This matters because I am gathering data for the sensor fusion. I know there is a C++ method that does exactly what I need, but I am not willing to rewrite my whole code in C++ just for that one function. Is there a quick way to solve my problem in Python, or do I need to code it myself? If I have to code it myself, I would be thankful for a short description of how it is done, since I am not that fit in linear algebra.
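Composing a relative marker pose with a known camera pose is a single homogeneous-matrix multiplication, so one option is to do it by hand. The sketch below uses only numpy (with ROS installed you could use tf.transformations instead); the camera and marker poses are made-up numbers for illustration, and the identity rotations are just to keep the example easy to check.

```python
# Sketch: composing a camera->marker transform (as ar_track_alvar would
# report it) with a known, static world->camera transform to get the
# marker's absolute (world-frame) position. Pure numpy, no ROS needed.

import numpy as np

def quat_to_matrix(q):
    """4x4 homogeneous rotation matrix from a quaternion (x, y, z, w)."""
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w),     0],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w),     0],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y), 0],
        [0, 0, 0, 1],
    ], dtype=float)

def transform_matrix(translation, quaternion):
    """Homogeneous transform from a (translation, quaternion) pair."""
    m = quat_to_matrix(quaternion)
    m[0:3, 3] = translation
    return m

# Hypothetical calibration: camera_1 sits at (1, 0, 2) in the world
# frame; identity rotation keeps the example checkable by eye.
world_T_cam = transform_matrix([1.0, 0.0, 2.0], [0.0, 0.0, 0.0, 1.0])

# Relative pose from the detector: marker 0.5 m in front of the camera.
cam_T_marker = transform_matrix([0.0, 0.0, 0.5], [0.0, 0.0, 0.0, 1.0])

# Chaining transforms is matrix multiplication: world <- cam <- marker.
world_T_marker = world_T_cam.dot(cam_T_marker)
marker_position = world_T_marker[0:3, 3]   # -> [1.0, 0.0, 2.5]
```

Since the world→camera transform for each camera is static, you can compute it once per camera at startup and multiply it onto every relative marker pose that camera reports.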

I hope my question is clear and that I haven't overlooked an easy way to solve this.





Doesn't each camera have a separate reference frame and camera matrix? I don't understand what you mean by "I won't be able to tell which camera gave me that transform".

Mark Rose ( 2016-09-20 11:07:11 -0500 )

When using the lookupTransform method I won't be able to. By the time this method looks up the absolute position of the detected marker, another camera may publish its position, so I can't be sure which camera's data I got. I can tell from the tf tree, but I can't make the transformation to an absolute position.

RichardS ( 2016-09-22 01:04:47 -0500 )
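The race described in the comments can be sidestepped by subscribing to /tf directly instead of calling lookupTransform: every incoming TransformStamped carries both its parent and child frame ids, so each marker observation is unambiguously tied to the camera that published it. A minimal sketch of that bookkeeping, with plain tuples standing in for geometry_msgs/TransformStamped fields (the rospy wiring, hedged in comments, is an assumption about your setup):

```python
# Sketch: filter raw /tf traffic by frame ids so each marker sample is
# tagged with the camera that produced it.

samples_by_camera = {}

def tf_callback(transforms):
    # With ROS this would be the callback of a subscriber on /tf, and
    # `transforms` would be msg.transforms; each element has
    # header.frame_id (parent) and child_frame_id. Here each element is
    # simply a (frame_id, child_frame_id, translation) tuple.
    for frame_id, child_frame_id, translation in transforms:
        if child_frame_id.startswith("ar_marker"):
            samples_by_camera.setdefault(frame_id, []).append(translation)

# Simulated /tf batch: two cameras see the marker, plus an unrelated
# static transform that the filter ignores.
tf_callback([("camera_1", "ar_marker_0", (0.1, 0.0, 0.6)),
             ("camera_3", "ar_marker_0", (0.4, 0.2, 0.9)),
             ("base",     "camera_1",    (1.0, 0.0, 2.0))])

# samples_by_camera now holds one marker observation per camera:
# {"camera_1": [(0.1, 0.0, 0.6)], "camera_3": [(0.4, 0.2, 0.9)]}
```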

1 Answer

Sort by » oldest newest most voted

answered 2016-10-12 18:31:38 -0500

hoveidar

Hey RichardS,

I believe you are fusing the camera images before detecting the AR tags. If, at the end of the day, this fusion only serves to show the whole map of your area of interest, you can solve the issue in two steps:

1) Detect the AR tags individually in each camera frame (for example, define a boolean variable AR_detected_N for camera_N, which is 1 if camera_N detects the tag and 0 otherwise). This tells you which camera is actually detecting the tag.

2) Fuse all the camera images afterwards to screen the whole area.
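Step 1 can be sketched as one subscriber callback per camera that maintains a per-camera "detected" flag and the most recent pose. The bookkeeping below is plain Python so it runs anywhere; the topic names and message shape in the comments are assumptions about an ar_track_alvar setup, not confirmed details of it.

```python
# Sketch of step 1: per-camera detection flags, so you always know
# which camera the current marker pose came from.

ar_detected = {"camera_1": False, "camera_2": False, "camera_3": False}
latest_pose = {}  # camera name -> most recent marker pose

def make_callback(camera_name):
    """Return a marker callback bound to one camera."""
    def callback(markers):
        # With ar_track_alvar, `markers` would come from that camera's
        # AlvarMarkers message; here it is just a list of poses.
        ar_detected[camera_name] = len(markers) > 0
        if markers:
            latest_pose[camera_name] = markers[0]
    return callback

# With rospy you would wire these up roughly like (topic name assumed):
#   rospy.Subscriber("/camera_1/ar_pose_marker", AlvarMarkers,
#                    make_callback("camera_1"))

# Simulated update: camera_2 sees a marker, camera_1 sees nothing.
make_callback("camera_1")([])
make_callback("camera_2")([(0.1, 0.2, 0.5)])

detecting = [cam for cam, seen in sorted(ar_detected.items()) if seen]
# -> ["camera_2"]
```

With the detecting camera known, you can then apply that camera's own extrinsic transform to convert the relative marker pose into the world frame before fusing.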

I hope this helps. Hoveidar

