
TF on multiple robots gets crowded

asked 2012-02-03 06:55:02 -0500

Ross

I have been running experiments with a team of four KUKA youBots communicating via ROS over WiFi. I have observed that the /tf topic is crowded with a lot of transforms not of interest to most of the TF consumers. By the design of TF, all transforms are broadcast globally, and it's up to each ROS node to sort through them and find the ones it is interested in. Are there any plans to implement partial TF sharing according to the hierarchy, perhaps by a publish/subscribe model---maybe hidden from users within the TF library?

For example, suppose robot1 broadcasts the frame of each joint in its arm as well as the fingers, wheel orientations, etc. These transforms are of interest to processes onboard robot1 and to rviz running on a workstation. But robot2-4 do not care about most of these transforms. They only want to know the basic position of the robot (i.e. /robot1/base_link) for collision avoidance purposes.

How big an impact do these excess TF messages have? Let's say we have 2 ROS nodes on each robot that use the tf library. There are 5 arm joints, 2 fingers, and 4 wheel positions = 11 frames that no other robot cares about, and there are 3 other robots. Since it's a managed WiFi network, each message must go up to the AP and then down to the neighboring robots. By default, all these frames are broadcast at 20 Hz. So that is 2 * 3 * 11 * 2 * 20 = 2640 WiFi messages per second. We have observed severe WiFi lag, which was significantly attenuated by turning off transmission of the arm joints and dropping broadcast rates on the rest. However, this is not a desirable solution in the long run.
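The arithmetic above can be double-checked with a quick script (the factor names are my reading of the question: a 2-hop trip through the AP, 3 other robots, 2 subscriber nodes each, 11 frames, 20 Hz):

```python
# Back-of-the-envelope WiFi load when every frame is broadcast globally:
# each message crosses the AP twice (up, then down), and each subscribing
# node on each other robot receives its own copy over the air.
AP_HOPS = 2          # up to the access point, then back down
OTHER_ROBOTS = 3     # robots 2-4
NODES_PER_ROBOT = 2  # tf-consuming nodes per robot
EXTRA_FRAMES = 11    # 5 arm joints + 2 fingers + 4 wheels
RATE_HZ = 20         # default broadcast rate

msgs_per_sec = AP_HOPS * OTHER_ROBOTS * EXTRA_FRAMES * NODES_PER_ROBOT * RATE_HZ
print(msgs_per_sec)  # 2640
```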

I can imagine a robot-local TF bus on a topic, say /robot1/tf. A TF server running on robot1 would maintain a list of transforms of interest to others. Thus, it would send only /robot1/base_link to the topics /robot2/tf, /robot3/tf, etc. and to the global /tf for rviz. That way, we could run high-speed mobile manipulation loops on robot1 while sending only 2 * 4 * 20 = 160 WiFi messages per second, plus 11 * 20 = 220 WiFi messages destined for rviz (thus not essential to the experiment).
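The filtering such a per-robot TF server would do can be sketched in plain Python (no ROS; the dict structure standing in for TransformStamped messages and the whitelist convention are assumptions for illustration):

```python
def filter_transforms(transforms, shared_frames):
    """Keep only transforms whose child frame is on the shared whitelist.

    `transforms` is a list of dicts standing in for TransformStamped
    messages; a real node would subscribe to /robot1/tf and republish
    only the filtered set to /robot2/tf, /robot3/tf, and the global /tf.
    """
    return [t for t in transforms if t["child_frame_id"] in shared_frames]

# Everything robot1 publishes locally...
local_tf = [
    {"frame_id": "/map", "child_frame_id": "/robot1/base_link"},
    {"frame_id": "/robot1/base_link", "child_frame_id": "/robot1/arm_joint_1"},
    {"frame_id": "/robot1/base_link", "child_frame_id": "/robot1/wheel_fl"},
]
# ...but only base_link is of interest to the other robots.
shared = filter_transforms(local_tf, {"/robot1/base_link"})
print([t["child_frame_id"] for t in shared])  # ['/robot1/base_link']
```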

Is anything like this in the works? I realize that network topology is a complex topic, but the current solution appears inadequate for all but the simplest multi-robot installations. Thank you!




This. One possible solution would be for TF to be more of an 'on-demand' vs. 'broadcast' service. Each consumer of TF data could have a way to let TF data producers know which transforms they are interested in. Then TF producers would only send the transforms of interest to the consumers that asked.
Patrick Bouffard (2012-02-03 10:46:32 -0500)

2 Answers


answered 2012-02-03 17:45:38 -0500

tfoote

updated 2012-02-03 17:45:59 -0500

tf data traveling over wireless is often too much for all but the simplest robots. In general, the multi-robot case over wireless is a challenge, and most of the focus for solutions there is on supporting multiple masters. There's a multi-master Fuerte planning SIG if you want more information on that.

With respect to tf in particular, there is a draft implementation of a new version of tf in the geometry experimental stack, with development time planned for it to land in Groovy. It seeks to improve many different aspects, including wireless operation. The especially relevant infrastructure upgrade is the ability to remotely ask for a query to be performed in a different node, trading the bandwidth saved by not continuously monitoring the /tf topic for the extra latency of a query over the network. This targets a common use case discovered with tf, where a process simply monitors /tf for a specific state and then acts on the result, such as marking a goal reached. Such a process could now avoid subscribing to the whole bandwidth of /tf and simply poll a shared tf server. This is only effective if there is more than one such process, but in cases like the PR2 demos there are many.

Another approach, which has been required for remote access to tf streams (especially for web interfaces), is a store-and-forward approach that sends a downsampled set of the tree over a separate topic. The change_notifier in the tf package is an example implementation, where desired frames are noted along with their desired frequency. There are several other implementations of this floating around; if people see this and could link to other implementations in the comments, that would be great. Unfortunately, so far each application has had its own specific requirements, which tends to lead to very specific implementation details that don't generalize to other use cases.
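The downsampling at the heart of this store-and-forward approach can be illustrated with a simple per-frame rate limiter (plain Python; a sketch of the idea only, not the actual change_notifier API):

```python
class DownsampledForwarder:
    """Forward a frame's transform at most once per 1/max_hz seconds,
    mimicking the downsampling a change_notifier-style node performs
    before republishing onto a low-bandwidth topic."""

    def __init__(self, max_hz):
        self.min_interval = 1.0 / max_hz
        self.last_sent = {}  # frame_id -> timestamp of last forward

    def should_forward(self, frame_id, now):
        last = self.last_sent.get(frame_id)
        if last is None or now - last >= self.min_interval:
            self.last_sent[frame_id] = now
            return True
        return False

fwd = DownsampledForwarder(max_hz=2.0)  # forward at most 2 Hz
# A frame arrives at 20 Hz for 2 seconds (timestamps in seconds).
forwarded = sum(
    fwd.should_forward("/robot1/base_link", i * 50 / 1000.0) for i in range(40)
)
print(forwarded)  # 4 of the 40 incoming messages get through
```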

Other challenges that come with multiple robots include the fact that frame_id names start colliding when data is transferred between robots. The tf_prefix was an attempt to deal with this; however, it puts too many requirements on the developer to fully implement the specification, and we never got compliant code. To deal with this when passing messages between robots with similar link names, some form of frame_id remapping needs to be applied, both to tf data and to all other data with an embedded frame_id.
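A sketch of the kind of frame_id remapping described here, applied before republishing a message to another robot (plain Python; the prefixing convention below is my own illustration, not the tf_prefix specification):

```python
def remap_frame_id(frame_id, robot_prefix):
    """Prefix a bare frame_id with the source robot's namespace so that
    identical link names from different robots no longer collide.

    Frames already under the robot's namespace are left unchanged. In a
    real system this would be applied to every transform on /tf and to
    the header.frame_id of every republished message.
    """
    stripped = frame_id.lstrip("/")
    if stripped.startswith(robot_prefix + "/"):
        return "/" + stripped
    return "/%s/%s" % (robot_prefix, stripped)

print(remap_frame_id("base_link", "robot1"))          # /robot1/base_link
print(remap_frame_id("/robot1/base_link", "robot1"))  # unchanged
```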

There are many improvements to be made so that tf works better across multiple robots. If you have suggestions and would like to start a discussion, we can do so on the mailing list. I expect we'll start a SIG for the new tf in the next planning cycle for Groovy.


answered 2012-02-03 17:49:10 -0500

ahendrix

I believe this is a known weakness in the design of the current tf system.

I think tf2 may address some of these problems, but I'm not sure what the state of development on it is.



Seen: 1,544 times

Last updated: Feb 03 '12