tf data traveling over wireless is often too much for anything less than the simplest robots. In general, the multi-robot case over wireless is a challenge, and most of the effort toward solutions focuses on supporting multiple masters. There's a [multi master fuerte planning SIG](http://www.ros.org/wiki/fuerte/Planning/Multimaster) if you want more information on that.

With respect to tf in particular, there is a draft implementation of a new version of tf in the geometry experimental stack, with development time planned for it to land in Groovy. It seeks to improve many different aspects, including wireless operation. The especially relevant infrastructure upgrade is the ability to ask for a query to be performed remotely in a different node, trading the bandwidth saved by not continuously monitoring the /tf topic for the extra latency of a query over the network. This targets a common use case observed with tf, where a process simply monitors /tf for a specific state and then acts on the result, such as marking a goal reached. Such a process can now avoid subscribing to the full bandwidth of /tf and instead poll a shared tf server. This is only effective if there is more than one such process, but in cases like the PR2 demos there are many.
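To make the tradeoff concrete, here is a minimal sketch of the shared-server idea: one process consumes the full /tf stream and caches the latest transforms, while lightweight clients poll it on demand instead of each subscribing. All names here (`TransformServer`, `lookup`) are illustrative, not the real tf API.

```python
class TransformServer:
    """Caches the latest transform per (parent, child) frame pair.

    A single instance watches the full /tf stream; many lightweight
    clients then poll it, instead of each consuming the whole topic.
    """

    def __init__(self):
        self._latest = {}  # (parent_frame, child_frame) -> transform

    def update(self, parent_frame, child_frame, transform):
        # Called for every incoming transform (high bandwidth, one consumer).
        self._latest[(parent_frame, child_frame)] = transform

    def lookup(self, parent_frame, child_frame):
        # Called by polling clients (low bandwidth, extra latency).
        return self._latest.get((parent_frame, child_frame))


server = TransformServer()
server.update("map", "base_link", (1.0, 2.0, 0.0))

# A goal-monitoring client polls occasionally rather than subscribing to /tf:
goal = (1.0, 2.0, 0.0)
goal_reached = server.lookup("map", "base_link") == goal
```

The point of the design is that the per-message cost is paid once, by the server; each additional monitoring process only adds occasional queries.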

Another approach, which has been required for remote access to tf streams, especially for web interfaces, is a store-and-forward approach that sends a downsampled subset of the tree over a separate topic. The change_notifier node in the tf package is an example implementation, where the desired frames are listed along with their desired frequency. There are several other implementations of this floating around; if people see this and could link to other implementations in the comments, that would be great. Unfortunately, so far each application has had its own specific requirements, which tends to lead to very specific implementation details that don't generalize to other use cases.
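The downsampling step can be sketched in a few lines: republish a frame's transform only if at least one period has elapsed since the last one forwarded. This mirrors what change_notifier does conceptually; the class and method names below are illustrative, not the real API.

```python
class Downsampler:
    """Decides which transforms to republish over the low-bandwidth topic."""

    def __init__(self, frame_frequencies):
        # frame_frequencies: {frame_id: desired rate in Hz}
        self._period = {f: 1.0 / hz for f, hz in frame_frequencies.items()}
        self._last_sent = {}  # frame_id -> timestamp of last forwarded msg

    def should_forward(self, frame_id, stamp):
        """Return True if this transform should be republished."""
        period = self._period.get(frame_id)
        if period is None:
            return False  # frame not requested by any remote consumer
        last = self._last_sent.get(frame_id)
        if last is None or stamp - last >= period:
            self._last_sent[frame_id] = stamp
            return True
        return False


ds = Downsampler({"base_link": 2.0})  # forward base_link at most at 2 Hz
stamps = [0.0, 0.1, 0.2, 0.5, 0.6, 1.0]
sent = [t for t in stamps if ds.should_forward("base_link", t)]
# sent keeps only transforms spaced at least 0.5 s apart
```

Real implementations layer application-specific choices on top of this (which frames, whether to resolve chains first, message batching), which is exactly where they stop generalizing.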

Another challenge when dealing with multiple robots is that frame_id names start colliding when data is transferred between robots. The tf_prefix mechanism was an attempt to deal with this; however, it puts too many requirements on the developer to fully implement the specification, and we never got compliant code. To deal with this when passing messages between robots with similar link names, some form of frame_id remapping needs to be applied, both to tf data and to all other data with an embedded frame_id.
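The remapping itself is simple in principle: prefix every frame_id with the source robot's name as the message crosses the robot boundary, so "base_link" from two robots no longer collides. The sketch below models a message as a plain dict; the function name and structure are illustrative, not a real ROS API, and the hard part in practice is applying this uniformly to every message type carrying a frame_id.

```python
def remap_frame_ids(msg, robot_name):
    """Return a copy of msg with every frame_id namespaced under robot_name."""
    remapped = dict(msg)
    for key in ("frame_id", "child_frame_id"):
        if key in remapped:
            remapped[key] = "%s/%s" % (robot_name, remapped[key])
    return remapped


tf_msg = {"frame_id": "odom", "child_frame_id": "base_link"}
forwarded = remap_frame_ids(tf_msg, "robot_a")
# forwarded now refers to robot_a/odom and robot_a/base_link
```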

There are lots of improvements to be made so that tf works better across multiple robots. If you have suggestions and would like to start a discussion, we can do so on the mailing list. I expect we'll start a SIG for the new tf in the next planning cycle for Groovy.