
Node Latency Questions

asked 2014-03-16 12:19:12 -0600

AlphaSierra

Hello everyone. I'm trying to understand the different latency issues that occur when using topics to stream data between nodes.

The default method is TCPROS, which, from how it was explained to me, converts the data into XML before it gets transmitted, and the receiving nodes then have to parse it back, even when they're on the same machine. I'm going to run some experiments of my own to measure the latency, but since my setup is in a VM the results might be skewed. So I'm wondering how bad the latency with this method is.

From some further reading it looks like UDPROS is faster at transmitting the data, though I don't really understand why. While looking around trying to learn more I also came across the ethzasl libraries.

If I want to eliminate this issue altogether I could just make everything a nodelet under a single nodelet manager, since nodelets share memory. However, some components will need to run as fully separate nodes (possibly across a network).

So in summary, I'd like to understand how each of these methods works on the back end, and what some good approaches are for reducing latency in the system. How bad is the latency of TCP vs. UDP?


2 Answers


answered 2014-03-16 14:30:03 -0600

ahendrix

In general, ROS doesn't make any guarantees about latency, but in my experience it's quite low (<1 ms locally) for both TCPROS and UDPROS. Over a network, the biggest factor behind latency is the network itself.

Both transports use a serialized binary format, not XML. That said, there is still some latency from the serialization process, but the designers took great pains to minimize it.
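This isn't the actual ROS serializer, but a quick plain-Python illustration of why a fixed-layout binary encoding beats XML for this kind of data, comparing the two encodings of three float64 fields (roughly a geometry_msgs/Vector3):

```python
import struct
import xml.etree.ElementTree as ET

# Three float64 fields, like a geometry_msgs/Vector3.
x, y, z = 1.0, 2.0, 3.0

# Binary encoding: fixed-size little-endian doubles. The ROS wire
# format also uses fixed-width little-endian fields, so there is no
# text to parse on the receiving side -- just a memory copy.
binary = struct.pack('<3d', x, y, z)

# An equivalent XML encoding, for comparison.
root = ET.Element('vector3')
for name, val in (('x', x), ('y', y), ('z', z)):
    ET.SubElement(root, name).text = repr(val)
xml_bytes = ET.tostring(root)

print(len(binary), len(xml_bytes))  # 24 bytes binary vs ~49 bytes XML
```

The binary form is both smaller on the wire and decodable with a single `struct.unpack`, whereas the XML form has to be tokenized and converted from text on every message.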

In addition to trying to reduce latency in the transport layer, you should spend a little time understanding how much latency your application can tolerate, and measuring the latency you actually have; these will tell you when you've reached "good enough" so that you can stop optimizing.
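One transport-agnostic way to do that measurement is round-trip timing: send a small message, have the other end echo it back, and take the median over many samples. Below is a plain-Python sketch over a loopback TCP socket (not ROS code; with ROS you'd do the same thing with a pair of nodes echoing stamped messages):

```python
import socket
import threading
import time

def echo_server(listener):
    # Accept one connection and echo everything back until it closes.
    conn, _ = listener.accept()
    while True:
        data = conn.recv(64)
        if not data:
            break
        conn.sendall(data)
    conn.close()

srv = socket.socket()
srv.bind(('127.0.0.1', 0))          # OS picks a free port
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.socket()
cli.connect(srv.getsockname())
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

samples = []
for _ in range(100):
    t0 = time.perf_counter()
    cli.sendall(b'ping')
    cli.recv(64)                    # wait for the echo
    samples.append(time.perf_counter() - t0)
cli.close()

rtt_ms = 1000 * sorted(samples)[len(samples) // 2]  # median
print(f"median round trip: {rtt_ms:.3f} ms")
```

Using the median rather than the mean keeps one scheduler hiccup from dominating the result; halve the round trip for a rough one-way figure.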



Thanks for the info. Is there anywhere I can find more technical documentation on the format that is used?

AlphaSierra (2014-03-16 16:25:18 -0600)

answered 2014-03-16 15:00:06 -0600

hsu

For reference, we closed a 1 kHz loop over TCP with TCP_NODELAY between 2 computers on a fast local network in the cloud. Here's the example I am referring to.
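For context, TCP_NODELAY here is the standard TCP socket option that disables Nagle's algorithm; roscpp subscribers can request it via `ros::TransportHints().tcpNoDelay()` (rospy exposes `tcp_nodelay=True` on `Subscriber`). At the raw socket level it's a one-line `setsockopt`, shown here in plain Python:

```python
import socket

# Nagle's algorithm batches small writes to reduce packet overhead,
# which can hold a small message back waiting for an ACK -- bad for
# small, frequent control messages, where the delayed-ACK interaction
# can add tens of milliseconds. TCP_NODELAY sends each write
# immediately instead.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print("TCP_NODELAY enabled:", bool(nodelay))
sock.close()
```

The trade-off is slightly more packets on the wire, which is irrelevant on a fast local network but the right choice for a low-latency control loop.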



Was it a gigabit network? That's what I'm aiming to have on the vehicle I'm working on.

AlphaSierra (2014-03-16 16:08:35 -0600)

It had a 10 Gb backbone. I remember the total time consumed by the network transport layer was around ~0.25 ms, while the nodes needed ~0.65 ms per cycle for their computation.
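To make the budget arithmetic explicit (the figures are the ones quoted above; the breakdown is my reading of them): a 1 kHz loop gives 1 ms per cycle, and transport plus computation has to fit inside it.

```python
# Timing budget for the 1 kHz loop described above.
loop_rate_hz = 1000
budget_ms = 1000.0 / loop_rate_hz   # 1.0 ms per control cycle

transport_ms = 0.25                 # measured network/transport cost
compute_ms = 0.65                   # per-cycle computation in the nodes

slack_ms = budget_ms - transport_ms - compute_ms
print(f"headroom per cycle: {slack_ms:.2f} ms")  # ~0.10 ms
```

That ~0.1 ms of headroom is thin, which is why shaving transport latency (TCP_NODELAY, fast local network) mattered in that setup.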

hsu (2014-03-17 11:02:13 -0600)

That sounds pretty reasonable. If I may ask, what sort of computing work was being done? For my application there's going to be a lot of data aggregation and navigation. Also, in your experience, what sort of latency can I expect locally between nodes?

AlphaSierra (2014-03-17 20:15:17 -0600)

