
# Rostopic Hz inaccurate?

Hello everyone.

Sometimes when I measure the frequency of messages in a topic, the numbers printed don't make sense.

For example:

```
rostopic hz /robot/head/head_state
```


gives:

```
subscribed to [/robot/head/head_state]
average rate: 100.000
min: 0.007s max: 0.015s std dev: 0.00097s window: 96
average rate: 100.000
min: 0.004s max: 0.016s std dev: 0.00111s window: 192
average rate: 100.000
min: 0.004s max: 0.016s std dev: 0.00100s window: 289
average rate: 100.000
min: 0.004s max: 0.016s std dev: 0.00112s window: 386
average rate: 100.000
min: 0.004s max: 0.016s std dev: 0.00105s window: 483
average rate: 100.000
min: 0.004s max: 0.017s std dev: 0.00109s window: 581
average rate: 100.000
min: 0.004s max: 0.017s std dev: 0.00116s window: 678
average rate: 100.000
min: 0.004s max: 0.017s std dev: 0.00115s window: 776
```


So it measures 776 messages in 8 seconds and still tells me that the average rate is 100.000? My calculator says 776 messages over 8 seconds is 97 Hz. Why don't the numbers in my terminal correspond to this?



Following a suggestion from tfoote, I found out that the bias was caused by simulated time (I did not know such a thing existed). The attached picture shows part of the Gazebo simulation I ran today. For a reason I don't know, the real time factor is 0.97 rather than 1.00, which explains the 97 Hz.

My conclusion is that rostopic hz prints its output once every REAL second, reporting the number of messages observed in that time, but it computes the average rate using simulated time. That computation effectively compensates for the real time factor, so the reported outcome is 100 Hz.
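The arithmetic behind this conclusion can be checked with a small sketch. This is plain Python, not rostopic's actual source; the 0.97 real time factor is taken from the Gazebo run described above, and the publisher rate of 100 Hz in simulated time is assumed:

```python
# Sketch: why a rate estimator that measures message intervals on the
# *simulated* clock reports 100 Hz, while only ~97 messages arrive per
# *real* (wall-clock) second.

REAL_TIME_FACTOR = 0.97   # from the Gazebo run above (assumed constant)
PUB_RATE_SIM = 100.0      # publisher rate in simulated time, Hz (assumed)
REAL_SECONDS = 8.0        # wall-clock observation window

# In 8 real seconds, the sim clock only advances 8 * 0.97 = 7.76 s,
# so 100 Hz (sim) * 7.76 s = 776 messages arrive.
sim_elapsed = REAL_SECONDS * REAL_TIME_FACTOR
n_messages = round(PUB_RATE_SIM * sim_elapsed)

# Measuring intervals on the sim clock: mean interval is 0.01 sim-seconds,
# so the reported average rate is 100 Hz.
mean_interval_sim = sim_elapsed / n_messages
rate_reported = 1.0 / mean_interval_sim

# Counting the same messages against the wall clock gives the 97 Hz
# the question's manual calculation arrived at.
rate_wall = n_messages / REAL_SECONDS

print(n_messages)                 # 776
print(round(rate_reported, 1))    # 100.0
print(round(rate_wall, 1))        # 97.0
```

Both numbers describe the same stream of messages; they only differ in which clock the elapsed time is taken from.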

Thank you for your help tfoote.


I would guess that your assumption of sampling being conducted for exactly 8 seconds might be off by 0.24 seconds due to startup connection times etc.

Feel free to dig into the source at: https://github.com/ros/ros_comm/blob/...


Thank you for responding. But if the bias were caused by startup connection times, why does it occur not only in the first second but in every second after that as well? For example, 386 − 289 = 97: 97 messages detected in that second, yet it still prints an average rate of 100 Hz?

( 2015-04-05 10:44:55 -0500 )

Can you provide instructions for how to reproduce your output?

( 2015-04-05 13:06:58 -0500 )

Unfortunately I can't, because part of the software being used comes from a private repository that I am not the owner of, and I haven't been able to reproduce the output with anything else. If I find out what the answer is, I will post it. Thank you for your effort.

( 2015-04-07 01:59:34 -0500 )

One other question, are you running with simulated time?

( 2015-04-07 02:37:47 -0500 )

Ah. I did not know simulated time existed. This solves my problem, thank you. I have written it up as an answer.

( 2015-04-07 07:10:33 -0500 )