ROS callback invoked faster than publishing rate

asked 2016-01-23 12:50:04 -0500 by marcofe, updated 2016-01-24 05:48:25 -0500

Hi all, my ROS node communicates with V-REP on the topic /vrep/visionSensorData to receive images from a vision sensor object in the scene. The publishing rate of this topic depends on the simulation time step, which in my scene is set to 50 ms, so the publishing rate is 20 Hz. As further proof, I checked this value in my terminal with the command rostopic hz /vrep/visionSensorData.

In addition, to evaluate the processing time of my algorithm, I also measured the actual elapsed time between two consecutive callbacks of the topic, expecting it to be higher than 50 ms because of the time required for image processing.

Well, my measurements revealed that in some iterations the elapsed time is even lower, resulting in a frequency higher than the claimed publishing rate of 20 Hz. I cannot attach a plot of the measured frequency because I do not have enough karma points on this forum.

Can anyone explain why this happens? Should I conclude that the publishing rate is not constant, but varies within a fixed range?

EDIT 1: I tried setting the tcpNoDelay() option as suggested, but I still measure frequencies higher than 20 Hz. My subscriber is an image_transport subscriber, so following the documentation I added the option as follows (I hope it's written correctly):

image_vrep_sub_ = it_.subscribe("/vrep/visionSensorData", 1, &ImageConverter::imageCb, this, image_transport::TransportHints("raw", ros::TransportHints().tcpNoDelay()));

EDIT 2: Yes, my measurements do average out to 20 Hz, so they are consistent with the rostopic hz command, but I didn't expect the rate to vary this way. Here is the plot as requested; thanks for the karma bump. So do you think this is reasonable behaviour?

Ah, I forgot to say that I tried setting the use_sim_time parameter, but this prevents me from using keyboard input through the /keyboard topic. Should I set something else to make it work?

[plot of the measured callback frequency]

Thanks in advance for your answers


Comments

Did you compare the behavior before and after using tcpNoDelay()? I think the plot looks reasonable. If you require hard real-time guarantees, you might want to use a real-time kernel, an appropriate scheduling policy, and/or the OROCOS/ROS integration.

Stefan Kohlbrecher (2016-01-24 09:09:10 -0500)

The updated plot looks reasonable to me. The variation you see corresponds to about 50 ms +/- 10 ms, which is pretty typical for non-realtime applications on Linux.

ahendrix (2016-01-24 13:58:14 -0500)

2 Answers


answered 2016-01-23 15:18:22 -0500

This sounds similar to the effect described in this Q/A. Messages probably arrive at irregular intervals as described there, resulting in very short times between some of them and longer times between others. As mentioned in the linked post, I'd expect the effect to disappear when using the tcpNoDelay option for your subscriber. I'd certainly be interested in an in-depth explanation/discussion of this.


Comments

Seconded. While there is a workaround, it feels like it shouldn't be necessary to get the desired behavior.

Tom Moore (2016-06-16 03:23:49 -0500)

Nagle's algorithm has advantages for certain use cases. Buffering is not always bad. I'm not sure disabling it by default would be a good thing.

gvdhoorn (2016-06-16 04:07:13 -0500)

I completely agree, but since this only happens for large messages, I wonder whether it would be advantageous to detect the message size automatically and change the behavior (unless specifically configured not to). I would think a consistent frequency is the desired behavior in most cases.

Tom Moore (2016-06-16 04:17:30 -0500)

answered 2016-01-23 15:02:08 -0500 by ahendrix, updated 2016-01-23 15:05:03 -0500

rostopic hz prints not just the average message frequency, but also the minimum, maximum, and standard deviation of the time between messages. Do those numbers match your callback rate?

Is V-REP publishing the ROS clock? Do you have the use_sim_time parameter set so that your node uses simulation time instead of actual time?

Is your callback called at an average rate of 20Hz, or is it consistently faster? I'll bump your karma so that you can post your plot.



Stats

Seen: 1,017 times

Last updated: Jan 24 '16