
Image subscriber lag (despite queue = 1)

asked 2015-11-05 21:42:34 -0500

sr71

I am having some lag issues with a rospy subscriber listening to image messages.


I have a rosbag streaming images to /camera/image_raw at 5Hz. I also have an image_view node for displaying the images for reference. This image_view shows them at 5Hz.

In my rospy subscriber (initialized with queue_size=1), I also display the image (to compare lag against the image_view node). The callback then does some heavy processing.

Expected result:

Since the queue size is 1, the subscriber should process the latest frame and skip all other frames that arrive in the meantime. Once it completes processing, it should move on to the next latest frame. There should be no queuing of old frames. This would result in a choppy but not laggy video (low fps, but no delay with respect to the rosbag stream, if that makes sense).

Actual result:

The subscriber is lagging behind the published stream. Specifically, the image_view node displays the images at 5Hz, and the subscriber seems to queue up all the images and processes them one by one, instead of just grabbing the latest image. The delay also grows over time. When I stop the rosbag stream, the subscriber continues to process images in the queue (even though queue = 1).

Note that if I change the subscriber to have a very large buffer size, as below, the expected behavior is produced:

self.subscriber = rospy.Subscriber("/camera/image_raw", Image, self.callback,  queue_size = 1, buff_size=2**24)

However, this is not a clean solution.
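Since 2**24 is a magic number, one option is to derive buff_size from the largest image you expect instead. This is only a sketch: the helper name, the power-of-two rounding, and the 4 KB slack for the message header are my assumptions, not an official formula; the only hard requirement is that the buffer exceed your maximum message size.

```python
import math

def image_buff_size(width, height, channels=3, bytes_per_channel=1, slack=4096):
    """Smallest power of two that holds one full Image message.

    slack covers the message header and serialization overhead; 4096
    is a guess, not a value taken from the rospy source.
    """
    payload = width * height * channels * bytes_per_channel + slack
    return 2 ** int(math.ceil(math.log(payload, 2)))

# e.g. for 1280x1024 rgb8 frames (subscriber line shown for context only):
# self.subscriber = rospy.Subscriber("/camera/image_raw", Image, self.callback,
#                                    queue_size=1,
#                                    buff_size=image_buff_size(1280, 1024))
```

With a buffer sized like this, each message fits in the subscriber's buffer in one read, so queue_size=1 behaves as intended.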

This issue has also been reported in other threads, where I found the buffer size solution. The official explanation hypothesizes that the publisher may actually be slowing down, but this is not the case here, since the image_view subscriber displays the images at 5Hz.

Any help is appreciated. Thanks!


def callback(self, msg):
    # delay between the frame's stamp and the time we got around to it
    delay = (rospy.Time.now() - msg.header.stamp).to_sec()
    print "Processing frame | Delay:%6.3f" % delay
    orig_image = self.bridge.imgmsg_to_cv2(msg, "rgb8")
    if self.is_image_show_on:
        bgr_image = cv2.cvtColor(orig_image, cv2.COLOR_RGB2BGR)
        cv2.imshow("Image window", bgr_image)
        cv2.waitKey(1)  # needed for the window to actually refresh

    result = process(orig_image)  # heavy processing task
    print result

1 Answer


answered 2015-11-05 23:34:53 -0500

ahendrix

The description in the official bug report is very clear: this is not a bug.

This is what is happening:

  1. With a queue size of 1, and a buffer size smaller than your message size, the first message is processed with little delay, but the second message gets stuck in limbo: 64k of the message is in the subscriber's buffer, and the remainder is stuck in either the OS buffer or the publisher's buffer.

  2. While the second message is stuck half-transmitted, the publisher may or may not drop subsequent messages, depending on its queue size.

  3. Once your subscriber is done with the first message, rospy is free to finish receiving the second message, and then your subscriber is triggered again. The following message (message 3 if the publisher has not dropped anything, or a later one if it has) now becomes stuck halfway.

  4. (2) and (3) repeat, and the delay should eventually settle out to the publisher queue size times your processing time.

Increasing the subscriber buffer size allows the subscriber to receive a complete message, instead of having it stuck halfway, and then the message can be put into the pending message queue and dealt with according to the queue size.



I see, thank you for the explanation. For image msgs with varying resolution, is it a good rule of thumb to set the buffer to their mean size? Or is it better to use a very large buffer that captures all the images?

sr71  ( 2015-11-06 08:39:38 -0500 )

Read my answer carefully and think about the consequences of setting the buffer size to a value which is smaller than your maximum image size. (hint: you'll end up right back where you started)

ahendrix  ( 2015-11-06 13:59:34 -0500 )

I have the same problem, but there is no buff_size to set on a C++ subscriber. And on top of that, I'm using image_transport for this job...

Avio  ( 2017-09-27 06:19:24 -0500 )

I encounter the same problem, thanks a lot.

Qinsheng  ( 2019-10-04 22:47:19 -0500 )

It seems I have this problem as well. I'm using a C++ subscriber, so how do I set the buff_size in this case?

Danielfann  ( 2019-10-15 20:40:14 -0500 )

@Danielfann in C++, the subscriber queue size is typically the second argument to subscribe(), e.g. ros::Subscriber sub = nh.subscribe("/camera/image_raw", 1, callback);

ahendrix  ( 2019-10-16 13:07:52 -0500 )

@ahendrix, thanks for the reply. But I really need to keep the queue size at 1, since I only want the latest message, which is a real-time video frame. Currently I have to check the timestamp and skip frames that are older. The ROS queue doesn't behave like the simple queue model, where with a queue size of 1 the older frame would be discarded as soon as a new frame is pushed in. I searched and found people had similar problems in rospy: as you answered previously, if the message size is bigger than buff_size, the older message can't be dropped properly. I'm just wondering how to handle this in C++ (roscpp). Is there any equivalent to rospy's buff_size (not queue_size)? Thanks!

Danielfann  ( 2019-10-16 20:09:16 -0500 )

The roscpp TCP transport does not have a buff_size argument. It reads the incoming message length, allocates a buffer of the required size, and then reads the entire incoming message in a single read.

ahendrix  ( 2019-10-17 11:33:23 -0500 )
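For readers wanting the timestamp-skip workaround @Danielfann describes above, here is a minimal sketch in Python. The max_age threshold is a made-up example value; in a real rospy node the arguments would be msg.header.stamp.to_sec() and rospy.get_time(). Note this only skips the heavy processing: stale messages are still received and deserialized, so it does not remove the transport-level lag itself.

```python
MAX_AGE = 0.5  # seconds; example threshold, tune to your frame rate

def is_fresh(stamp, now, max_age=MAX_AGE):
    """True if a frame with the given stamp (seconds) is recent enough
    at time `now` to be worth processing."""
    return (now - stamp) <= max_age

# Hypothetical usage inside an image callback:
#     if not is_fresh(msg.header.stamp.to_sec(), rospy.get_time()):
#         return  # skip the heavy processing for stale frames
#     result = process(orig_image)
```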


Asked: 2015-11-05 21:40:20 -0500

Seen: 11,147 times

Last updated: Nov 05 '15