Subscriber Counter Throttle
Keep a counter in the callback that returns without doing anything unless counter % 50 == 0 - but there is a deserialization performance hit involved there, since every incoming message still gets deserialized and delivered to the callback before being dropped.
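As a rough illustration of the counter approach, here is a minimal roscpp sketch (the imu/imu_throttle topic names and the sensor_msgs/Imu type are assumptions taken from the test setup described below):

    #include <ros/ros.h>
    #include <sensor_msgs/Imu.h>

    ros::Publisher pub;
    int count = 0;

    void callback(const sensor_msgs::Imu::ConstPtr& msg)
    {
      // Every incoming message is still delivered (and deserialized) here;
      // all but one in fifty are simply dropped.
      if (count++ % 50 != 0)
        return;
      pub.publish(msg);
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "imu_counter_throttle");
      ros::NodeHandle nh;
      pub = nh.advertise<sensor_msgs::Imu>("imu_throttle", 2);
      ros::Subscriber sub = nh.subscribe("imu", 10, callback);
      ros::spin();
      return 0;
    }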
Standard Throttle
The topic_tools throttle should not have the deserialization penalty, depending on how ShapeShifter works: https://github.com/ros/ros_comm/blob/lunar-devel/tools/topic_tools/src/throttle.cpp
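(For reference, the usage would be something like: rosrun topic_tools throttle messages imu 50.0 imu_throttle - the imu topic name here is just an example.)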
Nodelet Throttle
A nodelet version of throttle would be nice to eliminate the additional copies between throttle and your subscriber - and it turns out it already exists: http://wiki.ros.org/nodelet_topic_tools (is there any reason why there should be a non-nodelet version of throttle that isn't just a wrapper around the nodelet?). You would have to make your subscriber a nodelet and put it into the same nodelet manager as the throttle. Copying 50 messages a second is not a big cost compared to the resources required for handling the other 950 messages, so maybe the nodelet isn't worth the trouble.
Additionally, looking at the code for this nodelet throttle it appears to be using a template system rather than the generic ShapeShifter message that the regular throttle uses, which means it is deserializing and reserializing everything, making it slower than the regular throttle as the test results below show: https://github.com/ros/nodelet_core/blob/indigo-devel/nodelet_topic_tools/include/nodelet_topic_tools/nodelet_throttle.h
The template system also requires you to create your own nodelet that instantiates the type you want to throttle, which is non-trivial, but there is an Image example (https://github.com/jon-weisz/camera_throttler_nodelets) and I've made an imu nodelet throttle.
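As a sketch of what the templated instantiation looks like (this assumes the NodeletThrottle template from nodelet_topic_tools and a sensor_msgs/Imu topic; the plugin description xml and launch file wiring are omitted):

    #include <nodelet_topic_tools/nodelet_throttle.h>
    #include <pluginlib/class_list_macros.h>
    #include <sensor_msgs/Imu.h>

    // Instantiate the templated throttle for the concrete message type.
    typedef nodelet_topic_tools::NodeletThrottle<sensor_msgs::Imu> NodeletThrottleImu;

    // Register it as a nodelet plugin so a nodelet manager can load it.
    PLUGINLIB_EXPORT_CLASS(NodeletThrottleImu, nodelet::Nodelet)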
Queue Throttling
A different and hacky approach is to keep track of the time since the last processed message in your callback, and do a blocking sleep for the remainder of the 1/50th of a second while also using a subscriber queue_size of 1.
If the queue is full and a new message arrives, the oldest message is thrown out. Additionally, a message is not actually deserialized until the first callback that needs it is about to be called.
http://wiki.ros.org/roscpp/Overview/Publishers%20and%20Subscribers#Queueing_and_Lazy_Deserialization
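A minimal roscpp sketch of the queue throttling idea (the topic names and the Imu type are assumptions; ros::Rate handles the "sleep for the rest of the cycle" bookkeeping):

    #include <ros/ros.h>
    #include <sensor_msgs/Imu.h>

    ros::Publisher pub;

    void callback(const sensor_msgs::Imu::ConstPtr& msg)
    {
      static ros::Rate rate(50.0);
      pub.publish(msg);
      // Block for the remainder of the 1/50th of a second. With queue_size 1,
      // messages arriving during the sleep push each other out of the queue,
      // and with lazy deserialization they should never be deserialized.
      rate.sleep();
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "imu_queue_throttle");
      ros::NodeHandle nh;
      pub = nh.advertise<sensor_msgs::Imu>("imu_throttle", 2);
      ros::Subscriber sub = nh.subscribe("imu", 1, callback);
      ros::spin();
      return 0;
    }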
Test Results
One unexpected result is that a Python subscriber plus the regular throttle doesn't run any faster than a Python or C++ node subscribing to the full-rate Imu topic.
The rospy queue throttling node (https://github.com/lucasw/topic_throttle/blob/master/scripts/imu_sub.py) fell increasingly behind the messages it was receiving, while the C++ version worked fine and so far is the best solution for the most efficient throttle.
All the code is on https://github.com/lucasw/topic_throttle