The message queue does not do any parallel processing; it just holds (up to) the specified number of messages before dropping old messages. This can be useful if you do not process messages very often, or if your algorithm is quick most of the time, but occasionally takes a long time, and you don't want to miss messages.
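As a minimal sketch (the node name, topic name, and message type here are just assumptions for illustration), the queue size is simply the second argument to `subscribe()` in roscpp:

    #include <ros/ros.h>
    #include <sensor_msgs/LaserScan.h>

    // Hypothetical callback: runs the particle filter update for one scan.
    void scanCallback(const sensor_msgs::LaserScan::ConstPtr& msg)
    {
      // ... measurement update ...
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "particle_filter");
      ros::NodeHandle nh;

      // Queue size of 10: up to 10 messages are buffered while the callback
      // is busy; after that, the oldest messages are dropped.
      // A queue size of 1 keeps only the most recent message.
      ros::Subscriber sub = nh.subscribe("scan", 10, scanCallback);

      ros::spin();  // single-threaded: callbacks run one at a time
      return 0;
    }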

In your case, a larger queue size means you'll probably be processing older messages instead of the most recent message, which will result in higher latency for position estimates, and is probably not what you want.

A multi-threaded spinner has a thread pool and can run multiple callbacks in parallel (which is what you're asking for), but it is probably not the best solution. Running multiple callbacks in parallel might improve the throughput of your particle filter, but it won't improve the latency.
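For reference, switching to a multi-threaded spinner is a one-line change in roscpp (sketch only; subscriptions omitted). Note that, by default, callbacks for the same subscription are still serialized unless you enable `allow_concurrent_callbacks` via `SubscribeOptions`, so extra spinner threads mostly help when several different callbacks compete for the one spinner thread:

    #include <ros/ros.h>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "particle_filter");
      ros::NodeHandle nh;
      // ros::Subscriber sub = nh.subscribe(...);

      // Run callbacks from the queue on 4 threads instead of 1.
      ros::MultiThreadedSpinner spinner(4);
      spinner.spin();  // blocks until shutdown

      return 0;
    }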

Instead, you might want to compute the cost metric for each particle in parallel (probably using a thread pool with four threads).
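Something along these lines (the particle type and cost metric are placeholders; this spawns the threads on every call for simplicity, whereas a persistent pool or OpenMP would avoid the per-update thread creation cost):

    #include <cmath>
    #include <thread>
    #include <vector>

    // Hypothetical particle type and cost metric; substitute your own.
    struct Particle { double x, y, theta; };

    double costMetric(const Particle& p)
    {
      // Placeholder for the expensive per-particle computation.
      return std::hypot(p.x, p.y);
    }

    // Evaluate the cost for every particle, splitting the work over 4 threads.
    std::vector<double> evaluateCosts(const std::vector<Particle>& particles)
    {
      const std::size_t num_threads = 4;
      std::vector<double> costs(particles.size());
      std::vector<std::thread> workers;

      for (std::size_t t = 0; t < num_threads; ++t)
      {
        workers.emplace_back([&, t]() {
          // Each thread handles a strided slice of the particle set,
          // so the threads never write to the same index.
          for (std::size_t i = t; i < particles.size(); i += num_threads)
            costs[i] = costMetric(particles[i]);
        });
      }
      for (auto& w : workers)
        w.join();

      return costs;
    }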