Subscribe to laser scan queue size

asked 2019-01-05 18:15:14 -0500

AutoCar

Hi, my robot car has a Jetson TX2 board and a Hokuyo lidar. I have developed my own particle filter, which reads the lidar and localizes the robot. My particle filter node subscribes to the lidar messages with a queue size of 1:

_subScan = _nh.subscribe("/scan",     1, &ConeSlam::scanCallback, this);

Since the Jetson has 4 cores, I am wondering if it would be a good idea to increase the queue size to 4:

_subScan = _nh.subscribe("/scan",     4, &ConeSlam::scanCallback, this);

This way, 4 scans could be processed at the same time in different threads.

Am I correct?


1 Answer

answered 2019-01-05 20:11:11 -0500

ahendrix

The message queue does not do any parallel processing; it just holds up to the specified number of messages, dropping the oldest when it is full. This can be useful if you do not process messages very often, or if your algorithm is quick most of the time but occasionally takes a long time and you don't want to miss messages.

In your case, a larger queue size means you'll probably be processing older messages instead of the most recent message, which will result in higher latency for position estimates, and is probably not what you want.

A multi-threaded spinner has a thread pool and can run multiple callbacks in parallel (which is what you're asking for), but it is probably not the best solution. Running multiple callbacks in parallel might improve the throughput of your particle filter, but it won't improve the latency.
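For reference, a multi-threaded spinner in roscpp is set up like this (a minimal sketch; the node name and subscriber setup are placeholders matching the question):

```cpp
#include <ros/ros.h>

int main(int argc, char** argv) {
  ros::init(argc, argv, "cone_slam");
  ros::NodeHandle nh;
  // ... create subscribers here, e.g. the /scan subscriber above ...

  // A spinner with a pool of 4 threads can service up to 4 callbacks
  // concurrently, instead of the single thread used by ros::spin().
  ros::MultiThreadedSpinner spinner(4);
  spinner.spin();  // blocks until the node is shut down
  return 0;
}
```

Note that by default a subscription callback is still not re-entered concurrently for multiple queued messages unless you opt in with `ros::SubscribeOptions` and `allow_concurrent_callbacks` — which is another reason this alone won't parallelize a single scan callback.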

Instead, you might want to compute the cost metric for each particle in parallel (probably using a thread pool with four threads).


