More than an answer, this may serve as a first diagnostic step.

How loaded is your system when this happens? I recommend a first sweep with the following tools:

  • top to check idle CPU resources and load average.

  • iftop (may require sudo) to query the traffic on your network interfaces, e.g. sudo iftop -i lo for loopback only.

  • It might also be good to check the number of sockets with a given status, e.g. netstat | grep ESTABLISHED | wc -l. ESTABLISHED, CONNECTED correspond to sockets currently in use, while TIME_WAIT, CLOSE_WAIT are pending to close. Pay special attention to the latter: large counts here can indicate lots of short-lived sockets, which usually show up in ROS environments when you frequently query the master (non-persistent service calls or parameter reads; see the sketch after this list). Many socket opening/closing operations will increase your system CPU load (shown in top under Cpu(s) ... sy).
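If the TIME_WAIT/CLOSE_WAIT counts do grow with your call rate, keeping the service connection open is one way to flatten them. Below is a minimal rospy sketch, assuming a placeholder /my_query service of type Trigger (substitute whatever you are actually polling); the only relevant piece is the persistent=True flag, which rospy's ServiceProxy supports.

    #!/usr/bin/env python
    # Minimal sketch: reuse one TCP connection for repeated service calls.
    # "/my_query" and the Trigger type are placeholders for illustration.
    import rospy
    from std_srvs.srv import Trigger

    rospy.init_node("persistent_client_sketch")
    rospy.wait_for_service("/my_query")

    # persistent=True keeps the underlying socket open across calls, so the
    # ESTABLISHED/TIME_WAIT counts stay flat instead of growing per call.
    client = rospy.ServiceProxy("/my_query", Trigger, persistent=True)

    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        try:
            client()  # each call reuses the same connection
        except rospy.ServiceException:
            # a persistent connection can drop if the server restarts; reconnect
            client = rospy.ServiceProxy("/my_query", Trigger, persistent=True)
        rate.sleep()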

Edit: From the updated question details.

Could you post, for completeness, the CPU load and network traffic values with pointcloud perception disabled?

It seems that the pointcloud messages are taking up a lot of bandwidth, and (de)serializing + processing them (coordinate system change, self-filtering, object detection, etc.) is in turn consuming significant CPU resources (maxing out a core and leaving no room for the scheduler to process all incoming messages).

What kind of pointcloud input are you feeding move_group? If it's the raw input from a Kinect-like RGBD sensor, that might indeed prove prohibitive. Preprocessing the pointcloud might help. These are some indicative numbers I took some time ago:

  • Original cloud contains 200k-300k valid points.
  • Crop to a bounding volume of interest (~1 order of magnitude fewer points).
  • Downsample with octomap (an additional ~2 orders of magnitude reduction); a cropping/downsampling sketch follows this list.
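As a rough illustration of the crop + downsample steps above, here is a minimal rospy sketch. The topic names, box limits and leaf size are placeholders I made up for the example; in practice the pcl_ros filter nodelets (VoxelGrid, and CropBox if available in your distro) do the same job with far less CPU overhead.

    #!/usr/bin/env python
    # Minimal sketch of pointcloud preprocessing before it reaches move_group:
    # crop to a box of interest, then a crude voxel downsample.
    import math
    import rospy
    from sensor_msgs.msg import PointCloud2
    from sensor_msgs import point_cloud2

    BOX_MIN = (-1.0, -1.0, 0.0)   # crop volume, in the cloud's frame (placeholder)
    BOX_MAX = ( 1.0,  1.0, 1.5)
    LEAF = 0.02                   # ~2 cm voxels (placeholder)

    def callback(cloud):
        voxels = {}
        for x, y, z in point_cloud2.read_points(cloud, ("x", "y", "z"), skip_nans=True):
            if not (BOX_MIN[0] <= x <= BOX_MAX[0] and
                    BOX_MIN[1] <= y <= BOX_MAX[1] and
                    BOX_MIN[2] <= z <= BOX_MAX[2]):
                continue
            # keep one point per voxel (first point wins)
            key = (int(math.floor(x / LEAF)),
                   int(math.floor(y / LEAF)),
                   int(math.floor(z / LEAF)))
            voxels.setdefault(key, (x, y, z))
        pub.publish(point_cloud2.create_cloud_xyz32(cloud.header, list(voxels.values())))

    rospy.init_node("cloud_prefilter_sketch")
    pub = rospy.Publisher("/camera/depth/points_filtered", PointCloud2, queue_size=1)
    rospy.Subscriber("/camera/depth/points", PointCloud2, callback, queue_size=1)
    rospy.spin()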

Finally, if you need point clouds at discrete time instances (as opposed to a continuous stream), gate pointcloud traffic through an on-demand snapshotter.
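There are ready-made snapshotter packages around, but the idea fits in a few lines. Here is a minimal rospy sketch, with topic and service names as placeholders: the cloud is only pulled from the sensor and republished (latched) when the service is called, so there is no continuous pointcloud traffic into move_group.

    #!/usr/bin/env python
    # Minimal on-demand snapshotter sketch. Topic/service names are placeholders.
    import rospy
    from sensor_msgs.msg import PointCloud2
    from std_srvs.srv import Trigger, TriggerResponse

    def take_snapshot(_req):
        try:
            cloud = rospy.wait_for_message("/camera/depth/points", PointCloud2, timeout=2.0)
        except rospy.ROSException:
            return TriggerResponse(success=False, message="no cloud received")
        pub.publish(cloud)
        return TriggerResponse(success=True, message="cloud republished")

    rospy.init_node("cloud_snapshotter_sketch")
    pub = rospy.Publisher("/snapshot_cloud", PointCloud2, queue_size=1, latch=True)
    rospy.Service("/take_cloud_snapshot", Trigger, take_snapshot)
    rospy.spin()

You would then trigger it with rosservice call /take_cloud_snapshot and point the pointcloud updater in your sensor configuration at the latched /snapshot_cloud topic instead of the raw sensor topic.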