Are you saying that you want to try to reduce the number of points in each cloud, or that you want to reduce the number of clouds?
If it's the former, you have several options. Note, however, that none of these is guaranteed to fix your issue: it's possible that your processor is simply overwhelmed by the computations you're asking it to do, in which case these fixes won't change anything:
- You may be able to modify your driver settings to reduce the size of each depth map/point cloud. For example, with something like openni_launch you can use dynamic_reconfigure to change the size of the point clouds (see the first sketch after this list).
- It's possible that the slow rate is due to the conversion from laser scanner to point cloud. Do you need to be doing this? Could you use laser_filters to reduce the laser scan size before converting to a point cloud (see the second sketch below)?
- Again, depending on where the bottleneck is, you could use something like a VoxelGrid filter to reduce the size of the point clouds (see the third sketch below).
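For the driver-settings option, here is a minimal sketch of changing the output size at runtime with the dynamic_reconfigure command-line tool. The node name /camera/driver and the mode values are assumptions based on a typical openni_launch setup; use dynparam list/get to find the real names and values on your system:

```
# List nodes that expose dynamic_reconfigure parameters:
rosrun dynamic_reconfigure dynparam list

# Drop the depth and image streams to a smaller mode. The node name
# /camera/driver and the mode value are assumptions; run
# "dynparam get /camera/driver" to see what your driver actually offers.
rosrun dynamic_reconfigure dynparam set /camera/driver depth_mode 5
rosrun dynamic_reconfigure dynparam set /camera/driver image_mode 5
```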
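For the laser_filters option, here is a sketch of a scan_to_scan_filter_chain that trims the angular range of each scan before anything converts it to a cloud. The topic names and angle limits are placeholders for illustration:

```xml
<!-- Filter the scan before it is converted to a point cloud.
     Topic names and angle limits are placeholders. -->
<node pkg="laser_filters" type="scan_to_scan_filter_chain" name="scan_filter">
  <remap from="scan" to="base_scan" />
  <remap from="scan_filtered" to="base_scan_filtered" />
  <rosparam>
    scan_filter_chain:
      - name: angular_bounds
        type: laser_filters/LaserScanAngularBoundsFilter
        params:
          lower_angle: -1.57   # keep only the front 180 degrees
          upper_angle: 1.57
  </rosparam>
</node>
```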
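For the VoxelGrid option, here is a sketch using the pcl/VoxelGrid nodelet from pcl_ros; the input topic and leaf size are assumptions to adapt to your setup. The downsampled cloud comes out on the nodelet's ~output topic:

```xml
<!-- Downsample each cloud onto a 5 cm voxel grid; the input topic and
     leaf size are placeholders. Output appears on /voxel_grid/output. -->
<node pkg="nodelet" type="nodelet" name="pcl_manager" args="manager" />
<node pkg="nodelet" type="nodelet" name="voxel_grid"
      args="load pcl/VoxelGrid pcl_manager">
  <remap from="~input" to="/camera/depth/points" />
  <param name="leaf_size" value="0.05" />
</node>
```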
If it's the latter, your driver may be able to reduce its publish rate, or you could use a tool like the throttle node from topic_tools to automatically throw some point clouds away.
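For example, here is a sketch of throttling a cloud topic to 2 Hz with topic_tools; the topic names are assumptions:

```xml
<!-- Republish at most 2 clouds per second; topic names are placeholders. -->
<node pkg="topic_tools" type="throttle" name="cloud_throttle"
      args="messages /camera/depth/points 2.0 /camera/depth/points_throttled" />
```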
Note that a common bottleneck when working with this kind of large data is memory allocation and message copying. Using nodelets, which pass messages within a single process as shared pointers instead of serializing them, can often dramatically speed up a pipeline of nodes.
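As a sketch of that, you can load the VoxelGrid filter from above into the camera driver's existing nodelet manager so the raw clouds never cross a process boundary. The manager name camera_nodelet_manager is an assumption based on a typical openni_launch setup:

```xml
<!-- Load the filter into the driver's own manager so clouds are passed
     by pointer; "camera_nodelet_manager" is an assumed manager name for
     a typical openni_launch setup (check "rosnode list" for yours). -->
<node pkg="nodelet" type="nodelet" name="voxel_grid"
      args="load pcl/VoxelGrid /camera/camera_nodelet_manager">
  <remap from="~input" to="/camera/depth/points" />
  <param name="leaf_size" value="0.05" />
</node>
```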