
How to do point cloud data filtering?

asked 2018-05-01 09:27:17 -0600

Mekateng

Hi, I need help with point cloud filtering. I converted laser scan data to point cloud data on a PC. Now, however, I am using a Jetson TX1 development kit, and the point cloud publish rate is seriously reduced compared to the PC. That is why I want to reduce the amount of point cloud data: I think the rate will increase if the number of points decreases. Could you please help me with this situation? Thanks in advance.


1 Answer


answered 2018-05-01 10:11:21 -0600

updated 2018-05-01 10:13:01 -0600

Are you saying that you want to try and reduce the number of points in each cloud? Or are you trying to say that you want to reduce the number of clouds?

If it's the former, you have several options. However, note that none of these may actually fix your issue. It's possible that your processor is simply overwhelmed by the type of computations you are trying to do and these fixes won't change anything:

  • You may be able to modify your driver settings to reduce the size of each depth map/point cloud. For example, with something like openni_launch you can use dynamic_reconfigure to change the size of point clouds.
  • It's possible that the slow rate is due to the conversion from laser scanner to point cloud. Do you need to be doing this? Could you use laser_filters to reduce the laser scan size before converting to a point cloud?
  • Again, depending on where the bottleneck is, you could use something like a VoxelGrid Filter to reduce the size of the point clouds.
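To make the voxel-grid idea concrete: the filter buckets points into fixed-size 3D cells and keeps one representative point (typically the centroid) per cell. Below is a minimal NumPy sketch of that idea, not the actual pcl_ros VoxelGrid filter; the function name and the 0.05 m leaf size are illustrative choices.

```python
import numpy as np

def voxel_downsample(points, leaf_size=0.05):
    """Reduce an (N, 3) point array by averaging all points that fall
    into the same leaf_size x leaf_size x leaf_size cell (the centroid)."""
    # Integer cell index for every point.
    idx = np.floor(points / leaf_size).astype(np.int64)
    # Group points by cell; `inverse` maps each point to its cell's row.
    _, inverse, counts = np.unique(
        idx, axis=0, return_inverse=True, return_counts=True)
    inverse = inverse.ravel()
    # Sum the points in each cell, then divide by the cell's point count.
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# 1000 random points in a 1 m cube collapse to far fewer representatives.
cloud = np.random.rand(1000, 3)
small = voxel_downsample(cloud, leaf_size=0.05)
print(small.shape[0] <= cloud.shape[0])
```

A larger leaf size discards more detail but produces smaller clouds, so it trades accuracy for throughput.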

If it's the latter, your driver may be able to reduce its publish rate, or you could use a tool like the throttle node to automatically throw some point clouds away.
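For reference, throttling with topic_tools looks roughly like this; the topic names are placeholders for whatever your pipeline actually uses:

```shell
# Republish /cloud_in on /cloud_throttled at no more than 2 messages per second.
rosrun topic_tools throttle messages /cloud_in 2.0 /cloud_throttled
```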

Note that a common bottleneck when working with large data like this is memory allocation and message serialization between nodes. Using nodelets, which pass messages by pointer within a single process instead of serializing them over the loopback interface, can often dramatically speed up a pipeline of nodes.
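As one hypothetical way to combine these suggestions, pcl_ros ships a VoxelGrid nodelet that can be loaded into a shared nodelet manager. A launch file sketch might look like the following; the manager name, input topic, and leaf size are placeholder values you would adapt to your setup:

```xml
<launch>
  <!-- A manager process that hosts the nodelets. -->
  <node pkg="nodelet" type="nodelet" name="pcl_manager" args="manager" />

  <!-- Load the VoxelGrid filter into the manager. -->
  <node pkg="nodelet" type="nodelet" name="voxel_grid"
        args="load pcl/VoxelGrid pcl_manager">
    <remap from="~input" to="/cloud_in" />
    <param name="leaf_size" value="0.05" />
  </node>
</launch>
```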




Sorry if this isn't a very helpful answer. I think the answer to your question depends on what packages you are already using, what hardware/driver you are using, and what you are ultimately trying to do. Feel free to edit your question if you have more details.

jarvisschultz  ( 2018-05-01 10:22:31 -0600 )
