Using the raw cloud from the Kinect passes a ton of points to the navigation stack for it to ray-trace when clearing. A few questions:

1) Have you considered downsampling the point cloud to a more reasonable resolution, something like 1-5cm? (See the VoxelGrid sketch at the end of this answer.)

2) Are you seeing any warnings or errors from the navigation stack when you run?

3) Are you sure that the cloud you're passing to the navigation stack contains enough information for it to clear obstacles through raytracing? Specifically, check whether you have a lot of missing depth readings from the sensor; you'd probably need to fill those in artificially for clearing to work well (there's a depth-filling sketch at the end of this answer).
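
On (1), here's a minimal sketch of the downsampling idea using a PCL VoxelGrid filter in a standalone node. The topic names (`camera/depth/points`, `cloud_downsampled`) and the 5cm leaf size are just illustrative assumptions, not values from your setup:

```cpp
// Minimal sketch: downsample a Kinect cloud with a PCL VoxelGrid filter
// before it reaches the costmap. Topic names and the 5cm leaf size are
// assumptions -- tune them for your robot.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl_conversions/pcl_conversions.h>

ros::Publisher pub;

void cloudCb(const sensor_msgs::PointCloud2ConstPtr& input)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::fromROSMsg(*input, *cloud);

  // Keep one point per 5cm voxel; this cuts the raw ~300k-point cloud
  // down to something the costmap can ray-trace in real time.
  pcl::VoxelGrid<pcl::PointXYZ> voxel;
  voxel.setInputCloud(cloud);
  voxel.setLeafSize(0.05f, 0.05f, 0.05f);

  pcl::PointCloud<pcl::PointXYZ> filtered;
  voxel.filter(filtered);

  sensor_msgs::PointCloud2 output;
  pcl::toROSMsg(filtered, output);
  output.header = input->header;
  pub.publish(output);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "cloud_downsampler");
  ros::NodeHandle nh;
  pub = nh.advertise<sensor_msgs::PointCloud2>("cloud_downsampled", 1);
  ros::Subscriber sub = nh.subscribe("camera/depth/points", 1, cloudCb);
  ros::spin();
  return 0;
}
```

pcl_ros also ships a VoxelGrid nodelet that does the same thing through parameters, so a custom node like this is only worth writing if you want to add other processing in the same place.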
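On (3), one common way to let raytracing clear through missing readings is to replace NaN depth pixels with an artificial max-range value before the depth image gets converted into a cloud. This is only a sketch of that idea: it assumes a 32FC1 depth image in meters (as published by openni_camera), and the topic names and the 4.5m range are placeholders:

```cpp
// Minimal sketch: replace missing (NaN or zero) depth readings with an
// artificial max-range value so the resulting cloud lets the costmap
// clear along those rays. Topic names and the 4.5m range are assumptions.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <cmath>

ros::Publisher pub;
const float kMaxRange = 4.5f;  // assumed usable Kinect range, in meters

void depthCb(const sensor_msgs::ImageConstPtr& msg)
{
  // Assumes a 32FC1 depth image where each pixel is a depth in meters.
  sensor_msgs::Image filled = *msg;
  float* depth = reinterpret_cast<float*>(&filled.data[0]);
  const size_t n = filled.width * filled.height;
  for (size_t i = 0; i < n; ++i)
  {
    if (std::isnan(depth[i]) || depth[i] == 0.0f)
      depth[i] = kMaxRange;  // pretend the ray traveled out to max range
  }
  pub.publish(filled);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "depth_filler");
  ros::NodeHandle nh;
  pub = nh.advertise<sensor_msgs::Image>("depth_filled", 1);
  ros::Subscriber sub = nh.subscribe("camera/depth/image", 1, depthCb);
  ros::spin();
  return 0;
}
```

Keep in mind that treating unknown pixels as max-range readings can clear real obstacles the Kinect simply can't see (very close or IR-absorbing surfaces), so it's a trade-off rather than a free fix.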