It is not a question of reliability; it's a fundamental difference between 2D and 3D sensors:

2D sensors

You always get information from the sensor for every pixel of the image (as long as there are no dead pixels). If no light hits the sensor, the image will simply be black, meaning the three channels R, G, B (from the Bayer filter) will all be 0; that zero is itself valid information.
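A minimal sketch of this idea (the image size is arbitrary, chosen for illustration): a frame captured with no incoming light is still fully defined, since every pixel holds a valid value of 0.

```python
import numpy as np

# Hypothetical 4x4 RGB frame captured in complete darkness:
# every pixel still carries a valid measurement, all channels are 0.
height, width = 4, 4
black_image = np.zeros((height, width, 3), dtype=np.uint8)

# "No light" is itself information: R = G = B = 0 at each pixel.
print(black_image[0, 0])  # -> [0 0 0]
```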

3D sensors

3D sensors differ in several ways: while most of them rely on 2D sensors (one or more), it is not always possible to compute the depth of a point.

Imagine a stereo camera: the stereo correspondence between a pixel in the left image and its match in the right image can fail because of occlusions, algorithm weaknesses, bad lighting, etc. When the correspondence fails, the Z coordinate of the point cannot be calculated, so it is filled with NaN. The X and Y coordinates should never be NaN, because they are simply the coordinates of the point in the pixel matrix.
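A toy illustration of how a failed match produces NaN depth (this is not a real stereo matcher; the focal length and baseline values are made up). Depth is computed from disparity as Z = f * b / d, and a disparity of 0 here marks a pixel where correspondence failed:

```python
import numpy as np

focal_length_px = 500.0  # hypothetical focal length in pixels
baseline_m = 0.1         # hypothetical stereo baseline in metres

# Disparities for four pixels; 0.0 marks a failed correspondence.
disparity = np.array([10.0, 25.0, 0.0, 40.0])

# Z = f * b / d where the match succeeded, NaN where it failed.
with np.errstate(divide="ignore"):
    z = np.where(disparity > 0,
                 focal_length_px * baseline_m / disparity,
                 np.nan)

print(z)  # the pixel with no correspondence gets NaN depth
```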

Conclusion

It is perfectly normal to have "holes" in a Kinect (or any other 3D sensor) cloud; it is very unlikely you will ever get a point cloud with zero NaN values.
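In practice you simply filter those holes out before processing. A minimal sketch with a hypothetical Nx3 cloud, keeping only points whose Z is finite (the same idea as PCL's removeNaNFromPointCloud):

```python
import numpy as np

# Hypothetical Nx3 point cloud containing one "hole" (NaN depth).
cloud = np.array([[0.1, 0.2, 1.5],
                  [0.3, 0.1, np.nan],
                  [0.0, 0.4, 2.0]])

# Keep only points with a valid (finite) Z coordinate.
valid = np.isfinite(cloud[:, 2])
filtered = cloud[valid]

print(f"{len(cloud) - len(filtered)} NaN point(s) removed")
```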