Hopefully I can shed some light on this for you.
Q1: Reading a PointCloud2 with readXYZ(ptCloud) does not produce a depth image; it produces a structured point cloud, which is a different data structure. Each pixel is either null (usually zero) or a Cartesian point in the sensor's optical frame. By convention, Z points into the view, X points right and Y points down.
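To make the distinction concrete, here is a small NumPy sketch of what a structured point cloud looks like (the shape, frame convention and example values are illustrative assumptions, not taken from any particular sensor):

```python
import numpy as np

# Hypothetical structured point cloud: one XYZ triple per pixel, in the
# sensor's optical frame (Z forward into the view, X right, Y down).
# Invalid returns are marked NaN here (some drivers use zeros instead).
height, width = 480, 640
xyz = np.full((height, width, 3), np.nan, dtype=np.float32)

# Example: the pixel at row 100, col 200 sees a point 1.5 m away,
# slightly right of and below the optical axis.
xyz[100, 200] = [0.10, 0.05, 1.50]

# A depth image is just the Z channel of this structure.
depth = xyz[:, :, 2]
print(depth.shape)      # (480, 640)
print(depth[100, 200])  # 1.5
```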
Q2: You're trying to convert a structured point cloud into a depth image; that's easy, you just throw away the X and Y data. The Z values are the depth. If you know the lens model and calibration parameters of the depth camera, you can always re-create the X and Y values later. To convert to unsigned 16-bit integers, use the uint16() function; to convert to 32-bit floating-point values, use the single() function.
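A NumPy sketch of that conversion (standing in for MATLAB's uint16()/single()), assuming the two common ROS depth encodings: 32-bit float in metres with NaN for invalid pixels, and 16-bit unsigned in millimetres with 0 for invalid pixels:

```python
import numpy as np

# Toy 2x2 depth map in metres; NaN marks an invalid pixel (assumed data).
depth_m = np.array([[0.5, 1.25],
                    [np.nan, 2.0]], dtype=np.float32)

# 32-bit float depth image: keep metres, NaN stays as the invalid marker.
depth_32f = depth_m.astype(np.float32)

# 16-bit unsigned depth image: scale to millimetres, use 0 for invalid.
depth_16u = np.where(np.isnan(depth_m), 0, depth_m * 1000).astype(np.uint16)
# depth_16u values: 500, 1250, 0, 2000
```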
Q3: The size of the data array is simply the number of bytes stored. Each pixel of your depth image requires two bytes, so there are 480 x 640 x 2 bytes in the message.
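The arithmetic is just rows times columns times bytes per pixel:

```python
# Length of the data array for a 480x640 depth image stored as
# 16-bit (2-byte) unsigned integers.
rows, cols, bytes_per_pixel = 480, 640, 2
data_len = rows * cols * bytes_per_pixel
print(data_len)  # 614400
```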
Re Update: As noted above, readXYZ() does not give you a depth image. Your last point is correct: each pixel is two uint8 values, otherwise known as bytes.
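Those byte pairs can be reinterpreted as 16-bit depth values; a sketch with a hand-made byte buffer, assuming little-endian data (the message's is_bigendian flag tells you which byte order applies):

```python
import numpy as np

# Raw byte buffer for two depth pixels: 0x01F4 = 500, 0x03E8 = 1000
# (little-endian, so the low byte comes first).
raw = np.array([0xF4, 0x01, 0xE8, 0x03], dtype=np.uint8)

# Reinterpret each pair of uint8 bytes as one little-endian uint16.
depth = raw.view('<u2')
print(depth)  # [ 500 1000]
```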
Hope this helps.