# Understanding /camera/depth/points

What do the values in the `data` field of the messages published on /camera/depth/points represent? Are they supposed to be the distance of each pixel from the Kinect's center point? How do I convert this point cloud data into the corresponding Cartesian 3D coordinates? An array of size [4915200x1] is generated, which suggests each pixel's point is represented by 16 bytes in the buffer (because 480 x 640 x 16 = 4915200). How should I interpret this? Please also explain how this point cloud data gets visualized in rviz.
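To show what I mean by decoding the buffer, here is a minimal sketch of how such a byte array could be unpacked, assuming (this is my guess from the 4915200-byte figure, not confirmed from the actual message fields) that each point occupies 16 bytes, with x, y, z stored as little-endian float32 at offsets 0, 4, and 8 and the last 4 bytes used for padding or color:

```python
import struct

# Assumed layout: 640*480 points, point_step = 16 bytes each,
# x, y, z as little-endian float32 at offsets 0, 4, 8,
# remaining 4 bytes padding/rgb. This is a guess, not the verified format.
POINT_STEP = 16

def unpack_points(data, point_step=POINT_STEP):
    """Yield (x, y, z) tuples from a raw point-cloud-style byte buffer."""
    for offset in range(0, len(data), point_step):
        x, y, z = struct.unpack_from('<fff', data, offset)
        yield (x, y, z)

# Synthetic two-point buffer for illustration (not real Kinect output):
buf = (struct.pack('<fff', 0.1, 0.2, 1.5) + b'\x00' * 4
       + struct.pack('<fff', -0.1, 0.0, 1.5) + b'\x00' * 4)
print(list(unpack_points(buf)))
```

Is this roughly what rviz does internally when it reads the field offsets from the message and renders each point?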

I kept my Kinect in front of a plain flat wall, parallel to it. I expected a matrix with a uniform value throughout, since the wall is equidistant from the Kinect, but instead I get data that I cannot understand or decode!
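My current guess for why the values are not uniform: if the points are produced by back-projecting each depth pixel through a pinhole camera model, then for a wall parallel to the sensor only z is constant, while x and y vary per pixel, so the packed buffer would not be uniform. A small sketch of that reasoning (fx, fy, cx, cy are made-up example intrinsics, not values from a real Kinect calibration):

```python
# Assumed example intrinsics -- placeholders, not a real calibration.
fx, fy = 525.0, 525.0   # focal lengths in pixels
cx, cy = 319.5, 239.5   # principal point

def pixel_to_xyz(u, v, z):
    """Back-project pixel (u, v) at depth z to Cartesian (x, y, z)."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Same depth z = 1.5 m everywhere (flat wall), yet x and y differ per pixel:
print(pixel_to_xyz(319.5, 239.5, 1.5))  # image center
print(pixel_to_xyz(0, 0, 1.5))          # top-left corner
```

Is this the right mental model for what /camera/depth/points contains?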