Hi, I am doing research on sensor fusion, and I need to reconstruct a depth image using only the contents of the topic depth_registered/points. I already know that it is of type PointCloud2 and that it has an associated set of PointFields for interpreting the data. I am following more or less this algorithm to get the Z coordinate (getting X, Y and RGB works the same way, just with different offsets, which are provided by the PointField information):
- Get the 9th, 10th, 11th and 12th bytes from the 32-byte block for each pixel
- Convert each of those bytes (9, 10, 11, 12) to its binary representation
- Concatenate the binary representations (one for each byte)
- Reverse the byte order of the concatenated representation (because the data is little endian, not big endian)
- Convert the reversed representation to a float32 value (a MATLAB sketch of this procedure follows this list)
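For reference, here is a minimal MATLAB sketch of the same extraction done with typecast instead of manual binary-string manipulation. It assumes the raw data field has already been loaded into a uint8 vector called data, that point_step is 32 bytes with the Z field at offset 8 as a little-endian float32, and that width and height are taken from the message header (data, width and height are placeholder names for whatever your dump file provides):

    % Assumptions: `data` is a 1xN uint8 vector holding the PointCloud2 data
    % field, point_step = 32, and the cloud is little endian (is_bigendian = false).
    point_step = 32;
    pts = reshape(data, point_step, []);            % one column of 32 bytes per point

    % Bytes 9..12 (1-based) are the little-endian float32 Z value.
    z_bytes = pts(9:12, :);                         % 4 x (width*height)
    Z = typecast(reshape(z_bytes, 1, []), 'single');  % typecast uses the machine's
                                                      % (little-endian) byte order,
                                                      % so no manual reversal is needed

    % Points are stored row by row, so rebuild the height x width depth image.
    Z = reshape(Z, width, height)';

    % Assumption: invalid depth returns in organized clouds are encoded as NaN;
    % mask them before plotting so they do not show up as apparent noise.
    Z(isnan(Z)) = 0;

If this produces garbage values, it is worth double-checking the offsets and point_step against the fields array of the actual message rather than assuming 32 bytes per point.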
Well, I have some results for my research, but I am worried because, following this algorithm, the Z matrix I obtain has too much noise (I think). I have attached this result below to show you.
One of the restrictions of my research is that I am not allowed to use the conversion between this topic and PCL (pcl_conversions); I am just using a file that contains all the data (the data field of the PointCloud2 message) at each instant and working with it in MATLAB.
I have attached the original image reconstruction (taking bytes 19, 18 and 17 of the 32-byte block as R, G and B) and a plot of my Z result, where each color represents a depth value.
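For completeness, this is a sketch of that RGB reconstruction, under the same assumptions as the Z sketch above (pts is the 32 x (width*height) byte matrix, points stored row by row, little endian, so bytes 17, 18 and 19 hold B, G and R respectively):

    B = double(pts(17, :));                       % byte 17 = blue
    G = double(pts(18, :));                       % byte 18 = green
    R = double(pts(19, :));                       % byte 19 = red

    to_img = @(v) reshape(v, width, height)';     % rebuild a height x width plane
    RGB = cat(3, to_img(R), to_img(G), to_img(B)) / 255;  % R,G,B planes scaled to [0,1]

    imshow(RGB);                                  % original image reconstruction
    figure; imagesc(Z); colorbar;                 % depth plot, one color per depth value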
Original Image

Z Matrix
