Hello mattso,

I am not sure if I understood the question entirely, but I will try to give you the best possible answer.

I am guessing that you are using registered depth images, which means that the depth and color images share the same coordinate system. This is important because, in that case, you can match pixels between the color and depth images: a pixel at (x1,y1) in the color image has its depth encoded at pixel (x1,y1) in the depth image.

That is true for every pixel in the image. However, keep in mind that depth sensors are not perfect and not every pixel will contain the real depth. Some will just be noisy, and others will end up as NaNs or zeros because the sensor was not able to determine the depth (or was not certain enough, depending on the RealSense preset you are using).
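
As a rough illustration, here is a minimal C++ sketch that looks up the depth at one color pixel and skips invalid measurements. The topic name and the 32FC1 (float, meters) encoding are assumptions on my side, so adapt them to your setup (see the next point about encodings):

```cpp
// Minimal sketch: read the depth at one color pixel from a registered depth image.
// Assumes a 32FC1 (float, meters) encoding; see the encoding note below for 16UC1.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <cv_bridge/cv_bridge.h>
#include <cmath>

void depthCallback(const sensor_msgs::ImageConstPtr& msg)
{
  // Convert without changing the encoding.
  cv_bridge::CvImageConstPtr cv_ptr = cv_bridge::toCvShare(msg);

  const int u = 320, v = 240;                   // pixel of interest (x1, y1) from the color image
  float depth = cv_ptr->image.at<float>(v, u);  // note: OpenCV indexing is (row, col) = (y, x)

  // Invalid measurements show up as NaN or 0.
  if (std::isnan(depth) || depth <= 0.0f)
  {
    ROS_WARN("No valid depth at pixel (%d, %d)", u, v);
    return;
  }
  ROS_INFO("Depth at (%d, %d): %.3f m", u, v, depth);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "depth_lookup");
  ros::NodeHandle nh;
  // Topic name is a guess; use your camera's registered/aligned depth topic.
  ros::Subscriber sub = nh.subscribe("/camera/aligned_depth_to_color/image_raw", 1, depthCallback);
  ros::spin();
  return 0;
}
```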

One more thing to pay attention to is the depth image encoding. You need to check how the depth values are encoded: in millimeters or in meters. The easiest way to do that is by checking the value of the encoding field in your sensor_msgs::Image message. It can be either 32FC1 (float, in meters) or 16UC1 (16-bit unsigned integer, in millimeters). You can read more about it here.
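
If you are unsure which encoding you will get, you can branch on that encoding field. A small sketch (the helper name is mine, and it assumes the cv_bridge pointer from the example above):

```cpp
// Sketch of handling both common depth encodings (see REP 118):
// 16UC1 stores millimeters as uint16, 32FC1 stores meters as float.
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <cstdint>
#include <limits>

float depthInMeters(const cv_bridge::CvImageConstPtr& cv_ptr, int u, int v)
{
  if (cv_ptr->encoding == sensor_msgs::image_encodings::TYPE_16UC1)
  {
    uint16_t d_mm = cv_ptr->image.at<uint16_t>(v, u);
    return d_mm / 1000.0f;                           // millimeters -> meters
  }
  else if (cv_ptr->encoding == sensor_msgs::image_encodings::TYPE_32FC1)
  {
    return cv_ptr->image.at<float>(v, u);            // already in meters
  }
  return std::numeric_limits<float>::quiet_NaN();    // unexpected encoding
}
```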

Moreover, if you are interested in creating a point cloud based on the input depth image, check out the depth_image_proc package, especially this part.
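
For reference, the per-pixel math behind that conversion is just the standard pinhole back-projection using the intrinsics from the matching sensor_msgs::CameraInfo. A hedged sketch (function name is mine):

```cpp
// Back-project one pixel (u, v) with depth d (in meters) to a 3D point
// in the camera frame, using the intrinsics from sensor_msgs/CameraInfo.
#include <sensor_msgs/CameraInfo.h>
#include <geometry_msgs/Point.h>

geometry_msgs::Point backProject(const sensor_msgs::CameraInfo& info,
                                 double u, double v, double depth)
{
  const double fx = info.K[0], fy = info.K[4];  // focal lengths
  const double cx = info.K[2], cy = info.K[5];  // principal point

  geometry_msgs::Point p;
  p.x = (u - cx) * depth / fx;
  p.y = (v - cy) * depth / fy;
  p.z = depth;
  return p;
}
```

depth_image_proc essentially applies this to every valid depth pixel and packs the results into a sensor_msgs/PointCloud2.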

If you have any further questions, feel free to ask.