# Kinect Depth

Hi there, I don't know why I'm losing my depth data when I display it from the Kinect through my program. It appears black and white; I'm losing the grayscale.

```cpp
void imageCallbackdepth(const sensor_msgs::ImageConstPtr& msg)
{
  cv_bridge::CvImagePtr cv_ptr;
  try
  {
    cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::TYPE_16SC1);
  }
  catch (cv_bridge::Exception& e)
  {
    ROS_ERROR("cv_bridge exception: %s", e.what());
    return;
  }

  cv::imshow("OpenCV viewer Kinect depth", cv_ptr->image);
  cv::waitKey(3);
}
```

In the main function:

```cpp
image_transport::Subscriber sub = it.subscribe("/camera/depth/sw_registered/image_rect_raw", 1, imageCallbackdepth);
```

http://s9.postimg.org/6b800dbn3/Image...

I think that the problem is here: `image_encodings::TYPE_16SC1` !!? How can I solve this?



Hello.

I am probably one of the least qualified people to answer this, since I have only just joined the ROS community and have little coding experience so far. I am learning.

But I came across some documentation about OpenCV and the Kinect, one of my areas of interest, and indeed your gut feeling is right: the image encoding seems to be the culprit here.

I found this reference that helps a bit:

"Note that mono8 and bgr8 are the two image encodings expected by most OpenCV functions."

You can find the entire reference, and the actual parameters here:

wiki.ros.org/cv_bridge/Tutorials/UsingCvBridgeToConvertBetweenROSImagesAndOpenCVImages

Now, it is important to remember that:

"The depth sensor consists of an infrared laser projector combined with a monochrome CMOS sensor..." (from en.wikipedia.org/wiki/Kinect )

This means that the depth is indeed monochromatic, but the RGB camera is not. The Kinect has both. Are you reading the right one?

Additionally, you are using the Kinect from the Xbox 360, right? Not the one from the Xbox One. I ask because, until a few months ago, the newer Kinect was not yet supported/"cracked".

"I had the same problem. Taking a point-cloud approach is one way of doing it. I was trying to do this purely in Python, so that wouldn't work for me. As for the problem where the Kinect rounded the depth values, that was easily fixed. We discovered that, when converting to OpenCV, we used the command `bridge.imgmsg_to_cv(data, "mono8")`, where `bridge` was a cv_bridge. However, that expects an 8-bit integer, while the camera produces a 32-bit floating-point number, so the command should be: `bridge.imgmsg_to_cv(data, "32FC1")`."

answers.ros.org/question/9260/depth-wrong-using-openni_camera-depth-image/

And, because my bloody karma will not allow me, add the http before each URL.

Hope this sheds some light.
