
Kinect Depth

asked 2014-06-05 13:22:33 -0600

ROSkinect gravatar image

Hi there, I don't know why I'm losing my depth data when I display it from the Kinect in my program. It appears as black and white; I'm losing the grayscale.

void imageCallbackdepth(const sensor_msgs::ImageConstPtr& msg)
{
    cv_bridge::CvImagePtr cv_ptr;
    try
    {
        cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::TYPE_16SC1);
    }
    catch (cv_bridge::Exception& e)
    {
        ROS_ERROR("cv_bridge exception: %s", e.what());
        return;
    }

    cv::imshow("OpenCV viewer Kinect depth", cv_ptr->image);
    cv::waitKey(3);
}



In the main function:

    image_transport::Subscriber sub = it.subscribe("/camera/depth/sw_registered/image_rect_raw", 1, imageCallbackdepth);

I think that the problem is here: image_encodings::TYPE_16SC1. How can I solve this?
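One common reason a depth image looks purely black and white in an OpenCV window is scale, not just encoding: cv::imshow stretches a 16-bit image over the range 0..65535, while Kinect depths in millimetres only occupy the first few thousand values, so almost everything renders near black (or saturates to white). A minimal sketch of the scaling step, assuming the depth values are millimetres and a hypothetical maximum display range of 10000 mm; the function name and range are illustrative, not part of any ROS or OpenCV API:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Map a raw 16-bit depth sample (assumed millimetres) into an 8-bit
// grayscale value, clipping at an assumed 10 m maximum range.
// Without a step like this, imshow spreads the pixel values over
// 0..65535 and typical indoor depths all look nearly black.
uint8_t depthTo8Bit(uint16_t depth_mm, uint16_t max_mm = 10000) {
    if (depth_mm >= max_mm) return 255;
    return static_cast<uint8_t>((depth_mm * 255) / max_mm);
}

// Scale one row of raw depth samples for display.
std::vector<uint8_t> scaleDepthRow(const std::vector<uint16_t>& row) {
    std::vector<uint8_t> out(row.size());
    std::transform(row.begin(), row.end(), out.begin(),
                   [](uint16_t d) { return depthTo8Bit(d); });
    return out;
}
```

In an actual callback the equivalent one-liner would be OpenCV's cv::Mat::convertTo with an alpha factor (e.g. 255.0/10000), producing a CV_8UC1 image that imshow displays with visible gray gradations.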


1 Answer


answered 2014-06-05 16:37:42 -0600

ccapriotti gravatar image


I am probably one of the least recommended persons to be answering this, for I have only just joined the ROS community and have zero coding experience so far. I am learning.

But I came across some documentation about OpenCV and the Kinect, one of my areas of interest, and indeed your gut feeling is right: the image encoding seems to be the culprit here.

I found this reference that helps a bit:

"Note that mono8 and bgr8 are the two image encodings expected by most OpenCV functions."

You can find the entire reference, and the actual parameters here:

Now, it is important to remember that:

"The depth sensor consists of an infrared laser projector combined with a monochrome CMOS sensor..." (from )

This means that the depth stream is indeed monochromatic, but the RGB camera is not. You have both. Are you reading the right one?

Additionally, you are using the Kinect from the Xbox 360, right? Not the one from the Xbox One. I ask because, until a few months ago, the new Kinect was not yet supported/"cracked".

Additionally, there is this:

"I had the same problem. Taking a point-cloud approach is one way of doing it. I was trying to do this purely in Python, so that wouldn't work for me. For the problem where the Kinect rounded the depth values, that was easily fixed. We discovered that, when we converted to OpenCV, we used the command bridge.imgmsg_to_cv(data, "mono8"), where bridge was a cv_bridge. However, this is looking for an 8-bit integer, and the camera produces a 32-bit floating point number, so this command should be: bridge.imgmsg_to_cv(data, "32FC1")"

And, because my bloody karma will not allow me to post links, add the http before each URL.
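That quote points at the other common depth format: the OpenNI driver can publish depth as 32-bit floats in metres (32FC1), with NaN where the sensor has no reading, and those values also need conversion to 8-bit before display. A sketch of that per-pixel conversion, assuming depth in metres and a hypothetical 10 m clip range; the function name is illustrative only:

```cpp
#include <cassert>
#include <cstdint>
#include <cmath>

// Convert one 32FC1 depth sample (assumed metres; may be NaN where
// the sensor has no return) to an 8-bit grayscale value, assuming a
// hypothetical 0..10 m display range. Invalid pixels map to 0 (black).
uint8_t floatDepthTo8Bit(float depth_m, float max_m = 10.0f) {
    if (std::isnan(depth_m) || depth_m <= 0.0f) return 0;
    if (depth_m >= max_m) return 255;
    return static_cast<uint8_t>(depth_m / max_m * 255.0f);
}
```

Whichever encoding your topic actually carries (16UC1 millimetres or 32FC1 metres), the fix is the same shape: request the matching encoding from cv_bridge, then rescale to 8-bit before handing the image to imshow.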

Hope this sheds some light.




Asked: 2014-06-05 13:22:33 -0600

Seen: 766 times

Last updated: Jun 05 '14