
We ran into the same problem a while ago. Dan is correct: the formats do not match.

If you look into the rgbdslam package, there is a small piece of code that does this conversion:

void depthToCV8UC1(const cv::Mat& float_img, cv::Mat& mono8_img){
  // Reallocate the output only if its size doesn't match the input
  if(mono8_img.rows != float_img.rows || mono8_img.cols != float_img.cols){
    mono8_img = cv::Mat(float_img.size(), CV_8UC1);
  }
  // Scale the depth by 100 (metres -> centimetres) and saturate to the 8-bit range
  cv::convertScaleAbs(float_img, mono8_img, 100, 0.0);
  // The following doesn't work due to NaNs in the depth image:
  //double minVal, maxVal;
  //minMaxLoc(float_img, &minVal, &maxVal);
  //ROS_DEBUG("Minimum/Maximum Depth in current image: %f/%f", minVal, maxVal);
  //mono8_img = cv::Scalar(0);
  //cv::line(mono8_img, cv::Point2i(10,10), cv::Point2i(200,100), cv::Scalar(255), 3, 8);
}

After this conversion the image is CV_8UC1, i.e. a 256-level greyscale image. You can write a small node that subscribes to the depth image and publishes the greyscale version, then use your extract... and it should work.

Incidentally, the depth image only uses 11 bits to store the depth value (as I remember, the other 5 bits are used to label different objects), so saving it to an 8-bit greyscale image will not lose too much information.

------ edit ------ To answer the questions in the comments:

In the main function, create the subscriber with the callback:

image_transport::Subscriber depth_sub = it.subscribe("camera/depth/image", 1, depthCallback);

And here is my callback function (just a simple void function for your reference):

void depthCallback(const sensor_msgs::ImageConstPtr& original_image)
{
    cv_bridge::CvImagePtr cv_ptr;
    //Convert from the ROS image message to a CvImage suitable for working with OpenCV for processing
    try
    {
        //Always copy, returning a mutable CvImage
        //No target encoding is given, so the depth image keeps its 32FC1 encoding
        cv_ptr = cv_bridge::toCvCopy(original_image);
    }
    catch (cv_bridge::Exception& e)
    {
        //if there is an error during conversion, display it
        ROS_ERROR("tutorialROSOpenCV::main.cpp::cv_bridge exception: %s", e.what());
        return;
    }

    //Get the float depth image out of the cv_bridge wrapper and convert it to 8-bit greyscale
    cv::Mat depth_float_img = cv_ptr->image;
    cv::Mat depth_mono8_img;
    depthToCV8UC1(depth_float_img, depth_mono8_img);
}
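The callback above converts the image but does not yet publish it. A minimal sketch of that last step, to append at the end of depthCallback after the depthToCV8UC1 call; mono8_pub is a hypothetical image_transport::Publisher you would advertise in main (e.g. on a topic like "camera/depth/image_mono8"):

    // At the end of depthCallback(), after depthToCV8UC1(...):
    cv_bridge::CvImage out_msg;
    out_msg.header   = original_image->header;              // keep the original timestamp and frame
    out_msg.encoding = sensor_msgs::image_encodings::MONO8; // 8-bit greyscale
    out_msg.image    = depth_mono8_img;
    mono8_pub.publish(out_msg.toImageMsg());

Reusing the incoming header keeps the greyscale image time-synchronized with the original depth stream, which matters if anything downstream matches it against other topics.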
