
Format of depth value openni2_camera

asked 2014-02-26 19:29:40 -0600 by domikilo

updated 2014-02-27 22:10:01 -0600

I installed openni2 on my ARM board following this website link. Now I want to understand exactly what the format of the depth data from the Asus Xtion looks like, because I ran into some confusion when displaying the depth data as an image: it came out as a black-and-white image, not a grayscale image. Maybe I need to calibrate this data, but I don't know the format.

void imageCb_d(const sensor_msgs::ImageConstPtr& msg)
{
  cv_bridge::CvImagePtr cv_ptr;
  try
  {
    cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::TYPE_32FC1);
  }
  catch (cv_bridge::Exception& e)
  {
    ROS_ERROR("cv_bridge exception: %s", e.what());
    return;
  }

  cv::imshow("Depth", cv_ptr->image);
  cv::waitKey(3);

  image_pub_.publish(cv_ptr->toImageMsg());
}
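A likely explanation for the black-and-white appearance (my reading of the symptom, not stated in the original post): cv::imshow assumes floating-point images lie in [0, 1], so a 32FC1 depth image in metres saturates to white for every pixel beyond 1 m. A minimal NumPy sketch of that implicit clipping, with a hypothetical depth frame:

```python
import numpy as np

# Hypothetical 32FC1 depth frame in metres, values roughly 0.5-5.0 m.
depth_m = np.linspace(0.5, 5.0, 640 * 480, dtype=np.float32).reshape(480, 640)

# cv::imshow displays float images as if values were in [0, 1]:
# everything at or above 1.0 renders as pure white.
displayed = np.clip(depth_m, 0.0, 1.0)

saturated = float(np.mean(displayed >= 1.0))
print(f"fraction of pixels shown as pure white: {saturated:.2f}")
```

With most of the scene farther than 1 m away, nearly the whole image renders white, which matches the "black-white, not gray" symptom.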

2 Answers


answered 2014-03-01 17:54:22 -0600 by domikilo

I got the solution from the OpenCV tutorial website: just use one simple function to convert the data from 32FC1 to 8UC1. The result looks the same as the depth/image output from the other openni2_camera source.

double minVal, maxVal;
cv::minMaxLoc(cv_ptr->image, &minVal, &maxVal);  // find minimum and maximum depth values
cv::Mat blur_img;
cv_ptr->image.convertTo(blur_img, CV_8U, 255.0 / (maxVal - minVal), -minVal * 255.0 / (maxVal - minVal));

P.S.: you can set minVal and maxVal to constants in millimetres based on your camera's properties.
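For readers working in Python, here is a rough NumPy equivalent of the convertTo() call above. The helper name and the fixed 0-5000 mm range are assumptions for illustration, not values from the original answer:

```python
import numpy as np

def depth_to_8uc1(depth, min_val=0.0, max_val=5000.0):
    """Linearly rescale a float depth image from [min_val, max_val] to uint8
    [0, 255], mirroring cv::Mat::convertTo(dst, CV_8U, alpha, beta)."""
    alpha = 255.0 / (max_val - min_val)
    beta = -min_val * alpha
    scaled = np.clip(depth * alpha + beta, 0, 255)
    return np.rint(scaled).astype(np.uint8)

# Hypothetical depth values in millimetres.
depth_mm = np.array([[0.0, 2000.0], [5000.0, 6000.0]], dtype=np.float32)
print(depth_to_8uc1(depth_mm))
```

Using fixed constants for the range, as the P.S. suggests, keeps the brightness of a given distance stable across frames; per-frame minMaxLoc() makes the mapping flicker as the nearest and farthest objects change.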


Comments

Thanks! This fixed the issues I've been having.

WillAndrew (2015-07-02 07:52:16 -0600)

I tried to use it to convert camera/depth/image_raw, which is 16UC1 data, to OpenCV format, but it doesn't work.

张京林 (2017-01-10 02:48:43 -0600)

answered 2014-02-28 01:52:18 -0600 by ahoarau

Hello,

The Xtion depth images are 32FC1 images, but OpenCV can't really plot those floats directly (what pixel value corresponds to 3.22242 m?). Looking at the image_view code, you can see that they convert to 8UC3 for plotting:

void DisparityNodelet::imageCb(const stereo_msgs::DisparityImageConstPtr& msg)
{
  // Check for common errors in input
  if (msg->min_disparity == 0.0 && msg->max_disparity == 0.0)
  {
    NODELET_ERROR_THROTTLE(30, "Disparity image fields min_disparity and "
                           "max_disparity are not set");
    return;
  }
  if (msg->image.encoding != sensor_msgs::image_encodings::TYPE_32FC1)
  {
    NODELET_ERROR_THROTTLE(30, "Disparity image must be 32-bit floating point "
                           "(encoding '32FC1'), but has encoding '%s'",
                           msg->image.encoding.c_str());
    return;
  }

  // Colormap and display the disparity image
  float min_disparity = msg->min_disparity;
  float max_disparity = msg->max_disparity;
  float multiplier = 255.0f / (max_disparity - min_disparity);

  const cv::Mat_<float> dmat(msg->image.height, msg->image.width,
                             (float*)&msg->image.data[0], msg->image.step);
  disparity_color_.create(msg->image.height, msg->image.width);

  for (int row = 0; row < disparity_color_.rows; ++row) {
    const float* d = dmat[row];
    for (int col = 0; col < disparity_color_.cols; ++col) {
      int index = (d[col] - min_disparity) * multiplier + 0.5;
      index = std::min(255, std::max(0, index));
      // Fill as BGR
      disparity_color_(row, col)[2] = colormap[3*index + 0];
      disparity_color_(row, col)[1] = colormap[3*index + 1];
      disparity_color_(row, col)[0] = colormap[3*index + 2];
    }
  }
}

So I guess you could do the same for plotting. In Python, I'm converting it to 16UC1 (for saving):

import numpy as np
from cv_bridge import CvBridge

bridge = CvBridge()
cv_image_tmp = bridge.imgmsg_to_cv2(msg)
height, width = cv_image_tmp.shape[:2]  # 32FC1 depth images are single-channel
max_m_for_kinect = 5.0  ## You'll have to check online for the exact value
cv_image_tmp = np.clip(cv_image_tmp, 0, max_m_for_kinect)  ## Filter all values > 5.0 m
scaling_factor = 65535 / max_m_for_kinect
cv_image_tmp = cv_image_tmp * scaling_factor  # scale the image to [0, 65535]
cv_image = np.array(cv_image_tmp, dtype=np.uint16)  ## Create the cv2 image

Or to an 8UC1:

import numpy as np
from cv_bridge import CvBridge

bridge = CvBridge()
cv_image_tmp = bridge.imgmsg_to_cv2(msg)
height, width = cv_image_tmp.shape[:2]  # 32FC1 depth images are single-channel
max_m_for_kinect = 5.0  ## You'll have to check online for the exact value
cv_image_tmp = np.clip(cv_image_tmp, 0, max_m_for_kinect)  ## Filter all values > 5.0 m
scaling_factor = 255 / max_m_for_kinect
cv_image_tmp = cv_image_tmp * scaling_factor  # scale the image to [0, 255]
cv_image = np.array(cv_image_tmp, dtype=np.uint8)  ## Create the cv2 image



Stats


Seen: 2,863 times

Last updated: Mar 01 '14