
Problem saving a depth image

asked 2016-03-23 23:23:44 -0500

Henschel.X

I try to use the code below to save one frame of depth image from the topic /camera/depth/image_raw, but the problem is that the saved image looks almost completely black. When I read it in Matlab, the matrix elements are basically integers from 1 to 7, which I guess is why it looks black. But when I rosrun image_view on /camera/depth/image_raw, it looks just fine. I suspect it's an encoding problem. Can someone guide me on how to save the image so it looks like what image_view shows me? Thank you!

from __future__ import print_function
import sys
import rospy
import cv2
from std_msgs.msg import String
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError

class TakePhoto:
    def __init__(self):

        self.bridge = CvBridge()
        self.image_received = False

        # Connect image topic
        img_topic = "/camera/depth/image_raw"
        self.image_sub = rospy.Subscriber(img_topic, Image, self.callback)

        # Allow up to one second for the connection
        rospy.sleep(1)

    def callback(self, data):

        # Convert image to OpenCV format
        try:
            cv_image = self.bridge.imgmsg_to_cv2(data, '16UC1')
        except CvBridgeError as e:
            print(e)
            return

        self.image_received = True
        self.image = cv_image

    def take_picture(self, img_title):
        if self.image_received:
            # Save an image
            cv2.imwrite(img_title, self.image)
            return True
        else:
            return False

if __name__ == '__main__':

    # Initialize
    rospy.init_node('take_photo', anonymous=False)
    camera = TakePhoto()

    # Take a photo

    # Use '_image_title' parameter from command line
    # Default value is 'photo16U.jpg'
    img_title = rospy.get_param('~image_title', 'photo16U.jpg')

    if camera.take_picture(img_title):
        rospy.loginfo("Saved image " + img_title)
    else:
        rospy.loginfo("No images received")

    # Sleep to give the last log messages time to be sent
    rospy.sleep(1)

Comments

when you divide the image by 7.0 and multiply by 255, can you see something in Matlab?

Mehdi. ( 2016-03-24 09:34:04 -0500 )

Yes, but because those numbers from 1 to 7 are all integers, the image lacks detail when I view it in Matlab.

Henschel.X ( 2016-03-24 20:45:16 -0500 )

Is this the full code? If not, could you please share it? I want to work with depth images but I find it somewhat complicated to start from zero. Thank you :)

Jluis619 ( 2016-06-06 15:58:13 -0500 )

I ran into the same problem. The difference is that I cannot even visualize the depth image using image_view. Have you run into this before?

wy3 ( 2017-06-26 16:41:30 -0500 )

3 Answers


answered 2017-08-02 17:17:01 -0500

Abhi-venkat

I faced the same problem and was breaking my head over it. However, I recently came up with a solution. I looked at the source code of both nodes, image_view and image_saver: there is an explicit conversion implemented in image_view that is missing from image_saver. So I cloned the source code from git (link) and made the following modification to the image_saver code. After this you can save exactly what you see in image_view.

bool saveImage(const sensor_msgs::ImageConstPtr& image_msg, std::string &filename) {
  cv::Mat image;
  cv_bridge::CvImageConstPtr cv_ptr;

  // Options for cvtColorForDisplay: no dynamic scaling, depth values between
  // 0 and 10 are mapped into the displayable range (the conversion image_view
  // applies but image_saver does not).
  cv_bridge::CvtColorForDisplayOptions options;
  options.bg_label = -1;
  options.colormap = -1;
  options.do_dynamic_scaling = 0;
  options.min_image_value = 0;
  options.max_image_value = 10;

  try
  {
    image = cv_bridge::toCvShare(image_msg, encoding)->image;
  }
  catch (cv_bridge::Exception&)
  {
    // Direct conversion failed (e.g. for depth encodings); fall back to the
    // display conversion so the saved file matches what image_view shows.
    try
    {
      cv_ptr = cv_bridge::cvtColorForDisplay(cv_bridge::toCvShare(image_msg), "", options);
      image = cv_ptr->image;
    }
    catch (cv_bridge::Exception&)
    {
      ROS_ERROR("Unable to convert %s image to %s", image_msg->encoding.c_str(), encoding.c_str());
      return false;
    }
  }

  if (!image.empty()) {
    try {
      filename = (g_format).str();
    } catch (...) { g_format.clear(); }
    try {
      filename = (g_format % count_).str();
    } catch (...) { g_format.clear(); }
    try {
      filename = (g_format % count_ % "jpg").str();
    } catch (...) { g_format.clear(); }

    if (save_all_image || save_image_service) {
      cv::imwrite(filename, image);
      ROS_INFO("Saved image %s", filename.c_str());

      save_image_service = false;
    } else {
      return false;
    }
  } else {
    ROS_WARN("Couldn't save image, no data!");
    return false;
  }
  return true;
}
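If you would rather do the equivalent conversion in Python instead of patching image_saver, here is a minimal sketch. The topic name comes from the question and the 0-10 m range mirrors the options above; the assumption that 16UC1 depth is in millimetres is mine, so adjust as needed.

#!/usr/bin/env python
# Minimal sketch: clamp depth to a fixed range and scale it to 8 bits before
# saving, roughly what cvtColorForDisplay does with do_dynamic_scaling=0,
# min_image_value=0 and max_image_value=10.
import rospy
import cv2
import numpy as np
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def callback(msg):
    # 'passthrough' keeps the original encoding (16UC1 or 32FC1)
    depth = bridge.imgmsg_to_cv2(msg, 'passthrough').astype(np.float32)
    if msg.encoding == '16UC1':
        depth /= 1000.0              # assumption: millimetres -> metres
    depth = np.nan_to_num(depth)     # 32FC1 images may contain nan
    vis = np.clip(depth, 0.0, 10.0) / 10.0 * 255.0
    cv2.imwrite('depth_vis.png', vis.astype(np.uint8))

if __name__ == '__main__':
    rospy.init_node('depth_display_saver')
    rospy.Subscriber('/camera/depth/image_raw', Image, callback)
    rospy.spin()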


Comments

Hi, can you explain the parameters you set? How can I get the specific depth value (in meters) after the scaling? Thanks.

fredli ( 2017-12-22 06:44:40 -0500 )

answered 2016-09-02 03:40:02 -0500

AndyMunich

Hi, I know this is an old thread, but since it cost me a few days to solve this problem, and since others like Jluis619 are also looking for a quick solution, here is my code and what I have figured out. Since I am also a beginner with ROS, please let me know if my solution is suboptimal in some regard.

This is the sensor plugin I used in Gazebo (openni_kinect) to get the depth data:

<gazebo reference="camera_link">
  <sensor type="depth" name="camera1">
    <pose>0 0 0 0 0 0</pose>
    <update_rate>1.0</update_rate>
    <camera name="head">
      <horizontal_fov>1.3962634</horizontal_fov>
      <image>
        <width>640</width>
        <height>480</height>
        <format>R8G8B8</format>
      </image>
      <depth_camera>
        <width>640</width>
        <height>480</height>
        <format>L8</format>
        <!-- <output>points</output> -->
      </depth_camera>
      <clip>
        <near>0.02</near>
        <far>300</far>
      </clip>
      <noise>
        <type>gaussian</type>
        <!-- Noise is sampled independently per pixel on each frame.
             That pixel's noise value is added to each of its color
             channels, which at that point lie in the range [0,1]. -->
        <mean>0.0</mean>
        <stddev>0.007</stddev>
      </noise>
    </camera>
    <plugin name="camera_controller" filename="libgazebo_ros_openni_kinect.so">
      <topicName>/kinect_sim/kinect</topicName>
      <baseline>0.2</baseline>
      <alwaysOn>true</alwaysOn>
      <updateRate>1.0</updateRate>
      <cameraName>kinect_sim/camera1</cameraName>
      <imageTopicName>rgb/image_raw</imageTopicName>
      <cameraInfoTopicName>rgb/camera_info</cameraInfoTopicName>
      <depthImageTopicName>depth/image_raw</depthImageTopicName>
      <depthImageInfoTopicName>depth/camera_info</depthImageInfoTopicName>
      <pointCloudTopicName>points</pointCloudTopicName>
      <frameName>camera_link</frameName>
      <pointCloudCutoff>0.5</pointCloudCutoff>
      <distortionK1>0.00000001</distortionK1>
      <distortionK2>0.00000001</distortionK2>
      <distortionK3>0.00000001</distortionK3>
      <distortionT1>0.00000001</distortionT1>
      <distortionT2>0.00000001</distortionT2>
      <CxPrime>0</CxPrime>
      <Cx>0</Cx>
      <Cy>0</Cy>
      <focalLength>0</focalLength>
      <hackBaseline>0</hackBaseline>
    </plugin>
  </sensor>
</gazebo>

Afterwards I tried to read the depth data similar to Henschel.X, but after cv_image = self.bridge.imgmsg_to_cv2(...) a lot of values in the array are "nan", while the real depth values are floats in the range 0 - 5. In a png these floats get rounded to the integer values 0 - 5, hence the image is very dark and all the information that was encoded in the float values is discretized to the six values 0 - 5. Even when I tried

    cv2.normalize(depth_array, depth_array, 0, 1, cv2.NORM_MINMAX)
    cv2.imwrite('/home/andi/camera_depth.pgm', depth_array*255)

the image got brighter, but I still had just the six discretized values and had lost most of the depth information. The solution for me was to replace the nan values with some number, for example zeros or the maximum range of the depth sensor. Here is the code which worked for me:

#!/usr/bin/env python
import roslib
roslib.load_manifest('kinect_pylistener')
import sys
import rospy
import cv2
from std_msgs.msg import String
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError
import time
import numpy as np
import message_filters

class image_converter:

  def __init__(self):
    self.bridge = CvBridge()
    image_sub = message_filters.Subscriber("/kinect_sim/camera1/rgb/image_raw", Image)
    depth_sub = message_filters.Subscriber("/kinect_sim/camera1/depth/image_raw", Image)
    self.ts = message_filters.ApproximateTimeSynchronizer([image_sub, depth_sub], 1, 1)
    self ...
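For reference, here is a minimal sketch of the nan-replacement step described above, since the snippet is cut off. The 5 m maximum range and the function name are assumptions, not part of the original answer.

import numpy as np

def depth_to_uint8(depth_32fc1, max_range=5.0):
    # Replace nan values (e.g. with the sensor's maximum range), then rescale
    # the float depth image to the full 0-255 range so detail stays visible.
    depth = np.array(depth_32fc1, dtype=np.float32)
    depth[np.isnan(depth)] = max_range
    return (np.clip(depth, 0.0, max_range) / max_range * 255.0).astype(np.uint8)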

Comments

Hi Andy, thanks for your answer. I also ran into the nan values with cv_bridge. I then decreased pointCloudCutoff from 0.5 to a lower value and the nan values got filled, since there is stuff within 0.5 m. Somehow that parameter for the point cloud also affects the valid range of the depth image, if I'm correct.

wy3 ( 2017-06-27 11:58:43 -0500 )

answered 2016-05-09 18:59:31 -0500

aguadopd

I suspect that the problem is that you are saving your images as JPEG, which as far as I can remember cannot handle 16-bit resolution. You can solve that by saving in PNG or TIFF.

Depth information is in the least significant 11 bits of the data, so you will see a dark image when you visualize it. Try normalizing or equalizing the histogram; in any case, remember that the information is still there even if you can't see it. Some programs that can edit 16-bit images are Krita and GIMP (only the latest versions).

If useful, here is a ROS Indigo node that saves depth and color images from a Turtlebot's Kinect. Useful code is in guardar_imagenes.cpp
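As a sketch of that idea (the filenames are placeholders, and the input is assumed to be the 16UC1 array returned by CvBridge as in the question): OpenCV writes 16-bit PNGs directly from a uint16 array, whereas JPEG would truncate the data to 8 bits.

import cv2
import numpy as np

def save_depth(depth_uint16):
    # PNG keeps the full 16-bit depth data.
    cv2.imwrite('depth_raw.png', depth_uint16)
    # Normalized 8-bit copy only so the image is visible in an ordinary viewer.
    preview = cv2.normalize(depth_uint16, None, 0, 255, cv2.NORM_MINMAX)
    cv2.imwrite('depth_preview.png', preview.astype(np.uint8))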

