Hi, I know this is an old thread, but since it cost me a few days to solve the problem, and others like Jluis619 are also looking for a quick solution, here is my code and what I have figured out. Since I am also a beginner with ROS, please let me know if my solution is suboptimal in some regard.
This is the sensor plugin I used in Gazebo (openni kinect) to get the depth data:
<gazebo reference="camera_link">
  <sensor type="depth" name="camera1">
    <pose>0 0 0 0 0 0</pose>
    <update_rate>1.0</update_rate>
    <camera name="head">
      <horizontal_fov>1.3962634</horizontal_fov>
      <image>
        <width>640</width>
        <height>480</height>
        <format>R8G8B8</format>
      </image>
      <depth_camera>
        <width>640</width>
        <height>480</height>
        <format>L8</format>
        <!-- <output>points</output> -->
      </depth_camera>
      <clip>
        <near>0.02</near>
        <far>300</far>
      </clip>
      <noise>
        <type>gaussian</type>
        <!-- Noise is sampled independently per pixel on each frame.
             That pixel's noise value is added to each of its color
             channels, which at that point lie in the range [0,1]. -->
        <mean>0.0</mean>
        <stddev>0.007</stddev>
      </noise>
    </camera>
    <plugin name="camera_controller" filename="libgazebo_ros_openni_kinect.so">
      <topicName>/kinect_sim/kinect</topicName>
      <baseline>0.2</baseline>
      <alwaysOn>true</alwaysOn>
      <updateRate>1.0</updateRate>
      <cameraName>kinect_sim/camera1</cameraName>
      <imageTopicName>rgb/image_raw</imageTopicName>
      <cameraInfoTopicName>rgb/camera_info</cameraInfoTopicName>
      <depthImageTopicName>depth/image_raw</depthImageTopicName>
      <depthImageInfoTopicName>depth/camera_info</depthImageInfoTopicName>
      <pointCloudTopicName>points</pointCloudTopicName>
      <frameName>camera_link</frameName>
      <pointCloudCutoff>0.5</pointCloudCutoff>
      <distortionK1>0.00000001</distortionK1>
      <distortionK2>0.00000001</distortionK2>
      <distortionK3>0.00000001</distortionK3>
      <distortionT1>0.00000001</distortionT1>
      <distortionT2>0.00000001</distortionT2>
      <CxPrime>0</CxPrime>
      <Cx>0</Cx>
      <Cy>0</Cy>
      <focalLength>0</focalLength>
      <hackBaseline>0</hackBaseline>
    </plugin>
  </sensor>
</gazebo>
Afterwards I tried to read the depth data similar to Henschel.X, but after calling "cv_image = self.bridge.imgmsg_to_cv2" a lot of values in the array are "nan", while the real depth values are floats in the range 0 - 5.
When saved to a PNG image, these floats are rounded to the integer values 0 - 5, hence the image is very dark and all the information that was encoded in the float values is discretized to just the 6 values 0 - 5.
Even when I tried to use

cv2.normalize(depth_array, depth_array, 0, 1, cv2.NORM_MINMAX)
cv2.imwrite('/home/andi/camera_depth.pgm', depth_array*255)

the image got brighter, but I still had only the 6 discretized values and had lost most of the depth information.
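One way to see why the NaN values are the core problem (a minimal NumPy sketch with made-up depth values, not the cv2.normalize call itself): a plain min/max over an array containing NaN is itself NaN, so any scaling based on those statistics breaks until the NaNs are replaced.

```python
import numpy as np

# Tiny fake depth array with one NaN pixel (no depth return)
depth = np.array([[0.5, 1.2], [np.nan, 4.8]], dtype=np.float32)

print(depth.min())       # nan - the NaN propagates through min/max
print(np.nanmin(depth))  # NaN-aware minimum works

# Replacing NaNs first (here with 0.0) restores usable statistics
clean = np.where(np.isnan(depth), 0.0, depth)
print(clean.min(), clean.max())
```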
The solution for me was to replace the NaN values with some number, for example zero or the maximum range of the depth sensor.
Here is the code which worked for me:
#!/usr/bin/env python
import roslib
roslib.load_manifest('kinect_pylistener')
import sys
import rospy
import cv2
from std_msgs.msg import String
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError
import time
import numpy as np
import message_filters
class image_converter:

    def __init__(self):
        self.bridge = CvBridge()
        image_sub = message_filters.Subscriber("/kinect_sim/camera1/rgb/image_raw", Image)
        depth_sub = message_filters.Subscriber("/kinect_sim/camera1/depth/image_raw", Image)
        self.ts = message_filters.ApproximateTimeSynchronizer([image_sub, depth_sub], 1, 1)
        self ...
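Since the post is cut off here, this is a minimal sketch (not the original author's full code; MAX_RANGE and the function name are assumptions) of the conversion step described above: replace the NaN pixels and spread the float depths over the full 8-bit range instead of rounding them to 0 - 5.

```python
import numpy as np

MAX_RANGE = 5.0  # assumed maximum depth from the post (values 0 - 5 m)

def depth_to_uint8(depth_array):
    """Replace NaNs with the maximum range, then scale to 0-255."""
    # NaN pixels are points with no depth return; treat them as "far away"
    depth = np.where(np.isnan(depth_array), MAX_RANGE, depth_array)
    # Spread 0..MAX_RANGE over the full 8-bit range before the int cast,
    # so the gradations between 0 and 5 m are not rounded away
    return (depth / MAX_RANGE * 255.0).astype(np.uint8)
```

The resulting array can then be written out with cv2.imwrite as in the snippet above, and the image keeps the fine depth gradations instead of collapsing to six gray levels.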