
In my experience, whenever I've used a simulated Kinect in Gazebo, the point cloud produced (in your case /camera/depth/points) is already an XYZRGB templated point cloud; i.e., the position and color fields are already filled in.

For example, using the turtlebot_simulator, the header for a point cloud published on the /camera/depth/points topic looks like this:

header: 
  seq: 8148
  stamp: 
    secs: 996
    nsecs: 190000000
  frame_id: "camera_depth_optical_frame"
height: 480
width: 640
fields: 
  - 
    name: "x"
    offset: 0
    datatype: 7
    count: 1
  - 
    name: "y"
    offset: 4
    datatype: 7
    count: 1
  - 
    name: "z"
    offset: 8
    datatype: 7
    count: 1
  - 
    name: "rgb"
    offset: 16
    datatype: 7
    count: 1

Notice there are fields for position and color. For reference, the configuration for the Kinect Gazebo plugin used by the turtlebot sim is here. As an additional piece of info, this snippet of code is where the simulated depth map is turned into a point cloud inside the Kinect Gazebo plugin, and you can see it also adds the RGB fields.
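
One easy way to convince yourself the color actually comes through is to subscribe to the topic as a pcl::PointCloud<pcl::PointXYZRGB> and print a point. This is only a minimal sketch (the node and topic names are assumptions, and it relies on pcl_ros to convert the sensor_msgs/PointCloud2 for you):

// cloud_listener.cpp -- print the XYZ and RGB of one point from the simulated cloud
#include <ros/ros.h>
#include <pcl_ros/point_cloud.h>
#include <pcl/point_types.h>

void cloudCallback(const pcl::PointCloud<pcl::PointXYZRGB>::ConstPtr& cloud)
{
  if (cloud->empty())
    return;
  // Grab a point near the middle of the organized cloud.
  const pcl::PointXYZRGB& p = cloud->at(cloud->size() / 2);
  ROS_INFO("xyz = (%.2f, %.2f, %.2f), rgb = (%u, %u, %u)",
           p.x, p.y, p.z,
           (unsigned)p.r, (unsigned)p.g, (unsigned)p.b);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "cloud_listener");
  ros::NodeHandle nh;
  ros::Subscriber sub =
      nh.subscribe<pcl::PointCloud<pcl::PointXYZRGB> >("/camera/depth/points", 1, cloudCallback);
  ros::spin();
  return 0;
}

If the plugin is publishing color correctly you should see non-zero RGB values for points that fall on textured parts of the simulated scene.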

I suppose it would be possible to instead mimic something like freenect_launch and use depth_image_proc to register the RGB and depth images and build the point cloud outside of the Gazebo plugin (if you had a need for such a thing). A rough sketch of that conversion is below.
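
For illustration only, the core of what depth_image_proc's point_cloud_xyzrgb step does is roughly the following: back-project each depth pixel through the pinhole model from the CameraInfo and copy in the color from the (already registered) RGB image. The function name and input handling here are assumptions, and synchronization/conversion from ROS image messages is omitted:

// Sketch: build an XYZRGB cloud from a registered 32FC1 depth image (meters)
// and an 8UC3 color image of the same resolution.
#include <cmath>
#include <limits>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <sensor_msgs/CameraInfo.h>
#include <opencv2/core/core.hpp>

pcl::PointCloud<pcl::PointXYZRGB> depthToCloud(const cv::Mat& depth,
                                               const cv::Mat& rgb,
                                               const sensor_msgs::CameraInfo& info)
{
  // Pinhole intrinsics from the camera matrix K.
  const double fx = info.K[0], fy = info.K[4];
  const double cx = info.K[2], cy = info.K[5];

  pcl::PointCloud<pcl::PointXYZRGB> cloud;
  cloud.width    = depth.cols;
  cloud.height   = depth.rows;
  cloud.is_dense = false;
  cloud.points.resize(depth.cols * depth.rows);

  for (int v = 0; v < depth.rows; ++v)
  {
    for (int u = 0; u < depth.cols; ++u)
    {
      pcl::PointXYZRGB& p = cloud.points[v * depth.cols + u];
      const float z = depth.at<float>(v, u);
      if (!std::isfinite(z) || z <= 0.0f)
      {
        // Keep the cloud organized: mark invalid pixels as NaN.
        p.x = p.y = p.z = std::numeric_limits<float>::quiet_NaN();
        continue;
      }
      // Back-project the pixel through the pinhole model.
      p.x = static_cast<float>((u - cx) * z / fx);
      p.y = static_cast<float>((v - cy) * z / fy);
      p.z = z;
      // Copy the color; OpenCV stores 8UC3 images as BGR by default.
      const cv::Vec3b& c = rgb.at<cv::Vec3b>(v, u);
      p.b = c[0]; p.g = c[1]; p.r = c[2];
    }
  }
  return cloud;
}

In practice, though, the Gazebo plugin already gives you the combined cloud, so you'd only go this route if you specifically wanted the depth/RGB pipeline to match a real freenect_launch setup.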