
What is the API to generate a registered point cloud from raw Kinect streams?

asked 2012-09-11 11:26:53 -0500 by Arrakis

updated 2016-10-24 08:34:10 -0500 by ngrennan

As specified on the openni_launch web page, I have been recording live Kinect data by recording the following five topics with rosbag record:

camera/depth_registered/image_raw
camera/depth_registered/camera_info
camera/rgb/image_raw
camera/rgb/camera_info
/tf
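
For reference, the exact record command was roughly the following (the output bag name is just an example):

rosbag record camera/depth_registered/image_raw camera/depth_registered/camera_info camera/rgb/image_raw camera/rgb/camera_info /tf -O kinect_raw.bag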

To generate registered pcl::PointXYZRGB point cloud topics from this bag file, I have been playing the bag file at the same time as:

roslaunch openni_launch openni.launch load_driver:=false

I now need to read the bag file in my own code using the rosbag Python API. However, I do not know how to produce the registered pcl::PointXYZRGB point cloud from these recorded topics. Is there a library function that takes in the image, camera_info, and tf topics and outputs the registered pcl::PointXYZRGB point cloud?
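
For reference, the kind of thing I am after would look roughly like the sketch below. This is only my guess at how to do it by hand with the rosbag Python API and cv_bridge; the bag name 'kinect.bag' is a placeholder, the frame pairing is naive (it just reuses the most recent depth frame rather than synchronising by timestamp), and it glosses over rectification and debayering of the raw images:

import rosbag
import numpy as np
from cv_bridge import CvBridge

bridge = CvBridge()
bag = rosbag.Bag('kinect.bag')

depth = None
info = None

for topic, msg, t in bag.read_messages(
        topics=['camera/depth_registered/image_raw',
                'camera/depth_registered/camera_info',
                'camera/rgb/image_raw']):
    if topic == 'camera/depth_registered/camera_info':
        info = msg
    elif topic == 'camera/depth_registered/image_raw':
        depth = bridge.imgmsg_to_cv2(msg)            # uint16, depth in millimetres
    elif depth is not None and info is not None:     # camera/rgb/image_raw
        rgb = bridge.imgmsg_to_cv2(msg, 'bgr8')      # assumes a debayered image
        fx, fy = info.K[0], info.K[4]
        cx, cy = info.K[2], info.K[5]
        points = []
        for v in range(depth.shape[0]):
            for u in range(depth.shape[1]):
                d = depth[v, u]
                if d == 0:                           # 0 means "no reading"
                    continue
                z = d / 1000.0                       # millimetres -> metres
                x = (u - cx) * z / fx
                y = (v - cy) * z / fy
                b, g, r = rgb[v, u]
                points.append((x, y, z, r, g, b))
        # 'points' now holds one XYZRGB tuple per valid pixel of this frame
        break

bag.close()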

Many Thanks


3 Answers


answered 2012-09-12 06:54:38 -0500 by RossK

updated 2012-09-15 11:57:01 -0500

There is a script provided with the RGB-D SLAM benchmark dataset from Freiburg that adds point clouds to a bag file given these topics: http://vision.in.tum.de/data/datasets/rgbd-dataset/tools#adding_point_clouds_to_ros_bag_files

Take a look at the script 'add_pointclouds_to_bagfile.py'. You can either run it on your bag file or just read the code. Note that if you want to use it, you will need to change the topic names in the script, as it was written for the Electric openni drivers.

EDIT: Sorry, I didn't see that you had recorded the raw images. I don't know how to rectify those images in your own code, but recording /camera/depth_registered/image and /camera/rgb/image_rect_color instead of the raw images would sidestep the problem.
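
Something along these lines should work for the recording side (untested; you will still want the camera_info and /tf topics as before):

rosbag record /camera/depth_registered/image /camera/depth_registered/camera_info /camera/rgb/image_rect_color /camera/rgb/camera_info /tf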


answered 2012-09-12 11:46:12 -0500 by Arrakis

Thank you RossK, but unfortunately this doesn't completely solve the problem. The script assumes that you already have the rectified RGB image and the depth image, whereas I only have the non-rectified Bayer-encoded image and the raw depth image. This is what the openni_launch tutorial recommends recording.

On top of the code you provided, I think I need to rectify the raw depth image and then convert its uint16 values into valid depth values. Additionally, I need to rectify and debayer the RGB image.

I assume that, since I recorded the registered depth image, I do not have to do anything else? Is there an API available to do this? Many thanks again.
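
Just to make my current understanding concrete, I imagine the per-frame processing would look something like the sketch below (the Bayer pattern and the millimetre-to-metre conversion are my assumptions):

import cv2
import numpy as np
from image_geometry import PinholeCameraModel

def rectify_pair(raw_rgb, raw_depth, rgb_info, depth_info):
    # Debayer the raw RGB image (the GR Bayer pattern here is an assumption).
    color = cv2.cvtColor(raw_rgb, cv2.COLOR_BAYER_GR2BGR)

    # Rectify both images using the recorded camera_info messages.
    rgb_model = PinholeCameraModel()
    rgb_model.fromCameraInfo(rgb_info)
    rect_color = np.empty_like(color)
    rgb_model.rectifyImage(color, rect_color)

    depth_model = PinholeCameraModel()
    depth_model.fromCameraInfo(depth_info)
    rect_depth = np.empty_like(raw_depth)
    depth_model.rectifyImage(raw_depth, rect_depth)

    # Convert the raw uint16 depth values (millimetres) to metres.
    return rect_color, rect_depth.astype(np.float32) / 1000.0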


answered 2012-09-12 08:22:09 -0500 by Flowers

Did you try recording /camera/depth_registered/points? According to the wiki, this is the registered point cloud (and it works fine as a PointCloud2 input in RViz).

And, if I understood that part of the wiki correctly, to get the RGB version you simply need to add depth_registration:=true when launching, or set it manually after the launch.
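
For example (assuming the standard openni.launch arguments):

roslaunch openni_launch openni.launch depth_registration:=true
rosbag record /camera/depth_registered/points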

