
RGB and depth image to point cloud data

asked 2014-10-10 11:35:29 -0500 by gilmour

updated 2014-10-10 11:56:48 -0500

My system: Ubuntu 12.04, ROS Fuerte, Python

Goal: I want to combine an RGB image and a depth image into point cloud data. I am using the depth_image_proc/point_cloud_xyzrgb nodelet to achieve this. Just to test it, I used the Kinect topics depth_registered/image_rect, rgb/image_rect_color, and rgb/camera_info, and it works (I can visualize the cloud in RViz).

Now, the issue: I subscribe to the Kinect topic rgb/image_rect_color, use Python OpenCV to change some pixel colors, and publish the result back as a new topic. When I point depth_image_proc/point_cloud_xyzrgb at this new RGB topic and the original depth image, it does not output anything. I can view the modified image from the new topic in RViz, but the point cloud is never generated.

To narrow it down further, I simply subscribe to rgb/image_rect_color, do no processing at all, and publish the same image under a different topic name. Still, it does not work. Is this a time synchronization issue? Any pointers would be helpful.
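(For reference, a minimal pass-through republisher of this kind might look like the sketch below; the output topic name is just a placeholder, and the cv_bridge calls are the Fuerte-era imgmsg_to_cv / cv_to_imgmsg pair.)

    #!/usr/bin/env python
    import rospy
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge

    class Republisher(object):
        def __init__(self):
            self.bridge = CvBridge()
            # output topic name is a placeholder
            self.pub = rospy.Publisher("rgb_modified/image_rect_color", Image)
            rospy.Subscriber("rgb/image_rect_color", Image, self.callback)

        def callback(self, msg):
            # convert to OpenCV, (optionally) modify pixels, convert back
            cv_img = self.bridge.imgmsg_to_cv(msg, "bgr8")
            out = self.bridge.cv_to_imgmsg(cv_img, "bgr8")
            self.pub.publish(out)

    if __name__ == "__main__":
        rospy.init_node("rgb_republisher")
        Republisher()
        rospy.spin()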

Thanks -gilmour


1 Answer


answered 2014-10-12 18:04:42 -0500 by paulbovbel

updated 2014-10-13 16:41:09 -0500

It could definitely be a time synchronization issue, since depth_image_proc uses an approximate time synchronizer with a queue size of 5 (the default). It could still work, assuming you're:

  • modifying and publishing very quickly
  • not modifying the timestamp of the original message

The issue with the queue is that if you take too long to publish your modified message, the associated depth image may already have been overwritten in the queue. With a queue size of 5 at 30 fps, that only takes a fraction of a second (5/30 ≈ 0.17 s). Try increasing the queue size on depth_image_proc to something significantly larger and see if that helps.

To process your images significantly faster, try writing your node in C++ as a nodelet and loading it into the Kinect's nodelet manager. That way your node runs in the same process as both the producer and the consumer of your image data, which lets you bypass the expensive serialization normally incurred for node-to-node messaging.
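For the queue size, something along these lines in a launch file should do it. The manager name (camera_nodelet_manager) and the topic remappings below are assumptions based on a typical openni.launch setup, so adjust them to match your configuration:

    <!-- Load point_cloud_xyzrgb into the camera's existing nodelet manager,
         pointing its RGB input at the modified image topic and enlarging the queue.
         Names below are assumptions; adapt them to your setup. -->
    <node pkg="nodelet" type="nodelet" name="points_xyzrgb_modified"
          args="load depth_image_proc/point_cloud_xyzrgb camera_nodelet_manager">
      <param name="queue_size" value="50"/>
      <remap from="rgb/camera_info"             to="/camera/rgb/camera_info"/>
      <remap from="rgb/image_rect_color"        to="/my_modified_image"/>
      <remap from="depth_registered/image_rect" to="/camera/depth_registered/image_rect"/>
      <remap from="depth_registered/points"     to="/camera/depth_registered/points_modified"/>
    </node>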


Comments

Thanks a lot for the pointers. There were two issues: even though I was not explicitly modifying the timestamp, cv_bridge overwrites the original timestamp when converting back to a message, so I had to reassign it manually. And increasing the queue size helped too.

gilmour ( 2014-10-13 16:39:06 -0500 )
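Concretely, the fix described in the comment above amounts to something like this in the republisher's callback (a sketch; the header reassignment is the key line):

    def callback(self, msg):
        cv_img = self.bridge.imgmsg_to_cv(msg, "bgr8")
        # ... pixel modifications ...
        out = self.bridge.cv_to_imgmsg(cv_img, "bgr8")
        # cv_to_imgmsg builds a fresh message, so the original stamp and frame_id
        # are lost; restore them or the approximate time synchronizer will never
        # pair this image with its matching depth frame
        out.header = msg.header
        self.pub.publish(out)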
