
How to convert pointcloud2 to separate depth and rgb image messages

asked 2013-02-22 by ee.cornwell (updated 2016-10-24 by ngrennan)

I want to convert a filtered PointCloud2 message into separate depth and RGB image messages. RGBDSLAM requires an RGB image topic as well as a depth image topic as input. I have tried just passing the filtered PointCloud2 message in as "config/topic_points", but no frames are captured. Any ideas?

System: Ubuntu, Fuerte, Precise


Comments

RGBDSLAM requires sensor_msgs/Image whereas my filtered message is sensor_msgs/PointCloud2. It looks as though I will have to break my filter into two parts: one for depth, one for RGB, so I can publish two separate filtered image messages.

— ee.cornwell (2013-02-22)

Was this comment meant for my post? If so, you can also acquire depth and rgb images separately, which sounds exactly like what you need.

— Hansg91 (2013-02-22)

Yes, I realized I should break up the filter. I think I might run into PCL issues with pcl::fromROSMsg() because I am trying to grab the data from a sensor_msgs/Image and not a sensor_msgs/PointCloud2. I'll make those changes and post an update.

— ee.cornwell (2013-02-22)

2 Answers


answered 2013-02-22 by Hansg91

I assume you are using a Kinect, in which case my question is why you would want to convert this at all. The images coming from the Kinect are already separate depth and RGB images; the two are only combined afterwards to form a point cloud with RGB data in one matrix. Splitting the combined cloud apart again seems a bit redundant.

In other words, I would advise you to retrieve the RGB and depth images directly, each from its own topic.
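For a Kinect driven by the OpenNI stack, the separate streams are typically published on topics like /camera/rgb/image_color and /camera/depth_registered/image_rect, and RGBDSLAM can be pointed at them through its topic parameters. A minimal launch sketch along those lines (the parameter names follow the Fuerte-era rgbdslam config and the topic paths are typical openni_launch defaults; verify both against your own setup):

```xml
<launch>
  <node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" output="screen">
    <!-- Topic names below are typical openni_launch defaults; adjust to your driver. -->
    <param name="config/topic_image_mono"  value="/camera/rgb/image_color"/>
    <param name="config/topic_image_depth" value="/camera/depth_registered/image_rect"/>
  </node>
</launch>
```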


answered 2013-02-22 by jkammerl

I agree with Hansg91. Obtaining depth and color images from a Kinect-based point cloud does not make much sense; in that case you shouldn't generate the point cloud in the first place. However, as long as your point cloud is organized (aligned in image coordinates), you can easily revert the conversion: extracting the z values gives you the depth image, and the corresponding RGB values of the points let you reconstruct the color image.



Stats

Asked: 2013-02-22 05:54:15 -0600

Seen: 2,360 times

Last updated: Feb 22 '13