registered depth image and color image vs XYZRGB point cloud

asked 2020-02-14 01:20:22 -0500 by machinekoder

In my robot setup, I combine a 2D color camera with a ToF/3D camera. For this purpose, I use depth_image_proc/register. Since my color camera streams 13 MP images at 8 fps, generating the point cloud is very slow, even on the hexa-core robot computer. However, generating the registered depth image is relatively fast and I almost get the full 8 fps.

So I'm wondering, is the registered depth image equivalent to the registered point cloud? My idea is to detect objects in the color image and then just select the corresponding ROI in the registered depth image. The "pixels" in the depth image should give me the distance from the sensor.
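To make that concrete, here is roughly what I have in mind (just a minimal sketch, assuming a 32FC1 registered depth image in meters and the color camera's CameraInfo; the topic names and pixel coordinates are placeholders):

    #!/usr/bin/env python
    import rospy
    import numpy as np
    from sensor_msgs.msg import Image, CameraInfo
    from cv_bridge import CvBridge

    bridge = CvBridge()
    camera_info = None  # filled by the CameraInfo callback


    def info_cb(msg):
        global camera_info
        camera_info = msg


    def depth_cb(msg):
        if camera_info is None:
            return
        # Registered depth image, aligned to the color camera (assumed 32FC1, meters)
        depth = bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')

        # Example pixel from the ROI detected in the color image (placeholder values)
        u, v = 640, 360
        z = float(depth[v, u])
        if not np.isfinite(z) or z <= 0.0:
            return  # no valid depth measurement at this pixel

        # Back-project with the color camera intrinsics K = [fx 0 cx; 0 fy cy; 0 0 1],
        # which is the same per-pixel math the point cloud nodelet applies
        fx, fy = camera_info.K[0], camera_info.K[4]
        cx, cy = camera_info.K[2], camera_info.K[5]
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        rospy.loginfo("3D point in color optical frame: (%.3f, %.3f, %.3f)", x, y, z)


    rospy.init_node('roi_depth_lookup')
    rospy.Subscriber('/camera/color/camera_info', CameraInfo, info_cb)
    rospy.Subscriber('/camera/depth_registered/image_rect', Image, depth_cb)
    rospy.spin()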

Does this sound correct or am I missing something?


1 Answer


answered 2020-02-15 02:41:36 -0500 by ajal

I have used depth_image_proc/register for combining a thermal image with a RealSense camera. As the wiki says, this nodelet publishes a registered depth image, which I then fed into the depth_image_proc/point_cloud_xyzrgb nodelet to get the corresponding point cloud.

I would say you are heading in the right direction. Please check what type of depth image your 3D camera produces: there are two representations, see https://www.ros.org/reps/rep-0118.html to find out which kind you have. For me, the hard part was getting the right TF transform between the two cameras; without it I could not get a properly registered depth image, and that was clearly visible in the point cloud.
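Checking the encoding is straightforward. A rough sketch (my own helper, assuming cv_bridge; it just normalizes both REP 118 representations to meters):

    import numpy as np
    from cv_bridge import CvBridge

    bridge = CvBridge()


    def depth_in_meters(msg):
        """Convert a sensor_msgs/Image depth message to a float32 array in meters,
        handling both REP 118 representations."""
        depth = bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
        if msg.encoding == '16UC1':
            # unsigned 16-bit, millimeters; 0 marks a missing measurement
            depth = depth.astype(np.float32) / 1000.0
            depth[depth == 0.0] = np.nan
        elif msg.encoding != '32FC1':
            raise ValueError('Unexpected depth encoding: %s' % msg.encoding)
        # 32FC1 is already in meters, with NaN marking missing readings
        return depth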

I hope this helped you in some way. Good Luck!
