registered depth image and color image vs XYZRGB point cloud
In my robot setup, I combine a 2D color camera with a ToF/3D camera. To register the two, I use depth_image_proc/register.
Since my color camera streams 13 MP images at 8 fps, generating the XYZRGB point cloud is very slow, even on the robot's hexacore computer. Generating the registered depth image, however, is relatively fast, and I get almost the full 8 fps.
So I'm wondering: is the registered depth image equivalent to the registered point cloud? My idea is to detect objects in the color image and then select the corresponding ROI in the registered depth image. Each pixel in that ROI should give me the distance from the object to the sensor.
Does this sound correct or am I missing something?
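To make the idea concrete, here is a minimal sketch of what I have in mind. It assumes the registered depth image is 16UC1 in millimeters (the common ROS convention, with 0 meaning "no depth") and that `fx, fy, cx, cy` come from the color camera's CameraInfo; the function name and ROI parameters are just for illustration:

```python
import numpy as np

def roi_depth_to_points(depth_roi_mm, fx, fy, cx, cy, u0, v0):
    """Convert a depth-image ROI (16UC1, millimeters) into 3D points
    in the color-camera frame, using the color camera intrinsics.
    (u0, v0) is the top-left pixel of the ROI in the full image."""
    z = depth_roi_mm.astype(np.float32) / 1000.0  # mm -> meters
    v, u = np.indices(z.shape)          # per-pixel row (v) and column (u)
    u = u + u0                          # shift ROI coords into full-image coords
    v = v + v0
    x = (u - cx) * z / fx               # standard pinhole back-projection
    y = (v - cy) * z / fy
    valid = z > 0                       # drop pixels with no depth measurement
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)
```

Note that the raw pixel value is the Z coordinate along the optical axis, not the Euclidean distance to the sensor; if I need the latter, I would compute `np.linalg.norm(points, axis=-1)` on the back-projected points.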