Matching RGB image with Point Cloud

asked 2018-01-08 20:04:34 -0500 by pointsnadpixels

I have two different sensors: one for capturing an RGB image (an Intel RealSense SR300) and one for giving me a 3D point cloud (a DUO MC stereo camera). How can I integrate the two? Is it possible to match the pixels of the RGB image to the points in the point cloud? I need a separate depth sensor because the SR300 does not work in the presence of ambient light.

Is there any method for this? Can it be done while retaining accuracy?


2 Answers


answered 2018-01-09 04:22:59 -0500

In principle this is the same problem that is solved for standard RGBD cameras like the Kinect or RealSense R200: an RGB camera and a depth camera that are offset from each other need to be registered. Of course, in that use case the cameras are close together and well calibrated against each other.

Complete launch setups for doing this kind of RGBD processing can be found in rgbd_launch. Specifically, depth_registered.launch.xml registers the depth image to the RGB frame (the opposite direction from what you describe) using the depth_image_proc/register nodelet, and then generates an RGB point cloud using the depth_image_proc/point_cloud_xyzrgb nodelet. This is the same setup that is used with the Kinect, R200 and other sensors, and is thus proven to work in multiple use cases.

So it looks like the easiest option would be to supply depth_registered.launch.xml with the correct arguments for your setup (i.e. remappings for the RGB and depth data); if everything goes well, you should get a registered RGB point cloud out. The caveat is that this requires your cameras to be well calibrated against each other and their tf frames to be set up accordingly (commonly done via a URDF model). Also, I'm not sure how well things work (or whether they break down) when the cameras are farther apart than in typical RGBD sensors.
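If you go this route, the include might look roughly like the sketch below. This is untested; the "sr300" and "duo" namespaces are placeholders for whatever your drivers actually publish under, and the argument names follow the rgbd_launch includes at the time of writing, so check the package version for your distro.

```xml
<launch>
  <!-- Nodelet manager to host the depth_image_proc nodelets. -->
  <arg name="manager" default="register_manager" />
  <node pkg="nodelet" type="nodelet" name="$(arg manager)" args="manager" output="screen" />

  <!-- Placeholder namespaces: both cameras must publish rectified images and
       camera_info, and tf must connect their optical frames. -->
  <include file="$(find rgbd_launch)/launch/includes/depth_registered.launch.xml">
    <arg name="manager" value="$(arg manager)" />
    <arg name="rgb"     value="sr300/rgb" />
    <arg name="depth"   value="duo/depth" />
    <arg name="depth_registered" value="depth_registered" />
  </include>
</launch>
```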

Another example of performing registration of camera streams is part of the Intel RealSense driver, but that is outside of ROS, so it would require quite some work to adapt. See the rs-align example for high-level usage.


answered 2018-01-09 07:59:26 -0500

I developed a setup that does exactly this last year. If you have the calibration information for your RGB camera and you know the relative transform between it and the sensor frame of the DUO MC, then this is straightforward using OpenCV.

First you need to transform your point cloud so that it is in the optical coordinate frame of your RGB camera, then use the OpenCV function cv::projectPoints, which projects these 3D points to 2D pixel locations in the RGB camera's image.
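As a rough sketch (untested; all names are illustrative, and it assumes you already have the RGB intrinsics K, distortion coefficients D, and the DUO-to-RGB extrinsics R, t from your calibration), the projection step could look like this. Note that cv::projectPoints can apply the extrinsic transform itself via rvec/tvec, so you don't have to transform the cloud by hand first:

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Project points expressed in the DUO MC frame into RGB pixel coordinates.
std::vector<cv::Point2f> projectCloudToImage(
    const std::vector<cv::Point3f>& cloud_in_duo_frame,
    const cv::Mat& R,  // 3x3 rotation: DUO frame -> RGB optical frame
    const cv::Mat& t,  // 3x1 translation: DUO frame -> RGB optical frame
    const cv::Mat& K,  // 3x3 RGB camera intrinsics
    const cv::Mat& D)  // RGB distortion coefficients
{
  cv::Mat rvec;
  cv::Rodrigues(R, rvec);  // projectPoints wants the rotation in axis-angle form

  std::vector<cv::Point2f> pixels;
  cv::projectPoints(cloud_in_duo_frame, rvec, t, K, D, pixels);
  return pixels;  // pixels[i] corresponds to cloud_in_duo_frame[i]
}
```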

When I implemented this, it was actually simpler than I expected this part of the project to be. You may have to filter the resulting 2D points, since some may fall outside the image frame or even be NaN in some cases. There will also be cases where some points are occluded by other parts of the scene, resulting in the wrong color being projected onto them. This problem is non-trivial to fix, but is fairly minor in most scenes if the baseline between the two sensors is small.
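For the filtering step, a minimal sketch (again untested, names illustrative) that keeps only projections landing inside a width-by-height image:

```cpp
#include <cmath>
#include <opencv2/core.hpp>
#include <vector>

// Return the indices of projections that are finite and inside the image,
// so the RGB color can be attached back to the right cloud points.
std::vector<size_t> validProjections(const std::vector<cv::Point2f>& pixels,
                                     int width, int height)
{
  std::vector<size_t> keep;
  for (size_t i = 0; i < pixels.size(); ++i) {
    const cv::Point2f& p = pixels[i];
    if (std::isfinite(p.x) && std::isfinite(p.y) &&
        p.x >= 0.0f && p.x < static_cast<float>(width) &&
        p.y >= 0.0f && p.y < static_cast<float>(height)) {
      keep.push_back(i);
    }
  }
  return keep;
}
```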

Hope this helps.

