In principle this is the same problem that is solved on standard RGBD cameras like the Kinect or Realsense R200: an RGB camera and a depth camera that are offset from each other need to be registered to each other. Of course, in that use case the cameras are close together and well calibrated against each other.
Complete launch setups for this kind of RGBD processing can be found in rgbd_launch. Specifically, depth_registered.launch.xml registers the depth image to the RGB camera (the opposite direction to what you request) using the depth_image_proc/register nodelet and then generates an RGB point cloud using the depth_image_proc/point_cloud_xyzrgb nodelet. This is the same setup that is used with the Kinect, R200 and other sensors, so it is proven to work across multiple use cases. A sketch of how the two nodelets fit together follows below.
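For reference, the two nodelets can also be loaded directly, without going through rgbd_launch. This is only a minimal sketch, not a tested configuration: the topic names under /my_rgb_cam and /my_depth_cam are placeholders that you would remap to whatever your camera drivers actually publish.

```xml
<launch>
  <!-- Nodelet manager shared by both nodelets (the name is arbitrary) -->
  <node pkg="nodelet" type="nodelet" name="rgbd_manager" args="manager" output="screen"/>

  <!-- Register the depth image into the RGB camera frame -->
  <node pkg="nodelet" type="nodelet" name="register_depth"
        args="load depth_image_proc/register rgbd_manager">
    <remap from="rgb/camera_info"   to="/my_rgb_cam/camera_info"/>
    <remap from="depth/camera_info" to="/my_depth_cam/camera_info"/>
    <remap from="depth/image_rect"  to="/my_depth_cam/image_rect"/>
    <!-- Publishes: depth_registered/image_rect, depth_registered/camera_info -->
  </node>

  <!-- Combine the registered depth image and the RGB image into an XYZRGB cloud -->
  <node pkg="nodelet" type="nodelet" name="points_xyzrgb"
        args="load depth_image_proc/point_cloud_xyzrgb rgbd_manager">
    <remap from="rgb/camera_info"      to="/my_rgb_cam/camera_info"/>
    <remap from="rgb/image_rect_color" to="/my_rgb_cam/image_rect_color"/>
    <!-- depth_registered/image_rect is produced by the register nodelet above -->
    <!-- Publishes: depth_registered/points (sensor_msgs/PointCloud2) -->
  </node>
</launch>
```

The register nodelet needs the tf transform between the two camera frames to be available, which is where the calibration mentioned below comes in.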
So it looks like the easiest option would be to supply depth_registered.launch.xml with the correct arguments for your setup (i.e. remappings for the RGB and depth data); if everything goes well, you should get a registered RGB point cloud out. The caveat is that this requires your cameras to be well calibrated against each other and their tf frames to be set up accordingly (commonly done via a URDF model). Also, I'm not sure how well things work (or whether they break down) when the cameras are farther apart than in typical standard RGBD sensors.
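If you don't have a URDF yet, a quick way to provide the extrinsic calibration for testing is a static_transform_publisher. The frame names and transform values here are placeholders; substitute whatever your calibration actually yields.

```xml
<launch>
  <!-- Static transform from the RGB camera frame to the depth camera frame:
       x y z yaw pitch roll parent_frame child_frame period_in_ms.
       The 5 cm x-offset is purely illustrative. -->
  <node pkg="tf" type="static_transform_publisher" name="rgb_to_depth_tf"
        args="0.05 0.0 0.0 0 0 0 rgb_camera_link depth_camera_link 100"/>
</launch>
```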
Another example of performing registration of camera streams is part of the Intel Realsense driver, but that lives outside of ROS, so it would require quite some work to adapt. See the rs-align example for high-level usage.