
The first step will be getting two Kinects working on a single machine. Generally, this requires somewhat specialized hardware: most solutions I've seen require that each Kinect sit on its own USB bus, since a single Kinect consumes most of the bandwidth of a USB 2.0 bus. There are many posts around the internet about running multiple Kinects at the same time. As far as ROS is concerned, to be able to open both, you will likely need to pass a few arguments to either freenect_launch or openni_launch (depending on which one you are using) so that the topics published by each Kinect don't have ROS name conflicts, and so that each instance of the driver opens the correct Kinect.
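For example, with openni_launch you can push each driver into its own namespace with the camera argument and select devices with device_id. A minimal sketch (the names kinect1/kinect2 are just placeholders, and the #N indices refer to USB enumeration order; device serial numbers also work and are more robust):

    <launch>
      <!-- First Kinect: topics appear under /kinect1/... -->
      <include file="$(find openni_launch)/launch/openni.launch">
        <arg name="camera" value="kinect1"/>
        <arg name="device_id" value="#1"/>
      </include>

      <!-- Second Kinect: topics appear under /kinect2/... -->
      <include file="$(find openni_launch)/launch/openni.launch">
        <arg name="camera" value="kinect2"/>
        <arg name="device_id" value="#2"/>
      </include>
    </launch>

freenect_launch accepts the same style of arguments if you're using that driver instead.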

Once you have a computer hooked up to your turtlebot that can collect both streams simultaneously, you'll need to modify your turtlebot's URDF so that the rest of the ROS world knows where this new Kinect is relative to the turtlebot. Deriving the transform for the new Kinect will involve either a precise mounting scheme for the second Kinect or some sort of extrinsic calibration procedure. See the sketch below for what the URDF change might look like.
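In URDF terms, that amounts to adding a link for the second Kinect and a fixed joint that encodes its mounting pose, inside your robot's existing <robot> element. A sketch, assuming the frame name kinect2_link and made-up offsets that you would replace with measured or calibrated values:

    <!-- Second Kinect, mounted (for example) 10 cm forward and 35 cm up,
         facing backwards; replace these numbers with your real offsets -->
    <link name="kinect2_link"/>
    <joint name="kinect2_joint" type="fixed">
      <parent link="base_link"/>
      <child link="kinect2_link"/>
      <origin xyz="0.10 0.0 0.35" rpy="0 0 3.14159"/>
    </joint>

If I remember right, openni.launch publishes the camera's internal optical frames relative to <camera>_link, so with camera:=kinect2 the name kinect2_link should tie the driver's tf tree into your robot's tree automatically.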

The next step would be to integrate both streams into your desired application. With the navstack, this may just mean adding a new costmap layer (or observation source) that consumes the second stream. Or you may find it more appropriate to merge the streams from both Kinects into a single point cloud to feed into the navstack (or whatever your application is). This could be done with several of the nodelets available in pcl_ros, or you could use PCL directly; a sketch of the nodelet route follows.
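For the merging route, pcl_ros ships a PointCloudConcatenateDataSynchronizer nodelet that subscribes to a list of cloud topics, transforms them into a common frame, and republishes the concatenation. A sketch, assuming the topic and frame names from above (double-check the exact parameter names against the pcl_ros docs for your distro):

    <launch>
      <node pkg="nodelet" type="nodelet" name="pcl_manager" args="manager"/>
      <!-- Synchronize the two clouds, transform both into base_link,
           and publish the concatenated result on /merged_points -->
      <node pkg="nodelet" type="nodelet" name="cloud_merge"
            args="load pcl/PointCloudConcatenateDataSynchronizer pcl_manager">
        <rosparam>
          input_topics: [/kinect1/depth_registered/points, /kinect2/depth_registered/points]
          output_frame: base_link
          approximate_sync: true
        </rosparam>
        <remap from="output" to="/merged_points"/>
      </node>
    </launch>

Approximate sync matters here because the two Kinects are not hardware-synchronized, so their cloud timestamps will never match exactly.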

People have done exactly what you are looking to do. For example, check out the paper "Visual SLAM using Multiple RGB-D Cameras".

As a final note, a research group here at Northwestern was doing exactly what you are talking about, and they actually found it more convenient to connect each Kinect to a small embedded computer (I believe they were using Odroids), and then stream the data over Gigabit Ethernet from the Odroids to a third embedded computer that merged the point clouds and did some of the data processing. For them, it was easier to find a reliable hardware solution this way, and it had the added bonus of offloading some computation from their main navigation computer.