
Can I put a second Kinect on a turtlebot?

asked 2016-02-09 10:47:31 -0500

Xitoi

updated 2016-02-09 13:18:39 -0500

Hi, I need to add a second Kinect to a turtlebot. Is there a way to do that?

Thanks in advance.



Why do you expect problems?

NEngelhard (2016-02-09 13:51:44 -0500)

I just don't know how to do it

Xitoi (2016-02-09 14:33:29 -0500)

Please don't add answers that are not answers. Either reply to comments with comments, or update your original question with more information.

jarvisschultz (2016-02-09 14:42:18 -0500)

I just moved your answer to a comment.

jarvisschultz (2016-02-09 14:42:46 -0500)

1 Answer


answered 2016-02-09 15:01:59 -0500

The first step will be getting two Kinects working on a single machine. Generally, this requires somewhat specialized hardware. Most solutions I've seen require that each Kinect be on a separate USB bus. There are many posts on the internet about how to get multiple Kinects running at the same time. As far as ROS is concerned, to be able to open both, you will likely need to pass a few args to either freenect_launch or openni_launch (depending on which one you are using) to make sure the topics provided by each Kinect don't have ROS name conflicts, and to ensure that each instance of the driver is opening the correct Kinect.
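As a sketch of what passing those args might look like with openni_launch, each driver instance can be given its own namespace via the `camera` arg and pointed at a specific device via `device_id`. The `#1`/`#2` values below select devices by enumeration order and are placeholders; substitute your Kinects' actual serial numbers for a more robust setup.

```shell
# Launch each Kinect under its own namespace so topic names don't
# collide (/kinect1/..., /kinect2/...), and bind each driver
# instance to a specific device.
roslaunch openni_launch openni.launch camera:=kinect1 device_id:=#1
roslaunch openni_launch openni.launch camera:=kinect2 device_id:=#2
```

freenect_launch accepts similar arguments, since its launch files mirror openni_launch's interface.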

Once you have a computer hooked up to your turtlebot that is capable of collecting simultaneous streams, you'll need to modify your turtlebot's URDF to ensure that the ROS world knows where this new Kinect is relative to the turtlebot. Deriving the transforms for the new Kinect will either involve creating a precise mounting scheme for the second Kinect, or some sort of calibration procedure.
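A minimal sketch of such a URDF addition is below. The link/joint names, the parent link, and the offsets are all placeholders; the `origin` values must come from your actual mounting scheme or calibration.

```xml
<!-- Sketch: attach a second Kinect frame to the turtlebot URDF.
     Names and offsets are hypothetical; replace with measured values. -->
<link name="camera2_link"/>
<joint name="camera2_joint" type="fixed">
  <parent link="base_link"/>
  <child link="camera2_link"/>
  <!-- xyz in meters, rpy in radians, measured from base_link:
       here, 10 cm behind, 45 cm up, facing backwards -->
  <origin xyz="-0.10 0.0 0.45" rpy="0 0 3.14159"/>
</joint>
```

With this in place, tf can relate the second Kinect's point cloud to the rest of the robot.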

The next step would be to integrate both streams into your desired application. With the navstack, this may just mean adding a new layer to the costmap that contains the second stream. Or you may find it more appropriate to combine the streams from both Kinects into a single point cloud to feed into the navstack (or whatever your application is). This could be done with several of the nodelets available in pcl_ros, or you could use PCL directly.
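For the pcl_ros route, one possibility is the `pcl/PointCloudConcatenateDataSynchronizer` nodelet, which subscribes to several cloud topics and publishes a merged cloud in a common frame. The sketch below assumes the two drivers publish under `/kinect1` and `/kinect2` namespaces; adjust the topic names and output frame to your setup.

```xml
<!-- Sketch: merge two depth streams into one point cloud.
     Topic names and the output frame are placeholders. -->
<launch>
  <node pkg="nodelet" type="nodelet" name="pcl_manager" args="manager"/>
  <node pkg="nodelet" type="nodelet" name="cloud_concat"
        args="load pcl/PointCloudConcatenateDataSynchronizer pcl_manager">
    <param name="output_frame" value="base_link"/>
    <param name="approximate_sync" value="true"/>
    <rosparam param="input_topics">
      - /kinect1/depth_registered/points
      - /kinect2/depth_registered/points
    </rosparam>
  </node>
</launch>
```

Approximate time synchronization is usually necessary here, since two independent Kinects will not produce exactly matching timestamps.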

People have done exactly what you are looking to do. For example, check out the paper "Visual SLAM using Multiple RGB-D Cameras".

As a final note, a research group here at Northwestern was doing exactly what you are talking about, and they actually found it more convenient to connect each Kinect to a small, embedded computer (I believe they were using Odroids), and then stream the data over Gigabit Ethernet from the Odroids to a third embedded computer that merged the point clouds and did some of the data processing. For them, it was easier to find a reliable hardware solution this way, and it had the added bonus of offloading some computation from their main navigation computer.



Thank you for your time and the info. The paper was interesting and helpful.

Also, my problem was that I hadn't correctly modified the turtlebot URDF. I just found the answer in another thread.

Thank you all,


Xitoi (2016-02-09 15:29:35 -0500)

Hi Xitoi, where did you find the answer about modifying the turtlebot URDF to fit multiple cameras?

niuxianzhuo (2016-06-18 05:20:17 -0500)



Asked: 2016-02-09 10:47:31 -0500

Seen: 228 times

Last updated: Feb 09 '16