Can I put a second Kinect on a TurtleBot?
Hi, I need to add a second Kinect to a TurtleBot. Is there a way to do that?
Thanks in advance.
The first step will be getting two Kinects working on a single machine. Generally, this requires somewhat specialized hardware. Most solutions I've seen require that each Kinect be on a separate USB bus. There are many posts on the internet about how to get multiple Kinects running at the same time. As far as ROS is concerned, to be able to open both, you will likely need to pass a few args to either freenect_launch or openni_launch (depending on which one you are using) to make sure the topics provided by each Kinect don't have ROS name conflicts, and to ensure that each instance of the driver is opening the correct Kinect.
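As a rough sketch, here is what that might look like with openni_launch (freenect_launch's freenect.launch accepts similar arguments). The camera namespaces and device IDs below are placeholders; the `device_id` value in particular depends on your setup, and selecting by serial number is more robust than selecting by index:

```xml
<launch>
  <!-- First Kinect: all topics namespaced under /camera1 -->
  <include file="$(find openni_launch)/launch/openni.launch">
    <arg name="camera" value="camera1"/>
    <!-- "#1" = first detected device; a serial number or bus@address also works -->
    <arg name="device_id" value="#1"/>
  </include>

  <!-- Second Kinect: all topics namespaced under /camera2 -->
  <include file="$(find openni_launch)/launch/openni.launch">
    <arg name="camera" value="camera2"/>
    <arg name="device_id" value="#2"/>
  </include>
</launch>
```

Setting a distinct `camera` arg for each include is what avoids the topic name conflicts, and `device_id` is what ties each driver instance to a specific Kinect.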
Once you have a computer hooked up to your turtlebot that is capable of collecting simultaneous streams, you'll need to modify your turtlebot's URDF to ensure that the ROS world knows where this new Kinect is relative to the turtlebot. Deriving the transforms for the new Kinect will either involve creating a precise mounting scheme for the second Kinect, or some sort of calibration procedure.
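For example, a minimal URDF addition might look like the following. The link and joint names, the parent link, and the offsets are all placeholders you'd replace with your measured or calibrated values:

```xml
<!-- Hypothetical mount for the second Kinect; names and offsets are placeholders -->
<link name="camera2_link"/>

<joint name="camera2_joint" type="fixed">
  <parent link="base_link"/>
  <child link="camera2_link"/>
  <!-- Pose of the second Kinect relative to the robot base: x y z (m), roll pitch yaw (rad) -->
  <origin xyz="-0.10 0.0 0.40" rpy="0 0 3.14159"/>
</joint>
```

In practice you'd probably reuse the TurtleBot's existing Kinect xacro macro with a different name and origin, so that the associated depth and RGB optical frames come along with it.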
The next step would be to integrate both streams into your desired application. With the navstack, this may just mean adding a new layer to the costmap that contains the second stream. Or you may find it more appropriate to combine the streams from both Kinects into a single point cloud to feed into the navstack (or whatever your application is). This could be done with several of the nodelets available in pcl_ros, or you could use PCL directly.
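As a sketch of the point-cloud-merging route, something like the launch file below should load the pcl_ros PointCloudConcatenateDataSynchronizer nodelet; the input topic names are placeholders, and it's worth double-checking the parameter names against the pcl_ros docs for your ROS distro:

```xml
<launch>
  <node pkg="nodelet" type="nodelet" name="pcl_manager" args="manager" output="screen"/>

  <!-- Synchronize the two clouds and concatenate them into one, expressed in a common frame -->
  <node pkg="nodelet" type="nodelet" name="cloud_merger"
        args="load pcl/PointCloudConcatenateDataSynchronizer pcl_manager" output="screen">
    <rosparam>
      output_frame: base_link
      approximate_sync: true
      input_topics:
        - /camera1/depth/points
        - /camera2/depth/points
    </rosparam>
  </node>
</launch>
```

The merged cloud is then published on the nodelet's output topic (under /cloud_merger), which you can point a costmap observation source (or whatever your application consumes) at.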
People have done exactly what you are looking to do. For example, check out the paper "Visual SLAM using Multiple RGB-D Cameras".
As a final note, a research group here at Northwestern was doing exactly what you are talking about, and they actually found it more convenient to connect each Kinect to a small embedded computer (I believe they were using Odroids) and then stream the data over Gigabit Ethernet from the Odroids to a third embedded computer that merged the point clouds and did some of the data processing. For them, it was easier to find a reliable hardware solution this way, and it had the added bonus of offloading some of the computation from their main navigation computer.
Hi, Xitoi, where did you find the answer about modifying the TurtleBot URDF to fit multiple cameras?
Why do you expect problems?
I just don't know how to do it
Please don't add answers that are not answers. Either reply to comments with comments, or update your original question with more information.
I just moved your answer to a comment.