
It depends on what material you are referring to.

Both devices are capable of producing point clouds and depth maps. So if you are interested in doing a project with 3D machine vision (perhaps using OpenCV or PCL), there isn't really a difference in the mathematics involved. For getting this data into ROS, you could use freenect_launch or openni_launch for the Kinect for Xbox 360, or kinect2_bridge for the Kinect 2 (Kinect for Xbox One).
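To make the "same mathematics" point concrete, here is a minimal sketch of back-projecting a depth pixel to a 3D camera-frame point with the pinhole model; the same formula applies to either sensor's depth image. The intrinsics below (fx, fy, cx, cy) are example values in the range typically reported for the Kinect 360; real ones come from the device's camera_info.

```python
def depth_pixel_to_point(u, v, depth_m, fx, fy, cx, cy):
    """Convert pixel (u, v) with depth in meters to a 3D camera-frame point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Example intrinsics (assumed values, roughly Kinect-360-like):
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5

# A pixel at the optical center always maps to (0, 0, depth):
print(depth_pixel_to_point(319.5, 239.5, 1.0, fx, fy, cx, cy))  # (0.0, 0.0, 1.0)
```

PCL and the ROS depth_image_proc nodelets do exactly this per pixel when they build a point cloud from a depth image.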

If you are trying to do skeleton tracking with NITE, that is a bit of a different story. The Xbox 360 Kinect can easily be made to work with openni_tracker, but AFAIK OpenNI2 doesn't yet support the Kinect 2 (although people are trying). This means it is likely to be difficult to get the new Kinect working with skeleton tracking in Linux/ROS. You could potentially use the Microsoft SDK and transfer the necessary data to a ROS/Linux computer over a network, or even try to run ROS on a Windows computer, but IMO neither of these is an out-of-the-box solution.
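If you go the network route, the plumbing can be as simple as serializing joint positions and sending them as datagrams; the ROS side would then republish them as TF frames or a custom message. This is only a sketch: the JSON joint-dict format and the loopback demo are invented for illustration, not any existing protocol.

```python
import json
import socket

def send_skeleton(sock, addr, joints):
    """Serialize a {joint_name: [x, y, z]} dict and send it as one UDP datagram."""
    sock.sendto(json.dumps(joints).encode("utf-8"), addr)

def recv_skeleton(sock):
    """Receive one datagram and decode it back into a joint dict."""
    data, _ = sock.recvfrom(65535)
    return json.loads(data.decode("utf-8"))

# Loopback demo; in practice the sender runs on the Windows box with the SDK.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))  # let the OS pick a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_skeleton(tx, rx.getsockname(), {"head": [0.0, 0.5, 1.2]})
joints = recv_skeleton(rx)
print(joints["head"])  # [0.0, 0.5, 1.2]
```

UDP is a reasonable fit here since skeleton frames arrive at 30 Hz and a dropped frame is harmless, but that choice (and everything else above) is a design assumption, not something either SDK gives you.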

The devices certainly have different capabilities, and there are pros and cons to each. The old Kinect (360) is a structured-light sensor and the new Kinect (One) is a time-of-flight sensor. The new one is higher resolution, more accurate, and less sensitive to surface conditions and ambient lighting. At the same time, the new Kinect produces far more data, often requiring computation to be offloaded to a GPU. This also means it can be far more difficult to get the new Kinect running.
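Some rough back-of-the-envelope numbers show why the data volume matters. The resolutions are the published sensor specs (640x480 color/depth for the Kinect 360; 1920x1080 color and 512x424 depth for the Kinect 2); the per-pixel sizes (3-byte RGB, 2-byte depth) and 30 fps are simplifying assumptions, ignoring any compression on the wire.

```python
def stream_rate_mb_s(width, height, bytes_per_px, fps):
    """Raw data rate of one image stream, in MB/s (decimal megabytes)."""
    return width * height * bytes_per_px * fps / 1e6

# Color stream + depth stream for each device:
kinect1 = stream_rate_mb_s(640, 480, 3, 30) + stream_rate_mb_s(640, 480, 2, 30)
kinect2 = stream_rate_mb_s(1920, 1080, 3, 30) + stream_rate_mb_s(512, 424, 2, 30)
print(round(kinect1, 1), round(kinect2, 1))  # 46.1 199.6
```

Roughly 46 MB/s versus 200 MB/s of raw data, which is why kinect2_bridge supports GPU-accelerated depth registration and why USB 3.0 is required for the Kinect 2.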