Robotics StackExchange | Archived questions

Kinect v2 and ROS - Tutorials

Hello,

I recently got a Kinect v2, and I am on ROS Kinetic. I have set up the driver and the ROS interface for the device from these links:

https://github.com/wiedemeyer/libfreenect2

https://github.com/code-iai/iai_kinect2

and I have seen some of the topics that the sensor publishes.

I am totally new to depth sensors, so I am kind of lost about which basics I should know before I can proceed with my application. I don't know if the Kinect v2 has the same basics as its predecessor, the Kinect for Xbox 360. I tried to google for tutorials, but as far as I saw, most people in the community prefer the Kinect for Xbox 360. Could someone who has worked with the Kinect v2 share some links/tutorials from which I could learn the basics and start building something on my own?

Thank you in advance for your time and your answers,

Chris

Asked by patrchri on 2016-09-01 23:21:39 UTC

Comments

Afaik (and remember) there isn't really that much difference between the v1 and the v2 (at least from a software perspective): the nodes 'just' publish a PointCloud2 msg, which can then be used by 'anything'. It just happens that the v2 cloud has a much higher resolution.

Asked by gvdhoorn on 2016-09-02 00:52:54 UTC
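(Editor's sketch, not part of the original thread: to make the comment above concrete, here is roughly what consuming that PointCloud2 data looks like at the byte level. It assumes the common layout where x, y, z are little-endian float32 fields at offsets 0, 4 and 8 within each point, and uses only plain-Python `struct` so it runs without ROS; with ROS installed you would normally use `sensor_msgs.point_cloud2.read_points` instead of unpacking by hand.)

```python
import struct

def read_xyz(data, point_step, num_points):
    """Yield (x, y, z) tuples decoded from a PointCloud2-style byte buffer.

    point_step is the stride in bytes between consecutive points; x, y, z
    are assumed to be little-endian float32 at offsets 0, 4 and 8.
    """
    for i in range(num_points):
        yield struct.unpack_from('<fff', data, i * point_step)

# Build a fake two-point cloud with a 16-byte stride (4 bytes of padding
# after the xyz floats, as many drivers pad points to 16 or 32 bytes).
buf = b''.join(struct.pack('<fff', *p) + b'\x00' * 4
               for p in [(1.0, 2.0, 3.0), (0.5, -0.5, 2.5)])
points = list(read_xyz(buf, point_step=16, num_points=2))
print(points)  # [(1.0, 2.0, 3.0), (0.5, -0.5, 2.5)]
```

The higher resolution of the v2 cloud only changes how many points you iterate over, not the decoding itself.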

Thanks for answering! If you would like, post this comment as an answer so I can mark it as a correct answer.

I will try to search for some algorithms manipulating the data I get. If I meet a more specific issue, I will ask again :)

Asked by patrchri on 2016-09-02 01:20:30 UTC

Let's wait a bit to see if anyone else responds. I'm not a computer vision expert, so my qualitative assessment may not actually be entirely correct / complete. On the other hand: if you are looking for noise / accuracy figures, I'd search for relevant literature.

Asked by gvdhoorn on 2016-09-02 01:38:19 UTC

Oh, what I do remember: you'll want to make sure you have the OpenCL/GL acceleration support in the driver working. That will free up quite a significant amount of CPU, and make 60Hz low-latency publishing possible.

Asked by gvdhoorn on 2016-09-02 01:39:11 UTC
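(Editor's note, not part of the original thread: with the iai_kinect2 package linked in the question, the depth-processing backend is selected via the `depth_method` argument of the kinect2_bridge launch file, to the best of my recollection; check the package README for the exact set of supported values on your build.)

```shell
# Select the OpenCL depth-processing backend when starting the bridge
# (depth_method also accepts e.g. cpu, opengl and cuda, depending on
# which backends libfreenect2 was compiled with).
roslaunch kinect2_bridge kinect2_bridge.launch depth_method:=opencl
```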

For a start, what I am aiming to do are simple tasks like object recognition in coordination with depth measurements. I just started working with depth sensors, so I don't know how "simple" what I am looking for really is. I haven't thought about the noise issue yet, although I have read it's a major problem.

Asked by patrchri on 2016-09-02 02:22:56 UTC

Oh thanks a lot. I read a more general description of that here.

Asked by patrchri on 2016-09-02 02:26:09 UTC

Your "how do I do X" questions are really more general questions about point cloud processing. I wouldn't worry too much about what using the Kinect v2 changes there: fundamentally nothing. The algorithms will be roughly the same, as they are typically device-independent.

Asked by gvdhoorn on 2016-09-02 02:34:19 UTC
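(Editor's sketch, not part of the original thread: a tiny example of the device-independence the comment above describes. A pass-through filter, which keeps only points inside a depth band, is the same whether the cloud came from a Kinect v1, v2 or any other sensor; PCL offers this as its PassThrough filter, but the idea fits in a few lines of plain Python.)

```python
def passthrough_z(points, z_min, z_max):
    """Keep only the points whose z coordinate lies in [z_min, z_max].

    points is any iterable of (x, y, z) tuples; the sensor that produced
    them is irrelevant, which is what makes such filters device-independent.
    """
    return [p for p in points if z_min <= p[2] <= z_max]

cloud = [(0.0, 0.0, 0.4),    # too close
         (0.1, 0.2, 1.5),    # inside the band
         (0.3, -0.1, 3.2)]   # too far
print(passthrough_z(cloud, 0.5, 2.0))  # [(0.1, 0.2, 1.5)]
```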

Thanks a lot for all the tips :)

Asked by patrchri on 2016-09-02 02:38:18 UTC

Answers

Any updates on this matter?

Asked by sitherix on 2017-07-18 04:04:05 UTC

Comments