
ROS Kinect Programming Tutorial

asked 2012-03-26 05:22:10 -0500 by Haikal Pribadi

updated 2016-10-24 09:03:22 -0500 by ngrennan

Hi everybody. I'm quite new to ROS (well, not completely new; I've done some development with ROS and grasp the basics), but I'm completely new to the Kinect. I've checked out all the Kinect drivers in ROS, installed them, and run them (e.g. in rviz), but I still don't have a complete understanding of how to develop programs for the Kinect with ROS, i.e. how to use the Kinect drivers programmatically. Does anybody know a good ROS-Kinect programming tutorial? I've searched all over the internet for the last two days (or I might have just been looking in the wrong places).

The specific task I'm trying to accomplish is to locate a person in a room (e.g. by having them raise a hand), then start tracking the person, and perhaps follow them. I've seen a similar demo in MIT's ROS packages, but I don't see any tutorial. Furthermore, MIT's packages are all from the Diamondback distribution, while all my packages are built from Electric.

Thank you, everyone.


2 Answers


answered 2012-03-26 07:13:35 -0500 by Thomas D

The answer from @dlaz is correct in the general case, but since your comment says you just want to use the skeletal tracker, you can check out a project that some students worked on for me. They took the output of the Kinect skeletal tracker and added the ability to recognize some new poses beyond the default psi pose. When a new pose is detected, they simply print to the screen which pose the user is performing and publish the name of the pose. There are also the beginnings of a node that takes the pose topic and tries to determine whether the user is performing a gesture, based on a sequence of poses with time constraints.

You can check out the code using

svn co http://ibotics.ucsd.edu/svn/stingray/trunk/cse_kinect/

Then do

cmake .
rosmake

and you should be able to run it with the included launch file

roslaunch cse_kinect cse_kinect.launch

or

rosrun cse_kinect cse_kinect
rosrun cse_kinect cse_gestures

You still have to perform the psi pose at the beginning, but after that the new poses should be detected. One caveat: it only works for User 1, so if you start, go out of range, and then come back in, the Kinect skeletal tracker may think you are User 2, and these nodes will stop detecting new poses. Hope that helps.
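
For reference, here is a minimal sketch (not from the original package) of a node that listens for the published pose names. The topic name /cse_kinect/pose and the std_msgs/String message type are assumptions, so check the cse_kinect source for the real ones.

    // pose_listener.cpp: print each pose name as cse_kinect publishes it.
    #include <ros/ros.h>
    #include <std_msgs/String.h>

    // Called once per published pose name.
    void poseCallback(const std_msgs::String::ConstPtr& msg)
    {
      ROS_INFO("Detected pose: %s", msg->data.c_str());
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "pose_listener");
      ros::NodeHandle nh;
      // "/cse_kinect/pose" is a guess at the topic name; verify with rostopic list.
      ros::Subscriber sub = nh.subscribe("/cse_kinect/pose", 10, poseCallback);
      ros::spin();
      return 0;
    }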


Comments

Thank you, Thomas. I will get my hands on this work of yours today and will get back to this forum. Thank you so much for the help. In the meantime, I'll leave this topic without a marked answer just to keep people coming, in case there are some other great resources out there. Cheers.

Haikal Pribadi (2012-03-26 20:56:42 -0500)

Hi Thomas. Running it returns: [ERROR] [1332848883.240844802]: Frame id /openni_depth_frame does not exist! Frames (6): Frame /camera_depth_frame exists with parent /camera_link. ... Do you think you might know what the problem is?

Haikal Pribadi (2012-03-27 01:49:21 -0500)

I expect that to happen until you do the psi pose and the user is recognized. Look at http://www.ros.org/wiki/openni_tracker to see what the psi pose is; then you should see it print 'New User 1' and some calibration messages. Or try http://answers.ros.org/question/12866/openni-tracker.

Thomas D (2012-03-27 04:00:37 -0500)
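
To make the frame setup concrete, here is a hedged sketch (not from the thread) of following the tracked user with tf. It assumes the openni_tracker default frame names, i.e. a parent frame like /openni_depth_frame and per-user joint frames like /torso_1; the resulting offset could then be fed to a velocity controller to follow the person.

    // follow_user.cpp: look up user 1's torso relative to the Kinect.
    #include <ros/ros.h>
    #include <tf/transform_listener.h>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "follow_user");
      ros::NodeHandle nh;
      tf::TransformListener listener;
      ros::Rate rate(10.0);
      while (nh.ok())
      {
        tf::StampedTransform transform;
        try
        {
          // /torso_1 only exists after user 1 has done the psi pose
          // and calibration has finished.
          listener.lookupTransform("/openni_depth_frame", "/torso_1",
                                   ros::Time(0), transform);
          ROS_INFO("User 1 torso at x=%.2f y=%.2f z=%.2f",
                   transform.getOrigin().x(),
                   transform.getOrigin().y(),
                   transform.getOrigin().z());
        }
        catch (tf::TransformException& ex)
        {
          // Expected until calibration completes; see the error above.
          ROS_WARN_THROTTLE(5.0, "%s", ex.what());
        }
        rate.sleep();
      }
      return 0;
    }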

Yes, it worked after I did the psi pose; I just didn't know I had to do that first. Thanks, Thomas. Oh, do you happen to remember the configuration you used for rviz? I'm still learning that too and don't fully understand it yet.

Haikal Pribadi (2012-03-27 08:59:19 -0500)

I don't remember exactly how I set up RViz, but all I really show is a PointCloud2 display with the Kinect data, with the Fixed Frame set to openni_depth_optical_frame or openni_rgb_optical_frame and the Target Frame set to <Fixed Frame>. There isn't much more to it than that.

Thomas D (2012-03-27 11:22:20 -0500)

That link does not exist anymore: http://ibotics.ucsd.edu/svn/stingray/...

blackmamba591 (2015-11-12 00:02:59 -0500)

I resurrected that package and put it at https://github.com/tdenewiler/cse_kinect . It has not been updated to work with catkin, and I haven't tried to run it in years, but maybe the example will help.

Thomas D (2015-11-12 09:13:07 -0500)
answered 2012-03-26 05:45:04 -0500 by dlaz

In ROS, working with Kinect data is no different from working with any other point cloud data. PCL (http://pointclouds.org/) is probably the best way to work with it. The pcl_ros wiki page has an example of how to subscribe to point clouds (such as those from the Kinect).
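
As a starting point, here is a minimal subscriber along the lines of the pcl_ros wiki example. The topic name /camera/depth/points is what the OpenNI Kinect driver typically publishes, but verify with rostopic list.

    // cloud_listener.cpp: subscribe to Kinect point clouds via pcl_ros.
    #include <ros/ros.h>
    #include <pcl_ros/point_cloud.h>
    #include <pcl/point_types.h>

    typedef pcl::PointCloud<pcl::PointXYZ> PointCloud;

    // pcl_ros converts the incoming sensor_msgs/PointCloud2 to a PCL
    // cloud automatically, so the callback can take the PCL type directly.
    void cloudCallback(const PointCloud::ConstPtr& cloud)
    {
      ROS_INFO("Received cloud with %lu points",
               (unsigned long)cloud->points.size());
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "cloud_listener");
      ros::NodeHandle nh;
      ros::Subscriber sub =
          nh.subscribe<PointCloud>("/camera/depth/points", 1, cloudCallback);
      ros::spin();
      return 0;
    }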


Comments

I have a fairly tight time constraint on my project. Since my goal is simply to use the skeletal tracking function (which is already established), do you really think I need to go through the learning curve of point cloud data?

Haikal Pribadi (2012-03-26 06:47:13 -0500)

Well, if I really do need it and there's no other way, then I guess I should. But do I really need it in my case right now? Thanks, dlaz.

Haikal Pribadi (2012-03-26 06:47:44 -0500)
