Openni Kinect + ROS - detect user within a particular frame

asked 2011-11-28 22:16:39 -0500

Poppy

Hello,

I have successfully installed ROS and OpenNI libraries. The openni_tracker is also working fine.

I would now like to modify the openni_tracker so that when the user moves out of a predefined frame (say, a square of particular dimensions), it prints a message such as "the user left the frame on the right/left".

I am confused about how to define the dimensions of this square (in what units) and how to check them against the user's skeleton coordinates.

TIA!


2 Answers

answered 2011-11-29 03:21:04 -0500

Constantin S

I have not played with the openni_tracker, but I have spent a decent amount of time with the Microsoft SDK. It's probably best to simply use the data from the OpenNI libraries in your own node. You would subscribe to openni's topics (or write a node which publishes openni information on a topic) and publish whatever message you want when a person is inside your box. I have in mind two approaches you could take to define the square, each with its own advantages and disadvantages:

1) As Martin suggested, use real-world units (meters). This allows you to specify a bounding box in terms of real-world coordinates. This is helpful when you have a transform for your kinect and you want to focus on an area in the real world. Here you would simply be checking bounding-box values using euclidean data (for example: left < x < right && bottom < y < top). (I use x and y here because with the Microsoft SDK, z is the depth dimension; perhaps it's the same with the OpenNI libraries.)

2) If you're interested instead in what a kinect can see, regardless of where it is in the world, the best way would be with pixels. For the Microsoft SDK, the skeleton frames are reported in pixel coordinates (here you use pixels instead of meters). The advantage of this is that no matter where the kinect is in the world, or how far away the person is, if they are inside that view area, you'll capture them. Remember that the pixel coordinate system is different: y is positive in the "down" direction, and all values are bounded below by 0. You would choose the bounding box using pixels (for example, with some arbitrary values: 400(left) < x < 800(right) && 100(top) < y < 400(bottom)). You can determine the pixel coordinates to use by looking up the field of view in the kinect specifications and choosing an average distance from the kinect you wish to track at.
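To make the two approaches concrete, here is a minimal, ROS-free sketch in Python. The function names, coordinate conventions, and bounds are my own illustrative assumptions, not values taken from either SDK; in a real node the (x, y) or (px, py) values would come from the skeleton data:

```python
def in_metric_box(x, y, left, right, bottom, top):
    """Approach 1: bounding-box test in real-world units (meters).
    Assumes x grows to the right and y grows upward; depth (z) is ignored."""
    return left < x < right and bottom < y < top

def in_pixel_box(px, py, left, right, top, bottom):
    """Approach 2: bounding-box test in image (pixel) coordinates.
    Note the flipped vertical axis: y grows downward, so top < py < bottom."""
    return left < px < right and top < py < bottom

# Example with the arbitrary pixel bounds from the text:
print(in_pixel_box(500, 250, 400, 800, 100, 400))  # True: inside the box
print(in_pixel_box(500, 50, 400, 800, 100, 400))   # False: above the top edge
```

Comparing which edge the test fails on (e.g. x >= right) would tell you whether the user left on the right or the left, which is what the question asks to print.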

I hope this is helpful.

Constantin


Comments

@constantin, Thanks for the detailed answer. Now my job is to understand the kinect coordinate system :)
Poppy ( 2011-11-30 23:46:53 -0500 )
answered 2011-11-29 01:11:29 -0500

I don't know if this could be a valid answer for you, but how about this?

Instead of modifying openni_tracker, you could write a node that listens to the user pose published by openni_tracker. The user pose is published as a set of transforms, one for each part of the body (details here), in 3D space (in meters). So instead of a square, you should check that all the parts of the body are inside a cube you define, and print your messages accordingly.
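A minimal sketch of that check, in plain Python with no ROS dependencies. The joint names, positions, and cube bounds here are illustrative assumptions; a real node would fill the `joints` mapping by looking up the transforms that openni_tracker publishes:

```python
def user_inside_cube(joints, x_min, x_max, y_min, y_max, z_min, z_max):
    """Return True only if every body-part position lies inside the cube.
    joints: mapping of joint name -> (x, y, z) position in meters."""
    return all(
        x_min <= x <= x_max and y_min <= y <= y_max and z_min <= z <= z_max
        for (x, y, z) in joints.values()
    )

# Hypothetical skeleton data: two joints inside a 2 m x 2 m x 2 m cube.
joints = {"head": (0.1, 1.6, 2.0), "left_hand": (-0.4, 1.0, 1.9)}
print(user_inside_cube(joints, -1.0, 1.0, 0.0, 2.0, 1.0, 3.0))  # True

# Adding a joint outside the cube makes the check fail.
joints["right_hand"] = (1.5, 1.0, 2.0)
print(user_inside_cube(joints, -1.0, 1.0, 0.0, 2.0, 1.0, 3.0))  # False
```

When the check fails, inspecting which bound was violated (e.g. x > x_max) tells you which side the user left through.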

Hope this could help you.


Comments

@Martin, Thank you! I will try your suggestion and let you know how it goes.
Poppy ( 2011-11-30 23:45:06 -0500 )

Stats

Asked: 2011-11-28 22:16:39 -0500

Seen: 970 times

Last updated: Nov 29 '11