Data fusion using two (or more) Kinect

asked 2012-06-29 00:14:44 -0500

Hi community, I am asking about something I did not manage to find anywhere on the internet.

I am trying to fuse the results obtained from more than one Kinect in real time. For example, consider the skeleton information for the same person obtained by two separate Kinect devices tracking that person. I want to combine both skeletons to remove occlusions and other artifacts, which would allow 360-degree tracking.

This means the depth information obtained from each Kinect device has to be converted into a global coordinate system that is independent of the frame of reference of either Kinect device.
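For the global coordinate system part, the usual ROS approach is to calibrate the extrinsics of each Kinect once, publish them on tf (e.g. with a static_transform_publisher), and then transform everything into a shared frame. Below is a minimal sketch assuming a "world" frame and a "skeleton_joint" topic of geometry_msgs/PointStamped messages stamped in each camera's frame; those names are placeholders, not anything a particular driver publishes.

#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <geometry_msgs/PointStamped.h>

tf::TransformListener* g_listener;

void jointCallback(const geometry_msgs::PointStamped::ConstPtr& joint_in)
{
  geometry_msgs::PointStamped joint_world;
  try
  {
    // Wait briefly for the camera->world transform, then re-express the
    // joint position in the common "world" frame.
    g_listener->waitForTransform("world", joint_in->header.frame_id,
                                 joint_in->header.stamp, ros::Duration(0.1));
    g_listener->transformPoint("world", *joint_in, joint_world);
    ROS_INFO("Joint in world frame: (%.3f, %.3f, %.3f)",
             joint_world.point.x, joint_world.point.y, joint_world.point.z);
  }
  catch (tf::TransformException& ex)
  {
    ROS_WARN("%s", ex.what());
  }
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "kinect_to_world");
  ros::NodeHandle nh;
  tf::TransformListener listener;
  g_listener = &listener;
  // Topic name is only an example; remap it to your tracker's output.
  ros::Subscriber sub = nh.subscribe("skeleton_joint", 10, jointCallback);
  ros::spin();
  return 0;
}

Run one such node (or one subscriber) per Kinect, each remapped to that device's tracker topic, and the fused data all end up in the same "world" frame.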

Has anyone thought about or worked on this, or does it already exist?

Regards, Pankaj

1 Answer

answered 2012-06-29 03:45:17 -0500

One issue to keep in mind is that Kinects cannot be hardware-synchronized, but you will most likely be combining the data as if they were (probably using an Approximate Time Synchronizer). Depending on your application, the time offsets in your data could lead to issues that you simply won't be able to get around.
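As a sketch of what that looks like with message_filters and the ApproximateTime policy: the topic names below ("/kinect1/depth/image_raw", "/kinect2/depth/image_raw") are assumptions, so point them at whatever your OpenNI drivers actually publish.

#include <ros/ros.h>
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <sensor_msgs/Image.h>
#include <boost/bind.hpp>

// Pair up depth frames from the two cameras by closest timestamps.
typedef message_filters::sync_policies::ApproximateTime<
    sensor_msgs::Image, sensor_msgs::Image> SyncPolicy;

void pairCallback(const sensor_msgs::ImageConstPtr& depth1,
                  const sensor_msgs::ImageConstPtr& depth2)
{
  // The two frames are close in time but not identical; fuse them here.
  ros::Duration offset = depth1->header.stamp - depth2->header.stamp;
  ROS_INFO("Got a pair, stamp offset: %.4f s", offset.toSec());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "dual_kinect_sync");
  ros::NodeHandle nh;

  message_filters::Subscriber<sensor_msgs::Image>
      depth1_sub(nh, "/kinect1/depth/image_raw", 5);
  message_filters::Subscriber<sensor_msgs::Image>
      depth2_sub(nh, "/kinect2/depth/image_raw", 5);

  message_filters::Synchronizer<SyncPolicy>
      sync(SyncPolicy(10), depth1_sub, depth2_sub);
  sync.registerCallback(boost::bind(&pairCallback, _1, _2));

  ros::spin();
  return 0;
}

The residual stamp offset in the callback tells you how far apart the paired frames really are, which is a useful sanity check for whether the lack of hardware sync matters for your application.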

Also, due to the volume of data coming off a Kinect, I would suggest running them at QVGA resolution.
