openni_launch + Kinect: Image Coordinate to Point? [closed]

asked 2013-05-06 16:38:58 -0500 by itsachen

updated 2016-10-24 08:34:49 -0500 by ngrennan

Using openni_launch with a Kinect, I'm writing a function that, given an image coordinate (from topic /camera/rgb/image_color), obtains the corresponding 3D point (from topic /camera/depth_registered/points).

It seems that the registration process described in the tutorial (setting depth_registration to True on /camera/driver) accomplishes the inverse of what I want: it interpolates a color for each point from /camera/rgb/image_color (point => color), rather than mapping an image coordinate to a point.

Is there a way to obtain the transformation used by the registration (image => point), or do I have to calculate it myself? OpenNI provides a lot of options, and though I've searched through the documentation I haven't found anything helpful. The only other approach I can think of is an awkward nearest-neighbor search over the registered point cloud.
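One thing worth noting: with depth_registration enabled, the cloud on /camera/depth_registered/points is organized (its height and width match the RGB image), so a pixel (u, v) maps directly to the point at byte offset v * row_step + u * point_step in the PointCloud2 data buffer. A minimal sketch of that lookup, ROS-free for illustration (it assumes float32 x, y, z fields at offsets 0, 4, 8, which is the openni_launch default layout):

```python
import struct

def point_at_pixel(u, v, width, point_step, row_step, data):
    """Return (x, y, z) for image pixel (u, v) from an organized
    PointCloud2 byte buffer. Assumes float32 x, y, z at field
    offsets 0, 4, 8 (the openni_launch default layout)."""
    offset = v * row_step + u * point_step
    return struct.unpack_from('<fff', data, offset)

# Tiny 2x2 stand-in cloud: 4 points, 16 bytes each (x, y, z + 4 pad bytes).
width, point_step = 2, 16
row_step = width * point_step
pts = [(0.0, 0.0, 0.5), (0.25, 0.0, 0.5),
       (0.0, 0.25, 0.75), (0.25, 0.25, 1.0)]
data = b''.join(struct.pack('<fff4x', *p) for p in pts)

print(point_at_pixel(1, 1, width, point_step, row_step, data))
# -> (0.25, 0.25, 1.0)
```

In a real subscriber callback, width, point_step, row_step, and data all come straight off the sensor_msgs/PointCloud2 message; NaN coordinates mean the depth at that pixel was invalid.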

EDIT: After further searching, it seems I could use the registered depth data to determine 3D coordinates. However, I'm having trouble figuring out how to initialize a DepthGenerator from the registered data.
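For the registered-depth route: since /camera/depth_registered/image_rect is pixel-aligned with the RGB image, a pixel's 3D coordinates follow from the standard pinhole model using the RGB camera intrinsics (fx, fy, cx, cy from /camera/rgb/camera_info): X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth. A small sketch (the intrinsic values below are illustrative Kinect-like defaults, not guaranteed; read the real ones from camera_info):

```python
def back_project(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection: image pixel (u, v) plus a metric
    depth reading -> 3D point (x, y, z) in the camera frame."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Illustrative Kinect-like intrinsics; the real values come from
# the /camera/rgb/camera_info topic.
fx = fy = 525.0
cx, cy = 319.5, 239.5

print(back_project(319.5, 239.5, 1.0, fx, fy, cx, cy))
# Principal point at 1 m depth -> (0.0, 0.0, 1.0)
```

This sidesteps the DepthGenerator entirely: once the depth image is registered, plain intrinsics are enough to go pixel => point.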


Closed for the following reason: "question is not relevant or outdated" by tfoote
close date 2015-11-26 03:26:25.272261