openni_launch + Kinect: Image Coordinate to Point? [closed]
Using openni_launch and a Kinect, I'm writing a function that obtains the 3D point (from topic /camera/depth_registered/points) nearest to a given image coordinate (from topic /camera/rgb/image_color).
It seems that the registration process described in the tutorial (setting depth_registration to True in /camera/driver) accomplishes the inverse of what I want, as it interpolates point color from /camera/rgb/image_color.
Is there a way to obtain the transformation used by the registration (image => point), or do I have to calculate it myself? OpenNI seems to provide a lot of options, but although I've searched through the documentation I can't find anything particularly helpful. The only other method I can think of is an awkward nearest-neighbor interpolation over the registered points.
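For context, my understanding is that /camera/depth_registered/points is an organized cloud with the same width and height as the RGB image, so a pixel (u, v) should correspond directly to the point at row-major index v * width + u. A minimal sketch of what I mean (the function name is my own; assumes the cloud really is organized and pixel-aligned):

```python
def pixel_to_point_index(u, v, cloud_width):
    """Index of the point in a row-major organized cloud that
    corresponds to image pixel (u, v)."""
    return v * cloud_width + u

# For a 640x480 registered cloud, pixel (10, 2) would map to
# point index 2 * 640 + 10 = 1290.
idx = pixel_to_point_index(10, 2, 640)
```

If that assumption holds, the lookup is trivial, but I'd like confirmation that the registered cloud is guaranteed to stay pixel-aligned.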
EDIT: After further searching, it seems I could use the registered depth data to determine 3D coordinates. However, I'm having trouble figuring out how to initialize a DepthGenerator from the registered data.
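To be concrete about what I'm attempting: given the registered depth image and the intrinsics from /camera/rgb/camera_info, I believe a pixel could be back-projected through the standard pinhole model without going through a DepthGenerator at all. A rough sketch, assuming depth is already in meters and fx, fy, cx, cy come from the camera_info message (the intrinsic values below are illustrative, not measured):

```python
def back_project(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth z (meters) to a 3D point
    in the camera frame using the pinhole camera model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example with typical Kinect VGA-ish intrinsics (illustrative values):
point = back_project(320, 240, 1.0, 525.0, 525.0, 319.5, 239.5)
```

Is this equivalent to what the registration pipeline does internally, or am I missing a distortion/extrinsic step?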