Getting an (x, y) -> depth map in Python from a Kinect? [closed]
I'm going around in circles trying to solve what I think is a simple problem: using Python (not C++), I want to get the depth values corresponding to points in the RGB image returned by the Kinect at given (x, y) coordinates. I am subscribing to /camera/rgb/image_color and using cv_bridge and OpenCV's face detector to get the bounding box around a face. I then want the depth values for each point (or a subsample of points) in that box, so I need to be able to pick (x, y) coordinates I already know from within the box and get back the depth values for those points.
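Once the lookup itself works, sampling the box seems simple enough. Here's a sketch of what I have in mind, assuming the depth image comes back as a (height, width) float32 NumPy array with NaN where the Kinect has no reading, and the box is (x, y, w, h) as OpenCV's detector returns it (the array below is synthetic, just to show the shape of the idea):

```python
import numpy as np

def depths_in_box(depth, box, step=4):
    """Collect valid depth readings inside a face bounding box.

    depth: (height, width) float32 array, NaN where depth is unknown.
    box:   (x, y, w, h) as returned by OpenCV's face detector.
    step:  subsampling stride in pixels.
    """
    x, y, w, h = box
    patch = depth[y:y + h:step, x:x + w:step]   # rows are y, columns are x
    return patch[~np.isnan(patch)]              # drop invalid readings

# Synthetic stand-in for a 640x480 depth frame, with a "face" region
# about 0.9 m away at pixel box (x=300, y=200, w=40, h=40).
depth = np.full((480, 640), np.nan, dtype=np.float32)
depth[200:240, 300:340] = 0.9
samples = depths_in_box(depth, (300, 200, 40, 40))
```

With a stride of 4 that yields a 10x10 grid of samples from the 40x40 box, all valid here since the synthetic patch has no NaNs.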
I tried subscribing to /camera/depth/image, which is a 32FC1 image type, and I thought this was going to work, but I ran into "index out of range" errors when accessing points that should have been well within the image's width and height.
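One thing I suspect (an assumption on my part, not something I've confirmed): cv_bridge hands the 32FC1 image back as a NumPy array indexed (row, column), i.e. [y, x], so looking it up as [x, y] would blow past the 480 rows whenever x is larger than the image height. A minimal demonstration with a dummy array in place of the real depth frame:

```python
import numpy as np

# Simulate a 32FC1 depth image as cv_bridge would return it:
# shape is (height, width) = (480, 640), row-major.
depth = np.zeros((480, 640), dtype=np.float32)
depth[100, 500] = 1.25   # a depth of 1.25 m at pixel (x=500, y=100)

x, y = 500, 100

# Correct lookup: row index (y) first, column index (x) second.
assert depth[y, x] == np.float32(1.25)

# Swapped lookup: x=500 exceeds the 480 rows, so this raises
# IndexError even though (500, 100) is inside the 640x480 image.
try:
    depth[x, y]
    raised = False
except IndexError:
    raised = True
assert raised
```

That would explain errors only for points past column 480, which matches "points that should have been in bounds".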
Although I realize there are no Python bindings for PCL )-:, I found this nice utility on this forum, so I'm wondering: should I be using point clouds rather than images to get the depth values?
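If I do go the point-cloud route, my understanding (again an assumption, worth checking) is that the Kinect publishes an organized PointCloud2, so an RGB pixel (u, v) maps to the byte offset v * row_step + u * point_step in the cloud's data buffer, with the x/y/z float32 fields at the offsets listed in the message. A stdlib-only sketch of that arithmetic against a synthetic buffer (the point_step of 16 and field offsets 0/4/8 mimic a common x, y, z + padding layout; real code should read them from the message's fields):

```python
import struct

# Synthetic organized cloud: 480 rows of 640 points, 16 bytes per point
# (x, y, z as little-endian float32 plus 4 bytes padding).
width, height, point_step = 640, 480, 16
row_step = width * point_step
data = bytearray(height * row_step)

u, v = 500, 100                       # pixel coordinates from the RGB image
off = v * row_step + u * point_step   # byte offset of that pixel's point
struct.pack_into('<fff', data, off, 0.1, -0.2, 1.25)  # plant a known point

x, y, z = struct.unpack_from('<fff', data, off)
# z is the distance along the camera axis at pixel (500, 100).
```

The appeal over the depth image is that the cloud gives metric x/y/z directly, at the cost of parsing the binary layout (or finding a helper that does it).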