How to get real-world width & height of Kinect video patch?
Hello,
I am trying to compute the real-world size (width and height in meters) of a face patch that I have detected using a Kinect and OpenCV's Haar detector. I know the pixel dimensions of the patch and the distance to the points inside it. My guess at how to get the approximate real-world dimensions is to multiply the average distance to the patch by the angles (in radians) subtended by its width and height. To get those angles, I scale the Kinect's FOV by the fraction of the image that the patch occupies; the FOV is around 57.8 degrees for the IR image and 62.7 degrees for the RGB image (http://www.ros.org/wiki/kinect_calibration/technical).
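To make the idea concrete, here is a rough Python sketch of what I mean (the vertical FOV, image resolution, and the example patch numbers are just my own assumptions for illustration):

    import math

    # Assumed values for illustration -- the horizontal FOV is the RGB figure from
    # the wiki page; the vertical FOV and resolution are my own guesses.
    FOV_H_DEG = 62.7          # horizontal FOV of the RGB camera
    FOV_V_DEG = 49.0          # vertical FOV, assumed from the 4:3 aspect ratio
    IMG_W, IMG_H = 640, 480   # image resolution in pixels

    def patch_size_m(patch_w_px, patch_h_px, avg_depth_m):
        # Angle subtended by the patch, assuming the FOV maps linearly onto pixels.
        ang_w = math.radians(FOV_H_DEG) * patch_w_px / IMG_W
        ang_h = math.radians(FOV_V_DEG) * patch_h_px / IMG_H
        # Small-angle approximation: treat the arc length at that depth as the
        # real width/height of the patch.
        return avg_depth_m * ang_w, avg_depth_m * ang_h

    # e.g. a 100x120 pixel face patch whose average depth is 1.5 m
    print(patch_size_m(100, 120, 1.5))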
Is this the best way to do this, or is there a way I can use the calibration data from the Kinect's camera_info topic to do the same thing more directly?
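For reference, this is roughly the camera_info-based alternative I have in mind, with fx and fy taken from the K matrix of the camera_info message (the values below are placeholders, not my actual calibration):

    # Pinhole-model version: real size = pixel extent * depth / focal length.
    fx, fy = 525.0, 525.0   # placeholder focal lengths in pixels from camera_info K

    def patch_size_pinhole(patch_w_px, patch_h_px, avg_depth_m):
        return patch_w_px * avg_depth_m / fx, patch_h_px * avg_depth_m / fy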
Thanks!
patrick
EDIT: I am using Python, which limits my use of PCL for this task.