Converting Kinect depth image to Real world coordinate.

asked 2013-07-15 02:51:24 -0500 by Subhasis

updated 2016-10-24 08:33:19 -0500 by ngrennan

Hi All,

I need a little help converting the Kinect's depth image to the real-world coordinates of each depth pixel.

I used the formulae below, which I found in a paper. But I suspect the Z value is not the correct real-world Z; rather, it gives the distance of the pixel from the camera centre. Here (x, y) on the right-hand side are the pixel coordinates and z is the corresponding depth in metres. inv_fx, inv_fy, ox, oy come from the camera matrix.

const float inv_fx = 1.f/cameraMatrix(0,0);
const float inv_fy = 1.f/cameraMatrix(1,1);
const float ox = cameraMatrix(0,2);
const float oy = cameraMatrix(1,2);

realworld.x = z; //The depth value from the Kinect in meters
realworld.y = -(x - ox) * z * inv_fx; 
realworld.z = -(y - oy) * z * inv_fy;

If the formulae are wrong, please correct them so they compute the real-world point coordinates for each pixel in the depth image.

Regards, Subhasis


2 Answers


answered 2013-07-15 03:58:59 -0500 by tbh

The OpenNI driver already does this. It doesn't just produce one depth image. Check here and here. image_raw contains the raw uint16 depths in mm from the Kinect. image contains float depths in m. Finally, image_rect also contains float depths in m, but rectified as you want. The driver even publishes points, a point cloud that puts all the coordinates together for you.

If you really need to know the formula, maybe just look at the code in the driver.



Well, if I subscribe to /camera/depth/image_rect I just get values from 0-255 and no floats. Do I have to read that data in a particular way?

cv_bridge::CvImageConstPtr cv_ptr;
cv_ptr = cv_bridge::toCvShare(current_image_);
cv::convertScaleAbs(cv_ptr->image, mono8_img, 100, 0.0);

Markus  ( 2018-06-12 03:36:48 -0500 )

answered 2013-11-20 09:22:14 -0500 by barcelosandre

Does anyone know how to find the point cloud "pixel" (x, y, z in the real world) given a pixel from the RGB camera?

I am using a feature descriptor (ORB) on the RGB frame (/camera/rgb/image_color); it returns the x, y in the planar image. I want to relate this RGB pixel (feature) to the real-world coordinate in the point cloud topic (/camera/depth_registered/points).



I want the x, y, z in the real world. OpenNI has a point cloud topic; in RViz I see a "color transform" (RGB8) option, where each point is related to a pixel from the RGB camera. I just want to know how to return the specific point from the point cloud given the RGB pixel, and thus get the real-world x, y, z.

barcelosandre  ( 2013-11-22 03:27:22 -0500 )



Seen: 17,777 times

Last updated: Nov 20 '13