
Sekocan's profile - activity

2015-06-18 09:14:20 -0500 received badge  Famous Question (source)
2015-05-22 02:27:10 -0500 received badge  Notable Question (source)
2014-12-01 18:08:49 -0500 received badge  Famous Question (source)
2014-10-20 06:52:09 -0500 received badge  Self-Learner (source)
2014-10-20 06:52:09 -0500 received badge  Teacher (source)
2014-10-20 06:46:34 -0500 answered a question Coordinates of a specific pixel in depthimage published by Kinect

I was looking for the exact formula to calculate the real-world X, Y, Z values from the depth image, and I finally found it.

First of all, it appears that the values in the depth image do not give the distance between a point in the real world and the origin of the Kinect. Instead, each value is the distance between the point and the Kinect's XY plane (the plane parallel to the front surface of the Kinect). So, if the Kinect is looking at a wall, all the depth values are roughly the same, regardless of the actual distance between each point on the wall and the Kinect's origin.

I found the calculations in the NuiSkeleton.h file: https://code.google.com/p/stevenhickson-code/source/browse/trunk/blepo/external/Microsoft/Kinect/NuiSkeleton.h?r=14

For the Z axis, line 625 says:

FLOAT fSkeletonZ = static_cast<FLOAT>(usDepthValue >> 3) / 1000.0f;

You don't have to worry about the bit-shift operation, because the method's description says:

/// <param name="usDepthValue">
/// The depth value (in millimeters) of the depth image pixel, shifted left by three bits. The left
/// shift enables you to pass the value from the depth image directly into this function.
/// </param>

So, you can use the equation:

FLOAT fSkeletonZ = static_cast<FLOAT>(usDepthValue) / 1000.0f;

The unit is meters (because of the division by 1000).
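
For example (a quick sketch with my own variable name, assuming the raw reading is already in millimeters, as in the depth images ROS publishes):

// Raw depth reading for one pixel, in millimeters (e.g. from a 16UC1 depth image).
unsigned short usDepthMillimeters = 1234;

// Divide by 1000 to get meters: Z for this pixel is 1.234 m.
float fSkeletonZ = static_cast<float>(usDepthMillimeters) / 1000.0f;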

For the X and Y axes, lines 633 and 634 give:

FLOAT fSkeletonX = (lDepthX - width/2.0f) * (320.0f/width) * NUI_CAMERA_DEPTH_IMAGE_TO_SKELETON_MULTIPLIER_320x240 * fSkeletonZ;

FLOAT fSkeletonY = -(lDepthY - height/2.0f) * (240.0f/height) * NUI_CAMERA_DEPTH_IMAGE_TO_SKELETON_MULTIPLIER_320x240 * fSkeletonZ;

The Y-axis calculation starts with a minus sign because in an image (like the depth image) the Y value increases as you go down, while in real-world coordinates Y conventionally increases as you go up.

For the NUI_CAMERA_DEPTH_IMAGE_TO_SKELETON_MULTIPLIER_320x240 constant, line 349 defines:

#define NUI_CAMERA_DEPTH_IMAGE_TO_SKELETON_MULTIPLIER_320x240 (NUI_CAMERA_DEPTH_NOMINAL_INVERSE_FOCAL_LENGTH_IN_PIXELS)

A quick Google search gives this page: http://msdn.microsoft.com/en-us/library/hh855368.aspx which defines the constant as

NUI_CAMERA_DEPTH_NOMINAL_INVERSE_FOCAL_LENGTH_IN_PIXELS (3.501e-3f)
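
Putting the pieces together, the whole conversion looks roughly like this (a sketch under my own function and variable names; I assume a depth image of size width x height with values already in millimeters, i.e. not shifted):

#include <cstdint>

#define NUI_CAMERA_DEPTH_NOMINAL_INVERSE_FOCAL_LENGTH_IN_PIXELS (3.501e-3f)

// Convert a depth image pixel (lDepthX, lDepthY) with a raw depth value in
// millimeters into real-world coordinates in meters, following the formulas
// from NuiSkeleton.h quoted above.
void depthPixelToWorld(long lDepthX, long lDepthY, uint16_t usDepthMillimeters,
                       float width, float height,
                       float& fSkeletonX, float& fSkeletonY, float& fSkeletonZ)
{
    // Z: distance from the Kinect's XY plane, in meters.
    fSkeletonZ = static_cast<float>(usDepthMillimeters) / 1000.0f;

    // X and Y: offset from the image center, scaled by the nominal inverse
    // focal length and by Z.
    fSkeletonX = (lDepthX - width / 2.0f) * (320.0f / width)
                 * NUI_CAMERA_DEPTH_NOMINAL_INVERSE_FOCAL_LENGTH_IN_PIXELS * fSkeletonZ;

    // The minus sign: image Y grows downward, world Y grows upward.
    fSkeletonY = -(lDepthY - height / 2.0f) * (240.0f / height)
                 * NUI_CAMERA_DEPTH_NOMINAL_INVERSE_FOCAL_LENGTH_IN_PIXELS * fSkeletonZ;
}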

So, using these formulas gives you the real-world coordinates within the coordinate frame shown in this image: [image: the Kinect's real-world coordinate frame]

(ref: http://pille.iwr.uni-heidelberg.de/~kinect01/doc/reconstruction.html)

2014-10-20 06:25:24 -0500 received badge  Notable Question (source)
2014-10-16 14:31:30 -0500 received badge  Popular Question (source)
2014-10-16 13:41:36 -0500 received badge  Student (source)
2014-10-15 09:55:14 -0500 asked a question Coordinates of a specific pixel in depthimage published by Kinect

I have been looking for an answer for hours, so I am finally asking here.

I am using CMvision to find a specific color in the Kinect's view, and I want to find the real-world coordinates of the object with that color. I am planning to use CMvision to get the image coordinates (the X and Y pixel values in the picture) and then use those coordinates together with the depth value at that pixel to calculate the real-world coordinates.

As I understand it, the /camera/depth_registered/points topic already gives the real-world coordinates, but I couldn't find out how to retrieve the X, Y, Z values of a specific pixel that I've chosen in the depth (or RGB) image.
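
What I am imagining is something like this (just a rough, unverified sketch of my idea; I don't know if indexing the cloud this way is right):

#include <ros/ros.h>
#include <pcl_ros/point_cloud.h>
#include <pcl/point_types.h>

typedef pcl::PointCloud<pcl::PointXYZRGB> PointCloud;

void cloudCallback(const PointCloud::ConstPtr& cloud)
{
    // Example pixel coordinates; in my node these would come from CMvision.
    int u = 320, v = 240;

    // The registered cloud is organized (width x height), so it should be
    // possible to index it like an image.
    pcl::PointXYZRGB p = cloud->at(u, v);
    ROS_INFO("Real-world coordinates: x=%f y=%f z=%f", p.x, p.y, p.z);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "pixel_to_point");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe<PointCloud>("/camera/depth_registered/points", 1, cloudCallback);
    ros::spin();
    return 0;
}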

Thanks in advance.

2014-10-13 08:29:44 -0500 received badge  Popular Question (source)
2014-10-10 10:16:40 -0500 received badge  Supporter (source)
2014-10-10 10:16:34 -0500 commented answer Position of a point on expanding map of GMapping

Thank you! It was so simple yet hidden :) I did some experiments and saw that the MapMetaData origin values change as the map expands, and they always give the real-world coordinates of the bottom-left corner of the map.
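
As a sketch of what I mean (my own helper function, using the nav_msgs/OccupancyGrid from the /map topic):

#include <nav_msgs/OccupancyGrid.h>

// Convert a map cell (row, col) into absolute world coordinates in meters,
// using the origin and resolution from the map's MapMetaData. The origin is
// the real-world pose of cell (0, 0), i.e. the bottom-left corner, and it is
// updated whenever GMapping grows the map.
void cellToWorld(const nav_msgs::OccupancyGrid& map, int row, int col,
                 double& wx, double& wy)
{
    wx = map.info.origin.position.x + (col + 0.5) * map.info.resolution;
    wy = map.info.origin.position.y + (row + 0.5) * map.info.resolution;
}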

2014-10-10 10:14:25 -0500 received badge  Scholar (source)
2014-10-10 04:38:27 -0500 asked a question Position of a point on expanding map of GMapping

Hi, I am trying something and I need some help.

I have two robots mapping the same environment with separate GMapping instances, building separate maps. During the mapping process, I want to send the maps to a host computer from time to time. On the host, I want to merge these maps, but as you can guess, unless the maps have a large area of intersection, ICP or a similar algorithm won't work.

So I placed distinct landmarks in the environment (like the ones soccer-playing robots use: cylinders with colored stripes). I can detect them with the Kinect, which gives me the position of each landmark relative to the robot, and with the help of GMapping I know the position and orientation of the robot with respect to its map. The plan is to send the separate maps to the host with the landmarks tagged on them, so the host can easily match the landmark positions and merge the maps (with translations, rotations, and distortions where necessary).

My problem is that GMapping expands the map when it needs to, and the coordinates of points on the map change when that happens (I am planning to use the pgm files as maps, but other suggestions would be appreciated). How can I get the coordinates of points, and of my robot, on the map (with negative coordinates if necessary) relative to an absolute position, like the start position of the robot?
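
For the robot's own pose, I assume I can ask tf for the transform from the map frame to the robot's base frame, something like this (a sketch; the frame names are guesses for my setup):

#include <ros/ros.h>
#include <tf/transform_listener.h>

int main(int argc, char** argv)
{
    ros::init(argc, argv, "robot_pose_in_map");
    ros::NodeHandle node;
    tf::TransformListener listener;
    ros::Rate rate(1.0);
    while (node.ok())
    {
        tf::StampedTransform transform;
        try
        {
            // Pose of the robot base in the map frame, i.e. relative to an
            // absolute origin that does not move when the map expands.
            listener.lookupTransform("map", "base_link", ros::Time(0), transform);
            ROS_INFO("Robot at x=%f y=%f in the map frame",
                     transform.getOrigin().x(), transform.getOrigin().y());
        }
        catch (tf::TransformException& ex)
        {
            ROS_WARN("%s", ex.what());
        }
        rate.sleep();
    }
    return 0;
}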