
Points in a pointcloud and their distance from camera

asked 2012-06-10 02:47:27 -0500 by kameleon

updated 2016-10-24 09:01:50 -0500 by ngrennan

Hi.

I wrote a node that gets the point cloud from the Kinect and converts it from PointCloud2 to a pcl::PointXYZ cloud.

Now, for starters, I wanted to take some middle point in the structure ([319][239] for a 640x480 2D point cloud, I guess) and measure its distance from the camera. I did this by accessing points[76241].z, and likewise for the other two coordinates. But now I see that it doesn't work like that, because in each frame I get a completely different value, not only for the z coordinate but also for x and y, and they should stay fixed because it's the same point in the 2D matrix.

First of all, if I access it that way, am I really looking at the same point in each frame? Or is it just that the origin of the coordinate system is not at the position of the camera? How would I get, for example, that middle point and its distance from the camera?
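
For reference, the conversion and indexing described above look roughly like this (a minimal sketch; the topic name, node name, and callback wiring are assumptions, only fromROSMsg and the linear index follow the question):

    // Minimal sketch of the node described in the question.
    #include <ros/ros.h>
    #include <sensor_msgs/PointCloud2.h>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl_conversions/pcl_conversions.h>

    void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
    {
      pcl::PointCloud<pcl::PointXYZ> cloud;
      pcl::fromROSMsg(*msg, cloud);

      // Intended to be the middle pixel (319, 239) of a 640x480 cloud.
      const pcl::PointXYZ& p = cloud.points[76241];
      ROS_INFO("x=%f y=%f z=%f", p.x, p.y, p.z);
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "middle_point_node");
      ros::NodeHandle nh;
      ros::Subscriber sub = nh.subscribe("/camera/depth/points", 1, cloudCallback);
      ros::spin();
      return 0;
    }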


Comments

Hello. I have the same problem. I want to know the minimum distance from the obstacle to the robot (camera). Any help on this? Thanks.

Astronaut  ( 2013-06-27 19:30:36 -0500 )

Have you managed to get a solution for this problem? If yes, please let me know the solution, as I am facing the same problem.

stark  ( 2015-01-08 06:22:54 -0500 )

Hello, can you explain the logic of picking 76241? (I do realize that it is 319 * 239.) But shouldn't it be (318 * 480) + 238?

skr_robo  ( 2016-07-15 14:19:08 -0500 )

3 Answers


answered 2012-06-10 03:41:52 -0500 by Dan Lazewatsky

updated 2012-06-10 03:55:36 -0500

I've said this many, many times on ROS Answers: point.z IS NOT the distance from the camera. I don't know where this misconception keeps coming from, but how coordinate frames work is clearly laid out in REP 103. To summarize, here's what the fields on a point mean:

  • x forward
  • y left
  • z up

Or in an optical frame

  • z forward
  • x right
  • y down

Note that forward is not the same as depth, and neither is the same as distance: the Euclidean distance of a point from the camera is sqrt(x^2 + y^2 + z^2), of which z (in an optical frame) is only the component along the optical axis.

To answer your question: using linear indexing, there's no guarantee on the ordering of the points, so trying to access the "middle" point like that could just return some arbitrary point. Additionally, since the Kinect is a noisy sensor, even if you are getting the "same" point, all the values will be slightly different each time. If you actually just want the distance from the center of the camera, you should consider using the depth image instead. In the depth image, each pixel represents a distance (in millimetres for the 16UC1 encoding, in metres for 32FC1), and you can index into it as you would into any image.
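
For example, here is a minimal sketch of reading the centre pixel straight from the depth image (the topic name is an assumption and depends on the driver; per REP 118, ROS depth images are 32FC1 in metres or 16UC1 in millimetres):

    // Minimal sketch: read the centre pixel of the depth image.
    #include <cstdint>
    #include <ros/ros.h>
    #include <sensor_msgs/Image.h>
    #include <sensor_msgs/image_encodings.h>
    #include <cv_bridge/cv_bridge.h>

    void depthCallback(const sensor_msgs::ImageConstPtr& msg)
    {
      cv_bridge::CvImageConstPtr cv_ptr = cv_bridge::toCvShare(msg);
      if (msg->encoding == sensor_msgs::image_encodings::TYPE_32FC1) {
        // Canonical ROS depth encoding: metres as float.
        float d = cv_ptr->image.at<float>(239, 319);  // (row, column)
        ROS_INFO("centre depth: %.3f m", d);
      } else if (msg->encoding == sensor_msgs::image_encodings::TYPE_16UC1) {
        // Raw OpenNI/Kinect encoding: millimetres as uint16.
        uint16_t d = cv_ptr->image.at<uint16_t>(239, 319);
        ROS_INFO("centre depth: %u mm", static_cast<unsigned>(d));
      }
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "centre_depth_node");
      ros::NodeHandle nh;
      ros::Subscriber sub = nh.subscribe("camera/depth/image_raw", 1, depthCallback);
      ros::spin();
      return 0;
    }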


Comments

Sorry for a newbie question like this.

Now that you mention it, the depth image seems to be something that I could use for my project, but I'll probably need surface reconstruction later on and so on... but thanks.

kameleon  ( 2012-06-10 05:56:47 -0500 )

No worries. The fact that the question keeps coming up probably means that it needs to be documented better/more visibly.

Dan Lazewatsky  ( 2012-06-10 09:36:51 -0500 )

Sorry for continuing with the noob questions, but what would forward mean then? If I was using an _optical frame, z is forward, right? Isn't that the translation distance relative to the _optical frame? Wouldn't that make it depth?

fersarr  ( 2013-05-04 11:42:32 -0500 )

I think the misconception of point.z being depth might come from PCL/OpenNI having the viewing direction along the z-axis by default.

MartinH  ( 2015-08-18 02:31:58 -0500 )
answered 2016-06-29 17:36:17 -0500 by AbhijithNair
answered 2012-06-10 04:08:19 -0500 by Hansg91

I would also suggest using the depth image instead. Only use the point cloud if you are absolutely sure you need x, y, z coordinates. The point cloud is in fact built from the depth image: the pixel indices and depth values are used to project the points to real-world positions.

In fact, I rarely need an entire point cloud. What I usually do is find the points I need, take their indices and depths, and project only those points. I imagine it gives a slight efficiency boost too :)
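
For illustration, projecting a single pixel is just the pinhole camera model. This is a sketch with placeholder intrinsics (fx, fy, cx, cy would normally come from the depth camera's CameraInfo):

    // Back-project a single depth pixel (u, v) with depth z (metres)
    // into the optical frame using the pinhole model:
    //   x = (u - cx) * z / fx,   y = (v - cy) * z / fy
    #include <cmath>
    #include <cstdio>

    struct Point3 { double x, y, z; };

    Point3 backProject(double u, double v, double z,
                       double fx, double fy, double cx, double cy)
    {
      Point3 p;
      p.x = (u - cx) * z / fx;
      p.y = (v - cy) * z / fy;
      p.z = z;
      return p;
    }

    int main()
    {
      // Placeholder intrinsics, roughly typical for a 640x480 Kinect.
      const double fx = 525.0, fy = 525.0, cx = 319.5, cy = 239.5;
      Point3 p = backProject(319, 239, 1.2, fx, fy, cx, cy);
      std::printf("x=%.3f y=%.3f z=%.3f  distance=%.3f m\n",
                  p.x, p.y, p.z,
                  std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z));
      return 0;
    }

This is essentially the same math that depth_image_proc applies to every pixel when building the full cloud.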


Comments

Are you saying that the x, y, z values in the PointCloud data are just the coordinates of points in a projected image of the real-world scene, not the actual x, y, z coordinates of the real-world point which corresponds to that pixel?

skr_robo  ( 2016-07-15 14:47:35 -0500 )
