
# Understanding /camera/depth/points

What do the values in the 'data' field of the messages published on /camera/depth/points represent? Are they supposed to represent the distance of every pixel from the Kinect's center point? How do I convert the point cloud data into corresponding Cartesian 3D coordinates? An array of size [4915200x1] is generated, which suggests the distance to each pixel is represented using 16 entries in the array (since 480 x 640 x 16 = 4915200). How do I interpret this? Please also explain how rviz visualizes this point cloud data.

I placed my Kinect in front of a plain 2D wall, parallel to it. I expected to get a matrix with a uniform value throughout, since the wall is equidistant from the Kinect, but instead I get data that I cannot understand or decode!



The 'data' field contains all of the point cloud's fields (x, y, z) encoded into that one byte array. Usually you don't read it by hand: use a PCL subscriber that gives you point data with x, y, z coordinates directly.
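To see why the array has 4915200 entries, here is a minimal sketch of the byte layout in plain Python. It assumes the typical Kinect PointCloud2 layout (point_step = 16 bytes per point, with x, y, z stored as little-endian float32 at offsets 0, 4, and 8, followed by 4 bytes of padding); check the `fields`, `point_step`, and `is_bigendian` members of your actual message, since layouts can differ:

```python
import struct

# Assumed layout of one point (check msg.fields / msg.point_step on your cloud):
#   offset 0: x (float32), offset 4: y (float32), offset 8: z (float32),
#   offsets 12-15: padding.
# With point_step = 16, a 640x480 cloud is 640 * 480 * 16 = 4915200 bytes,
# which matches the array size in the question.
POINT_STEP = 16

def unpack_points(data):
    """Yield (x, y, z) tuples from a raw PointCloud2-style data buffer."""
    for offset in range(0, len(data), POINT_STEP):
        yield struct.unpack_from('<fff', data, offset)

# Build a tiny fake two-point buffer to demonstrate (values chosen to be
# exactly representable as float32):
buf = struct.pack('<fff4x', 0.125, -0.25, 1.5) + struct.pack('<fff4x', 0.0, 0.0, 2.0)
print(list(unpack_points(buf)))  # [(0.125, -0.25, 1.5), (0.0, 0.0, 2.0)]
```

In a real node you would not do this manually; in Python, `sensor_msgs.point_cloud2.read_points(msg)` does the same unpacking for you, and in C++ subscribing with `pcl::PointCloud<pcl::PointXYZ>` via pcl_ros hands you typed points directly.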


( 2015-01-29 11:03:10 -0500 )

I have converted the PointCloud2 from the Kinect to a PointCloudXYZ. Are the x, y, z coordinates in meters or millimeters?

( 2015-03-26 00:20:32 -0500 )

Meters.

( 2015-03-26 05:12:35 -0500 )

For the /camera/depth/points topic: does it publish pixel coordinates or world coordinates? I'm a bit confused.

( 2015-04-07 21:56:51 -0500 )

These are x, y, z points expressed in the frame given in the message header (header.frame_id).

( 2015-04-08 03:52:23 -0500 )

My frame_id is camera_depth_optical_frame. Are the x, y, z values world coordinates? Will they give me the depth from the Kinect to objects?

( 2015-04-08 08:43:05 -0500 )

They are very likely relative to the Kinect sensor itself, i.e. expressed in camera_depth_optical_frame rather than in a world frame.
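To make "relative to the sensor" concrete: in an optical frame such as camera_depth_optical_frame, the REP 103 convention is z forward out of the lens, x to the right, y down, so the depth from the Kinect to an object is simply that point's z value. A minimal sketch (plain Python with example numbers; a real system would use tf to transform between frames rather than hard-coding the axis mapping):

```python
# Optical-frame convention (REP 103): z forward, x right, y down.
# Example point on a wall 1.5 m in front of the Kinect, a bit up and left:
x_opt, y_opt, z_opt = -0.10, -0.05, 1.5

# Depth from the Kinect to the object is the optical-frame z:
depth = z_opt

# Converting into the camera *body* frame convention
# (x forward, y left, z up, as used by non-optical ROS frames):
x_body, y_body, z_body = z_opt, -x_opt, -y_opt
print(depth, (x_body, y_body, z_body))  # 1.5 (1.5, 0.1, 0.05)
```

This also explains the wall experiment in the question: a flat wall parallel to the sensor gives a (roughly) constant z across the cloud, while x and y vary per pixel, so the raw byte array looks anything but uniform.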

( 2015-04-08 09:33:21 -0500 )

I still don't get what you mean.

( 2015-04-08 19:39:19 -0500 )