
Difference between depth point cloud and image depth (in their usage)

asked 2016-10-07 15:04:05 -0500

patrchri

updated 2016-10-10 13:41:51 -0500


I have recently started doing image processing with the Kinect v2 (with OpenCV) and I was thinking of adding the third dimension to this processing, but I have a basic question about the usage of the depth point cloud and the depth image. I think I know their differences (the point cloud is a data structure containing (x, y, z) points, while the depth image is an actual image whose pixels encode distance from the sensor), but I cannot understand why someone would use a depth point cloud for computations. Depth images sound much simpler for implementing tasks. I mean, if I detect an object and know its location in the 2 dimensions of an RGB image, then with the right calibration of the sensor, wouldn't it be easy to read its distance from the sensor out of the depth image at the same location where I detected it in the RGB image?
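The lookup described above can be sketched with the standard pinhole camera model: given calibrated intrinsics, a pixel plus its depth value back-projects to a 3D point, and a point cloud is just this computation applied to every pixel. The snippet below is a minimal illustration, not code from the post; the intrinsic values used in the example call are made-up placeholders, not real Kinect v2 calibration.

```python
import numpy as np

def pixel_to_point(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth in metres to a 3D point (x, y, z)
    using the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example with placeholder intrinsics (fx, fy, cx, cy are illustrative only).
# A pixel at the principal point maps straight down the optical axis:
p = pixel_to_point(256.0, 212.0, 1.0, 365.0, 365.0, 256.0, 212.0)
# p is [0.0, 0.0, 1.0]: zero lateral offset, 1 m along the optical axis.
```

Applying `pixel_to_point` over all pixels of a depth image yields exactly the organized point cloud that drivers such as kinect2_bridge publish, which is why the two representations carry the same information once calibration is known.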

Lastly, for what tasks would I use depth point clouds, and what are their advantages compared to, for example, a depth image?

Thanks in advance for your time and your answers,



1 Answer


answered 2016-10-14 05:06:06 -0500

patrchri


