
Difference between depth point cloud and image depth (in their usage)

asked 2016-10-07 15:04:05 -0500

patrchri

updated 2016-10-10 13:41:51 -0500

Hello,

I have recently started doing image processing with the Kinect v2 (with OpenCV), and I am thinking of adding the third dimension to this processing, but I have a basic question about the usage of the depth point cloud versus the depth image. I think I know the difference (the point cloud is a data structure containing (x, y, z) coordinates, while the depth image is an actual image whose pixels encode distance from the sensor), but I cannot understand why someone would use a depth point cloud for computations. Depth images sound much simpler for implementing tasks. I mean, if I detect an object and know its location in the two dimensions of an RGB image, then with the right calibration of the sensor, wouldn't it be easy to read its distance from the sensor out of the depth image at the same location where I detected it in the RGB image?
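To make the idea concrete, here is a minimal sketch of that lookup plus the back-projection to 3D, assuming a pinhole camera model and a depth image already registered (aligned) to the RGB image. The intrinsics fx, fy, cx, cy are placeholder values, not real Kinect v2 calibration; substitute the numbers from your own calibration (in ROS, the CameraInfo topic of the depth camera).

    import numpy as np

    # Placeholder intrinsics -- replace with your own calibration values.
    fx, fy = 365.0, 365.0   # focal lengths in pixels (assumed)
    cx, cy = 256.0, 212.0   # principal point in pixels (assumed)

    def pixel_to_point(u, v, depth_image):
        """Back-project one pixel of a registered depth image to a 3D point.

        depth_image is assumed to be a 2D array of depths in metres; if your
        frames store raw millimetres (common for Kinect v2), divide by 1000
        first. Returns None where the sensor produced no valid reading.
        """
        z = float(depth_image[v, u])       # depth at the detected pixel (row v, column u)
        if z == 0.0 or np.isnan(z):
            return None                    # hole in the depth map
        x = (u - cx) * z / fx              # pinhole back-projection
        y = (v - cy) * z / fy
        return np.array([x, y, z])

Applying this same formula to every valid pixel is exactly how a depth point cloud is generated in the first place, so the two representations carry the same information in different forms; the point cloud just stores it already back-projected into metric 3D.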

Lastly, for what tasks would I use depth point clouds, and what are their advantages compared to, for example, a depth image?

Thanks in advance for your time and answers,

Chris


1 Answer


answered 2016-10-14 05:06:06 -0500

patrchri
