Difference between a depth point cloud and a depth image (in their usage)
Hello,
I have recently started doing image processing with the Kinect v2 (with OpenCV) and I was thinking of adding the third dimension to this processing, but I have a basic question about the usage of depth point clouds versus depth images. I think I know their differences (the point cloud is a data structure containing (x, y, z) coordinates, while the depth image is an actual image whose pixel values encode distance from the sensor), but I cannot understand why someone would use a depth point cloud for computations. Depth images sound much simpler to work with. I mean, if I detect an object and know its location in the two dimensions of an RGB image, then with the right calibration of the sensor, wouldn't it be easy to get its distance from the sensor by reading the depth image at the same location where I detected it in the RGB image? (See the sketch below for what I have in mind.)
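To make the question concrete, here is roughly the lookup I mean, as a minimal Python/NumPy sketch. The intrinsics are placeholder values (not real Kinect v2 calibration), `pixel_to_point` is just a name I made up, and I'm assuming the depth image is already registered to the RGB image:

```python
import numpy as np

# Placeholder pinhole intrinsics -- these are NOT real Kinect v2 values,
# they would come from the sensor's calibration.
FX, FY = 365.0, 365.0   # focal lengths in pixels (placeholder)
CX, CY = 256.0, 212.0   # principal point (placeholder)

def pixel_to_point(depth_image, u, v):
    """Back-project pixel (u, v) of a depth image (uint16, millimetres,
    assumed registered to the RGB image) to a 3D point in metres."""
    z = depth_image[v, u] / 1000.0  # depth value at the detected pixel
    if z == 0:
        return None                 # 0 means "no measurement" on the Kinect
    x = (u - CX) * z / FX           # standard pinhole back-projection
    y = (v - CY) * z / FY
    return np.array([x, y, z])

# e.g. after detecting an object centred at pixel (u, v) in the RGB image:
# point = pixel_to_point(depth, u, v)
```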
Lastly, for what tasks would I use depth point clouds, and what are their advantages compared to, for example, a depth image?
Thanks in advance for your answers and your time,
Chris