How do you calculate depth using point cloud information?

asked 2020-06-18 10:20:20 -0500 by Anonymous

Hello,

I am working with an Intel RealSense D435 camera to calculate the depth of an image. However, I am not clear on how to use the point cloud topic (/camera/depth_registered/points). How do I extract the information and use it in Python code to print out the depth of an object in the image?


Comments

I am not sure exactly what you mean by depth. At first I thought you wanted a depth image, but on rereading it seems you want the depth of the points. If that is the case, and assuming you want the depth referred to the camera's front frame, it should be as simple as taking the magnitude of each point's x, y, z values: sqrt(pow(x,2) + pow(y,2) + pow(z,2)). See the sketch after this comment.

Solrac3589  ( 2020-06-19 00:44:32 -0500 )
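
A minimal sketch of that suggestion in Python, assuming ROS 1 (rospy) and the sensor_msgs point_cloud2 helper; the node name is arbitrary and the topic is the one from the question:

    #!/usr/bin/env python
    # Print the Euclidean distance of every valid point in the cloud,
    # i.e. sqrt(x^2 + y^2 + z^2) measured from the camera frame.
    import math

    import rospy
    from sensor_msgs.msg import PointCloud2
    from sensor_msgs import point_cloud2


    def cloud_callback(msg):
        for x, y, z in point_cloud2.read_points(msg, field_names=("x", "y", "z"),
                                                skip_nans=True):
            rospy.loginfo("point distance: %.3f m", math.sqrt(x**2 + y**2 + z**2))


    if __name__ == "__main__":
        rospy.init_node("point_distance_printer")
        rospy.Subscriber("/camera/depth_registered/points", PointCloud2,
                         cloud_callback, queue_size=1)
        rospy.spin()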

Your question is a little confusing because you have not mentioned what you already have, so I will explain the whole procedure here.

  1. Detect the object in your image.
  2. Project the corresponding lidar/depth-camera point data onto the object's pixels so you have a point cloud for that object.
  3. Average the points, filter them, or take the closest one, whichever you prefer, and then take its Euclidean distance, which is the depth in this case (see the sketch after this comment).
Choco93  ( 2020-06-19 08:24:11 -0500 )
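
As a rough sketch of steps 2 and 3 in Python (assuming the RealSense cloud is organized, so image pixel coordinates can be passed through the uvs argument of read_points; the object pixels below are placeholders for whatever your detector reports):

    import math

    import rospy
    from sensor_msgs.msg import PointCloud2
    from sensor_msgs import point_cloud2

    # Placeholder: a small pixel window around the detected object's centre.
    OBJECT_PIXELS = [(u, v) for u in range(318, 323) for v in range(238, 243)]


    def cloud_callback(msg):
        # Step 2: keep only the cloud points that lie under the object's pixels.
        points = point_cloud2.read_points(msg, field_names=("x", "y", "z"),
                                          skip_nans=True, uvs=OBJECT_PIXELS)
        distances = [math.sqrt(x**2 + y**2 + z**2) for x, y, z in points]
        if distances:
            # Step 3: take the closest point (or average with sum/len instead).
            rospy.loginfo("object depth: %.3f m", min(distances))


    if __name__ == "__main__":
        rospy.init_node("object_depth_from_cloud")
        rospy.Subscriber("/camera/depth_registered/points", PointCloud2,
                         cloud_callback, queue_size=1)
        rospy.spin()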

Sorry for the confusion. The issue is that I am not entirely sure how to use sensor_msgs/Image to retrieve information from the camera to calculate the depth. How do you use this information?

Anonymous  ( 2020-06-22 15:47:31 -0500 )

Take a look here

Choco93  ( 2020-06-25 01:18:19 -0500 )
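
If you would rather read depth from the sensor_msgs/Image stream directly instead of the point cloud, a sketch using cv_bridge could look like this (the topic name and the 16UC1 millimetre encoding are the realsense2_camera defaults, so check yours with rostopic info):

    import rospy
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image

    bridge = CvBridge()


    def depth_callback(msg):
        # Convert the ROS Image to a numpy array without changing its encoding.
        depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
        u, v = 320, 240  # placeholder pixel (column, row) of the object
        # 16UC1 depth is in millimetres, so divide by 1000 to get metres.
        rospy.loginfo("depth at (%d, %d): %.3f m", u, v, depth[v, u] / 1000.0)


    if __name__ == "__main__":
        rospy.init_node("depth_image_reader")
        rospy.Subscriber("/camera/depth/image_rect_raw", Image, depth_callback,
                         queue_size=1)
        rospy.spin()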