Converting ToF depthmaps to pointclouds

asked 2022-01-30 10:46:54 -0500

user2602

Hi All,

I am very new to ROS and am building a system from the ground up to understand the concepts better. I am trying to convert a depth map (received from a Visionary-T time-of-flight camera as a sensor_msgs/Image message) into a point cloud. I loop over the width and height of the image (176x144 px in my case), where each pixel (u, v) holds a depth value Z in meters. I then use the intrinsic camera parameters (c_x, c_y, f_x, f_y) and the pinhole camera model to convert the pixel coordinates (u, v) into camera-frame (X, Y) coordinates:

X = (u - c_x) * Z / f_x

Y = (v - c_y) * Z / f_y

I then save these points into a pcl::PointXYZ cloud. My camera is mounted overhead, looking down at a table with some objects on it. Although the table is flat, when I convert the depth maps to point clouds the table comes out with a convex shape rather than flat.
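For reference, here is a minimal sketch of the conversion loop described above. It uses a plain struct in place of pcl::PointXYZ and made-up intrinsics so it stays self-contained; in a real node the intrinsics would come from the camera's CameraInfo message.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Stand-in for pcl::PointXYZ so the sketch compiles without PCL.
struct PointXYZ { float x, y, z; };

// Pinhole intrinsics; real values come from calibration / CameraInfo.
struct Intrinsics { float fx, fy, cx, cy; };

// Unproject a row-major depth image (Z in meters, 0 = invalid) with the
// pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
std::vector<PointXYZ> depthToCloud(const std::vector<float>& depth,
                                   int width, int height,
                                   const Intrinsics& K) {
    std::vector<PointXYZ> cloud;
    cloud.reserve(depth.size());
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            const float Z = depth[static_cast<std::size_t>(v) * width + u];
            if (Z <= 0.0f) continue;  // skip invalid pixels
            PointXYZ p;
            p.x = (u - K.cx) * Z / K.fx;
            p.y = (v - K.cy) * Z / K.fy;
            p.z = Z;
            cloud.push_back(p);
        }
    }
    return cloud;
}
```

Note this assumes the depth value is the perpendicular distance Z along the optical axis; if the sensor reports radial distance instead, the same loop would bow a flat surface.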

Can someone please suggest what could cause this convex shape and how I can rectify it?


Comments

Hi @user2602. This question is more about point-cloud conversion in general, so I suggest you extend it to describe what specifically you are trying to do in ROS.

Having said that, it's hard to say what's causing the distortion in your case. What you are seeing might be due to ambient illumination, which can distort the depth readings on uniformly colored objects. You could lower the confidence threshold to discard points with a poor match.

osilva ( 2022-01-31 06:50:33 -0500 )

Aren't there nodelets which implement this kind of conversion? What's the rationale for implementing it again?

gvdhoorn ( 2022-01-31 07:25:57 -0500 )

Are you sure you need to apply the lens model? The camera may have already done it for you.

Mike Scheutzow ( 2022-02-01 19:47:00 -0500 )