
Convert pointcloud 3D point to image pixel

asked 2018-07-30 06:26:12 -0500 by rbsb

Hello there. I have a rosbag that contains a camera footage topic (sensor_msgs/Image) of a room and a laser scan topic of the same room. I also have a program that converts this laser scan to a point cloud. What I'm trying to do is write a program that takes a point in the point cloud (x, y, z values) as input and finds the pixel coordinates of this point in the camera image (the horizontal coordinate, i.e. the width pixel, matters most to me). The specs of the camera are known, and so are the angle_min, angle_max, angle_increment, range_min and range_max of the laser scan. I'm a bit lost as to how to calculate this. Can anyone help me? Thanks in advance.


1 Answer


answered 2018-07-30 07:28:25 -0500 by PeteBlackerThe3rd

When you say "the specs of the camera are known", what exactly is known?

This is a fairly straightforward transformation to do if you have the right information to hand. You'll need the intrinsic (lens) calibration parameters of the camera to transform a 3D point in camera space into a 2D point in image space. You'll also need the extrinsic transform of the camera: in this case, the relative pose of the LiDAR frame and the camera frame, which is used to transform the 3D LiDAR points into the camera frame.
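For reference, here is a minimal sketch of the underlying math, assuming a simple pinhole model with no lens distortion (T is the LiDAR-to-camera extrinsic; f_x, f_y, c_x, c_y are the intrinsics you get from calibration):

    % Transform the LiDAR point into the camera optical frame,
    % then project with the pinhole intrinsics.
    \mathbf{p}_{\text{cam}} = T\,\mathbf{p}_{\text{lidar}}, \qquad
    u = f_x \frac{X_{\text{cam}}}{Z_{\text{cam}}} + c_x, \qquad
    v = f_y \frac{Y_{\text{cam}}}{Z_{\text{cam}}} + c_y

Here (X_cam, Y_cam, Z_cam) are the components of p_cam, and (u, v) is the pixel; u is the horizontal (width) coordinate you're after.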

You can determine the calibration parameters of the camera using this guide.

If your LiDAR and camera are rigidly fixed together then you could measure by hand the extrinsic transform for a rough approximation. Note that the orientation will need to be as precise as possible here. Once you have this transform you can set it up as a static transform publisher following this tutorial; ROS can then transform the points into camera space for you (a minimal sketch of this step follows). Note the different coordinate conventions of geometric and optical frames described here: optical frames use z forward, x right and y down, while body frames use x forward, y left and z up.
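A rough sketch of that step, assuming ROS1 with tf2; the frame names 'laser' and 'camera_color_optical_frame' are made up, so substitute your own:

    #!/usr/bin/env python
    # Minimal sketch: transform a LiDAR point into the camera optical
    # frame with tf2. Assumes a static transform between the frames is
    # already being published, e.g.:
    #   rosrun tf2_ros static_transform_publisher x y z yaw pitch roll laser camera_link
    import rospy
    import tf2_ros
    import tf2_geometry_msgs  # registers PointStamped with tf2
    from geometry_msgs.msg import PointStamped

    rospy.init_node('point_to_camera_frame')
    tf_buffer = tf2_ros.Buffer()
    listener = tf2_ros.TransformListener(tf_buffer)  # keep a reference alive

    pt = PointStamped()
    pt.header.frame_id = 'laser'     # assumed LiDAR frame name
    pt.header.stamp = rospy.Time(0)  # use the latest available transform
    pt.point.x, pt.point.y, pt.point.z = 1.0, 0.5, 0.0  # example point

    # Transform into the (assumed) camera optical frame
    pt_cam = tf_buffer.transform(pt, 'camera_color_optical_frame',
                                 rospy.Duration(1.0))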

Finally, you can use the OpenCV camera calibration module to do the final transformation from 3D points in camera space to 2D points in image space. The projectPoints function is what you'll use here.
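A minimal sketch of that projection step; the intrinsics below are made-up numbers standing in for your actual calibration results:

    import numpy as np
    import cv2

    # Made-up intrinsics -- replace with the values from your calibration
    K = np.array([[570.0,   0.0, 320.0],
                  [  0.0, 570.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    dist = np.zeros(5)  # assume negligible lens distortion for this sketch

    # A 3D point already expressed in the camera optical frame (z forward)
    pt_cam = np.array([[0.5, 0.1, 2.0]])

    # rvec/tvec are zero because the point is already in the camera frame
    pixels, _ = cv2.projectPoints(pt_cam, np.zeros(3), np.zeros(3), K, dist)
    u, v = pixels[0, 0]  # u is the horizontal (width) pixel coordinate
    print(u, v)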

Hope this gets you going in the right direction.


Comments

Thank you very much for replying. The camera used is an Orbbec Astra Pro. I should have also mentioned that I'm very new to ROS and robotics in general. I'm also having trouble understanding how the x, y, z coordinate system for point clouds works (where is (0, 0, 0), for example?).

rbsb (2018-07-30 08:13:35 -0500)

I have to say you've chosen a challenging task to start with if you're new to ROS. Have you tried any of the steps I've mentioned above? The coordinate system for the LiDAR point cloud will have its origin at the centre of the sensor itself (x forward, y left, z up, per the ROS convention in REP 103).

PeteBlackerThe3rd (2018-07-30 08:38:52 -0500)

Could you please elaborate on the "you could measure by hand the extrinsic transform for a rough approximation" part?

rbsb (2018-07-31 03:40:52 -0500)

The extrinsic transform between the LiDAR and the camera is the XYZ position of the camera in the LiDAR frame plus the roll, pitch and yaw of the camera in the LiDAR frame. If these sensors are rigidly attached to your robot then you should be able to roughly measure them by hand; a sketch of publishing such a transform follows.
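For example, a hand-measured extrinsic could be published like this (all numbers and frame names below are made up; a static_transform_publisher in a launch file works just as well):

    import rospy
    import tf2_ros
    from geometry_msgs.msg import TransformStamped
    from tf.transformations import quaternion_from_euler

    rospy.init_node('camera_extrinsics')
    br = tf2_ros.StaticTransformBroadcaster()

    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = 'laser'       # assumed LiDAR frame name
    t.child_frame_id = 'camera_link'  # assumed camera frame name

    # Hand-measured offsets: e.g. camera 5 cm forward, 10 cm above the LiDAR
    t.transform.translation.x = 0.05
    t.transform.translation.y = 0.0
    t.transform.translation.z = 0.10

    # Hand-measured roll, pitch, yaw (radians) -- as precise as you can get
    q = quaternion_from_euler(0.0, 0.0, 0.0)
    (t.transform.rotation.x, t.transform.rotation.y,
     t.transform.rotation.z, t.transform.rotation.w) = q

    br.sendTransform(t)
    rospy.spin()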

PeteBlackerThe3rd (2018-07-31 08:09:45 -0500)
