# How to project PointCloud2 points onto the image plane with known transformations

Hello everyone,

as the title says, I have a PointCloud2 topic and I am trying to project every point onto an Image so I can extract the RGB values. I also have a known tf and the camera_info topic that corresponds to the Image topic. Any ideas?
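For reference, here is roughly how I'm pulling the xyz points out of the cloud (a minimal sketch, assuming ROS 1; `cloud_msg` is the incoming PointCloud2 message):

    import numpy as np
    from sensor_msgs import point_cloud2

    def cloud_to_xyz(cloud_msg):
        # Iterate over the x, y, z fields, skipping NaN returns.
        return [np.array(p) for p in point_cloud2.read_points(
            cloud_msg, field_names=('x', 'y', 'z'), skip_nans=True)]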

Frame Selection

First of all, I have 4 cameras, so it is crucial to choose the best camera frame to project each point into. For that, I've implemented this:

    import numpy as np
    import image_geometry
    from scipy.spatial.transform import Rotation as R

    def choose_frame(info_msgs, lidar_point, selected_cameras=(0, 1, 2, 3), tf_data=None):
        if tf_data is None:
            tf_data = read_tf_yaml()  # my helper that loads the static transforms from a yaml file

        computed_angles = []

        for camera in selected_cameras:
            # Build the pinhole model and define the 3D ray through the principal point.
            camera_object = image_geometry.PinholeCameraModel()
            info_msg = info_msgs[camera]
            camera_object.fromCameraInfo(info_msg)
            principal_ray = camera_object.projectPixelTo3dRay((camera_object.cx(), camera_object.cy()))

            # The camera's projection center, expressed in the LiDAR's coordinate system.
            camera_xyz = np.array(tf_data[f'cam_{camera}'][:3])

            # A helper point that lies on the principal ray, also expressed in the LiDAR's coordinate system.
            rot_quat = np.array(tf_data[f'cam_{camera}'][3:])
            rotation = R.from_quat(rot_quat)
            camera_helper_xyz = camera_xyz + rotation.apply(principal_ray)

            # The LiDAR point.
            point = np.asarray(lidar_point)

            # Angle at the projection center between the principal ray and the ray to the point.
            ba = camera_helper_xyz - camera_xyz
            bc = point - camera_xyz

            cosine_angle = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
            angle = np.arccos(cosine_angle)
            computed_angles.append(np.degrees(angle))

        index_min = np.argmin(computed_angles)
        return index_min


What this does is, for each camera, compute the angle between the LiDAR point, the camera's projection center, and a point that lies on the principal axis (the ray from the projection center through the principal point). The camera with the smallest angle is the one selected to project the LiDAR point onto. Besides being badly written, there probably is a better way to select the best camera for each point.
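For what it's worth, I suspect the same selection can be done more directly by rotating each LiDAR point into the camera's optical frame and measuring its angle to the optical (+Z) axis. A minimal sketch, assuming `tf_data` has the layout above (xyz translation + xyzw quaternion) and stores each camera's pose in the LiDAR frame:

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def choose_frame_simple(lidar_point, tf_data, selected_cameras=(0, 1, 2, 3)):
        best_camera, best_cos = None, -1.0
        for camera in selected_cameras:
            t = np.array(tf_data[f'cam_{camera}'][:3])
            rot = R.from_quat(tf_data[f'cam_{camera}'][3:])
            # Express the point in the camera's optical frame (inverse of the camera pose).
            p_cam = rot.inv().apply(np.asarray(lidar_point) - t)
            # Cosine of the angle to the +Z (optical) axis; larger is better.
            cos_angle = p_cam[2] / np.linalg.norm(p_cam)
            if cos_angle > best_cos:
                best_camera, best_cos = camera, cos_angle
        return best_camera

A side benefit is that points behind a camera get a negative cosine, so they are naturally rejected.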

Now, let me show you how I've approached the LiDAR-point-to-image projection.

Method - 1

This simplistic way of projecting the LiDAR point onto an image uses the image_geometry package, but it does not use any of the information I have from the external calibration (the tf tree).

    import image_geometry
    from cv_bridge import CvBridge

    def extract_rgb(lidar_point, image, camera_info):
        bridge = CvBridge()
        camera_object = image_geometry.PinholeCameraModel()

        camera_object.fromCameraInfo(camera_info)
        u, v = camera_object.project3dToPixel(lidar_point)
        u_rectified, v_rectified = camera_object.rectifyPoint((u, v))
        cv_image = bridge.imgmsg_to_cv2(image, desired_encoding='passthrough')
        # OpenCV images are indexed [row, column], i.e. [v, u].
        rgb = cv_image[int(v_rectified), int(u_rectified)].tolist()
        return rgb
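Since this ignores the extrinsics entirely, I assume the missing step is that project3dToPixel expects the point already expressed in the camera's optical frame, so I'd first have to transform it. A sketch of what I mean (`lidar_to_camera` is a hypothetical helper, and I'm assuming the yaml stores each camera's pose in the LiDAR frame):

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def lidar_to_camera(lidar_point, camera_index, tf_data):
        # tf_data[f'cam_{i}'] = [tx, ty, tz, qx, qy, qz, qw] (assumed camera pose in LiDAR frame).
        t = np.array(tf_data[f'cam_{camera_index}'][:3])
        rot = R.from_quat(tf_data[f'cam_{camera_index}'][3:])
        # p_cam = R^T (p_lidar - t): map the LiDAR point into camera coordinates.
        return rot.inv().apply(np.asarray(lidar_point) - t)

    # Usage: project the transformed point instead of the raw LiDAR point.
    # u, v = camera_object.project3dToPixel(lidar_to_camera(lidar_point, i, tf_data))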


Method - 2

This method is slightly more sophisticated; now I use the tf information, but the results are still not correct.

    import cv2
    import numpy as np
    import image_geometry
    from cv_bridge import CvBridge

    def extract_rgb_1(lidar_point, image, camera_info, selected_camera_index, tf_data):
        bridge = CvBridge()
        camera_object = image_geometry.PinholeCameraModel()

        # quat_to_rodrigues is my helper that converts the tf quaternion
        # into an OpenCV Rodrigues rotation vector.
        rodrigues = quat_to_rodrigues(tf_data[f'cam_{selected_camera_index}'][3:])
        rodrigues_rotation_vector = rodrigues[0]

        t_vec = tf_data[f'cam_{selected_camera_index}'][:3]

        camera_object.fromCameraInfo(camera_info)
        cv_image = bridge.imgmsg_to_cv2(image, desired_encoding='passthrough')

        image_points, _ = cv2.projectPoints(
            objectPoints=np.array(lidar_point, dtype=np.float64).reshape(1, 1, 3),
            rvec=np.asarray(rodrigues_rotation_vector, dtype=np.float64),
            tvec=np.array(t_vec, dtype=np.float64),
            cameraMatrix=np.asarray(camera_object.intrinsicMatrix()),
            distCoeffs=np.asarray(camera_object.distortionCoeffs()),
        )

        point_uv = image_points[0][0]

        # Pixel indices must stay strictly inside the image bounds.
        if 0 <= point_uv[0] < camera_info.width and 0 <= point_uv[1] < camera_info.height:
            rgb = cv_image[int(point_uv[1]), int(point_uv[0])].tolist()
            return rgb
        return None
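The same frame-direction question as in Method 1 applies here: cv2.projectPoints wants an rvec/tvec pair that maps LiDAR-frame points into the camera frame, so if the yaml stores the camera pose in the LiDAR frame (as I assumed above), I probably need to invert it before projecting, something like:

    import cv2
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def invert_extrinsic(tf_entry):
        # tf_entry = [tx, ty, tz, qx, qy, qz, qw] (assumed camera pose in LiDAR frame).
        t = np.array(tf_entry[:3])
        rot_inv = R.from_quat(tf_entry[3:]).inv()
        rvec, _ = cv2.Rodrigues(rot_inv.as_matrix())  # rotation LiDAR -> camera
        tvec = -rot_inv.apply(t)                      # translation LiDAR -> camera
        return rvec, tvec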