You need to follow the traditional point-to-pixel projection to map LIDAR points onto your RGB image. From there you can assign RGB values to the point cloud and end up with a cloud that has XYZRGB fields. Once you are able to do this projection, pretty much all of the "fusion" tasks become straightforward. Assuming the calibration parameters you have are correct, you first transform each point from the LIDAR frame to the camera frame using the extrinsic parameters, and then project that point from the camera frame onto the image plane using the intrinsic parameters. In code it will look something like below: first know the sizes and types of your matrices, then feed your calibration parameters into them.
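Written out, the chain is pixel = P_cam_to_image * T_lidar_to_cam * point_lidar, where point_lidar is the homogeneous point [X, Y, Z, 1]^T; dividing the first two rows of the result by the third (the point's depth in the camera frame) gives the (u, v) pixel coordinates. That is exactly what the matrices below encode.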
#include <Eigen/Dense>

// camera projection matrix P (intrinsics), 3x4
Eigen::MatrixXf TRANS_CAM_TO_IMAGE;
// LIDAR to camera transform (extrinsics), 4x4
Eigen::MatrixXf TRANS_LIDAR_TO_CAM;
// combined LIDAR to image projection, 3x4
Eigen::MatrixXf TRANS_LIDAR_TO_IMAGE;

TRANS_CAM_TO_IMAGE = Eigen::MatrixXf::Zero(3, 4);
TRANS_LIDAR_TO_CAM = Eigen::MatrixXf::Zero(4, 4);

TRANS_CAM_TO_IMAGE << 612.3624267578125, 0.0, 422.271484375, 0.0,
                      0.0, 610.940185546875, 241.3871612548828, 0.0,
                      0.0, 0.0, 1.0, 0.0;

TRANS_LIDAR_TO_CAM << 0.9999990463256836, -0.0005898026865907013, 0.0012526829959824681, 0.015111126005649567,
                      0.0005937033565714955, 0.9999949932098389, -0.003115807892754674, 0.00044604094000533223,
                      -0.0012508389772847295, 0.003116548527032137, 0.9999943375587463, -0.000181241164682433,
                      0, 0, 0, 1;

TRANS_LIDAR_TO_IMAGE = TRANS_CAM_TO_IMAGE * TRANS_LIDAR_TO_CAM;
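If you want a quick sanity check before running this on a whole cloud, you can push a single point through the combined matrix (the point value used here is just an arbitrary illustration):

// project one homogeneous LIDAR point and recover its pixel coordinates
Eigen::Vector4f point_in_lidar_frame(1.0f, 2.0f, 5.0f, 1.0f); // arbitrary test point, homogeneous
Eigen::VectorXf projected = TRANS_LIDAR_TO_IMAGE * point_in_lidar_frame;
float depth = projected(2);     // depth in the camera frame, must be > 0 for the point to be visible
float u = projected(0) / depth; // pixel column
float v = projected(1) / depth; // pixel row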
And a function that takes in the point cloud from your LIDAR and the result of the matrix multiplication above:
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Projects a LIDAR cloud into the image plane.
// Returns a 3xN matrix: rows 0 and 1 hold the pixel coordinates (u, v), row 2 holds the depth in the camera frame.
Eigen::MatrixXf transformLidarToImage(pcl::PointCloud<pcl::PointXYZRGB>::Ptr in_cloud, const Eigen::MatrixXf &TRANS_LIDAR_TO_IMAGE)
{
    Eigen::MatrixXf matrix_lidar_points_in_lidar_frame = Eigen::MatrixXf::Zero(4, in_cloud->size());

    // feed the cloud points into a matrix of homogeneous coordinates (one column per point)
    for (int i = 0; i < static_cast<int>(in_cloud->size()); ++i) {
        matrix_lidar_points_in_lidar_frame(0, i) = in_cloud->points[i].x;
        matrix_lidar_points_in_lidar_frame(1, i) = in_cloud->points[i].y;
        matrix_lidar_points_in_lidar_frame(2, i) = in_cloud->points[i].z;
        matrix_lidar_points_in_lidar_frame(3, i) = 1;
    }

    /*************** PROJECT POINT CLOUD TO IMAGE PLANE ****************/
    Eigen::MatrixXf image_points = TRANS_LIDAR_TO_IMAGE * matrix_lidar_points_in_lidar_frame;
    Eigen::MatrixXf uv = Eigen::MatrixXf::Zero(3, matrix_lidar_points_in_lidar_frame.cols());
    uv.row(0) = image_points.row(0).array() / image_points.row(2).array();
    uv.row(1) = image_points.row(1).array() / image_points.row(2).array();
    uv.row(2) = image_points.row(2);

    return uv;
}
You may use a function like the one above to do the transform; I also listed the matrices that are involved, so you can just feed your calibration results into them and call the function.
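For example (in_cloud here stands for whatever cloud your LIDAR driver or subscriber hands you):

// project the whole cloud; uv ends up with one column per point
Eigen::MatrixXf uv = transformLidarToImage(in_cloud, TRANS_LIDAR_TO_IMAGE);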
The return value of that function is a matrix holding the pixel coordinates of the whole point cloud. Remember that since you have a 360 degree cloud, not all projected points will make sense (you may get negative values as well), so you need a few checks to keep only the points that actually correspond to image coordinates. Assuming you used the above function somewhere in your code and now have the returned uv, you may use a for loop like the one below to do the fusion:
pcl::PointCloud<pcl::PointXYZRGB>::Ptr fused_cloud(new pcl::PointCloud<pcl::PointXYZRGB>);
for (int m = 0; m < uv.cols(); m++) {
    cv::Point point;
    point.x = uv(0, m);
    point.y = uv(1, m);
    // store points in pixel coordinates only if they fall on the image plane
    if (point.x > 0 && ...
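The loop above gets cut off at the bounds check. A minimal sketch of how it might continue is below; the cv::Mat named image (the camera frame the cloud was projected into) and the use of its cols/rows for the bounds are assumptions on my part, and the check on row 2 of uv discards points behind the camera:

#include <opencv2/core.hpp>

// sketch of the full fusion loop, assuming `image` is the 8-bit BGR camera frame
// and `uv` / `in_cloud` come from transformLidarToImage() above
pcl::PointCloud<pcl::PointXYZRGB>::Ptr fused_cloud(new pcl::PointCloud<pcl::PointXYZRGB>);
for (int m = 0; m < uv.cols(); m++) {
    cv::Point point;
    point.x = static_cast<int>(uv(0, m));
    point.y = static_cast<int>(uv(1, m));
    // keep only points that land inside the image and lie in front of the camera
    if (uv(2, m) > 0 &&
        point.x >= 0 && point.x < image.cols &&
        point.y >= 0 && point.y < image.rows) {
        pcl::PointXYZRGB colored_point = in_cloud->points[m];
        cv::Vec3b color = image.at<cv::Vec3b>(point.y, point.x); // BGR pixel under the projected point
        colored_point.r = color[2];
        colored_point.g = color[1];
        colored_point.b = color[0];
        fused_cloud->push_back(colored_point);
    }
}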