
Sensor fusion after intrinsic and extrinsic calibration - Autoware

asked 2020-02-10 17:16:30 -0600 by msaeUF
updated 2022-01-22 16:10:19 -0600 by Evgeny

Hello! I am running Autoware v1.12 on an NVIDIA Jetson AGX with JetPack 4.2. I am currently using live sensors: a VLP-16 lidar and a FLIR ADK thermal camera. I was able to get the intrinsic and extrinsic calibration by following the directions at: https://autoware.readthedocs.io/en/fe...

However, I now do not know how to overlay the data and get fusion.

I run the calibration publisher using the extrinsic YAML file and the generic fields that Autoware already provides for target_frame, camera_frame, camera_info_topic_name, etc., and the node runs and then exits - is that how it is supposed to be?

I used the Pixel-Cloud Fusion node (https://autoware.readthedocs.io/en/fe...), but it keeps saying that it is waiting for the intrinsic file and that the target frame does not exist. Am I supposed to do something else as well? The directions ask for a "rectified_image" - how do I produce that?

Following the steps from https://youtu.be/pfBmfgHf6zg?t=501 at time 8:20 does not let me view both the lidar and camera data; it only shows one or the other depending on which frame I pick (i.e. velodyne or boson-camera), and the camera feed does not have any overlay data either.

I would really appreciate it if you could point me to the next steps I could take in order to overlay the data from each sensor and get "sensor fusion". I am very new to this, but I am trying to learn and document things properly.

Thank you!!


2 Answers


answered 2020-02-11 14:06:31 -0600 by Josh Whitley

@msaeUF The pixel_cloud_fusion node is the correct node for this function. You first need to use the calibration_publisher package to publish the camera calibration based on the lidar/camera calibration file that you created. Here is an example command to run the calibration_publisher using a launch file from runtime_manager:

roslaunch runtime_manager calibration_publisher.launch camera_frame:=/camera_frame image_topic_src:=/camera/image_raw file:=/path/to/your/lidar_camera_cal.yaml camera_info_topic:=/camera/camera_info

In the above command, replace /camera_frame with the TF frame your camera publishes images in, replace /camera/image_raw with the topic name of the raw image from your camera, replace /path/to/your/lidar_camera_cal.yaml with the full path to the lidar/camera calibration YAML file produced by autoware_camera_lidar_calibration, and replace /camera/camera_info with the topic on which you would like the calibration_publisher to publish the calibration info.

Once this is running, you can use the pixel_cloud_fusion.launch file to launch both an image_rectifier node (which will generate the rectified image topic) and the pixel_cloud_fusion node. You'll have to remap the topics in this launch file to the appropriate topics for your camera and calibration.


Comments

Thanks a lot!!!

msaeUF (2020-02-11 19:12:08 -0600)

answered 2020-02-10 19:36:09 -0600 by Fetullah Atas
updated 2020-02-10 19:41:00 -0600

You need to follow the traditional point-to-pixel transform to project the lidar points onto your RGB image. From there you can assign RGB values to the point cloud and obtain a cloud with XYZRGB fields. Once you are able to do this projection, pretty much all of the "fusion" tasks become straightforward.

Assuming the calibration parameters you got are right: first transform each point from the lidar frame to the camera frame using the extrinsic parameters, then project that point in the camera frame onto the image plane using the intrinsic parameters. In code it will look something like the following. First, know the sizes and types of your matrices, and then feed your calibration parameters into them:

#include <Eigen/Dense>

// Camera projection (intrinsic) matrix P: 3x4
Eigen::MatrixXf TRANS_CAM_TO_IMAGE = Eigen::MatrixXf::Zero(3, 4);
// Lidar-to-camera (extrinsic) transform: 4x4
Eigen::MatrixXf TRANS_LIDAR_TO_CAM = Eigen::MatrixXf::Zero(4, 4);
// Combined lidar-to-image projection: 3x4
Eigen::MatrixXf TRANS_LIDAR_TO_IMAGE;

// Replace these example values with your own calibration results.
TRANS_CAM_TO_IMAGE << 612.3624267578125, 0.0, 422.271484375, 0.0,
                      0.0, 610.940185546875, 241.3871612548828, 0.0,
                      0.0, 0.0, 1.0, 0.0;

TRANS_LIDAR_TO_CAM << 0.9999990463256836, -0.0005898026865907013, 0.0012526829959824681, 0.015111126005649567,
                      0.0005937033565714955, 0.9999949932098389, -0.003115807892754674, 0.00044604094000533223,
                      -0.0012508389772847295, 0.003116548527032137, 0.9999943375587463, -0.000181241164682433,
                      0.0, 0.0, 0.0, 1.0;

TRANS_LIDAR_TO_IMAGE = TRANS_CAM_TO_IMAGE * TRANS_LIDAR_TO_CAM;
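
To make the chain explicit, projecting a single point with these matrices amounts to something like the following minimal sketch (the function name projectPoint is just for illustration; P and T stand for TRANS_CAM_TO_IMAGE and TRANS_LIDAR_TO_CAM above):

#include <Eigen/Dense>

// Project one lidar point (x, y, z) to pixel coordinates, given the 3x4 camera
// projection matrix P and the 4x4 lidar-to-camera extrinsic T.
Eigen::Vector2f projectPoint(float x, float y, float z,
                             const Eigen::MatrixXf &P, const Eigen::MatrixXf &T)
{
    Eigen::Vector4f X(x, y, z, 1.0f);        // homogeneous lidar point
    Eigen::Vector3f uvw = P * T * X;         // into the camera frame, then onto the image plane
    return Eigen::Vector2f(uvw(0) / uvw(2),  // pixel column u
                           uvw(1) / uvw(2)); // pixel row v
}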

And here is a function that takes the point cloud from your lidar and the result of the matrix multiplication above:

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Project every point of the input cloud onto the image plane.
// Returns a 3xN matrix: row 0 = u (pixel column), row 1 = v (pixel row), row 2 = depth in the camera frame.
Eigen::MatrixXf transformLidarToImage(pcl::PointCloud<pcl::PointXYZRGB>::Ptr in_cloud,
                                      const Eigen::MatrixXf &TRANS_LIDAR_TO_IMAGE)
{
    // Feed the cloud points into a 4xN matrix of homogeneous coordinates
    Eigen::MatrixXf matrix_lidar_points_in_lidar_frame = Eigen::MatrixXf::Zero(4, in_cloud->size());
    for (int i = 0; i < static_cast<int>(in_cloud->size()); ++i) {
        matrix_lidar_points_in_lidar_frame(0, i) = in_cloud->points[i].x;
        matrix_lidar_points_in_lidar_frame(1, i) = in_cloud->points[i].y;
        matrix_lidar_points_in_lidar_frame(2, i) = in_cloud->points[i].z;
        matrix_lidar_points_in_lidar_frame(3, i) = 1.0f;
    }

    /*************** PROJECT POINT CLOUD TO IMAGE PLANE ****************/

    Eigen::MatrixXf image_points = TRANS_LIDAR_TO_IMAGE * matrix_lidar_points_in_lidar_frame;
    Eigen::MatrixXf uv = Eigen::MatrixXf::Zero(3, matrix_lidar_points_in_lidar_frame.cols());
    uv.row(0) = image_points.row(0).array() / image_points.row(2).array();  // u
    uv.row(1) = image_points.row(1).array() / image_points.row(2).array();  // v
    uv.row(2) = image_points.row(2);                                        // depth
    return uv;
}
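
For example, a sketch of calling it (here cloud stands for your incoming lidar cloud, and TRANS_LIDAR_TO_IMAGE is the matrix built above):

// Each column of uv holds (u, v, depth) for the corresponding cloud point.
Eigen::MatrixXf uv = transformLidarToImage(cloud, TRANS_LIDAR_TO_IMAGE);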

You can use a function like the one above to do the transform; I also listed the matrices that are involved, so you can just feed your calibration results into them and call the function. The return value is a matrix holding the pixel coordinates of the whole point cloud. Remember that since you have a 360-degree cloud, not all projected points will make sense, and you may get some negative values as well, so you need to do a few checks to keep only the points that actually correspond to image coordinates. Assuming you have used the above function somewhere in your code and now have uv returned, you can use a for loop like the one below to do the fusion:

pcl::PointCloud<pcl::PointXYZRGB>::Ptr fused_cloud(new pcl::PointCloud<pcl::PointXYZRGB>);
for (int m = 0; m < uv.cols(); m++) {
    cv::Point point;
    point.x = uv(0, m);
    point.y = uv(1, m);

    // Keep projected points only if they fall on the image plane
    if (point.x > 0 && ...
(more)
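
The rest of that answer is cut off above. Purely as an illustration of how such a loop might continue (this is a sketch, not the original author's code; image is assumed to be the rectified camera image as a BGR cv::Mat, and in_cloud the same cloud that was passed to transformLidarToImage):

#include <opencv2/core.hpp>

// image, in_cloud and uv are assumed to be in scope, as described above.
pcl::PointCloud<pcl::PointXYZRGB>::Ptr fused_cloud(new pcl::PointCloud<pcl::PointXYZRGB>);
for (int m = 0; m < uv.cols(); m++) {
    const int px = static_cast<int>(uv(0, m));
    const int py = static_cast<int>(uv(1, m));

    // Keep only points in front of the camera that project inside the image
    if (uv(2, m) > 0.0f &&
        px >= 0 && px < image.cols &&
        py >= 0 && py < image.rows) {
        pcl::PointXYZRGB colored = in_cloud->points[m];     // keep the original XYZ
        const cv::Vec3b bgr = image.at<cv::Vec3b>(py, px);  // pixel at (row, column)
        colored.b = bgr[0];
        colored.g = bgr[1];
        colored.r = bgr[2];
        fused_cloud->points.push_back(colored);
    }
}
fused_cloud->width = fused_cloud->points.size();  // unorganized cloud
fused_cloud->height = 1;

You could then convert fused_cloud to a sensor_msgs/PointCloud2 (e.g. with pcl::toROSMsg) and publish it to view the colored cloud in RViz.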

Comments

I thought Autoware would already have a node that could do sensor fusion.

msaeUF (2020-02-10 20:00:25 -0600)

Well, they might, but I am not sure.

Fetullah Atas (2020-02-10 20:14:07 -0600)

Even if they do, it must be something that looks similar to what I just wrote :)

Fetullah Atas (2020-02-10 20:14:59 -0600)

Thank you!

msaeUF (2020-02-15 14:47:06 -0600)
