Robotics StackExchange | Archived questions

Overlay depth from LiDAR onto RGB data

Hi,

I am using ROS1 Noetic and YOLOv3 (darknet_ros) to detect objects in the camera image stream. Now I would like to acquire depth information for at least one pixel of each bounding box (ideally the centre pixel), not necessarily for all of them.

For this, I calibrated the extrinsic parameters between my LiDAR and camera using this.

Now to my questions: Is there a package that creates a depth image from an organized PointCloud2 and RGB data? Or do I need to use tf myself to transform the PointCloud into the camera frame and project it onto the image plane (see the sketch below)? I know that OctoMap provides a raycasting method, but I would like to stick with LIO-SAM since I managed to achieve better results with it.
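
To clarify what I mean by projecting the cloud myself, here is a rough sketch of the kind of node I have in mind, using tf2, image_geometry and the pinhole model. The topic names and the bounding-box centre pixel are placeholders for my setup, and it assumes the CameraInfo frame_id is the camera optical frame (z pointing forward):

```python
#!/usr/bin/env python3
# Sketch: look up the depth of the LiDAR point that projects closest to the
# bounding-box centre pixel. Topic names / centre pixel are placeholders.
import numpy as np
import rospy
import tf2_ros
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2, CameraInfo
from image_geometry import PinholeCameraModel
from tf.transformations import quaternion_matrix

cam_model = PinholeCameraModel()

def camera_info_cb(msg):
    cam_model.fromCameraInfo(msg)

def cloud_cb(cloud):
    if cam_model.tfFrame() is None:
        return  # no CameraInfo received yet
    # Extrinsics: LiDAR frame -> camera optical frame, as a 4x4 matrix
    tf_msg = tf_buffer.lookup_transform(cam_model.tfFrame(), cloud.header.frame_id,
                                        cloud.header.stamp, rospy.Duration(0.2))
    t, q = tf_msg.transform.translation, tf_msg.transform.rotation
    T = quaternion_matrix([q.x, q.y, q.z, q.w])
    T[0:3, 3] = [t.x, t.y, t.z]

    u_c, v_c = 320, 240                  # centre pixel of the YOLO bounding box (placeholder)
    best_depth, best_dist = None, float('inf')
    for x, y, z in pc2.read_points(cloud, field_names=('x', 'y', 'z'), skip_nans=True):
        xc, yc, zc, _ = T.dot((x, y, z, 1.0))   # point in camera optical frame
        if zc <= 0.0:                    # point is behind the camera
            continue
        u, v = cam_model.project3dToPixel((xc, yc, zc))
        dist = (u - u_c) ** 2 + (v - v_c) ** 2
        if dist < best_dist:
            best_depth, best_dist = zc, dist
    rospy.loginfo('depth at bounding-box centre: %s m', best_depth)

rospy.init_node('lidar_depth_lookup')
tf_buffer = tf2_ros.Buffer()
tf2_ros.TransformListener(tf_buffer)
rospy.Subscriber('/camera/camera_info', CameraInfo, camera_info_cb)
rospy.Subscriber('/points_raw', PointCloud2, cloud_cb)
rospy.spin()
```

If something like this is the intended approach, I can of course extend it to all pixels of the bounding box rather than just the centre, but I wanted to check first whether an existing package already does this.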

I would appreciate your help/input.

Asked by GeorgNo on 2022-09-22 00:15:06 UTC

Comments

This may help: #q304857

Asked by ravijoshi on 2022-09-22 05:53:07 UTC

Answers