Is it possible to fuse a 2D lidar (rplidar-a1) with a USB monocular camera (RGB only)?

asked 2022-07-11 10:50:57 -0500 by CrazyFrog77

updated 2022-07-11 11:52:26 -0500

Hi everyone, I'm trying to calibrate a 2D lidar (rplidar-a1) against a USB monocular camera (RGB only). Is this possible, or do you need a 3D lidar or an RGB-D camera in order to calibrate the two and achieve sensor fusion? I'm using ROS Noetic on a Raspberry Pi 4B. Many thanks in advance.


Comments

It's not entirely clear to me what you're trying to accomplish. What state information are you "fusing"? Are you running object detection or visual SLAM with the camera? The overlap between a 2D lidar and an RGB image is only a line of pixels (or, given the angular resolution of each sensor, more likely a dotted line of pixels).

If a feature detected in the RGB image corresponds to that overlap, then yes, it should be possible to use the lidar information to fuse or register the detection with the point cloud or scan, roughly along the lines of the projection sketch below.
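A minimal sketch of that projection, in plain Python with NumPy: it takes a LaserScan-style range array and maps it to pixel coordinates through an assumed camera intrinsic matrix K and assumed lidar-to-camera extrinsics (R, t). All numeric values, frame conventions, and names here are illustrative placeholders, not a calibrated setup.

    # Sketch: project 2D lidar scan points into a monocular camera image,
    # assuming the camera-from-lidar extrinsics (R, t) and the camera
    # intrinsics K are already known. Values below are placeholders.
    import numpy as np

    # Assumed camera intrinsics (fx, fy, cx, cy).
    K = np.array([[600.0,   0.0, 320.0],
                  [  0.0, 600.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    # Assumed extrinsics: rotation and translation taking lidar-frame
    # points into the camera frame (camera looks along +Z).
    R = np.array([[1.0, 0.0,  0.0],
                  [0.0, 0.0, -1.0],
                  [0.0, 1.0,  0.0]])      # placeholder alignment
    t = np.array([0.0, 0.10, 0.0])        # placeholder 10 cm offset

    def project_scan(ranges, angle_min, angle_increment):
        """Convert a LaserScan-style range array to (u, v) pixel coordinates."""
        ranges = np.asarray(ranges)
        angles = angle_min + angle_increment * np.arange(len(ranges))
        valid = np.isfinite(ranges) & (ranges > 0.0)
        # Lidar points in the lidar frame (planar scan, z = 0).
        pts_lidar = np.stack([ranges[valid] * np.cos(angles[valid]),
                              ranges[valid] * np.sin(angles[valid]),
                              np.zeros(valid.sum())], axis=1)
        # Transform into the camera frame and keep points in front of the camera.
        pts_cam = pts_lidar @ R.T + t
        pts_cam = pts_cam[pts_cam[:, 2] > 0.0]
        # Pinhole projection to pixel coordinates.
        uv = (K @ pts_cam.T).T
        return uv[:, :2] / uv[:, 2:3]     # Nx2, roughly a (dotted) horizontal band

    # Example with fake scan data: 360 beams of 2 m range at 1 degree spacing.
    print(project_scan(np.full(360, 2.0), -np.pi, np.deg2rad(1.0))[:5])

With real data, R and t would come from an extrinsic calibration between the lidar and the camera, and K from the intrinsic calibration mentioned in the next paragraph.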

If the goal is just intrinsic camera calibration, there are standard approaches that don't involve any direct depth measurement, such as printing out a checkerboard; see the sketch after this comment.

shonigmann ( 2022-07-11 13:51:52 -0500 )
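As an illustration of the checkerboard route, here is a rough intrinsic-calibration sketch using OpenCV from Python; the board size, square size, and image directory are assumptions. On ROS Noetic, the camera_calibration package (cameracalibrator.py) does essentially the same thing interactively against a live image topic.

    # Sketch: intrinsic calibration of the USB camera from saved checkerboard
    # images using OpenCV. Board geometry and file paths below are assumptions.
    import glob
    import cv2
    import numpy as np

    board_size = (8, 6)      # inner corners of the printed checkerboard (assumed)
    square_size = 0.025      # square edge length in metres (assumed)

    # 3D positions of the board corners on the board plane (z = 0).
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size

    obj_points, img_points, image_size = [], [], None
    for path in glob.glob("calib_images/*.png"):    # assumed image directory
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_points.append(objp)
            img_points.append(corners)
            image_size = gray.shape[::-1]

    # Recover the intrinsic matrix K and distortion coefficients.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    print("reprojection RMS:", rms)
    print("K =\n", K)

The resulting K and distortion coefficients are what a lidar-to-image projection like the one above would consume.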

@shonigmann thanks for the reply. The goal is obstacle detection using sensor fusion, and what you're saying makes complete sense. I'm wondering whether the navigation stack allows a monocular camera to be integrated with a 2D lidar when building local costmaps (see the configuration sketch at the end of this thread). Something I need to explore a lot more!

CrazyFrog77 ( 2022-07-12 10:42:05 -0500 )
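On the costmap question: the costmap_2d obstacle layer takes its observations as laser scans or point cloud messages, so a monocular camera cannot feed it directly. A common workaround is a small node that takes image detections, looks up the lidar range at the matching bearing, and publishes the result as a sensor_msgs/PointCloud2. A sketch of the corresponding local-costmap configuration, with assumed topic and frame names, might look like this:

    # Sketch of a local costmap obstacle-layer configuration that marks
    # obstacles from both the rplidar scan and a camera-derived point cloud.
    # Topic and frame names are assumptions; /camera_obstacles would be
    # published by your own detection-plus-lidar fusion node.
    obstacle_layer:
      observation_sources: laser_scan camera_obstacles
      laser_scan:
        sensor_frame: laser
        data_type: LaserScan
        topic: /scan
        marking: true
        clearing: true
      camera_obstacles:
        sensor_frame: camera_link
        data_type: PointCloud2
        topic: /camera_obstacles
        marking: true
        clearing: false

Keeping clearing off for the camera source is a deliberately conservative choice in this sketch, since a sparse, detection-driven point cloud is usually too thin to be trusted for ray-trace clearing.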