Camera+Lidar fusion - SLAM
Hi!
I have a car-like robot equipped with a 360-degree lidar (though I only use 180 degrees of it) and a monocular camera. I have been using ROS to perform SLAM with lidar-based algorithms, e.g., gmapping, hector, and cartographer. I have also used ORB-SLAM to perform SLAM with the monocular camera attached to my robot.
My question is: is there any known method within the ROS community that allows fusing monocular camera + lidar data for SLAM? If not, are there any suggestions on how to implement such a package in ROS (taking advantage of the ROS framework and existing packages)?
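For the front-end, I imagine something like the following minimal sketch (the topic names /scan and /camera/image_raw are placeholders for my robot's actual topics), which uses message_filters to hand approximately time-synchronized scan/image pairs to a single callback where constraints for a back-end could be extracted:

```python
#!/usr/bin/env python
# Sketch only: time-align the lidar and camera streams before any fusion.
import rospy
import message_filters
from sensor_msgs.msg import Image, LaserScan

def fused_callback(scan, image):
    # Messages arriving here have closely matched timestamps; this is where
    # visual features and lidar obstacles would be extracted and turned into
    # constraints for the SLAM back-end.
    rospy.loginfo("scan @ %.3f, image @ %.3f",
                  scan.header.stamp.to_sec(), image.header.stamp.to_sec())

rospy.init_node("camera_lidar_fusion")
scan_sub = message_filters.Subscriber("/scan", LaserScan)
image_sub = message_filters.Subscriber("/camera/image_raw", Image)
sync = message_filters.ApproximateTimeSynchronizer(
    [scan_sub, image_sub], queue_size=10, slop=0.05)
sync.registerCallback(fused_callback)
rospy.spin()
```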
I am thinking that graph-based SLAM, with the constraints coming from both camera features and obstacles detected by the lidar, could be a feasible approach. However, I have not found any package that provides anything similar to that.
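To illustrate what I mean, here is a rough sketch with GTSAM's Python bindings (all poses, measurements, and noise values are made up; I have not validated this on my robot): lidar scan matches become BetweenFactorPose2 relative-pose constraints, while monocular features become bearing-only BearingFactor2D constraints on landmarks, and both are optimized jointly in one graph:

```python
import numpy as np
import gtsam
from gtsam.symbol_shorthand import L, X  # X(i) = robot poses, L(j) = visual landmarks

graph = gtsam.NonlinearFactorGraph()

# Anchor the first pose at the origin.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
graph.add(gtsam.PriorFactorPose2(X(0), gtsam.Pose2(0, 0, 0), prior_noise))

# Lidar constraint: relative pose between consecutive poses, e.g. from scan matching.
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))
graph.add(gtsam.BetweenFactorPose2(X(0), X(1), gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))

# Camera constraints: bearing-only observations of the same landmark from two
# poses (a monocular camera gives direction but not range, so a landmark needs
# at least two views to be constrained).
bearing_noise = gtsam.noiseModel.Isotropic.Sigma(1, 0.05)
graph.add(gtsam.BearingFactor2D(X(0), L(0), gtsam.Rot2.fromDegrees(45), bearing_noise))
graph.add(gtsam.BearingFactor2D(X(1), L(0), gtsam.Rot2.fromDegrees(90), bearing_noise))

# Rough initial guesses; the optimizer refines poses and landmarks jointly,
# so the lidar and camera measurements constrain each other through the graph.
initial = gtsam.Values()
initial.insert(X(0), gtsam.Pose2(0.0, 0.0, 0.0))
initial.insert(X(1), gtsam.Pose2(0.9, 0.1, 0.05))
initial.insert(L(0), gtsam.Point2(1.0, 1.0))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result)
```

Loop closures from either sensor would simply enter as additional factors in the same graph.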
Update: I have found a package called omnimapper. It seems like a promising option for my task. Has anyone used it before in a similar situation?
Asked by Danny_PR2 on 2019-11-04 08:39:33 UTC
Answers
You may look at the paper
A Simultaneous Localization and Mapping (SLAM) Framework for 2.5D Map Building Based on Low-Cost LiDAR and Vision Fusion
Asked by duck-development on 2019-11-04 13:23:04 UTC
Comments
Thank you for the suggestion! I was aware of this paper, but AFAIK the authors have not released a package/framework that could be reused. I am looking for an existing package that would allow a monocular camera + 2D lidar as sources for SLAM.
Asked by Danny_PR2 on 2019-11-05 08:19:41 UTC
LIMO-SLAM? https://github.com/johannes-graeter/limo
But I didn't manage to set it up successfully on my machine.
Asked by sinsinsinsin on 2020-09-23 07:40:25 UTC
Comments
Hello, I was working on something similar to this and wanted to ask what micro-controller you are using for this application. I am kind of new to ROS. Any help would be appreciated! Thank you.
Asked by Ashish05 on 2020-09-23 16:03:37 UTC