Camera + lidar fusion for SLAM
Hi!
I have a car-like robot equipped with a 360-degree lidar (although I only use 180 degrees of it) and a monocular camera. I have been using ROS to perform SLAM with lidar-based algorithms, e.g., gmapping, hector, and cartographer. I have also used ORB-SLAM to perform SLAM with the monocular camera attached to my robot.
My question is: is there any known method within the ROS community for fusing monocular camera and lidar data to perform SLAM? If not, are there any suggestions on how to implement such a package in ROS (taking advantage of the ROS framework and existing packages)?
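For context, the first building block I have in mind is a node that time-synchronizes the two sensor streams. Below is a minimal sketch, assuming ROS 1 with rospy; the topic names /scan and /camera/image_raw and the slop value are placeholders for my setup:

```python
#!/usr/bin/env python
# Minimal sketch (ROS 1 / rospy): time-synchronize lidar and camera data
# as a front-end for a fusion SLAM node. Topic names and slop are assumed.
import rospy
import message_filters
from sensor_msgs.msg import LaserScan, Image

def fused_callback(scan, image):
    # Here one would extract visual features from `image`, scan-match `scan`,
    # and hand both to a SLAM back-end (e.g., a pose graph).
    rospy.loginfo("scan @ %.3f, image @ %.3f",
                  scan.header.stamp.to_sec(), image.header.stamp.to_sec())

def main():
    rospy.init_node("camera_lidar_fusion")
    scan_sub = message_filters.Subscriber("/scan", LaserScan)
    image_sub = message_filters.Subscriber("/camera/image_raw", Image)
    # ApproximateTimeSynchronizer pairs messages whose stamps differ by < slop.
    sync = message_filters.ApproximateTimeSynchronizer(
        [scan_sub, image_sub], queue_size=10, slop=0.05)
    sync.registerCallback(fused_callback)
    rospy.spin()

if __name__ == "__main__":
    main()
```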
I am thinking that graph-based SLAM, with constraints coming both from camera features and from obstacles detected by the lidar, could be a feasible approach. However, I have not found any package that provides anything similar to that. A toy sketch of what I mean is below.
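To make the idea concrete, here is a toy 2D pose-graph sketch using GTSAM's Python bindings (just one possible optimizer; g2o would work as well). The sequential edges stand in for lidar scan-matching constraints and the loop-closure edge for a camera feature match; all numeric values are made up:

```python
# Toy 2D pose graph: lidar scan-matching supplies sequential "between" edges,
# while camera feature matches could supply loop-closure edges.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()

# Noise models (sigmas: x [m], y [m], theta [rad]); lidar scan-matching is
# assumed more precise than the camera-based loop closure.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
lidar_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.02]))
camera_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

# Anchor the first pose.
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0, 0, 0), prior_noise))

# Lidar odometry edges: robot drives 1 m forward three times.
for i in range(3):
    graph.add(gtsam.BetweenFactorPose2(
        i, i + 1, gtsam.Pose2(1.0, 0.0, 0.0), lidar_noise))

# Camera loop closure: features suggest pose 3 is ~3 m from pose 0.
graph.add(gtsam.BetweenFactorPose2(
    0, 3, gtsam.Pose2(3.0, 0.0, 0.0), camera_noise))

# Drifted initial guesses; the optimizer fuses both constraint types.
initial = gtsam.Values()
for i in range(4):
    initial.insert(i, gtsam.Pose2(i * 1.1, 0.1 * i, 0.05))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
for i in range(4):
    print(result.atPose2(i))
```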
Update: I have found a package called omnimapper. It seems like a promising option for my task. Has anyone used it in a similar situation?
Hello, I was working on something similar to this and wanted to ask what micro-controller you are using for this application. I am kind of new to ROS. Any help would be appreciated! Thank you.