SLAM with Lidar and extracted camera data [closed]
Hi,
I am using a 2D Hokuyo lidar scanner along with a camera on my Parallax Arlo bot. The purpose is navigation in an unknown, road-like environment.
My road detection outputs a PointCloud2 containing points for each line of the road, which I use for my costmap.
I can't guarantee that obstacles will be present in the field of view of the lidar scanner, which leads me to the idea of also using the road point cloud as input for SLAM.
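For reference, the `data` field of a PointCloud2 is just a flat byte buffer whose layout is described by its `fields` and `point_step`. A minimal sketch of packing extracted road-line points into that layout, assuming three float32 fields (`x` at offset 0, `y` at 4, `z` at 8) and little-endian encoding; the point values here are placeholders, not real detections:

```python
import struct

def pack_xyz(points):
    """Pack (x, y, z) tuples into the byte layout of a PointCloud2
    whose fields are three consecutive float32 values (point_step = 12)."""
    return b"".join(struct.pack("<fff", x, y, z) for x, y, z in points)

# Two hypothetical points on a detected road line, 0.5 m left/right of center.
data = pack_xyz([(1.0, 0.5, 0.0), (1.0, -0.5, 0.0)])
```

In an actual node you would assign this buffer to `PointCloud2.data` and fill in `fields`, `point_step`, `width`, and `header.frame_id` accordingly (or let a helper like `sensor_msgs.point_cloud2.create_cloud_xyz32` do it).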
I am currently using gmapping, but unfortunately it only accepts a single laser scanner.
Merging my camera data with the lidar data into one point cloud and converting that back to a LaserScan doesn't seem to be an option, since the camera and lidar have different tf frames and figuring out an angle_increment would be a nightmare.
Long story short: is there a way, or a SLAM package you can recommend, that accepts multiple inputs or 3D point clouds?
I'd appreciate any help and ideas. Thanks in advance, Tristan
Check out rtabmap and/or cartographer. Both can work with 3D point clouds.
Thanks for your answer. rtabmap doesn't seem to work for me, since it pretty much relies on a Kinect or a stereo camera. I am using a normal camera and extract the points by projecting the image. I am trying to get into cartographer right now; it seems promising but is really hard to set up.
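For anyone hitting the same setup hurdle: cartographer_ros is configured through a Lua file. An abbreviated sketch of the options that matter for feeding it a single merged PointCloud2 (frame names are assumptions for this robot, and cartographer errors out on missing options, so start from one of the example files shipped with cartographer_ros rather than this fragment alone):

```lua
include "map_builder.lua"
include "trajectory_builder.lua"

options = {
  map_builder = MAP_BUILDER,
  trajectory_builder = TRAJECTORY_BUILDER,
  map_frame = "map",
  tracking_frame = "base_link",       -- assumed robot body frame
  published_frame = "base_link",
  odom_frame = "odom",
  provide_odom_frame = true,
  num_laser_scans = 0,                -- no raw LaserScan input
  num_multi_echo_laser_scans = 0,
  num_point_clouds = 1,               -- subscribe to one PointCloud2 topic
  -- ...remaining required options omitted for brevity...
}

MAP_BUILDER.use_trajectory_builder_2d = true
TRAJECTORY_BUILDER_2D.use_imu_data = false  -- no IMU yet on this setup

return options
```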
In my use case it can happen that one of my sensors doesn't see anything for a while (dead zone of the lidar, or temporarily missing road lines), which led me to merge the two into one big PointCloud2. At this point I could consider other SLAM packages again, since I only have one input, but cartographer is nice and I might add an IMU in the future along with my wheel encoders. So for now, cartographer it is. Thanks for the help.
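Concatenating the two clouds only works once both are expressed in a common frame. In ROS you would normally let tf2 handle that (e.g. `do_transform_cloud` from `tf2_sensor_msgs`), but per point it is just a rigid transform. A minimal numpy sketch of the idea, using a made-up camera-to-base extrinsic (the rotation and translation here are placeholders, not a real calibration):

```python
import numpy as np

def transform_points(points, rotation, translation):
    """Apply a rigid transform (3x3 rotation matrix plus translation)
    to an (N, 3) array of points -- what tf2 does for each cloud."""
    return points @ rotation.T + translation

# Hypothetical extrinsics: camera 20 cm ahead of base_link, yawed 90 deg.
yaw = np.pi / 2
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([0.2, 0.0, 0.0])

lidar_pts = np.array([[1.0, 0.0, 0.0]])   # already in base_link
camera_pts = np.array([[0.0, 1.0, 0.0]])  # expressed in the camera frame

# Bring the camera points into base_link, then stack both clouds.
merged = np.vstack([transform_points(camera_pts, R, t), lidar_pts])
```

The merged array would then be re-packed into a single PointCloud2 whose `header.frame_id` is the common frame, so downstream consumers (cartographer, the costmap) see one consistent cloud.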