advice for SLAM with 3D lidar

Hello. The project is running Ubuntu 16.04 with ROS Kinetic on an Intel PC.

Some background: I designed and built a robot and was at the SLAM phase. The TurtleBot tutorials (https://learn.turtlebot.com/) are a great guide to SLAM for a person like me. Then I experienced a real kick in the pants: it turns out that the current offering of SLAM packages is geared towards horizontal (planar) lidar, not vertical lidar like the one I built (see: https://answers.ros.org/question/3466...). Well, life is a learning experience, so I built a new horizontal 3D lidar system:

At present the new horizontal/planar lidar system is hanging onto the robot with zip ties, and needs to be mounted onto the robot:

Before I rip out the vertical 3D lidar system and replace it with the horizontal system, I need to decide the height at which to place the new horizontal lidar. I have a group of related questions that I hope will guide the placement:

• It is my understanding that gmapping is the recommended mapping engine for the Navigation stack (https://wiki.ros.org/navigation/MapBu...). Is gmapping still the best tool for creating a 2D map (given the 3D lidar)?
• I want to create a 2D map, but avoid obstacles using the full 3D lidar data (exactly like the video on the navigation stack home page: https://wiki.ros.org/navigation). Using the navigation stack with gmapping and amcl, will I be able to reach this objective?
• Can you please recommend package combinations that will allow the robot to build a 2D map, localize in the map, and navigate to points on the map while avoiding obstacles using the full horizontal 3D lidar data?

I am pre-emptively asking these questions because I don't want to rebuild my robot only to learn later that I positioned the lidar in a way that does not work well with the current SLAM offerings.

Thanks a ton for your time!

Mike

Edit #1 - I appreciate all suggestions for packages that prevent bumping into the top of the table while navigating around the table legs; I also don't want to run over things lying on the floor. On closer analysis, it looks like the PR2 has a pitching lidar AND a fixed lidar, AND RGB-D cameras. I am really hoping to get some guidance on package selection for building a map, localizing, and navigating to points on the map (a map of my apartment - a small area). I would prefer to do it all via lidar, but if more "cheap" hardware will really help, I am very open to those suggestions. Any full working solutions are very, very appreciated. Thanks again.


I work with RGB-D cameras, but some things might work in your case as well. There is http://wiki.ros.org/pointcloud_to_las... ; off the top of my head I cannot say whether it is this package or an additional one, but there is the possibility to recognize obstacles depending on size, so you don't just get a slice of the pointcloud but more of a projection that takes obstacle height into consideration - you don't just "look under the desk", that kind of idea. There are older packages that tried 3D/2D navigation, http://wiki.ros.org/humanoid_navigation and http://wiki.ros.org/3d_navigation, but these seem to have gone closed source or were abandoned. Looking at your lidar, I wonder whether you could muster the resources to get an RGB-D camera and try rtabmap for SLAM; it's appearance-based and needs a camera for this. I have found it not to be ideal ...(more)
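A minimal launch sketch of the projection idea above, using pointcloud_to_laserscan. The input topic, target frame, and height limits here are assumptions you would adapt to your robot:

```xml
<launch>
  <!-- Flatten the 3D pointcloud into a 2D scan, keeping only points
       between min_height and max_height so both low-lying and tall
       obstacles end up in the projected scan. -->
  <node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node"
        name="pointcloud_to_laserscan">
    <remap from="cloud_in" to="/lidar/points"/>   <!-- assumed lidar topic -->
    <remap from="scan" to="/projected_scan"/>
    <rosparam>
      target_frame: base_link   # project into the robot's horizontal frame
      min_height: 0.05          # ignore the floor itself
      max_height: 1.0           # ignore anything taller than the robot
      angle_min: -3.14
      angle_max: 3.14
      range_min: 0.2
      range_max: 10.0
    </rosparam>
  </node>
</launch>
```

The resulting /projected_scan then looks like a regular planar laser to any 2D SLAM package, but it already contains obstacles at any height within the band.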

( 2020-05-03 10:59:16 -0500 )

@Dragonslayer Thanks for the info. I am not a programmer, and am unfortunately isolated from anyone who knows ROS or robotics.... (but I have the internet...whooo hooo), so I really appreciate guiding comments like yours. Thanks for mentioning something new that I had no idea about. I will look into it. Having said that, I still have to do the lidar thing, as I am so heavily invested in it. Thanks again!

( 2020-05-03 11:30:18 -0500 )

I see. The really nice thing about rtabmap is that it gives you lots of outputs: obstacles map, floor map (projection map), octomap, etc. Maybe you could just use it to "convert" your pointcloud and then use those topics/data types to go on with other SLAM packages. It's not a fine solution, but it can get you testing quicker than dealing with specialized packages that process the pointcloud.

( 2020-05-04 08:50:07 -0500 )

Regarding Edit #1: You are actually already there, it seems to me - that is, if you have odometry. Use the planar lidar with gmapping and then amcl for mapping and localization, with navigation via the global costmap and the navigation stack (planners run by move_base). Then use costmap_2d (from the navigation stack) as the local costmap (obstacle avoidance), creating an obstacle layer from the 3D lidar pointcloud in the local costmap. There is a parameter for obstacle height (it's actually in this package, not the one I suspected in the earlier comment). This will give you a cmd_vel output, which you have to get to your motor drivers/controllers via a hardware interface, and that's it.
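As a rough sketch, the obstacle layer described above might look like this in the local costmap configuration. The topic name, frames, and height values are assumptions; `max_obstacle_height` is the obstacle-height parameter the comment refers to:

```yaml
# local_costmap_params.yaml (sketch) - feed the 3D lidar's PointCloud2
# into an obstacle layer so the local planner avoids table tops as
# well as table legs.
local_costmap:
  global_frame: odom
  robot_base_frame: base_link
  update_frequency: 5.0
  rolling_window: true
  width: 4.0
  height: 4.0
  resolution: 0.05
  plugins:
    - {name: obstacle_layer, type: "costmap_2d::ObstacleLayer"}
    - {name: inflation_layer, type: "costmap_2d::InflationLayer"}
  obstacle_layer:
    observation_sources: lidar_cloud
    lidar_cloud:
      topic: /lidar/points        # assumed 3D lidar topic
      data_type: PointCloud2
      marking: true
      clearing: true
      min_obstacle_height: 0.05   # ignore the floor itself
      max_obstacle_height: 1.2    # mark the table top, not the ceiling
```

With this in place, anything in the cloud between those two heights is marked in the local costmap, regardless of whether the planar scan used for localization can see it.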

( 2020-05-04 09:38:16 -0500 )


It is my understanding that gmapping is the recommended mapping engine for the Navigation stack (https://wiki.ros.org/navigation/MapBu...). Is gmapping still the best tool for creating a 2D map (given the 3D lidar)?

Personally, I'd use Slam Toolbox, but I'm also horribly biased. I see you're working with a Hokuyo; that was my main development platform for that project, so I'd expect good out-of-the-box results. Other options are Hector, Karto, Cartographer (though abandoned), and LAMA.

I want to create a 2D map, but avoid obstacles using the full 3D lidar data (exactly like the video on the navigation stack home page: https://wiki.ros.org/navigation). Using the navigation stack with gmapping, and amcl will I be able to reach this objective?

Yes. The 2D lidar will be used for localization and mapping; the "3D" sweeping lidar points can be used for collision avoidance in the costmaps.

Can you please recommend package combinations that will allow the robot to build a 2D map, localize in the map, and navigate to points on the map while avoiding obstacles using the full horizontal 3D lidar data?

The generic toolset will do this fine. The "3D" sweeping lidar is essentially just a pointcloud generator, which the Voxel Layer (or STVL) can handle.
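For reference, a minimal sketch of what an STVL layer consuming that pointcloud could look like (plugin name per the spatio_temporal_voxel_layer package; the topic and the decay values are assumptions to tune):

```yaml
# Swap STVL in as a costmap plugin in place of the regular voxel layer.
plugins:
  - {name: stvl_layer, type: "spatio_temporal_voxel_layer/SpatioTemporalVoxelLayer"}

stvl_layer:
  enabled: true
  voxel_decay: 15.0          # seconds before observed voxels fade out
  decay_model: 0             # 0 = linear decay
  voxel_size: 0.05
  mark_threshold: 0
  observation_sources: lidar_mark lidar_clear
  lidar_mark:
    topic: /lidar/points     # assumed 3D lidar topic
    data_type: PointCloud2
    marking: true
    clearing: false
    min_obstacle_height: 0.05
    max_obstacle_height: 1.5
  lidar_clear:
    topic: /lidar/points
    data_type: PointCloud2
    marking: false
    clearing: true
```

The time-based voxel decay is what makes STVL a good match for a slow sweeping lidar: old points persist between sweeps instead of being cleared the moment the beam moves on.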

Personally, I wouldn't go for the sweeping 2D lidar anymore; depth cameras are ubiquitous and cheap. But since you already have it, you should use it - it cost real money. In the future, though, look at the Orbbec or Realsense cameras. They're about ~$200 and mechanically simpler. It's an either-or situation, so no need to go down this route right now since you already have the sweeping lidar.


Just my 2 cents about the Realsense D435i camera. It is an engineering marvel for a cheap price of ~$200, and the quality is pretty good. However, there are many nitty-gritty issues that make it difficult to rely on the Realsense 100%, many of them stemming from its ROS driver. Until very recently (March 2020), Ubuntu kernel 5.0 support was unavailable without manual patches. Even with the manual patch or using older kernels, this was a huge source of bugs: IMU values went haywire (only noticeable in prolonged testing), and the ROS driver node can randomly die after 10-20 minutes. The amount of data being pumped by the Realsense over USB3 also poses some difficulties, as it is a very dense pointcloud. This was verified with other DARPA SubT challengers I met during the Urban Circuit in Feb 2020, and some of the competitors I talked to mentioned some of ...(more)

( 2020-05-04 00:22:18 -0500 )

I have plenty of my own issues with the Realsense camera, but that's not really what this topic is about - let's try to keep this on topic. It's OK for what it is, and I mention those two makers just as examples of depth cameras to consider at about the same price point.

To your comment: don't expect to just wait out the driver for stability - you'll be waiting forever. Make the changes yourself and upstream them.

( 2020-05-04 01:17:50 -0500 )

It's more hacking than installing with the Realsense, and a bit quirky as mentioned. I would really like them to get their act together in regard to drivers etc., as their new "solid state" 3D lidar seems too good to be true.

( 2020-05-04 08:52:49 -0500 )

@stevemacenski Hi Steve, I implemented your answer on my physical robot. During the lidar's motion, when the lidar is horizontal the LaserScan goes to AMCL, and otherwise the 3D PointCloud2 goes to STVL (thanks a ton, BTW, for STVL, slam_toolbox, and all the answers!!). My current problem is that AMCL is now getting a LaserScan only once every 5 seconds, which is suboptimal for localization. I have a strong feeling that a single sweeping 2D lidar is not enough to get the navigation stack working at its full potential. I believe I need to fix the lidar in a horizontal position so that AMCL gets data constantly, and navigate around obstacles using an RGB-D camera. Does that sound right? If so, which RGB-D camera models do you suggest today? Is the Orbbec Astra Pro the best choice for indoor use? I have never worked with any RGB-D cameras and hugely prefer hardware ...(more)

( 2020-07-26 10:43:42 -0500 )

Wild guess: what if you try the sweeper turned 90° (sweeping sideways instead of up and down)? You wouldn't get a complete scan every 5 seconds, but you would get at least some points in quick succession. Of course FOV might be an issue, but as an experiment? RGB-D: if you only need it for obstacle avoidance and voxels, the Sony chip from the iPad Pro might be interesting. They call it lidar, but it seems to be a TOF CMOS sensor; its low resolution might not be a problem for this task. Less experimental: the Intel Realsense models do have ROS packages (they can be tricky though), they are relatively cheap, and really not bad at all. I wouldn't stop anyone from getting one - I got one myself. I would highly recommend one with an IMU though; that's one less sensor to integrate in the end.

( 2020-07-27 07:33:37 -0500 )

@Dragonslayer The lidar used to be in that position, and it's not ideal, as other issues start to happen (mapping algorithms expect a horizontal laser). I have come to learn that the navigation stack needs at least one 2D and one 3D depth data source: one for SLAM and one for obstacle avoidance. I am planning to use my lidar for SLAM and an Orbbec Astra Pro for obstacle avoidance. Any advice for or against the Astra? I don't want the Realsense D435i, as it seems to be unstable, and I am willing to sacrifice performance for stability. Any advice?

( 2020-07-27 08:50:53 -0500 )

@BuIilderMike I've heard nothing especially bad about the Astras. I didn't know the camera well, but I looked it up quickly. The Astra Pro seems to have a minimum distance of 0.6 meters. In my opinion it's very important to be aware of that, as you might have to decide what to do at that distance (what data does it produce if an object is nearer, and how do the algorithms handle it?); indoors that might be very limiting, depending on the use case. It could mean you have to integrate an additional sensor layer for the near field. But depending on robot-base size, mounting point, and use case, this might not be an issue for you; as I said earlier, it's just something to be aware of.

( 2020-07-28 10:19:28 -0500 )

@Dragonslayer THANK YOU!!! I didn't even think of the 60cm of dead space. That makes a HUGE impact on me! Thanks for pointing it out. I saw it and didn't give it any thought... until you pointed it out. Thanks a ton!

( 2020-07-28 16:37:35 -0500 )