
SLAM possibilities: where does it end?

asked 2020-02-19 17:33:16 -0500

phil123456

updated 2020-02-19 17:36:02 -0500


The goal: something that looks like a droid, able to do SLAM without falling down the stairs.

The problem: the more I dig, the less feasible it seems, even after spending money and hours on ROS configs/packages and hardware.

  • At first I wanted to use sonars, then distance sensors on a servo, then an RPLIDAR.

Then I realized they have limitations (I don't want a rotating widget on top of my robot that only detects obstacles in a single plane a couple of feet above the ground).

  • So I thought an OpenCV stereo-camera, feature-based approach to SLAM would be better; I found SLAM articles/videos about the Intel D435 camera, then ran into even more limitations (need for better odometry, an IMU, offloading SLAM computation...).

Then there is the D435i camera; then I found the Intel T265, optimized for SLAM with an integrated IMU and more (yet: grayscale fisheye, no depth, and what about memory size?).

Now I am really lost and confused...

  • I really don't want to spend a lot of money on devices that won't work, don't do what I need, or, worse, end up unsupported or super complicated to interface/tune within ROS
  • or to find out a newer/better device is out there (a couple of days after receiving what I ordered)
  • SLAM is one problem among many; I want to focus on other parts of the robot (TTS, voice recognition, actuators...)

Any ideas/opinions on this?

Am I going the right way?

Is SLAM possible for a hobbyist, or only inside well-funded universities?



4 Answers


answered 2020-02-19 17:49:41 -0500

Is SLAM possible for a hobbyist, or only inside well-funded universities?

I think that sums up a lot of the other comments.

SLAM is hard. There are a lot of 2D packages out there because they're reliable and robust. Most of the time, people who have a 2D lidar also have another form of 3D sensing for obstacle avoidance. The lidar is for positioning and longer-range obstacle detection than a depth camera or a sonar can handle.

Once you leave the planar-land of SLAM, you are entering an active research field. Within ROS, it's great that anyone can publish a package and make it available to the world, but that also creates implementations of varying quality and maintenance. This is especially true in cutting-edge research fields, where every couple of years someone's new approach makes prior work no longer state of the art. None of the visual or dense SLAM packages I know of will be trivial for a hobbyist to configure or use (they need high-quality, well-calibrated cameras and extrinsics). Many of them are use-case specific and may not be as general as lidar SLAM. ORB-SLAM is the one I usually point people to, but there are also about 5 different popular implementations with their own special quirks on GitHub.

My recommendation, as a hobbyist, unless your interest is SLAM itself: don't leave planar-land. Use the lidar SLAM packages, and use that D435i you bought as an obstacle-avoidance camera to give you 3D sensing. Or maybe add in some visual odometry to improve your positioning.
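As a rough, ROS-free illustration of the "depth camera as planar obstacle sensor" idea (this is approximately what the depthimage_to_laserscan package does for you; the function name and parameters here are my own sketch, assuming a rectified depth image in meters and pinhole intrinsics fx, cx):

```python
import numpy as np

def depth_row_to_ranges(depth_m, fx, cx, band=5):
    """Collapse a horizontal band of a depth image (meters) into a
    planar range scan: for each column, keep the nearest depth in the
    band, then convert (column, depth) to a Euclidean range.

    depth_m : (H, W) array of depths in meters (0 = no return)
    fx, cx  : camera intrinsics (focal length and principal point, px)
    band    : half-height of the row band around the image center
    """
    h, w = depth_m.shape
    rows = depth_m[h // 2 - band : h // 2 + band + 1]
    rows = np.where(rows > 0, rows, np.inf)   # ignore invalid pixels
    z = rows.min(axis=0)                      # nearest obstacle per column
    u = np.arange(w)
    x = (u - cx) * z / fx                     # lateral offset in meters
    return np.hypot(x, z)                     # range per column
```

The resulting per-column ranges could then be packed into a sensor_msgs/LaserScan and fed to the same 2D costmap the lidar uses.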



ok thanks....

phil123456 (2020-02-20 01:40:27 -0500)

Can you mark this as correct to get it off the unanswered questions queue?

stevemacenski (2020-02-20 08:32:55 -0500)

I was waiting for someone else to give another opinion before purchasing a lidar :-)

phil123456 (2020-02-20 15:30:45 -0500)

answered 2020-02-22 08:09:31 -0500

Dragonslayer

updated 2020-02-22 08:18:37 -0500

Depending on what you do, it might even be enough to localize and navigate; this can be done to some degree from a floorplan map and scan matching (lidar) in AMCL. There are nice videos and software for creating the pgm and yaml map from "pictures"/plans. SLAM might not be needed in your case.
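For illustration, a minimal sketch of producing the .pgm/.yaml pair that map_server expects from a grid derived from such a floorplan (the function name and the 0/1/-1 cell convention are my own assumptions; the pixel shades and thresholds follow map_server's documented defaults):

```python
import pathlib

def write_map(stem, grid, resolution=0.05, origin=(0.0, 0.0, 0.0)):
    """Write a map_server-compatible .pgm/.yaml pair from a 2D grid of
    cell values: 0 = free, 1 = occupied, -1 = unknown.

    stem       : output path without extension (hypothetical name)
    resolution : meters per cell
    origin     : (x, y, yaw) pose of the lower-left cell in the map frame
    """
    h, w = len(grid), len(grid[0])
    shade = {0: 254, 1: 0, -1: 205}           # map_server's default shades
    pgm = pathlib.Path(stem + ".pgm")
    with pgm.open("wb") as f:
        f.write(b"P5\n%d %d\n255\n" % (w, h))           # binary PGM header
        f.write(bytes(shade[c] for row in grid for c in row))
    yaml_text = (
        "image: %s\n"
        "resolution: %.3f\n"
        "origin: [%.3f, %.3f, %.3f]\n"
        "negate: 0\n"
        "occupied_thresh: 0.65\n"
        "free_thresh: 0.196\n" % ((pgm.name, resolution) + tuple(origin))
    )
    pathlib.Path(stem + ".yaml").write_text(yaml_text)
```

Point map_server at the generated yaml file and AMCL can localize against it with no SLAM run at all.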

Regarding the stair problem: to be safe you need additional sensors, because light-based sensors have problems with reflective and transparent surfaces, so depending on the floor they might struggle. But if that isn't the issue, RTAB-Map (from depth images) has nice features like an obstacle map, clear-space map, etc. I haven't tested negative-height obstacles yet, but that should certainly be possible. RTAB-Map also provides visual odometry and SLAM. It needs some computation power, though.



interesting, I'll have a look

phil123456 (2020-02-22 14:00:48 -0500)

answered 2020-02-21 13:28:09 -0500

achille

I would agree with what @stevemacenski states. Start with an RPLIDAR or similar and get gmapping to work. This should be fairly easy, as it has been done countless times before.

Typically the stairs/hole problem is solved differently and depends on your robot (is it holonomic, i.e. can it generate a velocity in any direction, or is it differential drive?). You can put a second lidar on the robot at an angle and manually detect any sudden terrain changes, or do the same with a depth sensor.
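As a toy sketch of the angled-sensor idea (geometry only; the mounting height, tilt, and tolerance are hypothetical parameters, not from any package): a sensor aimed down at the floor should see it at a predictable range, and a reading much longer than that means the floor ahead is missing.

```python
import math

def detect_cliff(measured_range, sensor_height, tilt_rad, tol=0.15):
    """Flag a drop-off from a single downward-angled range reading.

    A sensor mounted sensor_height meters up, tilted tilt_rad radians
    below horizontal, should see flat floor at
    expected = sensor_height / sin(tilt_rad).  A measurement more than
    tol (fractional tolerance) longer than that means the floor ahead
    is missing: likely a stair edge or hole.
    """
    expected = sensor_height / math.sin(tilt_rad)
    return measured_range > expected * (1.0 + tol)
```

A real node would run this per beam and publish the flagged points as obstacles, but the comparison itself is this simple.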

These things you learn by experience. I would recommend you to go ahead and get started as opposed to spending countless hours researching what to do. A lot of problems will become clear just by trying something. Happy tinkering!



fair enough

phil123456 (2020-02-22 13:59:43 -0500)

answered 2020-04-08 04:23:34 -0500

KalanaR

updated 2020-04-08 04:26:47 -0500

So, I may be late in answering, but could you expand on what you said?

able to do SLAM without falling down the stairs

Do you want to avoid the stairs and move around them, or detect the stairs and go up/down? If it's the detect-and-avoid scenario, you can do something like this:

  1. Feed the point cloud from the camera to the octomap server.
  2. Evaluate the octomap's ground layer, or a layer below it. You can use a leaf iterator and check nodes with z coordinate 0 or -1 for occupancy.
  3. If you find any unoccupied cells, that means there is a hole in the ground: probably stairs :D
  4. Finally, generate a costmap layer with those details, or add them directly to the costmap you are using. Add the detected region as an obstacle and the robot will navigate around it.
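The core of those steps can be sketched ROS-free, assuming the ground layer has already been flattened into a set of occupied (x, y) cells (the names here are mine, not octomap's API):

```python
def holes_to_obstacles(ground_occupied, footprint):
    """Turn missing floor into costmap obstacles.

    ground_occupied : set of (x, y) cells known to contain floor
                      (e.g. occupied leaves of the octomap layer at z=0)
    footprint       : iterable of (x, y) cells the robot could traverse

    Returns the cells with no floor under them; marking these lethal
    in the costmap makes the planner route around the hole.
    """
    return {cell for cell in footprint if cell not in ground_occupied}
```

In a real node, ground_occupied would come from walking the octree's leaf iterator and keeping leaves at the ground height, and the returned cells would be published into an obstacle layer.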

Hope this is helpful.

If you can get your hands on a Kinect, its IR depth camera lets you capture indoors easily, without lighting issues; outdoors the sun might cause problems. Computational requirements are not that high for a static environment, I would say. I ran the octomap server on a Raspberry Pi perfectly (but of course you can't run RViz on the Pi).


