How to implement autonomous navigation with Google Cartographer?

asked 2020-05-27 09:44:11 -0500

Py

I am trying to implement autonomous navigation using 2 lasers and a depth camera; IMU and odometry data will also be available. Basically, I'd like the simulated robot to spawn in some Gazebo world and autonomously drive around it, covering all of the floor space.

So far, I've come to the conclusion that cartographer is best for me because it allows 2 lasers to be used directly without additional tools. However, before trying this I'd played around with gmapping and AMCL, where I'd do the following as a starting point for autonomous exploration:

  1. Generate a map by manually moving around
  2. Save the map and serve it
  3. Localise the robot on the map
  4. Provide waypoints to move_base
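
For reference, steps 2–4 above can be sketched as a single launch file. This is only an illustration: the package name `my_robot_nav`, the file paths, and the scan topic are assumptions, not part of the question.

```xml
<launch>
  <!-- Step 2: serve the previously saved map -->
  <node pkg="map_server" name="map_server" type="map_server"
        args="$(find my_robot_nav)/maps/map.yaml"/>

  <!-- Step 3: localise against it (amcl publishes the map -> odom transform) -->
  <node pkg="amcl" name="amcl" type="amcl">
    <remap from="scan" to="front/scan"/>
  </node>

  <!-- Step 4: accept waypoints / 2D Nav Goals from RViz or an exploration node -->
  <node pkg="move_base" name="move_base" type="move_base">
    <rosparam file="$(find my_robot_nav)/config/costmap_common.yaml"
              ns="global_costmap" command="load"/>
    <rosparam file="$(find my_robot_nav)/config/costmap_common.yaml"
              ns="local_costmap" command="load"/>
  </node>
</launch>
```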

My question is: what is the order of operations when using cartographer? In my launch file, would it just be a cartographer node and a move_base node, and then something else for frontier exploration? I suppose I'm a bit confused about whether I need to replicate the workflow used above with gmapping and AMCL, or whether the single cartographer node handles both of these roles in one step.

I'm really getting stuck with some of the tutorials I see online for this, as I'm unsure how to adapt them to my use case. I've looked at here, which seems handy, but it doesn't work for me: nothing happens when I graphically set a 2D Nav Goal in RViz.


1 Answer

answered 2020-05-27 10:36:34 -0500

Dragonslayer

Hi, it seems cartographer can work as a SLAM node as well as a pure localization node; see the "Localization only" section of the cartographer_ros documentation.

Depending on what your waypoint-generating node or use case works better with, you can either navigate while SLAMming, or map first and navigate/plan afterwards.
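
As a sketch of the pure-localization setup (based on the cartographer_ros localization demo; the `my_robot_nav` package, file names, and paths are placeholders you'd replace with your own):

```xml
<launch>
  <!-- Pure localization: load a previously recorded .pbstream state
       instead of building a fresh map from scratch. -->
  <node pkg="cartographer_ros" name="cartographer_node" type="cartographer_node"
        args="-configuration_directory $(find my_robot_nav)/config
              -configuration_basename my_robot_localization.lua
              -load_state_filename $(find my_robot_nav)/maps/map.pbstream"/>
</launch>
```

Per the cartographer docs, `my_robot_localization.lua` would be your normal mapping config plus `TRAJECTORY_BUILDER.pure_localization_trimmer = { max_submaps_to_keep = 3 }`, which keeps cartographer localizing against the loaded state rather than growing the map.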

I don't know this tutorial, but when working in localization mode one would assume cartographer demands an initial pose (it can be set in RViz or might be served by other means). Some localization nodes don't even start working before they get an initial pose. In SLAM mode you just start at 0/0 in most cases.

As you run Gazebo, I assume you are running it all on the same computer; otherwise the firewall can hinder proper communication.

Is everything else up and running cleanly? Are all the Jackal packages it depends on installed?



Thanks for the message! Everything is running on my computer, as you suggested. The only bits I'm confident in are the simulated robot in a Gazebo world and a keyboard teleoperation node, with laser data visualising successfully in RViz. I can also save the .yaml and .pgm map files generated using cartographer and serve them, but I'm unsure whether this is necessary, and what the next steps should be without using the AMCL approach before move_base to explore the mapped area.
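
If you want both map formats: the .pgm/.yaml pair comes from map_server's map_saver, while cartographer's own serialized state (the .pbstream file needed for its pure-localization mode) comes from its ROS services. Roughly (the trajectory id `0` and the output paths here are assumptions; check the service names against your cartographer_ros version):

```bash
# Occupancy-grid map for map_server / amcl-style use
rosrun map_server map_saver -f my_map

# Cartographer's own state, for loading back in localization mode
rosservice call /finish_trajectory 0
rosservice call /write_state "{filename: '${HOME}/map.pbstream'}"
```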

Py (2020-05-27 11:46:17 -0500)

Well, cartographer in pure localization mode, as linked in my answer, makes cartographer do what amcl does (computationally different, but with "the same" outcome). If the map is served to cartographer in this mode (along with scans), it should localize and correct odometry, giving move_base a clue where the robot is. In tf, map -> odom would be published by cartographer instead of amcl, and odom -> base_link by odometry as before. If you just dropped amcl and launched cartographer, it's likely that you have some remapping of topics to do, linking node outputs to node inputs and defining frames. By the way, don't forget NOT to launch the teleop node when move_base comes into play, as this would give you a publisher conflict on the cmd_vel topic; the likely result is that nothing happens.
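
Since you have two lasers: with `num_laser_scans = 2` in the Lua options, cartographer subscribes to `scan_1` and `scan_2`, so the remapping might look something like this (your actual topic names, package name, and file paths are assumptions here):

```xml
<node pkg="cartographer_ros" name="cartographer_node" type="cartographer_node"
      args="-configuration_directory $(find my_robot_nav)/config
            -configuration_basename my_robot_localization.lua
            -load_state_filename $(find my_robot_nav)/maps/map.pbstream">
  <!-- Point cartographer's expected inputs at the robot's real topics -->
  <remap from="scan_1" to="front/scan"/>
  <remap from="scan_2" to="rear/scan"/>
</node>
```

You can then check that the map -> odom transform is actually being published with `rosrun tf tf_echo map odom`.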

Dragonslayer (2020-05-28 08:27:13 -0500)

