# Why is my Turtlebot 3 SLAM not working properly?

Hi, I'm a beginner with ROS and I need help with my TurtleBot3 Waffle. The problem is that SLAM is not working correctly, so gmapping produces unusable, messy maps. The problem seems to be with the odometry data or with the turtlebot3_waffle.urdf.xacro file. I tried editing my xacro file as described in this link, but it did not work for me.

This is how it looks in my RViz: the laser scan provides correct data, but the red odometry arrow shows a position that is way too far off. https://imgur.com/9X9KS3J I'm running Ubuntu 18.04 LTS and ROS Melodic. I would appreciate any ideas or tips on how to configure my xacro file.



> I tried to edit my xacro file as in: this link but it's not working for me.

Unless you have a very specific reason, don't edit anything in the URDF definition of the robot: it tells Gazebo (or Stage, depending on which simulator you use) and RViz how to properly render the robot and what parts it is composed of.

Anyway, the image you posted looks completely correct. You're just not rendering the RobotModel in RViz; I suggest you add it by clicking Add -> RobotModel, and the robot should appear in the center of that gray area. If it doesn't, then you have some sort of problem spawning your robot.

That gray area is the part of the map that has been discovered and mapped by gmapping. You won't discover the whole map at once when you run the gmapping node; you'll only discover the portion that is visible to the robot through the laser.

For a complete map, you can drive the robot around yourself using the TurtleBot3 teleop node via roslaunch turtlebot3_teleop turtlebot3_teleop_key.launch
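For reference, a typical mapping session looks roughly like the following (a sketch assuming the standard TurtleBot3 packages are installed and `TURTLEBOT3_MODEL` is exported in every terminal; run each command in its own terminal):

```
export TURTLEBOT3_MODEL=waffle
roslaunch turtlebot3_bringup turtlebot3_robot.launch            # on the robot itself
roslaunch turtlebot3_slam turtlebot3_slam.launch slam_methods:=gmapping
roslaunch turtlebot3_teleop turtlebot3_teleop_key.launch        # drive around the room
rosrun map_server map_saver -f ~/map                            # save the finished map
```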


@davidem I have already tried SLAM with `roslaunch turtlebot3_slam turtlebot3_slam.launch slam_methods:=gmapping`; it worked as it should, so the center of the screen is discovered. The problem is that mapping stops working correctly as soon as I start to move the robot using teleoperation.

( 2019-11-14 11:03:19 -0500 )

I need to understand what you mean by "stops working correctly". Does the map not get updated? Do the laser readings not match the actual obstacles? Also, are you using a 3D world in a simulator like Gazebo, or a 2D world in a simulator like Stage? Please edit your question to provide as much information as possible.

( 2019-11-14 14:32:54 -0500 )

@davidem I am using a real robot and I'm trying to map a room. I tried a simulated world in Gazebo before, and gmapping worked with no problems. By "stops working correctly" I mean that the map keeps updating, but, for example, if I go one meter forward, rotate 180 degrees, and return one meter back, the map updates in new gray places rather than in the same, already discovered place. Doing this over and over makes my map bigger over time, even though I'm always in the same place.

( 2019-11-14 14:56:35 -0500 )
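The symptom described above, a closed loop that does not close in the map, is the classic signature of odometry drift. One way to quantify it, independent of gmapping, is to note the `/odom` pose before and after driving a closed loop (e.g. via `rostopic echo /odom/pose/pose`) and compare the two. A minimal sketch of that comparison (the helper names are illustrative; the quaternion convention assumes a standard `nav_msgs/Odometry` publisher):

```python
import math

def yaw_from_quaternion(x, y, z, w):
    """Extract the yaw (rotation about Z) from a quaternion,
    which is all that matters for a planar robot's 2D pose."""
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

def loop_closure_error(start, end):
    """start and end are (x, y, yaw) tuples read from /odom before and
    after driving a closed loop.  Returns (translation error in meters,
    heading error in radians); ideally both should be near zero."""
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    # Wrap the heading difference into (-pi, pi]
    dyaw = math.atan2(math.sin(end[2] - start[2]), math.cos(end[2] - start[2]))
    return math.hypot(dx, dy), abs(dyaw)
```

If a one-meter out-and-back loop leaves a translation error of tens of centimeters, the odometry itself (wheel encoders, wheel radius/separation calibration, or IMU fusion) is the problem, not gmapping.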

Does this problem happen in the simulation as well? Have you tried? My suggestion is to go through the Robotis tutorials for the TurtleBot3 if you haven't. First, understand the navigation stack deeply using a simulated environment: the different nodes, what they offer, what they need, and what configuration they require. Deploying a real robot shouldn't then be much different.

( 2019-11-15 07:48:30 -0500 )

@davidem Well yes, I have tried the simulation. The problem appears only in the real environment and only with the Waffle TurtleBot. The Burger works in the real environment just as in the simulation, i.e. its odometry shows the exact location. Also, rotating quickly with the Waffle makes the map messy, but rotating quickly with the Burger does no harm to the map.

( 2019-12-02 11:26:51 -0500 )
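A note for anyone landing here with the same symptom: when the laser data is fine but the map smears during fast rotation, the usual remedies are to rotate more slowly and to tune gmapping to trust scan matching over the (apparently noisy) odometry. A sketch of such tuning, with illustrative values that have not been validated on a Waffle, placed inside the slam_gmapping node of the SLAM launch file:

```xml
<node pkg="gmapping" type="slam_gmapping" name="turtlebot3_slam_gmapping">
  <!-- Update the filter more often, so each scan match covers a smaller motion -->
  <param name="linearUpdate"  value="0.2"/>
  <param name="angularUpdate" value="0.2"/>
  <!-- Model the odometry as noisier (error in rotation/translation as a
       function of rotation/translation) -->
  <param name="srr" value="0.2"/>
  <param name="srt" value="0.2"/>
  <param name="str" value="0.2"/>
  <param name="stt" value="0.2"/>
  <!-- Reject poor scan matches instead of folding them into the map,
       and keep more particle hypotheses alive -->
  <param name="minimumScore" value="50"/>
  <param name="particles" value="100"/>
</node>
```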