How do I use the gmapping SLAM algorithm correctly?

asked 2021-12-09 15:32:06 -0500 by sincoy

updated 2021-12-12 07:26:32 -0500

Hello everyone,

I am using gmapping to map my environment and visualize the map in RViz. For this, my mobile robot has two sensors - one on the front left side and the other on the rear right side. The problem is that the mapping starts from the center of the mobile robot and not from the specified sensors, although the sensor data are merged correctly. Do you have any idea how to fix the problem?
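For context, the scan merging happens before gmapping in a separate node. A minimal sketch of such a setup, assuming the laserscan_multi_merger node from the ira_laser_tools package; the topic names /scan_1 and /scan_2 are placeholders for my actual configuration:

```xml
<!-- Sketch only: merging two lidar scans into one topic for gmapping.
     The topic names /scan_1, /scan_2 and the frame base_link are placeholders. -->
<node pkg="ira_laser_tools" type="laserscan_multi_merger" name="laserscan_multi_merger" output="screen">
  <param name="destination_frame" value="base_link"/>      <!-- frame the merged scan is expressed in -->
  <param name="scan_destination_topic" value="/scan"/>     <!-- output topic that gmapping listens to -->
  <param name="laserscan_topics" value="/scan_1 /scan_2"/> <!-- input scans to merge -->
</node>
```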

After loading my mobile robot in Gazebo, I launch the following launch file to use the gmapping SLAM algorithm:

<launch>
  <arg name="slam_methods" default="gmapping" doc="slam type [gmapping, cartographer, hector, karto, frontier_exploration]"/>
  <arg name="configuration_basename" default="robot_lds_2d_gazebo.lua"/>
  <arg name="open_rviz" default="true"/>

  <!-- SLAM: Gmapping, Cartographer, Hector, Karto, Frontier_exploration, RTAB-Map -->
   <include file="$(find robot_slam)/launch/robot_$(arg slam_methods).launch">
    <arg name="configuration_basename" value="$(arg configuration_basename)"/>
  </include> 

  <!-- rviz -->
  <group if="$(arg open_rviz)">
    <node pkg="rviz" type="rviz" name="rviz" required="true"
          args="-d $(find robot_slam)/rviz/robot_gmapping.rviz">
    </node>
  </group>
</launch>

The robot_lds_2d_gazebo.lua configuration file:

include "map_builder.lua"
include "trajectory_builder.lua"

options = {
  map_builder = MAP_BUILDER,
  trajectory_builder = TRAJECTORY_BUILDER,
  map_frame = "map",
  tracking_frame = "base_footprint",
  published_frame = "base_link",
  odom_frame = "odom",
  provide_odom_frame = false,
  publish_frame_projected_to_2d = false,
  use_odometry = true,
  use_nav_sat = false,
  use_landmarks = false,
  num_laser_scans = 2,
  num_multi_echo_laser_scans = 0,
  num_subdivisions_per_laser_scan = 1,
  num_point_clouds = 0,
  lookup_transform_timeout_sec = 0.2,
  submap_publish_period_sec = 0.3,
  pose_publish_period_sec = 5e-3,
  trajectory_publish_period_sec = 30e-3,
  rangefinder_sampling_ratio = 1.,
  odometry_sampling_ratio = 1.,
  fixed_frame_pose_sampling_ratio = 1.,
  imu_sampling_ratio = 1.,
  landmarks_sampling_ratio = 1.,
}


MAP_BUILDER.use_trajectory_builder_2d = true
TRAJECTORY_BUILDER_2D.min_range = 0.1
TRAJECTORY_BUILDER_2D.max_range = 10.0
TRAJECTORY_BUILDER_2D.missing_data_ray_length = 3.
TRAJECTORY_BUILDER_2D.use_imu_data = true
TRAJECTORY_BUILDER_2D.use_online_correlative_scan_matching = true
TRAJECTORY_BUILDER_2D.motion_filter.max_angle_radians = math.rad(0.1)

POSE_GRAPH.constraint_builder.min_score = 0.65
POSE_GRAPH.constraint_builder.global_localization_min_score = 0.7

return options

My robot_gmapping.launch file:

<launch>
  <!-- Arguments -->
  <arg name="set_base_frame" default="base_link"/>
  <arg name="set_odom_frame" default="odom"/>
  <arg name="set_map_frame"  default="map"/>

  <!-- Gmapping -->
  <node pkg="gmapping" type="slam_gmapping" name="robot_slam_gmapping" output="screen">
    <param name="base_frame" value="$(arg set_base_frame)"/>
    <param name="odom_frame" value="$(arg set_odom_frame)"/>
    <param name="map_frame"  value="$(arg set_map_frame)"/>
    <rosparam command="load" file="$(find robot_slam)/config/gmapping_params.yaml" />
  </node>
</launch>
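One thing worth checking: slam_gmapping subscribes to the scan topic by default. If the merged scan is published under a different name, it has to be remapped inside the node; a sketch, where scan_merged is a hypothetical topic name and not taken from my setup:

```xml
<!-- Sketch: remap gmapping's default "scan" topic to the merged scan.
     "scan_merged" is a hypothetical example name. -->
<node pkg="gmapping" type="slam_gmapping" name="robot_slam_gmapping" output="screen">
  <remap from="scan" to="scan_merged"/>
</node>
```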

gmapping_params.yaml:

map_update_interval: 2.0
maxUrange: 10.0
sigma: 0.05
kernelSize: 1
lstep: 0.05
astep: 0.05
iterations: 5
lsigma: 0.075
ogain: 3.0
lskip: 0
minimumScore: 50
srr: 0.1
srt: 0.2
str: 0.1
stt: 0.2
linearUpdate: 1.0
angularUpdate: 0.2
temporalUpdate: 0.5
resampleThreshold: 0.5
particles: 100
xmin: -10.0
ymin: -10.0
xmax: 10.0
ymax: 10.0
delta: 0.05
llsamplerange: 0.01
llsamplestep: 0.01
lasamplerange: 0.005
lasamplestep: 0.005

Thanks in advance! I appreciate your help!


Comments


I understand that this is a follow-up question to #392446.

Since I think you have successfully merged the scans, I suggest you remove the mention of "two Lidar" from the title and description; it is misleading.

( If you want to verify whether merging is the problem, try running gmapping with a single Lidar. If merging is the problem, SLAM should then work. )

Does the odom (odometry) exist, and is it connected to gmapping? If the scan is working but gmapping is not, that is the first thing to check.

Attaching a launch file and a screenshot will help us understand and solve the problem, so please provide as much relevant information as possible.

miura ( 2021-12-09 17:51:56 -0500 )

I have revised my post again. I hope it is now sufficient to understand my problem.

My map is connected with the odom and the odom with the base_footprint.

sincoy ( 2021-12-10 10:03:07 -0500 )

"the mapping starts from the center of the mobile robot and not from the specified sensors" I would like to know more about.

In other words, is there a difference between the position of the point cloud before the merge and the position of the point cloud after the merge?

Would you share a visualization of the sensor data before the merge (scan_1, scan_2) together with the sensor data after the merge (scan)? (When visualizing, it would be helpful if you could use a different color for each topic.)

miura ( 2021-12-10 19:16:01 -0500 )

Gmapping works by estimating the transform map -> odom. This is used to correct the drift error in the odom -> base_link transform. Your sensors are then linked to the base_link frame. If you look at the source code of Gmapping, the map is first created in the lidar's frame and then transformed to the base_link frame, so it wouldn't make a difference which frame is used here. It also makes more sense to use the robot's base_link, as that is the object we are trying to localize, not just the sensor.
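For illustration, the sensor frames are typically attached to base_link with static transforms, e.g. via tf2_ros; the frame names and offsets below are assumed values, not taken from your setup:

```xml
<!-- Sketch: static transforms from base_link to the two lidar frames.
     Frame names and offsets (args: x y z yaw pitch roll parent child) are assumed values. -->
<node pkg="tf2_ros" type="static_transform_publisher" name="laser_front_left_tf"
      args="0.2 0.2 0.1 0 0 0 base_link laser_front_left"/>
<node pkg="tf2_ros" type="static_transform_publisher" name="laser_rear_right_tf"
      args="-0.2 -0.2 0.1 3.14159 0 0 base_link laser_rear_right"/>
```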

FYI, the lua file you attached is used by Google Cartographer only. To set the parameters of Gmapping, look for the gmapping.launch file.

3473f ( 2021-12-10 22:46:22 -0500 )

@miura: I would like to share the images of the sensor data, but right now it is not possible for me to upload images. I noticed a small difference, but overall the merged scans are the same. You will understand once I am able to share the pictures.

@3473f: That makes sense now, thanks for the information. Since we are using SLAM for mapping and localization, it makes sense to use base_link as the reference. In RViz I can see that a map is created from sensor_1 to base_link but not from sensor_2 to base_link. Why is there a difference?

sincoy ( 2021-12-11 06:10:30 -0500 )

It looks like there is no problem with the scan. So I'm wondering whether the node that publishes the tf between odom and base_link exists, or whether the launch file opened by $(find robot_slam)/launch/robot_$(arg slam_methods).launch has incomplete parameter settings.

miura ( 2021-12-11 17:43:02 -0500 )

If I run rosrun tf2_tools view_frames.py, the transforms from map to odom and from odom to base_link exist.

I will share my robot_gmapping.launch file; perhaps it will help you understand the issue. Thank you very much for your help and time!

sincoy ( 2021-12-12 07:23:26 -0500 )

Thanks for sharing robot_gmapping.launch. There doesn't seem to be any problem.

You said that odom and base_link are connected in tf - which node is publishing that transform? Ideally Gazebo publishes it, so that it changes as you move the robot; if the frames are instead fixed with a static_transform_publisher, SLAM will not work.

miura ( 2021-12-12 18:01:21 -0500 )