
Why does the gmapping package require the base_link -> odom tf?

asked 2016-02-24 04:00:14 -0500

SHI.ZP

In the gmapping wiki (http://wiki.ros.org/gmapping):

4.1.5 Required tf Transforms

1. the frame attached to incoming scans → base_link: usually a fixed value, broadcast periodically by a robot_state_publisher, or a tf static_transform_publisher.

2. base_link → odom: usually provided by the odometry system (e.g., the driver for the mobile base)

Regarding item 2: shouldn't it be odom -> base_link rather than base_link -> odom? Am I correct?


1 Answer


answered 2016-03-04 21:36:03 -0500

robustify

The /odom frame is a pseudo-global reference frame in which a dead-reckoning position and orientation estimate is represented. This estimate is known as odometry. Essentially, the robot measures its linear and angular velocity using wheel encoders and/or inertial sensors and integrates those values to obtain a position and orientation.

Typically, some node in the ROS system is responsible for subscribing to the raw encoder and inertial data and publishing the relationship between the base frame of the robot, usually called '/base_link', and the '/odom' frame using a tf::TransformBroadcaster. In the notation used in the wiki you cite, the frame the arrow points to is the parent frame of the transform.
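The thread doesn't show any code, but a minimal sketch of such a node (ROS 1, old tf C++ API) could look like the following. The node name and the constant velocities are made up; in a real driver vx and vth would come from the encoders and/or IMU.

    // Minimal sketch of a node that dead-reckons a pose from body-frame
    // velocities and broadcasts it as the odom -> base_link transform.
    // vx and vth are placeholders for values read from the base driver.
    #include <cmath>
    #include <ros/ros.h>
    #include <tf/transform_broadcaster.h>
    #include <tf/transform_datatypes.h>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "odom_tf_broadcaster");  // hypothetical node name
      ros::NodeHandle nh;
      tf::TransformBroadcaster broadcaster;

      double x = 0.0, y = 0.0, th = 0.0;  // integrated (dead-reckoned) pose
      double vx = 0.1, vth = 0.05;        // pretend encoder/IMU velocities

      ros::Time last = ros::Time::now();
      ros::Rate rate(50.0);
      while (nh.ok())
      {
        ros::Time now = ros::Time::now();
        double dt = (now - last).toSec();
        last = now;

        // Dead reckoning: integrate the body-frame velocity into the odom frame.
        x  += vx * std::cos(th) * dt;
        y  += vx * std::sin(th) * dt;
        th += vth * dt;

        // "odom" is the parent frame, "base_link" the child frame.
        tf::Transform transform(tf::createQuaternionFromYaw(th),
                                tf::Vector3(x, y, 0.0));
        broadcaster.sendTransform(
            tf::StampedTransform(transform, now, "odom", "base_link"));

        rate.sleep();
      }
      return 0;
    }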

Because encoder and inertial data is noisy and biased, and there is some uncertainty in the physical system, position and orientation estimates based solely on dead reckoning will quickly drift and diverge from the true position. This is where SLAM algorithms such as gmapping come in. The odometry estimate is a key input to gmapping, because at the core of the gmapping algorithm is a particle filter.

The particle filter simultaneously maintains a number of hypothetical map states and corresponding robot poses; each combination of a map and a pose is called a particle. During each map update, gmapping updates each particle's pose estimate by sampling randomly around the motion reported by the odometry. This propagates the position and heading of each particle according to the odometry, with the random sampling accounting for the possibility that the odometry is inaccurate.
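As a rough illustration of that propagation step (this is not gmapping's actual motion model, and the noise magnitudes are arbitrary), a simplified version might look like this:

    // Simplified sketch of "propagate each particle by sampling around the
    // odometry". dx, dy, dth is the motion reported by odometry since the
    // last update; the Gaussian noise spreads the particles to cover the
    // possibility that the odometry is wrong.
    #include <random>
    #include <vector>

    struct Pose { double x, y, th; };

    void propagateParticles(std::vector<Pose>& particles,
                            double dx, double dy, double dth,
                            std::mt19937& rng)
    {
      std::normal_distribution<double> trans_noise(0.0, 0.02);  // meters (assumed)
      std::normal_distribution<double> rot_noise(0.0, 0.01);    // radians (assumed)

      for (Pose& p : particles)
      {
        p.x  += dx  + trans_noise(rng);
        p.y  += dy  + trans_noise(rng);
        p.th += dth + rot_noise(rng);
      }
    }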

Based on the propagated position and heading of the particles, the current LIDAR scan is matched with each of the separate map hypotheses in each particle. Each particle is then assigned a score based on how well the scan aligns with the map hypothesis. Since the particles that have higher scores are more likely to be good estimates of the true pose, they have a much higher chance of being selected to continue in the next update iteration.
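The scoring and selection step, again heavily simplified, amounts to drawing the next generation of particles with probability proportional to their scan-match score. The scores themselves would come from matching the scan against each particle's map hypothesis, which is not shown here:

    // Simplified resampling sketch: particles with higher scan-match scores
    // are more likely to be duplicated into the next generation, and
    // low-scoring ones are likely to be dropped. scores[i] is assumed to be
    // the result of matching the current scan against particle i's map.
    #include <cstddef>
    #include <random>
    #include <vector>

    template <typename Particle>
    std::vector<Particle> resample(const std::vector<Particle>& particles,
                                   const std::vector<double>& scores,
                                   std::mt19937& rng)
    {
      std::discrete_distribution<std::size_t> pick(scores.begin(), scores.end());

      std::vector<Particle> next;
      next.reserve(particles.size());
      for (std::size_t i = 0; i < particles.size(); ++i)
        next.push_back(particles[pick(rng)]);
      return next;
    }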

Finally, once all the particles are updated, the one that yielded the highest score is output. The occupancy grid that represents the map features of that particle is published on the '/map' topic, and the corresponding pose is the algorithm's best estimate of the current pose. However, instead of just publishing the pose relative to the '/map' frame, gmapping publishes a transform from '/odom' (child frame) to '/map' (parent frame), such that the odometry pose estimate aligns with the estimated pose output by gmapping.
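In other words, the map -> odom transform acts as a correction chosen so that the chain map -> odom -> base_link composes to the SLAM pose estimate. A sketch of that composition using the old tf types (the inputs are placeholders for the real estimates, and this is not gmapping's actual code):

    // map->odom = (map->base_link) * (odom->base_link)^-1, so that
    // composing map->odom with the current odometry gives the SLAM pose.
    #include <tf/transform_datatypes.h>

    tf::Transform computeMapToOdom(const tf::Transform& map_to_base,   // SLAM estimate
                                   const tf::Transform& odom_to_base)  // odometry estimate
    {
      return map_to_base * odom_to_base.inverse();
    }

    // The result would then be broadcast with "map" as parent and "odom"
    // as child, e.g.:
    //   broadcaster.sendTransform(tf::StampedTransform(
    //       computeMapToOdom(map_to_base, odom_to_base),
    //       ros::Time::now(), "map", "odom"));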

I might be slightly misrepresenting the particular implementation of gmapping, and perhaps others could point out what I'm missing here, but this is the general idea behind particle filter SLAM algorithms.

TL;DR

So, to answer your question... it should definitely be base_link (child) --> odom (parent), and that transform should be generated by some node that integrates encoders and/or inertial sensors to get a pose estimate.

gmapping needs this transform to guide the random sampling of the robot's pose for each particle.

