Well, you do have two different `tf` tree branches (if that is the name for it): the one with `base_footprint` is fed from `slam_gmapping`, and the one with `base_link` is fed from the `scan_matcher`.
Thus, you have two different sources of a global pose.
So as long as those two do not provide the exact same transform (which is very unlikely given the probabilistic nature of the algorithms), this can obviously happen.
The way you set up your `tf` tree makes it look as if you had two different robots, not just one.
EDIT:
To shed some more light on this:
@osmancns:
Yes, your problem is normal, especially when moving the lidar around by hand. But mounting the sensor on a robot will not make this go away.
You do have two algorithms estimating the same position from the same sensor within the world coordinate system.
However, the algorithms are not perfect, and neither is the sensor data, so there will be drift between the estimated poses.
And because your `tf` tree is set up with two distinct branches, this is more or less like having two robots/sensors.
In general, it makes no sense to set the robot up the way you have. If you want more information, you would also need to share how you set up your `tf` tree, i.e. do you have a `urdf`? Another model? No model at all? The source code (i.e. the launch and config files, as well as any custom ROS nodes) would be helpful here, too.
@nightblue:
`tf` is the transformation library of ROS; check out the documentation on the ROS wiki by clicking HERE.
A `tf` tree has absolutely _nothing_ to do with OO diagrams.
With `tf` you specify the transformation from one coordinate system to another, and since every frame has exactly one parent, the resulting structure can only be a tree.
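The tree property can be sketched in a few lines of plain Python (the frame names and the parent map are made up for illustration; `tf` maintains something analogous internally):

```python
# Each frame has exactly one parent, so the transforms form a tree.
# This parent map is a made-up illustration, not real tf internals.
parents = {
    "odom_combined": "map",            # published by slam_gmapping
    "base_footprint": "odom_combined",
    "laser": "base_footprint",         # fixed mounting, e.g. from a urdf
}

def chain_to_root(frame, parents):
    """Walk up the unique parent links until the root frame is reached."""
    chain = [frame]
    while chain[-1] in parents:
        chain.append(parents[chain[-1]])
    return chain

print(chain_to_root("laser", parents))
# -> ['laser', 'base_footprint', 'odom_combined', 'map']
```

With two branches fed by two different estimators, you effectively get two such chains for the same robot.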
An example, given the figure linked above: you have the coordinate system (also called a frame) `map` at the very top of your tree. From there, the ROS node `slam_gmapping` estimates the position and orientation (also called the pose) and represents this through a transform chain, which in this example is `map --> odom_combined --> base_footprint`.
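Chaining the transforms along such a path is just composition of the individual transforms; a minimal 2D sketch with numpy and made-up numbers:

```python
import numpy as np

def transform(x, y, theta):
    """2D homogeneous transform: rotation by theta, then translation (x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Illustrative numbers only, not real estimator output:
T_map_odom = transform(1.0, 2.0, 0.0)       # map --> odom_combined
T_odom_base = transform(0.5, 0.0, np.pi/2)  # odom_combined --> base_footprint

# Multiplying along the tree edges gives the robot's pose in the map frame:
T_map_base = T_map_odom @ T_odom_base
print(T_map_base[:2, 2])  # translation part: [1.5, 2.0]
```

If a second estimator publishes its own version of one of these edges in a separate branch, you get a second, slightly different pose for the same robot.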
Usually (and this is where we probably deviate from the figure), a sensor is rigidly mounted on a robot, so the transformation from `base_footprint` to `laser` is "hardcoded" in the `urdf`.
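Such a fixed mounting could look like this in a `urdf` (a hypothetical fragment; the link names and the mounting offset are made up):

```xml
<!-- Hypothetical urdf fragment: the laser is rigidly mounted,
     so base_footprint --> laser is a fixed joint. -->
<joint name="laser_joint" type="fixed">
  <parent link="base_footprint"/>
  <child link="laser"/>
  <!-- made-up mounting offset: 20 cm forward, 15 cm up, no rotation -->
  <origin xyz="0.2 0 0.15" rpy="0 0 0"/>
</joint>
```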
Please check out the documentation on tf in detail and try to understand how transformations are represented.