Redundant use of local information in robot_localization when fusing local and global data?

asked 2019-09-09 10:49:08 -0500 by Phgo

I am using robot_localization to fuse both local and global sensor information for a 2D differential drive robot. More precisely, odometry, IMU, and, when available, global pose information are fused.

My configuration basically follows this tutorial: ros-sensor-fusion-tutorial

Following that, I have set up two separate r_l nodes for local and global fusion: the local fusion node only takes odom and IMU, while the global one additionally uses the global pose information. The local fusion node also provides the (fused) odom->base_link tf.
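For reference, the dual-EKF setup described above might look roughly like the following. This is a minimal sketch, not my actual configuration: the parameter names (frequency, two_d_mode, world_frame, odom0, imu0, pose0, etc.) are standard robot_localization parameters, but the topic names are placeholders, and each sensor input would additionally need its *_config matrix selecting which state variables to fuse.

```yaml
# Local EKF: fuses odom + IMU, publishes odom->base_link
ekf_local:
  frequency: 30
  two_d_mode: true
  odom_frame: odom
  base_link_frame: base_link
  world_frame: odom          # world_frame == odom_frame -> local mode
  odom0: /wheel/odometry     # placeholder topic
  imu0: /imu/data            # placeholder topic

# Global EKF: same local inputs plus a global pose, publishes map->odom
ekf_global:
  frequency: 30
  two_d_mode: true
  map_frame: map
  odom_frame: odom
  base_link_frame: base_link
  world_frame: map           # world_frame == map_frame -> global mode
  odom0: /wheel/odometry
  imu0: /imu/data
  pose0: /global_pose        # placeholder, e.g. an AMCL or GPS-derived pose
```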

However, I am wondering whether the proposed configuration leads to redundant use of the local sensor information, which should be avoided in sensor fusion because it can bias the results. As far as I understand, the global r_l node takes the raw sensor msgs but first transforms them into the world frame (map_frame) implicitly using the odom->base_link tf provided by the local r_l node.

Doesn't this mean that the local information is used twice in the global r_l node, once in the form of the original data and once as the fused odom->base_link tf?

If this is not the case, can someone please point me in the right direction?


1 Answer


answered 2019-11-18 03:06:43 -0500 by Tom Moore

As far as I understand, the global r_l node takes the raw sensor msgs but first transforms them into the world frame (map_frame) implicitly using the odom->base_link tf provided by the local r_l node.

In general, this isn't accurate.

In the sensor fusion nodes in r_l, the assumptions are generally

  • All velocity and acceleration data is in the robot's body (e.g., base_link) frame, or there is a transform from the sensor's frame to that body frame that does not "pass through" the world_frame->base_link_frame transform. So, e.g., you should not have velocities reported in the map frame that are getting fused in a filter.
  • All pose data is in a world-fixed frame that can be transformed to your world_frame without passing through the world_frame->base_link_frame transform. So, for example, you should not have pose data that is relative to base_link being fused.

Internally, the filter first transforms velocity data into your base_link_frame. Then, when that data passes through the kinematic model, the model itself transforms it into the world frame you specify. But velocity data, when fused in the map frame EKF, does not pass through the odom->base_link transform.
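To illustrate the point above: projecting a body-frame velocity into the world frame only requires the map EKF's own yaw estimate, not the odom->base_link transform. A minimal sketch with hypothetical numbers (the actual kinematic model in r_l is a full 15-state 3D model; this is just the planar rotation step):

```python
import numpy as np

# Body-frame velocity measurement (vx, vy) and the map EKF's own yaw
# estimate -- hypothetical values for illustration.
vx_body, vy_body = 0.5, 0.0
yaw = np.pi / 4

# Rotate the body-frame velocity into the world frame. Note that the
# odom->base_link transform appears nowhere in this computation.
c, s = np.cos(yaw), np.sin(yaw)
vx_world = c * vx_body - s * vy_body
vy_world = s * vx_body + c * vy_body
```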

Another thing to keep in mind is that, even if the velocity data were to pass through the odom->base_link transform, the effect would be meaningless for the output. The map frame EKF is actually generating the map->base_link transform, but since tf doesn't permit reparenting, we need to compute map->odom so that we can make a chain:

T_map_odom = T_map_base_link * T_odom_base_link^-1

Now we have map->odom->base_link. But you can change T_odom_base_link to be anything, and map->base_link will remain the same, since we're constantly updating T_map_odom. So data transformed from base_link to map will always be consistent, since we're keeping track of the transform between map and odom.
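The invariance described above is easy to check numerically. A sketch using 2D homogeneous transforms with made-up filter outputs: no matter what value odom->base_link takes, recomputing map->odom from the equation above leaves the composed map->base_link unchanged.

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2D transform: rotation theta, translation (x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Hypothetical filter outputs: the map EKF estimates map->base_link,
# the odom EKF estimates odom->base_link.
T_map_base = se2(5.0, 2.0, 0.3)
T_odom_base = se2(4.7, 1.8, 0.25)

# map->odom is whatever closes the chain: T_map_odom = T_map_base * T_odom_base^-1
T_map_odom = T_map_base @ np.linalg.inv(T_odom_base)

# Suppose odom->base_link drifts to a completely different value.
# map->odom is simply recomputed ...
T_odom_base_drifted = se2(3.0, -1.0, 1.2)
T_map_odom_new = T_map_base @ np.linalg.inv(T_odom_base_drifted)

# ... and the composed map->base_link is unchanged.
assert np.allclose(T_map_odom_new @ T_odom_base_drifted, T_map_base)
```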

In any case, you can generally consider the EKFs to be completely separate. The only exception is if you try to, e.g., fuse pose data inputs that are in the odom frame into the map frame EKF.

