OK, there's a lot to unpack here.

First, please always include a full sample message from every sensor input.

Second, both the map and odom frames are world-fixed. If you were to feed the same inputs to both EKF instances, you'd get exactly the same output from both filters.

In your case, the odom-frame EKF is generating the odom->camera_link transform. The map-frame EKF is technically producing the map->camera_link transform: it gives you the pose of the camera in the map frame, just as the odom-frame EKF gives you the pose of the camera in the odom frame.

The problem is that tf doesn't permit re-parenting of frames, so the map-frame EKF has to take its (internal) map->camera_link transform and multiply it by the inverse of the odom->camera_link transform, which yields the map->odom transform. But all of that should be transparent: all you care about is that you can obtain the pose of the camera in both the map and odom frames, and that you can carry out transforms along that chain.
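If it helps to see the arithmetic, here's a minimal sketch of that composition using plain numpy 4x4 homogeneous transforms (the poses below are made up purely for illustration):

```python
import numpy as np

def planar_transform(yaw, x, y):
    """4x4 homogeneous transform for a planar pose (yaw about Z, x/y translation)."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[0, 0], T[0, 1] = c, -s
    T[1, 0], T[1, 1] = s, c
    T[0, 3], T[1, 3] = x, y
    return T

# Made-up filter outputs:
T_map_cam = planar_transform(0.3, 2.0, 1.0)   # map->camera_link (map-frame EKF)
T_odom_cam = planar_transform(0.1, 1.8, 0.9)  # odom->camera_link (odom-frame EKF)

# map->odom = map->camera_link * (odom->camera_link)^-1,
# so that (map->odom) * (odom->camera_link) recovers map->camera_link.
T_map_odom = T_map_cam.dot(np.linalg.inv(T_odom_cam))
print(T_map_odom)
```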

As for your tag poses, the only packages I've used all give the pose of the tag in the camera frame. If you want to fuse tag poses, you need some sort of data store that contains the map-frame poses of all the AR tags and serves them up as, e.g., static transforms. That, or you can have a special node that looks up the tag pose in the map frame, and uses the current detection to compute the map-frame pose of the robot, which then goes to the EKF.
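To make the second option concrete, here's a rough sketch of such a node (ROS 1 Python; the topic and frame names like tag_detection, camera_pose_in_map, and tag_0 are all made up, and your detector's message type may differ):

```python
#!/usr/bin/env python
import numpy as np
import rospy
import tf2_ros
from tf.transformations import quaternion_matrix, quaternion_from_matrix
from geometry_msgs.msg import PoseStamped, PoseWithCovarianceStamped

def to_matrix(translation, rotation):
    """4x4 homogeneous transform from a translation and an [x,y,z,w] quaternion."""
    T = quaternion_matrix([rotation.x, rotation.y, rotation.z, rotation.w])
    T[:3, 3] = [translation.x, translation.y, translation.z]
    return T

def on_detection(msg):
    # Detection: pose of the tag in the camera frame
    T_cam_tag = to_matrix(msg.pose.position, msg.pose.orientation)
    # Data store: map-frame pose of the tag, served as a static transform
    stored = tf_buffer.lookup_transform('map', 'tag_0', rospy.Time(0)).transform
    T_map_tag = to_matrix(stored.translation, stored.rotation)
    # map->camera = map->tag * (camera->tag)^-1
    T_map_cam = T_map_tag.dot(np.linalg.inv(T_cam_tag))

    out = PoseWithCovarianceStamped()
    out.header.stamp = msg.header.stamp
    out.header.frame_id = 'map'
    out.pose.pose.position.x, out.pose.pose.position.y, out.pose.pose.position.z = T_map_cam[:3, 3]
    qx, qy, qz, qw = quaternion_from_matrix(T_map_cam)
    out.pose.pose.orientation.x = qx
    out.pose.pose.orientation.y = qy
    out.pose.pose.orientation.z = qz
    out.pose.pose.orientation.w = qw
    out.pose.covariance = [0.0] * 36  # fill in values appropriate to your setup
    pub.publish(out)

if __name__ == '__main__':
    rospy.init_node('tag_to_map_pose')
    tf_buffer = tf2_ros.Buffer()
    tf2_ros.TransformListener(tf_buffer)
    pub = rospy.Publisher('camera_pose_in_map', PoseWithCovarianceStamped, queue_size=1)
    rospy.Subscriber('tag_detection', PoseStamped, on_detection)
    rospy.spin()
```

You'd then point the map-frame EKF's poseN input at that output topic.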

How do you get the map-frame poses of the AR tags? You can try recording them during mapping, or you can manually measure their locations and add them to your data store. But there's no way the EKF can just take a relative AR tag pose and somehow work with it. It has to be grounded to something.
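As an example of the data store idea, a small node that broadcasts hand-measured map->tag poses as static transforms might look like this (the tag names and numbers are placeholders):

```python
#!/usr/bin/env python
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

TAGS = {  # tag frame -> (x, y, z, qx, qy, qz, qw) in the map frame
    'tag_0': (1.0, 2.0, 0.5, 0.0, 0.0, 0.0, 1.0),
    'tag_1': (4.0, 0.5, 0.5, 0.0, 0.0, 0.707, 0.707),
}

if __name__ == '__main__':
    rospy.init_node('tag_map_server')
    broadcaster = tf2_ros.StaticTransformBroadcaster()
    msgs = []
    for frame, (x, y, z, qx, qy, qz, qw) in TAGS.items():
        t = TransformStamped()
        t.header.stamp = rospy.Time.now()
        t.header.frame_id = 'map'
        t.child_frame_id = frame
        t.transform.translation.x, t.transform.translation.y, t.transform.translation.z = x, y, z
        t.transform.rotation.x, t.transform.rotation.y = qx, qy
        t.transform.rotation.z, t.transform.rotation.w = qz, qw
        msgs.append(t)
    broadcaster.sendTransform(msgs)  # accepts a list of static transforms
    rospy.spin()
```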

EDIT: also, is there any reason you are fusing every single variable for odom0? And I'd ditch both yaw and linear acceleration from the IMU, if I were you. I know you have it in differential mode, but magnetometers are awful.
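For reference, the odomN_config/imuN_config parameters are 15-element boolean vectors; a thinner selection along the lines above might look like this for a planar robot (shown here as Python lists for readability; in practice these go in your robot_localization YAML, and the right choices depend on your sensors):

```python
# Order: [x, y, z, roll, pitch, yaw,
#         vx, vy, vz, vroll, vpitch, vyaw,
#         ax, ay, az]
odom0_config = [False, False, False,  False, False, False,
                True,  True,  False,  False, False, True,   # vx, vy, vyaw only
                False, False, False]
imu0_config  = [False, False, False,  True,  True,  False,  # roll, pitch; yaw dropped
                False, False, False,  True,  True,  True,   # angular velocities
                False, False, False]                        # linear acceleration dropped
```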