Fusing two odoms (VO) using robot_localization

asked 2020-09-11 15:14:14 -0600

Equaltrace

Hi,

I am fairly new to using robot_localization. I am trying to fuse odometry from two visual odometry sources (i.e. stereo cameras) in my robot’s reference frame. The setup is drawn here

I have drawn the tf tree here

I am not sure how I should proceed to fuse the two odometry sources, as there is no tf base_link-->odom1 or base_link-->odom2.

I can work out the static tf between base_link-->base1 and base_link-->base2, but then base_link becomes the parent of base1 and base2, and I lose the base_link-->odom1 and base_link-->odom2 tfs.

I have searched for an answer among similar posts, but haven't been able to find a solution. I am pretty sure I am missing something very basic, but I can't pinpoint it. Kindly advise.


Comments

Your tree should look like map->odom_filtered->base_link->... not base_link->odomX. Take the odometry message of source 1 (I assume it's camera-centered, so its frame_id would be "camera1_base"), fuse it in the robot_localization EKF together with odom2, and output the fused data (the EKF output) in the odom_filtered frame.

Dragonslayer  ( 2020-09-11 16:55:10 -0600 )
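The tree structure described in this comment can be sketched roughly like this (frame names assumed from the rest of the thread):

```
map
└── odom                (world_frame -> base_link_frame tf published by the EKF)
    └── base_link
        ├── cam1_base   (static tf, e.g. from the URDF)
        └── cam2_base   (static tf, e.g. from the URDF)
```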

@Dragonslayer Hi. I fell asleep before I could tag you and you beat me to it, so welcome to the post. I am starting from the basics. My config file for robot_localization is below, but I am still confused about what needs to go in for odom_frame, base_link_frame, map_frame, and world_frame.

For the sake of simplicity, the frame names for camera1 are cam1_odom->cam1_base->cam1_optical... and the frame names for camera2 are cam2_odom->cam2_base->cam2_optical... and the PC is the base_link

frequency: 5
two_d_mode: true
publish_tf: true
predict_to_current_time: true
print_diagnostics: true

odom_frame: ???
base_link_frame: ???
#map_frame: ???
world_frame: ???

# xyz, rpy, vxyz, vrpy, axyz
odom0: /camera1/odom
odom0_config: [true,...
               ...false]
odom0_differential: false

odom1: /camera2/odom
odom1_config: [false,... 
               ..., false]
odom1_differential: true ...
Equaltrace  ( 2020-09-11 19:24:26 -0600 )
  • odom_frame: odom #published by the EKF itself.
  • base_link_frame: base_link #the robot base link/chassis of your system, or the projected base_footprint frame, depending on the robot model. Published to tf from the URDF robot model (usually) via robot_state_publisher.
  • map_frame: map #the map frame that the localization or SLAM package publishes.
  • world_frame: odom #defaults to odom if unspecified; for fusing continuous data, as in your case, this is the way to go.
Dragonslayer  ( 2020-09-12 09:02:52 -0600 )
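Putting those values into the asker's config, the frame block would look something like this (a sketch; frame names as used elsewhere in the thread):

```yaml
# robot_localization ekf frame parameters (sketch)
odom_frame: odom            # published by the EKF itself
base_link_frame: base_link  # the robot base/chassis frame
map_frame: map              # only relevant when fusing global pose data
world_frame: odom           # fuse continuous (odometry) data in the odom frame
```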

@Dragonslayer: I get the following warning messages repeatedly when I run the localization node with the frames set to

odom_frame: odom
base_link_frame: base_link
map_frame: map
world_frame: odom

Warnings:

[ WARN] [1599935167.107900670]: Could not obtain transform from cam2_base to base_link. Error was Could not find a connection between 'base_link' and 'cam2_base' because they are not part of the same tree.Tf has two or more unconnected trees.

[ WARN] [1599935169.111119120]: Could not obtain transform from cam1_odom to odom. Error was Could not find a connection between 'odom' and 'cam1_odom' because they are not part of the same tree.Tf has two or more unconnected trees.

...

In a way I would expect these errors, because there are no tfs establishing a relation between the cam1 tree, the cam2 tree, and base_link. :/

Equaltrace  ( 2020-09-12 15:05:47 -0600 )

You don't need the cam1 and cam2 trees, just a URDF for the robot model, from base_link up to the frames the cameras' odometries are referenced to (camera_linkX, or cam1_base, or whatever it's called).

Dragonslayer  ( 2020-09-12 16:46:11 -0600 )
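As an alternative to editing the URDF, those static transforms can also be published directly from a launch file (a sketch; the translation/rotation values here are placeholders for the real camera mounting offsets):

```xml
<!-- tf2_ros static_transform_publisher args: x y z yaw pitch roll parent child -->
<node pkg="tf2_ros" type="static_transform_publisher" name="base_to_cam1"
      args="0.1 0.05 0.2 0 0 0 base_link cam1_base" />
<node pkg="tf2_ros" type="static_transform_publisher" name="base_to_cam2"
      args="0.1 -0.05 0.2 0 0 0 base_link cam2_base" />
```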

@Dragonslayer So, I turned off the tf between camX_odom --> camX_base. I then modified the robot URDF to publish the base_link --> camX_base transforms. Now my tf tree looks like this.

And now when I try to run robot_localization, I get the following warn message repeatedly:

[ WARN] [1599967026.507911316]: Could not obtain transform from cam1_odom to odom. Error was "cam1_odom" passed to lookupTransform argument source_frame does not exist

This I would expect too, since there is no tf relating cam1_odom to any other frame. Do I need to manually publish the inverse tf from camX_base --> camX_odom?

Equaltrace  ( 2020-09-12 22:43:57 -0600 )

Are you sure it's the ekf node that sends the warning? What does your /camera1/odom message look like (frame_id)? Likely somewhere in your setup some node is looking for cam1_odom, but your odom frame now is odom (or odom_filtered, whatever you put in the ekf parameter odom_frame). You don't get a cam2_odom warning?

Dragonslayer  ( 2020-09-13 09:19:40 -0600 )

@Dragonslayer The warning messages only start when I start the ekf node, and in the same window where ekf is running. The odom_frame in the ekf config is set to odom. Here is the config file for the ekf node. If I change the order of the odoms in the config, i.e. put odom0: /camera2/odom and odom1: /camera1/odom, the warning messages change to cam2_odom warnings instead of cam1_odom warnings:

[ WARN] [1600008042.140231369]: Could not obtain transform from cam2_odom to odom. Error was "cam2_odom" passed to lookupTransform argument source_frame does not exist

This is what the /camera1/odom message looks like:

header: 
  seq: 1252
  stamp: 
    secs: 1600008164
    nsecs: 680465541
  frame_id: "cam1_odom"
child_frame_id: "cam1_base"
pose: 
  pose: 
    position: ...
Equaltrace  ( 2020-09-13 10:01:54 -0600 )

1 Answer


answered 2020-10-07 04:28:33 -0600

Tom Moore

Following on from the comments:

When it comes to taking in sensor data, the filter is primarily concerned with two coordinate frames: the value of the world_frame parameter (typically odom or map) and the value of the base_link_frame parameter (typically base_link or base_footprint).

When you provide a nav_msgs/Odometry message as a sensor input, there are two coordinate frames represented in that. The frame_id in the message refers to the pose data, and the child_frame_id refers to the twist (velocity) data.

All velocity data will get transformed from the child_frame_id of the odometry message to the base_link_frame of your EKF, so you need to provide that transform, or make sure both frames are the same.

All pose data will get transformed from the frame_id of the odometry message to the world_frame of your EKF, so you need to provide that transform, or make sure both frames are the same.

The EKF itself will generate a transform from world_frame->base_link_frame.

In your case, I'd recommend fusing just velocity data from one source. Get it working the way you want, and then add a new source.
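For the asker's 2D setup, a velocity-only configuration for the first source might look like this (a sketch; the 15 flags follow robot_localization's documented order):

```yaml
odom0: /camera1/odom
odom0_config: [false, false, false,   # x, y, z
               false, false, false,   # roll, pitch, yaw
               true,  true,  false,   # vx, vy, vz
               false, false, true,    # vroll, vpitch, vyaw
               false, false, false]   # ax, ay, az
odom0_differential: false
```

Because only twist data is fused here, only the transform from the message's child_frame_id (cam1_base) to base_link is needed, which the static tf from the URDF already provides.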

