2 VIO inputs with robot_localization

asked 2021-04-26 06:00:58 -0500 by Guglie

updated 2021-04-27 07:34:27 -0500

I have a robot with two VIO cameras, and I want to fuse the two odometries with robot_localization.

The odometry messages are reported in their respective frames, which are translated and rotated according to each sensor's mounting position and are children of the "odom" frame.

The messages report a position starting from (0, 0, 0) in their own frame.

robot_localization does not seem to handle the frames correctly, even though the Preparing Sensor Data guide of robot_localization says:

"All pose data is transformed from the message header's frame_id into the coordinate frame specified by the world_frame parameter" (set to "odom" in my case).

As a result, the filtered odometry (base_link) keeps jumping back and forth between the two sources; it looks as if the odometry poses are not transformed into the odom frame before being fused.

Is that correct? How can I handle the transforms correctly?

2 sources for robot_localization

launch:

<node pkg="tf2_ros" type="static_transform_publisher" name="odom2front" args="0.4 0 0.02 0 0 0 1 odom front_vio_odom_frame" />
<node pkg="tf2_ros" type="static_transform_publisher" name="odom2back" args="-0.4 0 0.02 0 0 1 0 odom back_vio_odom_frame" />

<node pkg="robot_localization" type="ekf_localization_node" name="ekf_se" clear_params="true">
    <rosparam command="load" file="ekf_config.yml"/>
</node>

ekf_config.yml:

map_frame: map
odom_frame: odom
base_link_frame: base_link
world_frame: odom

two_d_mode: false
publish_tf: true

# inputs
odom0: front_vio/odom/sample
odom1: back_vio/odom/sample

odom0_config: [true, true, true,
               true, true, true,
               false, false, false,
               false, false, false,
               false, false, false]
odom0_differential: false
odom0_relative: false

odom1_config: [true, true, true,
               true, true, true,
               false, false, false,
               false, false, false,
               false, false, false]
odom1_differential: false
odom1_relative: false

initial_state: [0, 0, 0,
                0, 0, 0,
                0, 0, 0,
                0, 0, 0,
                0, 0, 0]

dynamic_process_noise_covariance: true

Odometry message sample:

header: 
  seq: 1655
  stamp: 
    secs: 1619171242
    nsecs: 910251379
  frame_id: "back_vio_odom_frame"
child_frame_id: "back_vio_pose_frame"
pose: 
  pose: 
    position: 
      x: 0.0742701664567
      y: 0.00130270281807
      z: 0.325136482716
    orientation: 
      x: 0.0363613590598
      y: -0.0208929777145
      z: 0.0272305142134
      w: 0.998749136925
  covariance: [0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0001]
twist: 
  twist: 
    linear: 
      x: 0.0470368626879
      y: -0.024768557293
      z: 0.0780361119745
    angular: 
      x: -0.0763922602234
      y: -0.0473806500658
      z: -0.0343299482198
  covariance: [0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0001]

EDIT

I've tested the configuration suggested in @Tom Moore's answer ... (more)


1 Answer


answered 2021-04-27 04:59:16 -0500 by Tom Moore

updated 2021-04-28 02:46:37 -0500

I'm a bit confused by your frame setup. If the VIO cameras are affixed to your robot, then I don't see how your transform makes sense. I assume the cameras are just producing visual odometry?

Ignoring the back sensor for a moment, let's say you start your robot, turn it 90 degrees to the left, and drive 10 meters. Your front sensor is mounted 0.4 meters in front of the robot, I'm assuming. So that sensor is first going to get rotated, and then translated 10 meters. But then you are just transforming it with:

 <node pkg="tf2_ros" type="static_transform_publisher" name="odom2front" args="0.4 0 0.02 0 0 0 1 odom front_vio_odom_frame" />

As there is no rotation in that transform, you'll just be subtracting 0.4 meters from the X position, and then fusing that into your state estimate. But your robot turned 90 degrees, so you'd really want to be subtracting 0.4 meters from Y, not X. You are trying to treat your sensor, which is affixed to your robot, as though it is an external fixed sensor.

Moreover, you are assuming that, over time, your two cameras are going to agree with each other's accumulated odometry. This seems unlikely.

My recommendation would be to ensure the cameras' velocities are computed in the sensor frame, change the child_frame_id in those messages to something like front_vio_velocity_frame, and change those transforms so that they go from base_link to front_vio_velocity_frame. Then, in your EKF config, fuse only the velocity data, not the pose data. Fuse one of the sensors in the EKF first to make sure it's behaving as you want, and then add the second.
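As a sketch (not a drop-in configuration), reusing the topic names and mounting offsets from the question, and with front_vio_velocity_frame / back_vio_velocity_frame being the frame names suggested above, the transforms would become:

<node pkg="tf2_ros" type="static_transform_publisher" name="base2front" args="0.4 0 0.02 0 0 0 1 base_link front_vio_velocity_frame" />
<node pkg="tf2_ros" type="static_transform_publisher" name="base2back" args="-0.4 0 0.02 0 0 1 0 base_link back_vio_velocity_frame" />

and in ekf_config.yml only the twist fields would be enabled (the config order is x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az):

odom0: front_vio/odom/sample
odom0_config: [false, false, false,
               false, false, false,
               true,  true,  true,
               true,  true,  true,
               false, false, false]

odom1: back_vio/odom/sample
odom1_config: [false, false, false,
               false, false, false,
               true,  true,  true,
               true,  true,  true,
               false, false, false]

The VIO driver (or a small relay) would also need to set child_frame_id to the new velocity frames, since robot_localization transforms twist data from the frame named in child_frame_id into the base_link frame.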

EDIT in response to further question

Why can't an odometry (or pose) source for position be specified with a simple transform from base_link like in this case?

Because the transform required to convert your camera odometry to your odom frame will change depending on your robot's current orientation. The static transform you are defining says that, in all cases, to move from front_vio_odom_frame to odom, you just need to subtract 0.4 from X and 0.02 from Z, and that's it. But that's not true. Rotate your robot 90 degrees after starting up, and you will find that to get from your camera's frame to the odom frame, you need to subtract 0.4 from Y.

What you really need is to rotate that static transform based on the robot's current pose, and then apply it. In other words, you need to take an offset that is specified in the base_link frame and convert it to the odom frame.
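To make that concrete with a small, hypothetical numerical illustration: take the 0.4 m / 0.02 m front-camera mounting offset from the launch file and express it in the odom frame for two different robot yaws. A static transform can only encode one of these answers:

# Hypothetical illustration of the point above: the mounting offset is fixed
# in base_link, but its expression in the odom frame rotates with the robot,
# so no single static odom -> front_vio_odom_frame transform can be correct.
import numpy as np

offset_in_base_link = np.array([0.4, 0.0, 0.02])  # front camera mount, base_link frame

def offset_in_odom(robot_yaw):
    # Rotate the fixed base_link offset by the robot's current heading.
    c, s = np.cos(robot_yaw), np.sin(robot_yaw)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return rot.dot(offset_in_base_link)

print(np.round(offset_in_odom(0.0), 3))          # [0.4  0.   0.02] -> subtract 0.4 from X
print(np.round(offset_in_odom(np.pi / 2.0), 3))  # [0.   0.4  0.02] -> subtract 0.4 from Y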

The package doesn't currently support pose data that is reported from a sensor that is affixed to the robot at an offset from the origin. You'd need to either (a) use velocity data or (b ... (more)


Comments

Thanks for your answer. If I "affix" the cameras to the robot (parenting their tf to base_link), I break their connection to the world-fixed frame (which seems to be required by robot_localization), don't I?

Guglie (2021-04-27 05:30:25 -0500)

It's required if you are fusing pose data. The twist data in the odometry messages is velocity data. So if you change your EKF sensor configs to fuse those velocities instead (set X/Y/Z/roll/pitch/yaw to false, and set their velocities to true), then the transform needs to be relative to base_link.

Tom Moore (2021-04-27 05:36:21 -0500)

By the way, I have tried a few things since asking this question, and this setup kind of works if I pre-process the Odometry messages and transform them with the tf from their parent frame to odom (so they report a world position at the center of the robot and the correct orientation), but the filter outputs a shaky pose because the two VIOs don't agree perfectly.

Guglie (2021-04-27 05:41:23 -0500)
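A rough sketch of the kind of relay described in that comment (topic names are assumptions, the covariance is passed through unrotated, and it does not address the mounting-offset issue discussed in the answer):

#!/usr/bin/env python
# Rough sketch: re-publish a VIO Odometry message with its pose transformed
# from its own odom frame into "odom" using tf, before feeding it to the EKF.
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers PoseStamped with tf2's transform()
from geometry_msgs.msg import PoseStamped
from nav_msgs.msg import Odometry

rospy.init_node('vio_odom_relay')
tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)
pub = rospy.Publisher('front_vio/odom_in_odom', Odometry, queue_size=10)

def callback(msg):
    ps = PoseStamped(header=msg.header, pose=msg.pose.pose)
    try:
        # Uses the odom -> front_vio_odom_frame transform from the launch file
        ps_odom = tf_buffer.transform(ps, 'odom', rospy.Duration(0.1))
    except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
            tf2_ros.ExtrapolationException):
        return
    out = Odometry()
    out.header = ps_odom.header               # frame_id is now "odom"
    out.child_frame_id = msg.child_frame_id
    out.pose.pose = ps_odom.pose
    out.pose.covariance = msg.pose.covariance  # note: covariance not rotated here
    out.twist = msg.twist
    pub.publish(out)

rospy.Subscriber('front_vio/odom/sample', Odometry, callback)
rospy.spin()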

OK, I'll try with the twist, but the cameras do VIO (with IMUs); how can I keep the absolute gravity-vector alignment (roll/pitch) given by the accelerometers?

Guglie (2021-04-27 05:45:18 -0500)

The velocities work as expected with a simple transform from base_link, but have significantly more drift, as shown in the EDIT of the post above.

Guglie (2021-04-27 07:36:57 -0500)
