
How to rearrange the Tf Tree for multiple sensor fusion for robot_localization

asked 2022-03-29 07:00:29 -0500

Abdur

updated 2022-04-01 10:13:33 -0500

My goal is to use an EKF for odometry estimation.

I have three sources: Visual Odometry, wheel encoders and an IMU.

I have watched the ROSCon tutorial video, but I am still confused about how I should design my tf tree.

Currently, I have the following frames:

Visual odometry: t265_odom_frame -> t265_pose_frame

Wheel Odometry: odom_encoders -> base_link

Imu: odom -> d435_imu_optical_frame

Obviously this is wrong, and I can't figure out how I should rearrange everything. My robot works OK without fusion, and I am now trying to improve it.

Should the parent and child frame IDs of all my sources be the same, i.e. multiple odom -> base_link? But that doesn't make sense either, and the documentation says the odom -> base_link transform should only be published by the localization package.

Edit:

After reading from many resources, I have rearranged my tf tree as in the link.

My visual odometry msg is:

    rostopic echo -n1 /t265/odom/sample 
header: 
  seq: 143978
  stamp: 
    secs: 1648741517
    nsecs: 288775444
  frame_id: "t265_odom_frame"
child_frame_id: "t265_pose_frame"
pose: 
  pose: 
    position: 
      x: -0.6457403898239136
      y: -0.2904374897480011
      z: -0.03084297850728035
    orientation: 
      x: -0.013879550620913506
      y: -0.009088587947189808
      z: -0.9964107275009155
      w: 0.08300896733999252
  covariance: [0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0001]
twist: 
  twist: 
    linear: 
      x: -0.002801669730148961
      y: 0.0014666232674908119
      z: -0.0010684610544884003
    angular: 
      x: -6.915069864478318e-05
      y: 0.000590384206814675
      z: -0.0017910302528019177
  covariance: [0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0001]

My wheel encoder message is:

rostopic echo -n1 /odom_data_quat 
header: 
  seq: 36090
  stamp: 
    secs: 1648741577
    nsecs: 641369521
  frame_id: "t265_odom_frame"
child_frame_id: "t265_pose_frame"
pose: 
  pose: 
    position: 
      x: 1.3941340052752529
      y: -0.7237769047886113
      z: 0.0
    orientation: 
      x: 0.0
      y: 0.0
      z: -0.9992252356195777
      w: 0.039356428966556
  covariance: [0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1]
twist: 
  twist: 
    linear: 
      x: 0.0
      y: 0.0
      z: 0.0
    angular: 
      x: 0.0
      y: 0.0
      z: 0 ...

1 Answer


answered 2022-05-30 08:10:10 -0500

Tom Moore

updated 2022-05-30 08:26:14 -0500

So the best thing to do in these situations is to back up and start slowly.

First, determine what you want your coordinate frames to be called. For simplicity, let's just use odom for your world_frame (and odom_frame) and base_link for your base_link_frame. The base_link frame will be rigidly affixed to your robot's centroid. The odom->base_link transform that the EKF produces will therefore be your robot's pose in the odom frame.
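For reference, a minimal sketch of the frame-related parameters this implies for ekf_localization_node (these are all standard robot_localization options; the values are only illustrative):

frequency: 30
two_d_mode: true             # planar robot, as assumed later in this answer
odom_frame: odom
base_link_frame: base_link
world_frame: odom            # the EKF publishes the odom -> base_link transform
publish_tf: true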

Next, comment out all your input sources to the EKF except the wheel encoders. Also turn off the use_control option and get rid of all _rejection_threshold values.

Change your wheel encoder odometry so that its frame_id is odom (note that this won't really matter, as we'll be ignoring the pose data anyway) and make the child_frame_id base_link. Change your sensor config for that sensor to

odom0_config: [false, false, false,
               false, false, false,
               true, true, false,
               false, false, true,
               false, false, false]

We fuse Y velocity because it will be (correctly) reported as 0, and something needs to measure Y or Y velocity.
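Putting that first pass together, the wheel-odometry input section might look roughly like this (the topic name is taken from the /odom_data_quat echo in the question; the extra parameters are standard robot_localization options with illustrative values):

odom0: /odom_data_quat
odom0_config: [false, false, false,
               false, false, false,
               true, true, false,
               false, false, true,
               false, false, false]
odom0_differential: false
odom0_queue_size: 10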

Now drive your robot around and confirm that it does what you expect.
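To sanity-check this step, it can help to watch the filter output and the transform it publishes (assuming the default output topic and no remapping):

rostopic echo /odometry/filtered
rosrun tf tf_echo odom base_link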

For integrating the IMU, it looks like you already have a base_link->d435_optical_frame link. I assume the base_link->d435_link transform is the sensor's position on your robot, correct? If so, re-enable the IMU input, but change its config to this:

imu0: /imu/data
imu0_config: [false, false, false,
              false, false, false,
              false, false, false,
              false, false, true,
              true, false, false]

You have two_d_mode enabled, so using yaw velocity is all you should need. You could fuse absolute yaw, but I'd avoid that for now.
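A few other IMU-related parameters are worth being aware of at this stage (again, standard robot_localization options; the values below are only illustrative):

imu0_differential: false
imu0_relative: false
imu0_remove_gravitational_acceleration: true   # only matters if you later fuse linear acceleration
imu0_queue_size: 10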

Now drive your robot around again and make sure it behaves as you expect from a state estimation perspective.

Now comes the trickiest sensor input. If it were me, since the sensor produces velocity data, I'd make a static base_link->t265_pose_frame transform with the offset of the sensor from the robot's origin (a sketch of such a command is given after the config below), re-enable the sensor, and then change the sensor config to this:

odom1_config: [false, false, false,
               false, false, false,
               true, true, false,
               false, false, true,
               false, false, false]
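For the static transform mentioned above, something along these lines would work (the x y z yaw pitch roll values are placeholders for wherever the T265 actually sits on your robot):

rosrun tf2_ros static_transform_publisher 0.1 0.0 0.2 0.0 0.0 0.0 base_link t265_pose_frame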

Again, drive it around and make sure it works. If it doesn't, make note of specifically what it does wrong and update your question.


Comments

Tom's answer is very good at explaining EKF, so I've got nothing to add there. I would like to comment on your current TF.

Unless I'm mistaken, there is no need to have t265_odom_frame, t265_pose_frame & t265_link.

You have 3 odometry inputs: visual, encoders, and IMU. For your IMU you don't need to do anything else besides having your IMU message in order to use it for the EKF.

Your visual and encoder odometry both describe the relation between odom and base_link. You are right, however, that only EKF should publish your TF. The sensors themselves should only publish the odometry message, not the TF.

Joe28965  ( 2022-05-30 09:04:51 -0500 )

Hi, thank you for your answer, because I am dealing with a similar problem. In my case, however, the t265 is not mounted aligned with the base_link axes (it is rotated about two axes), and using it with the robot_localization package makes my robot go in the wrong direction. Is there any way to correct this (the t265 messages use their own frames, such as t265_odom_frame, which are difficult to fit into the tree), or should I write a node that transforms the message frames to odom and base_link respectively before feeding them to the robot_localization package?

Marabir  ( 2023-06-26 04:21:38 -0500 )
