Advice on improving pose estimation with robot_localization

asked 2018-05-21 11:20:28 -0500

updated 2018-05-29 07:28:36 -0500

Dear Tom Moore,

Let me start by thanking you for your excellent work on the robot_localization ROS package. I have been playing with it recently to estimate the pose of a differential-drive outdoor robot equipped with several sensors, and I would like to kindly ask for your opinion on it.

Perhaps you could send me some tips on how to improve the pose estimation of the robot, especially the robot's orientation. Here is a video of the pose estimation that I get.

The odometry estimate given by the EKF node is the dark orange/red one. Below is a list of the main topics available in my dataset:

 - /laser/scan_filtered            --> Laserscan data 
 - /mavros/global_position/local   --> Odometry msg fusing GPS+IMU (from mavros: Brown in the video)
 - /mavros/global_position/raw/fix --> GPS data 
 - /mavros/imu/data                --> Mavros IMU data 
 - /robot/odom                     --> Encoder data (odometry: Green in the video) 
 - /robot/new_odom                 --> Encoder data (odometry with covariance -- added offline) 
 - /tf                             --> Transforms 
 - /uwb/multilateration_odom       --> Multilateration (triangulation method providing global x,y,z) 
 - /yei_imu/data                   --> Our own IMU data 
 - /zed/odom_with_twist            --> Visual odometry from the ZED Stereolabs outdoor camera (Blue in the video)

Although I have plenty of data, as a first stage I am trying to fuse the estimate given by the onboard Ultra-Wideband (UWB) multilateration software (positional data only, no orientation) + the robot encoders, which are decent, + our IMU (on yei_imu/data).

However, as you can see, the estimated orientation of the robot is sometimes odd. I would expect the blue axis of the base_link frame (in the video) to always point up, and the red axis to always point forward. However, the red axis in particular sometimes points outwards instead of pointing in the direction of movement. This is clear here:

[Image: RViz screenshot in which the red (x) axis of the base_link frame points outwards, away from the direction of movement]

Do you have any suggestions for improving the orientation estimate of my robot?

Also, I notice that for positional tracking, it doesn't seem to make much of a difference whether I use just the UWB estimate or fuse UWB + robot encoders. I was expecting the fusion to smooth out the trajectory a bit, as the UWB data is subject to occasional jumps in position.

These are the params that I am currently using in the robot_localization software, in case you want to advise me to change anything.

Btw, I'm on ROS Kinetic, Ubuntu 16.04. Just some general guidelines and things that I could try from your perspective would be greatly appreciated. If you are interested in trying out my dataset, I can send a rosbag later.

Thank you in advance!

EDIT: posting config in-line:

    frequency: 10

    sensor_timeout: 0.25 #NOTE [D]: UWB works at 4Hz.

    two_d_mode: false
    transform_time_offset: 0.0
    transform_timeout: 0.25
    print_diagnostics: true

    publish_tf: true
    publish_acceleration: false

    map_frame: map
    odom_frame: odom
    base_link_frame: base_link
    world_frame: odom

    # UWB (x,y,z):
    odom0: uwb/multilateration_odom
    odom0_config: [true, true, true,    #x,y,z
                   false, false, false,
                   false, false, false,
                   false, false, false,
                   false, false, false]
    odom0_differential: false
    odom0_relative: false 
    odom0_queue_size: 2
    odom0_pose_rejection_threshold: 3 ...
(more)

Comments


If you could post a bag file, that would be useful.

stevejp (2018-05-22 14:39:38 -0500)

1 Answer


answered 2018-05-24 06:49:59 -0500

Tom Moore

updated 2018-05-30 04:55:49 -0500

There's quite a bit going on here, and I'm afraid that without sample input messages from every sensor input, it may be hard to help.

However, I can comment on a few things.

  • You are fusing yaw from two sources, and I'd be willing to bet they don't agree, at least not in the long term. Your odometry yaw is going to drift, and your IMU likely won't, so relative mode won't help. I'd pick one source for each absolute pose variable (unless you know they will always be reported in the same frame), and fuse the other sources as velocities, or turn on differential mode for them (see the sketch after this list).
  • Get rid of any advanced parameters, namely the rejection thresholds. My advice is to always start with one sensor, make sure it's behaving as expected, then add the other sensors one at a time until everything is working as expected. Once all of that works, you can introduce the advanced params, if needed.
  • If your robot can't move laterally, fuse the 0 value you're getting from your robot's wheel encoders for Y velocity.
  • Make sure the frames of your sensor data match, or that a transform is being provided from each sensor frame to get the data into your world_frame. For example, let's say you drive straight forward for 10 meters, and your IMU says you are going at a heading of pi/3. Your UWB position, though, says you went from (0, 0) to (10, 0), which would imply a heading of 0. That means your sensors' coordinate frames don't agree, and you need to provide a transform between them (though that will assume that your UWB setup is fixed).
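
To make the first three points concrete, here's a rough sketch of what the sensor inputs could look like, using the topic names from your question and assuming the yei IMU is your single source of absolute orientation (the advanced rejection thresholds are left out, per the second point). Treat it as a starting point rather than a drop-in config:

    # UWB: absolute x, y, z only
    odom0: uwb/multilateration_odom
    odom0_config: [true,  true,  true,    # x, y, z
                   false, false, false,   # roll, pitch, yaw
                   false, false, false,   # vx, vy, vz
                   false, false, false,   # vroll, vpitch, vyaw
                   false, false, false]   # ax, ay, az
    odom0_differential: false
    odom0_queue_size: 2

    # Wheel encoders: velocities only (including the ~0 Y velocity of the
    # diff-drive base), so the encoder yaw can't fight the IMU's absolute yaw
    odom1: robot/new_odom
    odom1_config: [false, false, false,
                   false, false, false,
                   true,  true,  false,   # vx, vy
                   false, false, true,    # vyaw
                   false, false, false]
    odom1_differential: false
    odom1_queue_size: 2

    # IMU: the single source of absolute orientation
    imu0: yei_imu/data
    imu0_config: [false, false, false,
                  true,  true,  true,     # roll, pitch, yaw
                  false, false, false,
                  false, false, true,     # vyaw
                  false, false, false]
    imu0_differential: false
    imu0_queue_size: 5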

EDIT 1 in response to comments and updates:

  • If you've updated your parameters, please update the parameters in the question as well. I still see the advanced parameters in there.
  • Re: the comment below, if the UWB is producing a pose estimate in some world frame, then you don't want a base_link->uwb_frame transform. You want a transform from odom->uwb_frame. With the notable exception of IMUs, all pose data should typically be in the world_frame or have a static transform from the sensor frame directly to the world_frame. Likewise, velocities should always be reported in the base_link_frame, or there should be a static transform directly from the sensor frame to the base_link_frame (see the sketch at the end of this answer).
  • If your IMUs are mounted at your vehicle's center and are mounted in their neutral orientation, then their frame_id being base_link makes sense. If they are offset from the vehicle's center, then you need static transforms to account for that.
  • Re: the IMUs, it's getting a bit out of scope for this question (you should probably post another one, or raise the issue with the package authors). However, if you have two IMUs and they both report absolute orientation, then yes, they ought to be within some reasonable distance of one another, assuming they are ...
(more)
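
To make the frame advice concrete, here's roughly how those requirements map onto this setup, using the frame parameters from the question's config (the imu_link frame below is just an example name, not something from your setup):

    # Sketch only: how the frame requirements above apply here
    world_frame: odom            # pose data must end up in this frame
    base_link_frame: base_link   # twist data must end up in this frame

    # Expected frames on the incoming messages:
    # - /uwb/multilateration_odom : pose input, so its header.frame_id should be "odom",
    #   or a static odom -> uwb_frame transform should be published
    #   (not base_link -> uwb_frame)
    # - /robot/new_odom           : twist input, so its child_frame_id should be "base_link"
    # - /yei_imu/data             : frame_id = "base_link" is fine only if the IMU sits at the
    #   vehicle's center in its neutral orientation; otherwise publish a static
    #   base_link -> imu_link transform for the mounting offset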

Comments

Thanks! I did as you suggested in the first 3 points. Regarding the 4th point: I have a base_link -> uwb_frame transform (uwb_frame is the child frame id of the msgs coming from /uwb/multilateration_odom). Yet both IMUs have frame_id = base_link. Should I provide static TFs for each of them? (Please see my update to the main question as well.)

DavidPortugal (2018-05-25 12:12:34 -0500)

Could you please give me some details about how you calculated the covariance matrices for the IMU?

Chubba_07 (2019-11-09 05:52:05 -0500)
