Specifics of sensor fusion algorithm of robot_localization package

asked 2018-01-25 08:40:10 -0500 by eroniki

Hello,

I am wondering how the robot_localization package formulates the sensor fusion problem when more than one sensor is used for localization. Specifically, if the sensors operate at different frequencies, how does the package handle the fusion of measurements coming from the different sensors?

For example, considering a robot equipped with an IMU running at 50 Hz and a camera providing visual odometry at 20 Hz, how does the package employ these measurements as it localizes?

Thanks in advance.


Comments

@ayrton04 asked me to post my initial question from the GitHub issues here; it is just a repost.

eroniki (2018-01-25 08:40:41 -0500)

Could you include a link to your github post?

gvdhoorn (2018-01-25 09:34:01 -0500)
1 Answer

answered 2018-01-26 03:25:44 -0500 by Tom Moore

OK, so there are really two questions here:

  1. How do we handle updates when we have sensor data coming in at different frequencies?
  2. How do we update only part of the state vector?

For (1), the EKF's cycle looks like this (a code sketch of the full loop follows the list):

  • ROS does a spinOnce, and we collect all the sensor measurements that are waiting in the ROS callback queue. We then put them in a C++ priority_queue, which is ordered by their time stamps. So after all the callbacks fire, we have an ordered measurement queue that is waiting to be processed. If you have different sensor frequencies, you may have, e.g., two IMU measurements and one wheel encoder odometry measurement in the queue.
  • We then look at the time stamp of our most recent state/EKF output (t_state), and the time stamp of the first sensor measurement in the queue (t_m0). We compute the time delta, and then we carry out an EKF prediction step over that time delta.
  • We then "correct" the prediction using the measurement obtained at t_m0, and update the EKF state. Now t_state = t_m0.
  • We then pop the next measurement from the priority queue, and repeat, until the priority queue is empty.
  • Finally, we publish the EKF state in a ROS message and via tf.

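To make the cycle concrete, here is a minimal C++ sketch of that loop. This is illustrative only, not robot_localization's actual code: the Measurement type and the predict()/correct() stubs are hypothetical stand-ins for the real EKF routines.

    // Illustrative sketch of the measurement-processing cycle described above.
    #include <queue>
    #include <vector>

    struct Measurement
    {
      double stamp;                // measurement time stamp (seconds)
      std::vector<double> values;  // measured state variables
    };

    // Comparator so the priority_queue pops the *earliest* stamp first
    // (std::priority_queue is a max-heap by default).
    struct LaterStamp
    {
      bool operator()(const Measurement &a, const Measurement &b) const
      {
        return a.stamp > b.stamp;
      }
    };

    std::priority_queue<Measurement, std::vector<Measurement>, LaterStamp> measurementQueue;
    double tState = 0.0;  // time stamp of the most recent EKF output (t_state)

    void predict(double delta) { /* project the full state forward by delta (omitted) */ }
    void correct(const Measurement &m) { /* fuse m into the state (omitted) */ }

    void processQueue()
    {
      // Pop measurements in time order: predict to each stamp, then correct.
      while (!measurementQueue.empty())
      {
        const Measurement m = measurementQueue.top();
        measurementQueue.pop();

        predict(m.stamp - tState);  // prediction step over the time delta
        correct(m);                 // correction with the measurement at t_m0
        tState = m.stamp;           // now t_state == t_m0
      }
      // ...then publish the EKF state in a ROS message and via tf.
    }

With the 50 Hz IMU / 20 Hz visual odometry example from the question, each spin simply leaves more IMU messages than camera messages in the queue; the loop above fuses them all in time-stamp order regardless of which sensor they came from.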
For (2), the EKF always carries out a full prediction for the EKF state at every measurement. In other words, no matter what the sensor data is measuring, we project the full state across the time delta, as described in (1). However, we only correct the state for the variables that were measured by that sensor. We do this by exploiting the state-to-measurement space matrix, commonly referred to as H in the literature. We compute an MxN matrix for H, where M is the number of variables in the measurement, and N is the size of the full EKF state (currently 15). We construct similar sub-matrices for the other relevant EKF matrices, and carry out our correction as in any standard EKF formulation.
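As an illustration of that partial correction, here is a hedged sketch using Eigen. Again, this is not robot_localization's actual code; correctPartial() and updateIndices are hypothetical names, and N would be 15 for the full EKF state mentioned above.

    // Sketch of a partial-state correction via the state-to-measurement matrix H.
    #include <Eigen/Dense>
    #include <vector>

    void correctPartial(Eigen::VectorXd &state,                 // N x 1 full state
                        Eigen::MatrixXd &P,                     // N x N state covariance
                        const Eigen::VectorXd &z,               // M x 1 measurement
                        const Eigen::MatrixXd &R,               // M x M measurement covariance
                        const std::vector<int> &updateIndices)  // state indices this sensor measures
    {
      const int N = static_cast<int>(state.size());
      const int M = static_cast<int>(updateIndices.size());

      // Build the M x N matrix H: one row per measured variable, with a 1
      // in that variable's column of the state vector.
      Eigen::MatrixXd H = Eigen::MatrixXd::Zero(M, N);
      for (int i = 0; i < M; ++i)
      {
        H(i, updateIndices[i]) = 1.0;
      }

      // Standard EKF correction; only the measured variables (and anything
      // correlated with them through P) are affected.
      const Eigen::MatrixXd K =
          P * H.transpose() * (H * P * H.transpose() + R).inverse();
      state += K * (z - H * state);
      P = (Eigen::MatrixXd::Identity(N, N) - K * H) * P;
    }

For instance, a sensor that only measures roll, pitch, and yaw would pass the indices of those three orientation variables, giving H three rows.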


Comments

If the buffer in ros_robot_localization_listener is just acting as a priority queue, why are findAncestor and findAncestorRecursiveYAML necessary?

coniferous (2018-01-26 09:52:48 -0500)

That class/node has nothing to do with the state estimation nodes in r_l. It was a recent contribution to the package that I’m afraid I didn’t write, so I can’t answer that question offhand.

Tom Moore (2018-01-26 12:58:52 -0500)
