
ROS Robot_Localization Functionality

asked 2021-03-25 04:26:28 -0500

JamesS

Hi there,

I recently started working with the ROS robot_localization package and wanted to ask about a few unknowns. If you could provide some insight into these, I would be greatly appreciative.

  1. How does the package handle multiple sensors of the same type (10 IMUs, for instance) if one of the sensors is publishing a grossly erroneous value compared to the others? Does it do some form of weighted averaging, or is there some built-in statistical analysis that assesses the data and detects outliers? I saw there was the differential feature to address this, but are there any other systems in place to resolve the issue?

  2. Is there a diagram of the control functionality for the EKF? I'm looking to understand how the EKF integrates/differentiates sensor data to eventually achieve a dead-reckoning estimate.

  3. Does the package generate data points from sensors that do not directly provide the required information? What I mean is: if you have a velocity sensor, do you differentiate its output to obtain an acceleration estimate to assist the EKF in comparing sensors, or does everything flow downward from the highest order toward a position estimate? My thinking is that if I have a position sensor and an acceleration sensor, would you differentiate the position data to obtain another acceleration estimate, which would help the acceleration sensor detect any outliers?

Any and all insight with respect to the aforementioned questions would be greatly appreciated.


1 Answer


answered 2021-03-25 05:27:34 -0500

Tom Moore

updated 2021-03-25 05:28:16 -0500

How does the package handle multiple sensors of the same type (10 IMUs for instance) if one of the sensors is publishing a grossly erroneous value compared to the other sensors? Does it do some form of weighted averaging or is there some built in statistical analysis that assesses the data and detects outliers?

Kalman filters are just fancy weighted averaging. The weights are determined by the filter's estimate of the error in its state and the error reported by the sensor measurements. The pure Kalman filter math doesn't account for outliers as such (it assumes that outlier measurements would correctly report their increased error), but the robot_localization package does allow you to reject outliers via the rejection_threshold parameters. Those thresholds are given as Mahalanobis distances.
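As a rough conceptual sketch (not the package's actual source), the Mahalanobis gating described above amounts to scaling the measurement residual by the innovation covariance and comparing against the threshold. The `mahalanobis_gate` helper below is hypothetical, purely for illustration:

```python
import numpy as np

def mahalanobis_gate(innovation, innovation_cov, threshold):
    """Accept a measurement only if its innovation (residual between the
    measurement and the predicted state) lies within `threshold`
    Mahalanobis distances of the prediction."""
    d_squared = innovation @ np.linalg.inv(innovation_cov) @ innovation
    return np.sqrt(d_squared) <= threshold

# A measurement close to the prediction passes the gate...
ok = mahalanobis_gate(np.array([0.1, -0.2]), np.eye(2), 2.0)
# ...while a gross outlier (e.g. one bad IMU out of ten) is rejected.
bad = mahalanobis_gate(np.array([5.0, 5.0]), np.eye(2), 2.0)
```

Because the distance is scaled by the innovation covariance, a sensor that honestly reports a large variance is penalized less than one that reports high confidence while disagreeing with the prediction.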

Is there a diagram of the control functionality for the EKF?

Not for this particular EKF, but most of the stuff you'd find online for EKFs would be accurate for this package.

What I mean by this is, if you have a velocity sensor, do you derive its value to achieve an acceleration estimate to assist in EKF comparison between sensors or is it all flowing downward from a highest order down to a position estimate?

In the core EKF math, not explicitly, no. But that's not needed. The pose and velocity variables, for example, are correlated through the transition function and the transfer function Jacobian matrix. The magic of matrix inversion is such that if you were to measure the pose at time A and again at time B, the filter would produce a velocity in the state output.
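This effect is easy to see in a minimal 1D constant-velocity Kalman filter (a plain linear KF here, standing in for the package's EKF). The filter below is only ever fed position measurements, yet its state estimate converges to the true velocity:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition model
H = np.array([[1.0, 0.0]])             # we measure position ONLY
Q = 0.01 * np.eye(2)                   # process noise
R = np.array([[0.1]])                  # measurement noise

x = np.zeros(2)  # state: [position, velocity]
P = np.eye(2)    # state covariance

# Feed position-only measurements of a target moving at 1 m/s.
for t in range(1, 20):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the position measurement z = t * 1.0 m
    z = np.array([t * 1.0])
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

# x[1] (velocity) ends up near 1 m/s, even though velocity was never measured:
# the pose-velocity correlation in F and P does the work.
```

The same mechanism is what lets robot_localization fuse a pose-only sensor with a velocity-only sensor: the correlations in the covariance matrix propagate information between the state variables.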

Having said that, the filter allows you to use differential mode for a given sensor, which will differentiate the poses between two (temporally) adjacent measurements and pass the resulting velocity to the filter. This is handy when you have two sources of disagreeing pose data, but the relative motion in each of them is correct.
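Conceptually, differential mode does something like the following (a sketch, not the package's implementation; `pose_to_twist` is a hypothetical helper, and real poses would also need the orientation handled properly):

```python
def pose_to_twist(pose_prev, pose_curr, t_prev, t_curr):
    """Differentiate two temporally adjacent pose measurements into a
    velocity measurement, which is what gets fused instead of the poses."""
    dt = t_curr - t_prev
    return [(c - p) / dt for p, c in zip(pose_prev, pose_curr)]

# Two (x, y) pose readings 0.5 s apart -> a 2 m/s forward velocity.
velocity = pose_to_twist([1.0, 0.0], [2.0, 0.0], 10.0, 10.5)
```

Because only the relative motion survives the differentiation, two pose sources with different (and mutually inconsistent) absolute origins can still both contribute correct velocity information.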

