
robot_pose_ekf and GPS

asked 2012-07-25 08:11:49 -0500

Ryan

Hi all,

We're porting some of our internal code over to ROS-standard nodes for release, and I've got two questions related to robot_pose_ekf. For background, I have been working primarily from this tutorial (which advises adding a GPS as a VO sensor) and this ROS Answers thread.


Is there a reason that robot_pose_ekf will only initialize once odometry data is available? I don't want our odometry publisher to know about the GPS sensor, so it would be ideal if there were a setting that allowed the filter to "snap" to the VO data when it is first received. I can add this and submit a patch, but I wanted to check that I'm not missing something first.


Given my review of the source, the VO data is used in a relative sense, the same as the other data. Though this makes sense for true visual odometry (scan matching, etc.), this isn't the standard way GPS data would be used for correction. In fact, I'm fairly sure it means we lose the benefit of the global reference frame that GPS provides.

Given a quick once-over of the core code, there doesn't seem to be an explicit correction step at all. Am I missing something?
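For reference, here is what I mean by an explicit correction step. This is a generic 1D illustration of a predict/correct Kalman cycle, not robot_pose_ekf code; all names are hypothetical, and the "GPS" is modeled as an absolute position fix:

```python
# Minimal 1D Kalman filter showing an explicit predict/correct cycle.
# Generic sketch, not robot_pose_ekf source: the state is a scalar
# position, the motion input is a commanded velocity, and the measurement
# is an absolute position fix (what a GPS provides in a global frame).

def predict(x, P, u, dt, q):
    """Propagate the state with motion input u; q grows the uncertainty."""
    x = x + u * dt
    P = P + q
    return x, P

def correct(x, P, z, r):
    """Fuse an absolute measurement z with variance r -- the step that is
    missing if a sensor is only ever used for relative increments."""
    K = P / (P + r)          # Kalman gain
    x = x + K * (z - x)      # innovation pulls the estimate toward z
    P = (1.0 - K) * P
    return x, P

true_x = 0.0
x, P = 0.0, 1.0
for step in range(20):
    true_x += 1.0 * 0.1                            # ground-truth motion
    x, P = predict(x, P, u=1.0, dt=0.1, q=0.05)
    x, P = correct(x, P, z=true_x, r=0.5)          # absolute "GPS" fix
```

With absolute fixes in the loop, the variance P converges to a steady state instead of growing without bound, which is exactly the property relative-only fusion gives up.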



Wholeheartedly agree: GPS should be used in a more comprehensive EKF (or UKF) framework that provides both prediction and correction. Another improvement would be to allow N sensor sources. I would participate in a development effort if there's interest.

phil0stine  ( 2012-07-25 09:46:47 -0500 )

The issues that you mentioned are the reason that we were forced to write our own EKF to properly make use of the GPS on our robot.

DimitriProsser  ( 2012-07-26 03:39:13 -0500 )

@DimitriProsser We are looking into writing our own as well. I am curious whether there are libraries you recommend. We are looking at bfl and bfilt, leaning toward bfilt.

phil0stine  ( 2012-07-26 04:59:23 -0500 )

We wrote ours entirely from the ground up using pure matrix math with Eigen. I don't recommend this unless you're really good with math and have a lot of time to spend debugging. I'm not familiar with any of the libraries, so unfortunately I can't really help there.

DimitriProsser  ( 2012-07-26 05:01:06 -0500 )

+1 to a new community-developed and supported Kalman filter implementation.

joq  ( 2012-07-26 05:26:38 -0500 )

Alright, seems I wasn't off base here. Thanks, all!

Ryan  ( 2012-07-30 14:51:02 -0500 )

I am interested in this issue too. I found this stack, which is still under development but is worth keeping an eye on.

martimorta  ( 2012-07-31 21:14:35 -0500 )

Any news on the idea of a new implementation of the Kalman filter? I'll be happy to help.

Mario Garzon  ( 2012-09-05 07:03:25 -0500 )

3 Answers


answered 2015-06-24 13:31:47 -0500

Ryan

As part of ROS Answers cleanup week: many companies, organizations, and individuals now use robot_localization instead of robot_pose_ekf.


answered 2013-02-02 10:50:54 -0500

Regarding a generic pose estimation solution, I would also like to point you to our hector_localization stack. Team Hector Darmstadt has used it for more than a year to estimate the full 3D pose of our rescue robot Hector, within the real-time loop of the Hector quadrotor, and even to estimate the position, velocity, and attitude of a small airplane as part of our flight mechanics lab.

The design goals have been similar to what Chad Rockey described in his answer. The core package currently provides a system model for generic 6DOF rigid body pose estimation based on pure IMU inputs (rates and accelerations), which can be specialized depending on the robot and on additional input values. As measurement models we currently provide direct and barometric height measurements, GPS position and velocity, magnetic field sensors as a heading reference, and a generic pose-and-twist measurement to fuse nav_msgs/Odometry messages from arbitrary sources (e.g. wheel odometry or SLAM).

The stack consists of a core library with no ROS dependencies besides message types, plus additional packages providing a node, nodelet, or Orocos Real-Time Toolkit TaskContext interface. With some minor modifications, the system and measurement models could also be implemented and loaded as plugins. In the background, an Extended Kalman Filter based on the Bayesian Filtering Library fuses all information into a consistent state estimate and additionally keeps track of which state variables are observable and which are not.

The major drawback is that hector_localization currently lacks documentation. I hope to find the time to create some wiki pages soon. Currently I am working on a redesign to remove the dependency on BFL, use Eigen directly, and give the models even more flexibility, e.g. adding augmented variables to the state vector.

However, we never used robot_pose_ekf, so I cannot tell whether hector_localization can serve as a replacement. In general, I think it is a good idea to come up with a new solution for estimating the robot pose using all available information, one that is adaptable to as many robots as possible. I would like to contribute to a community solution after having finished my thesis.



Looking forward to seeing some documentation of hector_localization!

ZdenekM  ( 2013-05-23 01:43:03 -0500 )

answered 2013-02-02 09:01:30 -0500

Chad Rockey

First to answer the question:


robot_pose_ekf initializes to the odometry just so that it is comparable with the wheel odometry. It's arbitrary and could be fixed.


robot_pose_ekf only uses the EKF logic to determine dx, dy, and dheading (equivalent to calculating the Twist) for the N sensors. It then integrates these outside of the EKF, producing an odometry estimate, not a localization estimate. This is common when you do not have absolute references, and your variance estimates for x, y, and yaw will tend toward infinity.
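The "integrate outside the EKF" behavior can be sketched roughly as follows. This is an illustration with hypothetical names, not the actual robot_pose_ekf source:

```python
import math

# Rough sketch of accumulating filtered increments (dx, dy, dheading)
# into a pose, as described above. Hypothetical names, not the actual
# robot_pose_ekf implementation. Because only relative increments are
# fused, any error in them accumulates forever: the output is odometry,
# not localization.

def integrate(pose, dx, dy, dheading):
    x, y, th = pose
    # Rotate the body-frame increment into the odom frame, then accumulate.
    x += dx * math.cos(th) - dy * math.sin(th)
    y += dx * math.sin(th) + dy * math.cos(th)
    th += dheading
    return (x, y, th)

pose = (0.0, 0.0, 0.0)
for _ in range(4):                       # drive a square: 1 m forward, turn 90 deg
    pose = integrate(pose, 1.0, 0.0, math.pi / 2)
```

With perfect increments the square closes back on the origin; with even a small bias in dheading it would not, and nothing in the integration step can ever correct it.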

When using GPS in robot_pose_ekf, it is difficult not to lose the global reference. If you choose to use the global reference, you tend to wash out your other corrections in GPS noise, and your initial "corrections" will tend to be very large.

The future: I'm working on an odometry and localization filter to be released in Hydro.

It's built on Eigen and will probably be a UKF implementation (hence the generic name), although in a later version I may support EKF implementations as well.

It's going to start small and replace robot_pose_ekf's level of functionality. The eventual difference is that it will also support global localization sensors. The best way to add GPS to these measurements is through a chain like the following:

drifty GPS frame (like 'map' from amcl) -> fused odometry -> robot

Then the GPS version just sends corrections from the origin of the fused odometry to the GPS/UTM origin. This will likely rely on geographic_info for the GPS-to-meters conversion, so I don't expect the integration to be 100% complete in Hydro.
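A rough sketch of that frame-chain idea, assuming the GPS fix has already been converted to a flat local frame in meters (the real chain would go through geographic_info/UTM, and the offset's rotation is ignored here for simplicity; all names are hypothetical):

```python
# Sketch of the map -> fused odometry -> robot chain described above.
# Instead of fusing GPS into the odometry itself, we publish a (drifting)
# map->odom translation so that composing map->odom->robot places the
# robot at the GPS fix. Flat local frame in meters assumed; rotation of
# the offset ignored for simplicity.

def map_to_odom_offset(gps_xy, odom_xy):
    """Translation mapping the fused-odometry pose onto the GPS fix."""
    return (gps_xy[0] - odom_xy[0], gps_xy[1] - odom_xy[1])

def to_map(odom_xy, offset):
    """Express an odom-frame position in the (GPS-anchored) map frame."""
    return (odom_xy[0] + offset[0], odom_xy[1] + offset[1])

# Odometry has drifted 2 m east relative to the GPS fix:
offset = map_to_odom_offset(gps_xy=(10.0, 5.0), odom_xy=(12.0, 5.0))
corrected = to_map((12.0, 5.0), offset)
```

The nice property is that the locally smooth odometry is untouched; only the map->odom offset jumps when a new fix arrives, which is the same convention amcl uses for its map frame.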

I'm not taking feature requests at this time, as my priority is to get something out for Hydro so that we can all stop struggling with using robot_pose_ekf outside of what it was designed for. It will support nav_msgs::Odometry and sensor_msgs::Imu in various configurations for the locally continuous version, so between this and my other projects, I'd prefer to properly unit test and rostest as opposed to supporting modifications just yet.


Question Tools

Seen: 10,316 times

Last updated: Jun 24 '15