Robotics StackExchange | Archived questions

How to correctly fuse GPS for localization?

Hi all,

I'm following this tutorial to integrate GPS data into my localization solution.

The question I have is pretty general. In the linked page it is stated that:

navsat_transform_node transforms GPS data into a frame that is consistent with your robot's starting pose (position and orientation) in its world frame. This greatly simplifies fusion of GPS data.

What if one wants to do the opposite, that is, transform the estimated pose in the map frame into the GPS frame? Is it feasible? Why would it complicate the fusion?

Thanks

Asked by schizzz8 on 2022-05-23 14:29:35 UTC

Comments

Answers

What do you mean by "in the GPS frame"? Do you mean a coordinate frame, or latitude/longitude?

Your GPS data arrives in a gps frame as latitude, longitude, and altitude. navsat_transform_node first converts it from lat/long to UTM coordinates, then from UTM to the map frame if a datum is available. At that point you have the pose of the GPS device with respect to the map frame. navsat_transform_node then uses the tf tree to look up the transform between the GPS frame_id and base_link, and publishes the /odometry/gps topic, which is the pose of the robot in the map frame. The EKF subscribes to this topic to estimate the final pose.
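To make the last two steps concrete, here is a minimal 2D sketch of the math involved, assuming a datum (the map origin expressed in UTM) is already known. All function and variable names here are illustrative, not the actual robot_localization API, and only yaw is handled:

```python
import math

# Illustrative sketch (not the real navsat_transform_node code):
# 1) express the UTM fix in the map frame via the datum,
# 2) shift from the antenna (gps frame) back to base_link using the
#    body-frame mounting offset.

def utm_to_map(p_utm, datum_utm, datum_yaw):
    """Express a UTM point in the map frame, given the map origin (datum) in UTM."""
    dx = p_utm[0] - datum_utm[0]
    dy = p_utm[1] - datum_utm[1]
    c, s = math.cos(-datum_yaw), math.sin(-datum_yaw)
    return (c * dx - s * dy, s * dx + c * dy)

def gps_to_base_link(p_gps_map, robot_yaw, offset_base_to_gps):
    """Remove the antenna's body-frame mounting offset, rotated into the map frame."""
    ox, oy = offset_base_to_gps
    c, s = math.cos(robot_yaw), math.sin(robot_yaw)
    return (p_gps_map[0] - (c * ox - s * oy),
            p_gps_map[1] - (s * ox + c * oy))

# Example: antenna measured at UTM (500100, 4649980), datum at (500000, 4650000),
# map axes aligned with UTM (datum_yaw = 0); antenna mounted 0.5 m ahead of
# base_link, robot currently heading "north" in the map frame (yaw = pi/2).
p_map = utm_to_map((500100.0, 4649980.0), (500000.0, 4650000.0), 0.0)
p_base = gps_to_base_link(p_map, math.pi / 2, (0.5, 0.0))
print(p_base)  # the antenna position minus the rotated mounting offset
```

In the real node the body-frame offset comes from the tf lookup between the GPS frame_id and base_link rather than a hard-coded tuple, and the full 3D orientation is used.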

So what you want is the final pose estimated by the EKF from the GPS data, expressed in reference to the map frame?

Answered by xaru8145 on 2022-06-04 19:37:26 UTC

Comments

I'm not an expert on this subject, but my understanding is that, in general, it "complicates" getting to a final estimated pose by requiring extra matrix multiplications. Don't forget that in a good implementation, the covariance also needs to be transformed.
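To illustrate the covariance point: when a position estimate is re-expressed in a rotated frame, its covariance must be conjugated by the rotation, cov' = R cov R^T. A small hypothetical 2D example (plain Python, names are illustrative):

```python
import math

# Sketch: rotating a 2x2 position covariance into another frame.
# Re-expressing a pose in a frame rotated by `yaw` requires cov' = R cov R^T --
# these are the extra matrix multiplies mentioned above.

def rotate_cov_2d(cov, yaw):
    c, s = math.cos(yaw), math.sin(yaw)
    R = [[c, -s], [s, c]]
    # R * cov
    rc = [[sum(R[i][k] * cov[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    # (R * cov) * R^T
    return [[sum(rc[i][k] * R[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A fix that is accurate east-west (0.1 m^2) but noisy north-south (1.0 m^2),
# viewed in a frame rotated 90 degrees: the axes of uncertainty swap.
cov = [[0.1, 0.0], [0.0, 1.0]]
rotated = rotate_cov_2d(cov, math.pi / 2)
print(rotated)
```

Skipping this step (i.e. copying the covariance unchanged into the new frame) would tell the EKF the measurement is accurate along the wrong axis.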

Answered by Mike Scheutzow on 2022-06-05 08:06:27 UTC

Comments