
robot_localization global param

asked 2019-08-15 13:47:19 -0500

EdwardNur

I have wheel odometry on the odom->base_link transform; I produce this TF and publish the odom topic based on encoder readings.

Now I also have visual odometry (ORB-SLAM), and I can produce a PoseWithCovariance message whose covariance matrix consists of 0s and 1s. From time to time, while navigating or doing SLAM, I want to correct my pose to compensate for wheel odometry errors, and I was thinking of using the VO in the robot_localization package to do this. If I fuse the VO in a global frame (map -> base_link), will it correct the position of my robot via the map->odom TF?
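For reference, a map-frame EKF instance in robot_localization might be configured along these lines. This is a minimal sketch, not a tested setup: the parameter names follow robot_localization's ekf_localization_node, but the topic names (/wheel_odom, /orb_slam/pose) are assumptions for illustration.

```yaml
# Second (map-frame) EKF instance: publishes map->odom.
frequency: 30
two_d_mode: true
map_frame: map
odom_frame: odom
base_link_frame: base_link
world_frame: map            # world_frame == map_frame makes this the global EKF

odom0: /wheel_odom          # wheel encoder odometry (assumed topic name)
# fuse vx, vy, and yaw velocity from the wheel odometry
odom0_config: [false, false, false,
               false, false, false,
               true,  true,  false,
               false, false, true,
               false, false, false]
odom0_differential: false

pose0: /orb_slam/pose       # VO pose (assumed topic name)
# fuse x, y, and yaw from the visual odometry pose
pose0_config: [true,  true,  false,
               false, false, true,
               false, false, false,
               false, false, false,
               false, false, false]
pose0_differential: false
```

A typical setup runs a second, odom-frame EKF instance (world_frame: odom) alongside this one, fusing only the continuous odometry sources.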


1 Answer


answered 2019-09-26 07:34:15 -0500

Tom Moore

So the trouble with visual odometry is that it's still odometry, and while it's probably more accurate than wheel encoders, it's still subject to drift over long periods. So you wouldn't really be "correcting" your odometry pose; your map frame pose would just be a bit more accurate than your odom frame pose. But over time, both will drift to unusable levels, which matters if remaining localized is important.

What you really need is a node like amcl that is giving you a global pose estimate. That will prevent drift in the map frame EKF. (Note that you can also just run amcl by itself, without feeding the output to an EKF). If you're doing SLAM, then you probably don't need an EKF either. The SLAM package will be generating the map->odom transform, and will be localizing the robot globally as it drives around.



@Tom Moore I do not think that you are correct. Visual odometry, if based on keypoints, will localize the robot using loop closure, since it compares the current keypoints to previously stored ones; its covariance will collapse, ultimately fixing the odometry drift.

EdwardNur (2019-09-26 11:58:39 -0500)

I was assuming that your visual odometry was using sequential frames to estimate motion, much like scan-to-scan matching can be used with planar lasers. Even with loop closure involved, your pose will still drift over time. You'll just get a corrective signal when the loop closure happens, and your EKF pose will jump with it. But it won't correct the previous poses in the chain, like a graph-based approach might.

Tom Moore (2019-10-07 05:22:12 -0500)
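The distinction Tom is drawing can be sketched in a toy 1-D example (this is an illustration of the filter-vs-graph behavior, not robot_localization code): dead-reckoned poses accumulate a bias, and a filter-style loop-closure correction snaps only the current estimate, leaving the earlier poses in the chain uncorrected.

```python
# Toy illustration: odometry drift and a filter-style loop-closure jump.

def integrate_odometry(steps, true_step=1.0, bias=0.05):
    """Accumulate 1-D odometry where each step carries a constant bias."""
    poses = [0.0]
    for _ in range(steps):
        poses.append(poses[-1] + true_step + bias)
    return poses

def apply_loop_closure(poses, absolute_pose):
    """Filter-style correction: only the latest pose jumps to the fix."""
    corrected = list(poses)
    corrected[-1] = absolute_pose
    return corrected

poses = integrate_odometry(10)                 # drifted chain, final pose ~10.5
corrected = apply_loop_closure(poses, 10.0)    # absolute fix at the true position

print(corrected[-1])   # current pose jumps to the absolute fix
print(corrected[5])    # intermediate poses keep their accumulated error (~5.25)
```

A graph-based SLAM back end would instead re-optimize the whole chain when the loop closes, pulling the intermediate poses toward consistency as well.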

@Tom Moore I think you are stating obvious things which are not relevant. To correct the pose of the robot after local localization drift, feature-based localization is enough, and we do not need to know whether the position of the robot was incorrect before. That only matters for SLAM, not for navigation. So whenever the feature-based VO corrects the pose of the robot upon seeing previously saved keypoints, the TF of the robot will jump to the most probable position, which is enough; I do not see how knowledge of the previous positions matters in that case. There is a reason why HMMs exist.

EdwardNur (2019-10-07 10:19:11 -0500)

Yes, Edward, I know. I'm just letting you know how the filter will behave when you give it an absolute pose after a long period of relative-only measurements. If you're unhappy with my answers, I will close mine and you are welcome to solicit advice from someone else.

Tom Moore (2019-10-07 10:37:33 -0500)

