robot_localization global param
I have wheel odometry in odom->base_link: I produce this TF and publish the odom topic based on encoders.
Now I also have visual odometry in place (ORB-SLAM), and I can produce a PoseWithCovariance message whose covariance matrix consists of 0s and 1s. From time to time, while navigating or doing SLAM, I want to correct my pose due to wheel odometry errors, and I was thinking of using the VO in the robot_localization package to do the correction. If I fuse the VO in a global frame (map->base_link), will it correct the position of my robot via the map->odom TF?
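For reference, this is the dual-EKF setup robot_localization supports. A minimal sketch of the parameters (parameter names follow the package; the topic names and the particular true/false selections here are assumptions for illustration):

```yaml
# Local EKF: fuses wheel odometry only, publishes odom -> base_link
ekf_local:
  frequency: 30
  map_frame: map
  odom_frame: odom
  base_link_frame: base_link
  world_frame: odom            # local EKF estimates in the odom frame
  odom0: /wheel/odometry       # assumed topic name
  odom0_config: [false, false, false,   # x, y, z
                 false, false, false,   # roll, pitch, yaw
                 true,  true,  false,   # vx, vy, vz
                 false, false, true,    # vroll, vpitch, vyaw
                 false, false, false]   # ax, ay, az

# Global EKF: also fuses the VO pose, publishes map -> odom
ekf_global:
  frequency: 30
  map_frame: map
  odom_frame: odom
  base_link_frame: base_link
  world_frame: map             # global EKF estimates in the map frame
  odom0: /wheel/odometry
  odom0_config: [false, false, false,
                 false, false, false,
                 true,  true,  false,
                 false, false, true,
                 false, false, false]
  pose0: /orb_slam/pose        # assumed topic for the VO PoseWithCovariance
  pose0_config: [true,  true,  false,   # fuse absolute x, y
                 false, false, true,    # fuse absolute yaw
                 false, false, false,
                 false, false, false,
                 false, false, false]
```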
Asked by EdwardNur on 2019-08-15 13:47:19 UTC
Answers
So the trouble with visual odometry is that it's still odometry, and while it's probably more accurate than wheel encoders, it's still going to be subject to drift over long periods. So you wouldn't really be "correcting" your odometry pose; your map frame pose would just be a bit more accurate than your odom frame pose. But over time, both will drift to unusable levels, which matters if remaining localized is important.
What you really need is a node like amcl that gives you a global pose estimate. That will prevent drift in the map frame EKF. (Note that you can also just run amcl by itself, without feeding the output to an EKF.) If you're doing SLAM, then you probably don't need an EKF either: the SLAM package will be generating the map->odom transform, and will be localizing the robot globally as it drives around.
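To make the frame relationship concrete: the map->odom transform published by amcl (or by a map-frame EKF) is just the global pose composed with the inverse of the odometry pose, so that the chain map->odom->base_link equals the global estimate. A minimal 2D sketch with plain NumPy (not the ROS/tf2 API; the poses are made-up numbers):

```python
import numpy as np

def transform(x, y, yaw):
    """Build a 2D homogeneous transform from a pose."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Pose of the robot according to the global localizer (map -> base_link)
T_map_base = transform(5.0, 2.0, np.pi / 2)

# Pose according to wheel odometry (odom -> base_link), which has drifted
T_odom_base = transform(4.5, 2.2, np.pi / 2 + 0.05)

# map -> odom is whatever transform reconciles the two chains:
#   T_map_base = T_map_odom @ T_odom_base
T_map_odom = T_map_base @ np.linalg.inv(T_odom_base)

# Sanity check: composing the chain recovers the global pose
assert np.allclose(T_map_odom @ T_odom_base, T_map_base)
```

When the global localizer corrects itself (e.g. on a loop closure), map->odom jumps, while odom->base_link stays continuous; that is exactly the behavior discussed in the comments below.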
Asked by Tom Moore on 2019-09-26 07:34:15 UTC
Comments
@Tom Moore I do not think that you are correct. Visual odometry, if based on keypoints, will localize the robot using loop closure: it compares the current keypoints to previously stored ones, its covariance collapses, and that ultimately fixes the odometry drift.
Asked by EdwardNur on 2019-09-26 11:58:39 UTC
I was assuming that your visual odometry was using sequential frames to estimate motion, much like scan-to-scan matching can be used with planar lasers. If you have loop closure involved, then your pose will still drift over time. You'll just get a corrective signal when the loop closure happens, and your EKF pose will jump with it. But it won't correct the previous poses in the chain, like a graph-based approach might.
Asked by Tom Moore on 2019-10-07 05:22:12 UTC
@Tom Moore I think you are telling obvious things which are not relevant. In order to correct the pose of the robot despite local localization drift, feature-based localization is enough; we do not need to know whether the position of the robot was incorrect before. That only matters for SLAM, not for navigation. So whenever the feature-based VO corrects the pose of the robot upon seeing a previously saved keypoint, the TF of the robot will jump to the most probable position, which is enough, and I do not see how knowledge of the previous positions matters in that case. There is a reason HMMs exist.
Asked by EdwardNur on 2019-10-07 10:19:11 UTC
Yes, Edward, I know. I'm just letting you know how the filter will behave when you give it an absolute pose after a long period of relative-only measurements. If you're unhappy with my answers, I will close mine and you are welcome to solicit advice from someone else.
Asked by Tom Moore on 2019-10-07 10:37:33 UTC
I have the same problem. But I was wondering why you would need to run a node like amcl when you can use the robot_localization package?
Asked by Astronaut on 2023-03-30 02:44:20 UTC