Visual odometry with pose-graph optimization
Hello,
I am performing visual odometry with viso2 on a webcam and want to do pose-graph optimization. I can visualise the poses and draw a trajectory with jsk_rviz_plugins. I am publishing the /base_link -> /camera tf that viso2 needs. I am also performing depth estimation via deep learning in another node.
Now I want to create keyframes and do pose-graph optimization on them. I know about g2o, Ceres Solver, OpenKarto, Cartographer, and OpenSLAM's vertigo, but I am unsure how to use them or where to even start. I found other packages that require laser data to be published, but I am trying to implement monocular vSLAM without other sensors.
Any direction, advice, or reference code would be appreciated.
Asked by incendiary on 2019-08-24 12:23:36 UTC
Answers
First of all, for graph SLAM you need to establish relative constraints between your keyframes. With only (visual) odometry, you cannot do optimization. If you create point clouds from your depth data, you can use ICP to create constraints between keyframes. Then you can add all keyframes and constraints to your optimizer (e.g. g2o) to create a global map.
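For example, if you already have point clouds for two keyframes, a PCL ICP call along these lines gives you the relative transform that becomes the constraint between them. This is only a minimal sketch; the function and variable names are placeholders, the thresholds need tuning, and your learned depth has to produce clouds with a reasonably consistent scale for the registration to be meaningful:

```cpp
// Sketch: derive a relative-pose constraint between two keyframes by
// registering their point clouds with PCL's ICP. Names are placeholders.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>
#include <Eigen/Geometry>

Eigen::Isometry3d computeKeyframeConstraint(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& source_cloud,  // cloud of the new keyframe
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& target_cloud,  // cloud of the previous keyframe
    const Eigen::Isometry3d& odometry_guess)                  // relative pose from viso2
{
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(source_cloud);
  icp.setInputTarget(target_cloud);
  icp.setMaxCorrespondenceDistance(0.5);  // metres; tune to your depth scale
  icp.setMaximumIterations(50);

  // Seed ICP with the odometry estimate so it only has to refine it.
  pcl::PointCloud<pcl::PointXYZ> aligned;
  icp.align(aligned, odometry_guess.matrix().cast<float>());

  if (!icp.hasConverged())
    return odometry_guess;  // fall back to plain odometry if registration fails

  Eigen::Isometry3d relative = Eigen::Isometry3d::Identity();
  relative.matrix() = icp.getFinalTransformation().cast<double>();
  return relative;
}
```

The resulting transform, together with some estimate of its uncertainty, is exactly what goes into the pose graph as an edge between the two keyframe vertices.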
Answered by Sebastian Kasperski on 2019-08-26 06:48:27 UTC
Comments
Hey, thanks for the reply.
What kind of global map optimization can I do for visual-odometry-only SLAM? If I understand my task correctly, I need to correct the accumulated drift error in the pose estimation.
Commented by incendiary on 2019-08-26 14:52:32 UTC
With only odometry, you cannot do any optimization. Think of optimization as a method of data fusion: with only one source of information, there is nothing to fuse. So you need to think about what additional information you have (e.g. relative poses from scan matching, GPS, inertial measurements, loop closure detection, ...). Optimization can then use all of this information to create the most likely environment model given your sensor data.
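To make that concrete, here is a small, self-contained g2o sketch, not your exact pipeline: the keyframe poses and the loop-closure measurement below are placeholder values, and the solver construction differs slightly between g2o versions. Keyframe poses become VertexSE3 vertices, relative poses between consecutive keyframes become EdgeSE3 edges, and one extra loop-closure edge is what actually lets the optimizer redistribute the accumulated drift.

```cpp
// Minimal pose-graph sketch with g2o: odometry edges plus one loop-closure edge.
// Keyframe poses here are synthetic placeholders; in practice they come from viso2
// and the loop-closure measurement from ICP / place recognition.
#include <memory>
#include <vector>
#include <Eigen/Geometry>
#include <g2o/core/sparse_optimizer.h>
#include <g2o/core/block_solver.h>
#include <g2o/core/optimization_algorithm_levenberg.h>
#include <g2o/solvers/eigen/linear_solver_eigen.h>
#include <g2o/types/slam3d/vertex_se3.h>
#include <g2o/types/slam3d/edge_se3.h>

int main()
{
  // Solver setup (constructor signatures vary slightly between g2o releases).
  using BlockSolverType  = g2o::BlockSolverX;
  using LinearSolverType = g2o::LinearSolverEigen<BlockSolverType::PoseMatrixType>;
  auto* algorithm = new g2o::OptimizationAlgorithmLevenberg(
      std::make_unique<BlockSolverType>(std::make_unique<LinearSolverType>()));

  g2o::SparseOptimizer optimizer;
  optimizer.setAlgorithm(algorithm);

  // Placeholder keyframe poses (1 m steps along x), standing in for viso2 output.
  std::vector<Eigen::Isometry3d, Eigen::aligned_allocator<Eigen::Isometry3d>> poses;
  for (int i = 0; i < 5; ++i) {
    Eigen::Isometry3d T = Eigen::Isometry3d::Identity();
    T.translation() = Eigen::Vector3d(1.0 * i, 0.0, 0.0);
    poses.push_back(T);
  }

  // One vertex per keyframe; fix the first one to anchor the graph.
  for (size_t i = 0; i < poses.size(); ++i) {
    auto* v = new g2o::VertexSE3();
    v->setId(static_cast<int>(i));
    v->setEstimate(poses[i]);
    v->setFixed(i == 0);
    optimizer.addVertex(v);
  }

  // Sequential constraints between consecutive keyframes (odometry or ICP results).
  for (size_t i = 0; i + 1 < poses.size(); ++i) {
    auto* e = new g2o::EdgeSE3();
    e->setVertex(0, optimizer.vertex(static_cast<int>(i)));
    e->setVertex(1, optimizer.vertex(static_cast<int>(i + 1)));
    e->setMeasurement(poses[i].inverse() * poses[i + 1]);
    e->setInformation(Eigen::Matrix<double, 6, 6>::Identity());
    optimizer.addEdge(e);
  }

  // One loop-closure constraint: the last keyframe re-observes the first one.
  // Its measurement must come from an independent source (ICP, place recognition).
  auto* loop = new g2o::EdgeSE3();
  loop->setVertex(0, optimizer.vertex(0));
  loop->setVertex(1, optimizer.vertex(static_cast<int>(poses.size()) - 1));
  loop->setMeasurement(Eigen::Isometry3d::Identity());                   // placeholder
  loop->setInformation(10.0 * Eigen::Matrix<double, 6, 6>::Identity());  // more trusted
  optimizer.addEdge(loop);

  optimizer.initializeOptimization();
  optimizer.optimize(20);  // corrected poses are now in the vertex estimates
  return 0;
}
```

With only the sequential odometry edges, the optimizer would simply reproduce the odometry trajectory; it is the extra edge from an independent source that gives it something to fuse and lets it spread the drift over the whole graph.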
Commented by Sebastian Kasperski on 2019-08-27 04:51:34 UTC