1

How to correct ORB-SLAM2 scale drift using a mono or stereo camera

asked 2021-02-11 19:36:21 -0600

Tonzie

updated 2021-02-13 04:46:57 -0600

Hi everyone, I have a problem. I need to implement and tune ORB-SLAM2 in ROS. I can do it with both a mono and a stereo camera. The problem is that when I plot the trajectories, especially in the mono case, there is scale drift. I have been reading a lot of documents about this, and I now know it is an unsolved problem in the community, still being studied at the research level. I have tried many approaches, but none of them worked. My questions are: what simple task or tasks could I run in parallel with ORB-SLAM2, and how? And if I wanted to write an EKF or another filter to fuse, for example, laser data with ORB-SLAM2 to correct the scale drift, how could I do that? Which data would I need, and how can I use them simultaneously with multiple subscribers (see the sketch below for what I attempted)? Thanks in advance!
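To make the last question concrete, this is the skeleton I tried, without ever getting the fusion part to work. It is only a sketch: the topic names and message types are guesses for my setup, and I am assuming the ROS1 message_filters package for the time synchronization.

```python
#!/usr/bin/env python
# Sketch only: receive the ORB-SLAM2 pose and the wheel/laser odometry
# in one callback using message_filters. Topic names are placeholders.
import rospy
import message_filters
from geometry_msgs.msg import PoseStamped
from nav_msgs.msg import Odometry

def fused_callback(slam_pose, odom):
    # Both messages arrive here with (approximately) matching timestamps,
    # so they can be compared or fused in a single place.
    rospy.loginfo("SLAM x: %.3f, odom x: %.3f",
                  slam_pose.pose.position.x, odom.pose.pose.position.x)

if __name__ == "__main__":
    rospy.init_node("scale_fusion_sketch")
    slam_sub = message_filters.Subscriber("/orb_slam2/pose", PoseStamped)
    odom_sub = message_filters.Subscriber("/odom", Odometry)
    # slop = maximum allowed timestamp difference, in seconds
    sync = message_filters.ApproximateTimeSynchronizer(
        [slam_sub, odom_sub], queue_size=10, slop=0.1)
    sync.registerCallback(fused_callback)
    rospy.spin()
```

The ApproximateTimeSynchronizer only fires the callback once it has one message from each topic with timestamps within `slop` seconds of each other, which is what lets both streams be handled together.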


Comments

From the GitHub README:

ORB-SLAM2 is a real-time SLAM library for Monocular, Stereo and RGB-D cameras that computes the camera trajectory and a sparse 3D reconstruction (**in the stereo and RGB-D case with true scale**).

So right out of the box, monocular cannot give you true scale, and the scale will drift. I would suggest using stereo (you said it's an option); you should automatically get a metric reconstruction with essentially no scale drift.

JackB  ( 2021-02-13 09:00:13 -0600 )

Thank you @JackB, I know, but I need to find a way to get at least the scale in the mono case (which is different at every initialization). I read something about the robot_localization package, which provides an EKF and a UKF to fuse sensor data from laser/IMU/odometry and other sources, but I don't know how to use it. So I tried to create a simple ROS node in Python myself, unfortunately without success, because I have not really understood how to adjust ORB-SLAM2 with other data. There are a few problems; for example, I can't subscribe to multiple topics and manipulate them simultaneously (the sketch after this comment shows the naive scale-correction idea I was aiming for). I have been working on this task for quite some time and I can't really find a way: I have read a lot of papers, but none of them show possible solutions. I need this for an exam at university, but I didn't know ...(more)

Tonzie  ( 2021-02-13 17:19:04 -0600 )
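The naive version of that scale-correction idea boils down to comparing how far the laser/wheel odometry says the robot moved against how far the monocular SLAM pose says it moved over the same interval, and treating the ratio as the metric scale. Below is a rough sketch of that, not robot_localization's EKF; the topic names are again placeholders.

```python
#!/usr/bin/env python
# Sketch of a naive scale estimator: the ratio between odometry
# displacement and monocular SLAM displacement over the same interval
# approximates the metric scale. Crude per-segment estimate, not an EKF.
import numpy as np
import rospy
import message_filters
from geometry_msgs.msg import PoseStamped
from nav_msgs.msg import Odometry

class ScaleEstimator(object):
    def __init__(self):
        self.prev_slam = None
        self.prev_odom = None
        self.scale = 1.0

    def callback(self, slam_pose, odom):
        s = np.array([slam_pose.pose.position.x,
                      slam_pose.pose.position.y,
                      slam_pose.pose.position.z])
        o = np.array([odom.pose.pose.position.x,
                      odom.pose.pose.position.y,
                      odom.pose.pose.position.z])
        if self.prev_slam is not None:
            d_slam = np.linalg.norm(s - self.prev_slam)
            d_odom = np.linalg.norm(o - self.prev_odom)
            if d_slam > 1e-3:  # skip segments with no real motion
                ratio = d_odom / d_slam
                # Low-pass the ratio so one bad segment does not
                # throw the estimate off.
                self.scale = 0.95 * self.scale + 0.05 * ratio
                rospy.loginfo("estimated metric scale: %.3f", self.scale)
        self.prev_slam, self.prev_odom = s, o

if __name__ == "__main__":
    rospy.init_node("scale_estimator_sketch")
    est = ScaleEstimator()
    slam_sub = message_filters.Subscriber("/orb_slam2/pose", PoseStamped)
    odom_sub = message_filters.Subscriber("/odom", Odometry)
    sync = message_filters.ApproximateTimeSynchronizer(
        [slam_sub, odom_sub], queue_size=10, slop=0.1)
    sync.registerCallback(est.callback)
    rospy.spin()
```

A proper formulation would carry the scale as a state inside a filter, but the low-passed ratio already shows how two synchronized subscribers can feed a single estimate.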

1 Answer

4

answered 2021-02-14 11:00:35 -0600

johnconn

@JackB is right: ORB-SLAM2 is explicit about being unable to observe scale without stereo or depth data. You will need to provide an RGB-D or stereo camera, or use a different package.

If you really want to stick with ORB-SLAM2, you could try estimating depth from your RGB image and bolting that depth estimate on to produce an RGB-D input that you feed into ORB-SLAM2. There are a few monocular depth estimation packages out there you can try, including some that don't rely on a neural network.
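As a very rough sketch of that idea, here is what such a bolt-on could look like using MiDaS loaded through torch.hub (one of the packages you could try). The topic names are invented, and note the caveat in the comments: MiDaS predicts relative, not metric, depth, so you would still need to calibrate the output before ORB-SLAM2's RGB-D mode gives you a true-scale trajectory.

```python
#!/usr/bin/env python3
# Sketch: run MiDaS on each incoming RGB frame and republish the
# prediction as a depth image for an RGB-D pipeline to consume.
# Caveat: MiDaS outputs *relative* inverse depth, so this still has to
# be calibrated to metric units before it fixes the scale problem.
import torch
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").default_transform

def image_callback(msg):
    rgb = bridge.imgmsg_to_cv2(msg, desired_encoding="rgb8")
    batch = transform(rgb)
    with torch.no_grad():
        prediction = midas(batch)
        # Resize the prediction back to the input resolution.
        prediction = torch.nn.functional.interpolate(
            prediction.unsqueeze(1), size=rgb.shape[:2],
            mode="bicubic", align_corners=False).squeeze()
    depth = prediction.cpu().numpy().astype("float32")
    out = bridge.cv2_to_imgmsg(depth, encoding="32FC1")
    out.header = msg.header  # keep RGB and depth time-aligned
    depth_pub.publish(out)

if __name__ == "__main__":
    rospy.init_node("mono_depth_sketch")
    depth_pub = rospy.Publisher("/camera/depth/image_raw", Image,
                                queue_size=1)
    rospy.Subscriber("/camera/image_raw", Image, image_callback,
                     queue_size=1)
    rospy.spin()
```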

An IMU plus a mono camera is enough to observe scale, but visual-inertial integration is beyond ORB-SLAM2. You could use ORB-SLAM3 for that, though, or any of the other visual-inertial systems.

There are also approaches (like this) that use just an RGB image and integrate depth estimation more tightly. There will likely be more work in this area, since the idea of localizing with a cheap camera and no other sensors is very attractive.

VSLAM and mono depth estimation are both open research topics, with new approaches being introduced all the time.



4 followers

Stats

Asked: 2021-02-11 19:36:21 -0600

Seen: 1,063 times

Last updated: Feb 14 '21