
Improving odometry from RTAB mapping

asked 2019-04-05 01:51:48 -0500 by tsbertalan

I'm using real-time appearance-based mapping (rtabmap_ros) to perform SLAM with a first-generation Kinect RGBD sensor, and this is working fairly well. However, the odometry is generated by rtabmap wholly through visual information, and so it updates somewhat slowly, and is at risk of losing its fix when a featureless wall fills the view.

With this in mind, I'd like to supplement the visual odometry with the (likely highly noisy) motor encoder + steering angle information I have on hand, as well as, perhaps, IMU data from either the Kinect's on-board accelerometer (no gyro, but I can get linear acceleration components with the kinect_aux package) or the Adafruit 9DoF board.

It seems that the robot_localization package can do odometry fusion from multiple sources. However, how can I do this such that rtabmap itself makes use of the extra information, rather than just downstream odometry consumers like my path planner (teb_local_planner)? I would like, for instance, the continued wheel and IMU information to let me keep tracking position through a loss of visual odometry, as in the white-wall situation.

I did find this question and answer, which is very close to what I'm asking about, though it looks like the question wasn't fully answered there.

My rtabmap launch file is here. Would the approach be something like the following?

  1. Run rtabmap's visual odometry, but publish it to an intermediate topic (or a different tf frame name).
  2. Use robot_localization to fuse the visual odometry with my other information sources.
  3. Publish the fused odometry to the topic that rtabmap expects (or to the tf frame name that it expects).
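In launch-file terms, I imagine the wiring would look roughly like this (just a sketch; /vo, /wheel_odom, and /imu/data are placeholder topic names, not taken from my actual setup):

    <launch>

      <!-- 1. Visual odometry, published to an intermediate topic instead of odom -->
      <node pkg="rtabmap_ros" type="rgbd_odometry" name="rgbd_odometry">
        <remap from="odom" to="/vo"/>
      </node>

      <!-- 2. Fuse the visual odometry with wheel odometry and IMU data -->
      <node pkg="robot_localization" type="ekf_localization_node" name="ekf_odom">
        <param name="odom0" value="/vo"/>
        <param name="odom1" value="/wheel_odom"/>
        <param name="imu0"  value="/imu/data"/>
        <!-- per-source *_config matrices and covariances omitted -->
      </node>

      <!-- 3. Point rtabmap at the fused estimate (/odometry/filtered by default) -->
      <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
        <remap from="odom" to="/odometry/filtered"/>
      </node>

    </launch>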

You can read more about this project here, including a video which can be seen directly on YouTube here.


1 Answer


answered 2019-04-08 13:30:20 -0500 by tsbertalan

updated 2019-09-07 18:47:00 -0500

It appears that this is possible by

  1. remapping the output of the visual odometry from odom to e.g. /vo,
  2. reading this into robot_localization along with other data sources for fusion,
  3. republishing from there to e.g. /odometry/filtered, and then
  4. telling the rtabmap node to use that instead of odom.

There may also be some tf frames that need adjusting.

I gather this from the provided sensor_fusion.launch file, as well as these two forum threads.
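Roughly, I expect the frame-related pieces to look something like the following (a sketch only, using the usual default frame names; the idea is that robot_localization, not rgbd_odometry, publishes the odom -> base_link transform):

    <node pkg="rtabmap_ros" type="rgbd_odometry" name="rgbd_odometry">
      <remap from="odom" to="/vo"/>
      <param name="publish_tf" value="false"/>
    </node>

    <node pkg="robot_localization" type="ekf_localization_node" name="ekf_odom">
      <param name="odom_frame"      value="odom"/>
      <param name="base_link_frame" value="base_link"/>
      <param name="world_frame"     value="odom"/>  <!-- run the EKF in the odom frame -->
      <param name="publish_tf"      value="true"/>
      <param name="odom0"           value="/vo"/>
      <!-- other inputs and their *_config matrices not shown -->
    </node>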

I'll edit this answer once I've actually tried this approach.

Edit: Well, this approach did work, but a major difficulty was tuning the covariance on my IMU measurements for fusion with robot_localization, and also specifying just which types of information should be fused. In particular, I never really settled whether I should include accelerometer data at all, or only let it be used for orientation computation (via gravity-vector subtraction).
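For reference, the knob I kept going back and forth on is robot_localization's imu0_config matrix (together with imu0_remove_gravitational_acceleration). The values below are only illustrative, not the ones I finally settled on; they would sit inside the ekf_localization_node's <node> tag:

    <!-- Fuse IMU orientation and angular velocity, but not linear acceleration.
         Row order: [x y z], [roll pitch yaw], [vx vy vz], [vroll vpitch vyaw], [ax ay az]. -->
    <rosparam param="imu0_config">
      [false, false, false,
       true,  true,  true,
       false, false, false,
       true,  true,  true,
       false, false, false]
    </rosparam>
    <param name="imu0_remove_gravitational_acceleration" value="true"/>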

I tried some wonky things, like allowing a settling time on startup to read the magnitude of the accelerometer vector, and then scaling it so that its magnitude was properly g (which it wasn't out of the box).

Additionally, I fused wheel odometry, which initially seemed to help a lot with smoothness of localization, but I think it eventually caused drift because my steering angle wasn't, in reality, what I reported it to be.

I have now replaced my Kinect with an Intel RealSense D435 and T265, and I'm using the localization stream that the black-box T265 provides directly as an odometry source, with visual odometry in RTAB-Map turned off completely. This works quite well. I haven't yet brought back the wheel odometry, since I expect it would still cause drift unless I fix that steering-bias problem. The realsense ROS packages provide their own way to ingest external odometry information (such as these wheel/steering measurements), so this would be a complete replacement for robot_localization.
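If I do bring the wheel odometry back, my understanding from the realsense-ros documentation (untested on my end) is that it gets wired in roughly like this, where /wheel_odom, the my_robot package, and the calibration-file path are placeholders:

    <launch>
      <include file="$(find realsense2_camera)/launch/rs_t265.launch">
        <!-- feed wheel odometry into the T265's own on-board fusion -->
        <arg name="topic_odom_in"   value="/wheel_odom"/>
        <arg name="calib_odom_file" value="$(find my_robot)/config/t265_calib_odometry.json"/>
      </include>
    </launch>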


Comments

Hi, tsbertalan. Could you kindly share how you are using the T265? I mean, do you use it as odometry directly, or do you filter its output before using it? Have you experienced any pose drift or abnormal velocities? And do you have any plan to fuse the VO of the T265 with the wheel odometry? Thanks

mattcenoo  (2019-09-11 07:27:37 -0500)

Did you manage to fuse the robot's wheel odometry into the T265? That can be done either with the T265 config file or with a robot_localization node, but I couldn't get either of them working yet. Thank you!

b.lazarescu  (2020-01-08 08:26:55 -0500)

@mattcenoo See tomsb.net/Gudrun and github.com/tsbertalan/gudrun . I'm using it directly, as in the realsense tutorials. As I mentioned in my answer above, there is the possibility of adding wheel odometry to the realsense (they do the fusion for you, apparently, and robot_localization is not needed), but I think that would cause problems I'd additionally need to solve. As far as drift goes, the biggest problem I've seen is a lack of loop closure at the dozens-of-meters range due to z misalignment. However, this might be fixable by rtabmap settings (it's a lack of recognition of previously recorded scenery), rather than the fault of the realsense odometry.

tsbertalan  (2020-01-08 13:19:33 -0500)

Yes, this should be possible, but, as I mentioned in my own answer, I have some issues with my wheel odometry that might be specific to my hardware build which I'd need to fix first, and I don't have time to work on these right now.

tsbertalan  (2020-01-08 13:21:25 -0500)
