Improving odometry from RTAB mapping
I'm using real-time appearance-based mapping (`rtabmap_ros`) to perform SLAM with a first-generation Kinect RGB-D sensor, and this is working fairly well. However, the odometry is generated by rtabmap purely from visual information, so it updates somewhat slowly and is at risk of losing its fix when a featureless wall fills the view.
With this in mind, I'd like to supplement the visual odometry with the (likely highly noisy) motor encoder and steering-angle information I have on hand, as well as, perhaps, IMU data from either the Kinect's on-board accelerometer (no gyro, but I can get linear acceleration components with the `kinect_aux` package) or the Adafruit 9DoF board.
It seems that the `robot_localization` package can fuse odometry from multiple sources. However, how can I do this such that rtabmap makes use of the extra information, rather than just downstream odometry consumers like my path planner (`teb_local_planner`)? I would like, for instance, for the continued wheel and IMU information to allow us to keep tracking position through a loss of visual odometry, as in the white-wall situation.
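For concreteness, here is roughly the kind of `robot_localization` EKF setup I have in mind for the fusion itself. This is only a minimal sketch in ROS1 launch XML: the topic names `/vo`, `/wheel_odom`, and `/imu/data` are placeholders for my setup, and the choice of which fields to fuse from each source is just illustrative (see the `robot_localization` documentation for the full parameter set).

```xml
<launch>
  <!-- Sketch: fuse visual odometry, wheel/steering odometry, and IMU data.
       Topic names /vo, /wheel_odom, /imu/data are assumptions for this example. -->
  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_odom">
    <param name="frequency"       value="30"/>
    <param name="two_d_mode"      value="true"/>
    <param name="odom_frame"      value="odom"/>
    <param name="base_link_frame" value="base_link"/>
    <param name="world_frame"     value="odom"/>

    <!-- Visual odometry: fuse x, y, yaw -->
    <param name="odom0" value="/vo"/>
    <rosparam param="odom0_config">[true,  true,  false,
                                    false, false, true,
                                    false, false, false,
                                    false, false, false,
                                    false, false, false]</rosparam>

    <!-- Wheel encoder + steering odometry: fuse forward velocity and yaw rate -->
    <param name="odom1" value="/wheel_odom"/>
    <rosparam param="odom1_config">[false, false, false,
                                    false, false, false,
                                    true,  false, false,
                                    false, false, true,
                                    false, false, false]</rosparam>

    <!-- IMU (e.g. the 9DoF board): fuse yaw rate and forward acceleration -->
    <param name="imu0" value="/imu/data"/>
    <rosparam param="imu0_config">[false, false, false,
                                   false, false, false,
                                   false, false, false,
                                   false, false, true,
                                   true,  false, false]</rosparam>
  </node>
</launch>
```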
I did find this question and answer, which is very close to what I'm asking about, though it looks like the question wasn't fully answered there.
My rtabmap launch file is here. Would the approach be something like the following?
- Run rtabmap's visual odometry, but publish it to an intermediate topic (or a different tf frame name).
- Use `robot_localization` to fuse the visual odometry with my other information sources.
- Publish the fused odometry to the topic that rtabmap expects (or to the tf frame name that it expects); a sketch of this wiring follows the list.
You can read more about this project here, including a video which can be seen directly on YouTube here.