Modifying turtlebot odometry module
Hello there,
I'm working on a project that consists of running the gmapping application with a turtlebot that has an additional sensor which should improve my odometry (that is, I want to use all the standard turtlebot sensors plus the additional one).
My first attempt was to check the slam_gmapping source, from which I discovered that this app listens to either an "odom" or an "odom_frame" node.
As I didn't find any documentation on such a node, I started my search over and found the robot_pose_ekf node, which might be what I'm looking for. However, the documentation on how to include additional sensors is still very incomplete and, as far as I can tell, suggests that the Kalman filter accepts at most three odometry sources. In other words, I'd have to sacrifice one of the original turtlebot sensors, since it already deals with three different sensors for localization.
So, my questions are:
- Is the turtlebot odometry really provided by the robot_pose_ekf node? If it isn't, what node should I be looking for?
- If so, I suppose there's a specific turtlebot node that provides robot_pose_ekf with the required data (from the robot's sensors). In this case, which module is it?
- Still assuming robot_pose_ekf is responsible for the turtlebot odometry: is it only capable of dealing with three different sensors? Perhaps I could create a similar node using the OpenCV Kalman filter class and publish the filtered odometry to one of the "odom" or "odom_frame" topics, if that's the correct way of dealing with it.
Update
So I've implemented a publisher that sends visual odometry information to the "vo" topic of robot_pose_ekf, as suggested below. I also have a transform broadcaster that's supposed to publish the relationship between the robot and camera coordinate frames, as follows:
tf::StampedTransform odometryTransform(
    tf::Transform(
        tf::Quaternion(0, 0, 0, 1.0),  // identity rotation
        tf::Vector3(0, 0.04, 0.0)),    // 4 cm offset along y
    currentTime,
    "base_link",  // parent frame (assumed robot frame)
    "vo");        // child frame (my camera's odometry frame)
As I do not know the default turtlebot coordinate frame name, I assumed it is "base_link". Also, there is a 4 cm offset between the center of my sensor and the robot's center, which explains the 0.04 in the translation.
To confirm that my odometry data was being processed correctly by robot_pose_ekf, I set the sensor covariance to 1e-8 (x and y), 1.0 (z, which is irrelevant) and 99999.0 (velocities). After a simple experiment, I expected the output of the Kalman filter to be similar to the visual odometry. However, this is what I got: the blue path was measured by my sensor, and the red one is the output of the Kalman filter, which is clearly messed up. Am I missing something?
Update 2
After a few more experiments (where, in one of them, I disabled all EKF inputs except for my vo sensor), I realized I was passing 99999.0 as the orientation covariance, instead ...
What sort of sensor do you want to add, out of curiosity?
It's an additional camera, with which I plan to study the impact of visual odometry over the original system.
Then I think what @tfoote said below should work for you. The robot_pose_ekf on the turtlebot does not use the third input source by default; it uses only the odometry and IMU data. You should be able to use the output of your visual odometry algorithm as the "vo" input.
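If it helps, a minimal (untested) launch fragment for enabling that third input should look something like this, assuming the standard robot_pose_ekf parameters odom_used, imu_used and vo_used; check your turtlebot's existing launch files for the exact node setup:

```xml
<node pkg="robot_pose_ekf" type="robot_pose_ekf" name="robot_pose_ekf">
  <param name="odom_used" value="true"/>  <!-- wheel odometry on "odom" -->
  <param name="imu_used"  value="true"/>  <!-- IMU on "imu_data" -->
  <param name="vo_used"   value="true"/>  <!-- your camera on "vo" -->
</node>
```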