# Modifying turtlebot odometry module

Hello there,

I'm working on a project that consists of running the gmapping application with a turtlebot that has an additional sensor which should improve my odometry (that is, I want to use all the standard turtlebot sensors plus the additional one).

My first attempt to do so was by checking the slam_gmapping source, from which I discovered that this app listens for an "odom" tf frame (configurable through its "odom_frame" parameter).

As I didn't find any documentation on such a frame, I started my search over and found the robot_pose_ekf node, which might be what I'm looking for. However, the documentation on how to include additional sensors is still very incomplete and, as far as I could understand, suggests that the Kalman filter accepts at most three odometry sources. In other words, I'd have to sacrifice one of the original turtlebot sensors, since it already deals with three different sensors for localization.

So, my questions are:

1. Is the turtlebot odometry really provided by the robot_pose_ekf node? In case it isn't, what node should I be looking for?
2. If so, I suppose there's a specific turtlebot node that provides robot_pose_ekf with the required data (from the robot's sensors). In this case, which module is it?
3. Still considering robot_pose_ekf is responsible for the turtlebot odometry, is it only capable of dealing with three different sensors? Perhaps I could create a similar node using the OpenCV Kalman Filter class, and publish the filtered odometry to one of the "odom" or "odom_frame" channels, if that's the correct way of dealing with it.

Update

So I've implemented a publisher that sends visual odometry information to the vo channel in the robot_pose_ekf, as suggested below. I also have a transform broadcaster that's supposed to publish the relationship between the robot and the camera coordinate systems, as follows:

tf::TransformBroadcaster broadcaster;
tf::StampedTransform odometryTransform(
    tf::Transform(
        tf::Quaternion(0, 0, 0, 1.0), tf::Vector3(0, 0.04, 0.0)),
    ros::Time::now(), "base_link", "camera_link");  // parent frame, then child frame (the child name is whatever my camera uses)
broadcaster.sendTransform(odometryTransform);

As I do not know what the default turtlebot coordinate frame name is, I assume it is "base_link". Also, there's a 4 cm distance between the center of my sensor and the robot's center, which explains the 0.04 in the translation.

To confirm that my odometry data was being processed correctly by robot_pose_ekf, I set the sensor covariance to 1e-8 (x and y), 1.0 (z, which is irrelevant), and 99999.0 (velocities). After a simple experiment, I expected the output of the Kalman filter to be similar to the visual odometry. However, this is what I got: the blue path was measured by my sensor, and the red one is the output of the Kalman filter, which is clearly messed up. Am I missing something?
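As a side note, those covariance numbers end up on the diagonal of the row-major 6x6 (36-element) covariance arrays of a nav_msgs/Odometry message: a tiny variance tells the EKF to trust that component, and a huge one effectively disables it. A minimal Python sketch of building such a diagonal (the helper name and exact slot assignment are illustrative, not code from any ROS package):

```python
def diagonal_covariance(x, y, z, rot_x, rot_y, rot_z):
    """Build a row-major 6x6 covariance array (36 floats) with the
    given variances on the diagonal, in nav_msgs/Odometry order."""
    cov = [0.0] * 36
    for i, var in enumerate((x, y, z, rot_x, rot_y, rot_z)):
        cov[6 * i + i] = var  # diagonal entry (i, i)
    return cov

# Trust x/y strongly, effectively ignore z and the rotations.
pose_cov = diagonal_covariance(1e-8, 1e-8, 1.0, 99999.0, 99999.0, 99999.0)
```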

Update 2

After a few more experiments (where, in one of them, I disabled all EKF inputs except for my vo sensor), I realized I was passing 99999.0 as the orientation covariance, instead ...


What sort of sensor do you want to add, out of curiosity?

( 2012-03-08 10:14:33 -0500 )

It's an additional camera, with which I plan to study the impact of visual odometry over the original system.

( 2012-03-09 08:40:17 -0500 )

Then I think what @tfoote said below should work for you. By default, the robot_pose_ekf on the turtlebot does not use the third input source, only the odometry and IMU data. You should be able to use the output of your visual odometry algorithm as the "vo" input.

( 2012-03-09 09:11:51 -0500 )


Problem solved. Let me summarize all my questions and their respective answers:

1. Is the turtlebot odometry really provided by the robot_pose_ekf node? In case it isn't, what node should I be looking for?
• Not exactly: robot_pose_ekf receives odometry from up to three different sources and combines them with a Kalman filter. Nevertheless, the turtlebot package is designed so that the turtlebot driver odometry (from the wheel encoders and the gyroscope) can be sent to the EKF module; this is done by launching that module, either manually with rosrun or in a launch file, as suggested by @tfoote and @mjcarroll. In short, robot_pose_ekf is not specific to the turtlebot, so it must receive the odometry from the robot.
2. If so, I suppose there's a specific turtlebot node that provides robot_pose_ekf with the required data (from the robot's sensors). In this case, which module is it?
• As said before, the turtlebot driver is responsible for retrieving odometry data from this robot. Specifically, this is done in the turtlebot_node.py file (from the turtlebot_node package).
3. Still considering robot_pose_ekf is responsible for the turtlebot odometry, is it only capable of dealing with three different sensors? Perhaps I could create a similar node using the OpenCV Kalman Filter class, and publish the filtered odometry to one of the "odom" or "odom_frame" channels, if that's the correct way of dealing with it.
• At the time my question was published, robot_pose_ekf could only deal with three different input channels (i.e., "sensors"). I originally thought that the turtlebot was retrieving odometry data from the Kinect (for instance, with a particle filter), but I was wrong. Thus, there was an idle slot (the "vo" channel) in the EKF that my sensor could use, which is what I did.
4. Does robot_pose_ekf convert odometry from the sensor frame to the robot frame?
• When this question was published, robot_pose_ekf did not. Therefore, I had to convert my data, which specifies the sensor position in the sensor frame (relative to the first measurement), into the robot position in the robot frame (again, relative to the first measurement). To evaluate the new code, I ran another experiment, which resulted in this path.
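To make the conversion in point 4 concrete, here is a planar (2D) sketch in pure Python, assuming the sensor is rigidly mounted at a fixed offset in the robot frame with no relative rotation; the function and parameter names are mine for illustration, not from any ROS package:

```python
import math

def sensor_to_robot_pose(x_s, y_s, theta, offset=(0.0, 0.04)):
    """Convert a planar sensor pose (x_s, y_s, theta), expressed in the
    sensor's starting frame, into the robot pose in the robot's starting
    frame, for a sensor mounted at `offset` in the robot frame."""
    ox, oy = offset
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    # p_robot = p_sensor + offset - R(theta) * offset
    x_r = x_s + ox - (cos_t * ox - sin_t * oy)
    y_r = y_s + oy - (sin_t * ox + cos_t * oy)
    return x_r, y_r, theta  # heading is shared, since there is no relative rotation
```

With a 4 cm lateral offset, a pure translation leaves the pose unchanged, while a pure in-place rotation of the robot makes the sensor trace an arc that this function maps back to a stationary robot.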

Adding a visual odometry sensor to the robot_pose_ekf should work without too much of an issue.

The first thing to do is create a new launch file with a modified configuration of the robot_pose_ekf, like so:

<launch>
<node pkg="robot_pose_ekf" type="robot_pose_ekf" name="robot_pose_ekf">
<param name="output_frame" value="odom"/>
<param name="freq" value="30.0"/>
<param name="sensor_timeout" value="1.0"/>
<param name="odom_used" value="true"/>
<param name="imu_used" value="true"/>
<param name="vo_used" value="true"/>
<param name="debug" value="false"/>
<param name="self_diagnose" value="false"/>

<remap from="vo" to="$YOUR_VO_TOPIC_NAME"/>
</node>
</launch>


I believe the messages you input on the vo topic should be in the base_footprint frame. Otherwise, your transform looks correct.

From your image, it seems that one of your coordinate frames may be incorrect, perhaps the IMU's?


Hi, I just wanted to know why the input on the vo topic needs to be in the base_footprint frame. I did that and it solved some problems, but when I run gmapping with vo and the robot is stationary, for some reason it keeps registering scans, and after some time it fails to get the scan.

( 2012-06-03 08:02:13 -0500 )

The turtlebot does use the robot_pose_ekf. Default launch file

That should get you going. Depending on your sensor you may need to modify it to add another input channel of the right type.
