
Filtered odometry drifts when no measurements received

I'm trying to fuse wheel odometry with visual poses computed from observed visual markers. Whenever the robot is standing still (no odometry received) and there's no visual marker in sight (and thus no visual pose either), the filtered estimate starts drifting with increasing velocity.

First I thought this was caused by a nonzero velocity remaining in the "twist" part of the odometry message. But it also occurs when I fuse only absolute x, y and yaw measurements, as in the following launch file:

<launch>

    <node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization" clear_params="true">

      <param name="frequency" value="10"/>
      <param name="sensor_timeout" value="0.1"/>
      <param name="two_d_mode" value="true"/>

      <param name="world_frame" value="odom"/>

      <param name="odom0" value="odometryPose"/>
      <rosparam param="odom0_config">[true, true, false,
                                      false, false, true,
                                      false, false, false,
                                      false, false, false,
                                      false, false, false]</rosparam>

      <param name="pose0" value="visualPose"/>
      <rosparam param="pose0_config">[true, true, false,
                                      false, false, true,
                                      false, false, false,
                                      false, false, false,
                                      false, false, false]</rosparam>

    </node>

</launch>

Sure, when there's no measurement at all (not even an "I'm not moving!"), the uncertainty grows, so I shouldn't trust the predicted pose too much.

But is there any configuration I'm missing, like not filtering until a new measurement arrives? I'd like to avoid introducing another dummy node that publishes pseudo zero measurements while the robot is standing still.

Update 1

I noticed that the filtered pose jumps back to where it should be as soon as a new measurement arrives. So the drift only seems to affect the prediction and vanishes with the next correction step. But how can a node receiving the /odometry/filtered message distinguish between a vague prediction and a pose corrected by measurements?
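One thing a downstream node could key on is the pose covariance shipped inside the /odometry/filtered message itself: during pure prediction the variances keep growing, and a correction step shrinks them again. The following is only a sketch of that idea, ROS-free so it can be tested in isolation; the threshold value and the gating rule are my assumptions, not anything robot_localization provides:

```python
# Sketch: judge whether a filtered pose is trustworthy from the 6x6
# row-major pose covariance carried in nav_msgs/Odometry
# (msg.pose.covariance, a flat list of 36 floats over x, y, z,
# rot_x, rot_y, rot_z). The threshold is an assumption and needs tuning.

POSE_VARIANCE_THRESHOLD = 0.05  # m^2 / rad^2, assumed

def pose_is_trusted(covariance, threshold=POSE_VARIANCE_THRESHOLD):
    """covariance: flat list of 36 floats, row-major 6x6."""
    var_x = covariance[0]     # row 0, col 0
    var_y = covariance[7]     # row 1, col 1
    var_yaw = covariance[35]  # row 5, col 5
    return max(var_x, var_y, var_yaw) < threshold

# A freshly corrected pose (small variances) passes, a drifted one fails:
tight = [0.0] * 36
tight[0] = tight[7] = tight[35] = 0.01
loose = [0.0] * 36
loose[0] = loose[7] = loose[35] = 1.0
print(pose_is_trusted(tight), pose_is_trusted(loose))  # True False
```

In a real node the same check would run inside the /odometry/filtered subscriber callback, passing `msg.pose.covariance` to `pose_is_trusted`.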

Update 2

After apparently having fixed the issue, the drift is back. This time I boiled it down to the following minimal example:

<launch>

    <node pkg="robot_localization" type="ekf_localization_node" name="robot_localization_ekf" clear_params="true">

      <param name="two_d_mode" value="true"/>
      <param name="world_frame" value="odom"/>

      <param name="odom0" value="wheelOdometry"/>
      <rosparam param="odom0_config">[false, false, false,
                                      false, false, false,
                                      true, true, false,
                                      false, false, true,
                                      false, false, false]</rosparam>

    </node>

    <node pkg="unit_tests" type="publishers.py" name="publishers" />

</launch>

The launch script runs robot_localization and the following message publisher:

import rospy
from nav_msgs.msg import Odometry
from std_msgs.msg import Header

rospy.init_node('publishers')
wheelPub = rospy.Publisher("/wheelOdometry", Odometry, queue_size=1)
rate = rospy.Rate(10)  # 10 Hz
while not rospy.is_shutdown():
    # All pose/twist fields (including the covariances) default to zero
    wheelPub.publish(Odometry(header=Header(stamp=rospy.Time.now(), frame_id="odom"),
                              child_frame_id="base_link"))
    rate.sleep()

In this example there is no drift. But the covariance of /odometry/filtered grows without bound, which indirectly causes a drift once I add a second data source such as the visual pose estimates.

So how can the uncertainty increase when the only information given is "my velocity is zero, and I'm 100 % sure about it"?
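For what it's worth, the textbook EKF predict step inflates the covariance by the process noise on every cycle, P ← F P Fᵀ + Q, independently of what the measurements say; only the correction step can shrink it again. A tiny numeric sketch (all values made up, not the robot_localization defaults) shows the covariance growing under prediction alone even when the state starts out perfectly certain:

```python
import numpy as np

# EKF covariance predict step: P <- F P F^T + Q. The process noise Q is
# added every cycle regardless of the measurements, so between corrections
# the uncertainty grows no matter how certain the last estimate was.
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])        # constant-velocity model, dt = 0.1 (assumed)
Q = np.diag([1e-3, 1e-3])         # process noise, nonzero by assumption
P = np.zeros((2, 2))              # start "100 % sure"

traces = []
for _ in range(5):
    P = F @ P @ F.T + Q           # predict step only, no correction
    traces.append(np.trace(P))

print(traces)  # strictly increasing: uncertainty grows every cycle
```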