
robot_localization Behavior when fusing delayed odometry messages

asked 2016-03-02 01:42:42 -0500 by UnfortunateConflictOfEvidence, updated 2016-03-02 06:35:18 -0500

Hi everyone,

I use the ukf_localization_node to fuse laser-scan-based odometry with camera-based odometry. Unfortunately, the camera post-processing delays each localization update by about 60-80 ms. So when the filter encounters these measurements, the time delta against the last incorporated measurement is negative, since the other odometry source has both a higher frequency and a lower latency.

Will the filter still behave correctly when dealing with negative deltas? Is there a better way I should set things up?

Thanks, Marc

EDIT: Thanks for your suggestion! I indeed use two separate odometry topics, which are both registered with the ukf node as follows:

<node pkg="robot_localization" type="ukf_localization_node" name="MapFusion" >
<param name="two_d_mode" value="true"/>
<param name="frequency" value="100"/>
<param name="map_frame" value="map"/>
<param name="odom_frame" value="odom"/>
<param name="world_frame" value="map"/>

<!-- Inputs -->

<!--  **** IMU0 : Orientation roll, pitch yaw from IMU **** -->
<param name="imu0" value="sensor/imu"/>
<rosparam param="imu0_config">[false, false, false,   <!-- x,y,z -->
                               true, true, true,      <!-- roll, pitch, yaw -->
                               false, false, false,   <!-- x,y,z velocities -->
                               false, false, false,   <!-- roll, pitch , yaw velocity -->
                               false, false, false ]  <!-- x,y,z acceleration -->
</rosparam>
<param name="imu0_differential" value="false"/>

<!--  **** ODOM0 : odometry from Scanreg (highfreq) **** -->
<param name="odom0" value="sensor/scanreg/odometry"/>
<rosparam param="odom0_config">[true, true, true,       <!-- x,y,z -->
                               false, false, false,     <!-- roll, pitch, yaw -->
                               false, false, false,     <!-- x,y,z velocities -->
false, false, false,     <!-- roll, pitch, yaw velocity -->
                               false, false, false]     <!-- x,y,z acceleration -->
</rosparam>
<param name="odom0_queue_size" value="20"/>
<param name="odom0_differential" value="false"/>
<param name="odom0_pose_rejection_threshold" value="4"/>



<!--  **** ODOM1 : odometry from VisGlob **** -->
<param name="odom1" value="sensor/visglob/odometry"/>
<rosparam param="odom1_config">[true, true, true,     <!-- x,y,z -->
                               false, false, false,   <!-- roll, pitch, yaw -->
                               false, false, false,   <!-- x,y,z velocities -->
false, false, false,   <!-- roll, pitch, yaw velocity -->
                               false, false, false]   <!-- x,y,z acceleration -->
</rosparam>
<param name="odom1_queue_size" value="20"/>
<param name="odom1_differential" value="false"/>
<param name="odom1_pose_rejection_threshold" value="7"/>

</node>


2 Answers


answered 2016-03-02 05:30:07 -0500 by demmeln, updated 2016-03-02 16:43:20 -0500

It sounds like you are feeding both odometry streams to the same topic. You should use two separate topics and register two separate odometry inputs with the filter. Comparing the latest camera odometry measurement to the last laser-odometry pose is probably quite meaningless.

For more specific help, please update your question with your r_l configuration.

EDIT:

Ah, so initially I thought you were getting some sort of error message about negative time deltas. As far as I know, r_l does the following: in every iteration (determined by the filter frequency) it orders all new measurements by timestamp, then incorporates them from lowest timestamp to highest. If the latest incorporated measurement (== current filter time) is older than the current measurement, it first predicts the filter forward to the time of the measurement, then corrects the filter state with the measurement. Otherwise, if the current filter time is newer than the current measurement (which will be the case for your delayed vision measurements), it skips the predict and simply corrects the current filter state with the measurement as if the measurement timestamp and the filter timestamp were equal. So depending on your dynamics and on the amount of delay, this might introduce anywhere from a little to a lot of error due to the wrong timing.
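The iteration described above can be sketched roughly like this (plain Python with placeholder predict/correct hooks; this is an illustration of the described behavior, not the actual r_l source):

```python
# Sketch of the measurement-handling loop described above.
# A measurement is just (timestamp, value); 'predict' and 'correct'
# are placeholder callbacks standing in for the UKF steps.

def process_queue(filter_time, measurements, predict, correct):
    """Incorporate queued measurements in timestamp order."""
    for stamp, value in sorted(measurements, key=lambda m: m[0]):
        if stamp > filter_time:
            # Measurement is newer than the filter: predict forward
            # to the measurement time, then correct.
            predict(round(stamp - filter_time, 2))
            filter_time = stamp
            correct(value)
        else:
            # Stale (delayed) measurement: no prediction step; the
            # state is corrected as if the timestamps were equal.
            correct(value)
    return filter_time

log = []
t = process_queue(
    10.0,
    [(10.5, "laser"), (9.95, "vision"), (10.3, "laser")],
    predict=lambda dt: log.append(("predict", dt)),
    correct=lambda v: log.append(("correct", v)),
)
# The delayed vision measurement (stamp 9.95) triggers no predict step.
```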

@Tom Moore, does that sound correct?

In any case, looking at your configuration, there might be other issues as well. Firstly, you incorporate both odometry sensors as absolute x, y, z measurements. However, I would assume they are not in the same reference frame, so they would produce conflicting updates, with the filter position jumping from one reference to the other. You might want to look at the differential option to convert position measurements into velocity measurements. With that, however, the lower-frequency updates from vision might not have much influence, in particular if they are delayed and therefore in every iteration followed by (multiple) newer measurements from laser odometry.
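Conceptually, the differential option turns successive absolute poses from one sensor into velocity-like deltas, so two sensors with different (drifting) origins no longer fight over the absolute position. A minimal illustration (my own simplified 2-D version, not the r_l implementation):

```python
# Illustrative sketch: convert an absolute pose stream into deltas,
# as the 'differential' option does conceptually.

def to_differential(poses):
    """Convert a time-stamped absolute pose stream [(t, x, y), ...]
    into velocity-like deltas [(t, dx/dt, dy/dt), ...]."""
    out = []
    for (t0, x0, y0), (t1, x1, y1) in zip(poses, poses[1:]):
        dt = t1 - t0
        out.append((t1, (x1 - x0) / dt, (y1 - y0) / dt))
    return out

# Two sensors reporting the same motion but with a constant offset
# between their reference frames produce identical differential
# measurements, so the origin mismatch disappears:
a = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (2.0, 2.0, 0.0)]
b = [(0.0, 5.0, 3.0), (1.0, 6.0, 3.0), (2.0, 7.0, 3.0)]
```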

Moreover, is it intentional that you rely purely on your IMU for the orientation information? This can make sense, I just wanted to point it out.


Comments


Yes, your description of the current behavior is accurate. FYI, @UnfortunateConflictOfEvidence, @demmeln has added a ticket to address this on the r_l GitHub page.

Tom Moore ( 2016-03-02 19:00:10 -0500 )

I will probably then try to implement a smoothing scheme as suggested in that ticket's discussion. I'll keep that discussion's URL in case I have code to contribute. Thanks for your effort @demmeln

UnfortunateConflictOfEvidence ( 2016-03-03 02:54:46 -0500 )

Sounds great. Feel free to discuss over at GitHub if you want.

demmeln ( 2016-03-03 04:46:53 -0500 )

answered 2016-03-02 19:05:38 -0500 by Tom Moore, updated 2016-03-02 19:05:54 -0500

One note (not critical): you have two_d_mode set to true, but you are fusing 3D variables (Z, roll, and pitch).

Also, I agree with @demmeln's other point: you have two sensors measuring X, Y, and Z position. The filter will automatically transform them into the frame specified by the world_frame parameter, but even so, you may find that the filter output jumps back and forth between the measurements if their covariances are overconfident. I tend to take my "best" measurement source for any given pose variable as the absolute measurement, and turn differential mode on for the others (unless they provide velocities, in which case you should just use those instead).
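Putting both suggestions together, one possible adjustment for the second odometry input might look like the following. This is an illustrative sketch only, reusing the topic names from the question; whether odom1 is the right input to make differential, and what threshold to use, would still need tuning:

```xml
<!-- ODOM1 fused differentially so its absolute origin no longer
     conflicts with ODOM0; z/roll/pitch left out under two_d_mode -->
<param name="odom1" value="sensor/visglob/odometry"/>
<rosparam param="odom1_config">[true,  true,  false,   <!-- x, y, z -->
                                false, false, false,   <!-- roll, pitch, yaw -->
                                false, false, false,   <!-- x, y, z velocities -->
                                false, false, false,   <!-- roll, pitch, yaw velocity -->
                                false, false, false]   <!-- x, y, z acceleration -->
</rosparam>
<param name="odom1_differential" value="true"/>
<param name="odom1_pose_rejection_threshold" value="7"/>
```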


Comments

Regarding "I tend to take my best measurement source for any given pose variable as the absolute measurement": what I find problematic with this (though often it is not a problem) is that this way you cannot compensate for outliers in that best sensor, since they are forever baked into the accumulated absolute pose.

demmeln ( 2016-03-02 19:22:44 -0500 )

Both odometries have been converted into a common 'map' frame. I am indeed only interested in yaw information and assumed that pitch and roll estimation would just be skipped. But marking pitch and roll as false definitely makes sense. Thanks!

UnfortunateConflictOfEvidence ( 2016-03-03 02:45:33 -0500 )

Roll and pitch would indeed be ignored in two-d mode. However, set them to false only if your IMU is mounted level.

demmeln ( 2016-03-03 04:43:58 -0500 )

I wonder how you convert both odometries into the same frame. Don't they have unbounded drift? Would you mind explaining?

demmeln ( 2016-03-03 04:45:23 -0500 )

Every message has an associated frame_id in the header. I use tf to transform the data in the message to the frame_id specified by the world_frame parameter. The onus is on the user to ensure that two messages with the same frame_id are actually reported in the same frame.
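Conceptually, that transform step amounts to applying the frame_id-to-world_frame transform to each incoming pose. A minimal 2-D sketch (plain math standing in for tf; function and variable names are mine):

```python
import math

# Illustrative only: apply a 2-D rigid transform (tx, ty, ttheta),
# i.e. frame_id -> world_frame, to a pose (x, y, theta).
def transform_pose(pose_xy_yaw, tf_xy_yaw):
    x, y, th = pose_xy_yaw
    tx, ty, tth = tf_xy_yaw
    c, s = math.cos(tth), math.sin(tth)
    # Rotate the position into the world frame, then translate;
    # yaw simply accumulates the frame rotation.
    return (tx + c * x - s * y, ty + s * x + c * y, th + tth)

# A pose one meter ahead in a sensor frame that is rotated 90 degrees
# and offset by (2, 0) relative to the world frame:
world_pose = transform_pose((1.0, 0.0, 0.0), (2.0, 0.0, math.pi / 2))
```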

Tom Moore ( 2016-03-03 07:16:27 -0500 )

Oh, and velocities are transformed into the frame specified by the base_link_frame parameter. For more information, see this page.

Tom Moore ( 2016-03-03 07:18:18 -0500 )

@demmeln perhaps I am misunderstanding your point, but you can use the _rejection_threshold parameter to reject outliers in any sensor.
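For reference, that gating amounts roughly to a Mahalanobis-distance check on the innovation, with the threshold expressed in standard deviations. A simplified sketch assuming a diagonal innovation covariance (illustrative, not the actual r_l code):

```python
import math

def accept(innovation, variances, threshold):
    """Accept a measurement if its Mahalanobis distance (here with a
    diagonal covariance, given as per-dimension variances) is within
    the configured threshold in standard deviations."""
    d2 = sum(dx * dx / v for dx, v in zip(innovation, variances))
    return math.sqrt(d2) <= threshold

# An innovation of (3, 4) with unit variances lies 5 sigma out:
# rejected under odom0's threshold of 4, accepted under odom1's 7.
```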

Tom Moore ( 2016-03-03 07:40:49 -0500 )

The laser scanner odometry is derived by map matching against a map whose origin coincides with the visual odometry's map. 'Odometry' here refers only to the message type. Both approaches deliver global pose estimates which usually don't diverge by much.

UnfortunateConflictOfEvidence ( 2016-03-03 07:48:17 -0500 )
