How to improve the reliability of my robot's localization?

asked 2016-11-08 04:31:06 -0600 by Inounx

updated 2016-11-17 07:56:18 -0600

Context

I have a differential drive robot with 4 wheels, similar to the Clearpath Jackal. It moves in a closed environment, both indoors and outdoors, and the map is known. It is equipped with a laser range finder and an IMU, and odometry is computed from wheel encoders. The robot is also equipped with a GPS, but I would prefer not to use it for now.

Some areas of the map (outdoors) are such that the robot goes through "weak" localization phases due to a lack of landmarks. I am trying to improve the localization chain to make it more robust to these "weak" phases.

Basically, what I am trying to do is be able to navigate in a degraded mode without getting completely lost (i.e. AMCL could be lost temporarily, but the map -> odom tf would still be available).

Setup

I use a first "local" robot_localization instance to fuse odom and IMU data. This outputs what I call /odom/filtered. Next I have an AMCL instance, which takes as input the laser range measurements and /odom/filtered. Then I fuse the AMCL position, odom and IMU with a second "global" robot_localization instance. The first (local) ekf_localization node provides the odom -> base_link transform; the second (global) one provides map -> odom.
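For context, here is a minimal rospy sketch (not part of the original setup) showing how the two transforms chain together in tf: the local EKF publishes odom -> base_link, the global EKF publishes map -> odom, and tf composes them into the fully fused map -> base_link pose. The node name is illustrative.

```python
import rospy
import tf

rospy.init_node("frame_chain_check")  # illustrative node name
listener = tf.TransformListener()
rate = rospy.Rate(1.0)
while not rospy.is_shutdown():
    try:
        # map -> odom comes from the global EKF, odom -> base_link from the local
        # EKF; tf chains them, so this lookup yields the fully fused robot pose.
        trans, rot = listener.lookupTransform("map", "base_link", rospy.Time(0))
        rospy.loginfo("map -> base_link: x=%.2f y=%.2f", trans[0], trans[1])
    except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
        pass
    rate.sleep()
```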

The output of the first local ekf_localization node is really good, as I can see in rviz (the robot returns to within about 1 m of its departure point after a 93 m x 35 m loop, cf. image).

odom (blue) vs filtered odom (red)

  • OS: Ubuntu Server 14.04 with ROS Indigo
  • Localization: robot_localization (v2.3.0) package with AMCL (v1.12.13)
  • Visualization: To visualize the covariance of the position and odometry, the rviz_plugin_covariance is used.

AMCL Covariance problem

This whole chain relies on the covariances of the respective measurements, so the covariance matrix values must be as representative of reality as possible, and it seems that is not the case.

At first I thought that the position covariance provided by AMCL was an estimate of how confident AMCL is in the current robot position in the map, but I am not sure. By displaying the covariance, I see that the robot's estimated position lies inside an ellipsoid about 20 to 30 cm in diameter when it is well localized.

Note: for readability purposes, the displayed ellipsoid is 3-sigma (estimated position ± 3 times the standard deviation).
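For reference, here is a minimal sketch (not from the original post) of how the 3-sigma ellipse axes can be computed from the /amcl_pose covariance; the node name and the log format are illustrative.

```python
import numpy as np
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

def on_amcl_pose(msg):
    # The covariance is a row-major 6x6 matrix over (x, y, z, roll, pitch, yaw).
    cov = np.array(msg.pose.covariance).reshape(6, 6)
    xy_cov = cov[0:2, 0:2]                       # planar position block
    eigvals, _ = np.linalg.eigh(xy_cov)          # principal variances, ascending
    axes_3sigma = 2.0 * 3.0 * np.sqrt(np.maximum(eigvals, 0.0))  # full axis lengths
    rospy.loginfo("3-sigma ellipse axes: %.2f m x %.2f m",
                  axes_3sigma[1], axes_3sigma[0])

rospy.init_node("amcl_cov_monitor")  # illustrative node name
rospy.Subscriber("/amcl_pose", PoseWithCovarianceStamped, on_amcl_pose)
rospy.spin()
```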

Well Localized

When AMCL begins to get lost, the covariance increases significantly,

Not well localized

and then decreases again to 20 to 30 cm in diameter, but with AMCL sometimes being really lost!

Lost

Why this strange behavior? It causes the global ekf_localization node to "trust" the AMCL position more than the odometry. In an attempt to work around this, I tried a naive approach: if the covariance ellipsoid is more than 50 cm in diameter, I stop sending the AMCL position to the global EKF node. When the diameter drops back below 50 cm, I send the current estimated position to AMCL as its initial position. In some way it works well, but the reinitialization of AMCL is ... (more)
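Purely as an illustration of the naive gating described above (not the author's actual code), a sketch of such a node might look like this; the 50 cm threshold and the topic names /amcl_pose_gated and /odometry/filtered_map are assumptions.

```python
import numpy as np
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped
from nav_msgs.msg import Odometry

DIAMETER_THRESHOLD = 0.5  # metres, 3-sigma ellipse diameter (assumed value)

class AmclPoseGate(object):
    def __init__(self):
        self.degraded = False
        self.last_global_estimate = None
        self.gated_pub = rospy.Publisher("/amcl_pose_gated",
                                         PoseWithCovarianceStamped, queue_size=1)
        self.init_pub = rospy.Publisher("/initialpose",
                                        PoseWithCovarianceStamped, queue_size=1)
        rospy.Subscriber("/odometry/filtered_map", Odometry, self.on_global_estimate)
        rospy.Subscriber("/amcl_pose", PoseWithCovarianceStamped, self.on_amcl_pose)

    def on_global_estimate(self, msg):
        # Remember the latest globally fused pose so AMCL can be re-seeded with it.
        self.last_global_estimate = msg

    def on_amcl_pose(self, msg):
        cov = np.array(msg.pose.covariance).reshape(6, 6)
        worst_var = float(np.max(np.linalg.eigvalsh(cov[0:2, 0:2])))
        diameter = 2.0 * 3.0 * np.sqrt(max(worst_var, 0.0))
        if diameter > DIAMETER_THRESHOLD:
            # Too uncertain: stop feeding AMCL poses to the global EKF.
            self.degraded = True
            return
        if self.degraded and self.last_global_estimate is not None:
            # Recovered: re-initialize AMCL with the current global estimate.
            init = PoseWithCovarianceStamped()
            init.header = self.last_global_estimate.header
            init.pose = self.last_global_estimate.pose
            self.init_pub.publish(init)
            self.degraded = False
        self.gated_pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("amcl_pose_gate")
    AmclPoseGate()
    rospy.spin()
```

In this sketch the global EKF would be pointed at /amcl_pose_gated instead of /amcl_pose.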


Comments

Great question! I'll need some time to look over it, but I'll respond.

Tom Moore (2016-11-08 07:29:53 -0600)

You've done a great job clearly outlining your problem and providing plenty of explanation. I probably won't be able to help too much, but if you can provide a short bagfile or two with all your data, that will definitely help others answer your question.

M@t (2016-11-08 14:35:38 -0600)

Thanks, I will try to make a small package with everything needed included, to easily replay this scenario.

Inounx (2016-11-09 03:51:20 -0600)

Here is the catkin package with a rosbag included: localization_replay.tar.bz2

All the config is done (rviz too); just add it to a catkin workspace, build, and roslaunch localization_replay localization_replay.launch

Inounx (2016-11-09 07:57:17 -0600)

I have some warnings from both robot_localization nodes: Transform from imu to base_link was unavailable for the time requested. Using latest instead. I think it comes from the lack of synchronization between /odom, /imu_data and /amcl_pose, but maybe I am wrong?

Inounx (2016-11-14 06:37:45 -0600)

Sounds to me like you are using a regular tf static_transform_broadcaster instead of a tf2 one. Is that true?

Tom Moore (2016-11-14 06:57:31 -0600)

I am using robot_state_publisher with a URDF model. The default parameters are used, so use_tf_static is false. I hadn't heard about tf2 before, so yes, I guess I am using regular tf. I will look into it.

Inounx (2016-11-14 07:13:08 -0600)

Why do I see lateral "jumps" in your robot pose as it goes around the curve? Is the amcl pose estimate being fused at that point?

Tom Moore (2016-11-17 05:12:48 -0600)

2 Answers


answered 2017-05-05 03:38:32 -0600 by Inounx

Here are the last modifications I made to improve this localization system, as I think it is not useful to leave this post open forever. Note: this is not (for now) a complete solution, as there are still some points to be improved.

I adapted and tried the ICP I mentioned in my question: in practice, the results were not so good. It is computationally expensive and does not really improve AMCL localization. I had some cases where the ICP gave worse results than AMCL, so I decided not to use it.

To check the AMCL output, I project the lidar point cloud:

  • once to the position given by AMCL
  • once to the global fused position (output of global ekf node)

and check how well it matches the map: if the AMCL pose is the best match, its position is sent to the global EKF to be fused; if the global EKF pose is the best match, I send the current global position to AMCL. If neither the EKF match nor the AMCL match is good, we stay in "degraded" mode for a maximum distance, waiting to get a good match again.

In practice it works pretty well, except in open areas where a certain distance is travelled and AMCL does not manage to re-localize successfully. The difficulty here is to set a good threshold for deciding when a match is "good enough" or not.
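As an illustration only (not the author's code), one way such a match score could be computed is to project each laser point into the map frame at a candidate pose and count how many land on occupied cells of the nav_msgs/OccupancyGrid. The function name and the occupancy threshold of 65 below are assumptions.

```python
import math

import numpy as np

def score_scan_against_map(scan_ranges, angle_min, angle_increment,
                           pose_xy_yaw, grid, occupied_threshold=65):
    """Fraction of valid laser points that land on occupied map cells.

    scan_ranges/angle_*: fields of a sensor_msgs/LaserScan (laser mounting offset
    ignored for simplicity), pose_xy_yaw: candidate (x, y, yaw) of the robot in
    the map frame, grid: a nav_msgs/OccupancyGrid.
    """
    px, py, yaw = pose_xy_yaw
    res = grid.info.resolution
    ox, oy = grid.info.origin.position.x, grid.info.origin.position.y
    width, height = grid.info.width, grid.info.height
    data = np.asarray(grid.data).reshape(height, width)

    hits, valid = 0, 0
    for i, r in enumerate(scan_ranges):
        if math.isnan(r) or not (0.0 < r < float("inf")):
            continue
        valid += 1
        # Project the beam endpoint into the map frame using the candidate pose.
        a = yaw + angle_min + i * angle_increment
        mx = px + r * math.cos(a)
        my = py + r * math.sin(a)
        cx = int((mx - ox) / res)
        cy = int((my - oy) / res)
        if 0 <= cx < width and 0 <= cy < height and data[cy, cx] >= occupied_threshold:
            hits += 1
    return float(hits) / valid if valid else 0.0
```

Comparing the score obtained at the AMCL pose with the score at the global EKF pose then gives the "best match" decision described above.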

Improvements to be made

Some improvements can still be made:

  • When AMCL begins to be lost, its computation time to produce a position increases significantly; maybe this can be used as another indicator to detect when it is lost. It is kind of ugly because setting a threshold here depends on the hardware and the actual computation load from other nodes...

  • Keep track of the number of "landmark" points in the map within a radius around the robot, and if there are too few points, stop fusing AMCL poses, as there are not enough landmarks to localize against (a rough sketch of this idea is given below).
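A minimal sketch of that second idea, assuming the map is available as a nav_msgs/OccupancyGrid; the function name and the occupancy threshold of 65 are illustrative.

```python
import numpy as np

def occupied_cells_near(grid, robot_x, robot_y, radius, occupied_threshold=65):
    """Count occupied map cells within `radius` metres of (robot_x, robot_y)."""
    res = grid.info.resolution
    ox, oy = grid.info.origin.position.x, grid.info.origin.position.y
    width, height = grid.info.width, grid.info.height
    data = np.asarray(grid.data).reshape(height, width)

    # Bounding box of the search radius, clamped to the grid.
    cx = int((robot_x - ox) / res)
    cy = int((robot_y - oy) / res)
    r_cells = int(radius / res)
    x0, x1 = max(cx - r_cells, 0), min(cx + r_cells + 1, width)
    y0, y1 = max(cy - r_cells, 0), min(cy + r_cells + 1, height)

    window = data[y0:y1, x0:x1]
    ys, xs = np.mgrid[y0:y1, x0:x1]
    in_radius = (xs - cx) ** 2 + (ys - cy) ** 2 <= r_cells ** 2
    return int(np.count_nonzero((window >= occupied_threshold) & in_radius))
```

If the count falls below some tuned minimum, AMCL poses could simply be dropped from the global fusion until the robot is back within range of enough structure.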


Comments

Hi Inounx, first, I have to say that you did a very good job explaining the AMCL issue. I am facing the same problem and I think it can be solved with a point-to-map ICP. Do you think you could share with us the method that you used, or some tips on how to achieve that? Thank you very much.

Lotfi_Z (2019-06-05 16:01:26 -0600)

@Lotfi_Z Hi, I didn't use the ICP in the end; check the https://github.com/dejanpan/snap_map_icp package, it should give you a good start.

Inounx (2019-06-10 04:42:05 -0600)

Thanks for the excellent question. Could you please share how the system operates during "degraded mode"?

Danny_PR2 (2021-03-03 09:07:50 -0600)

Hi, this is an excellent question. Sorry if my query is very dated, but can I ask how you checked whether the AMCL pose or the global EKF pose provides the best match to the lidar point cloud? Did you use any algorithm to calculate some metric score?

Muhd Hizam (2021-11-17 00:28:44 -0600)

answered 2016-11-16 03:35:35 -0600 by Tom Moore

updated 2016-11-17 03:32:33 -0600

So I have yet to run your test package, and I can't speak at all to the amcl issues, but re: the EKF covariance, for any given time step, there are three covariance matrices that matter:

  1. The estimate error covariance (the error for the entire state)
  2. The process noise covariance (how much noise we add to the filter every time we do a prediction)
  3. The measurement covariance (the error in the measurement itself)

At every time step, we make a prediction about where we should be, given the motion model, and we also make a prediction about what the estimate error covariance matrix will be. The equation for the prediction is this:

P = JPJ' + Q

Q is the process noise covariance matrix. In general, if your pose covariance is growing too quickly, you'll want to check the values for your pose AND twist variables in the process_noise_covariance parameter for the EKF. The odometry twist covariances are also important, so you can tune those as well.

EDIT 1 in response to comment question:

In many Kalman Filter formulations, I believe the time step is assumed to be constant, so Q is also constant. In the state estimation nodes in r_l, Q gets multiplied by the time delta, so the process_noise_covariance parameter is how much process noise you'd accumulate per second. This is necessary in order to cope with the measurements arriving at uncertain intervals. Tuning the values for Q is not easy, and, from what I've seen, is typically done by watching the error growth and determining if it's reasonable (i.e., hand tuning).
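To make the dt scaling concrete, here is a small numpy sketch of that prediction step (illustrative only; J stands for the motion-model Jacobian and Q_per_second for the process_noise_covariance parameter).

```python
import numpy as np

def predict_covariance(P, J, Q_per_second, dt):
    """One EKF covariance prediction: P = J P J' + Q, with Q scaled by the time delta.

    P: current estimate error covariance, J: Jacobian of the motion model,
    Q_per_second: process noise accumulated per second, dt: time since last prediction.
    """
    return J.dot(P).dot(J.T) + Q_per_second * dt

# Toy 2-state example: with dt = 0.1 s, only a tenth of the per-second process
# noise is added, so uncertainty grows at the same rate regardless of update rate.
P = np.diag([0.01, 0.01])
J = np.eye(2)
Q = np.diag([1e-4, 1e-4])
print(predict_covariance(P, J, Q, dt=0.1))
```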


Comments

Thanks for your answer! If I understand correctly, what you are saying is that the process noise covariance Q is added during the state estimation process, so the values in this matrix must be chosen according to the frequency parameter?

Inounx (2016-11-17 03:05:38 -0600)

Ok, it's much clearer now. With a few tests and a process noise covariance at 0.0001 in the diagonal values, I get pretty good results. I will update my post as soon as I have some nice screenshots.

Inounx (2016-11-17 04:00:12 -0600)
