
how to use process_noise_covariance

asked 2017-06-30 04:01:13 -0500

linusnie

updated 2017-06-30 04:14:08 -0500

I am using the ekf_localization_node from the robot_localization package to track the state of a remote-controlled car. As indicated in this question, the filtering is based on an omnidirectional motion model, so I would like to tune the process_noise_covariance parameter to give the EKF some information about how the system is expected to behave.

Currently I am setting the x-axis entry in process_noise_covariance larger than the y- and z-axis entries (I'm assuming the coordinates refer to the base_link frame, but I haven't found confirmation of this). The idea is that this biases the EKF towards forward motion.

Is this the correct approach? Is there anything else I can keep in mind?


1 Answer


answered 2017-09-22 18:30:13 -0500

Tom Moore

The process_noise_covariance matrix parameter is 15x15, which means it is applied to the covariance for the entire state. In other words, the values on the diagonal are the variances for the state variables, which are ordered as pose, then velocities, then linear acceleration.
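To illustrate the layout (this sketch is my addition, not part of the original answer, and the variance values are purely placeholders): the parameter is given as a flat, row-major list of 225 numbers, and the diagonal entries follow the state order x, y, z, roll, pitch, yaw, then the corresponding velocities, then the linear accelerations.

```python
# Sketch only: build a diagonal 15x15 process noise matrix and flatten it into
# the row-major list that the process_noise_covariance parameter takes.
# The numeric values here are illustrative, not recommendations.
import numpy as np

STATE = ["x", "y", "z", "roll", "pitch", "yaw",
         "vx", "vy", "vz", "vroll", "vpitch", "vyaw",
         "ax", "ay", "az"]

variances = {name: 0.01 for name in STATE}
# The questioner's idea: expect more unmodeled change along the body x axis
# than sideways or vertically, so make the x-related entries larger.
variances["x"] = 0.05
variances["vx"] = 0.05

Q = np.diag([variances[name] for name in STATE])
process_noise_covariance = Q.flatten().tolist()  # 225 values, row-major
```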

In the case of your robot, I'm assuming it would be best described by the Ackermann kinematic model. The robot_localization EKF model, when two_d_mode is enabled, simplifies to the unicycle model. So at each time step, we make a prediction using that model. This will produce some error between where the robot thinks it is and where it really is. This error will be considerably larger for you, since the motion model doesn't accurately describe your robot. Since we apply the process_noise_covariance at the end of the prediction step, you can use it to account for the fact that the models don't match.
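To be explicit about where that happens (standard EKF notation, my addition): the process noise covariance Q, which is what process_noise_covariance sets, is added when the state covariance is propagated through the motion model,

```latex
\hat{P}_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q
```

so larger diagonal entries simply tell the filter to expect more unmodeled change in those state variables between corrections.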

The only problem for you is that the error is going to be a function of your steering angle. When the robot is driving straight, the unicycle and Ackermann models will both produce the same prediction. When the robot is turning, the models will clearly differ during prediction. You could compute an upper bound on this error and use it to dictate the process_noise_covariance values for the affected variables.
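Here is a rough sketch of that bound (my addition, not from the answer). It assumes the filter's one-step prediction is approximately body-frame velocity times dt, i.e. a straight-line step, and compares that against the constant-curvature arc an Ackermann/bicycle model would actually drive at a given steering angle; the exact robot_localization implementation may differ in detail.

```python
# Sketch: per-step body-frame position error between a straight-line
# (unicycle-style, Euler) prediction and the Ackermann/bicycle arc.
import math

def prediction_error(speed, steering_angle, wheelbase, dt):
    """Return (dx_err, dy_err): error of the straight-line prediction
    relative to the arc the car actually follows over one time step."""
    yaw_rate = speed / wheelbase * math.tan(steering_angle)

    # Straight-line prediction (unicycle with zero lateral velocity):
    straight = (speed * dt, 0.0)

    # Constant-curvature arc prediction (bicycle/Ackermann):
    if abs(yaw_rate) < 1e-9:
        arc = (speed * dt, 0.0)
    else:
        r = speed / yaw_rate
        arc = (r * math.sin(yaw_rate * dt),
               r * (1.0 - math.cos(yaw_rate * dt)))

    return (arc[0] - straight[0], arc[1] - straight[1])

# Worst case at full steering lock, e.g. 30 degrees, 2 m/s, 0.3 m wheelbase,
# 30 Hz filter rate (all illustrative numbers):
dx_err, dy_err = prediction_error(2.0, math.radians(30.0), 0.3, 1.0 / 30.0)
print(dx_err, dy_err)  # use something like these (squared) to size Q for x/y
```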

If you can, I'd fuse the pose data from your odometry, rather than the velocity.
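For reference (my addition, expressed in Python for readability; in practice this lives in your EKF YAML file), the odom0_config parameter is a 15-element boolean vector in the same state order as above, so fusing pose instead of velocity from the wheel odometry would look roughly like:

```python
# Sketch only: enable x, y, yaw from the odometry pose and disable the
# corresponding velocities.
odom0_config = [
    True,  True,  False,   # x, y, z
    False, False, True,    # roll, pitch, yaw
    False, False, False,   # vx, vy, vz
    False, False, False,   # vroll, vpitch, vyaw
    False, False, False,   # ax, ay, az
]
```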
