UKF vs EKF performance
I hear a lot about how the UKF is better than the EKF at the cost of CPU usage. How much are we talking about here, in terms of the robot_localization ekf and ukf nodes?
Asked by mugetsu on 2019-08-27 14:58:07 UTC
Answers
I just ran a test on a 10th-gen NUC i7 computer, and the difference in performance between ekf_localization_node and ukf_localization_node is negligible. I used the same config for both, just changing the node. The test case has one input IMU topic @ 100 Hz and one input 2D odometry topic @ 15 Hz. Output frequency is set to 50 Hz, and two_d_mode is false.
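For reference, here is a minimal sketch of a config of that shape; the topic names and the choice of fused fields are assumptions on my part, but the parameter names follow the robot_localization documentation, and swapping ekf_localization_node for ukf_localization_node needs no config changes:

    # Shared config sketch for both filter nodes (topic names assumed)
    frequency: 50
    two_d_mode: false

    # 2D odometry @ ~15 Hz: fuse x, y, yaw
    # (field order: x, y, z, roll, pitch, yaw, vx, vy, vz,
    #  vroll, vpitch, vyaw, ax, ay, az)
    odom0: /wheel_odometry
    odom0_config: [true,  true,  false,
                   false, false, true,
                   false, false, false,
                   false, false, false,
                   false, false, false]

    # IMU @ ~100 Hz: fuse orientation, angular velocity, linear acceleration
    imu0: /imu/data
    imu0_config: [false, false, false,
                  true,  true,  true,
                  false, false, false,
                  true,  true,  true,
                  true,  true,  true]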
Both EKF and UKF use 5.3 % +- 1 % of one CPU core on average.
EKF rostopic delay stats: average delay: 0.021 s, min: 0.001 s, max: 0.050 s, std dev: 0.01381 s, window: 200
UKF rostopic delay stats: average delay: 0.025 s, min: 0.005 s, max: 0.054 s, std dev: 0.01379 s, window: 200
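For anyone reproducing the measurement, the numbers above can be gathered with standard tools; /odometry/filtered is the default robot_localization output topic:

    # End-to-end delay of the filter output, computed from header stamps
    rostopic delay /odometry/filtered

    # Sample CPU usage of the running filter process
    top -p $(pgrep -f ekf_localization_node)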
Maybe on a less powerful computer like a Raspberry Pi the results would differ more, but on strong computers there is really no difference.
Asked by peci1 on 2022-10-23 08:34:46 UTC
Comments
What metrics are you looking for? These are really well documented algorithms used across many industries. I'd recommend consulting research and benchmark papers. This isn't specific to robot localization.
Asked by stevemacenski on 2019-08-27 17:11:31 UTC
@stevemacenski: so it would appear the metric the OP is interested in is "CPU usage".
Asked by gvdhoorn on 2019-08-28 03:05:45 UTC
For most localization methods I would expect these things to be use-case specific. It is probably true that you can rank methods from CPU-heavy to lightweight, but depending on the size of your map, the number of landmarks/features, your sampling methods, etc., the differences between methods will change. I don't think you'll find an easy answer to this question (besides collecting your own empirical results). If you don't want to perform real-life testing or simulation, then at least provide us with more information about your use case. That way, people who have experimented with these methods on similar setups may be able to provide you with some rough estimates.
Asked by MCornelis on 2019-08-28 04:56:05 UTC
Slightly off-topic, but big-O analysis and similar methods can definitely be used to express the (runtime) complexity of algorithms without having to benchmark. It would allow comparing algorithms based on their input sizes (i.e., "size of your map, number of landmarks/features, your sampling methods, etc.") and saying something about which would outperform which.
Whether that exists for these specific algorithms is something else.
Asked by gvdhoorn on 2019-08-28 05:31:52 UTC
This is true, but I don't think the difference between UKF and EKF is of that magnitude (I could be wrong here). If one were O(n) and the other O(n^2), for instance, then yes, of course you could get a pretty good idea based on the size of your input.
Asked by MCornelis on 2019-08-28 06:33:16 UTC
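For a rough sense of why the measured costs end up so close in this case, a back-of-the-envelope complexity sketch (textbook costs, not measured from the robot_localization implementation):

    % robot_localization estimates a fixed 15-dimensional state:
    % position, orientation, their velocities, and linear acceleration.
    n = 15
    % UKF: propagates 2n + 1 = 31 sigma points per cycle and takes a
    % Cholesky factorization of the n x n covariance, which is O(n^3).
    % EKF: covariance propagation P_{k|k-1} = F P F^T + Q is also O(n^3).
    % With n fixed, both filters run in constant time per update and
    % differ only by a constant factor, consistent with the measurements
    % in the answer above.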