I believe the `amcl` package is what you need for your dynamic environment with three robots, and it can be adopted immediately for a variety of robots.

If you look into the function `pf_update_resample` in the source code, you can see how both Augmented_MCL and KLD_Sampling_MCL are implemented.

The KLD_Sampling_MCL algorithm adapts the number of particles at run time, while Augmented_MCL decides how many particles drawn from a uniform distribution are injected into the particle cloud, based on the ratio of short-term to long-term average measurement likelihood. The reason this class of MCL algorithms can handle dynamic environments lies in the measurement models they use: both **beam_range_finder_model** and **likelihood_field_range_finder_model** include a term for unexpected objects, which is the main reason I think MCL variants are able to handle dynamic environments.
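To make the two mechanisms concrete, here is a minimal Python sketch (not the actual C code in `amcl`) of the KLD-sampling particle bound and the Augmented MCL random-injection probability, following the formulas in *Probabilistic Robotics*; the parameter names `alpha_slow`/`alpha_fast` and the default values are my own illustrative choices:

```python
import math

def kld_sample_limit(k, epsilon=0.05, z=2.33):
    """Upper bound on the particle count from KLD-sampling.
    k: number of histogram bins occupied by at least one particle;
    epsilon: allowed KL-divergence error;
    z: upper (1-delta) quantile of the standard normal (2.33 ~ delta=0.01)."""
    if k <= 1:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    return int(math.ceil((k - 1) / (2.0 * epsilon)
                         * (1.0 - a + math.sqrt(a) * z) ** 3))

class AugmentedMCL:
    """Tracks short- and long-term averages of the measurement likelihood
    and derives the probability of injecting uniform random particles."""
    def __init__(self, alpha_slow=0.001, alpha_fast=0.1):
        self.alpha_slow, self.alpha_fast = alpha_slow, alpha_fast
        self.w_slow = self.w_fast = 0.0

    def update(self, w_avg):
        # w_avg: mean (unnormalised) particle weight from the latest scan
        if self.w_slow == 0.0:
            self.w_slow = w_avg
        if self.w_fast == 0.0:
            self.w_fast = w_avg
        self.w_slow += self.alpha_slow * (w_avg - self.w_slow)
        self.w_fast += self.alpha_fast * (w_avg - self.w_fast)
        # If recent likelihoods drop below the long-term trend,
        # inject random particles with this probability.
        return max(0.0, 1.0 - self.w_fast / self.w_slow)
```

The injection probability stays at zero while localization is healthy (short-term average tracks the long-term one) and rises sharply after a sudden likelihood drop, e.g. a kidnapped-robot event.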

Speaking of Markov Localization for robosoccer, I assume you mean grid-based Markov Localization, called the Grid Localization algorithm in chapter 8.2 of *Probabilistic Robotics* by Thrun, Burgard, and Fox, because I really don't see how the abstract Markov Localization of chapter 7.2 would be implemented directly (I hope someone can fill me in). If the map for robosoccer is not too big, I agree that Markov Localization could be the best choice. However, given the dynamic environment, it is better to use MCL algorithms instead. Alternatively, *Probabilistic Robotics* mentions that Dynamic Markov Localization can work in dynamic environments.
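For reference, one step of grid-based Markov Localization is just a discrete Bayes filter over map cells. This toy Python sketch uses a 1-D cyclic grid as a stand-in for the 2-D soccer-field grid; the function name and the kernel representation are my own, not from any library:

```python
def grid_localization_step(belief, motion_kernel, likelihood):
    """One prediction + correction step of grid-based Markov Localization.
    belief: list of cell probabilities summing to 1;
    motion_kernel: dict {cell offset: probability} for the motion model;
    likelihood: per-cell measurement likelihood p(z | x)."""
    n = len(belief)
    # Prediction: convolve the belief with the motion model.
    predicted = [0.0] * n
    for i in range(n):
        for offset, p in motion_kernel.items():
            predicted[(i + offset) % n] += belief[i] * p
    # Correction: weight by the measurement likelihood and renormalise.
    posterior = [b * l for b, l in zip(predicted, likelihood)]
    total = sum(posterior)
    return [p / total for p in posterior] if total > 0 else [1.0 / n] * n
```

This also shows why grid size matters: the prediction step touches every cell for every kernel entry, so a large, finely discretised field quickly becomes expensive compared to a particle filter.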