Questions on adapting an algorithm to multiple state-space configurations and releasing it to the ROS community
Hello,
Having successfully developed an initial version of a multi-robot localization and object tracking algorithm, we would like to make it accessible to the ROS community. I have some questions about the best approach to this goal.
Repository of our package: https://github.com/guilhermelawless/pfuclt_omni_dataset
Our algorithm currently works in ROS Indigo and Kinetic, but only for a specific sensor configuration, state-space configuration, and map type (landmark-based). Since the method is particle-filter-based, adapting it to other configurations is not hard. Here is what we would like to support (or have already done):
- Support multiple 3D and 6D robot state-space configurations
- Support other motion models for robots and objects/targets
- Support other kinds of maps (e.g., feature-based, occupancy grids)
All robots should use the same configuration.
We know this is not easy to achieve, but we are willing to put effort into it. Our most important unanswered question concerns the implementation. I see two different paths we can take:
- Define a configuration-agnostic high-level interface of virtual functions for the algorithm, implemented in separate classes (one per robot configuration type, for instance). Developers could then implement their own configurations this way.
- Make a single implementation that uses ROS parameters to decide which configuration to use at runtime.
Path 1 requires compile-time selection, so to provide many configurations out of the box we would either require users to compile the package themselves or ship many binaries (nodes), each running a different configuration.
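As a rough illustration of path 1, the algorithm could depend only on an abstract interface, with one concrete class per state-space configuration chosen at compile time. All names below are hypothetical sketches, not part of the existing pfuclt_omni_dataset code:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of path 1: a configuration-agnostic interface.
// The particle filter would only ever see MotionModel, while each
// binary/node instantiates one concrete configuration.
struct State {
  std::vector<double> x;  // particle state; dimension depends on configuration
};

class MotionModel {
public:
  virtual ~MotionModel() = default;
  // Propagate one particle's state over dt using this configuration's
  // kinematics; each state-space configuration overrides this.
  virtual void predict(State& s, double dt) const = 0;
  virtual std::size_t stateDim() const = 0;
};

// Example configuration: planar robot with state (x, y, yaw).
class Planar3DModel : public MotionModel {
public:
  void predict(State& s, double dt) const override {
    // Placeholder constant-velocity motion; a real model would apply
    // odometry plus sampled process noise here.
    s.x[0] += vx_ * dt;
    s.x[1] += vy_ * dt;
  }
  std::size_t stateDim() const override { return 3; }

private:
  double vx_ = 0.1, vy_ = 0.0;
};
```

Each provided node would then construct one concrete model, while the filter code downstream touches only the `MotionModel` interface.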
Path 2 would be a single binary that takes many ROS parameters to decide what to run. This would mean a much larger binary, and it could become confusing to use.
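Path 2 could be sketched as a small factory keyed on a string that, in the node, would come from a ROS parameter (e.g. via `ros::param::param<std::string>("~motion_model", name, "planar_3d")`). Again, the class and parameter names here are hypothetical, not the package's actual API:

```cpp
#include <memory>
#include <stdexcept>
#include <string>

// Hypothetical sketch of path 2: one binary with every supported
// configuration compiled in, selected at startup by a ROS parameter.
class MotionModel {
public:
  virtual ~MotionModel() = default;
  virtual const char* describe() const = 0;
};

class Planar3DModel : public MotionModel {
public:
  const char* describe() const override { return "planar 3D (x, y, yaw)"; }
};

class Full6DModel : public MotionModel {
public:
  const char* describe() const override { return "full 6D pose"; }
};

// Factory: the parameter value picks one configuration at runtime;
// an unknown value fails fast with a clear error.
std::unique_ptr<MotionModel> makeModel(const std::string& name) {
  if (name == "planar_3d") return std::make_unique<Planar3DModel>();
  if (name == "full_6d") return std::make_unique<Full6DModel>();
  throw std::invalid_argument("unknown motion_model: " + name);
}
```

The drawback noted above is visible even in this toy version: every configuration is linked into the one binary, and the set of valid parameter combinations has to be documented and validated carefully.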
Has anyone developed something similar in ROS, i.e., a package that can take different robot configurations and observation models? We would be very interested in hearing your thoughts.
What would be the best approach for sharing this implementation with the robotics community?
Thank you for your help.