ROS2 variant and dependency versioning for deployment
Hello,
Looking for advice on how others have approached the following situation with a base ROS variant (i.e., `ros-base`) and the corresponding ROS dependencies (resolved through `rosdep`).
Consider a released ROS 2 Docker application that was built off of `ros-${ROS_DISTRO}-ros-base` and used `rosdep` to resolve the dependencies of the custom packages in the application. If this release is built on a given date, and it turns out a feature/fix (with a new dependency) needs to be added to the release at a later date, there is no guarantee that the versions of the ROS variant and dependencies will be the same. I believe this is due to the following:
- Specifying a version of a dependency in a package's `package.xml` is not handled by `rosdep` [Refs: #q254811, https://github.com/ros-infrastructure... ]
- ROS Debian binaries (generated by `bloom`) do not specify versions of their package dependencies in the CONTROL file (see the example CONTROL file for `ros-humble-ros-base` below), and `apt` uses this file for installation
- I do not believe there exists a "snapshot" apt mirror for ROS or Ubuntu packages, like the one Debian provides [Refs: https://snapshot.debian.org/ ]
I know that breaking changes to ROS packages are not allowed/expected within a given distribution, but there could be instances in which a package unintentionally introduces a bug or breaking change. It would be ideal to "lock" the version of the variant and the corresponding dependencies to prevent unintentional regressions in the release and reduce the testing required to ensure a "dot" release is stable.
So... what do others do in this situation? Deal with any breaks/regressions whenever an old release needs to be updated and rebuilt?
Here are some of the ideas I had:
- Make a snapshot of the apt mirror(s) used to build specific releases and tie the mirror sources to the release [Ref: #q236249 ]
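For idea #1, a self-hosted snapshot could be created with a tool like aptly. A sketch of what that might look like (the mirror/snapshot names are illustrative; the repository URL and suite are those of the official ROS 2 apt repository):

```
# Mirror the ROS 2 apt repository and freeze it as a dated snapshot
# that release builds can point their sources.list at.
aptly mirror create ros2-humble http://packages.ros.org/ros2/ubuntu jammy main
aptly mirror update ros2-humble
aptly snapshot create ros2-humble-2023-04-28 from mirror ros2-humble
# -skip-signing avoids GPG setup for a quick local test; sign for real use.
aptly publish -skip-signing snapshot ros2-humble-2023-04-28
```

The same would need to be done for the Ubuntu archive itself to cover the non-ROS dependencies, which is where the storage cost becomes significant.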
- Build the ROS variant and dependencies from scratch using the following approach: fork https://github.com/ros/rosdistro, create a branch off of the desired `<distro>/<date>` tag, manually generate the compressed cache file using `rosdistro_build_cache`, update the `distribution_cache` URL in _index.yaml_ and _index-v4.yaml_ to point at the manually generated cache file, then use `rosinstall_generator` with `ROSDISTRO_INDEX_URL` pointing to the _index.yaml_ to generate a _.repos_ file which can be used to build the ROS variant.
The issue I ran into with approach #2 was that I still needed to use `rosdep` to resolve the Ubuntu dependencies of the packages contained in the _.repos_ file, which reintroduces the same unversioned-dependency issue as above.
`ros-humble-ros-base` CONTROL file:

```
Package: ros-humble-ros-base
Version: 0.10.0-1jammy.20230428.163401
Architecture: amd64
Maintainer: Steven! Ragnarök <steven@openrobotics.org>
Installed-Size: 41
Depends: ros-humble-geometry2, ros-humble-kdl-parser, ros-humble-robot-state-publisher, ros-humble-ros-core, ros-humble-rosbag2, ros-humble-urdf, ros-humble-ros-workspace
Section: misc
Priority: optional
Description: A package which extends 'ros_core' and includes other basic functionalities like tf2 and urdf.
```
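The absence of version constraints is visible in the Depends line above: a versioned dependency in Debian control format would read e.g. `ros-humble-urdf (>= 2.6.0)` (that version number is illustrative). A quick check:

```shell
# The Depends field from the ros-humble-ros-base CONTROL file shown above.
deps='ros-humble-geometry2, ros-humble-kdl-parser, ros-humble-robot-state-publisher, ros-humble-ros-core, ros-humble-rosbag2, ros-humble-urdf, ros-humble-ros-workspace'

# Debian version constraints appear in parentheses, e.g. "pkg (>= 1.2)".
# Counting '(' occurrences shows there are none here.
versioned=$(printf '%s\n' "$deps" | grep -c '(' || true)
echo "version constraints found: $versioned"   # prints "version constraints found: 0"
```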
Sure there is: wiki/SnapshotRepository. That's the ROS 1 wiki, but it works for ROS 2 as well. See snapshots.ros.org/humble, for instance, for the latest Humble snapshot.
It won't help with non-ROS packages of course.
This reminded me of a project I contributed to a couple of years ago: rosin-project/rosinstall_generator_time_machine. I haven't used it in quite a while, so it may have bitrotted a bit, but it would take the manual steps out of your workflow.
I don't believe there is a solution to this, except making complete mirrors of the package repositories you're using.
Alternative approaches, like Nix, build everything from source.
@gvdhoorn, this is awesome! Thanks for the reply and clarification. I think a combination of the ROS SnapshotRepository with a "homegrown" Ubuntu snapshot would work great. I will for sure look into Nix and the `rosinstall_generator_time_machine`.

re: Nix: you might want to check out the presentation by Clearpath from ROSCon '22: "Better ROS Builds with Nix" (clearpathrobotics/nix-ros-base). It won't solve your entire reproducibility problem, but it's probably interesting in any case.
Note: I haven't posted an answer, as I don't have the answer. It's all just a bunch of comments on small parts of your question.
Awesome, thanks for the presentation link. Would you like me to close this question, as maybe there isn't a perfect answer since it can depend on the specific application?
No, no need. Let's just see whether you'd get some other responses.
It is actually relatively common to fork rosdistro to have static versions of things that you can manually upgrade to manage dependencies. You need to build everything yourself, but that can be done in a CI job or via Docker Hub, so that you can pull it down and have everything as if you were using the officially released debians.
Forking `rosdistro` is one thing, but the issue @danambrosio highlights is the dependencies. With a package-based OS distribution, you can't achieve fully deterministic/reproducible builds without having a full copy/snapshot of the package repositories you use.
Building ROS from source is maybe 30% of the solution.
Build systems like Yocto (with meta-ros) build everything from source, including the OS. That would allow reproducible builds, but it is not what the OP asked for (i.e., Docker image recreation).
Note btw: for Docker images specifically, it is possible to "pin" which image you start your own from. See #q411417 for a previous Q&A.
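To make the pinning concrete: a Dockerfile can reference the base image by its immutable content digest instead of a mutable tag. The digest below is a placeholder, not a real value; use the one reported by `docker images --digests` (or printed by `docker pull`) for the image your release was actually built from:

```
# Pin the base image to an immutable content digest rather than a tag.
# The digest here is a placeholder.
FROM ros:humble-ros-base@sha256:<digest-of-the-image-you-released-with>
```

This fixes the starting layer, but of course any `apt-get install` run during the build can still pull newer package versions.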