
Archive Download Failure from Buildfarm

asked 2020-09-10 02:41:34 -0500 by DavidMansolino

We have a ROS2 package that downloads a large file that we don't want to commit directly to the repository.

Unfortunately, this download is refused on the ROS2 buildfarm:

08:26:53 Traceback (most recent call last):
08:26:53   File "", line 29, in <module>
08:26:53     os.path.join(os.path.dirname(__file__), archiveName))
08:26:53   File "/usr/lib/python3.6/urllib/request.py", line 248, in urlretrieve
08:26:53     with contextlib.closing(urlopen(url, data)) as fp:
08:26:53   File "/usr/lib/python3.6/urllib/request.py", line 223, in urlopen
08:26:53     return opener.open(url, data, timeout)
08:26:53   File "/usr/lib/python3.6/urllib/request.py", line 526, in open
08:26:53     response = self._open(req, data)
08:26:53   File "/usr/lib/python3.6/urllib/request.py", line 544, in _open
08:26:53     '_open', req)
08:26:53   File "/usr/lib/python3.6/urllib/request.py", line 504, in _call_chain
08:26:53     result = func(*args)
08:26:53   File "/usr/lib/python3.6/urllib/request.py", line 1368, in https_open
08:26:53     context=self._context, check_hostname=self._check_hostname)
08:26:53   File "/usr/lib/python3.6/urllib/request.py", line 1327, in do_open
08:26:53     raise URLError(err)
08:26:53 urllib.error.URLError: <urlopen error [Errno 111] Connection refused>

So my question is: are we allowed to download files from within the buildfarm, or is the Docker container blocked from communicating with the outside world? And if it is blocked, what is the recommended way to deal with such cases?

Thank you for the help. David


1 Answer


answered 2020-09-11 13:57:54 -0500 by nuclearsandwich

Once the sources are fetched, network access should not be required to create a Debian source package. I'm not certain whether this is being intentionally blocked, but it's reproducible off the buildfarm: cloning the release repository and running debian/rules clean or fakeroot debian/rules clean in a Docker container with network access and the same set of packages used in the buildfarm containers yields the same result. I'm also not certain why the request is failing. I tried stracing it, and the script attempts to issue a connect(2) to localhost port 9. So if there's something in the debhelper / pybuild pipeline intercepting DNS requests and blocking them, that could be the mechanism.

Your script is making a network request every time it is run, which I can't advise in any case. It also looks like it's attempting to pull archives larger than 1 GB every single time it is invoked, even if the archive is already present locally, which is going to be awful for individuals with low bandwidth and for our build environment, where traffic outside the datacenter is not unlimited. I would suggest rethinking how this payload is fetched. At the very least, you shouldn't fetch it when it's not needed, and you should use some mechanism to avoid re-fetching it when it is already present.
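A minimal sketch of the kind of guarded fetch suggested above. The URL, file name, and helper name here are placeholders (the question's actual download script isn't shown); the point is simply to skip the download when the archive already exists and to avoid leaving a partial file behind on failure:

```python
import os
import urllib.request


def fetch_archive(url, dest_path):
    """Download url to dest_path, skipping the fetch if the file already exists."""
    if os.path.exists(dest_path):
        # Archive already present locally: no network access needed.
        return dest_path
    # Download to a temporary name first so that a partial download is
    # never mistaken for a complete archive on the next invocation.
    tmp_path = dest_path + ".part"
    urllib.request.urlretrieve(url, tmp_path)
    os.replace(tmp_path, dest_path)
    return dest_path
```

With this guard, only the very first build on a machine hits the network; subsequent builds (and environments where the archive is pre-seeded, such as a buildfarm container) never attempt a connection at all.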



I tried stracing it, and the script attempts to issue a connect(2) to localhost port 9

The discard protocol? Is that perhaps used to determine whether a network connection is even possible/the stack is working?

gvdhoorn  (2020-09-11 14:39:37 -0500)

The discard protocol?

Yep! The /dev/null of TCP and UDP. However, this connection is only attempted when running via debian/rules. In the same environment I checked both python3 and curl, and neither of them ever attempted a connection to port 9. Instead they both hit the local DNS server's port 53, which is why my best hypothesis with the current information is that something is intercepting and redirecting the system calls, but I couldn't find any docs on what would do that.

nuclearsandwich  (2020-09-11 15:20:40 -0500)

This must be some sort of new protection in the Debian helpers; I know people have abused things in similar ways in the past. But I agree that this approach is not optimal. In particular, we don't want to be distributing Debian packages of 1 GB or anything near that size. They take up a lot of resources on our hosting infrastructure and a lot of bandwidth for our users. What is making these packages so big? Are there assets included?

It would seem to me that we should be building Webots here, not downloading release binaries. In particular, the logic for binaries gets pretty complicated for alternative architectures.

Relatedly, the most common way to do this sort of thing in CMake is with the ExternalProject macros, which can fetch things at build time. So I think this might be related to the Python settings. And the ...(more)

tfoote  (2020-09-11 16:42:18 -0500)

Thank you all for this information; it is now clearer why this is failing (and also why we should avoid it). We will change our package to avoid bundling this dependency, which will be cleaner and will resolve this issue at the same time.

DavidMansolino  (2020-09-15 04:50:48 -0500)

Seen: 89 times

Last updated: Sep 11 '20