is there an alternative to docker that is less restrictive?

You could take a look at Singularity. It uses the same underlying technology as Docker, but has a different approach.

Where Docker isolates (almost) everything by default and you must configure it to allow access to certain resources of the host, Singularity only selectively shields containers from the host's resources and you have to configure it to isolate things more. By default, Singularity will for instance mount the user's $HOME, /dev, /tmp and a bunch of other directories. It also allows access to the host network by default (similar to Docker's --net=host), which makes using network applications quite a bit easier. And because of these default binds, using UI applications is essentially a zero-config affair (it typically just works). Access to GPUs is also supported (no need for nvidia-docker).
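To make that concrete, here is a rough sketch of what running a GUI tool such as rviz could look like with each (my_ros_image and my_ros_image.sif are just placeholder image names): the Docker invocation has to wire up network, display and GPU access explicitly, while Singularity picks most of that up from its defaults:

    # Docker: explicitly open up network, X11 and GPU access
    # (--gpus is the newer built-in replacement for nvidia-docker)
    docker run -it --rm \
        --net=host \
        --env DISPLAY=$DISPLAY \
        --volume /tmp/.X11-unix:/tmp/.X11-unix \
        --gpus all \
        my_ros_image \
        rviz

    # Singularity: host network, $HOME, /dev and /tmp are bound by default;
    # --nv makes the host's NVIDIA driver and libraries available inside
    singularity exec --nv my_ros_image.sif rviz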

It also supports importing and running Docker images, so if you are a fan of Docker's way of building images, you could still use that, and then import the resulting image and use it with Singularity.
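As a sketch of that workflow (the image and tag are only examples), a reasonably recent Singularity can pull or build directly from a Docker registry:

    # pull an image from Docker Hub and convert it to a Singularity image
    singularity pull ros_image.sif docker://ros:noetic-ros-base

    # or equivalently, build from it (build also accepts definition files)
    singularity build ros_image.sif docker://ros:noetic-ros-base

    # then use it like any other Singularity image
    singularity exec ros_image.sif roscore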

All of this makes it much easier to simply package a deployment and run it on a different host, but it also means that containers are (somewhat) less isolated from the host. Whether that fits your use-cases is something only you can decide.

Personally, I use Singularity whenever I need to deploy something on a machine where I'm mostly interested in avoiding "polluting" the host system with all the dependencies of my application, or where some nodes require sets of dependencies that are incompatible with those of other parts of my application, but I'd still like to run the packaged application(s) almost as if they were installed on the host.
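As an illustration of that kind of packaging (the package and launch file names are made up, and the exact recipe will depend on your application), a minimal Singularity definition file bootstrapped from a ROS Docker image might look like this:

    # write a minimal definition file; ros-noetic-rviz stands in for your
    # application's real dependencies
    cat > my_app.def <<'EOF'
    Bootstrap: docker
    From: ros:noetic-ros-base

    %post
        apt-get update && apt-get install -y ros-noetic-rviz

    %runscript
        exec roslaunch my_package my_app.launch "$@"
    EOF

    # building generally requires root (newer versions also offer --fakeroot)
    sudo singularity build my_app.sif my_app.def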

With any alternative there are of course some things that seem strange or unexpected compared to Docker (such as read-only containers by default), but most of it is again configurable.
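As an example of that configurability (paths here are placeholders), the read-only default can be side-stepped by building into a writable "sandbox" directory instead of a read-only image file:

    # build into a writable directory tree instead of a read-only image
    sudo singularity build --sandbox my_app_sandbox/ docker://ubuntu:20.04

    # open a shell in which changes to the container's filesystem persist
    sudo singularity shell --writable my_app_sandbox/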

@stevemacenski wrote:

In terms of the restrictiveness of tools, I'd argue Docker is one of the most forgiving.

Singularity might actually be even easier.

I believe packaging entire deployments into single images is not really what Docker was made for, and the default isolation of containers gets in the way of federated development and distributed network applications, requiring the user to poke a lot of holes in it. That's all supported, but it does complicate configuration quite a bit.
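As an illustration of the kind of holes you end up poking for a distributed ROS application (the addresses, image and package names here are made up): with Docker's default bridge network you either publish ports and export the right ROS environment variables, or give up on network isolation altogether:

    # publish the ROS master port and tell nodes how to reach this host;
    # node-to-node traffic uses additional (ephemeral) ports, so this alone
    # is usually not enough
    docker run -it --rm \
        -p 11311:11311 \
        -e ROS_MASTER_URI=http://192.168.1.10:11311 \
        -e ROS_IP=192.168.1.10 \
        my_ros_image roslaunch my_package my_app.launch

    # ... or drop the network isolation entirely
    docker run -it --rm --net=host \
        my_ros_image roslaunch my_package my_app.launch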

Not to say it doesn't have its use-cases, as evidenced by the many companies that use Docker for deployment of their (distributed) applications.