
Using Docker containers would be one way (as Docker can isolate both processes and network traffic).

If you don't want to / can't use that, you could use roslaunch's -p argument to tell it to start a separate roscore on a specific port. It will then use that specific roscore for all the nodes it starts. You'd assign each user a different port, which would keep the node graphs tied to those roscores isolated from each other.
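As a sketch of the port-per-user idea (the port scheme, package name, and launch file below are made up for illustration; 11311 is roscore's default port):

```shell
# Sketch: give each user their own master port and pass it to roslaunch
# with -p. The port scheme and launch file names are illustrative.
user_port() {
    # user index 1, 2, ... maps to 11312, 11313, ... (11311 is the default)
    echo $((11311 + $1))
}

PORT="$(user_port 1)"
echo "$PORT"   # 11312 for user 1

# Each user would then launch with their own port, e.g.:
# roslaunch -p "$PORT" my_pkg my_nodes.launch
```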

The -p argument also automatically updates the port in the ROS_MASTER_URI for that roslaunch session. I'd probably still recommend setting ROS_MASTER_URI manually though: the ROS_MASTER_URI in the environment of the current shell is not changed (only the value passed to processes started by roslaunch), so rosrun, rostopic et al. and directly run node binaries/scripts would still use the default value if you don't change it.
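A minimal sketch of setting it manually in each user's shell (e.g. in their ~/.bashrc; the port here is an example and must match whatever was passed to -p):

```shell
# Sketch: make the current shell (and thus rosrun, rostopic, etc.) talk
# to the per-user master. Port 11312 is an illustrative value matching
# the one given to roslaunch -p.
export ROS_MASTER_URI="http://localhost:11312"
echo "$ROS_MASTER_URI"
```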

Note: regardless of how you approach this (Docker, separate roscores), there is no protection, authentication, or access control. All it takes for one user to access (and interfere with) the node graph of another is to set their ROS_MASTER_URI to point to another roscore instance, and "they're in".

If that's a concern (or the UX of Docker, or the UX of having to maintain separate roscores), perhaps virtual machines could offer a solution. Give each user their own VM. That would isolate them completely.


Edit:

For docker solution (very new in this): If I have 5 users, it means to pull 5 docker stations, and run them by users right?

I'm not sure I understand what you mean by this. You'd create (at least) 5 containers, which might also be started from 5 different images, but that would depend on what your users want/need to run.

Note that Docker will require some training, and is probably not suited for users with little experience with terminals or Linux process management.

I've looked at docker stuff regarding ROS, and I've had a little trouble connecting two terminals (one for starting ROSCORE, the other one for ROSRUN).

I would probably suggest starting a single container and then running all ROS-related commands inside it. You could either use screen or tmux to multiplex a single bash session, or use docker exec to start additional bash sessions in the same container. That would keep all network traffic inside the container.
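A rough sketch of that workflow (the image tag and container name are illustrative; nothing below actually talks to the Docker daemon, it just assembles the commands a user would run):

```shell
# Sketch: one long-lived container per user, with extra terminals
# attached via `docker exec`. Names and image tag are illustrative.
USER_NAME="user1"
IMAGE="ros:noetic"

# Command to start the container in the background, with roscore
# keeping it alive:
START_CMD="docker run -d --name ros_${USER_NAME} ${IMAGE} roscore"

# Command to attach an interactive shell (run once per extra terminal);
# all ROS traffic stays inside this container's network namespace:
SHELL_CMD="docker exec -it ros_${USER_NAME} bash"

echo "$START_CMD"
echo "$SHELL_CMD"
```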

This solution will allow building packages without making a mess if a user makes a mistake?

Again, not sure what you mean by this.

I assumed all users would be actual Linux users with their own $HOME and thus also their own Catkin workspace. I'm not sure how they would interfere with each other in that case.

Which solution will use less resources from the server?

There might be a slight overhead caused by Docker's networking infrastructure, but I'd be surprised if that influences performance very much, unless you start pushing hundreds of MB/s over ROS topics. On the CPU side there should be almost zero overhead.

VMs of course incur quite some performance overhead (so whether they're worth it would be a trade-off only you can make).