Difference in delay time between ROS and ROS 2 for RT applications?
I know that ROS is made for Real Time (RT) applications, but why? This should obviously depend on the complexity of the application. Since ROS uses a master as its name service, there has to be a certain delay in the signal exchange the first time, but I guess once the publisher and subscriber nodes are connected, the delay should be minimal. ROS 2 doesn't have a master, as it uses a distributed discovery mechanism, and thus the message exchange can start instantly.
Now my questions:
- Is this the only difference between the delay times in ROS and ROS 2? What exactly is the time delay for a normal ROS application? Is it a few milliseconds, some seconds, or does it totally depend on the types of signals exchanged? What is the real-time factor? Can the delay be monitored?
- Up to what frequencies can ROS handle signals? 1-100 Hz, or in the range of kHz or MHz?
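On whether the delay can be monitored: in ROS 1, `rostopic delay <topic>` reports the difference between each message's `header.stamp` and its local arrival time. A rough sketch of that same computation in plain Python (the sample values below are made up, and this ignores clock synchronization between machines):

```python
import statistics

def delay_stats(samples):
    """Compute delay statistics from (publish stamp, receive time) pairs,
    similar in spirit to what `rostopic delay` reports in ROS 1."""
    delays = [recv - stamp for stamp, recv in samples]
    return {
        "min": min(delays),
        "max": max(delays),
        "mean": statistics.mean(delays),
        "stdev": statistics.stdev(delays) if len(delays) > 1 else 0.0,
    }

# Hypothetical samples: (header stamp, receive time) in seconds
samples = [(0.000, 0.0012), (0.010, 0.0109), (0.020, 0.0215)]
print(delay_stats(samples))
```

Note that this only measures one-way delay meaningfully if publisher and subscriber share a clock; across PCs you would normally measure round-trip time instead.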
EDIT 1:
Let us take a simulation example of a vehicle.
We have 3 nodes, namely Driver, Engine and Drivetrain. The engine supplies torque to the drivetrain and receives back the speed from the drivetrain, and the control (i.e. how much torque is required and gear shifting) is done by the driver. Thus it is a closed loop where all the nodes need to exchange messages instantly.
If there is any delay in the message exchange (beyond a tolerable limit), the behaviour of the car would change completely.
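To make the closed loop concrete, here is a single-process sketch of the Driver/Engine/Drivetrain exchange with made-up first-order dynamics and hypothetical gains. In ROS these would be three nodes exchanging messages; here they are plain functions called once per 1 ms tick, purely to show the data flow:

```python
def driver(speed, target=30.0):
    # Proportional torque command from speed error (hypothetical gains/limits)
    return max(0.0, min(200.0, 5.0 * (target - speed)))

def engine(torque_cmd):
    # Pass the commanded torque straight through (no engine dynamics here)
    return torque_cmd

def drivetrain(speed, torque, dt=0.001, inertia=50.0, drag=0.5):
    # Toy vehicle model: accel = (torque - drag * speed) / inertia
    return speed + dt * (torque - drag * speed) / inertia

speed = 0.0
for _ in range(10_000):              # 10 s of simulated time at 1 kHz
    torque = engine(driver(speed))   # Driver -> Engine
    speed = drivetrain(speed, torque)  # Engine -> Drivetrain -> back
print(f"speed after 10 s: {speed:.1f} m/s")
```

If the speed feedback arrives late, the driver computes its torque command from stale data, which is exactly the loop-stability concern raised above.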
For this application, can I use a ROS system:
- Can I build all three nodes in ROS? Be it on the same or on different PCs.
- Let's say I build these 3 functions (instead of nodes) in different simulation tools (e.g. Simulink). Then can I use ROS as a message-exchanging service between them? Or is it possible only in ROS 2 (through DDS)?
- If yes, then what would be the CPU load, delay times, etc.?
- Which timestamps would be used by ROS? How are the timestamps handled when different PCs are used? Is it done by the master?
EDIT 2: For my application, I need to exchange messages every 1 ms, i.e. at a frequency of 1 kHz.
I hope it is a bit clearer now.
You might want to be careful with these sorts of assertions, both with what you mean by 'real-time' and with stating something like that.
real-time != (on-line || fast enough). ROS 1 was also not designed to be hard RT at all.
How so? Do nodes somehow not have to find each other and set up communication channels in ROS 2?
I'm rather confused about your statements here tbh.
OK... I meant the delay in ROS 2 would be less than in ROS 1 because of the distributed discovery mechanism. But what is approximately the delay time for signals exchanged in ROS?
There is no answer to this I believe. On what networks? Using what computer platforms? How much memory? Under what load? How many nodes? How many connections / which connection topology? Etc.
Could I ask you to update your question with some more careful wording?
Up to what frequencies can ROS handle signals? 1-100 Hz, or in the range of kHz or MHz?
This is too broad a question. It depends on what type of data you're passing (a point cloud or an integer?) and how capable your hardware is. ...
Also, in applications requiring high update rates (more than 1 kHz), there is often a need for very low jitter. (Ideally, for a 1 kHz update rate, you want to send a message every 1 ms.) Since ROS runs on a fully fledged, non-real-time OS, the jitter can be quite high.
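You can get a feel for OS-induced jitter without ROS at all: try to wake up every 1 ms and record how far each wake-up lands from its ideal deadline. A minimal sketch (the period and sample count are arbitrary; on a non-real-time OS the worst case is typically much larger than the mean):

```python
import statistics
import time

PERIOD = 0.001                      # 1 kHz target rate
N = 200                             # keep the demo short

errors = []
next_deadline = time.perf_counter() + PERIOD
for _ in range(N):
    # Sleep until the next deadline (or not at all if we are already late)
    time.sleep(max(0.0, next_deadline - time.perf_counter()))
    now = time.perf_counter()
    errors.append(abs(now - next_deadline))   # jitter sample in seconds
    next_deadline += PERIOD

print(f"mean jitter: {statistics.mean(errors) * 1e6:.0f} us, "
      f"max jitter: {max(errors) * 1e6:.0f} us")
```

The same kind of measurement applied to actual message callbacks is what you would use to decide whether ROS on your platform meets a 1 ms budget.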
And I believe that the best way to assess the real-time performance of your system is to first implement it and then measure.
The distributed discovery used in ROS 2 can actually lead to longer start-up delays than the start-up delay caused by the use of the master in ROS 1. You use the word "instantly" a lot, but there is no "instantly" in software. You need to specify your timing requirements properly for us to help.