tl;dr: yes, there is one 'trick', which would be using nodelets. That is however a compile/development time decision, so if your driver doesn't support it, it will either be unavailable to you, or you'll have to change the nodes you want to use.
Additionally, nodelets will introduce other restrictions, such as loss of location transparency (need to run on the same machine), introduction of single point of failure (single process hosting multiple nodelets) and introduction of synchronisation coupling.
Finally: not sending msgs at all is of course also one way to greatly reduce latency, but that has other implications.
longer: this is not specific to ROS, but is something that will always be the case when you start using networked middlewares: communication overhead. (There are other factors, such as process scheduling, but I have a hunch that those will not contribute significantly to the latency you've observed. I'll also assume that the driver you're using interfaces with the hardware in the proper way, and is not doing 'strange' things internally, such as introducing artificial delays.)
Unfortunately this is a consequence of the desire to decouple nodes (ie: programs). In order to guarantee that a receiving node has a correct and coherent representation of the in-memory image of a message sent by a transmitting node, the transmission process will need to use an intermediate representation that leaves no room for ambiguity. This in turn will require a decomposition and recomposition step in the communication (ie: serialisation and deserialisation).
Especially for larger messages, the time this step takes dominates the total time it takes to exchange a message, provided both sides are not so far apart (geographically as well as in terms of networking) that the actual transmission of data also takes significant time (from Amsterdam to Tokyo takes around 250 msec, for instance).
> Are there any techniques or ways to lower the camera latency in ROS?
So the basic (or honest) answer would be: no (provided you'd want to keep total decoupling between participants intact, both in time as well as in space).
However, there are some tricks that you can use. These will however lead to a (partial) loss of some of the advantages of using a networked middleware, but that is a trade-off you can make yourself.
The most commonly used approach is to use a piece of shared memory (or: share an address space) between transmitter and receiver: this allows skipping (de)serialisation entirely, effectively making a message exchange equal to an exchange of pointers. Advantage: in cases where (de)serialisation dominates communication overhead, you can skip it completely. Disadvantages: both receiver and transmitter need to be running on the same system, so loss of location transparency is one; depending on implementation, others could be loss of synchronisation decoupling and introduction of a single point of failure.
Sharing an address space is supported in ROS, but you'll need to use nodelets, not nodes ... (more)
Where do you get 'current time on the stream'? Is that time `now()` when the message arrives in a callback in a subscribing node?

Yes, I take the time in the callback func.
It looks like you are using a usb camera with opencv VideoCapture; you could try replacing that with a variant of `usb_cam` with nodelet support: https://github.com/ros-drivers/usb_ca... . https://github.com/ros-drivers/usb_ca... may be of interest also; the v4l timestamp may be close enough to the stopwatch time that you won't need it to characterize latency.