In my opinion, the cleanest way would probably be:
- write a Linux kernel driver for monitoring the IRQ and keeping track of the encoder count. It could also do quadrature decoding for you if your encoder requires that
- have the driver expose a (char) device that can be interacted with through appropriate IOCTLs
- write a C++/Python/whatever library that interacts with the (char) device and provides things like `init()`, `reset()`, `read_count()`, etc
- write a ROS node that uses that library to periodically read out the encoder count and publish it as a message
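To illustrate the decoding step mentioned in the first bullet: quadrature decoding boils down to a small transition table over the two channel levels (A, B). A minimal Python sketch of the logic (the kernel driver would implement the same table in C; sampling the channels from hardware is outside the scope of this sketch):

```python
# Minimal x4 quadrature decoder sketch: each valid transition of the
# (A, B) pair moves the count by +1 or -1; invalid transitions (e.g. a
# missed edge where both channels changed) are ignored here.
_TRANSITIONS = {
    ((0, 0), (0, 1)): +1, ((0, 1), (1, 1)): +1,
    ((1, 1), (1, 0)): +1, ((1, 0), (0, 0)): +1,
    ((0, 0), (1, 0)): -1, ((1, 0), (1, 1)): -1,
    ((1, 1), (0, 1)): -1, ((0, 1), (0, 0)): -1,
}

class QuadratureDecoder:
    def __init__(self, initial_state=(0, 0)):
        self.state = initial_state
        self.count = 0

    def update(self, a, b):
        """Feed the current levels of channels A and B (0 or 1)."""
        new_state = (a, b)
        self.count += _TRANSITIONS.get((self.state, new_state), 0)
        self.state = new_state
        return self.count
```

Feeding the forward edge sequence (0,1), (1,1), (1,0), (0,0) from the initial state advances the count by 4; the reversed sequence decrements it by 4.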
Depending on your architecture, the ROS node could either publish raw encoder counts, or already attach some semantics to them by converting to an angle φ (which would require knowledge of the attached encoder, something that might not be appropriate at that level of abstraction).
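If the node does attach semantics, the conversion itself is straightforward. A sketch, where the counts-per-revolution value is a made-up example that would really come from the encoder's datasheet (for a quadrature encoder decoded in x4 mode it is 4 × lines per revolution):

```python
import math

def count_to_angle(count, counts_per_rev=2048):
    """Convert a raw encoder count to an angle in radians.

    counts_per_rev is encoder-specific; 2048 is a hypothetical value
    used here purely for illustration.
    """
    return (count % counts_per_rev) * 2.0 * math.pi / counts_per_rev
```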
Note that periodically does not necessarily mean that you'd use `ros::Rate` instances to regulate the polling cycle. The Linux Device Drivers book has a section on Blocking I/O that explains how a user space process may wait for an event in kernel space.
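That pattern looks roughly like this from the user space side. Everything here is an assumption about the driver you'd write: the device node `/dev/encoder0` and the 4-byte little-endian record format are hypothetical, and the sketch assumes the driver's `read()` blocks until a new count is available:

```python
import os
import struct

# Hypothetical device node exposed by the (yet to be written) driver.
DEVICE = "/dev/encoder0"

def wait_for_count(fd):
    """Block in the kernel until the driver wakes us with a new count.

    No busy polling and no ros::Rate: the process sleeps in read()
    until the IRQ handler signals new data. Assumes the driver returns
    one 4-byte little-endian signed integer per read.
    """
    data = os.read(fd, 4)
    return struct.unpack("<i", data)[0]
```

The ROS node's main loop would then simply be `publish(wait_for_count(fd))`, with the publish rate dictated by the hardware events rather than a timer.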
An alternative to a kernel driver could be the Linux user space GPIO interfaces (through sysfs or something similar) or whatever your particular board supports. Most user space interfaces incur some kind of overhead/latency that a kernel driver would not have, though. The article you linked shows one possible approach using sysfs and glib2, and it also shows how to deal with interrupts; this could be sufficient, but that is something only you can decide based on your requirements. Integrating a glib event processor with ROS has been done before and should not pose too many problems.
> Since environment variables can only hold strings [..]

Could you explain why you think you'd need to use environment variables for any of this?