How do ROS2 developers work with instrumentation-heavy projects?
How do experienced developers handle projects where a single robot is actuator-minimal but instrumentation-heavy, using ROS2? I have begun building some "precursors" for a robotic submarine I have been working on for some time, and in the process I have discovered just how egregiously I underestimated the work required simply to get data in and out of the computer. For now, I am using an Arduino MKR 1010 WiFi I had lying around and a Raspberry Pi 4 8GB model, with ROS2 Humble on Ubuntu 22.04.
My precursor is a float of sorts, so I can get the hang of data logging and remote monitoring. The sensor suite is minimal: a USB camera, a USB GPS, and an I2C IMU. But I had never worked with ROS before, so you can guess before reading any further that I ran into some serious problems.
First, I discovered that neither my Fedora 36 personal workstation nor my Pi OS-based Pi would install ROS2 Humble by any means I attempted. So I installed Ubuntu on the Pi and accepted the sluggish user interface (Ubuntu 22.04 apparently lacks hardware acceleration for the board). With that, I got the camera publishing frames (sketched below) and moved on to the IMU. The plan was to use ROSSerial to receive IMU data from the Arduino. Experienced developers will already see why that failed: ROSSerial targets ROS1 and is not compatible with ROS2.
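For context, the camera node boiled down to something like the following sketch, using OpenCV and cv_bridge (the device index, frame rate, and topic name are placeholders; ready-made packages like v4l2_camera do the same job):

```python
import cv2
import rclpy
from cv_bridge import CvBridge
from rclpy.node import Node
from sensor_msgs.msg import Image

class CameraPublisher(Node):
    def __init__(self):
        super().__init__('camera_publisher')
        self.pub = self.create_publisher(Image, 'image_raw', 10)
        self.cap = cv2.VideoCapture(0)             # placeholder: first V4L2 device
        self.bridge = CvBridge()
        self.create_timer(1.0 / 30.0, self.tick)   # placeholder: ~30 fps

    def tick(self):
        ok, frame = self.cap.read()
        if not ok:
            return  # skip the cycle if the camera misses a frame
        msg = self.bridge.cv2_to_imgmsg(frame, encoding='bgr8')
        msg.header.stamp = self.get_clock().now().to_msg()
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(CameraPublisher())

if __name__ == '__main__':
    main()
```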
As you can imagine, I have begun reconsidering my approach to the internal electronics. A Pixhawk for I/O, with MAVROS? A Pi Pico running micro-ROS? Controller Area Network (CAN) HATs for the Pi and Arduino? Connecting the IMU via I2C and writing a custom node, as sketched below?
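The custom-node option amounts to roughly this: read the IMU over the Pi's I2C bus with smbus2 and publish sensor_msgs/Imu. The address, register, and scale constants below are placeholders for an MPU-6050-style part; check your IMU's datasheet:

```python
import struct
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu
from smbus2 import SMBus

IMU_ADDR = 0x68               # placeholder I2C address
ACCEL_REG = 0x3B              # placeholder: first of six accel bytes, big-endian
ACCEL_SCALE = 9.81 / 16384.0  # placeholder: LSB -> m/s^2 at +/-2 g

class I2cImuNode(Node):
    def __init__(self):
        super().__init__('i2c_imu')
        self.bus = SMBus(1)   # /dev/i2c-1, the Pi's default I2C bus
        self.pub = self.create_publisher(Imu, 'imu/data_raw', 10)
        self.create_timer(0.01, self.tick)  # 100 Hz

    def tick(self):
        raw = self.bus.read_i2c_block_data(IMU_ADDR, ACCEL_REG, 6)
        ax, ay, az = struct.unpack('>hhh', bytes(raw))
        msg = Imu()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'imu_link'
        msg.linear_acceleration.x = ax * ACCEL_SCALE
        msg.linear_acceleration.y = ay * ACCEL_SCALE
        msg.linear_acceleration.z = az * ACCEL_SCALE
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(I2cImuNode())

if __name__ == '__main__':
    main()
```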
My concern is scalability. A big part of this project is expanding my skills in the area, but I have other, bigger plans for this AUV, so I want to do this right. That is why I am not just writing a monolithic Python program with one sensor wired to the Pi's I2C pins: I would probably need to rebuild from the ground up eventually. Down the line, the vehicle could become very instrumentation-heavy. My most conservative estimate involves 4x cameras, 6x thrusters, 16x analog sensors, 3x microphones, 3x UART-based sonar, 4x IMUs, and GPS. From what I can gather of most bots, drones, and the ROS tutorials that cover them, this is a fairly radical amount of I/O.
My original thought was, eventually, to run a Jetson (or 4x Pi boards) with a series of USB-connected Arduino boards handling the I/O. I had also considered a CAN bus, borrowing peripherals normally used for drones; a string of USB-to-whatever adapters on a USB 3 hub; or even Pixhawk flight controllers with added drivers, sending and receiving the I/O over MAVLink/MAVROS. But, as mentioned, I cannot seem to find anyone working on anything nearly this I/O-dense.
How do experienced developers do this?
For what it's worth, I've got a robot that I was planning to use a PX4-based flight controller board on, but getting that working well with my Jetson Nano has been quite a pain. On another project, I'm using a Teensy 4.1 with micro-ROS to drive 5 steppers, 1 servo, and 6 encoders, and that's been working really well. (The hardest part there is the lack of on-chip debug (OCD) support on the Teensy.)
The nice thing about micro-ROS is that you're using rclc on the microcontroller, and you just need to run the agent node on the host computer (laptop, Pi, etc.). No need to deal with MAVLink, custom de/serialization, etc. It's all just normal ROS2 messages.
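For instance, the host side can be a plain rclpy subscriber; nothing micro-ROS-specific is needed there. This sketch assumes the firmware publishes sensor_msgs/Imu on imu/data_raw (topic and type are placeholders) and that the agent is running over the standard serial transport:

```python
# Start the agent first, e.g.:
#   ros2 run micro_ros_agent micro_ros_agent serial --dev /dev/ttyACM0
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu

class ImuListener(Node):
    def __init__(self):
        super().__init__('imu_listener')
        # An ordinary ROS2 subscription -- the micro-ROS node looks like any other publisher.
        self.create_subscription(Imu, 'imu/data_raw', self.on_imu, 10)

    def on_imu(self, msg: Imu):
        a = msg.linear_acceleration
        self.get_logger().info(f'accel: {a.x:.2f} {a.y:.2f} {a.z:.2f}')

def main():
    rclpy.init()
    rclpy.spin(ImuListener())

if __name__ == '__main__':
    main()
```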
The nice thing about the Teensy 4.1, though, is that it's got a 600 MHz clock; the Pi Pico is only 133 MHz and the Arduino Uno is 16 MHz.