# controllers vs drivers

Hello, I'm having trouble getting an overview of what's going on in actually making something move. In particular, I'm confused by the term "controller." What exactly does that mean, and how is it different from a driver? Is it just for real-time support, perhaps? In the pr2_controller tutorials, the following is written multiple times:

" Commanding a joint trajectory for the arm requires the following components:

• a controller that sends commands directly to the joints;
• an action interface to the controller in the form of a ROS action, that takes in a trajectory command expressed as a series of joint angles and sends the appropriate low-level commands to the controller;
• the high-level program that uses the action interface to issue the desired joint trajectories.

The first two components are available in ROS. In this tutorial we will show you how to use them by writing the third component, the high level program. "

Now I'm not using a PR2, but I feel these aspects of ROS are there for very good reasons, and I'd like to know what they are before I make my robot framework. From those statements, controllers look very much like another way to say drivers. However, I looked at the actual code in the controller modules, and I didn't see any low-level stuff at all. Looking around elsewhere, I see EtherCAT drivers for the PR2, and I'm back to thinking there is a distinct difference. Another thing I'm confused about is whether the controller is separate from the action server or not. It would make sense to me that these would be the same bit of code, but in the above explanation, they seem to be separate.

Any links or help are much appreciated. I think I understand the reasoning for actionlib (preemption, feedback, reuse of client code, etc.), but I just can't find much on the lower-level stuff and the reasoning behind it. Thanks in advance.



Yes, this part is rather confusing and underdocumented, and it took me quite some time to figure it out when I learned ROS myself. I know that any problem can be solved by yet another layer of abstraction, but I wondered how they made the joints move when I couldn't find a single piece of hardware-related code at first! ;-) When I finally got it, I must say it's a rather beautiful design, so I hope I can help others understand quicker than I did.

Let's take the example of sending a joint trajectory to a robot arm (although the following should generalize well to any kind of PR2 controller). This is what is going on, from top to bottom:

1. Some node (the client) calculates the trajectory to be executed, for example using motion planning. This code is completely independent of the actual arm being used (although the calculated trajectory will of course be specific to that arm). It uses an actionlib action client to connect to an action, send the trajectory, and receive feedback when execution terminates, including whether it was successful. Example: move_arm.
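To make the data flow concrete, here is a ROS-free sketch of the kind of goal such a client sends. It mirrors the shape of a joint trajectory message (a list of joint names plus timed waypoints), but all class and function names here are illustrative, not the real ROS message API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrajectoryPoint:
    positions: List[float]   # one joint angle per joint, in radians
    time_from_start: float   # seconds since trajectory start

@dataclass
class JointTrajectory:
    joint_names: List[str]
    points: List[TrajectoryPoint] = field(default_factory=list)

def make_demo_trajectory() -> JointTrajectory:
    """Build a two-waypoint trajectory for a fictional 2-joint arm."""
    traj = JointTrajectory(joint_names=["shoulder", "elbow"])
    traj.points.append(TrajectoryPoint([0.0, 0.0], 0.0))
    traj.points.append(TrajectoryPoint([0.5, -0.3], 2.0))
    return traj
```

The client never computes efforts or talks to hardware; this waypoint list is the entire contract between it and the layers below.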

2. Another node (the action) has to provide the joint trajectory action server to which the client code connects. This is the ROS interface to any robot arm, so if you write your own driver, this action server is the only thing you need to provide in order for the rest of the ROS ecosystem to work with your code. How you go about making the arm actually follow the trajectory is your decision. Example: joint_trajectory_action.
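The action server's role can be sketched without any ROS at all: accept a trajectory goal, forward it to whatever makes the arm move, and report the result. In this hedged sketch, `publish_command` stands in for publishing on the controller's "command" topic; the class name and the simplified success handling are my invention, not the joint_trajectory_action code:

```python
class JointTrajectoryActionServer:
    """Minimal stand-in for a joint trajectory action server."""

    def __init__(self, publish_command):
        # publish_command: callable that delivers the trajectory to
        # whatever executes it (a controller, a vendor driver, ...).
        self.publish_command = publish_command

    def execute(self, trajectory):
        # A real server would also monitor "state" feedback, handle
        # preemption, and enforce timeouts; this sketch just forwards
        # the goal and reports success.
        self.publish_command(trajectory)
        return "succeeded"
```

The point is that only this thin interface is fixed; behind it you are free to drive your own hardware however you like.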

3. On the PR2, the action then passes the trajectory on to the controller. It doesn't use actionlib to do so; instead, it sends the trajectory on a normal topic called "command" and receives status updates from the controller, also on a normal topic, called "state". A controller is written as a dynamically loadable and unloadable plugin for the pr2_controller_manager, which runs in a realtime context. The controller samples the joint trajectory at each time step and translates it into efforts for the joints (using a PID controller). Example: joint_spline_trajectory_controller.cpp.
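What "sampling the trajectory and translating it into efforts" means can be shown in a few lines. This is a deliberately simplified sketch: the interpolation here is linear (the PR2 controller uses splines), the gains are arbitrary, and the function names are illustrative:

```python
def sample_trajectory(points, t):
    """Desired position at time t; points is a list of (time, position)
    waypoints, linearly interpolated (the real controller uses splines)."""
    if t <= points[0][0]:
        return points[0][1]
    for (t0, p0), (t1, p1) in zip(points, points[1:]):
        if t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return p0 + alpha * (p1 - p0)
    return points[-1][1]  # hold the final position after the trajectory ends

class Pid:
    """Textbook PID loop turning position error into a joint effort."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Each realtime cycle the controller would call `sample_trajectory` to get the desired joint position, subtract the measured position, and feed that error through `Pid.update` to get the effort to apply.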

4. The controller doesn't send the computed efforts directly to the hardware. Instead, it calls the function propagateEffort() from pr2_mechanism_model. Now, depending on whether we're running in simulation or on a real robot, pr2_mechanism_model will decide whether to send these efforts to Gazebo or to the real robot via EtherCAT. This allows the same controller code to run in simulation and on the real robot.
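The idea behind this last indirection can be sketched as a simple backend abstraction: the controller hands an effort to an abstract layer, and the concrete backend (simulation or real hardware) is chosen elsewhere. The class and function names below are illustrative, not the pr2_mechanism_model API:

```python
class EffortBackend:
    """Abstract destination for computed joint efforts."""
    def apply(self, joint, effort):
        raise NotImplementedError

class SimBackend(EffortBackend):
    def __init__(self):
        self.log = []
    def apply(self, joint, effort):
        self.log.append(("sim", joint, effort))       # would go to Gazebo

class HardwareBackend(EffortBackend):
    def __init__(self):
        self.log = []
    def apply(self, joint, effort):
        self.log.append(("ethercat", joint, effort))  # would go over EtherCAT

def propagate_effort(backend, joint, effort):
    # The controller only ever calls this; it never knows (or cares)
    # which backend is behind it.
    backend.apply(joint, effort)
```

Swapping `SimBackend` for `HardwareBackend` changes where the efforts end up without touching a line of controller code, which is exactly why the same controllers run in Gazebo and on the real PR2.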

@tbh wrote:

Another thing I'm confused about is whether the controller is separate from the action server or not.

It's usually separate (as I wrote before), although since there's usually a one-to-one mapping from an action to its corresponding controller, they could be in the same piece of code. If I'm not mistaken, the decision to split them was taken because actionlib is not particularly realtime-friendly, so the Willow people wanted to ...
