
Yes, this part is rather confusing and underdocumented, and it took me quite some time to figure it out when I learned ROS myself. I know that any problem can be solved by yet another layer of abstraction, but I wondered how they made the joints move when I couldn't find a single piece of hardware-related code at first! ;-) When I finally got it, I must say it's a rather beautiful design, so I hope I can help others understand it more quickly than I did.

Let's take the example of sending a joint trajectory to a robot arm (although the following should generalize well to any kind of PR2 controller). This is what is going on, from top to bottom:

  1. Some node (the client) calculates the trajectory to be executed, for example using motion planning. This code is completely independent of the actual arm being used (although the calculated trajectory will of course be specific to that arm). It uses an actionlib action client to connect to the action server, send the trajectory, and receive feedback during execution as well as a result indicating whether execution was successful. Example: move_arm.

  2. Another node (the action) has to provide the joint trajectory action server to which the client code connects. This is the ROS interface to any robot arm, so if you write your own driver, this action server is the only thing you need to provide in order for the rest of the ROS ecosystem to work with your code. How you go about making the arm actually follow the trajectory is your decision. Example: joint_trajectory_action.

  3. On the PR2, the action server then forwards the trajectory to the controller. It doesn't use actionlib to do so; instead, it sends the trajectory on a normal topic called "command" and receives status updates from the controller on another normal topic called "state". A controller is written as a dynamically loadable and unloadable plugin for the pr2_controller_manager, which runs in a realtime context. The controller samples the joint trajectory at each time step and translates it into efforts for the joints (using a PID controller). Example: joint_spline_trajectory_controller.cpp.

  4. The controller doesn't send the computed efforts directly to the hardware. Instead, it calls the function propagateEffort() from pr2_mechanism_model. Now, depending on whether we're running in simulation or on a real robot, pr2_mechanism_model will decide whether to send these efforts to Gazebo or to the real robot via EtherCAT. This allows the same controller code to run in simulation and on the real robot. See the following image (from here):

[Image: pr2_mechanism_model architecture diagram]
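To make step 3 more concrete, here is a minimal sketch (plain Python, no ROS) of what a trajectory controller's realtime update loop conceptually does each cycle: sample the current spline segment of the trajectory, then turn the position/velocity error into an effort via PID. The names `sample_cubic` and `Pid` and all gains are made up for illustration; the real code lives in joint_spline_trajectory_controller.cpp and uses quintic splines and the pr2 PID class.

```python
def sample_cubic(q0, q1, v0, v1, duration, t):
    """Sample position and velocity of a cubic (Hermite) segment
    from (q0, v0) at t=0 to (q1, v1) at t=duration."""
    s = t / duration
    # Hermite basis polynomials
    h00 = 2*s**3 - 3*s**2 + 1
    h10 = s**3 - 2*s**2 + s
    h01 = -2*s**3 + 3*s**2
    h11 = s**3 - s**2
    pos = h00*q0 + h10*duration*v0 + h01*q1 + h11*duration*v1
    # Derivative of the above with respect to t
    vel = ((6*s**2 - 6*s)*q0 + (3*s**2 - 4*s + 1)*duration*v0
           + (-6*s**2 + 6*s)*q1 + (3*s**2 - 2*s)*duration*v1) / duration
    return pos, vel

class Pid:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0

    def update(self, pos_err, vel_err, dt):
        self.integral += pos_err * dt
        return self.kp*pos_err + self.ki*self.integral + self.kd*vel_err

# One controller cycle: where the joint should be vs. where it actually is.
pid = Pid(kp=100.0, ki=0.0, kd=10.0)
desired_pos, desired_vel = sample_cubic(0.0, 1.0, 0.0, 0.0, duration=2.0, t=1.0)
actual_pos, actual_vel = 0.45, 0.7
effort = pid.update(desired_pos - actual_pos, desired_vel - actual_vel, dt=0.001)
# This effort is what the controller would then hand to propagateEffort().
```

The important point is that this sampling + PID step runs once per realtime cycle (~1000 Hz), and the only thing leaving the controller is an effort value.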

@tbh wrote:

Another thing I'm confused about is whether the controller is separate from the action server or not.

It's usually separate (as I wrote above), although since there's usually a one-to-one mapping from an action to its corresponding controller, they could live in the same piece of code. If I'm not mistaken, the decision to split them was made because actionlib is not particularly realtime-friendly, so the Willow Garage people wanted to keep that part out of the controller. In some cases the action and the controller have later been merged, e.g. joint_trajectory_action_controller.cpp (beware: due to the realtime complications, it's not the most readable piece of code).

Now, if you want to implement a driver for a new robot arm, should you use the PR2's controller manager architecture or not? Well, it depends. If you want to implement your PID controllers in software, and your actuators have a hardware interface that lets you send efforts fast enough (at around 1000 Hz), then I'd say yes. If your arm already contains its own hardware controller boards, and you can send the joint trajectory to them directly, I'd say no: just take joint_trajectory_action as a reference for the interface to implement, and skip the PR2-specific controller manager architecture.
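For the second case, here is a rough sketch (plain Python, no ROS) of the shape such a driver takes: accept a joint-trajectory goal, hand it off to the arm's own controller boards, and report the result. `FakeControllerBoards` and `send_to_boards()` are invented stand-ins for your vendor API; in a real driver, `execute_goal` would be the execute callback of an actionlib action server exposing the same interface as joint_trajectory_action.

```python
class FakeControllerBoards:
    """Stand-in for hardware boards that execute trajectories on their own."""
    def __init__(self):
        self.executed = []

    def send_to_boards(self, trajectory):
        self.executed.append(trajectory)
        return True  # hardware reports completion

class JointTrajectoryDriver:
    def __init__(self, boards, joint_names):
        self.boards = boards
        self.joint_names = joint_names

    def execute_goal(self, trajectory):
        """The 'action server' callback: validate, forward, report."""
        for point in trajectory["points"]:
            if len(point["positions"]) != len(self.joint_names):
                return "aborted"  # goal inconsistent with this arm
        ok = self.boards.send_to_boards(trajectory)
        return "succeeded" if ok else "aborted"

boards = FakeControllerBoards()
driver = JointTrajectoryDriver(boards, ["shoulder", "elbow", "wrist"])
goal = {"points": [{"positions": [0.0, 0.5, 1.0], "time_from_start": 2.0}]}
result = driver.execute_goal(goal)
```

Note that the closed-loop control has disappeared entirely from the driver: the boards do the PID work, and the driver only translates between the ROS action interface and the vendor protocol. That's exactly why the full controller manager architecture buys you little in this case.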