Cartesian controller for ROS

asked 2013-08-26 03:10:57 -0600 by Flavian

updated 2013-11-14 11:12:58 -0600 by tfoote

Hello everyone,

I want to use a robot from the ROS-Industrial package (the Motoman arm). I have an Orocos module that generates Cartesian commands, but I am still confused about the whole chain inside ROS Groovy.

I have a URDF for the Motoman (available in the package). I'd like to be able to visualize the robot in a simulator and perform inverse kinematics on the Cartesian values (which are Eigen types). I would also like to send the resulting joint positions to the robot.

I'm looking for documentation pages, package names, and information about the message types being exchanged. Ideally I'd have almost no code to write at all, which seems like a reasonable wish since we have an IK solver and a simulator through MoveIt; I just don't know how to plug it all together.

Many thanks


2 Answers


answered 2013-08-26 06:23:12 -0600

In ROS, the task of converting Cartesian goal positions into a stream of joint positions is traditionally described as the "planning" step. This task typically involves not only IK, but also collision detection and joint-space search algorithms to identify a valid trajectory between the "current" joint state and the specified goal state. Additionally, this planning step may include extra constraints (e.g. maintain a desired end-effector orientation) and various trajectory-smoothing filters.

As a first step, you'll need to decide how much of this planning capability you want/need to use. I will outline three potential approaches below. I recommend #2, though I have not used that method myself. In the past I have used #1, but that approach requires you to manually provide some of the functionality that MoveIt handles automatically in method #2.

1) Direct IK Solution

In this approach, you run IK to calculate a joint position for each Cartesian point. These joint points are combined into a single trajectory and sent to the robot. The trajectory may require additional code support to be robust: velocity/accel smoothing, arm-configuration continuity, etc.

  • create a FollowJointTrajectoryActionGoal message to hold the joint trajectory
  • create a MoveIt JointStateGroup to use for calculating IK (as shown here)
  • for each cartesian point:
    • call joint_state_group->setFromIK() to calc IK
      • NOTE: this can take an Eigen::Affine3d input
      • NOTE: may need to call joint_state_group->setVariableValues() to set initial joint state
    • call joint_state_group->getVariableValues() to get computed joint position
    • create a JointTrajectoryPoint from your joint position
      • NOTE: you'll need to compute the appropriate velocity/accel values yourself
    • add the JointTrajectoryPoint to FollowJointTrajectoryActionGoal.goal.trajectory.points
  • send the completed joint trajectory (FollowJointTrajectoryActionGoal) to the appropriate action. For ROS-I nodes, this is typically "joint_trajectory_action".
  • When you start the motoman ROS-I nodes, this typically brings up both the interface to the actual robot and a joint_trajectory_action node that manages sending the FollowJointTrajectoryActionGoal to the robot interface nodes.
    • check to make sure that action node is running: rosnode info joint_trajectory_action
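For reference, the loop above might be sketched roughly as follows under the Groovy-era MoveIt API. This is an untested sketch, not a drop-in implementation: the planning-group name "manipulator" and the IK attempt/timeout values are assumptions that depend on your setup, and timestamp/velocity generation is omitted as noted above.

```cpp
#include <moveit/robot_model_loader/robot_model_loader.h>
#include <moveit/robot_state/robot_state.h>
#include <control_msgs/FollowJointTrajectoryAction.h>
#include <Eigen/Geometry>
#include <vector>

// cartesian_points: the poses produced by your Orocos module
void buildTrajectory(const std::vector<Eigen::Affine3d>& cartesian_points,
                     control_msgs::FollowJointTrajectoryGoal& goal)
{
  // load the model published on the parameter server
  robot_model_loader::RobotModelLoader loader("robot_description");
  robot_state::RobotState state(loader.getModel());
  robot_state::JointStateGroup* group = state.getJointStateGroup("manipulator");

  goal.trajectory.joint_names = group->getJointModelGroup()->getJointModelNames();

  for (size_t i = 0; i < cartesian_points.size(); ++i)
  {
    // 5 random-restart attempts, 0.1 s timeout each -- tune for your arm;
    // setFromIK() seeds from the group's current joint values, so call
    // setVariableValues() first if you need a specific seed
    if (!group->setFromIK(cartesian_points[i], 5, 0.1))
      continue;  // no IK solution found for this pose

    trajectory_msgs::JointTrajectoryPoint pt;
    group->getVariableValues(pt.positions);
    // velocities, accelerations and time_from_start are left to you
    goal.trajectory.points.push_back(pt);
  }
}
```

The resulting goal would then be sent through an actionlib client pointed at the joint_trajectory_action server mentioned above.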

2) MoveIt ComputeCartesianPath

In this approach, you pass the full sequence of Cartesian points to MoveIt. MoveIt computes the full joint trajectory between points, while also providing functionality beyond the basic approach described above: it inserts additional points to maintain linear motion, automatically checks for collisions and arm-configuration "flips", and generates the velocities and accelerations needed for smooth motion.

  • create a MoveIt MoveGroupInterface object, as shown here
  • call move_group->computeCartesianPath() with your sequence of Cartesian points
  • send your trajectory to MoveIt
    • the easiest way I've found is to use the move_group node's ExecuteKnownTrajectory service, which is available on my setup at "/execute_kinematic_path".
    • alternatively, you could probably send it to the TrajectoryExecutionManager object directly, as shown here, but that seems messy
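The steps above might look roughly like this with the Groovy-era MoveIt interface. Again an untested sketch: the group name "manipulator", the step sizes, and the service name are assumptions that depend on your setup (check which name the service is advertised under on your system).

```cpp
#include <moveit/move_group_interface/move_group.h>
#include <moveit_msgs/ExecuteKnownTrajectory.h>
#include <geometry_msgs/Pose.h>
#include <ros/ros.h>
#include <vector>

// Plan a Cartesian path through the given waypoints and execute it.
// Returns the fraction of the path that could be computed (1.0 = complete).
double executeCartesianPath(const std::vector<geometry_msgs::Pose>& waypoints)
{
  move_group_interface::MoveGroup group("manipulator");  // your planning group

  moveit_msgs::RobotTrajectory trajectory;
  // eef_step = 1 cm between interpolated points; jump_threshold = 0 disables
  // the configuration-jump check (consider enabling it on a real robot)
  double fraction = group.computeCartesianPath(waypoints, 0.01, 0.0, trajectory);

  if (fraction > 0.999)  // only execute if the full path was achievable
  {
    ros::NodeHandle nh;
    ros::ServiceClient client =
        nh.serviceClient<moveit_msgs::ExecuteKnownTrajectory>("/execute_kinematic_path");
    moveit_msgs::ExecuteKnownTrajectory srv;
    srv.request.trajectory = trajectory;
    srv.request.wait_for_execution = true;
    client.call(srv);
  }
  return fraction;
}
```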

3) MoveIt Point-to-Point

This approach is probably the "easiest", but is unlikely to give you the motion you are looking for. You pass individual ...




First of all, your answer rocks! Secondly, one question: is the streaming asynchronous or synchronous? (Do you have to send whole chunks of trajectories, or can you feed new positions continuously? My application requires the latter.)

Flavian  ( 2013-08-26 07:35:17 -0600 )

Thanks! Much of ROS's internal design supports the idea of merging together successive trajectories using the "time_from_start" field in JointTrajectoryPoint. I'm not sure whether this is carried all the way through the robot drivers, though. You may need to do some "hacking" to get it working.
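[For illustration, a hypothetical sketch of how a point appended from a later chunk would carry a time_from_start relative to the start of the whole trajectory; the variable names are placeholders:]

```cpp
trajectory_msgs::JointTrajectoryPoint pt;
pt.positions = next_joint_position;     // output of your IK step
pt.time_from_start = ros::Duration(t);  // t keeps increasing across successive chunks
traj.points.push_back(pt);
```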

Jeremy Zoss  ( 2013-08-26 07:38:40 -0600 )

Flavian, do you only need IK, or do you also need collision checking, planning and/or filtering? At which frequency do you expect to send task-space commands? The appropriate solution will largely depend on the answers to those questions.

Adolfo Rodriguez T  ( 2013-08-26 21:44:44 -0600 )

I need this for a teleoperation task in a possibly complex environment, so I would need IK, collision checking, and possibly filtering, but not planning (which is why I don't find Jeremy's answer completely satisfying, although it is quite useful).

Flavian  ( 2013-08-26 21:58:40 -0600 )

What about the expected command frequency? An order of magnitude should be enough (tens, hundreds, thousands of Hz?).

Adolfo Rodriguez T  ( 2013-08-26 22:05:27 -0600 )

Sorry, I forgot that point. Ideally 1 kHz with very low jitter, but that's impossible in ROS, so let's say just 1 kHz.

Flavian  ( 2013-08-26 22:06:34 -0600 )

Some of your links are dead; it would be nice to update them to hydro links :)

VictorLamoine  ( 2014-12-10 08:07:36 -0600 )

Also note that with the second method, you should wait for the last transform before launching a new trajectory:

tf::TransformListener listener(*node_ptr);
listener.waitForTransform("/base", "/tool0", ros::Time::now(), ros::Duration(0.5));

VictorLamoine  ( 2014-12-11 08:45:10 -0600 )

answered 2013-08-26 23:04:08 -0600 by Adolfo Rodriguez T

updated 2013-08-26 23:42:32 -0600

Feedback given in the comments section of another answer states the following requirements:

  • Task-space commands updated at 1 kHz.
  • Collision checking.
  • Some form of filtering (maybe enforcing joint limits, i.e. position and some of its derivatives?).
  • No motion planning.

If the 1 kHz update rate is a hard requirement, it will dramatically limit the list of possible solutions. In this case, you'll necessarily need to implement a realtime controller in some form or another. Existing tools for doing this are Orocos and ros_control, but no out-of-the-box solution exists yet. We have two interns working on precisely this problem, and their work will eventually be made public, but I wouldn't expect working deliverables before the end of the year.

The approach we're following is:

  • Use the Stack of Tasks for multi-task prioritized IK. Tasks can be defined as equality and inequality linear constraints in the velocity domain. Inequality constraints are important, as they provide the framework for enforcing constraints like joint limits, collision avoidance and visibility.

  • Wrap the Stack of Tasks inside a ros_control controller. We run ros_control inside an Orocos component when working with real hardware.

  • Implement a (local) collision avoidance task leveraging simple collision models (e.g. capsule decomposition) and fast collision checkers that implement minimum-distance computation, like FCL.

The first two points are working well in simulation. We're in the process of testing in hardware with a RTOS, and are also starting with the collision avoidance task, which should take some time to complete. Let me know if you'd like to be an early tester, or stay updated on progress.

Update: I've started a thread on the robot control SIG on whole-body control for ros_control. Please join the SIG and the discussion if you're interested in following the subject.

edit flag offensive delete link more


This seems really interesting. I didn't know that ros_control could be run inside an Orocos component; how do you do that? Do you use a visualization tool? I would indeed be interested in testing your solution as soon as possible.

Flavian  ( 2013-08-26 23:13:49 -0600 )

I don't have public code to share at the moment, but the hooks of an Orocos component state machine (configure, start, update, stop, cleanup) provide a convenient structure for setting up a ros_control-based robot hardware abstraction. What kind of visualization are you referring to?

Adolfo Rodriguez T  ( 2013-08-26 23:46:53 -0600 )

Dear Flavian, I am doing a similar task. Can you please give me some hints on how you did it?

ZainMehdi  ( 2016-11-22 16:56:43 -0600 )


Asked: 2013-08-26 03:10:57 -0600

Seen: 8,349 times

Last updated: Aug 26 '13