In ROS, the task of converting Cartesian goal positions into a stream of joint positions is traditionally described as the "planning" step. This task typically involves not only IK, but also collision detection and joint-space search algorithms to identify a valid trajectory between the "current" joint state and the specified goal state. Additionally, this planning step may include additional constraints (e.g. maintain desired end-effector orientation) and various trajectory-smoothing filters.
As a first step, you'll need to decide how much of this planning capability you want/need to use. I will outline 3 potential approaches below. I recommend #2, though I have never used that method myself. In the past, I have used #1, but that approach requires you to manually provide some of the functionality that MoveIt handles automatically in method #2.
Approach #1: Manual IK, manual trajectory

In this approach, you run IK to calculate a joint position for each cartesian point. These joint points are combined into a single trajectory and sent to the robot. The trajectory may require additional code support to be robust: velocity/accel smoothing, arm-configuration continuity, etc.

- Call joint_state_group->setFromIK() to calc IK for each cartesian point.
  - Use joint_state_group->setVariableValues() to set the initial (seed) joint state.
  - Use joint_state_group->getVariableValues() to get the computed joint position.
- Add each solution as a JointTrajectoryPoint to FollowJointTrajectoryActionGoal.goal.trajectory.points.
- Send the completed trajectory (FollowJointTrajectoryActionGoal) to the appropriate action. For ROS-I nodes, this is typically "joint_trajectory_action".
  - ROS-I provides a joint_trajectory_action node that manages sending the FollowJointTrajectoryActionGoal to the robot interface nodes.
  - You can run "rosnode info joint_trajectory_action" to inspect its connections.
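As a rough, untested sketch of the loop above: note that newer MoveIt releases folded the JointStateGroup methods into RobotState, so this uses that API instead. The group name "manipulator", the action name, and the naive fixed 0.5 s point spacing are all assumptions you'd adjust for your robot.

```cpp
// Sketch only: per-point IK, building a FollowJointTrajectory goal by hand.
// Assumes ros::init() has already been called.
#include <stdexcept>
#include <vector>
#include <ros/ros.h>
#include <geometry_msgs/Pose.h>
#include <actionlib/client/simple_action_client.h>
#include <control_msgs/FollowJointTrajectoryAction.h>
#include <moveit/robot_model_loader/robot_model_loader.h>
#include <moveit/robot_state/robot_state.h>

void streamPoints(const std::vector<geometry_msgs::Pose>& cartesian_points)
{
  robot_model_loader::RobotModelLoader loader("robot_description");
  robot_state::RobotState state(loader.getModel());
  state.setToDefaultValues();
  const robot_state::JointModelGroup* group =
      state.getJointModelGroup("manipulator");  // assumed group name

  control_msgs::FollowJointTrajectoryGoal goal;
  goal.trajectory.joint_names = group->getVariableNames();

  for (size_t i = 0; i < cartesian_points.size(); ++i)
  {
    // The state carries over between iterations, so each IK solve is
    // seeded with the previous solution; this helps (but does not
    // guarantee) arm-configuration continuity.
    if (!state.setFromIK(group, cartesian_points[i]))
      throw std::runtime_error("IK failed for a waypoint");

    trajectory_msgs::JointTrajectoryPoint pt;
    state.copyJointGroupPositions(group, pt.positions);
    pt.time_from_start = ros::Duration(0.5 * (i + 1));  // naive timing
    goal.trajectory.points.push_back(pt);
  }

  // "joint_trajectory_action" is the typical ROS-I action name.
  actionlib::SimpleActionClient<control_msgs::FollowJointTrajectoryAction>
      client("joint_trajectory_action", true);
  client.waitForServer();
  client.sendGoalAndWait(goal);
}
```

Velocity/acceleration smoothing is deliberately omitted here; with the naive timing above, the robot controller may reject or jerk through the trajectory, which is exactly the extra work this approach demands.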
Approach #2: MoveIt's computeCartesianPath

In this approach, you pass the full sequence of cartesian points to MoveIt. MoveIt computes the full joint trajectory between points, while also providing functionality beyond the basic approach described above: it inserts additional points to maintain linear motion, automatically checks for collisions and arm-configuration "flips", and generates the velocities and accelerations needed for smooth motion.

- Call move_group->computeCartesianPath() with your sequence of cartesian points.
  - This method expects a std::vector<geometry_msgs::Pose>, so you'll need to convert your cartesian positions using tf::poseEigenToMsg().
- Execute the resulting trajectory using the move_group node's ExecuteKnownTrajectory service, which is available on my setup at "/execute_kinematic_path".
  - Alternatively, you could use the TrajectoryExecutionManager object directly, as shown here, but that seems messy.

Approach #3: Point-by-point planning

This approach is probably the "easiest", but is unlikely to give you the motion you are looking for. You pass individual cartesian points to the MoveIt planning/execution pipeline, and MoveIt will handle everything from there. However, the robot will stop at each cartesian waypoint, and the motion between waypoints is not guaranteed to be linear.
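For reference, approach #2 might look roughly like the following (untested sketch). The class is MoveGroupInterface in recent MoveIt releases (older releases call it MoveGroup), the group name "manipulator" is an assumption, and the eef_step/jump_threshold values are illustrative only:

```cpp
// Sketch only: let MoveIt interpolate and execute a cartesian path.
// Assumes ros::init() and a spinner are already running.
#include <vector>
#include <ros/ros.h>
#include <geometry_msgs/Pose.h>
#include <moveit/move_group_interface/move_group_interface.h>

void runCartesianPath(const std::vector<geometry_msgs::Pose>& waypoints)
{
  moveit::planning_interface::MoveGroupInterface group("manipulator");

  moveit_msgs::RobotTrajectory trajectory;
  const double eef_step = 0.01;       // max spacing of interpolated points (m)
  const double jump_threshold = 0.0;  // 0 disables joint-space jump checking
  double fraction =
      group.computeCartesianPath(waypoints, eef_step, jump_threshold, trajectory);

  if (fraction < 0.99)
    ROS_WARN("Only %.0f%% of the path could be planned", fraction * 100.0);

  // Executing via the interface avoids calling the
  // ExecuteKnownTrajectory service by hand.
  moveit::planning_interface::MoveGroupInterface::Plan plan;
  plan.trajectory_ = trajectory;
  group.execute(plan);
}
```

Note that computeCartesianPath returns the fraction of the requested path it managed to plan, so checking that value is how you detect a waypoint that was unreachable or blocked.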
- Create a move_group_interface object, as above.
- Call group.setPoseTarget() with your cartesian point.
- Call group.move() to plan and execute the motion.
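A minimal, untested sketch of approach #3, again assuming a planning group named "manipulator":

```cpp
// Sketch only: plan and execute to each cartesian point individually.
// Assumes ros::init() and a spinner are already running.
#include <vector>
#include <geometry_msgs/Pose.h>
#include <moveit/move_group_interface/move_group_interface.h>

void moveThroughPoints(const std::vector<geometry_msgs::Pose>& points)
{
  moveit::planning_interface::MoveGroupInterface group("manipulator");
  for (const geometry_msgs::Pose& target : points)
  {
    group.setPoseTarget(target);
    group.move();  // blocks until done; the robot stops at each waypoint
  }
}
```

Because each move() call is an independent free-space plan, the path between waypoints follows whatever the planner finds, not a straight line, which is the limitation described above.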