MoveIt! move_group_python_interface_tutorial.py fails to produce cartesian path
Hi
I have the following setup:
OS: Ubuntu 20.04
ROS Distro: Noetic
One of my ultimate goals for the robot is to have its end-effector move through a series of waypoints using the blend_radius feature of the sequence capability provided by the Pilz Industrial Motion Planner plugin for MoveIt.
To see how movement through waypoints looks without blending, and to get familiar with the move_group node, I started following the MoveIt Move Group Python Interface tutorial.
I am currently adapting the move_group_python_interface_tutorial.py code to fit my robot.
Planning to a joint goal or a pose goal works fine with the above-mentioned Python script; at least it shows the desired robot movements in RViz (I have not tested it on the real robot yet, but that is irrelevant for my question here).
I prepared two target poses (w1, w2) and checked whether MoveIt could find a trajectory for (w0, w1) and for (w0, w2), with w0 being the starting pose of the end-effector for the desired planning group. In both cases MoveIt planned and displayed a correct trajectory in RViz.
This led me to believe that it would also be possible to create an array of waypoints [w0, w1, w2] and to plan a Cartesian path through them. But when I ask the move_group node to plan this Cartesian path, it does not find one.
Here is the relevant code snippet which specifies the waypoints and tries to find a cartesian path:
# Create waypoints relative to the current end-effector pose
current_pose = tutorial.move_group.get_current_pose().pose
w0 = copy.deepcopy(current_pose)
w1 = copy.deepcopy(w0)
w1.position.z += 0.05
w1.position.y += 0.07
w2 = copy.deepcopy(w0)
w2.position.z += 0.09
w2.position.y += 0.08
w3 = copy.deepcopy(current_pose)  # defined but never used below
waypoints = [w0, w1, w2]
# Plan: eef_step = 0.0005 m, jump_threshold = 0.0 (disabled)
(plan, fraction) = tutorial.move_group.compute_cartesian_path(waypoints, 0.0005, 0.0)
# Display the plan in RViz
display_trajectory = moveit_msgs.msg.DisplayTrajectory()
display_trajectory.trajectory_start = tutorial.robot.get_current_state()
display_trajectory.trajectory.append(plan)
# Publish
tutorial.display_trajectory_publisher.publish(display_trajectory)
# Execute
tutorial.move_group.execute(plan, wait=True)
This snippet is an adapted version of the code from move_group_python_interface_tutorial.py, using the exact same functions in the exact same order.
When the original move_group_python_interface_tutorial.py is executed with the Panda robot, the terminal says:
[ INFO] [1684139671.501745044]: Attempting to follow 3 waypoints for link 'panda_link8' using a step of 0.010000 m and jump threshold 0.000000 (in global reference frame)
[ INFO] [1684139671.507262405]: Computed Cartesian path with 44 points (followed 100.000000% of requested trajectory)
When I run move_group_python_interface_tutorial.py with the only modifications being that my robot's end-effector, planning group, and a smaller step size are used, the terminal says:
[ INFO] [1684140333.395828072]: Attempting to follow 3 waypoints for link 'L_Hand_Link' using a step of 0.001000 m and jump threshold 0.000000 (in global reference frame)
[ INFO] [1684140333.401456556]: Computed Cartesian path with 1 points (followed 0.000000% of requested trajectory)
And in RViz there is no movement, just a constant flickering of the robot (but only if the robot alpha is set to 0.5; with an alpha of 1 the robot does not flicker).
Weirdly enough, the terminal also says:
[ INFO] [1684140598.326323861]: Execution request received
[ INFO] [1684140598.686430742]: Completed trajectory execution with status SUCCEEDED ...
[ INFO] [1684140598.686487111]: Execution completed: SUCCEEDED
even though nothing moves in RViz.
One last thing I ought to mention: my robot has fewer than 3 DOF. It is roughly humanoid and I am working with the left arm at the moment, which has certain joint limits that dictate a certain reachable space. I am rather new to robotics, but I think this amounts to fewer than 3 DOF.
Does anyone have a theory for why the Cartesian path planning fails?
I am grateful for any input and am happy to provide more details about my setup and terminal outputs.
Asked by Jaú Gretler on 2023-05-15 03:56:05 UTC
Answers
My robot has fewer than 3 DOF. It is roughly humanoid and I am working with the left arm at the moment, which has certain joint limits that dictate a certain reachable space.
I missed this comment at first. Yes, it's going to be a problem, especially if you have to rely on Cartesian planning. Increasing the Cartesian tolerance will get you started, but will likely give imprecise results. If you are seeing IK failures during planning, try increasing the values of GoalPositionTolerance and GoalOrientationTolerance.
If you have to support "random" goals (goals you can't pre-plan for), then to rapidly calculate high-quality paths you may have to create a custom OMPL planner that understands the physical limitations of your robot.
Asked by Mike Scheutzow on 2023-05-16 09:04:02 UTC
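In the Python moveit_commander interface, the corresponding knobs live on the MoveGroupCommander. A minimal sketch, assuming a placeholder planning group name "l_arm" and purely illustrative tolerance values:
import sys
import rospy
import moveit_commander

# A minimal sketch: loosening the goal tolerances on a MoveGroupCommander.
# "l_arm" is a placeholder group name; the tolerance values are illustrative.
moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("tolerance_demo")
move_group = moveit_commander.MoveGroupCommander("l_arm")
move_group.set_goal_position_tolerance(0.01)    # meters
move_group.set_goal_orientation_tolerance(0.1)  # radians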
Comments
Thank you for this answer as well. So far the planning has worked well, but I will keep the option of creating a custom OMPL planner in the back of my mind. Thanks!
Asked by Jaú Gretler on 2023-05-31 11:56:16 UTC
Comments
The notification "This post is awaiting moderation, please do not repost the same content!" appeared above my post; what exactly did I do wrong? Did someone else already post about this problem?
Asked by Jaú Gretler on 2023-05-15 08:10:02 UTC
Spam is a huge problem for this site. New users whose posts include URLs have to wait for a human to verify the post is not spam. This rule applies to everyone.
Asked by Mike Scheutzow on 2023-05-16 08:08:22 UTC
Thanks for the info and for the beautification of my post!
Asked by Jaú Gretler on 2023-05-16 08:38:28 UTC
You typically should not specify the current_pose as the first waypoint (the planner will pick that up from the robot state you pass).
In my opinion, 0.5 mm is an insanely small step length for a plan for a normal-sized arm (and the info message tells you it got rounded up to 1.0 mm anyway). Try a bigger step.
If this message (the "followed X% of requested trajectory" line) shows any value less than 100%, the path planning has failed. Usually it means that the Inverse Kinematics (IK) solver was unable to find joint values within the specified Cartesian tolerance of a step's goal. Try increasing the tolerance.
Asked by Mike Scheutzow on 2023-05-16 08:39:26 UTC
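Putting those suggestions together, the snippet from the question might become something like the following (a sketch reusing the question's variables; the 1 cm step is illustrative, not a recommendation for every robot):
# Sketch of the adjusted plan, assuming the imports and variables from the
# question's snippet (tutorial, w1, w2, rospy). The 0.01 m eef_step is
# illustrative; tune it for your robot.
waypoints = [w1, w2]  # start pose omitted; the planner takes it from the current state
(plan, fraction) = tutorial.move_group.compute_cartesian_path(
    waypoints,
    0.01,  # eef_step: Cartesian interpolation resolution in meters
    0.0)   # jump_threshold: 0.0 disables the joint-space jump check
if fraction < 1.0:
    rospy.logwarn("Only planned %.1f%% of the requested path", fraction * 100.0)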
Thanks a lot for your advice! I removed the first waypoint (current_pose) and played around with the step size. A Cartesian plan was successfully computed after setting the step size to 9 cm. I must admit I don't understand what this step size means exactly. My robot is only about 20 cm tall and its arms are about 10 cm long. I thought that if I set two waypoints at a vertical distance of, let's say, 5 cm, then the Cartesian plan would interpolate this distance with maybe 10 points, so each step between interpolated points being around 5 mm seemed reasonable to me at the time. But that understanding is obviously wrong, since now a path between two waypoints which are approx. 5 cm apart is successfully computed using an eef_step_size of 9 cm. So what does that eef_step_size actually refer to?
Asked by Jaú Gretler on 2023-05-17 07:45:02 UTC
compute_cartesian_path() does linear interpolation of the eef pose between the waypoint poses that you passed into the function. This method creates additional waypoints separated by the distance set by the eef_step_size.
Asked by Mike Scheutzow on 2023-05-18 06:30:04 UTC
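To make the interpolation idea concrete, here is a standalone sketch of the same scheme (plain NumPy, position only; not MoveIt's actual implementation):
import numpy as np

def interpolate_positions(p_start, p_end, eef_step):
    """Linearly interpolate between two 3D points at eef_step resolution."""
    p_start, p_end = np.asarray(p_start, float), np.asarray(p_end, float)
    dist = np.linalg.norm(p_end - p_start)
    n_steps = max(int(np.ceil(dist / eef_step)), 1)  # always at least one step
    return [p_start + (p_end - p_start) * i / n_steps for i in range(n_steps + 1)]

points = interpolate_positions([0, 0, 0], [0, 0.05, 0], 0.005)
print(len(points))  # 11 poses: one every 5 mm, endpoints included
This also suggests why a 9 cm step can "work" for waypoints only 5 cm apart: with at least one step taken per waypoint pair, the interpolator simply returns the endpoints themselves, and planning succeeds as long as IK succeeds at each generated pose.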
I am aware of that. But isn't it weird that when I set eef_step_size = 9 cm it manages to interpolate between points which are less than 9 cm apart?
Asked by Jaú Gretler on 2023-05-18 08:29:36 UTC
But anyway, that does not bother me too much now that it is working. I have a different question now (I wasn't sure if I should open a new forum entry for that, but since I have described my whole setup here I will just continue in this thread).
So as stated above, I wish to set waypoints which the robot eef moves through. It's cool that this works by setting the waypoints in code now, but it would be a lot better if I could set the points visually in RViz and then let them be interpolated. I have found a plugin which promises to do pretty much that, but I can't get it to work. I think it was made for ROS Hydro.
Another idea is to make use of the fact that I can move the eef via the MoveIt MotionPlanning interface. Is there a way to publish the pose/position of the eef (or the whole joint state of an arm)? Because if there is, I could save different eef poses and use them as waypoints for the planning.
Asked by Jaú Gretler on 2023-05-18 08:34:59 UTC
I think what I essentially want is that the interactive markers, used to drag the eef of my robot arm around, continuously publish a message containing their position. Is that possible?
Asked by Jaú Gretler on 2023-05-18 08:56:09 UTC
I think I have found the information I need inside: /rviz_moveit_motion_planning_display/robot_interaction_interactive_marker_topic/feedback
Asked by Jaú Gretler on 2023-05-18 09:38:25 UTC
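A minimal sketch of that approach, assuming the topic name found above and saving one waypoint each time the interactive marker is released (triggering on MOUSE_UP is an assumption; POSE_UPDATE events also work if you want continuous publishing):
import rospy
from visualization_msgs.msg import InteractiveMarkerFeedback

saved_waypoints = []  # collected eef poses, usable as Cartesian waypoints later

def feedback_cb(msg):
    # MOUSE_UP fires when the marker is released after dragging the eef
    if msg.event_type == InteractiveMarkerFeedback.MOUSE_UP:
        saved_waypoints.append(msg.pose)
        rospy.loginfo("Saved waypoint #%d", len(saved_waypoints))

rospy.init_node("waypoint_recorder")
rospy.Subscriber(
    "/rviz_moveit_motion_planning_display/robot_interaction_interactive_marker_topic/feedback",
    InteractiveMarkerFeedback, feedback_cb)
rospy.spin()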
You can obtain the current joint values from a ROS topic named something like /joint_states. In MoveIt, the RobotState object has a method to do Forward Kinematics (FK) to calculate the pose of the eef (or any other link).
Asked by Mike Scheutzow on 2023-05-18 11:24:10 UTC
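From Python, one way to do exactly that is MoveIt's /compute_fk service. A minimal sketch, assuming the L_Hand_Link end-effector from the question and a base frame named "base_link" (adjust both for your robot):
import rospy
from sensor_msgs.msg import JointState
from moveit_msgs.srv import GetPositionFK, GetPositionFKRequest

rospy.init_node("eef_pose_from_fk")
rospy.wait_for_service("compute_fk")
compute_fk = rospy.ServiceProxy("compute_fk", GetPositionFK)

# Grab one joint state and ask move_group for the FK of the eef link
joint_state = rospy.wait_for_message("/joint_states", JointState)
req = GetPositionFKRequest()
req.header.frame_id = "base_link"   # assumed base frame; adjust for your robot
req.fk_link_names = ["L_Hand_Link"]
req.robot_state.joint_state = joint_state

resp = compute_fk(req)
print(resp.pose_stamped[0].pose)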
Thank you for your answer. I got it to work using the /rviz_moveit_motion_planning_display/robot_interaction_interactive_marker_topic/feedback approach.
Asked by Jaú Gretler on 2023-05-31 11:54:41 UTC