
Move_group_interface_tutorial: orientation vs. position

asked 2017-02-14 06:31:17 -0500



I am learning how to plan motions for my robot arm with MoveIt, following the pr2_tutorials.

What I don't understand is:

1 - What is the difference between orientation and position?

2 - I tried to move my end effector with only the position x, y, z and it didn't work. Why?

3 - Right now I always plan with a joint-space goal, read out the resulting positions, and then plan with a pose goal or path. Is this the right way, or is there a better method to plan with coordinates?

Can you help me clarify my doubts? Thanks


1 Answer


answered 2017-02-14 07:43:28 -0500


updated 2017-02-14 07:47:15 -0500

For 1) I am assuming you are asking about lines such as

box_pose.orientation.w = 1.0;

An object in 3-dimensional space requires six degrees of freedom to uniquely specify a pose: positions x, y, and z, as well as an orientation such as roll, pitch, and yaw. There are several ways to specify orientation, some easier for a human to understand/visualize and some "easier" for a computer to use. In this case quaternions are used, which are a desirable representation for a number of reasons. Rotation quaternions are represented by four numbers in vector form [x y z w]. The line of code above sets the w component to 1.0, with the rest defaulting to zero. As a rotation, this is equivalent to identity (no rotation), which in this case specifies an orientation that is the same as the reference frame (frame_id).

For 2) By omitting that line, the quaternion values all default to zero, which is not a valid rotation (a rotation quaternion must be of unit length); that is probably why you get an error in that case.

You can try a tool for visualizing rotation quaternions, or if you want to know more, try these references:

And finally for 3) usually your task will dictate how you obtain coordinates. For example, industrial robot poses can be taught with a teach pendant (similar to your current approach), while in other applications, where positions are determined online and/or from sensor data, the pose is more easily expressed in Cartesian coordinates. I would suggest using whatever method makes the most sense for the application at hand. This will become more evident as you learn.



Thanks. I'm new to implementing robots, so this is a new world for me. I really appreciate the references you gave; they helped a lot to better understand this part.

FábioBarbosa ( 2017-02-14 10:45:22 -0500 )
