answers.ros.org Q&A

# Move_group_interface_tutorial: orientation vs. position

Hi!

I am learning how to plan motions for my robot arm with MoveIt, following the pr2_tutorials: http://docs.ros.org/indigo/api/pr2_mo...

What I don't understand is:

1 - What is the difference between orientation and position?

2 - I tried to move my end effector with only position x, y, z and it didn't work. Why?

3 - Right now I always plan with a joint-space goal and read off the resulting positions, then plan with a pose goal or path. Is this the way to do it, or is there a better method to plan with coordinates?

Can you help me clarify my doubts? Thanks



For 1) I am assuming you are asking about lines such as

box_pose.orientation.w = 1.0;

An object in 3-dimensional space requires six degrees of freedom to uniquely specify its pose: position (x, y, and z) as well as orientation (e.g. roll, pitch, and yaw). There are several ways to specify orientation, some easier for a human to understand/visualize and some "easier" for a computer to use. In this case quaternions are used, which are a desirable representation for a number of reasons. A rotation quaternion is represented by four numbers in vector form [x y z w]. The line of code above sets the w component to 1.0, with the rest defaulting to zero. As a rotation this is the identity (no rotation), which in this case specifies an orientation the same as that of the reference frame (frame_id).

For 2) By omitting that line, the quaternion values default to all zero, which is not a valid rotation (a rotation quaternion must have unit length), which is probably why you get an error in that case.

You can try something like this tool for visualizing rotation quaternions, or if you want to know more, try these references:

https://en.wikipedia.org/wiki/Quaternion

http://www.euclideanspace.com/maths/algebra/realNormedAlgebra/quaternions/

https://www.amazon.com/Quaternions-Rotation-Sequences-Applications-Aerospace/dp/0691102988

And finally for 3) usually your task will dictate how you obtain coordinates. For example, industrial robot poses can be taught with a teach pendant (similar to your current approach), while in other applications, where positions are determined online and/or from sensor data, the pose is more easily expressed in Cartesian coordinates. I would suggest using whatever method makes the most sense for the application at hand. This will become more evident as you learn.


Thanks. I'm new to implementing robots, so this is a new world for me. I really appreciate the references you gave; they helped a lot to better understand this part.

( 2017-02-14 10:45:22 -0600 )