How do I set the orientation of the robot to face an object after navigation?
Suppose I can obtain the (x, y, z) coordinates of a spot in front of an object (in the `camera_depth_optical_frame`), and the bearing of that point from the robot (in the same frame) using `math.atan2(x, z)`. I then transform the (x, y, z) point into the corresponding `map` frame coordinates. How do I calculate the orientation that would make the front of the robot face the object once it navigates to the transformed point (which, again, is in front of the object)?
This is what I have tried so far:
Get the (x, y, z) point and the bearing `cam_orient = math.atan2(x, z)` from the robot in the `camera_depth_optical_frame`, then:

```python
import math
import numpy as np
import rospy
import tf
from tf.transformations import quaternion_from_euler

listener = tf.TransformListener()

# Get the transform from camera_depth_optical_frame to map
listener.waitForTransform('map', 'camera_depth_optical_frame',
                          rospy.Time(0), rospy.Duration(10.0))
(trans, rot) = listener.lookupTransform('map', 'camera_depth_optical_frame',
                                        rospy.Time(0))
transform_matrix = np.dot(tf.transformations.translation_matrix(trans),
                          tf.transformations.quaternion_matrix(rot))
_, _, yaw = tf.transformations.euler_from_matrix(transform_matrix, axes='sxyz')

# Get the orientation of (x, y, z) in the map frame
map_orient = cam_orient + yaw

# Convert map_orient to a quaternion
quaternion = quaternion_from_euler(0.0, 0.0, map_orient)
```
This did not produce the right orientation. The `camera_depth_optical_frame` follows the optical-frame convention, with the z-axis pointing outwards (forward), the x-axis pointing right, and the y-axis pointing down. Perhaps there is some step I am missing, or is my entire logic wrong?
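One alternative I have been considering, to sidestep the axis mismatch between the optical frame and the map frame: instead of extracting a yaw from the optical-frame transform and adding the bearing to it, rotate the bearing direction itself into map axes and take the yaw there. A numpy-only sketch, where `R` is assumed to be the 3x3 rotation of `camera_depth_optical_frame` in `map` (e.g. `quaternion_matrix(rot)[:3, :3]`; the example `R` below is a hypothetical level-mounted camera facing along map +x):

```python
import numpy as np

def map_yaw_from_optical(point_cam, R):
    """Rotate an optical-frame direction (x right, y down, z forward)
    into map axes with R and return its yaw about the map z-axis."""
    d_map = np.asarray(R, dtype=float) @ np.asarray(point_cam, dtype=float)
    return float(np.arctan2(d_map[1], d_map[0]))

# Hypothetical camera pose: level, facing along map +x.
# Then optical z -> map +x, optical x -> map -y, optical y -> map -z,
# so the columns of R are the optical axes expressed in map coordinates.
R = np.array([[ 0.0,  0.0, 1.0],
              [-1.0,  0.0, 0.0],
              [ 0.0, -1.0, 0.0]])

yaw = map_yaw_from_optical([0.0, 0.0, 1.0], R)  # object straight ahead
```

An object to the right of the camera (positive optical x) then comes out as a negative (clockwise) map yaw, which is the sign convention REP 103 expects.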