Robotics StackExchange | Archived questions

How do I set orientation of robot to face object after navigation?

Suppose I can obtain the (x, y, z) coordinates of a spot in front of an object (in the camera_depth_optical_frame), and the bearing of that point from the robot (also in the camera_depth_optical_frame) using math.atan2(x, z). I then transform the (x, y, z) point into the corresponding map frame coordinates. How do I calculate the orientation that would make the front of the robot face the object once it navigates to the transformed point (which, again, is in front of the object)?

This is what I have tried so far:

Get the (x, y, z) point and its bearing `cam_orient = math.atan2(x, z)` from the robot in the `camera_depth_optical_frame`, then:

import math

import numpy as np
import rospy
import tf
from tf.transformations import quaternion_from_euler

listener = tf.TransformListener()

"""Get the transform that converts a point in 3D space from camera_depth_optical_frame to map"""
listener.waitForTransform('map', 'camera_depth_optical_frame', rospy.Time(0), rospy.Duration(10.0))

(trans, rot) = listener.lookupTransform('map', 'camera_depth_optical_frame', rospy.Time(0))

"""Compose translation and rotation into a 4x4 homogeneous transform matrix"""
transform_matrix = np.dot(tf.transformations.translation_matrix(trans),
                          tf.transformations.quaternion_matrix(rot))

"""Extract the yaw component of the transform"""
_, _, yaw = tf.transformations.euler_from_matrix(transform_matrix, axes='sxyz')

"""Get orientation of (x,y,z) in the map frame"""
map_orient = cam_orient + yaw

"""Convert map_orient to a quaternion"""
quaternion = quaternion_from_euler(0.0, 0.0, map_orient)

This did not produce the right orientation. The camera_depth_optical_frame has its z axis facing outward, its x axis facing right, and its y axis facing down. Perhaps there is a step I am missing, or is my entire logic wrong?

Asked by distro on 2023-07-04 21:13:08 UTC
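One likely culprit in the code above: the yaw decomposed out of the map->camera_depth_optical_frame matrix cannot simply be added to the optical-frame bearing, because the optical frame is also rolled and pitched relative to the map. A minimal numeric sketch of the failure, assuming (hypothetically) a camera mounted level and facing along map +x with the robot at the map origin, no ROS required:

```python
import math
import numpy as np

# Rotation of a standard ROS optical frame (z out, x right, y down) relative
# to the map, for a camera mounted level and facing along map +x.
# Columns are the optical axes expressed in the map frame.
R_map_optical = np.array([
    [0.0,  0.0, 1.0],
    [-1.0, 0.0, 0.0],
    [0.0, -1.0, 0.0],
])

# A spot 2 m dead ahead of the camera, in the optical frame:
point_cam = np.array([0.0, 0.0, 2.0])
cam_orient = math.atan2(point_cam[0], point_cam[2])          # 0.0 rad, dead ahead

# The 'sxyz' yaw that euler_from_matrix extracts from this rotation:
yaw = math.atan2(R_map_optical[1, 0], R_map_optical[0, 0])   # -pi/2

map_orient = cam_orient + yaw                                # -pi/2: wrong

# Transforming the point itself gives the correct bearing instead:
point_map = R_map_optical.dot(point_cam)                     # lands at (2, 0, 0)
true_bearing = math.atan2(point_map[1], point_map[0])        # 0.0: correct
```

The robot should face straight along map +x (bearing 0), but the add-the-yaws shortcut reports -pi/2, because the roll and pitch of the optical frame leak into the extracted Euler yaw.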

Comments

Answers

I believe it would be similar to this:

  1. get the map->camera_link pose
  2. get the map->goal_pose
  3. direction_to_object = (goal_pose.pose.x - camera_link.pose.x, goal_pose.pose.y - camera_link.pose.y) (this gives the vector from camera_link to goal_pose)
  4. yaw_angle = atan2(direction_to_object.y(), direction_to_object.x()); then tf::Quaternion desired_orientation; desired_orientation.setRPY(0.0, 0.0, yaw_angle);
  5. build the goal message and send it to move_base

Asked by chased11 on 2023-07-04 22:19:57 UTC
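The steps above can be sketched in Python with hypothetical coordinates; the quaternion line is just quaternion_from_euler(0, 0, yaw) written out in closed form for a pure yaw rotation:

```python
import math

def face_goal_quaternion(camera_xy, goal_xy):
    """Map-frame quaternion (x, y, z, w) that turns the robot's front
    toward goal_xy when standing at camera_xy."""
    dx = goal_xy[0] - camera_xy[0]      # step 3: vector camera_link -> goal
    dy = goal_xy[1] - camera_xy[1]
    yaw = math.atan2(dy, dx)            # step 4: heading, 0 rad along map +x
    # pure-yaw quaternion, equivalent to setRPY(0.0, 0.0, yaw):
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

# robot at (1, 1), object spot at (1, 3): robot should face map +y, yaw = pi/2
qx, qy, qz, qw = face_goal_quaternion((1.0, 1.0), (1.0, 3.0))
```

The resulting quaternion would go into the orientation field of the goal pose sent to move_base (step 5).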

Comments

@chased11, direction_to_object = goal_pose.pose.x - camera_link.pose.x, goal_pose.pose.y - camera_link.pose.y. Is this in the map frame?

Asked by distro on 2023-07-04 22:37:05 UTC

Correct. It is (goal_pose wrt map_frame) - (camera_link wrt map_frame).

Asked by chased11 on 2023-07-04 22:41:01 UTC

@chased11 your method did not quite work, although it helped me figure out a workaround. I will post it as an answer shortly.

Asked by distro on 2023-07-06 15:20:40 UTC

Interesting, I did something similar, but I was only using the vector between the two and not converting to a quaternion. Did it still send the wrong orientation?

Asked by chased11 on 2023-07-06 23:03:00 UTC

@chased11 So when I calculate the yaw_angle in the map frame, I then have to convert that angle according to the quadrant it actually falls in. I know that the yaw angle increases counterclockwise and decreases clockwise, with the 0 rad orientation along the positive x axis.

Asked by distro on 2023-07-08 19:45:32 UTC
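For what it's worth, math.atan2(dy, dx) is already quadrant-aware: it returns an angle in (-pi, pi], measured counterclockwise from the map +x axis, so no manual quadrant correction should be needed as long as the arguments stay in (dy, dx) order:

```python
import math

# Same |dy/dx| ratio in all four quadrants, four distinct headings:
ne = math.atan2( 1.0,  1.0)    #  pi/4   (goal up-right of robot)
nw = math.atan2( 1.0, -1.0)    #  3pi/4  (up-left)
sw = math.atan2(-1.0, -1.0)    # -3pi/4  (down-left)
se = math.atan2(-1.0,  1.0)    # -pi/4   (down-right)
```

A plain atan(dy/dx) would collapse NE with SW and NW with SE, which is the usual reason a "quadrant fix" seems necessary.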

That makes sense; I missed that the first time around. Did you just use another transform listener to correct the yaw in the camera_link frame? How did you implement it? I will eventually be doing something similar on my current project.

Asked by chased11 on 2023-07-09 23:44:41 UTC

@chased11 I ended up not calculating the yaw angle in the camera_depth_optical_frame at all. I just calculate it directly in the map frame.

Asked by distro on 2023-07-09 23:53:52 UTC
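For reference, computing the angle directly in the map frame amounts to: transform the target point from the optical frame into map using the lookupTransform matrix, then take atan2 of the vector from the robot's map position to the transformed point. A minimal sketch with a toy numpy transform standing in for the live TF lookup (in practice the 4x4 matrix would come from translation_matrix/quaternion_matrix as in the question):

```python
import math
import numpy as np

def yaw_to_point_in_map(point_cam, T_map_cam, robot_xy_map):
    """Map-frame yaw that faces the robot toward a camera-frame point.

    point_cam    -- (x, y, z) in camera_depth_optical_frame
    T_map_cam    -- 4x4 homogeneous map <- camera transform
    robot_xy_map -- robot's (x, y) position in the map frame
    """
    p = T_map_cam.dot(np.array([point_cam[0], point_cam[1], point_cam[2], 1.0]))
    return math.atan2(p[1] - robot_xy_map[1], p[0] - robot_xy_map[0])

# Toy transform: camera frame translated to (1, 1, 0), axes aligned with map.
T = np.eye(4)
T[0, 3], T[1, 3] = 1.0, 1.0
yaw = yaw_to_point_in_map((0.0, 1.0, 0.0), T, (0.0, 0.0))  # point lands at (1, 2)
```

Because the full point is transformed before the angle is taken, the roll and pitch of the optical frame never enter the calculation.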