# How to use tf to point a Kinect toward a target location?

Hello ROS People,

I am trying to mimic the head pointing action found in the pr2_head_action package using a robot that has a Kinect as its head. The Kinect is mounted on top of the tilt servo with a 7cm long bracket in addition to the Kinect's own base. I have a URDF model of the whole robot that works well.

The trouble I am having is figuring out how to modify the pr2_head_action code to account for the fact that the Kinect's RGB frame is well above (and slightly offset from) the pan and tilt servos. For example, suppose I want the Kinect's RGB camera to look straight at a point 1 meter in front of the robot's base_link frame. If I use the pr2_head_action code without modification, the head will not tilt down far enough, because the target's location is projected onto the pan and tilt frames rather than the Kinect's RGB frame. However, if I project the target's coordinates onto the Kinect's RGB frame, I initially get the correct tilt and the camera points at the target. But if I repeat the same goal, the head tilts back toward vertical rather than staying put as it should.

The key part of the code I am using is as follows:

```python
def transform_target_point(self, target):
    pan_ref_frame = '/kinect_rgb_frame'
    tilt_ref_frame = '/kinect_rgb_frame'

    # Wait for tf info (timeout in 5 seconds)
    self.tf.waitForTransform(pan_ref_frame, target.header.frame_id,
                             rospy.Time(), rospy.Duration(5.0))

    # Transform target point to pan reference frame & retrieve the pan angle
    pan_target = self.tf.transformPoint(pan_ref_frame, target)
    pan_angle = math.atan2(pan_target.point.y, pan_target.point.x)

    # Transform target point to tilt reference frame & retrieve the tilt angle
    tilt_target = self.tf.transformPoint(tilt_ref_frame, target)
    tilt_angle = math.atan2(-tilt_target.point.z,
                            math.sqrt(tilt_target.point.x ** 2 +
                                      tilt_target.point.y ** 2))

    return [pan_angle, tilt_angle]
```


I'm guessing a tf guru can see what I am missing. Somehow I need to project the target onto the /kinect_rgb_frame but then publish the pan/tilt values as deltas on the current pan/tilt positions. While I could code it this way, I'm guessing there is a simpler way using tf and projecting onto the right combination of frames?

--patrick



Well, this isn't the answer I was hoping for, but I've managed a hack that at least gets it working. The idea is to use the projection of the target coordinates on the Kinect RGB frame as deltas on the pan and tilt servo positions. So I have to subscribe to the /joint_states topic to keep track of the current pan and tilt joint angles, then I update them with the differences computed in the Kinect RGB frame. In other words, the last line of the code above becomes:

```python
return [self.current_head_pan + pan_angle, self.current_head_tilt + tilt_angle]
```


where the current_head_pan and current_head_tilt values come from a callback on the /joint_states topic as follows:

```python
def update_joint_state(self, msg):
    self.joint_state = msg
    # Pull the current pan/tilt angles out of the JointState message by name
    # (substitute your robot's actual joint names for these placeholders)
    self.current_head_pan = msg.position[msg.name.index('head_pan_joint')]
    self.current_head_tilt = msg.position[msg.name.index('head_tilt_joint')]
```


Because of some slop in the joints, I publish the same target 5 times with a 0.2 second sleep in between, and this is usually sufficient to get the Kinect to zero in on the desired target. For example, I can alternate between looking at the robot's left and right hands simply by setting the target position to {x: 0, y: 0, z: 0} in the respective frames.
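The delta computation itself is just trigonometry, so here is a minimal standalone sketch of the idea with no tf dependency: the helper names (`pan_tilt_deltas`, `head_goal`) are my own, and the target coordinates are assumed to already be expressed in the camera frame (ROS convention: x forward, y left, z up):

```python
import math

def pan_tilt_deltas(x, y, z):
    """Pan/tilt offsets toward a target at (x, y, z) in the camera frame."""
    pan_delta = math.atan2(y, x)
    tilt_delta = math.atan2(-z, math.sqrt(x ** 2 + y ** 2))
    return pan_delta, tilt_delta

def head_goal(current_pan, current_tilt, x, y, z):
    """Absolute servo goal = current joint angle + camera-frame delta."""
    dp, dt = pan_tilt_deltas(x, y, z)
    return current_pan + dp, current_tilt + dt

# A target dead ahead in the camera frame yields zero deltas, so repeating
# the same goal leaves the head where it is -- which is the behavior the
# delta approach restores.
print(head_goal(0.3, -0.1, 1.0, 0.0, 0.0))  # -> (0.3, -0.1)
```

This makes explicit why re-sending the same goal is now harmless: once the camera is on target, both deltas go to zero and the commanded joint angles equal the current ones.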

In any event, if someone can think of a more elegant way of computing the pan/tilt values directly from tf, I'd love to hear it.

--patrick
