
Well, this isn't the answer I was hoping for, but I've managed a hack that at least gets it working. The idea is to use the projection of the target coordinates onto the Kinect RGB frame as deltas on the pan and tilt servo positions. So I subscribe to the /joint_states topic to keep track of the current pan and tilt joint angles, then update them with the differences computed in the Kinect RGB frame. In other words, the last line of the code above becomes:

return [self.current_head_pan + pan_angle, self.current_head_tilt + tilt_angle]

where the current_head_pan and current_head_tilt values come from a callback on the /joint_states topic as follows:

def update_joint_state(self, msg):
    # Cache the full JointState message, then pull out the current
    # pan and tilt angles by looking up the joint names in msg.name
    self.joint_state = msg
    self.current_head_pan = msg.position[msg.name.index(self.head_pan_joint)]
    self.current_head_tilt = msg.position[msg.name.index(self.head_tilt_joint)]
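For completeness, here's a minimal sketch of how the subscriber might be wired up in the node's __init__. The /joint_states topic and JointState message type are standard, but the parameter names and defaults here are just my assumptions; adjust them to match your URDF:

import rospy
from sensor_msgs.msg import JointState

# Assumed setup: joint names pulled from parameters, with illustrative defaults
self.head_pan_joint = rospy.get_param('~head_pan_joint', 'head_pan_joint')
self.head_tilt_joint = rospy.get_param('~head_tilt_joint', 'head_tilt_joint')
self.current_head_pan = 0.0
self.current_head_tilt = 0.0

# Keep the current head joint angles updated as the servos move
rospy.Subscriber('/joint_states', JointState, self.update_joint_state)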

Because of some slop in the joints, I publish the same target 5 times with a 0.2-second sleep in between, and this is usually sufficient to get the Kinect to zero in on the desired target. For example, I can alternate between looking at the robot's left and right hands simply by setting the target position to {x: 0, y: 0, z: 0} in the respective hand frames.
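In case it helps, the repeat-publish loop looks something like the sketch below. The publisher (self.target_pub) and the frame name are hypothetical stand-ins for whatever your own node uses:

import rospy
from geometry_msgs.msg import PointStamped

# Hypothetical target: the origin of the left hand's frame
target = PointStamped()
target.header.frame_id = 'left_hand_link'  # illustrative frame name; match your URDF
target.header.stamp = rospy.Time.now()
target.point.x = 0.0
target.point.y = 0.0
target.point.z = 0.0

# Publish the same target 5 times, 0.2 seconds apart, to work around joint slop
for _ in range(5):
    self.target_pub.publish(target)  # self.target_pub is assumed to publish PointStamped
    rospy.sleep(0.2)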

In any event, if someone can think of a more elegant way of computing the pan/tilt values directly from tf, I'd love to hear it.

--patrick