Trying to understand human-robot mapping (Baxter)


I'm having some difficulty understanding the concept of teleoperation in ROS, so I'm hoping someone can clear some things up.

I am trying to control a Baxter robot (in simulation) using an HTC Vive. I have a publisher node which successfully extracts PoseStamped data (poses relative to the lighthouse base stations) from the controllers and publishes it on separate topics for the left and right controllers.
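
For reference, my publisher is roughly along these lines (a simplified sketch; `get_controller_pose()` stands in for whatever actually reads the Vive through OpenVR, and the topic names are just the ones I picked):

```python
#!/usr/bin/env python
# Simplified sketch of my publisher node. get_controller_pose() is a
# placeholder for whatever actually reads the Vive (e.g. via pyopenvr).
import rospy
from geometry_msgs.msg import PoseStamped

def get_controller_pose(side):
    """Placeholder: returns (x, y, z, qx, qy, qz, qw) in the lighthouse frame."""
    raise NotImplementedError

def main():
    rospy.init_node('vive_pose_publisher')
    pubs = {side: rospy.Publisher('/vive/%s_controller/pose' % side,
                                  PoseStamped, queue_size=1)
            for side in ('left', 'right')}
    rate = rospy.Rate(50)
    while not rospy.is_shutdown():
        for side, pub in pubs.items():
            x, y, z, qx, qy, qz, qw = get_controller_pose(side)
            msg = PoseStamped()
            msg.header.stamp = rospy.Time.now()
            msg.header.frame_id = 'vive_world'  # lighthouse base-station frame
            msg.pose.position.x = x
            msg.pose.position.y = y
            msg.pose.position.z = z
            msg.pose.orientation.x = qx
            msg.pose.orientation.y = qy
            msg.pose.orientation.z = qz
            msg.pose.orientation.w = qw
            pub.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    main()
```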

So now I wish to create the subscribers which receive the pose data from the controllers and convert it into a pose for the robot. What I'm confused about is the mapping... after reading documentation on Baxter and robotics transformations, I don't really understand how to map human poses to Baxter. I know I need to use the IK service, which essentially calculates the joint angles required to achieve a pose (given the desired position and orientation of the end effector).
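
Based on the standard baxter_examples IK snippet, my understanding is that the subscriber side would look roughly like this (a sketch; `map_to_robot_frame()` is the step I can't fill in, which is really the heart of my question):

```python
#!/usr/bin/env python
# Sketch of the subscriber/IK side, based on the standard Baxter IK example.
# The mapping step (map_to_robot_frame) is the part I'm unsure about.
import rospy
from geometry_msgs.msg import PoseStamped
from baxter_core_msgs.srv import SolvePositionIK, SolvePositionIKRequest
import baxter_interface

def map_to_robot_frame(controller_pose):
    """Placeholder: transform the Vive pose into Baxter's 'base' frame."""
    mapped = PoseStamped()
    mapped.header.frame_id = 'base'
    mapped.header.stamp = rospy.Time.now()
    mapped.pose = controller_pose.pose  # <-- surely it can't be this simple?
    return mapped

class ArmTeleop(object):
    def __init__(self, limb):
        ns = 'ExternalTools/' + limb + '/PositionKinematicsNode/IKService'
        rospy.wait_for_service(ns)
        self._iksvc = rospy.ServiceProxy(ns, SolvePositionIK)
        self._limb = baxter_interface.Limb(limb)
        rospy.Subscriber('/vive/%s_controller/pose' % limb,
                         PoseStamped, self._on_pose, queue_size=1)

    def _on_pose(self, msg):
        ikreq = SolvePositionIKRequest()
        ikreq.pose_stamp.append(map_to_robot_frame(msg))
        resp = self._iksvc(ikreq)
        if resp.isValid[0]:
            joints = dict(zip(resp.joints[0].name, resp.joints[0].position))
            self._limb.set_joint_positions(joints)

if __name__ == '__main__':
    rospy.init_node('vive_teleop')
    arms = [ArmTeleop(limb) for limb in ('left', 'right')]
    rospy.spin()
```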

But it isn't as simple as just plugging the PoseStamped data from the controller node straight into the ik_service, right? Human and robot anatomy are quite different, so I'm not sure if I'm missing a vital step here...

Looking at other people's code attempting the same thing, I see that some have created a 'base'/'human' pose which hard-codes coordinates for the limbs to mimic a human's starting pose. Is this essentially what I need? My current guess at what that approach amounts to is sketched below.
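
That is, a relative mapping: record the controller's starting pose and the robot's starting end-effector pose once, then apply the (possibly scaled) controller displacement to the robot's start pose. A position-only sketch (all names are mine, not from anyone's actual code):

```python
# Relative ("base pose") mapping as I understand it:
#   robot_target = robot_start + scale * (controller_now - controller_start)
# Only position is handled; orientation needs a similar relative rotation,
# and the axes may need remapping (OpenVR is y-up, ROS is z-up).
import copy

class RelativeMapper(object):
    def __init__(self, robot_start_pose, scale=1.0):
        self._robot_start = robot_start_pose  # e.g. end-effector pose in Baxter's neutral posture
        self._controller_start = None         # captured from the first controller message
        self._scale = scale                   # shrink/grow the human workspace

    def map(self, controller_pose):
        if self._controller_start is None:
            self._controller_start = copy.deepcopy(controller_pose)
        target = copy.deepcopy(self._robot_start)
        target.position.x += self._scale * (controller_pose.position.x -
                                            self._controller_start.position.x)
        target.position.y += self._scale * (controller_pose.position.y -
                                            self._controller_start.position.y)
        target.position.z += self._scale * (controller_pose.position.z -
                                            self._controller_start.position.z)
        target.orientation = controller_pose.orientation  # naive pass-through
        return target
```

Is that roughly the right idea? Any insight is very much appreciated!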
