Robotics StackExchange | Archived questions

How to use the detected pose matrix of transparent object recognition for grasping?

Hello,

I am using object_recognition_transparent_objects to estimate the pose of a transparent object, but I don't know how to use the resulting pose matrix to grasp the object with the PR2.

For now, I use "getProjectiveMatrix()" to get the 4x4 pose matrix from the detector:

vector<PoseRT> poses;
vector<string> detectedObjectsNames;
// image, depth, registrationMask, errors, and debugInfo are prepared earlier.
detector.detect(image, depth, registrationMask,
                poses, errors, detectedObjectsNames, &debugInfo);
cv::Mat pose_mat = poses[0].getProjectiveMatrix();

The output in the console looks like this:

poses rotation and trans in sample: [-0.9845744205613415, 0.1729370982511426, -0.02657010396617494, -0.0239983537453571;
  0.127148784173917, 0.6028772404118427, -0.7876371116678647, 0.2494694805827275;
  -0.1201931656101684, -0.7788657092476701, -0.6155662514293083, 0.8829539939093549;
  0, 0, 0, 1]
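
My guess is that the top-left 3x3 block is the rotation and the last column is the translation, so my plan is to split the matrix like this (assuming getProjectiveMatrix() returns a CV_64F matrix in [R | t; 0 0 0 1] form):

// Sketch: split the 4x4 pose into rotation and translation,
// assuming the usual [R | t; 0 0 0 1] layout and CV_64F element type.
cv::Mat R = pose_mat(cv::Rect(0, 0, 3, 3)).clone();  // 3x3 rotation
cv::Mat t = pose_mat(cv::Rect(3, 0, 1, 3)).clone();  // 3x1 translation (in the camera frame?)
std::cout << "R = " << R << "\nt = " << t << std::endl;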

But it's still hard to use this matrix directly for grasping the matched transparent object. For example, I don't know which tf frame this matrix is expressed in, or how to use it to move the gripper to the right position.
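
If the pose is expressed in the camera's optical frame (which is my assumption), my plan is to use tf to bring it into the robot's base frame before planning a grasp, roughly like the sketch below; the frame names are just placeholders for my PR2 setup:

#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <tf/transform_datatypes.h>
#include <geometry_msgs/PoseStamped.h>
#include <opencv2/core/core.hpp>

// Sketch: convert the 4x4 pose (assumed to be CV_64F and expressed in the
// camera optical frame) into a PoseStamped and transform it into the robot's
// base frame with tf.
geometry_msgs::PoseStamped poseInBaseFrame(const cv::Mat &pose_mat,
                                           tf::TransformListener &listener)
{
  tf::Matrix3x3 rot(pose_mat.at<double>(0,0), pose_mat.at<double>(0,1), pose_mat.at<double>(0,2),
                    pose_mat.at<double>(1,0), pose_mat.at<double>(1,1), pose_mat.at<double>(1,2),
                    pose_mat.at<double>(2,0), pose_mat.at<double>(2,1), pose_mat.at<double>(2,2));
  tf::Quaternion q;
  rot.getRotation(q);

  geometry_msgs::PoseStamped object_in_camera;
  // Placeholder frame id: the Kinect optical frame on my PR2; adjust to your setup.
  object_in_camera.header.frame_id = "head_mount_kinect_rgb_optical_frame";
  object_in_camera.header.stamp = ros::Time(0);  // use the latest available transform
  tf::quaternionTFToMsg(q, object_in_camera.pose.orientation);
  object_in_camera.pose.position.x = pose_mat.at<double>(0,3);
  object_in_camera.pose.position.y = pose_mat.at<double>(1,3);
  object_in_camera.pose.position.z = pose_mat.at<double>(2,3);

  geometry_msgs::PoseStamped object_in_base;
  listener.waitForTransform("base_link", object_in_camera.header.frame_id,
                            ros::Time(0), ros::Duration(1.0));
  listener.transformPose("base_link", object_in_camera, object_in_base);
  return object_in_base;
}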

I've read the paper but still have no idea. Any advice is appreciated, thank you.
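
Once the pose is in the base frame, my rough plan is to hand it to MoveIt, something like the sketch below (the planning group name and the approach offset are guesses for my setup, and I suspect the gripper orientation has to be computed from the object pose rather than copied directly):

#include <moveit/move_group_interface/move_group.h>
#include <geometry_msgs/PoseStamped.h>

// Sketch: send the transformed pose to the PR2 right arm via MoveIt.
// Assumes "right_arm" is a valid planning group and that the pose is
// already expressed in a frame MoveIt knows about (e.g. base_link).
void moveGripperToObject(const geometry_msgs::PoseStamped &object_in_base)
{
  moveit::planning_interface::MoveGroup right_arm("right_arm");

  geometry_msgs::PoseStamped grasp_pose = object_in_base;
  // Approach from above with a small offset so the gripper does not
  // collide with the object; 0.10 m is a placeholder value.
  grasp_pose.pose.position.z += 0.10;

  right_arm.setPoseTarget(grasp_pose);
  right_arm.move();
}

Does this approach make sense, or is there a more standard way to go from the detected pose to a grasp on the PR2?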

Asked by Po-Jen Lai on 2014-12-08 23:38:17 UTC

Comments

Answers