Robotic arm fails to pick object
Hello,
So I have the following problem. I have a monocular camera mounted on a robot arm and I want to find and publish the poses of 6 objects for a pick-and-place operation. The camera always captures the objects from the same position with a fixed z (depth). I have also mapped the pixel coordinates to world coordinates, with (0, 0) being somewhere in the top left of the image.

To locate the objects I extract their contours and their centroids. Then I find the world coordinates of the centroids and move the origin (0, 0) to the optical center of the camera by subtracting (optical center) - (previous origin) from every point. For the orientation I find only the yaw of the trimmers (of pitch, roll, yaw) because they can only rotate about the z axis. The results of the pose estimation are shown in the image below (the red line is the positive x axis, the green line the positive y axis, and the purple circle is the optical center from the camera's calibration). Then I construct the tfs of the objects with respect to the camera's frame and publish them with respect to the robot base so that they aren't moving.
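The two image-side steps described above (shifting the origin to the optical center, and getting the yaw from a contour) can be sketched roughly as follows. This is only a minimal illustration under assumptions: `cx_px`, `cy_px`, and `scale_m_per_px` are hypothetical names for the calibration values, and in practice `cv2.moments` or `cv2.minAreaRect` would normally supply the orientation:

```python
import math

def pixel_to_camera_xy(u_px, v_px, cx_px, cy_px, scale_m_per_px):
    """Map a pixel centroid to metric (x, y), moving the origin from the
    image's top-left corner to the principal point (optical center)."""
    return ((u_px - cx_px) * scale_m_per_px,
            (v_px - cy_px) * scale_m_per_px)

def yaw_from_points(points):
    """Yaw (rad) of a point set's principal axis, computed from its
    second-order central moments (the quantities cv2.moments exposes)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    mu20 = sum((p[0] - mx) ** 2 for p in points) / n
    mu02 = sum((p[1] - my) ** 2 for p in points) / n
    mu11 = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return 0.5 * math.atan2(2.0 * mu11, mu20 - mu02)
```

For example, a centroid exactly at the principal point maps to (0, 0), and a contour stretched along the image's x axis gets yaw 0.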
THE PROBLEM NOW IS:
To grab an object we move the arm manually to one of the 6 detected objects and echo its pose and the frame of the tool. Based on that echo we believe we can pick every other object regardless of its rotation. When an object is translated but not rotated, the robotic arm picks it successfully, but when it is rotated there is an offset, mostly on the y axis. The rotation of the robotic arm is always correct (matching what we echo) regardless of rotation or translation. The offset became smaller when I published the frames with respect to the camera's optical center (before, I was publishing them with (0, 0) at the top left corner of the image, which was wrong). I believe my origin axes are now correct, and I can't figure out why it isn't working, so if someone could help I would be grateful.
Please edit your description and upload the image using the icon that looks like a terminal. This way the image will persist with the question after it expires from the pasteboard site.
Are you using MoveIt for the goals?

I uploaded the image and inserted it.
1. For the object pose I use as parent frame a camera frame that I calculated by doing hand-eye calibration. To create the tf I put as (x, y) the values from the photo and a fixed value for z. For the rotation I convert Euler angles to a quaternion, and my Euler angles are [0, 0, deg of the photo].
2. After constructing the tfs for the objects I publish them with the base of the robot as parent frame, the same base that the client uses for planning.

Thanks for the image
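The two steps above boil down to (1) turning the Euler angles [0, 0, yaw] into a quaternion and (2) composing base→camera with camera→object. A minimal planar sketch of that math, assuming the function names are mine (in a real ROS node, `tf.transformations.quaternion_from_euler` and a tf2 lookup would do this):

```python
import math

def yaw_to_quaternion(yaw_rad):
    """Quaternion (x, y, z, w) for Euler angles (0, 0, yaw), i.e. a pure
    rotation about z, matching the trimmers' single rotation axis."""
    return (0.0, 0.0, math.sin(yaw_rad / 2.0), math.cos(yaw_rad / 2.0))

def compose_planar(base_T_cam, cam_T_obj):
    """Compose two planar poses (x, y, yaw): express the object in the
    robot-base frame given base->camera and camera->object."""
    bx, by, byaw = base_T_cam
    ox, oy, oyaw = cam_T_obj
    # Rotate the object's camera-frame offset by the camera's yaw, then
    # translate; yaws simply add for rotations about a common z axis.
    return (bx + math.cos(byaw) * ox - math.sin(byaw) * oy,
            by + math.sin(byaw) * ox + math.cos(byaw) * oy,
            byaw + oyaw)
```

One sanity check this enables: if the camera frame's yaw relative to the base is nonzero, a pure (x, y) offset in the camera frame lands rotated in the base frame, which is exactly the kind of y-axis error described above when the hand-eye rotation is slightly off.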