Robotic arm fails to pick object

asked 2021-12-02 03:36:25 -0500 by basest, updated 2022-04-30 13:54:05 -0500 by lucasw

Hello,

So I have the following problem. I have a monocular camera mounted on a robot arm, and I want to find and publish the poses of 6 objects for a pick-and-place operation. The camera always captures the objects from the same position with a fixed z (depth). I have also mapped the pixel coordinates to world coordinates, with (0, 0) somewhere in the top left of the image. To locate the objects I extract their contours and their centroids. Then I find the world coordinates of the centroids, and I move the origin to the optical center of the camera by subtracting (optical center) - (previous origin) from every point. For the orientation I find only the yaw of the trimmers (out of pitch, roll, yaw), because they can rotate only about the z axis. The results of the pose estimation are shown in the image below (the red line is the positive x axis, the green line the positive y axis, and the purple circle the optical center from the camera's calibration). Then I construct the tfs of the objects with respect to the camera's frame, and I publish them with respect to the robot base so that they don't move.
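For illustration, a minimal Python/OpenCV sketch of the kind of detection step described above (not the asker's actual code). The Otsu threshold, the 0.5 mm/px scale, and the optical-center pixel values are assumptions, and minAreaRect only recovers yaw up to a 90-degree ambiguity:

    import cv2
    import numpy as np

    img = cv2.imread("objects.png", cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    mm_per_px = 0.5               # assumed constant pixel scale (see the answer below)
    cx_px, cy_px = 1024.0, 768.0  # optical center from calibration (placeholder values)

    poses = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] == 0:
            continue
        u = m["m10"] / m["m00"]  # contour centroid in pixel coordinates
        v = m["m01"] / m["m00"]
        # Shift the origin from the image corner to the optical center, then scale.
        x_mm = (u - cx_px) * mm_per_px
        y_mm = (v - cy_px) * mm_per_px
        # Rough yaw from the contour's dominant axis (objects rotate only about z).
        (_, _), (_, _), angle_deg = cv2.minAreaRect(c)
        poses.append((x_mm, y_mm, np.deg2rad(angle_deg)))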

THE PROBLEM NOW IS:

To grab an object, we manually move to one of the 6 detected objects and echo its pose and the frame of the tool. Based on that echo, we believe we can pick any other object regardless of its rotation. When an object is translated but not rotated, the robotic arm picks it successfully, but when it is rotated there is an offset, mostly on the y axis. The rotation of the robotic arm is always correct (matching what we echo), regardless of rotation or translation. The offset became smaller when I published the frames with respect to the camera's optical center (before that I was publishing them with (0, 0) at the top left corner of the image, which was wrong). I believe my origin axes are now correct, and I can't figure out why it isn't working, so I would be grateful if someone could help.

My Image


Comments

Please edit your description and upload the image using the icon that looks like a terminal. This way the image will persist with the question after it expires from the pasteboard site.

Mike Scheutzow (2021-12-02 07:42:39 -0500)
  1. What transform frame are you using for the target object's pose?
  2. What transform frame is MoveIt using for goals?

Mike Scheutzow (2021-12-02 07:52:02 -0500)

I uploaded the image and inserted it.

Mike Scheutzow (2021-12-02 16:07:57 -0500)

1. For the object pose I use as parent frame a camera frame that I calculated by doing hand-eye calibration. To create the tf I use the (x, y) values from the photo and a fixed value for z. For the rotation I convert Euler angles to a quaternion, where my Euler angles are [0, 0, yaw from the photo].
2. After constructing the tfs for the objects, I publish them with the robot base as parent frame, the same base that the client uses for planning.

Thanks for the image.

basest (2021-12-03 02:28:16 -0500)
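For reference, a sketch of the tf workflow described in this comment, assuming ROS 1 with tf2. The frame names ("camera_optical_frame", "base_link"), the fixed depth, and the helper object_transform are placeholders, not the asker's actual setup:

    import rospy
    import tf2_ros
    import tf2_geometry_msgs  # registers PoseStamped support for tf_buffer.transform()
    from geometry_msgs.msg import PoseStamped, TransformStamped
    from tf.transformations import quaternion_from_euler

    rospy.init_node("object_tf_publisher")
    tf_buffer = tf2_ros.Buffer()
    tf2_ros.TransformListener(tf_buffer)

    def object_transform(obj_id, x, y, yaw, z=0.40):
        # Object pose expressed in the hand-eye calibrated camera frame.
        p = PoseStamped()
        p.header.frame_id = "camera_optical_frame"
        p.header.stamp = rospy.Time(0)
        p.pose.position.x, p.pose.position.y, p.pose.position.z = x, y, z
        qx, qy, qz, qw = quaternion_from_euler(0.0, 0.0, yaw)  # rotation about z only
        p.pose.orientation.x, p.pose.orientation.y = qx, qy
        p.pose.orientation.z, p.pose.orientation.w = qz, qw
        # Re-express the pose in the robot base frame so it stays fixed
        # while the arm (and therefore the camera) moves.
        p_base = tf_buffer.transform(p, "base_link", rospy.Duration(1.0))
        t = TransformStamped()
        t.header.stamp = rospy.Time.now()
        t.header.frame_id = "base_link"
        t.child_frame_id = "object_%d" % obj_id
        t.transform.translation.x = p_base.pose.position.x
        t.transform.translation.y = p_base.pose.position.y
        t.transform.translation.z = p_base.pose.position.z
        t.transform.rotation = p_base.pose.orientation
        return t

    # poses: (x, y, yaw) tuples from the detection step (placeholder value here).
    # Broadcast all objects in a single call: a latched /tf_static message
    # replaces the previous one, so sending them one at a time can drop frames.
    poses = [(0.05, -0.02, 0.3)]
    tf2_ros.StaticTransformBroadcaster().sendTransform(
        [object_transform(i, x, y, yaw) for i, (x, y, yaw) in enumerate(poses)])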

1 Answer


answered 2021-12-03 06:36:07 -0500 by Mike Scheutzow, updated 2021-12-03 06:42:59 -0500

The most likely source of the error is that you seem to be assuming every image pixel covers the same number of "table-top units". For most camera lenses, that's not a good assumption.

Separately, given the way you are doing the calculation, if the table top is not close to z=0 in the planning frame, that will also introduce more error in x, y.
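One way to avoid the constant-scale assumption is to push each pixel through the calibrated camera model and intersect its ray with the table plane. A minimal sketch, assuming a pinhole model; the intrinsics, distortion coefficients, and camera-to-table distance below are placeholders:

    import cv2
    import numpy as np

    # Placeholder intrinsics; use the values from the camera calibration.
    K = np.array([[1485.0,    0.0, 1024.0],
                  [   0.0, 1484.0,  768.0],
                  [   0.0,    0.0,    1.0]])
    dist = np.zeros(5)   # the real distortion coefficients go here
    z_table = 0.40       # camera-to-table distance along the optical axis, metres

    def pixel_to_camera_xy(u, v):
        # undistortPoints returns normalized image coordinates (x/z, y/z),
        # so scaling by the known depth intersects the pixel's ray with the
        # z = z_table plane (assumes the camera looks straight down at it).
        pt = cv2.undistortPoints(np.array([[[u, v]]], np.float64), K, dist)
        x_n, y_n = pt[0, 0]
        return x_n * z_table, y_n * z_table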


Comments

Yes, indeed every pixel covers 0.5 mm², but I'm using a Basler camera and fx (1485) is very close to fy (1484). To compute (x, y) I detect circles with known world points and find their centers in pixel coordinates. Then I use solvePnP to get the R/t matrices for the camera's (fixed) position. The world points of the circle centers all have the same z. What do you mean by "if the table top is not close to z = 0"?

basest (2021-12-03 07:41:41 -0500)
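For context, a sketch of the calibration step described in this comment; the circle coordinates below are placeholder values, not the real measurements, and K/dist stand in for the calibrated intrinsics:

    import cv2
    import numpy as np

    K = np.array([[1485.0,    0.0, 1024.0],
                  [   0.0, 1484.0,  768.0],
                  [   0.0,    0.0,    1.0]])
    dist = np.zeros(5)

    # Known circle centers on the table (all with the same z) and their
    # detected pixel centers, in matching order.
    object_pts = np.array([[0.0, 0.0, 0.0],
                           [0.1, 0.0, 0.0],
                           [0.1, 0.1, 0.0],
                           [0.0, 0.1, 0.0]], np.float64)
    image_pts = np.array([[312.4, 208.1],
                          [955.7, 210.3],
                          [953.2, 854.0],
                          [309.9, 851.6]], np.float64)

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    # A world point X_w maps into the camera frame as X_c = R @ X_w + tvec.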

My advice is to check the accuracy of your conversion from optical x,y to world x,y, particularly at all 4 edges of the camera image.

Mike Scheutzow (2021-12-03 19:43:52 -0500)
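One concrete way to run this check, with placeholder marker coordinates and a stub conversion standing in for the pipeline's actual pixel-to-world function:

    def pixel_to_world(u, v):
        # Stub: substitute the pipeline's actual conversion here.
        mm_per_px = 0.5
        return (u - 1024.0) * mm_per_px / 1000.0, (v - 768.0) * mm_per_px / 1000.0

    # Markers with measured world (x, y), placed near each image edge
    # (placeholder numbers). Large errors at the edges point to lens
    # distortion or a wrong pixel scale.
    markers = [
        ((30, 540),    (-0.26,  0.00)),   # left edge
        ((2010, 540),  ( 0.26,  0.00)),   # right edge
        ((1020, 25),   ( 0.00, -0.19)),   # top edge
        ((1020, 1060), ( 0.00,  0.19)),   # bottom edge
    ]
    for (u, v), (x_ref, y_ref) in markers:
        x, y = pixel_to_world(u, v)
        print("pixel (%d, %d): error = (%+.4f, %+.4f) m" % (u, v, x - x_ref, y - y_ref))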
