
Where does the offset/rotation come from with Industrial Calibration Extrinsic Xtion to Arm Calibration?

asked 2016-05-11 11:55:06 -0500

fivef

I calibrated our Xtion to the arm of our robot with the following parameters:

  • target: CircleGrid5x7
  • cost_type: LinkCameraCircleTargetReprjErrorPK
  • transform_interface: ros_camera_housing_cti

Only a single static scene was used.

Outcome:

cost_per_observation: 0.0259

Recognized target:

[images of the detected target omitted]

Nothing was moved after calibration!

The target_frame is not perfectly oriented! What could be the source of this problem? Shouldn't everything be aligned perfectly with a cost_per_observation this low? As I understand it, the result should be exact even if the camera's intrinsic calibration is not correct?


1 Answer


answered 2016-05-11 13:26:50 -0500

Drclewis

Several problems are apparent from your question and pictures. First, the target was detected in the wrong orientation. The ros_camera_observer draws a line from the first point in the list to the last point of the first row, and in your image that line clearly runs from the upper left to the 7th point, so the points were ordered incorrectly. A plain symmetric circle grid looks identical when rotated 180 degrees, so the detector cannot distinguish the two orientations. The best fix is to use the modified circle-grid target in which the origin point is slightly larger than all the rest. Note that the origin point is NOT the first point in the point list.
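
To see the ambiguity concretely, here is a minimal Python sketch using OpenCV's circle-grid detector (which, as far as I know, is what ros_camera_observer builds on; the image filename is a hypothetical stand-in). It draws the same first-row line, which reveals which of the two symmetric orientations the detector picked:

    import cv2
    import numpy as np

    # A plain symmetric circle grid looks the same rotated 180 degrees, so
    # the detector may return the point list starting from either end.
    img = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
    pattern_size = (7, 5)  # points per row, points per column

    found, centers = cv2.findCirclesGrid(img, pattern_size,
                                         flags=cv2.CALIB_CB_SYMMETRIC_GRID)
    if found:
        vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
        cv2.drawChessboardCorners(vis, pattern_size, centers, found)
        # Line from the first detected center to the end of the first row:
        # this is the debug line described above, and it flips when the
        # detector chooses the other orientation.
        p0 = tuple(int(v) for v in centers[0, 0])
        p6 = tuple(int(v) for v in centers[pattern_size[0] - 1, 0])
        cv2.line(vis, p0, p6, (0, 0, 255), 2)
        cv2.imwrite("detected.png", vis)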

Second, there is a common misconception that a low residual error implies a good result. It is true that when the residual error is high the results are not good, but the converse does not hold. One cannot perform camera-to-robot calibration with only one image: the residual will be low, but the covariance is very large. A 5x7 target gives 70 equations (35 circle centers, each contributing one equation in u and one in v). With the target's pose and the camera's pose unknown, there are only 12 unknowns. 70 equations for 12 unknowns sounds great, but in this case it is insufficient because the equations are dependent: equal translations, and equal planar rotations, of both the target and the camera produce exactly the same set of observations. You need a diversity of viewpoints for this procedure to work; otherwise the algorithm simply minimizes the cost function to one of a very large family of equally minimal solutions.

I don't know why neither the camera's nor the target's location was updated. If the mutable joint state node is active and the transform interface calls its update function, they should both move.
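
The degeneracy is easy to demonstrate numerically. Below is a small numpy sketch (made-up intrinsics, rotations held at identity for brevity): translating the camera and the target by the same world offset leaves every projected circle center unchanged, so a single static scene cannot pin down the camera pose.

    import numpy as np

    # Simple pinhole projection of points expressed in the camera frame.
    def project(pts, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
        return np.column_stack([fx * pts[:, 0] / pts[:, 2] + cx,
                                fy * pts[:, 1] / pts[:, 2] + cy])

    # 5x7 circle-grid points on the target's z = 0 plane, 30 mm spacing.
    xs, ys = np.meshgrid(np.arange(7) * 0.03, np.arange(5) * 0.03)
    target_pts = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(35)])

    def observe(cam_pos, target_pos):
        # With identity rotations, camera-frame points are just an offset.
        return project(target_pts + target_pos - cam_pos)

    cam_pos = np.array([0.0, 0.0, 0.0])
    target_pos = np.array([0.1, 0.05, 1.0])
    shift = np.array([0.4, -0.2, 0.3])  # move camera AND target together

    same = np.allclose(observe(cam_pos, target_pos),
                       observe(cam_pos + shift, target_pos + shift))
    print(same)  # True: identical observations, so the pose is unobservable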

