I can help get you going in calibrating your UR10 and Asus camera. Please confirm or clarify the following:

1. The camera is fixed in the workcell, overlooking the robot.
2. The calibration target is also fixed in the workcell, not held by the robot, and is fully visible to the camera.

If this is true, then the calibration only needs to compute the transform between the camera and the target. The most basic code for doing this is the target_locator package. Its node continuously computes the transform between a calibrated camera and a target. It is straightforward to understand, especially compared to the more generic calibration node in industrial_extrinsic_cal. In either case, please use a "modified circle target" for best results; this type of target removes the orientation ambiguity of typical grid targets.

If, on the other hand, you fix the target to the robot but don't know the exact transform from the tool point to the target's origin (usually the case), then you will need to use the industrial_extrinsic_cal node. This requires significantly more effort to set up. Your URDF will need two sets of mutable transforms, and your launch files have to include a mutable joint state publisher and a combined joint state publisher that merges the robot and mutable joints into a single joint_states topic. For this to work, your robot's joint state publisher needs to publish on a topic other than /joint_states. You may send further inquiries to clewis@swri.org and I'll be happy to get you up and running.
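To give a feel for the launch-file plumbing described above, here is a minimal sketch. The mutable publisher's node and package names follow industrial_extrinsic_cal, but the exact node type, topic names, and file paths here are assumptions; check that package's example launch files for the real ones:

```xml
<launch>
  <!-- Your UR10 driver should be remapped so it does NOT publish on /joint_states
       (e.g. publish on robot/joint_states instead). -->

  <!-- Mutable joint state publisher (node type assumed; see industrial_extrinsic_cal examples) -->
  <node name="mutable_joint_state_publisher" pkg="industrial_extrinsic_cal"
        type="mutable_joint_state_publisher" output="screen"/>

  <!-- Combined publisher: merges robot and mutable joints into one /joint_states topic.
       source_list is a standard joint_state_publisher parameter; the topic names are placeholders. -->
  <node name="combined_joint_state_publisher" pkg="joint_state_publisher"
        type="joint_state_publisher">
    <rosparam param="source_list">[robot/joint_states, mutable_joint_states]</rosparam>
  </node>

  <!-- robot_state_publisher consumes the merged /joint_states and publishes TF -->
  <node name="robot_state_publisher" pkg="robot_state_publisher"
        type="robot_state_publisher"/>
</launch>
```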
