
Drclewis's profile - activity

2018-08-22 05:05:38 -0500 received badge  Teacher (source)
2016-06-14 07:51:56 -0500 answered a question industrial_extrinsic_cal. Instructions for camera-robot calibration

I can help get you going in calibrating your UR10 and Asus camera. Please confirm or clarify the following.

  1. The camera is fixed in the workcell overlooking the robot.
  2. The calibration target is also fixed in the workcell and not held by the robot but is fully visible to the camera.

If this is true, then the calibration need only compute the transform between the camera and the target. The most basic code for doing this is the target_locator package. This node continuously computes the transform between a calibrated camera and a target. It is pretty straightforward to understand, especially compared to the more generic calibration node in industrial_extrinsic_cal. In either case, please use a "modified circle target" for best results; this type of target removes the ambiguity in the orientation of typical grid targets.
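
As a rough sketch, once target_locator's result is available on tf you could read it back with something like the following. The frame names here are placeholders and will differ in your setup; depending on the version, the package may instead expose the result through a service, in which case use that instead.

```python
#!/usr/bin/env python
# Read back the camera-to-target transform once it is available on tf.
# "camera_optical_frame" and "target_frame" are placeholder frame names.
import rospy
import tf

rospy.init_node("camera_target_lookup")
listener = tf.TransformListener()
listener.waitForTransform("camera_optical_frame", "target_frame",
                          rospy.Time(0), rospy.Duration(10.0))
trans, rot = listener.lookupTransform("camera_optical_frame",
                                      "target_frame", rospy.Time(0))
print("translation (m):", trans)
print("rotation (quaternion):", rot)
```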

If, on the other hand, you fix the target to the robot but don't know the exact transform from the tool point to the target's origin (usually the case), then you will need to use the industrial_extrinsic_cal node. This requires significantly more effort to set up. Your URDF will need two sets of mutable transforms. Your launch files have to include the robot's joint state publisher, a mutable joint state publisher, and a combined joint state publisher that merges the robot and mutable joints into a single joint_states topic. For this to work, your robot's joint state publisher needs to publish on a topic other than /joint_states. You may send further inquiries to clewis@swri.org and I'll be happy to get you up and running.
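
For illustration only, the "combine into a single joint_states topic" step amounts to something like the rospy sketch below. The /robot/joint_states and /mutable_joint_states topic names are placeholders; the actual industrial_extrinsic_cal launch files provide their own nodes for this.

```python
#!/usr/bin/env python
# Sketch of a combined joint state publisher: merge the robot's joint
# states and the mutable (calibration) joint states onto /joint_states.
# Topic names below are placeholders for whatever your launch files use.
import rospy
from sensor_msgs.msg import JointState

latest = {}  # joint name -> most recently received position

def joint_cb(msg):
    for name, pos in zip(msg.name, msg.position):
        latest[name] = pos

rospy.init_node("combined_joint_state_publisher")
pub = rospy.Publisher("/joint_states", JointState, queue_size=10)
rospy.Subscriber("/robot/joint_states", JointState, joint_cb)
rospy.Subscriber("/mutable_joint_states", JointState, joint_cb)

rate = rospy.Rate(50)
while not rospy.is_shutdown():
    out = JointState()
    out.header.stamp = rospy.Time.now()
    out.name = sorted(latest.keys())
    out.position = [latest[n] for n in out.name]
    pub.publish(out)
    rate.sleep()
```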

2016-05-11 13:26:50 -0500 answered a question Where does the offset/rotation come from with Industrial Calibration Extrinsic Xtion to Arm Calibration?

Several problems are apparent from your question and pictures. First, the target is not found correctly. The ros_camera_observer draws a line from the first point in the list to the end of the first row. There is clearly a line from the upper left to the 7th point, so it found the target in the incorrect orientation. To fix this error, it is best to use the modified circle grid target, where the origin point is a little bit larger than all the rest. Note that the origin point is NOT the first point in the point list.

Second, there is a common misconception that when the residual error is low the results must be good. It is true that when the residual error is high the results are not good, but the converse does not necessarily hold. One cannot perform camera-to-robot calibration with only one image: the residual will be low, but the covariance is very large. With a 5x7 target you get 35 observed points, each contributing two image-plane equations, so 70 equations. With the target's pose and the camera's pose both unknown, there are only 12 unknowns. 70 equations with only 12 unknowns sounds great, but in this case it is insufficient because the equations are dependent. Equal translations and planar rotations of both the target and the camera result in exactly the same set of observations. You have to have a diversity of viewpoints for this procedure to work. Otherwise the algorithm will minimize the cost function and return one of a very large family of equally minimal solutions. I don't know why neither the camera's nor the target's location was updated; if the mutable joint state node is active and the transform interface calls its update function, they should both move.
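
To see that ambiguity concretely, here is a small numpy sketch (all poses and numbers are illustrative, not taken from your setup): applying one and the same rigid motion to both the camera and the target leaves every projected pixel unchanged, so a single image cannot separate the two poses.

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def make_pose(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def project(T_world_cam, T_world_target, pts_target, f=500.0):
    """Ideal pinhole projection of target-frame points into the camera."""
    T_cam_target = np.linalg.inv(T_world_cam) @ T_world_target
    p = (T_cam_target[:3, :3] @ pts_target.T).T + T_cam_target[:3, 3]
    return f * p[:, :2] / p[:, 2:3]

# A 5x7 grid of target points in the target frame (z = 0 plane)
gx, gy = np.meshgrid(np.arange(7) * 0.03, np.arange(5) * 0.03)
pts = np.stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)], axis=1)

T_world_cam = make_pose(rot_z(0.2), np.array([0.0, 0.0, -1.0]))
T_world_target = make_pose(np.eye(3), np.array([-0.1, -0.05, 0.0]))

# Apply the SAME arbitrary rigid motion to both the camera and the target
G = make_pose(rot_z(0.7), np.array([0.3, -0.2, 0.5]))

uv_before = project(T_world_cam, T_world_target, pts)
uv_after = project(G @ T_world_cam, G @ T_world_target, pts)

# The observations are identical, so one image cannot pin down both poses
print("max pixel difference:", np.abs(uv_before - uv_after).max())
```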

2015-09-04 09:16:28 -0500 received badge  Famous Question (source)
2015-09-04 09:16:28 -0500 received badge  Notable Question (source)
2015-05-28 04:34:31 -0500 received badge  Famous Question (source)
2015-02-02 16:40:51 -0500 received badge  Notable Question (source)
2014-12-19 04:35:26 -0500 received badge  Popular Question (source)
2014-12-16 16:38:12 -0500 asked a question camera calibration accuracy

I had hoped that by collecting images of a calibration target throughout the field of view, at several prescribed distances, I would get very consistent estimates for fx, fy, cx and cy on an ASUS Xtion Pro. However, the values I get for successive data sets differ by as much as 30 pixels for the focal length and 10 pixels for the center point. The per-pixel residual is consistently on the order of 0.05 pixels per observed grid point. I realize this is only about a 5% and 3% parameter variation, but this error dominates the absolute triangulation accuracy of a photogrammetric system. Has anyone else found this degree of variability? Can anyone suggest a target size and a set of images that ensures better estimation accuracy?
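
For context, here is the rough arithmetic behind my concern, assuming a nominal focal length of about 600 pixels (roughly consistent with the 5% figure above) and a simple pinhole/triangulation model.

```python
# Back-of-the-envelope propagation of the observed intrinsic variability
# to 3D accuracy, using an assumed nominal focal length of ~600 pixels.
f, df = 600.0, 30.0   # nominal focal length and observed spread (pixels)
dc = 10.0             # observed spread of the principal point (pixels)
Z = 2.0               # working distance (metres)

# Triangulated depth scales with f (Z = f*b/d), so a relative error in f
# becomes roughly the same relative error in depth.
print("relative focal length error: %.1f %%" % (100.0 * df / f))
print("depth error at %.1f m: %.0f mm" % (Z, 1000.0 * Z * df / f))

# A principal-point error tilts the back-projected ray, so the lateral
# error grows linearly with distance.
print("lateral error at %.1f m: %.0f mm" % (Z, 1000.0 * Z * dc / f))
```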

2014-04-06 17:53:10 -0500 received badge  Popular Question (source)
2014-02-24 23:03:24 -0500 received badge  Taxonomist
2014-01-31 03:05:10 -0500 asked a question Camera Calibration: Why skew CirclesGrid Target?

I understand why circles can be located with more sub-pixel precision than checkerboard grid intersections. However, when a circle is skewed, the center of its elliptical projection is offset from the projection of the circle's center in the image plane. So why does ROS's calibration GUI try to make sure the user presents sufficiently skewed images? I'm not sure why one would want to skew a checkerboard either, but for a circles grid this seems wrong.
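
To make the offset I am describing concrete, here is a small numpy sketch (all numbers are illustrative): it projects a tilted circle through an ideal pinhole model, fits the exact conic through the projected boundary, and compares the ellipse center with the projection of the circle's center.

```python
import numpy as np

# Project a tilted ("skewed") circle through an ideal pinhole camera and
# compare the center of the resulting ellipse with the projection of the
# circle's 3D center. All values below are illustrative.
f = 500.0                              # focal length (pixels)
r = 0.05                               # circle radius (metres)
tilt = np.deg2rad(45.0)                # tilt of the target plane
center_3d = np.array([0.1, 0.1, 1.0])  # circle center in the camera frame

def project(p):
    """Ideal pinhole projection (no distortion)."""
    p = np.atleast_2d(p)
    return f * p[:, :2] / p[:, 2:3]

# Sample the circle boundary in the tilted target plane
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
boundary = np.stack([r * np.cos(t),
                     r * np.sin(t) * np.cos(tilt),
                     r * np.sin(t) * np.sin(tilt)], axis=1) + center_3d
uv = project(boundary)

# Fit the exact conic a*u^2 + b*u*v + c*v^2 + d*u + e*v + g = 0 through the
# projected boundary and recover the ellipse center from its coefficients.
u, v = uv[:, 0], uv[:, 1]
A = np.stack([u**2, u*v, v**2, u, v, np.ones_like(u)], axis=1)
a, b, c, d, e, g = np.linalg.svd(A)[2][-1]
ellipse_center = np.linalg.solve([[2*a, b], [b, 2*c]], [-d, -e])

print("ellipse center in image: ", ellipse_center)
print("projected circle center: ", project(center_3d)[0])
print("offset (pixels):         ", ellipse_center - project(center_3d)[0])
```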

2013-03-26 11:21:02 -0500 received badge  Famous Question (source)
2012-11-20 04:19:45 -0500 received badge  Popular Question (source)
2012-11-20 04:19:45 -0500 received badge  Notable Question (source)
2012-11-20 04:19:45 -0500 received badge  Famous Question (source)
2012-10-30 16:36:19 -0500 received badge  Notable Question (source)
2012-10-12 06:25:20 -0500 received badge  Popular Question (source)
2012-09-21 09:31:59 -0500 asked a question EPICS: Opensource publish-subscribe 4 PLC, data-acq, and motion control

I wanted to know if anyone in the ROS community has interacted with EPICS. EPICS appears to me to be remarkably similar to ROS, but with a different scope. However, they include open-source drivers for a wide range of hardware, including programmable logic controllers, motion controllers, and data acquisition systems. Their code originally ran only on VxWorks, but now runs on Windows, Linux, etc. (http://www.aps.anl.gov/epics). This could be an excellent resource for ROS-Industrial. One could imagine creating a bridge, or simply converting their code to ROS nodes.

2012-04-26 07:21:13 -0500 received badge  Student (source)
2012-04-26 04:31:30 -0500 asked a question tfListener::setExtrapolationLimit, Does this really allow forward extrapolation?

My motion platform moves at a fairly constant rate. I receive data from the Kinect much faster than I do from my position sensor, and would like lookupTransform to extrapolate in order to determine the approximate transform between the fixed Kinect and the moving platform.

Does setExtrapolationLimit() do this? I see that it sets max_extrapolation_distance_, but I couldn't find any place where this variable is used in a meaningful way.
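
If setExtrapolationLimit() does not do this, the fallback I have in mind looks roughly like the sketch below: sample the transform at two recent times and extrapolate the translation forward under a constant-velocity assumption. The frame names are placeholders, and the rotation is simply held at its latest value.

```python
#!/usr/bin/env python
# Constant-velocity fallback: sample the transform at two recent times and
# extrapolate the translation a little way past the newest sample.
# "kinect_link" and "platform" are placeholder frame names; the rotation is
# simply held at its most recent value for brevity.
import rospy
import tf

rospy.init_node("extrapolate_platform_pose")
listener = tf.TransformListener()
listener.waitForTransform("kinect_link", "platform",
                          rospy.Time(0), rospy.Duration(5.0))

dt = 0.1     # spacing between the two samples (seconds)
lead = 0.05  # how far past the newest sample to extrapolate (seconds)

t1 = listener.getLatestCommonTime("kinect_link", "platform")
t0 = t1 - rospy.Duration(dt)
p0, _ = listener.lookupTransform("kinect_link", "platform", t0)
p1, q1 = listener.lookupTransform("kinect_link", "platform", t1)

vel = [(b - a) / dt for a, b in zip(p0, p1)]
pred = [b + v * lead for b, v in zip(p1, vel)]
print("extrapolated translation:", pred)
print("rotation (held at latest):", q1)
```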