Zoom camera calibration in ROS?

How would you tackle the task of zoom camera calibration using standard ROS tools? Both the calibration process itself, and camera_info publication afterwards? I haven't found anything in camera_calibration or camera_info_manager.

If I understand it correctly, you have to model the K matrix as a function of the current zoom level. The dependency of K on focal length is IMO linear (as long as we ignore skew and the other odd terms).

There are several quantities that can be reported by the camera:

• "zoom ratio": then I'd just multiply the f_x and f_y terms of the K matrix by this ratio (assuming the camera was calibrated at ratio 1)
• focal length: I'd simply put the focal length into the matrix
• field of view: I can estimate the sensor width from the calibration, and then invert fov = 2*atan(0.5*width/f) to recover the focal length
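The three cases above reduce to simple arithmetic. A minimal sketch in plain Python (function names are mine, not from any ROS API):

```python
import math

def focal_from_zoom_ratio(f_calibrated, zoom_ratio):
    """Scale the calibrated focal length by the reported zoom ratio
    (assumes the calibration was done at zoom ratio 1)."""
    return f_calibrated * zoom_ratio

def focal_from_fov(sensor_width, fov_rad):
    """Invert fov = 2*atan(0.5*width/f) to recover the focal length.
    sensor_width is in the same units as the desired f (e.g. pixels)."""
    return 0.5 * sensor_width / math.tan(0.5 * fov_rad)

# Example: a 1280 px wide image reporting a 60 degree horizontal FOV
f = focal_from_fov(1280, math.radians(60.0))
```

A quick sanity check is to plug the recovered f back into the FOV formula and confirm you get the reported field of view back.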

This of course ignores radial (and other) distortion, which may also be influenced by the focal length (or so I think).

Then I'd publish the updated camera matrix to the appropriate camera_info topic.
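The K update itself is just scaling two entries of the row-major 3x3 matrix stored in sensor_msgs/CameraInfo.K (indices 0 and 4). A hedged sketch operating on a plain list so it stays ROS-agnostic; in a real node you'd apply this to the CameraInfo message before republishing:

```python
def scale_K(K, zoom_ratio):
    """Return a copy of a row-major 3x3 camera matrix (the layout of
    sensor_msgs/CameraInfo.K) with f_x and f_y scaled by the current
    zoom ratio. The principal point is assumed to stay fixed."""
    K = list(K)
    K[0] *= zoom_ratio  # f_x
    K[4] *= zoom_ratio  # f_y
    return K

K_base = [500.0, 0.0, 320.0,
          0.0, 500.0, 240.0,
          0.0, 0.0, 1.0]
K_zoomed = scale_K(K_base, 2.0)  # f_x = f_y = 1000 at 2x zoom
```

Keeping the principal point fixed is itself an assumption: it holds only if the zoom is centered on the optical axis.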


That seems like a reasonable approach, and the right way to integrate with ROS.

( 2016-04-26 11:59:56 -0500 )

Most of the image pipeline nodes will probably be fine, but you may hit edge cases where a node assumes the camera_info is fixed and only uses the first message instead of subscribing continuously.

( 2016-04-26 12:00:51 -0500 )

Yes, and I'd have to avoid all loadCalibration and saveCalibration calls in the camera info manager (otherwise the "zoomed" matrices would get saved). This is far from optimal, but I'm afraid we can't do better...

( 2016-04-26 12:40:41 -0500 )


How about calibrating at several zoom positions and then validating your single calibration model against the experimental results?

Or, given that you already have calibrations at multiple zoom positions, maybe the in-between calibrations could be linear interpolations of those, and you wouldn't need a model at all.
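The interpolation idea can be sketched with piecewise-linear interpolation per parameter (f_x, f_y, each distortion coefficient) over the calibrated zoom positions. All names and data below are hypothetical:

```python
def interp_param(zoom, positions, values):
    """Piecewise-linear interpolation of one calibration parameter
    over the calibrated zoom positions (assumed sorted ascending).
    Clamps to the end values outside the calibrated range."""
    if zoom <= positions[0]:
        return values[0]
    if zoom >= positions[-1]:
        return values[-1]
    for i in range(1, len(positions)):
        if zoom <= positions[i]:
            t = (zoom - positions[i - 1]) / (positions[i] - positions[i - 1])
            return values[i - 1] + t * (values[i] - values[i - 1])

zoom_positions = [1.0, 2.0, 4.0]          # hypothetical calibrated zooms
fx_calibrated  = [500.0, 1000.0, 2000.0]  # f_x measured at each zoom
fx_at_3x = interp_param(3.0, zoom_positions, fx_calibrated)  # 1500.0
```

Note that linear interpolation in zoom position is only correct if focal length is itself linear in that position; interpolating against the reported zoom ratio may be the safer domain.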

One thing in your favor is that distortion ought to decrease with greater zoom, so even if a model or interpolation scheme is not quite right for distortion, as long as the distortion coefficients are getting smaller, their effects are diminished.

You'll want to make a special node that subscribes to the camera image and then publishes the camera_info with the same timestamp (and the rest of the header), since some nodes subscribe to the Image and CameraInfo through a synchronizing message filter.
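The key point of such a node is that the republished CameraInfo must carry the incoming Image's header verbatim, or synchronizing subscribers will never pair them. A ROS-agnostic sketch using plain dicts in place of sensor_msgs messages (in a real rospy node you'd do this in the image callback before publishing):

```python
import copy

def stamped_camera_info(base_info, image_header):
    """Return a copy of the current (zoom-adjusted) CameraInfo with
    the header (stamp and frame_id) taken from the matching Image
    message, so a time-synchronizing subscriber can pair them."""
    info = copy.deepcopy(base_info)
    info["header"] = dict(image_header)
    return info

base = {"header": {"stamp": 0.0, "frame_id": "camera"},
        "K": [500.0, 0.0, 320.0, 0.0, 500.0, 240.0, 0.0, 0.0, 1.0]}
img_header = {"stamp": 123.456, "frame_id": "camera"}
info = stamped_camera_info(base, img_header)  # stamp now 123.456
```

The deep copy matters: mutating the stored base message in place would race with other callbacks using it.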

You may have to remap the fixed camera_info the camera driver is trying to publish to another topic so the namespaces stay correct; lots of nodes only subscribe to /foo/camera_info if /foo/image_raw is the image topic.

Do you have a camera with an encoder on the zoom, or is the actuated zoom sufficiently repeatable that this ought to work open loop, knowing only the commanded zoom position?



See https://github.com/ros-perception/cam... . I've implemented it both ways. If you have any ideas on how to do it better, please tell me!

( 2016-04-27 09:41:55 -0500 )

Cool! I'll have to try that out. Setting up a software test with an rviz/Camera would be interesting. A real-world test involving checkerboards (probably at least two: a big one for the wide angle and a small one for the zoomed-in telephoto) would be good to do too.

( 2016-04-27 11:23:37 -0500 )