
extrinsic calibration of non-overlapping cameras

asked 2021-07-04 04:21:37 -0500

Ifx13

Hello everyone,

Does anybody have any idea how to calculate the relative transformation between the cameras of a multi-camera rig with no overlap?

Thanks in advance.


Comments

It is trivial to do so using any standard toolbox for the calibration of overlapping cameras. All you need is one more camera, not part of the rig, placed in such a way as to create overlap with both. Then simply chain the transforms. The method, including what to do if one extra camera isn't enough, is described in: http://www.diva-portal.org/smash/get/...

This approach is simpler and more accurate than using a mirror. Further, because the extra camera is only needed during the calibration, you can use a high-end one and achieve very high accuracy. The paper also includes a simulator to show the expected accuracy.
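
For the final chaining step, here is a minimal Python sketch (assuming the two pairwise extrinsics have already been estimated with any standard overlapping-camera toolbox; the 4x4 matrices below are placeholders, not real calibration output):

    import numpy as np

    def inv_se3(T):
        # Invert a rigid 4x4 transform: [R t]^-1 = [R^T, -R^T t].
        R, t = T[:3, :3], T[:3, 3]
        Ti = np.eye(4)
        Ti[:3, :3] = R.T
        Ti[:3, 3] = -R.T @ t
        return Ti

    # Placeholder pairwise extrinsics from calibrating each rig camera
    # against the extra camera cx:
    #   T_cx_c1: pose of rig camera c1 in the frame of cx
    #   T_cx_c2: pose of rig camera c2 in the frame of cx
    T_cx_c1 = np.eye(4); T_cx_c1[:3, 3] = [0.2, 0.0, 0.0]
    T_cx_c2 = np.eye(4); T_cx_c2[:3, 3] = [-0.2, 0.0, 0.0]

    # Chain through the extra camera: c1 -> cx -> c2.
    T_c1_c2 = inv_se3(T_cx_c1) @ T_cx_c2
    print(T_c1_c2)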

As with all camera calibration, to ensure a good result:

  1. Light the scene well, preferably with sunlight.
  2. Use tripods for the pattern and the camera to ensure they are stationary.
  3. Cover the entire sensor with roughly evenly sampled observations.
  4. Use ...
midjji ( 2022-03-03 02:54:41 -0500 )

2 Answers


answered 2021-07-05 02:59:44 -0500

gvdhoorn

I would suggest taking a look at ethz-asl/kalibr.

From the readme:

Kalibr is a toolbox that solves the following calibration problems:

  • Multiple camera calibration: intrinsic and extrinsic calibration of camera systems with non-globally shared overlapping fields of view

Comments

I've tried this. The thing is that the overlap is practically zero, and when they say the input is "non-globally shared overlapping fields of view" they mean that not every camera in the rig needs to overlap with every other one, but each pair of cameras still needs some overlap. Also, this package can work with different targets; one of the supported targets is the aprilgrid. This target does not need to be "seen" completely by both cameras, it can be partially viewed, but at least one marker of the grid must be seen by both cameras. With my current camera configuration this is not possible: not a single marker is shared between the images, they're cut in half. Please correct me if I misunderstood something.

Ifx13 ( 2021-07-06 01:33:10 -0500 )

answered 2021-07-04 10:13:06 -0500

404RobotNotFound

First, you can try to just measure the transform between them manually, or measure each camera relative to a common point; that can get you close enough. Otherwise, make a setup with two fiducials whose exact relative transform you know, then detect one of the fiducials in each image; from that you can get the relative transform between the cameras. Make sure both cameras are intrinsically calibrated if you decide to go with the second option. Without any overlap in the camera images themselves, that might be the most reliable way.
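
To make the second option more concrete, here is a rough sketch of recovering a camera-to-fiducial transform with OpenCV's aruco module. The intrinsics, marker size, and dictionary are made-up placeholders, and it assumes the classic pre-4.7 cv2.aruco function API (newer OpenCV versions use cv2.aruco.ArucoDetector instead):

    import cv2  # requires opencv-contrib-python (aruco module)
    import numpy as np

    # Made-up intrinsics; use the values from your own intrinsic calibration.
    K = np.array([[800.0, 0.0, 640.0],
                  [0.0, 800.0, 360.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)   # assume negligible lens distortion for this sketch
    MARKER_SIZE = 0.10   # marker side length in metres (assumed)

    # 3D marker corners in the marker's own frame (z = 0), in the same
    # order as detectMarkers reports the image corners.
    h = MARKER_SIZE / 2.0
    obj_pts = np.array([[-h, h, 0], [h, h, 0],
                        [h, -h, 0], [-h, -h, 0]], dtype=np.float32)

    def camera_T_fiducial(image):
        # Detect one marker and return the 4x4 transform that maps points
        # in the marker frame into the camera frame.
        dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)
        corners, ids, _ = cv2.aruco.detectMarkers(image, dictionary)
        if ids is None:
            raise RuntimeError("no marker detected")
        _, rvec, tvec = cv2.solvePnP(obj_pts, corners[0].reshape(4, 2), K, dist)
        R, _ = cv2.Rodrigues(rvec)
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = tvec.ravel()
        return T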


Comments

Hello, can you provide some more information on how I should approach the second method that you described?

When you said that I need to know the exact transformation between the two fiducials, I guess you are referring to their relative transformation. After that, how should I detect those markers, and what calculations are necessary to end up with the relative transformation of the cameras? There is a gap in my understanding there.

Ifx13 ( 2021-07-06 01:41:13 -0500 )

So in the second case you have 4 objects to know the transforms between: camera 1 (c1), camera 2 (c2), fiducial 1 (f1), and fiducial 2 (f2). If you set up the fiducials so that each camera can see one of them, then you measure (hopefully with decent accuracy) the transform f1->f2. Then you run fiducial detection on both cameras, finding c1->f1 and c2->f2. From there you can combine the transforms by following the chain c1->f1->f2->c2 (the last hop is the inverse of c2->f2), and that is how you would calculate the transform between the cameras.

On a side note, this seems similar to how I would assume the aprilgrid works: if it has multiple markers in a known grid pattern, then as long as you see one of the markers you should know your relation to every other marker on the grid.
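
In matrix terms, a small sketch of that chain (assuming all poses are 4x4 homogeneous transforms, e.g. from a fiducial detector; the numbers are placeholders):

    import numpy as np

    # Placeholder poses (in practice: two from detection, one measured).
    T_c1_f1 = np.eye(4); T_c1_f1[:3, 3] = [0.0, 0.0, 1.0]  # f1 seen by camera 1
    T_c2_f2 = np.eye(4); T_c2_f2[:3, 3] = [0.0, 0.0, 0.8]  # f2 seen by camera 2
    T_f1_f2 = np.eye(4); T_f1_f2[:3, 3] = [0.5, 0.0, 0.0]  # measured f1 -> f2

    # c1 -> f1 -> f2 -> c2: the last hop is the inverse of c2 -> f2.
    T_c1_c2 = T_c1_f1 @ T_f1_f2 @ np.linalg.inv(T_c2_f2)
    print(T_c1_c2)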

404RobotNotFound ( 2021-07-06 05:01:50 -0500 )

This really sounds interesting; I will give it a try and let you know how it goes!

Ifx13 ( 2021-07-07 10:08:30 -0500 )
