Stereo calibration, the extrinsic parameters to tf?

asked 2016-01-18 03:20:40 -0500 by Ariel (updated 2016-01-18 06:41:07 -0500)

Hello,

I have two identical RGB cameras fixed to a rigid structure. They are synchronized by their own node. Each camera is rotated, one clockwise and one counter-clockwise (due to where the USB connector is mounted on the camera's PCB). Therefore, I rotate the images first, then publish them and use them for calibration and stereo_image_proc.

I've done the stereo calibration and managed to get the stereo_image_proc node working with help from this answer (to my own earlier question; I wasn't sure whether to open a new question, but since this is a different topic I did).

I have successfully calibrated the cameras and obtained the following left.yaml and right.yaml:

left.yaml

image_width: 512
image_height: 512
camera_name: stereo/left
camera_matrix:
  rows: 3
  cols: 3
  data: [2344.331957, 0, 337.706086, 0, 2357.396758, 82.47229799999999, 0, 0, 1]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [-0.6336689999999999, 3.019293, 0.013087, 0.008579, 0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [0.9948509999999999, -0.002312, 0.101323, 0.002147, 0.999996, 0.001734, -0.101327, -0.001507, 0.994852]
projection_matrix:
  rows: 3
  cols: 4
  data: [4243.110203, 0, 70.32520700000001, 0, 0, 4243.110203, 75.543232, 0, 0, 0, 1, 0]

right.yaml

image_width: 512
image_height: 512
camera_name: stereo/right
camera_matrix:
  rows: 3
  cols: 3
  data: [2346.048386, 0, 312.031898, 0, 2349.95312, 70.149203, 0, 0, 1]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [-0.604228, 1.382571, 0.015306, 0.004007, 0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [0.993127, -0.003312, 0.116991, 0.003502, 0.9999929999999999, -0.00142, -0.116986, 0.00182, 0.9931319999999999]
projection_matrix:
  rows: 3
  cols: 4
  data: [4243.110203, 0, 70.32520700000001, -4319.654554, 0, 4243.110203, 75.543232, 0, 0, 0, 1, 0]

So with {left, right}/{image_raw, camera_info} I can obtain the rectified images and the disparity map (not so good at the moment, hence the other question), but those only involve the intrinsic parameters.

What about the extrinsic parameters? How do I extract them from camera_info and publish them to tf? The stereo calibration tutorial does not mention any of this. Is it because I have to do something else, such as use the industrial_extrinsic_cal node?

Thank you for the help.

EDIT 1: The extrinsic parameters I mean are the rotation and translation from one camera to the other, as I want to do "pixel" triangulation (e.g. find the x,y,z position, relative to the left camera, of a pixel (u,v) in the left image, like stereo_image_proc does with the point cloud). So I need the relative pose between the two cameras within the stereo pair, since I'll know the transformation between base_link and the left camera on my robot.


Comments

And please add whether you are interested in the extrinsic calibration with respect to some world frame, or between the two cameras.

Dimitri Schachmann (2016-01-18 06:13:51 -0500)

1 Answer


answered 2016-01-18 06:12:30 -0500 by Dimitri Schachmann

Extrinsic with respect to what? sensor_msgs::CameraInfo does not contain any information about the extrinsic calibration (i.e. the location with respect to some world frame) of the stereo pair. In ROS this information is stored only in tf.

When you publish your camera_info, it is up to you to also publish the corresponding tf information (if you need it). Note that for stereo cameras one usually needs only the reference frame of the left camera, which is then assumed for the whole stereo pair; that's how stereo_image_proc expects it, for example.
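For example, if you already know (or have measured) where the left camera sits relative to base_link, one common way to get that into tf is the stock static_transform_publisher from the tf package. All of the numbers and frame names below are placeholders, not values from this calibration:

```shell
# base_link -> left camera optical frame (placeholder values; use your own).
# Args: x y z yaw pitch roll parent_frame child_frame period_in_ms
rosrun tf static_transform_publisher 0.1 0.0 0.3 -1.5708 0.0 -1.5708 base_link stereo_left_optical 100
```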


If you mean the relative position of the two cameras within the stereo pair, then it's implicitly encoded in the rectification matrix and the projection matrix: R and P in CameraInfo.

By inspecting the implementation you will see that all that is needed for stereo matching is the rectification matrix (R) and the baseline (which is computed from P as -right_.Tx() / right_.fx(), where right_ is the CameraInfo of the right camera).
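As a sanity check, that baseline formula can be applied directly to the P matrix from the right.yaml in the question (plain Python; the sign convention follows the -right_.Tx() / right_.fx() expression quoted above):

```python
# Right camera projection matrix P (row-major 3x4), copied from right.yaml.
P_right = [4243.110203, 0, 70.32520700000001, -4319.654554,
           0, 4243.110203, 75.543232, 0,
           0, 0, 1, 0]

fx = P_right[0]  # focal length in pixels
Tx = P_right[3]  # equals fx * (-baseline) for the right camera of a rectified pair

baseline = -Tx / fx  # in the units of the calibration target
print(baseline)      # ~1.018 m, not the ~0.1 m measured by hand
```

The mismatch between this value and the physically measured 10 cm is exactly the inconsistency discussed in the comments below.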


Comments

So I can physically measure the baseline = 10 cm. Therefore, if I use the values from the right P I should get the same number, but instead 4319.6/4243.1 = 1.018. So I can conclude that the calibration is not correct? BTW, right_.fx is also from P, right? Not from K? Shouldn't it be fx'() then?

Ariel (2016-01-18 06:43:54 -0500)

I must admit I am currently unable to wrap my head around the math, but is StereoCameraModel::projectDisparityTo3d perhaps what you need? See in https://github.com/strawlab/vision_op...

Dimitri Schachmann (2016-01-19 07:05:00 -0500)

If I understood correctly, with projectDisparityTo3d I give the left {u,v} pixel that I want the 3D coordinate of, plus the value of that pixel (same u,v?) in the disparity image, and I get the xyz point. Is that correct?

Ariel (2016-01-19 07:44:38 -0500)

I must admit I'm a bit confused myself, but yes: you give it the pixel coordinates (u, v) of the left camera and a disparity, and it gives you the 3D coordinates XYZ of the thing you see at (u, v), in the reference frame of the left camera.

Dimitri Schachmann (2016-01-19 08:20:41 -0500)
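The math behind projectDisparityTo3d can be sketched with the standard rectified pinhole model. Here fx, cx, cy come from the left projection matrix in the question, while the 0.1 m baseline is the asker's hand-measured value, used purely for illustration (the calibrated one disagrees, as discussed in the comments):

```python
# Sketch of the rectified pinhole reprojection that projectDisparityTo3d
# performs: left-image pixel (u, v) with disparity d -> XYZ in the left
# camera frame. Values from the left P matrix in the question.
fx = 4243.110203
cx = 70.325207
cy = 75.543232
baseline = 0.1  # metres, assumed (the hand-measured value)

def disparity_to_3d(u, v, d):
    Z = fx * baseline / d  # depth grows as disparity shrinks
    X = (u - cx) * Z / fx  # back-project through the pinhole
    Y = (v - cy) * Z / fx  # fy == fx in this rectified P matrix
    return X, Y, Z

x, y, z = disparity_to_3d(256.0, 256.0, 40.0)
```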

What's making you a bit confused? I'll give it a try and check how things work... but based on my first comment here, there is something wrong with the calibration, as the baseline gives me 1.01 meters rather than 0.1 meters.

Ariel (2016-01-19 08:38:53 -0500)

One thing that I still don't understand is the double disparity argument. I have three values for each pixel in the disparity topic (R, G, B) and I need a single value for the projectDisparityTo3d function. How does that work?

Ariel (2016-01-20 01:50:11 -0500)

Disparity images must have only one channel! It makes no sense for them to have three channels. Where do you get your stereo_msgs/DisparityImage from? What does msg.image.encoding contain?

Dimitri Schachmann (2016-01-20 02:52:54 -0500)

My mistake, the encoding is 32FC1. I still cannot obtain a decent disparity image, though. When I use the maximum resolution (1280x1024), the disparity image arrives at ~1 fps rather than the 30 fps I get from the raw camera data. Could it be some bottleneck in stereo_image_proc?

Ariel (2016-01-20 03:20:47 -0500)
