What are the next steps after stereo calibration?

asked 2015-05-10 20:31:10 -0600 by Cerin (updated 2015-05-11 21:42:44 -0600)

I've completed the stereo calibration tutorial and have my sample images and parameters in a calibrationdata.tar.gz file. Where do I go from here?

I want to use this to create a depth map and point cloud for doing SLAM, but the tutorial doesn't mention anything about what to do with the calibrated camera. It only links to an old Willow Garage page, which is long dead.

Although it's completely undocumented, by digging around I found that the "Commit" button in the cameracalibrator.py GUI saves a YAML file to ~/.ros/camera_info/head_camera.yaml. However, it's unclear what this file represents. The content saved for my camera was:

image_width: 320
image_height: 240
camera_name: head_camera
camera_matrix:
  rows: 3
  cols: 3
  data: [1034.231282067822, 0, 149.3986264583577, 0, 1059.726267583194, 27.5609680166836, 0, 0, 1]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0.2006669805992255, -0.920386733283972, -0.06710445687079475, 0.01907417769046314, 0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [0.9491200720659563, 0.03439773884992495, 0.3130301652613153, -0.03871893115684572, 0.9992212648619694, 0.007596592506998183, -0.3125250920671578, -0.01933027184646673, 0.94971280259811]
projection_matrix:
  rows: 3
  cols: 4
  data: [1300.089955798789, 0, -170.1720094680786, -89.58782554834289, 0, 1300.089955798789, 9.835054516792297, 0, 0, 0, 1, 0]
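
My guess is that camera drivers built on camera_info_manager look this file up by camera name and publish its contents on their camera_info topic, though I haven't confirmed that. A quick check would be something like the following (the node name is only an assumption):

# node name is an assumption; substitute whatever your camera driver node is called
rostopic echo -n 1 /usb_cam/camera_info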

However, the raw ost.txt file inside /tmp/calibrationdata.tar.gz contained:

# oST version 5.0 parameters


[image]

width
320

height
240

[narrow_stereo/left]

camera matrix
958.183533 0.000000 178.343947
0.000000 998.494337 -2.125240
0.000000 0.000000 1.000000

distortion
-0.184614 3.428311 -0.078904 0.023000 0.000000

rectification
0.950676 0.038027 0.307845
-0.033772 0.999246 -0.019139
-0.308341 0.007799 0.951244

projection
1300.089956 0.000000 -170.172009 0.000000
0.000000 1300.089956 9.835055 0.000000
0.000000 0.000000 1.000000 0.000000

# oST version 5.0 parameters


[image]

width
320

height
240

[narrow_stereo/right]

camera matrix
1034.231282 0.000000 149.398626
0.000000 1059.726268 27.560968
0.000000 0.000000 1.000000

distortion
0.200667 -0.920387 -0.067104 0.019074 0.000000

rectification
0.949120 0.034398 0.313030
-0.038719 0.999221 0.007597
-0.312525 -0.019330 0.949713

projection
1300.089956 0.000000 -170.172009 -89.587826
0.000000 1300.089956 9.835055 0.000000
0.000000 0.000000 1.000000 0.000000

So it looks like the YAML file saved by cameracalibrator.py is truncated and only contains the calibration data for a single camera. Do I need to manually organize this data, for both cameras, in separate YAML files and then give those to the nodes that read/publish the camera images?

Edit: I was able to split the ost.txt file and convert it into separate .yml files for each camera, and get a usb_cam node running with those calibration files. However, stereo_image_proc seems unable to connect to or read these streams.
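
Roughly, the conversion looked like this (the left/right file names are just placeholders; camera_calibration_parsers seems to want the .ini extension for the oST format, so I renamed the split files first):

# split ost.txt by hand into one oST section per camera, rename to .ini,
# then convert each section to YAML
rosrun camera_calibration_parsers convert left.ini left.yml
rosrun camera_calibration_parsers convert right.ini right.yml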

My usb_cam launch file:

<launch>
    <arg name="device_left" default="/dev/video1" />
    <arg name="device_right" default="/dev/video2" />
    <arg name="width ...

1 Answer


answered 2015-05-11 02:39:33 -0600 by Tirgo

Hello,

I think the easiest way is to do it manually. Split the .txt file into two files, one for your left camera and one for your right camera. After that, use camera_calibration_parsers to convert them to .yml files. The .yml files can be used to populate your camera_info messages. For example, the usb_cam package uses the camera_info_url parameter to set the path to the .yml file. You can set these parameters in a launch file.
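
As a rough command-line sketch (device paths, camera names and file locations below are placeholders; the same parameters can be set in a launch file):

# one usb_cam node per camera; camera_name should match the name stored in the
# .yml file, and camera_info_url points at the converted calibration file
rosrun usb_cam usb_cam_node __name:=left _video_device:=/dev/video1 \
  _camera_name:=left _camera_info_url:=file:///home/you/calib/left.yml
rosrun usb_cam usb_cam_node __name:=right _video_device:=/dev/video2 \
  _camera_name:=right _camera_info_url:=file:///home/you/calib/right.yml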

Hope this helps.


Comments

Yeah, this is essentially what I figured out myself last night. But what now? How do I get a depth map from it? I tried using stereo_image_proc, but it just seems to hang.

Cerin (2015-05-11 07:17:36 -0600)

It hangs? stereo_image_proc is the correct tool. Maybe the camera topics are not set correctly? Can you post a picture of rqt_graph while the camera nodes and stereo_image_proc are running? Or at least the output of rostopic list.
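
For reference, stereo_image_proc expects left/image_raw, left/camera_info, right/image_raw and right/camera_info inside its own namespace, so the layout should look roughly like this (the "stereo" namespace is just an example):

# run stereo_image_proc in the same namespace as the two camera drivers
ROS_NAMESPACE=stereo rosrun stereo_image_proc stereo_image_proc

# the drivers should then be publishing, among others:
#   /stereo/left/image_raw    /stereo/left/camera_info
#   /stereo/right/image_raw   /stereo/right/camera_info
rostopic list | grep stereo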

Tirgo (2015-05-11 08:03:21 -0600)

@Tirgo, I've updated my question to list all my current steps and the outputs of those commands. Does that graph signify no connection to stereo_image_proc?

Cerin (2015-05-11 21:44:47 -0600)

Do you get rectified images? For example, /raw_stereo/right/image_rect_color as the rectified image of the right camera? Another hint: stereo_image_proc only calculates the disparity map from the rectified images if a subscriber exists! Did you ever try to subscribe to the disparity / depth map topic?
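
One way to create that subscriber and see the result at the same time, assuming the raw_stereo namespace from your setup:

# viewing the disparity image subscribes to it, which makes stereo_image_proc
# actually start computing it
rosrun image_view disparity_view image:=/raw_stereo/disparity

# or show both rectified images plus the disparity side by side
rosrun image_view stereo_view stereo:=/raw_stereo image:=image_rect_color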

Tirgo (2015-05-12 02:46:09 -0600)
