
Is this the correct OCV method to transform two 2D points to one 3D?

asked 2012-05-09 05:00:21 -0500

There's a moving object whose imagePoints I identify at two times with two MS LifeCams on my Windows 7 PC in MS Visual C++. I want to determine the distance between those two 3D points. The steps I'm using are shown below. I've included almost no detail, since for now I just want to learn whether I'm using the right OpenCV 2.3.1 functions.

  1. calibrateCamera for each camera

      output: cameraMatrix and distCoeffs for each camera
  2. SimpleBlobDetector once for each camera at two times

      output: point 1 (cam1x1, cam1y1) (cam2x1, cam2y1)
      output: point 2 (cam1x2, cam1y2) (cam2x2, cam2y2)
  3. stereoCalibrate

      output: R, T, E, & F
  4. stereoRectify

      this doesn't itself rectify any images, but produces R1, R2, P1, and P2 so that rectification can be done
  5. undistortPoints

      output: vector of Point2f objects in rectified image
  6. perspectiveTransform

      output: vector of Point3f objects

From 30k feet are these the correct steps?

Thanks. Charles


1 Answer


answered 2012-05-17 14:43:41 -0500

Yep, sounds about right. But keep in mind that camera calibration only needs to be performed once (as long as the stereo rig is fixed). So the order of the steps would be something like:

Only once (actually there is a ROS package that can help you: camera_calibration):

  1. calibrateCamera for each camera.
  2. stereoCalibrate.
  3. stereoRectify.
  4. Store calibration parameters.
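For step 4, cv::FileStorage is the usual way to persist the results; the stored YAML ends up looking roughly like this (the key names and the matrix values below are placeholders from an imaginary calibration, not a prescribed schema):

```yaml
%YAML:1.0
cameraMatrix1: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 700., 0., 320.,  0., 700., 240.,  0., 0., 1. ]
distCoeffs1: !!opencv-matrix
   rows: 5
   cols: 1
   dt: d
   data: [ -0.2, 0.1, 0., 0., 0. ]
# cameraMatrix2, distCoeffs2, R, T, R1, R2, P1, P2 follow the same pattern
```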

Later, in your ROS node:

  1. Load calibration parameters.
  2. On each iteration: SimpleBlobDetector -> undistortPoints -> perspectiveTransform

Have fun!

