Are these the correct OpenCV steps to transform two 2D image points into one 3D point?
There's a moving object whose image points I identify at two times with two MS LifeCams on my Windows 7 PC in MSVC++. I want to determine the distance between the two resulting 3D points. The steps I'm using are shown below. I've included almost no detail because for now I just want to learn whether I'm using the right OpenCV 2.3.1 functions.
calibrateCamera for each camera
output: cameraMatrix and distCoeffs for each camera
SimpleBlobDetector once for each camera at two times
output: point 1: (cam1x1, cam1y1), (cam2x1, cam2y1)
output: point 2: (cam1x2, cam1y2), (cam2x2, cam2y2)
stereoCalibrate
output: R, T, E, & F
stereoRectify
this doesn't actually rectify any images, but produces R1, R2, P1, and P2 so that rectification can be done
undistortPoints
output: vector of Point2f objects in rectified image
perspectiveTransform
output: vector of Point3f objects
From 30,000 feet, are these the correct steps?
Thanks. Charles