
Cherry 3.14's profile - activity

2020-04-11 12:34:02 -0500 received badge  Taxonomist
2015-01-19 03:53:05 -0500 received badge  Famous Question (source)
2015-01-19 03:53:05 -0500 received badge  Notable Question (source)
2015-01-19 03:53:05 -0500 received badge  Popular Question (source)
2014-11-20 13:54:25 -0500 received badge  Notable Question (source)
2014-11-20 13:54:25 -0500 received badge  Popular Question (source)
2014-11-20 13:54:25 -0500 received badge  Famous Question (source)
2014-01-28 17:25:15 -0500 marked best answer 3D calibration in OpenCV without chessboard images?

It takes me a long time to get functions to work in OpenCV, so I'd like to know whether my overall plan makes sense before I dive into the details of trying to make it happen. (OpenCV 2.3.1, Windows 7, C++.) I'd appreciate any advice.

Problem:

I work at a skeet range & want to learn 3D information about the flight of the clay targets until they're hit.

Data:

  1. The two cameras (eventually there will be a few more) are yards apart so it's not practical to make a chessboard large enough for them to both see.

  2. There are several trees 50 to 100 yards out on each side of the sloping-hill target area, and together they cover each camera's field of view, at least horizontally. I've measured the distance to a specific spot on each of them (like the junction of the 1st left limb with the trunk).

Plan

  1. Put the tree positions into an objectPoints vector as Point3f objects

  2. Find the points where they appear in each camera's image and put those Point2f objects into an imagePoints vector for each camera

  3. Stereo calibrate

Questions

  1. Is my plan even in the ballpark?

If it is

  1. Would it be better to calibrate each camera by itself with a chessboard held a few feet from the camera, then pass the intrinsic and distCoeffs matrices to stereoCalibrate?

  2. If I stereoCalibrate without a chessboard, what should I pass as the Size argument to the function?

Thank you for any help. Charles

2014-01-28 17:24:37 -0500 marked best answer Request for approach recommendation for an object identification

Equipment - Windows 7, OpenCV 2.3.1, Visual Studio C++ 2010 Express, and, eventually, any digital video cameras needed, lenses (?)

Project - I want to build a machine to identify characteristics of the flight of a baseball my son hits to the outfield (length, direction, height, etc.) in real time.

Solution description - I will have two fixed digital video cameras observe the flight of the ball and will analyze those video streams with OpenCV to locate and track the ball.

OpenCV methods - There are three methods I've read about and/or seen used to identify a ball:

  1. circle detection from edges
  2. circle detection of blobs in a color range (orange ball and tennis ball examples)
  3. moving-circle blob detection by frame differencing (car and people identification and tracking)

I have done the first (cvtColor, GaussianBlur, Canny, HoughCircles), but only well enough that I can get it to work against certain color backgrounds. I started the second, but before spending days making it work I realized I don't know what the best approach is. To someone with no image-analysis experience (me), it seems that with 1 my PC could have difficulty finding the right edges, since the weather and background will change from game to game. 2 could be difficult because there may be several blobs in the foreground (players' white uniforms, bases) and the background (white lettering or backgrounds on signs) that would also be baseball white, and because the ball's white would change as the sun went down or the ball got dirty. I think 3 is the best way to go, but I don't want to spend a lot of time making it work (my early attempts failed) only to learn its shortcomings for tracking a baseball after I had it functioning.

The question: Which of 1-3, or a 4, 5, or 6 I don't know about (I'm sure there are other methods you know of that I don't), is the most appropriate approach in OpenCV to learn characteristics of the 3D flight (distance, height, direction, etc.) of a baseball hit to the outfield?

(I'm expecting to need to write the code myself but I wouldn't turn down portions of the program that are sent to me.)

Thanks for any advice.

2012-10-17 10:20:09 -0500 received badge  Famous Question (source)
2012-09-11 21:15:48 -0500 received badge  Famous Question (source)
2012-09-11 21:15:48 -0500 received badge  Popular Question (source)
2012-09-11 21:15:48 -0500 received badge  Notable Question (source)
2012-08-17 10:12:45 -0500 received badge  Famous Question (source)
2012-08-17 10:12:45 -0500 received badge  Popular Question (source)
2012-08-17 10:12:45 -0500 received badge  Notable Question (source)
2012-06-29 03:54:51 -0500 received badge  Notable Question (source)
2012-05-31 06:31:22 -0500 asked a question How do I forward map a point in Open CV with a mapx and mapy?

Ultimately, I'm trying to determine the 3D location of a point I've identified in two cameras. I'm having trouble locating the 2D point for each camera from its mapx & mapy. I could not find how to do it in Bradski & Kaehler's OpenCV book, and I have not gotten answers yet (or probably ever) from Stack Overflow and Opencv-users.

PROCESS

  1. calibrateCamera for each of the pair

  2. stereoCalibrate

  3. stereoRectify

  4. detect the blob I want to locate in 3D on each camera in 2D

  5. initUndistortRectifyMap for each camera

  6. Eventually, perspectiveTransform

PROBLEM with 5:

I have the x and y location of the point in the distorted, unrectified image for each camera (from step 4), but I don't know how to get the undistorted and rectified x and y for each camera from the two maps initUndistortRectifyMap creates per camera. I don't want to remap the whole image, since I only want the 3D location of one object per frame.

QUESTION:

How do I forward map a point (get the undistorted and rectified x and y) with its two maps?

2012-05-19 06:07:40 -0500 commented answer Does OpenCV's undistortPoints also rectify them?

... I don't understand the _Tp, &, or <>, but I do understand a Point and an integer, and MSVC has complained when I've sent it just a Point, and when it's sent two ints or just one

2012-05-19 06:07:40 -0500 received badge  Commentator
2012-05-19 06:06:51 -0500 commented answer Does OpenCV's undistortPoints also rectify them?

Tried mapxC1.at(s. After the 1st '.', MSVC says "no overloaded function matches the argument list", underlines the 2nd arg, and says too many arguments. Why does VC say this when, as I'm typing the args, it says the argument options are "const _Tp & at<_Tp> (cv::Point pt) const", or that the () could take things like int i0, i1 or just i0=0? ...

2012-05-19 06:05:20 -0500 commented answer Does OpenCV's undistortPoints also rectify them?

Does a single-channel matrix mean that for each row-and-column matrix location there is one object, in this case a 16-bit unsigned integer?

2012-05-19 06:04:55 -0500 commented answer Does OpenCV's undistortPoints also rectify them?

Thank you squared, Martin. The values held at the pixel represented by an x and a y can be floats, right, just not the pixel location itself (since a pixel sits at a location defined by two integers: row and column number)?

2012-05-16 05:19:11 -0500 commented answer Does OpenCV's undistortPoints also rectify them?

What I've done: PointC1x & PointC1y are floats that are the x & y of Keypoint objects in an image that's been cvtColor( ... CV_RGB2GRAY)

Q.) Am I even creating the Mat for the map correctly with mapxC1.create(sizeImage,1); ? Charles Mennell

2012-05-16 05:17:45 -0500 commented answer Does OpenCV's undistortPoints also rectify them?

1.) MSVC++ says about mapxC1.at(PointC1x,PointC1y) that no overloaded function matches the argument list and that there are too many arguments in the function call.
2.) I don't know OCV or C++ well enough to make enough sense of the link you sent to get at a map's data

2012-05-16 05:17:32 -0500 commented answer Does OpenCV's undistortPoints also rectify them?

Hi Martin, Thank you so much for your help! I know I'll reach the answer being assisted by an OCV expert. You told me how to put exactly the data I want into maps but now I'm frustrated since I still can't get that data. I tried your suggestion and link. Results:

2012-05-15 09:22:25 -0500 commented answer Does OpenCV's undistortPoints also rectify them?

vector<Point2f> point1Cam1UndistortedRectified; point1Cam1UndistortedRectified.at(0).x = mapxC1[PointC1x, PointC1y]; point1Cam1UndistortedRectified.at(0).y = mapxC1[PointC1x, PointC1y]; but MSVC++ says no operator "[]" matches these operands, right after mapxC1, in the last two lines. Know why?

2012-05-15 09:15:44 -0500 commented answer Does OpenCV's undistortPoints also rectify them?

I can't read mapx & mapy. I try Mat mapxC1; mapxC1.create(sizeImage,1); initUndistortRectifyMap(cameraMatrix1, distCoeffs1, R1, camMat1UndistRect, sizeImage, 0, /* ???*/ mapxC1, mapyC1);

2012-05-13 23:11:20 -0500 received badge  Popular Question (source)
2012-05-09 05:00:21 -0500 asked a question Is this the correct OCV method to transform two 2D points to one 3D?

There's a moving object whose imagePoints I identify at two times with two MS LifeCams on my Windows 7 PC in MS Visual Studio C++. I want to determine the distance between those two 3D points. The steps I'm using are shown below. I've included almost no detail since, for now, I just want to learn whether I'm using the right OpenCV 2.3.1 functions.

  1. calibrateCamera for each camera

      output: cameraMatrix and distCoeffs for each camera
    
  2. simpleBlobDetect once for each camera at two times

      output: point 1 (cam1x1, cam1y1) (cam2x1, cam2y1)
      output: point 2 (cam1x2, cam1y2) (cam2x2, cam2y2)
    
  3. stereoCalibrate

      output: R, T, E, & F
    
  4. stereoRectify

      this doesn't actually rectify any images but produces R1,R2,P1 & P2 so it can be done
    
  5. undistortPoints

      output: vector of Point2f objects in rectified image
    
  6. perspectiveTransform

    output: vector of Point3f objects
    

From 30k feet are these the correct steps?

Thanks. Charles

2012-05-04 04:07:15 -0500 asked a question Does OpenCV's undistortPoints also rectify them?

I'm trying to determine the distance between two objects by using two cameras (OCV 2.3.1, MSVC++, Win7) but can't calculate the objects' objectPoints. I think this is because the image points aren't being rectified before their disparity is calculated.

I. WHAT I DO FIRST

Step 1. Calibrate each camera by itself.

int numSquares = numCornersHor * numCornersVer;
Size board_sz = Size(numCornersHor, numCornersVer);

Mat cameraMatrix = Mat(3, 3, CV_32FC1);
Mat distCoeffs;

vector<Mat> rvecs, tvecs;

cameraMatrix.ptr<float>(0)[0] = 1;
cameraMatrix.ptr<float>(1)[1] = 1;

calibrateCamera(object_points, image_points, image.size(), 
                    cameraMatrix, distCoeffs, rvecs, tvecs);

Step 2. Calibrate the cameras together.

int numCornersHor = 4;
int numCornersVer = 3;
const float squareSize = 1.75;

Size imageSize = Size(numCornersHor, numCornersVer);
int numSquares = numCornersHor * numCornersVer;

for(int i = 0; i < pairs; i++ )
{
    for( int j = 0; j < imageSize.height; j++ )
    {
        for( int k = 0; k < imageSize.width; k++ )
        {
            objectPoints[i].push_back(Point3f(j*squareSize, k*squareSize, 0));
        }
    }
}

Mat R, T, E, F;

rms = stereoCalibrate(  
    objectPoints, 
    imagePoints[0],     imagePoints[1],
    cameraMatrix[0],    distCoeffs[0],
    cameraMatrix[1],    distCoeffs[1],
    imageSize, 
    R,  T,  E,  F,
    TermCriteria(CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 100, 1e-5),
    CV_CALIB_FIX_ASPECT_RATIO +
    CV_CALIB_ZERO_TANGENT_DIST +
            CV_CALIB_SAME_FOCAL_LENGTH +
    CV_CALIB_RATIONAL_MODEL +
    CV_CALIB_FIX_K3 +   CV_CALIB_FIX_K4 +   CV_CALIB_FIX_K5
    );

Step 3. Create the rectification data.

stereoRectify(
    cameraMatrix[0], cameraMatrix[1],
    distCoeffs[0], distCoeffs[1],
    imageSize, R, T,
    RC1, RC2,    // RC1: rotation matrix, camera 1
    PC1, PC2, Q,
    CALIB_ZERO_DISPARITY, 1, imageSize);

II. WHAT I BELIEVE

Goal: I'm trying to undistort and rectify the image points of one object in the image from camera 1 and the image from camera 2 (I do this process twice: once while the clay pigeon is on the launcher and once one frame before the clay pigeon disintegrates).

Method: I believe that I don't need to use initUndistortRectifyMap then Remap but can instead just use undistortPoints. I think undistortPoints undistorts the points of interest and rectifies them.

III. WHAT I DO SECOND

You can ignore this if my beliefs aren't correct.

undistortPoints(launcherC1, launcherC1Undistorted, cameraMatrixC1, distCoeffsC1, R1, P1);   
undistortPoints(launcherC2, launcherC2Undistorted, cameraMatrixC2, distCoeffsC2, R2, P2);   

undistortPoints(clayPigeonC1, clayPigeonC1Undistorted, cameraMatrix1, distCoeffs1, R1, P1); 
undistortPoints(clayPigeonC2, clayPigeonC2Undistorted, cameraMatrix2, distCoeffs2, R2, P2);

The input and output arrays for undistortPoints (launcherC1, launcherC1Undistorted, ... clayPigeonC2, clayPigeonC2Undistorted) are vectors of Point2f objects.

IV. DISCREPANCY BETWEEN BELIEF AND REALITY

After all the undistortPoints calls run:

  1. launcherC1Undistorted.y does not equal launcherC2Undistorted.y
  2. clayPigeonC1Undistorted.y does not equal clayPigeonC2Undistorted.y

They differ by up to 30%.

V. QUESTIONS

Q1: In addition to undistorting points, does undistortPoints also rectify them?
Q1.1 (if yes): Are the y values supposed to be equal after rectification?
Q1.1.1 (if so): Can you tell from the code I've included what I'm doing wrong so that they aren't?
Q2 (if no): If undistortPoints doesn't rectify the points, how do I rectify them?

Thank you for any assistance. Charles++

2012-04-13 06:04:44 -0500 commented answer 3D calibration in OpenCV without chessboard images?

OK, I'll do calibration on each camera before I calibrate the cameras together. Could this intrinsic calibration be done before the camera is mounted, provided the camera isn't jostled, the lens rotated, or the camera body torqued when it's mounted?

Whether the cameras are extrinsically calibrated with the object points I've measured (within an inch or so at hundreds of yards) or with object points on pattern boards held at different locations that both cameras can see ...

  1. What does the calibration generate?
  2. I know your answer includes it but I didn't understand how to use what the extrinsic calibration generates to determine a real-world 3D location of a point from 2D image locations on both of those cameras. How do I?

My guess is that StereoCalibration outputs a rotation matrix and a translation vector. These two items are passed into StereoRectify, which will output a transformation matrix ... (more)

2012-04-07 11:40:37 -0500 received badge  Good Question (source)
2012-04-07 10:03:57 -0500 received badge  Nice Question (source)