2D to 3D conversion using OpenCV, ROS, and Python, plus camera calibration
In a nutshell: I have a camera attached to my robotic arm, from which I can detect a certain object. I can get the pixel (x, y) coordinates of the object from the camera's perspective. What I'm trying to do is first convert the pixel coordinates to spatial coordinates (still in the camera's frame), then convert those spatial coordinates to my base frame, and finally use inverse kinematics to move my arm to those coordinates.
I already have my transformation matrix set up to go from the camera frame to the base frame, but what I'm struggling with right now is converting the 2D pixel coordinates (pixel x, y) to spatial coordinates. (I'm only concerned with the x and y coordinates, since I can obtain my z coordinate directly.)
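For the camera-to-base step, applying the transformation matrix to a camera-frame point is just one homogeneous matrix multiplication. A minimal sketch, where the rotation and translation are placeholder values standing in for your actual calibrated transform:

```python
import numpy as np

# Hypothetical camera-to-base transform; R and t are placeholders,
# substitute the rotation/translation you already have.
R = np.eye(3)                   # assumed rotation, camera -> base
t = np.array([0.1, 0.0, 0.3])   # assumed translation in metres

T_base_cam = np.eye(4)          # build the 4x4 homogeneous transform
T_base_cam[:3, :3] = R
T_base_cam[:3, 3] = t

p_cam = np.array([0.05, 0.04, 0.5])             # point in camera frame
p_base = (T_base_cam @ np.append(p_cam, 1.0))[:3]  # point in base frame
print(p_base)  # [0.15 0.04 0.8 ]
```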
I'm trying to use the second equation from this link here,
but in reverse. That is, I have the pixel coordinates u and v, and the rotation matrix; all I need is the intrinsic matrix to get the spatial coordinates. However, to get the intrinsic matrix, I figured I would have to use cv2.calibrateCamera. Here is where I run into trouble.
The parameters of calibrateCamera are really confusing me. What are imagePoints and objectPoints? Would those be the pixel coordinates and spatial coordinates of some object in my camera's frame, i.e. an object whose spatial coordinates I already know?
Another parameter is cameraMatrix. But isn't that what calibrateCamera is supposed to return? If I knew what the matrix was, I wouldn't be looking into this whole thing.
Last question, a more methodical one. My aim is to reconstruct a 3D point from a 2D coordinate, so can I just reverse the second equation given here
in order to do that? I would need the intrinsic matrix, which I would find using calibrateCamera, and I already have the rotation matrix, so some matrix multiplication should recover the 3D point from the 2D one, right?
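If the depth is known in the camera frame (as stated above for z), the inversion only needs the intrinsic matrix: the pinhole model is s·[u, v, 1]ᵀ = K·[Xc, Yc, Zc]ᵀ, so K⁻¹·[u, v, 1]ᵀ gives a ray and the known Zc fixes the scale. The rotation matrix only enters afterwards, when moving the result into the base frame. A minimal sketch with hypothetical intrinsics:

```python
import numpy as np

# Hypothetical intrinsic matrix (replace with calibrateCamera output).
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])

def pixel_to_camera(u, v, Z, K):
    """Back-project pixel (u, v) at known camera-frame depth Z."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray with Zc = 1
    return ray * Z  # scale so the third component equals the known depth

p_cam = pixel_to_camera(400.0, 300.0, 0.5, K)
print(p_cam)  # Xc = 0.05, Yc = 0.0375, Zc = 0.5 (metres, camera frame)
```

Note this gives the point in the camera frame; the camera-to-base transformation matrix mentioned earlier then carries it to the base frame for the inverse kinematics.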
Have you found out how to do this?