Will bad camera calibration affect pixel coordinates?
I am working with TurtleBots and ROS, using a camera to find the pixel position of a marker in the image. I've moved over from simulation to a physical system. The issue I'm having is that the measured pixel position in the physical system does not match the pixel position in the simulation, despite the marker and everything else being in the same place. The transforms between the camera and the marker are the same in both the simulation and the physical setup, meaning the same height difference and distance between the camera and object. Everything except the camera (the simulation uses the default TB3 Waffle Pi camera; the physical setup uses a Logitech C900 USB cam) and the camera calibration matrices is the same. I've also made sure that there is no tilt in the camera and that it is pointing straight forward.
The camera calibration for the simulated system is ideal, while the calibration for the physical system is poor. At the resolution I'm using, 640x480, the principal point should be around cx=320 and cy=240. What I noticed in the calibration matrix for the physical system was that cx was around 318, which is pretty accurate, but cy was around 202, which is far from what it should be. This made me suspect that the shift in pixel positions in the vertical direction is caused by the camera calibration.
So my question is: does the calibration of the camera affect the pixel coordinates? Would two cameras with different calibrations detect the marker at different pixel coordinates, despite everything else (distance between camera and object, height between optical axis and object, etc.) being the same? And would the same camera report different pixel coordinates with different calibrations?
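For concreteness, here is a minimal pinhole-model sketch of what I suspect is happening. The focal lengths below are made-up placeholder values, not from my actual calibration; only the cy values match what I described above:

```python
import numpy as np

def project(K, point_cam):
    """Project a 3-D point (camera frame: X right, Y down, Z forward) to pixels."""
    uvw = K @ point_cam
    return uvw[:2] / uvw[2]

# Hypothetical intrinsics: fx, fy are placeholders; only cy differs between them.
K_sim = np.array([[530.0,   0.0, 320.0],
                  [  0.0, 530.0, 240.0],
                  [  0.0,   0.0,   1.0]])
K_real = K_sim.copy()
K_real[1, 2] = 202.0  # cy reported by the physical calibration

p = np.array([0.0, 0.10, 1.0])  # marker 10 cm below the optical axis, 1 m ahead

print(project(K_sim, p))   # -> [320. 293.]
print(project(K_real, p))  # -> [320. 255.]  same point, shifted up by the cy difference
```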
Asked by Roshan on 2022-01-23 07:05:04 UTC
Answers
Are you trying to calculate 2-D map coordinates (x, y) relative to the camera, given the pixel coordinates in the image? Then the answer to your question is yes: this calculation requires a calibration matrix specific to the camera being used.
Using a 2-D image for this purpose has severe limitations. In general, it can't give you 3-D map coordinates, but if you limit yourself to small objects sitting on a known plane, then you can estimate the 2-D map coordinates.
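As a rough sketch of the known-plane case (all numbers here are placeholders, and it assumes an undistorted image and a level camera):

```python
import numpy as np

# Hypothetical intrinsics and camera height; values are illustrative only.
K = np.array([[530.0,   0.0, 320.0],
              [  0.0, 530.0, 240.0],
              [  0.0,   0.0,   1.0]])
cam_height = 0.20  # camera 20 cm above the floor, optical axis level

def pixel_to_ground(u, v, K, cam_height):
    """Intersect the viewing ray through pixel (u, v) with the floor plane.
    Camera frame: X right, Y down, Z forward; floor is at Y = cam_height."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    if ray[1] <= 0:
        raise ValueError("ray does not hit the floor (pixel above the horizon)")
    scale = cam_height / ray[1]
    return ray * scale  # 3-D point on the floor, in the camera frame

print(pixel_to_ground(320, 300, K, cam_height))  # -> [0.  0.2  1.767] approx.
```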
Answered by Mike Scheutzow on 2022-01-23 08:48:55 UTC
Comments
I'm just trying to get the pixel coordinates (m, n) in the image. The n coordinate in the physical system was 150 pixels, while in the simulation it was 190 pixels, when detecting the same marker with everything (transforms, distance, height between optical axis and object) the same except for the camera and its calibration. I want to figure out what could be causing the difference.
Commented by Roshan on 2022-01-23 09:03:54 UTC
Yes, you can also map from 3-D map coordinates to 2-D image coordinates. And just as with the other direction, it requires a calibration matrix specific to the camera. Google "perspective transformation".
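For example, OpenCV's cv2.projectPoints implements this 3-D-to-2-D mapping; here is a minimal sketch with placeholder intrinsics and zero distortion:

```python
import numpy as np
import cv2

# Hypothetical intrinsics and pose; lens distortion assumed zero for brevity.
K = np.array([[530.0,   0.0, 320.0],
              [  0.0, 530.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)   # no distortion in this sketch
rvec = np.zeros(3)   # camera axes aligned with the world axes
tvec = np.zeros(3)   # camera at the world origin

marker = np.array([[0.0, 0.10, 1.0]])  # 3-D point, here given in the camera frame
pixels, _ = cv2.projectPoints(marker, rvec, tvec, K, dist)
print(pixels.ravel())  # -> [320. 293.]
```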
Commented by Mike Scheutzow on 2022-01-23 09:25:00 UTC
Ok, so just to be clear, this is a calibration issue? After having a look at perspective transformations (https://www.tutorialspoint.com/dip/perspective_transformation.htm), it seems it could also be that I'm using different cameras in the simulation and the physical setup, and they could have different focal lengths.
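A quick numeric check of both effects, using v = fy * Y/Z + cy with illustrative numbers (not measured values):

```python
# Both fy and cy shift the vertical pixel coordinate. For a marker at
# Y = 0.10 m below the optical axis and Z = 1.0 m ahead:
Y_over_Z = 0.10 / 1.0
for fy, cy in [(530.0, 240.0),   # hypothetical simulated camera
               (530.0, 202.0),   # same lens, shifted principal point
               (620.0, 240.0)]:  # longer focal length, centered principal point
    v = fy * Y_over_Z + cy
    print(f"fy={fy}, cy={cy} -> v={v:.0f}")
# fy=530, cy=240 -> v=293;  fy=530, cy=202 -> v=255;  fy=620, cy=240 -> v=302
```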
Commented by Roshan on 2022-01-23 09:39:09 UTC