
Will bad camera calibration affect pixel coordinates?

asked 2022-01-23 06:05:04 -0500

Roshan

I am working with TurtleBots and ROS, and using a camera to find the pixel positions of a marker. I've moved over from simulation to a physical system. The issue I'm having is that the measured pixel positions in my physical system do not match the pixel positions in the simulation, despite the marker and everything else being in the same position. The transforms between the camera and the marker were the same in both the simulation and the physical setup, meaning the same height difference and distance between the camera and the object. Everything except the camera (the simulation uses the default TB3 Waffle Pi camera, the physical setup a Logitech C900 USB cam) and the camera calibration matrices is the same. I've also made sure that the camera has no tilt and is pointing straight forward.

The camera calibration for the simulated system is ideal, while the calibration for the physical system is poor. The resolution I'm using is 640x480, so the principal point should be near cx=320 and cy=240. In the calibration matrix I was using for the physical system, cx was around 318, which is fairly accurate, but cy was around 202, which is far from what it should be. This made me suspect that the shift in pixel positions in the vertical direction is caused by the camera calibration.
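To see how much a shifted principal point alone moves a detection, here is a minimal pinhole-projection sketch. The intrinsic values are hypothetical (not the actual TB3 or Logitech calibration), chosen only so the two matrices differ in cy the way described above:

```python
import numpy as np

def project(K, point_cam):
    """Project a 3-D point in the camera frame to pixel coordinates
    using the pinhole model: u = fx*X/Z + cx, v = fy*Y/Z + cy."""
    X, Y, Z = point_cam
    u = K[0, 0] * X / Z + K[0, 2]
    v = K[1, 1] * Y / Z + K[1, 2]
    return u, v

# Hypothetical intrinsics: identical focal lengths, different cy.
K_sim = np.array([[530.0,   0.0, 320.0],
                  [  0.0, 530.0, 240.0],
                  [  0.0,   0.0,   1.0]])
K_real = K_sim.copy()
K_real[1, 2] = 202.0          # cy reported by the real calibration

point = (0.0, -0.05, 1.0)     # 5 cm above the optical axis, 1 m ahead
print(project(K_sim, point))  # (320.0, 213.5)
print(project(K_real, point)) # (320.0, 175.5): same point, 38 px higher
```

With everything else identical, the 38-pixel cy difference alone shifts the detection vertically by 38 pixels, which is on the order of the discrepancy described above.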

So my question is, does the calibration of the camera affect the pixel coordinates? Would two cameras with different calibrations detect the pixels in different coordinates, despite everything else (distance between camera and object, height between optical axis and object, etc) being the same? And would the same camera detect the pixel coordinates differently with different calibrations?


1 Answer


answered 2022-01-23 07:48:55 -0500

Mike Scheutzow

You are trying to calculate 2-D map coordinates (x, y) relative to the camera, given the pixel coordinates in the image? Then the answer to your question is yes: this calculation requires a calibration matrix specific to the camera being used.

Using a 2-D image for this purpose has severe limitations. In general, it can't give you 3-D map coordinates, but if you limit yourself to small objects sitting on a known plane, then you can estimate the 2-D map coordinates.
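The "known plane" case can be sketched as a ray-plane intersection: back-project the pixel through the intrinsics, then scale the ray until it reaches the floor. The intrinsics and camera height below are hypothetical, and it assumes a level optical axis with the camera y axis pointing down:

```python
import numpy as np

# Hypothetical intrinsics for illustration only.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def pixel_to_plane(K, u, v, cam_height):
    """Intersect the viewing ray through pixel (u, v) with the ground
    plane, assuming a level optical axis and camera y pointing down."""
    x = (u - K[0, 2]) / K[0, 0]   # normalized ray direction
    y = (v - K[1, 2]) / K[1, 1]
    if y <= 0:
        raise ValueError("pixel at or above the horizon; no ground hit")
    Z = cam_height / y            # forward distance where ray meets floor
    X = x * Z                     # lateral offset
    return X, Z

print(pixel_to_plane(K, 320.0, 340.0, 0.2))  # (0.0, 1.0): 1 m ahead, centred
```

Note how cx, cy, fx, and fy all enter directly, so a calibration error in any of them biases the estimated map coordinates.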


Comments

I'm trying to just get the pixel coordinates (m, n) in the image. The n coordinate was 150 pixels in the physical system but 190 pixels in the simulation when detecting the same marker, even though everything (transforms, distance, height between the optical axis and the object) except the camera and its calibration was the same. I want to figure out what could be causing the difference.

Roshan ( 2022-01-23 08:03:54 -0500 )

Yes, you can also map from 3-D map coordinates to 2-D image coordinates. And just as with the other direction, it requires a calibration matrix specific to the camera. Google "perspective transformation".

Mike Scheutzow ( 2022-01-23 08:25:00 -0500 )
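The 3-D-to-2-D direction is the perspective transformation p ~ K [R | t] P in homogeneous coordinates. A minimal sketch, with a hypothetical intrinsic matrix and an identity camera pose for illustration:

```python
import numpy as np

def project_points(K, R, t, points_world):
    """Perspective transformation p ~ K [R | t] P for N x 3 world points."""
    P = np.asarray(points_world, dtype=float).T  # 3 x N
    cam = R @ P + t.reshape(3, 1)                # world -> camera frame
    uv = K @ cam                                 # apply intrinsics
    return (uv[:2] / uv[2]).T                    # dehomogenize to N x 2

# Hypothetical intrinsics; identity pose puts the camera at the origin.
K = np.array([[530.0,   0.0, 320.0],
              [  0.0, 530.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
print(project_points(K, R, t, [[0.0, 0.0, 2.0]]))  # [[320. 240.]]
```

A point on the optical axis lands on the principal point (cx, cy), which is why a miscalibrated cy shifts every detection vertically.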

Ok, so just to be clear, this is a calibration issue? After having a look at perspective transformations (https://www.tutorialspoint.com/dip/pe...) it seems like it could also be that I'm using different cameras in the simulation and the physical setup, and they could have different focal lengths.

Roshan ( 2022-01-23 08:39:09 -0500 )
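Both effects raised in this thread show up in the same equation, v = fy * (Y/Z) + cy: the focal length fy scales the offset from the principal point, while cy shifts it outright. A quick check with hypothetical numbers:

```python
# Same marker geometry (Y/Z ratio) seen by three hypothetical cameras.
Y_over_Z = -0.05  # 5 cm above the axis per metre of depth

for fy, cy in [(530.0, 240.0),   # baseline
               (700.0, 240.0),   # longer focal length
               (530.0, 202.0)]:  # shifted principal point
    v = fy * Y_over_Z + cy
    print(v)  # 213.5, then 205.0, then 175.5
```

So either a focal-length difference or a principal-point difference between the two calibrations is enough to move the marker's vertical pixel coordinate.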
