The projection matrix P you get is computed by the OpenCV function getOptimalNewCameraMatrix(), which is used by the camera calibration node.

This function controls the output of the OpenCV undistortion function initUndistortRectifyMap(). Normally, after remap(), some pixels would lie outside your image boundaries and some regions of your image would be invalid (black), depending on how much lens distortion has to be corrected.

The function getOptimalNewCameraMatrix() takes the intrinsic matrix, the distortion coefficients, and an alpha value that you specify:

  • alpha = 1 -> the image is scaled so that all original pixels fit into the image plane; you will therefore most likely get invalid (black) pixels in your undistorted image
  • alpha = 0 -> the image is scaled so that the image plane contains no invalid pixels; you will therefore most likely lose some valid pixels from your undistorted image

This gives you a new camera matrix (called P by the calibration node) that can be passed to initUndistortRectifyMap(), which computes the correct remapping for your chosen alpha value.

You can see the differences in this image:

http://i.imgur.com/bOZ4fKs.jpg

The left one is with alpha = 0, the right one with alpha = 1 (note that this is not stereo, both images are the same, only "viewed" with a different camera matrix).

The camera calibration node does exactly that, with alpha = 0, and stores the new camera matrix as matrix P in the CameraInfo.
