Answered in #q256427 - though that answer and the wiki page linked there could be clearer. I haven't tried it yet (I will and update this answer), but my understanding is that cameracheck.py watches a live rectified image of a checkerboard and analyzes it for residual distortion. First it checks straightness: the square intersections (corners) that ought to lie on a straight line in a properly rectified image are measured for how far they deviate from one. Next it estimates the checkerboard's rotation and translation from all of the intersections, recomputes where the intersections ought to appear given that pose, and compares those predicted positions to the detected ones (i.e. a reprojection error).
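For what it's worth, here is a rough Python/OpenCV sketch of those two checks as I understand them. This is not the actual cameracheck.py source; the board size, square size, camera matrix argument, and row-major corner ordering are all assumptions on my part:

```python
# Illustrative sketch of the straightness and reprojection checks described
# above. NOT the real cameracheck.py code; board size, square size and the
# camera matrix (e.g. taken from camera_info) are assumed values.
import cv2
import numpy as np

BOARD_COLS, BOARD_ROWS = 8, 6   # interior corners of the live checkerboard (assumed)
SQUARE_SIZE = 0.025             # square edge length in metres (assumed)

def check_rectified_image(rect_img, camera_matrix):
    """Return (straightness error, reprojection error) for one rectified frame."""
    gray = cv2.cvtColor(rect_img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (BOARD_COLS, BOARD_ROWS))
    if not found:
        return None
    corners = corners.reshape(-1, 2)

    # 1) Straightness: in a correctly rectified image each row of interior
    #    corners should lie on a straight line; record the worst deviation.
    straight_err = 0.0
    for r in range(BOARD_ROWS):
        row = corners[r * BOARD_COLS:(r + 1) * BOARD_COLS]
        vx, vy, x0, y0 = cv2.fitLine(row, cv2.DIST_L2, 0, 0.01, 0.01).flatten()
        # perpendicular distance of each corner from the fitted line
        d = np.abs((row[:, 0] - x0) * vy - (row[:, 1] - y0) * vx)
        straight_err = max(straight_err, d.max())

    # 2) Reprojection: solve for the board pose, reproject the ideal grid,
    #    and compare with the detected corners.
    obj = np.zeros((BOARD_ROWS * BOARD_COLS, 3), np.float32)
    obj[:, :2] = np.mgrid[0:BOARD_COLS, 0:BOARD_ROWS].T.reshape(-1, 2) * SQUARE_SIZE
    # distortion coefficients are zero because the image is already rectified
    _, rvec, tvec = cv2.solvePnP(obj, corners, camera_matrix, None)
    reproj, _ = cv2.projectPoints(obj, rvec, tvec, camera_matrix, None)
    reproj_err = np.linalg.norm(reproj.reshape(-1, 2) - corners, axis=1).mean()

    return straight_err, reproj_err
```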
The relevant section, http://wiki.ros.org/camera_calibratio..., is on the same wiki page but is easy to miss.
The camera has to be running with the calibration applied so that the rectified image is published - which is what the other answer meant by saying the commit button needs to be pressed (it assumes cameracheck.py is run immediately after the calibrator tool).
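If you want a quick sanity check that the rectified stream actually exists before running cameracheck.py, something like the snippet below should work (the topic name is an assumption - adjust it to your camera namespace):

```python
# Confirm a rectified image is being published, i.e. the calibration is applied
# and image_proc (or equivalent) is running. Topic name is assumed.
import rospy
from sensor_msgs.msg import Image

rospy.init_node('rect_check', anonymous=True)
try:
    rospy.wait_for_message('/camera/image_rect', Image, timeout=5.0)
    print('rectified image is being published - calibration is applied')
except rospy.ROSException:
    print('no rectified image - is image_proc running with the new calibration?')
```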
You can check with a different-sized chessboard/checkerboard than the one you calibrated with, as long as cameracheck.py is given the correct dimensions of the live board.
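If I'm reading the wiki page right, the invocation would be something like `rosrun camera_calibration cameracheck.py --size 8x6 monocular:=/camera image:=image_rect`, with `--size` set to the interior corner count of whichever board you're actually holding up (check the linked wiki page for the exact arguments for your setup).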