In case anyone is still unsure about this, there's a working example of a calibration setup here for the AR Drone 2.0. The camera has a wide-angle lens and the author uses 11 images to produce a rough calibration.
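For anyone who wants to try the same thing without the full sample app, below is a minimal sketch of calibrating from a handful of still images with OpenCV's Python bindings. The board geometry (9x6 inner corners), square size, and the calib_*.jpg filenames are assumptions, not values from the linked example, so adjust them to your own template.

```python
import glob
import cv2
import numpy as np

# Assumed board geometry: 9x6 inner corners, 25 mm squares (adjust to your template).
pattern_size = (9, 6)
square_size = 25.0

# 3D object points for one view of the board (the board lies in the z = 0 plane).
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size

obj_points, img_points = [], []
image_size = None

# Assumed filenames; roughly 10-15 stills at varied angles is enough for a rough result.
for fname in glob.glob("calib_*.jpg"):
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        # Refine corner locations to sub-pixel accuracy before calibrating.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print("Camera matrix:\n", camera_matrix)
```

The RMS reprojection error is a quick sanity check: if it comes out at several pixels, the detections or the board description are probably off.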
As for the original three questions:
- I haven't found the <squaresize> value to have any real effect on the intrinsic results. As far as I can tell it only scales the object points (and therefore the extrinsic translations), so in practice it mostly just documents the calibration template used in the results XML.
- I've had this as well, and I've noticed that the input pictures can influence it significantly. Taking the images against a dark background (I used a black table) seems to let the chessboard pattern be recognised more consistently. Keeping the centre of the board roughly in the centre of the image when shooting at different angles/rotations etc. also seems to help to some extent. These observations are largely just from my own experience, though.
- I'm not sure why it works with a video stream (1) but not a saved video file; I've also had trouble with this. One workaround is to pull frames out of the saved file yourself and calibrate from those stills; see the sketch after this list.
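Here is a rough sketch of that workaround, assuming the saved file opens with cv2.VideoCapture (the filename, frame step, and 9x6 board size are placeholders). It samples every Nth frame, keeps only the ones where the board is actually detected, and writes them out as stills that can be fed to the calibration code above.

```python
import cv2

# Assumed path; grab every Nth frame from the saved file and run the
# chessboard detection on those stills instead of feeding the file directly.
cap = cv2.VideoCapture("drone_calib.mp4")
frame_step = 15  # assumption: about one frame every half second at 30 fps
saved, idx = 0, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of file (or a decode error, which also just ends the loop)
    if idx % frame_step == 0:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        found, _ = cv2.findChessboardCorners(gray, (9, 6))
        if found:
            cv2.imwrite(f"calib_{saved:03d}.jpg", frame)
            saved += 1
    idx += 1

cap.release()
print(f"kept {saved} frames with a detectable board")
```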
Anyway, hope this helps. I'll update this if I find anything else that's relevant.