OpenCV Q&A Forum: camera calibration accuracy
http://answers.opencv.org/question/76357/camera-calibration-accuracy/
Sun, 15 Nov 2015 11:40:58 -0600

Hi everyone,
I am calibrating my camera and I get quite poor results. Right now I cannot tell whether my camera is bad or I am doing something wrong.
The main functions I use are **findChessboardCorners**, **cornerSubPix**, **findCirclesGrid**, **calibrateCamera**, and **solvePnP** (as described very well in the OpenCV camera-calibration documentation).
So I started to evaluate how the calibration algorithm behaves when given 'perfect' data.
I used 3D CAD modelling software (Rhinoceros3D) and modelled my grids with absolute accuracy, i.e. the square sizes and the distances between them are exactly 10 mm. My calibration pattern lies on the OXY plane. Because the CAD software provides a perspective view, I can render the scene on screen exactly as it would appear in reality. So I generated images just as I would capture them in the real world.
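For the asymmetric 4 x 11 circle grid, the ideal object points follow the staggered layout used by OpenCV's calibration sample; a small sketch, assuming the 10 mm spacing from the modelled grid maps onto the sample's spacing parameter:

```python
import numpy as np

cols, rows = 4, 11   # asymmetric circle grid, as in the post
spacing = 10.0       # mm; assumed to match the post's 10 mm pitch

# Staggered layout from OpenCV's calibration sample for
# CALIB_CB_ASYMMETRIC_GRID: odd rows are shifted by one spacing unit.
objp = np.array([[(2 * j + i % 2) * spacing, i * spacing, 0.0]
                 for i in range(rows) for j in range(cols)],
                dtype=np.float32)

print(objp.shape)   # (44, 3)
```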
This scenario is the perfect case - the pattern is absolutely precise, there is no distortion and there is no camera in the world to produce such good results.<br>
Chessboard 8x10<br>
https://www.dropbox.com/sh/9zbzk6bqekih8il/AADUZvxwd5PdmGXauCJSHFMwa?dl=0
<br>and<br>
Asymmetric grid 4 x 11<br>
https://www.dropbox.com/sh/9abr79py4z9hf3x/AABj0ez5_bxL4rsFLjxKwjsma?dl=0
My next step is to calibrate the camera. I passed the images to **calibrateCamera** and got a reprojection error of **0.115208 px** for the chessboard and **0.030177 px** for the asymmetric grid.
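For reference, the value **calibrateCamera** returns is the root-mean-square distance, in pixels, between the detected corners and the reprojected object points. A toy numpy check with made-up residuals:

```python
import numpy as np

# Hypothetical detected vs. reprojected corner positions (pixels).
detected = np.array([[100.0, 200.0], [150.0, 200.0], [100.0, 250.0]])
reprojected = detected + np.array([[0.1, 0.0], [0.0, -0.1], [-0.1, 0.0]])

# RMS reprojection error: sqrt of the mean squared point-to-point distance.
rms = np.sqrt(np.mean(np.sum((detected - reprojected) ** 2, axis=1)))
print(rms)   # ≈ 0.1, since every residual here has magnitude 0.1 px
```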
Then I need to evaluate how good the calibration is. For the same set of images I use **solvePnP** (I also tried **solvePnPRansac**, with the same results) to locate the camera. I solve from scratch, with no initial guess for the camera pose (as if the camera were at a new position in space). Using the resulting rotation (**rot**) and translation (**trans**), I construct a Cartesian coordinate system, cast a ray from the camera origin through each **UNDISTORTED** image point, and intersect it with the OXY plane. <br>
Ideally I would expect these rays to intersect the plane very accurately (at points (0, 0), (0, 10), (10, 0), etc.).
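The back-projection step reduces to pure geometry. A minimal sketch with an assumed pose and intrinsics; in the real pipeline R and t come from solvePnP (via Rodrigues) and the pixel is undistorted first:

```python
import numpy as np

def intersect_oxy(R, t, K, uv):
    """Cast a ray from the camera centre through pixel uv (already
    undistorted) and intersect it with the z = 0 (OXY) plane."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Ray direction in camera coordinates (normalised image plane).
    d_cam = np.array([(uv[0] - cx) / fx, (uv[1] - cy) / fy, 1.0])
    # Camera centre and ray direction expressed in world coordinates.
    C = -R.T @ t
    d = R.T @ d_cam
    s = -C[2] / d[2]          # ray parameter where z becomes 0
    return C + s * d

# Assumed example: camera 100 mm in front of the plane, looking straight at it.
R = np.eye(3)
t = np.array([0.0, 0.0, 100.0])
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
p = intersect_oxy(R, t, K, (420.0, 240.0))
print(p)   # a pixel 100 px right of centre maps to (10, 0, 0) on the plane
```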
The problem is that I get a significant offset of around 0.15 mm, which means my camera localization in 3D is wrong. I want to use this as the basis for camera-projector calibration, but if I get such a big error in the 'perfect' scenario, I will never get good results with a real camera/projector.
Another test I did:<br>
1. For every image, once we have the camera matrix and distortion coefficients, locate the camera position using solvePnP.<br>
2. Intersect only the ray through the first detected corner of the pattern with the plane. (It should hit OXY at (0, 0).)<br>
3. Evaluate the standard deviation of the distances from all of these intersection points to the origin (0, 0) - LOCATION accuracy<br>
4. Evaluate the standard deviation of the distances from all of these points to their average point - SYSTEMATIC accuracy<br>
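Steps 3 and 4 above can be sketched directly (the intersection points below are hypothetical; note `np.std` computes the population deviation, ddof=0):

```python
import numpy as np

def location_and_systematic(points):
    """points: (N, 2) intersections of the first-corner rays with OXY.
    LOCATION:   std of distances to the expected origin (0, 0).
    SYSTEMATIC: std of distances to the points' own mean (spread only)."""
    points = np.asarray(points, dtype=float)
    d_origin = np.linalg.norm(points, axis=1)
    d_mean = np.linalg.norm(points - points.mean(axis=0), axis=1)
    return d_origin.std(), d_mean.std()

# Hypothetical example: two intersection points, in mm.
loc, sys_ = location_and_systematic([(0.2, 0.0), (0.0, 0.1)])
print(loc, sys_)
```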
The problem is that I get **0.165423 mm** for LOCATION accuracy and **0.035441 mm** for SYSTEMATIC accuracy.
These errors are too high. I would expect both LOCATION and SYSTEMATIC accuracy to be on the order of 0.0001 mm for the data set provided.
***My question is: can someone run the images from the links through their own implementation and let me know what results they get?***
Maybe I am missing something in my implementation, but I truly believe we should get 'perfect' results when we provide 'perfect' data.
I will be really grateful for the help.
P.S. - I am using OpenCV 2.4.10. Has anything been improved in these algorithms in 3.0?
Thanks a lot