
OpenCV Java camera calibration

asked 2014-08-31 12:49:59 -0600

Naddox91


I have been struggling to calibrate my camera in a Java application.

I have used the Python tutorial code, and when I transformed it into my Java application the results are different (on the exact same image!). Even the camera matrix has no resemblance. I don't understand why that may be.

Here is the Java code:

    found_chess = Calib3d.findChessboardCorners(gray, patternSize, actual_corners,
            Calib3d.CALIB_CB_ADAPTIVE_THRESH + Calib3d.CALIB_CB_NORMALIZE_IMAGE);

    // cornerSubPix() irrelevant since all the corners are found in findChessboardCorners
    // Imgproc.cornerSubPix(gray, actual_corners, new Size(SIZE_Y*2+1, SIZE_X*2+1),
    //         new Size(-1, -1), new TermCriteria(TermCriteria.EPS + TermCriteria.MAX_ITER, 30, 0.1));

    MatOfPoint3f points;

    Mat a = new MatOfPoint3f();
    for (int x = 0; x < SIZE_X; ++x)
        for (int y = 0; y < SIZE_Y; ++y)
            points = new MatOfPoint3f(new Point3(y, x, 0));

    Calib3d.calibrateCamera(object_points, corners, gray.size(), cameraMatrix, distCoeffs, rvecs, tvecs);

The corner positions are exactly the same in both implementations. I have no clue why there is this disparity between the programs (maybe the way I am initializing the object_points?).

I hope someone can provide a solution.

Thank you in advance.


1 answer


answered 2014-08-31 14:00:51 -0600

Daniil Osokin


You should add an object-points entry for each set of found corners and perform calibration only after gathering a sufficient number of views (try all the default chess*.png images). Also, you have mixed up x and y in the Point3 constructor. Check the full pipeline in the Android sample (which uses the asymmetric circles grid pattern) or the C++ sample.
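To make the fix concrete, here is a minimal sketch of how the object-point grid could be generated once per view. The values of SIZE_X and SIZE_Y are assumed placeholders matching the question's code, and the grid is built as a plain float array; in the OpenCV Java API these (x, y, 0) triples would be wrapped in a MatOfPoint3f and added to the objectPoints list for every image in which findChessboardCorners succeeded, not just once.

```java
// Sketch, not a definitive implementation: build the planar object-point
// grid for one chessboard view. One (x, y, z) triple per inner corner,
// row-major, z = 0 because the board is flat.
public class ObjectPointsSketch {
    static final int SIZE_X = 9; // inner corners per row (assumed example)
    static final int SIZE_Y = 6; // inner corners per column (assumed example)

    static float[] buildGrid(float squareSize) {
        float[] pts = new float[SIZE_X * SIZE_Y * 3];
        int i = 0;
        for (int y = 0; y < SIZE_Y; ++y) {
            for (int x = 0; x < SIZE_X; ++x) {
                pts[i++] = x * squareSize; // x first, matching Point3(x, y, z)
                pts[i++] = y * squareSize;
                pts[i++] = 0f;
            }
        }
        return pts;
    }

    public static void main(String[] args) {
        float[] grid = buildGrid(1f); // unit squares, as in the tutorial
        System.out.println(grid.length);             // 162
        System.out.println(grid[3] + "," + grid[4]); // 1.0,0.0
    }
}
```

The same grid is appended for every successful view, since calibrateCamera expects one object-points entry per image-points entry.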



Hi, thank you for your quick response. I really do not understand how OpenCV is able to calibrate the camera given only object_points derived from the board layout ((0,0,0), (1,0,0), etc.). Would this calibration enable me to translate image points into object points, inverting the process explained here, with little error? Thank you in advance.

Naddox91 ( 2014-08-31 16:23:19 -0600 )

The key property of object points is their relative position, so for a regular chessboard they can be set as described in the tutorial. To undistort the picture this is sufficient. But if you want to use the intrinsic parameters later in code where real-world object sizes matter, you should factor the chessboard's square size into the object-point calculations. Check the samples; they both use it.
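As a small illustration of what factoring in the square size means (the 25 mm value is an assumed example, not from the thread): each corner at grid position (col, row) gets world coordinates scaled by the physical square size, so the tvecs returned by calibrateCamera come out in that same unit.

```java
// Sketch: map a chessboard corner at grid position (col, row) to world
// coordinates, assuming squares that measure squareSize millimetres.
public class SquareSizeSketch {
    static float[] objectPoint(int col, int row, float squareSize) {
        return new float[] { col * squareSize, row * squareSize, 0f };
    }

    public static void main(String[] args) {
        float[] p = objectPoint(3, 2, 25f); // 25 mm squares, assumed example
        System.out.println(p[0] + " " + p[1] + " " + p[2]); // 75.0 50.0 0.0
    }
}
```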

Daniil Osokin ( 2014-09-01 01:55:35 -0600 )

Thank you again for taking the time to answer my questions. I have really learned a lot in the past few days. I have two more questions: 1) I do not know the square size, since the image is being projected; would the pixel size of the image be sufficient? 2) After successfully calibrating the camera I want to extract some characteristics from the images being displayed. How do I get those points without the camera distortion (projectPoints()?), since I do not know which tvec or rvec to choose from the lists that calibrateCamera() gave me. Thank you again for your time.

Naddox91 ( 2014-09-02 19:13:02 -0600 )

Question Tools


Asked: 2014-08-31 12:49:59 -0600

Seen: 2,013 times

Last updated: Aug 31 '14