Hello,
I have been struggling to calibrate my camera in a Java application.
I used the code from the OpenCV Python calibration tutorial, and when I ported it to my Java application the results are different (on the exact same image!). Even the camera matrices bear no resemblance to each other, and I don't understand why that might be.
Here is the Java code:
found_chess = Calib3d.findChessboardCorners(gray, patternSize, actual_corners,
        Calib3d.CALIB_CB_ADAPTIVE_THRESH + Calib3d.CALIB_CB_NORMALIZE_IMAGE);
if (found_chess)
{
    corners.add(actual_corners);

    // cornerSubPix() should be irrelevant since findChessboardCorners() already finds all the corners
    //Imgproc.cornerSubPix(gray, actual_corners, new Size(SIZE_Y*2+1, SIZE_X*2+1), new Size(-1,-1), new TermCriteria(TermCriteria.EPS+TermCriteria.MAX_ITER, 30, 0.1));

    // Build the 3D object points for this view: one point per inner corner, all with z = 0
    MatOfPoint3f a = new MatOfPoint3f();
    for (int x = 0; x < SIZE_X; ++x)
    {
        for (int y = 0; y < SIZE_Y; ++y)
        {
            a.push_back(new MatOfPoint3f(new Point3(y, x, 0)));
        }
    }
    object_points.add(a);

    Calib3d.calibrateCamera(object_points, corners, gray.size(), cameraMatrix, distCoeffs, rvecs, tvecs);
}
The corner positions are exactly the same in both implementations, so I have no clue where this disparity between the two programs comes from. Maybe it's the way I am initializing object_points?
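In case the ordering is the problem: as far as I can tell, the tutorial's np.mgrid[0:SIZE_X, 0:SIZE_Y].T.reshape(-1, 2) generates points with the first coordinate varying fastest and stored first, which is the opposite of my loop above. Here is a sketch of the alternative initialization I mean (assuming SIZE_X is the pattern width, i.e. the first component of patternSize; I am not sure this is correct):

import org.opencv.core.MatOfPoint3f;
import org.opencv.core.Point3;

// Sketch of an object-point initialization that (I think) matches the Python
// tutorial's np.mgrid ordering: x varies fastest and is stored first as (x, y, 0).
// Assumes sizeX is the pattern width (first component of patternSize).
static MatOfPoint3f buildObjectPoints(int sizeX, int sizeY)
{
    MatOfPoint3f objp = new MatOfPoint3f();
    for (int y = 0; y < sizeY; ++y)        // rows of the pattern
    {
        for (int x = 0; x < sizeX; ++x)    // columns vary fastest
        {
            objp.push_back(new MatOfPoint3f(new Point3(x, y, 0)));
        }
    }
    return objp;
}

If someone could confirm which of these two orderings matches the corner order that findChessboardCorners() returns, that alone would already help.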
I hope someone can point me to a solution.
Thank you in advance!