
Naddox91's profile - activity

2020-03-27 07:18:50 -0600 received badge  Popular Question (source)
2014-09-12 12:40:08 -0600 asked a question cornerSubPix returning null Mat

Hi,

I have been calibrating my camera with a chessboard. I have a problem when I try to use the cornerSubPix() function: it returns an empty Mat. Here is the code:

    found_chess = Calib3d.findChessboardCorners(image, patternSize, actual_corners, Calib3d.CALIB_CB_ADAPTIVE_THRESH + Calib3d.CALIB_CB_NORMALIZE_IMAGE);
    if (found_chess)
    {
        // Pattern found, refine the corner locations
        Imgproc.cornerSubPix(gray, actual_corners, new Size(11,11), new Size(-1,-1), new TermCriteria(TermCriteria.EPS + TermCriteria.COUNT, 30, 0.1));
    }

The chessboard is being recognized, and when I draw the corners onto the image they are correctly placed. Is findChessboardCorners already doing the refinement?
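
For context, here is roughly my whole detection step (a minimal sketch; the colorImage Mat, the grayscale conversion and the 9x6 pattern size are placeholders, not my exact setup):

    // Sketch: detect the chessboard, then refine the detected corners on the same
    // grayscale image. colorImage and the 9x6 pattern size are placeholder values.
    Mat gray = new Mat();
    Imgproc.cvtColor(colorImage, gray, Imgproc.COLOR_BGR2GRAY);

    Size patternSize = new Size(9, 6);
    MatOfPoint2f actual_corners = new MatOfPoint2f();

    boolean found_chess = Calib3d.findChessboardCorners(gray, patternSize, actual_corners,
            Calib3d.CALIB_CB_ADAPTIVE_THRESH + Calib3d.CALIB_CB_NORMALIZE_IMAGE);

    if (found_chess)
    {
        // cornerSubPix refines the detected corner locations in place
        Imgproc.cornerSubPix(gray, actual_corners, new Size(11, 11), new Size(-1, -1),
                new TermCriteria(TermCriteria.EPS + TermCriteria.COUNT, 30, 0.1));
        Calib3d.drawChessboardCorners(colorImage, patternSize, actual_corners, found_chess);
    }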

Thanks in advance

2014-09-10 13:31:19 -0600 asked a question Error transforming image_points into object_points after Calibration

Hi,

I have been struggling to convert image_points (2D) to object_points (3D) after calibrating my camera. The calibration completes and the reprojection error is below 0.5. If I use projectPoints() the results are accurate (3D -> 2D), but if I invert the process no valid results come out.

I have followed this "tutorial"

Here is my code:

     Calib3d.solvePnP(objectpoint, actual_corners, cameraMatrix, distCoeffs, rvec, tvec);
     Calib3d.Rodrigues(rvec, rotationMatrix);


     Mat rotation_inv = rotationMatrix.inv();
     Mat camera_inv = cameraMatrix.inv(); 
     Mat inter = new Mat();
     Mat temp1 = new Mat();

     //get s with a point already known: (0,0,0) --> (x,y,z)
     //Formula:
     //  tempMat  = rotationMatrix.inv() * cameraMatrix.inv() * uvPoint;
     //  tempMat2 = rotationMatrix.inv() * tvec;
     //  s  = Zconst + tempMat2.at<double>(2,0);  // Zconst is the known plane height (285 in the original example, 0 here)
     //  s /= tempMat.at<double>(2,0);

     Mat uv = new Mat(3, 1, CvType.CV_64FC1);
     uv.put(0, 0, all_corners.get(5).x);
     uv.put(1, 0, all_corners.get(5).y);
     uv.put(2, 0, 1);

     //In the Java/Android bindings, Mat multiplication is done with Core.gemm
     Core.gemm(rotation_inv, camera_inv, 1, Mat.zeros(3, 4, CvType.CV_64FC1), 0, inter, 0);
     Core.gemm(inter, uv, 1, Mat.zeros(3, 4, CvType.CV_64FC1), 0, temp1, 0);


     Mat temp2 = new Mat();
     //temp2 = rotationMatrix.inv() * tvec
     Core.gemm(rotation_inv, tvecs.get(0), 1, Mat.zeros(3, 4, CvType.CV_64FC1), 0, temp2, 0);

     //take element (2,0) of each intermediate result
     double[] s1 = temp2.get(2, 0);
     double[] s2 = temp1.get(2, 0);

     double s_final = s1[0] / s2[0]; // s comes out very high

     Mat point = new Mat();
     //Conversion 2D to 3D
     //Formula = rotationMatrix.inv() * (s * cameraMatrix.inv() * uvPoint - tvec)
     //Multiplying camera_matrix_inv * s
     Mat result = new Mat(3, 3, CvType.CV_64FC1);
     for (int i = 0; i < 3; i++)
     {
         for (int j = 0; j < 3; j++)
         {
             double[] res = camera_inv.get(i, j);
             result.put(i, j, res[0] * s_final);
         }
     }
     //Multiply result * uvPoint
     Core.gemm(result, uv, 1, Mat.zeros(3, 4, CvType.CV_64FC1), 0, point, 0);
     Mat p = new Mat();
     //Subtract tvec
     Core.subtract(point, tvecs.get(0), p);

     Mat fpoint = new Mat();
     //Multiply rotation_inv * point
     Core.gemm(rotation_inv, point, 1, Mat.zeros(3, 4, CvType.CV_64FC1), 0, fpoint, 0);

     System.out.println( "S final : " + s_final);
     System.out.println( "Size1 : " + s1.length);
     System.out.println( "Size2 : " + s2.length);

     double[] f1 = fpoint.get(0,0);
     double[] f2 = fpoint.get(1,0);
     double[] f3 = fpoint.get(2, 0);
     //No valid solution
     System.out.println( "P final : "+  f1[0] + " " +  f2[0] + " "+  f3[0] + " ");

s comes out way too high; is that a valid result? The final point is not valid either, since Z = f3[0] is not equal to 0. The end result for the point objectpoint(0,0,0) is (172, 5344, 3224), which is clearly wrong.

I have no clue why this happens, since I am new to camera calibration (projectPoints works). I would really appreciate some advice or an alternative solution.
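
To make the formula in the comments easier to follow, here is the same back-projection written as one small helper method (a sketch only: the name backProjectToPlane and its parameter names are placeholders, and I am assuming the board plane sits at Z = zConst = 0):

    // Sketch of the back-projection formula, using org.opencv.core.{Core, Mat, CvType}.
    // Returns the 3x1 object point for an image point uvPoint = (u, v, 1)^T lying on
    // the plane Z = zConst.
    static Mat backProjectToPlane(Mat uvPoint, Mat cameraMatrix, Mat rotationMatrix,
                                  Mat tvec, double zConst)
    {
        Mat Kinv = cameraMatrix.inv();
        Mat Rinv = rotationMatrix.inv();

        // tempMat = rotationMatrix.inv() * cameraMatrix.inv() * uvPoint
        Mat RinvKinv = new Mat();
        Core.gemm(Rinv, Kinv, 1, new Mat(), 0, RinvKinv);
        Mat tempMat = new Mat();
        Core.gemm(RinvKinv, uvPoint, 1, new Mat(), 0, tempMat);

        // tempMat2 = rotationMatrix.inv() * tvec
        Mat tempMat2 = new Mat();
        Core.gemm(Rinv, tvec, 1, new Mat(), 0, tempMat2);

        // s = (zConst + tempMat2(2,0)) / tempMat(2,0)
        double s = (zConst + tempMat2.get(2, 0)[0]) / tempMat.get(2, 0)[0];

        // P = rotationMatrix.inv() * (s * cameraMatrix.inv() * uvPoint - tvec)
        Mat sKinvUv = new Mat();
        Core.gemm(Kinv, uvPoint, s, new Mat(), 0, sKinvUv);     // alpha = s applies the scale
        Mat shifted = new Mat();
        Core.subtract(sKinvUv, tvec, shifted);                   // subtract tvec before rotating back
        Mat objectPoint = new Mat();
        Core.gemm(Rinv, shifted, 1, new Mat(), 0, objectPoint);  // Z of the result should be ~zConst
        return objectPoint;
    }

I would call this with the uv Mat built above and the rotationMatrix/tvec from solvePnP, and expect the Z component of the result to come out close to 0.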

Thank you in advance

2014-09-02 19:13:02 -0600 commented answer OpenCV Java camera calibration

Thank you again for taking the time to answer my questions. I have really learned a lot in the past few days. I have two more questions: 1) I do not know the square size, since the image is being projected; would the pixel size of the image be sufficient? 2) After successfully calibrating the camera I want to process some characteristics of the images being displayed. How do I get those points without the camera distortion (projectPoints()), since I do not know which t_vec or r_vec to choose from the List that calibrateCamera() provided me? Thank you again for your time.
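
For question 2, this is roughly what I mean (a sketch; picking index 0 from rvecs/tvecs is arbitrary here, and choosing the right pose is exactly the part I am unsure about):

    // Sketch: project known 3D points with one of the per-view poses returned by
    // calibrateCamera. cameraMatrix, distCoeffs, rvecs and tvecs are the outputs of my
    // calibrateCamera call; assumes distCoeffs holds doubles, as calibrateCamera
    // returns by default. The choice of index 0 is arbitrary.
    MatOfPoint3f objectPoints3D = new MatOfPoint3f(new Point3(0, 0, 0), new Point3(1, 0, 0));
    MatOfPoint2f imagePoints2D = new MatOfPoint2f();

    Calib3d.projectPoints(objectPoints3D, rvecs.get(0), tvecs.get(0),
            cameraMatrix, new MatOfDouble(distCoeffs), imagePoints2D);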

2014-08-31 16:23:19 -0600 commented answer OpenCV Java camera calibration

Hi, thank you for your quick response. I really do not understand how OpenCV is able to calibrate the camera when the object_points I provide are just the board grid ((0,0,0), (1,0,0), etc.). Would this calibration let me translate image_points into object_points by inverting the process explained here, without problems and with little error? Thank you in advance.

2014-08-31 16:08:18 -0600 received badge  Supporter (source)
2014-08-31 12:49:59 -0600 asked a question OpenCV Java camera calibration

Hello,

I have been struggling to calibrate my camera in a Java application.

I have used the Python tutorial code, and when I translated it to my Java application the results are different (on the exact same image!). Even the camera matrix bears no resemblance. I don't understand why that may be.

Here is the Java code:

    found_chess = Calib3d.findChessboardCorners(gray, patternSize, actual_corners, Calib3d.CALIB_CB_ADAPTIVE_THRESH + Calib3d.CALIB_CB_NORMALIZE_IMAGE);
    if (found_chess)
    {
        corners.add(actual_corners);
        //cornerSubPix() irrelevant since all the corners are found in findChessboardCorners
        //Imgproc.cornerSubPix(gray, actual_corners, new Size(SIZE_Y*2+1,SIZE_X*2+1), new Size(-1,-1), new TermCriteria(TermCriteria.EPS+TermCriteria.MAX_ITER,30,0.1));

        MatOfPoint3f points;
        Mat a = new MatOfPoint3f();
        for (int x = 0; x < SIZE_X; ++x)
        {
            for (int y = 0; y < SIZE_Y; ++y)
            {
                points = new MatOfPoint3f(new Point3(y, x, 0));
                a.push_back(points);
            }
        }

        object_points.add(a);
        Calib3d.calibrateCamera(object_points, corners, gray.size(), cameraMatrix, distCoeffs, rvecs, tvecs);
    }

The corner positions are exactly the same in both implementations. I have no clue why there is this disparity between the two programs (maybe the way I am initializing the object_points?).
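
For reference, this is the kind of object_points initialization I am comparing against, with the square size made explicit (a sketch; SQUARE_SIZE is a placeholder, and whether this x/y loop order matches the corner order returned by findChessboardCorners is part of what I am unsure about):

    // Sketch: build the calibration grid with an explicit physical square size.
    // SQUARE_SIZE is a placeholder for the real edge length of one chessboard square.
    final double SQUARE_SIZE = 1.0;

    MatOfPoint3f boardPoints = new MatOfPoint3f();
    for (int y = 0; y < SIZE_Y; ++y)
    {
        for (int x = 0; x < SIZE_X; ++x)
        {
            boardPoints.push_back(new MatOfPoint3f(
                    new Point3(x * SQUARE_SIZE, y * SQUARE_SIZE, 0)));
        }
    }
    object_points.add(boardPoints);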

I hope someone can point me towards a solution.

Thank you in advance