Issue with fisheye camera calibration

asked 2017-09-14 08:44:50 -0600 by linengmiao

updated 2017-09-14 08:49:40 -0600

Hello

I am trying to calibrate a camera with a fisheye lens, so I used OpenCV's fisheye module, but I keep getting strange results no matter which distortion parameters I fix. This is the input image I use: https://i.imgur.com/apBuAwF.png

The red circles indicate the corners I use to calibrate the camera.

This is the best output I could get: https://imgur.com/a/XeXk5

I currently don't know the camera sensor dimensions by heart, but based on the focal length in pixels computed in my intrinsic matrix, I deduce my sensor size is approximately 3.3 mm (assuming a physical focal length of 1.8 mm), which seems realistic to me. Yet when I undistort my input image I get nonsense. Could someone tell me what I may be doing incorrectly?

The matrices and RMS error output by the calibration:

K:[263.7291703200009, 0, 395.1618975493187;
 0, 144.3800397321767, 188.9308218101271;
 0, 0, 1]

D:[0, 0, 0, 0]

rms: 9.27628

my code:

#include <opencv2/opencv.hpp>
#include "opencv2/core.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/ccalib/omnidir.hpp"

using namespace std;
using namespace cv;

vector<vector<Point2d> > points2D;
vector<vector<Point3d> > objectPoints;

Mat src;

//so that I don't have to select them manually every time
void initializePoints2D()
{
    points2D[0].push_back(Point2d(234, 128));
    points2D[0].push_back(Point2d(300, 124));
    points2D[0].push_back(Point2d(381, 126));
    points2D[0].push_back(Point2d(460, 127));
    points2D[0].push_back(Point2d(529, 137));
    points2D[0].push_back(Point2d(207, 147));
    points2D[0].push_back(Point2d(280, 147));
    points2D[0].push_back(Point2d(379, 146));
    points2D[0].push_back(Point2d(478, 153));
    points2D[0].push_back(Point2d(551, 165));
    points2D[0].push_back(Point2d(175, 180));
    points2D[0].push_back(Point2d(254, 182));
    points2D[0].push_back(Point2d(377, 185));
    points2D[0].push_back(Point2d(502, 191));
    points2D[0].push_back(Point2d(586, 191));
    points2D[0].push_back(Point2d(136, 223));
    points2D[0].push_back(Point2d(216, 239));
    points2D[0].push_back(Point2d(373, 253));
    points2D[0].push_back(Point2d(534, 248));
    points2D[0].push_back(Point2d(624, 239));
    points2D[0].push_back(Point2d(97, 281));
    points2D[0].push_back(Point2d(175, 322));
    points2D[0].push_back(Point2d(370, 371));
    points2D[0].push_back(Point2d(578, 339));
    points2D[0].push_back(Point2d(662, 298));


    for (size_t j = 0; j < points2D[0].size(); j++)
    {
        circle(src, points2D[0].at(j), 5, Scalar(0, 0, 255), 1, 8, 0);
    }

    imshow("src with circles", src);
    waitKey(0);
}

int main(int argc, char** argv)
{
    Mat srcSaved;

    src = imread("images/frontCar.png");
    resize(src, src, Size(), 0.5, 0.5);
    src.copyTo(srcSaved);

    vector<Point3d> objectPointsRow;
    vector<Point2d> points2DRow;
    objectPoints.push_back(objectPointsRow);
    points2D.push_back(points2DRow);

    for(int i=0; i<5;i++)
    {

        for(int j=0; j<5;j++)
        {
            objectPoints[0].push_back(Point3d(5*j,5*i,1));        
        }
    }

    initializePoints2D();
    cv::Matx33d K;
    cv::Vec4d D;
    std::vector<cv::Vec3d> rvec;
    std::vector<cv::Vec3d> tvec;


    int flag = 0;
    flag |= cv::fisheye::CALIB_RECOMPUTE_EXTRINSIC;
    flag |= cv::fisheye::CALIB_CHECK_COND;
    flag |= cv::fisheye::CALIB_FIX_SKEW; 
    flag |= cv::fisheye::CALIB_FIX_K1; 
    flag |= cv::fisheye::CALIB_FIX_K2; 
    flag |= cv::fisheye::CALIB_FIX_K3 ...
(more)

Comments

@berak, what would you suggest I try if this is the only image I currently have? I think the points I chose span the entire image fairly well, which is why I expected to obtain an image that is at least better, even if not completely undistorted. What I currently obtain is nonsense.

linengmiao (2017-09-14 08:54:15 -0600)

Hmm, maybe you should get a real checker (or circle) board and not try to abuse the carpet? As I see it, there might be a million different ways to map your assorted 2D points to the line crossings in the image, but the calibration needs exactly one canonical mapping.

berak (2017-09-14 09:05:05 -0600)

btw, images on this site go like this: ![](image_url)

berak (2017-09-14 09:07:04 -0600)

@berak, I don't see what difference placing a checkerboard in front of the camera would make. At the end of the day the goal, I think, is to get corner coordinates that sample the distortion radii. The carpet is to some extent the same as a checkerboard; the only difference, again I think, is that the corners on the carpet have fewer high-frequency edges than those on a black-and-white checkerboard.

linengmiao (2017-09-14 09:14:12 -0600)

You need more points. Many more points. Especially points that are not all on one plane. Having all the points on one plane is actually a singularity, and the calibration cannot be accurately estimated without a prior. If you use a chessboard or circle pattern, you can more easily tilt it relative to the camera and still identify which point is which, something that is much harder with the carpet.

Tetragramm (2017-09-14 22:55:05 -0600)

@Tetragramm, I ended up using this image with a chessboard: https://imgur.com/a/WlLBR provided by this website: https://sites.google.com/site/scarabo... But the results are still very poor: diagonal lines, like the other output image I posted above. I now have 40 points (more than the original 25) and a chessboard, yet the results are bad. What do you suggest?

These were the settings that gave me the lowest RMS (0.9) with this second image:

int flag = 0;
flag |= cv::fisheye::CALIB_RECOMPUTE_EXTRINSIC;
flag |= cv::fisheye::CALIB_CHECK_COND;
flag |= cv::fisheye::CALIB_FIX_SKEW; 
flag |= cv::fisheye::CALIB_FIX_K3; 
flag |= cv::fisheye::CALIB_FIX_K4;
linengmiao (2017-09-15 03:08:53 -0600)

You're still using one image. One image is not enough information. You need information from more than one plane.

Tetragramm (2017-09-15 18:31:26 -0600)

@Tetragramm , I think I managed to undistort my image without a chessboard using only 1 image :). This input: https://imgur.com/a/ZmpmX gave me this output: https://imgur.com/a/ZmpmX

I now have an almost undistorted carpet. I might get better results by using more images, e.g. with a checkerboard in different positions, but this is much closer to what I expected. What do you honestly think about it?

linengmiao (2017-09-15 18:38:13 -0600)

Ah, well those are the same link so...

Tetragramm (2017-09-15 19:11:11 -0600)

@Tetragramm my bad, this is the input: https://imgur.com/apBuAwF The two identical links in my previous comment are the output.

linengmiao (2017-09-15 19:38:59 -0600)