
axadiw's profile - activity

2018-09-24 21:23:27 -0600 received badge  Notable Question (source)
2017-02-02 09:36:27 -0600 received badge  Popular Question (source)
2015-01-27 12:54:49 -0600 received badge  Famous Question (source)
2014-10-19 00:24:29 -0600 received badge  Nice Question (source)
2014-07-09 22:30:09 -0600 received badge  Notable Question (source)
2014-04-20 05:33:51 -0600 received badge  Popular Question (source)
2014-02-27 08:00:17 -0600 received badge  Student (source)
2013-11-19 19:23:36 -0600 commented question SolvePnP detection errors [ios]

I've posted some thoughts about this question here: http://stackoverflow.com/questions/19849683/opencv-solvepnp-detection-problems

It looks like it's not a calibration problem; I still don't know what's causing it.

2013-11-19 19:21:21 -0600 received badge  Editor (source)
2013-11-08 05:38:50 -0600 received badge  Self-Learner (source)
2013-11-07 18:51:12 -0600 answered a question OpenCV : warpPerspective on whole image
2013-11-07 18:50:27 -0600 asked a question SolvePnP detection errors [ios]

Hi,

I've got problem with precise detection of markers using OpenCV.

I've recorded video presenting that issue: http://youtu.be/IeSSW4MdyfU

As you can see, the markers that I'm detecting are slightly shifted at some camera angles. I've read on the web that this may be a camera calibration problem, so I'll describe how I'm calibrating the camera; maybe you'll be able to tell me what I'm doing wrong.

At the beginning I'm collecting data from various images and storing the detected calibration corners in the _imagePoints vector, like this:

std::vector<cv::Point2f> corners;
_imageSize = image->size();

bool found = cv::findChessboardCorners(*image, _patternSize, corners);

if (found) {
    cv::Mat gray_image;
    cv::cvtColor(*image, gray_image, CV_RGB2GRAY);

    // refine the corner positions to sub-pixel accuracy on the grayscale image
    cv::cornerSubPix(gray_image, corners, cv::Size(11, 11), cv::Size(-1, -1),
                     cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));

    cv::drawChessboardCorners(*image, _patternSize, corners, found);
}

_imagePoints->push_back(corners);

Then, after collecting enough data, I'm calculating the camera matrix and distortion coefficients with this code:

std::vector< std::vector<cv::Point3f> > objectPoints;

for (size_t i = 0; i < _imagePoints->size(); i++) {
    const std::vector<cv::Point2f>& currentImagePoints = _imagePoints->at(i);
    std::vector<cv::Point3f> currentObjectPoints;

    // chessboard corners laid out on a unit grid in the Z = 0 plane
    for (size_t j = 0; j < currentImagePoints.size(); j++) {
        currentObjectPoints.push_back(
            cv::Point3f(j % _patternSize.width, j / _patternSize.width, 0));
    }

    objectPoints.push_back(currentObjectPoints);
}

std::vector<cv::Mat> rvecs, tvecs;

static CGSize size = CGSizeMake(_imageSize.width, _imageSize.height);
cv::Mat cameraMatrix = [_userDefaultsManager cameraMatrixwithCurrentResolution:size]; // previously detected matrix
cv::Mat coeffs = _userDefaultsManager.distCoeffs; // previously detected coeffs
cv::calibrateCamera(objectPoints, *_imagePoints, _imageSize, cameraMatrix, coeffs, rvecs, tvecs);

Results are like you've seen in the video.

What am I doing wrong? Is it an issue in the code? How many images should I use for calibration (right now I'm trying to collect 20-30 images before finishing calibration)?

Should I use images that contain wrongly detected chessboard corners, like this:

photo 1

or should I use only properly detected chessboards like these:

photo 2 photo 3

I've been experimenting with a circles grid instead of chessboards, but the results were much worse than they are now.

In case you're wondering how I'm detecting the marker: I'm using the solvePnP function:

solvePnP(modelPoints, imagePoints, [_arEngine currentCameraMatrix], _userDefaultsManager.distCoeffs, rvec, tvec);

with modelPoints specified like this:

    markerPoints3D.push_back(cv::Point3d(-kMarkerRealSize / 2.0f, -kMarkerRealSize / 2.0f, 0));
    markerPoints3D.push_back(cv::Point3d(kMarkerRealSize / 2.0f, -kMarkerRealSize / 2.0f, 0));
    markerPoints3D.push_back(cv::Point3d(kMarkerRealSize / 2.0f, kMarkerRealSize / 2.0f, 0));
    markerPoints3D.push_back(cv::Point3d(-kMarkerRealSize / 2.0f, kMarkerRealSize / 2.0f, 0));

and imagePoints are the coordinates of the marker corners in the processed image (I'm using a custom algorithm to find them).

2013-10-30 18:26:57 -0600 asked a question OpenCV : warpPerspective on whole image

I'm detecting markers on images captured by my iPad. Because I want to calculate translations and rotations between them, I want to change the perspective of these images so it looks like I'm capturing them from directly above the markers.

Right now I'm using

points2D.push_back(cv::Point2f(-6, -6));
points2D.push_back(cv::Point2f(6, -6));
points2D.push_back(cv::Point2f(6, 6));
points2D.push_back(cv::Point2f(-6, 6));

cv::Mat perspectiveMat = cv::getPerspectiveTransform(points2D, imagePoints);
cv::warpPerspective(*_image, *_undistortedImage, perspectiveMat, cv::Size(_image->cols, _image->rows));

Which gives me these results (look at the bottom-right corner for the result of warpPerspective):

photo 1 photo 2 photo 3

As you can probably see, the result image contains the recognized marker in its top-left corner. My problem is that I want to capture the whole image (without cropping) so I could detect other markers on that image later.

How can I do that? Maybe I should use rotation/translation vectors from solvePnP function?

2013-10-28 18:13:25 -0600 commented answer OpenCV + OpenGL: proper camera pose using solvePnP

Thanks man, it works. Actually, with your answer I've found out that the main problem with my solution was that the rotation matrix wasn't transposed when it should have been. Also, the whole process of inverting the matrices (R = R.t(); tvec = -R * tvec) wasn't necessary.

I've spent more than a week on that issue, thanks again :)

2013-10-28 18:09:47 -0600 received badge  Scholar (source)
2013-10-28 18:09:45 -0600 received badge  Supporter (source)
2013-10-26 17:49:13 -0600 asked a question OpenCV + OpenGL: proper camera pose using solvePnP

I've got a problem with obtaining a proper camera pose from the iPad camera using OpenCV.

I'm using a custom-made 2D marker (based on the ArUco library) - I want to render a 3D cube over that marker using OpenGL.

In order to receive the camera pose I'm using the solvePnP function from OpenCV.

According to THIS LINK I'm doing it like this:

cv::solvePnP(markerObjectPoints, imagePoints, [self currentCameraMatrix], _userDefaultsManager.distCoeffs, rvec, tvec);

tvec.at<double>(0, 0) *= -1; // I don't know why I have to do it, but translation in X axis is inverted

cv::Mat R;
cv::Rodrigues(rvec, R); // R is 3x3

R = R.t();  // rotation of inverse
tvec = -R * tvec; // translation of inverse

cv::Mat T(4, 4, R.type()); // T is 4x4
T(cv::Range(0, 3), cv::Range(0, 3)) = R * 1; // copies R into T
T(cv::Range(0, 3), cv::Range(3, 4)) = tvec * 1; // copies tvec into T
double *p = T.ptr<double>(3);
p[0] = p[1] = p[2] = 0;
p[3] = 1;

The camera matrix and distortion coefficients come from the findChessboardCorners calibration step, imagePoints are the manually detected corners of the marker (you can see them as a green square in the video posted below), and markerObjectPoints are hardcoded points that represent the marker corners:

markerObjectPoints.push_back(cv::Point3d(-6, -6, 0));
markerObjectPoints.push_back(cv::Point3d(6, -6, 0));
markerObjectPoints.push_back(cv::Point3d(6, 6, 0));
markerObjectPoints.push_back(cv::Point3d(-6, 6, 0));

Because the marker is 12 cm long in the real world, I've chosen the same size in the model for easier debugging.

As a result I'm receiving a 4x4 matrix T, which I'll use as the ModelView matrix in OpenGL. Using GLKit, the drawing function looks more or less like this:

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    // preparations
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    float aspect = fabsf(self.bounds.size.width / self.bounds.size.height);
    effect.transform.projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(39), aspect, 0.1f, 1000.0f);

    // set modelViewMatrix
    float mat[16];
    generateOpenGLMatFromOpenCVMat(T, mat); // custom helper that flattens the cv::Mat into a float[16]
    currentModelMatrix = GLKMatrix4MakeWithArrayAndTranspose(mat);
    effect.transform.modelviewMatrix = currentModelMatrix;

    [effect prepareToDraw];

    glDrawArrays(GL_TRIANGLES, 0, 36); // draw previously prepared cube
}

I'm not rotating everything by 180 degrees around the X axis (as was mentioned in the previously linked article), because it doesn't look necessary.

The problem is that it doesn't work! The translation vector looks OK, but the X and Y rotations are messed up :(

I've recorded a video presenting that issue:

http://www.youtube.com/watch?v=EMNBT5H7-os

I've tried almost everything (including inverting all axes one by one), but nothing actually works.

What should I do? How should I properly display that 3D cube? The translation/rotation vectors that come from solvePnP look reasonable, so I guess I'm not mapping these vectors to OpenGL matrices correctly.