
diego's profile - activity

2016-01-31 20:01:49 -0600 received badge Student
2013-05-17 08:39:08 -0600 asked a question Good Calibration for Essential matrix estimation

Hello,

I think I'm having some problems with camera calibration. I'm using the sample calibration program with several (20) images taken with an iPhone. I get the camera intrinsic matrix K and the distortion coefficients. I then load those matrices into another program, which lets the user select matching features in two different undistorted images. From these matches I compute the fundamental matrix F and, using K, the essential matrix E = K.t() * F * K.

Afterwards, I test both F and E against the epipolar constraint, i.e.: x'Fx = 0 or x'Ex = 0, where x and x' are the corresponding points the user selected. For every matching point, the test for the fundamental matrix yields values very close to 0, while the one for the essential matrix returns values as large as 2694990. This is obviously wrong.
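For reference, this is roughly how I run that test for each selected pair (a minimal sketch; pts1/pts2 and the helper name are mine, not from the calibration sample):

#include <opencv2/core/core.hpp>

// Epipolar residual x'^T * M * x for a pair of matched pixels.
// M is either F or E (CV_64F); the result should be close to 0.
double epipolarResidual(const cv::Mat &M, cv::Point2d x, cv::Point2d xp)
{
    cv::Mat a = (cv::Mat_<double>(3, 1) << x.x,  x.y,  1.0);
    cv::Mat b = (cv::Mat_<double>(3, 1) << xp.x, xp.y, 1.0);
    return cv::Mat(b.t() * M * a).at<double>(0, 0);
}

For every match i, I print epipolarResidual(F, pts1[i], pts2[i]) and epipolarResidual(E, pts1[i], pts2[i]); the first stays near 0, the second does not.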

From this I can conclude that I must be doing something wrong. I believe the computation for E is right, so that must leave the calibration. What do I need to do for a good calibration?

Thanks

2013-05-14 12:51:04 -0600 asked a question Snap point to feature

Hello.

I'm developing a program that allows the user to select feature points from several images and match them. However, this introduces a lot of error into the input, since it is hard for the user to precisely select the desired pixel. On the other hand, I assume that the user wants to place the feature in areas of interest (building/window/door corners, among others). Is there anything I can use to "snap" the user-selected feature to the "correct" or most interesting point?
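In case it helps to see what I mean, here is a minimal sketch of the kind of "snapping" I have in mind, using cv::cornerSubPix to refine the clicked pixel towards the nearest corner (the window size and termination criteria are just guesses):

#include <vector>
#include <opencv2/imgproc/imgproc.hpp>

// Refine a raw user click towards the nearest corner-like point.
// 'gray' is the 8-bit grayscale image, 'click' the selected pixel.
cv::Point2f snapToCorner(const cv::Mat &gray, cv::Point2f click)
{
    std::vector<cv::Point2f> pts(1, click);
    cv::cornerSubPix(gray, pts,
                     cv::Size(10, 10),   // half-size of the search window
                     cv::Size(-1, -1),   // no dead zone
                     cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::MAX_ITER,
                                      30, 0.01));
    return pts[0];
}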

Thanks

2013-04-25 05:18:53 -0600 commented answer Pose estimation in unknown scene

All I have is the internal parameters. Is the scale of T related in any way to the focal length? I mean, the essential matrix is found with the help of the internal camera matrix, so I was wondering if T and the focal length could be related. Another question: if I have multiple images taken with the same camera and find the pose between pairs, would the translations between such pairs be proportional? I.e., would all the pairwise translations be in the same "unit"?

2013-04-23 11:32:30 -0600 asked a question Pose estimation in unknown scene

Hello!

I'm trying to retrieve the pose between two internally calibrated cameras in an unknown scene, and all the methods I have found so far (SVD of the essential matrix, vanishing points and others) return the translation only up to scale. However, I need an exact estimate of that translation in order to apply some reconstruction methods (plane sweep, voxel coloring, etc.).

How can I do this? Can anyone help me?

2013-04-12 05:24:17 -0600 answered a question 3d calibration

Hi!

I'm not sure about this, since I'm new to OpenCV myself, but I actually measure one chessboard square and put that value in there.
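Concretely, what I mean is using the measured square size (say 25 mm; the value is whatever you measure) when building the object points for calibration, something like:

#include <vector>
#include <opencv2/core/core.hpp>

// Chessboard corner positions in real-world units (mm here), using the
// measured square size instead of an arbitrary 1.0.
std::vector<cv::Point3f> chessboardCorners(cv::Size boardSize, float squareSize)
{
    std::vector<cv::Point3f> corners;
    for (int i = 0; i < boardSize.height; ++i)
        for (int j = 0; j < boardSize.width; ++j)
            corners.push_back(cv::Point3f(j * squareSize, i * squareSize, 0.0f));
    return corners;
}

As far as I understand, with squareSize in millimetres the translations that come out of calibrateCamera should be in millimetres too.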

2013-04-12 05:22:06 -0600 asked a question Pose extraction from multiple calibrated views

Hi

I have a camera and its camera matrix and distortion coefficients (I used the sample program). I took several overlapping pictures of the same scene, and now I want to compute their relative positions and rotations.

How can I do this?

My idea is to place the first image at the origin, looking along the z axis, and then relate each image i to the previous image i-1:

for (size_t i = 1; i < images.size(); ++i) {
    Mat previous = images[i-1];
    Mat current  = images[i];

    // Find and match feature points with SURF
    // Points in the previous image
    vector<Point2f> previous_image_points = ...;
    // Matching points in the current image
    vector<Point2f> current_image_points = ...;

    // Find the fundamental matrix
    Mat F = findFundamentalMat(previous_image_points, current_image_points,
                               FM_RANSAC, 0.1, 0.99);
    // Essential matrix. K => camera matrix
    Mat E = K.t() * F * K;

    // Decompose E to find the rotation and translation between the cameras
    SVD svd(E);
    Matx33d W(0, -1, 0,
              1,  0, 0,
              0,  0, 1);
    Matx33d Winv(0,  1, 0,
                -1,  0, 0,
                 0,  0, 1);

    Mat_<double> R = svd.u * Mat(W) * svd.vt;
    Mat_<double> t = svd.u.col(2);

    // Current image position and rotation matrix
    Matx34d rotPos(R(0,0), R(0,1), R(0,2), t(0),
                   R(1,0), R(1,1), R(1,2), t(1),
                   R(2,0), R(2,1), R(2,2), t(2));

    // Previous image position and rotation matrix
    // (for image 0 it is the identity)
    Matx34d previous_rot_pos = ...;

    // Accumulate the pose.
    // I know these are 3x4 matrices and can't be multiplied directly,
    // but I convert them to homogeneous before :)
    rotPos = previous_rot_pos * rotPos;
}

This code concatenates position and rotation matrices such that, in the end, their positions are all relative to the first one.
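By "convert them to homogeneous" I mean something like this helper (a sketch, with names of my own):

#include <opencv2/core/core.hpp>

// Lift a 3x4 [R|t] matrix to a 4x4 homogeneous matrix.
cv::Matx44d toHomogeneous(const cv::Matx34d &P)
{
    cv::Matx44d H = cv::Matx44d::eye();
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 4; ++c)
            H(r, c) = P(r, c);
    return H;
}

// Compose two poses and drop the last row again.
cv::Matx34d compose(const cv::Matx34d &previous, const cv::Matx34d &current)
{
    cv::Matx44d H = toHomogeneous(previous) * toHomogeneous(current);
    return cv::Matx34d(H(0,0), H(0,1), H(0,2), H(0,3),
                       H(1,0), H(1,1), H(1,2), H(1,3),
                       H(2,0), H(2,1), H(2,2), H(2,3));
}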

Thank you :)

2013-04-10 08:03:33 -0600 asked a question Building reconstruction using plane sweep

Hello everyone!

I'm new to OpenCV (only started this semester) although I already had some notions, mainly on Stereo Vision.

I'm currently working on a project that reconstructs buildings from a set of images. However, I'm having some problems, and I have some questions I hope someone can answer :D

What I'm doing is this:

1- Calibrate the camera using the tutorial example with a chessboard pattern. This gives me the intrinsic parameter matrix K and the distortion coefficients.

2- From a set of input images, I extract Canny edges and Hough lines. The Hough lines give the points where I expect to obtain better stereo matching among the images.

3- Next, I try to obtain the extrinsic parameters for each image. For this, I iterate over all the images and find feature points and correspondences between images i and i-1 using a SurfFeatureDetector and a BruteForceMatcher. I then find the fundamental matrix F and the essential matrix E with E = K^T * F * K. From E I can calculate the position and rotation of each image.

At this point I have all the images positioned in space, with the first image at the origin looking along positive Z.

4- Execute the Collins plane sweep using the positioned images and the Hough lines (interesting points). First, all interesting points from all images are projected onto a canonical plane (Z = z0) with the planar homography:

Hi = K [r1  r2  z0*r3 + t]

K is the intrinsic parameter matrix, r1, r2, r3 are the columns of the image rotation matrix, and t is the translation vector. Then, for each sweep plane along the z axis (Z = zi), I reproject the points from the canonical plane onto that plane using:

xi = (zi - Cz)/(z0 - Cz) * x0 + (1 - delta) * Cx
yi = (zi - Cz)/(z0 - Cz) * y0 + (1 - delta) * Cy

(xi, yi) is the reprojected point in the plane Z = zi, [Cx Cy Cz] is the camera position, and delta = (zi - Cz)/(z0 - Cz). Then I increment by one all the plane cells inside a radius around such points.

In the end, all the cells from each plane that have more than T votes are considered to contain a valid point in space, so I create a vertex there.
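For step 4, the projection and the plane-to-plane transfer I describe look roughly like this (a sketch; the function names are mine, and delta = (zi - Cz)/(z0 - Cz) as above):

#include <opencv2/core/core.hpp>

// Homography onto the canonical plane Z = z0: Hi = K * [r1  r2  z0*r3 + t],
// where r1, r2, r3 are the columns of the rotation matrix R.
cv::Matx33d canonicalHomography(const cv::Matx33d &K, const cv::Matx33d &R,
                                const cv::Vec3d &t, double z0)
{
    cv::Matx33d M(R(0,0), R(0,1), z0 * R(0,2) + t(0),
                  R(1,0), R(1,1), z0 * R(1,2) + t(1),
                  R(2,0), R(2,1), z0 * R(2,2) + t(2));
    return K * M;
}

// Transfer a point (x0, y0) on the canonical plane Z = z0 onto the sweep
// plane Z = zi, along the ray through the camera centre C = [Cx Cy Cz].
cv::Point2d reprojectToPlane(cv::Point2d p0, double z0, double zi, cv::Point3d C)
{
    double delta = (zi - C.z) / (z0 - C.z);
    return cv::Point2d(delta * p0.x + (1.0 - delta) * C.x,
                       delta * p0.y + (1.0 - delta) * C.y);
}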

So my questions are:

I'm using a Nikon SLR camera and I'm not sure if this is correct, since it has autofocus. However, in the image tags all the images had a focal length of 18 mm. I'm not sure whether that is the same focal length as the one in the intrinsic parameters.

In the matrix output by the calibration process (step 1) I get values on the order of 10^3. In what units are these values?

What is the purpose of the distortion coefficients and should I use them in my project? Where?

Am I calculating the Essential matrix and extrinsic parameters correctly? How can I be sure the results are good?

The building in the scene is about 20 m away from the cameras; however, I need a ... (more)