
MMonty1960's profile - activity

2017-10-26 09:51:39 -0600 asked a question What is the principal point of undistorted images?

What is the principal point of undistorted images? I have determined both the cameraMatrix and the distortion coefficients of my camera.

2014-11-06 07:45:54 -0600 answered a question How to rescale distortion coefficients for mm unit into OpenCV ones

OK, perhaps I found the solution: according to the formulae shown in "Camera Calibration and 3D Reconstruction" in the OpenCV documentation, the distortion coefficients are applied to dimensionless quantities; in the end the corrected value is multiplied by the focal length f. Therefore, to be used in OpenCV, a correction-coefficient set for millimetre distances should be rescaled as follows:

k1 = k1_mm * f^2
k2 = k2_mm * f^4
k3 = k3_mm * f^6
p1 = p1_mm * f
p2 = p2_mm * f

In this way, with the undistort function, I get images that are well corrected for distortion.
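The rescaling above can be sketched as a small helper; this is just an illustration (the struct and function names are my own, and f is assumed to be the focal length in mm, the same unit as the iWitness coefficients):

```cpp
#include <cmath>

// Distortion coefficients in the OpenCV order (k1, k2, p1, p2, k3).
struct Distortion {
    double k1, k2, p1, p2, k3;
};

// Rescale mm-based coefficients (e.g. from iWitness) to the
// dimensionless coefficients OpenCV expects, with f the focal
// length in mm: each radial term k_n picks up f^(2n), each
// tangential term p_n picks up f.
Distortion rescaleMmToOpenCV(const Distortion& mm, double f)
{
    Distortion cv;
    cv.k1 = mm.k1 * f * f;           // k1_mm * f^2
    cv.k2 = mm.k2 * std::pow(f, 4);  // k2_mm * f^4
    cv.k3 = mm.k3 * std::pow(f, 6);  // k3_mm * f^6
    cv.p1 = mm.p1 * f;               // p1_mm * f
    cv.p2 = mm.p2 * f;               // p2_mm * f
    return cv;
}
```

The returned set can then be put into the distCoeffs vector passed to undistort.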

2014-10-30 03:45:09 -0600 asked a question How to rescale distortion coefficients for mm unit into OpenCV ones

From the OpenCV documentation I understand that the pixel is the unit used to model the distortion induced by an imperfect lens. For example, in the camera matrix the focal length must be given in pixels. On the other hand, I need to use the distortion coefficients k1, k2, p1, p2, k3 provided by the commercial software iWitness for further image processing with OpenCV. The unit used in iWitness is mm, so typically k1 is about 1e-04. This means that Dx = k1 * r^3, which for r = 10 mm gives 0.1 mm.

I tried to rescale the coefficients so they apply to distances in pixels, that is

k1_OpenCV = k1_iWitness * px_dimension^2, but I get a very small k1 value. For example, for a Nikon D800, px_dimension = 4.9e-03 mm, so k1_OpenCV = 2.4e-09, which is much smaller than the values reported by many users, such as 0.1.

Looking at the source of cv::initUndistortRectifyMap, it seems that x and y are, in a quite complicated manner, normalized to size.width and size.height respectively. Probably this is the key to the problem, but how is it done correctly?

How do I transform (k1,k2,p1,p2,k3)_iWitness into (k1,k2,p1,p2,k3)_OpenCV?

Thanks for any help.

2014-05-26 08:13:33 -0600 answered a question minVal of minMaxLoc is not what I read by direct reading at matchLoc point

Thank you berak! The problem was a type mismatch: the result matrix of matchTemplate is float, while minVal and maxVal are double.
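The pitfall can be reproduced without OpenCV: matchTemplate fills result with 32-bit floats (CV_32FC1), so reading the same bytes as a double (as result.at&lt;double&gt; does) reinterprets the memory and returns an unrelated value. A minimal sketch of the effect, with a hypothetical helper mimicking what Mat::at&lt;T&gt; does on a CV_32F matrix:

```cpp
#include <cstring>

// Reinterpret the bytes of a 32-bit float as type T, zero-padding
// the buffer, as happens when Mat::at<T> is called with the wrong T
// on a CV_32F matrix (illustrative helper, not an OpenCV API).
template <typename T>
T readAs(float value)
{
    unsigned char bytes[sizeof(double)] = {};  // zero-padded buffer
    std::memcpy(bytes, &value, sizeof(float));
    T out;
    std::memcpy(&out, bytes, sizeof(T));       // read as T
    return out;
}
```

Reading with the matching type (at&lt;float&gt; on a CV_32F result) returns the stored score; reading as double returns garbage like the 4.54747e-13 in the question.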

2014-05-23 12:09:15 -0600 asked a question minVal of minMaxLoc is not what I read by direct reading at matchLoc point

Following the tutorial, I created the result matrix:

/// Create the result matrix
int result_cols = img.cols - templ.cols + 1;
int result_rows = img.rows - templ.rows + 1;
result.create( result_cols, result_rows, CV_64F );

Then I call matchTemplate (with match_method = 0) and normalize:

matchTemplate( img, templ, result, match_method );
normalize( result, result, 0., 1., NORM_MINMAX, -1, Mat() );

Then I search for the minimum with minMaxLoc:

minMaxLoc( result, &minVal, &maxVal, &minLoc, &maxLoc, Mat() );

cout << " minVal = " << minVal << endl;

returns the value minVal = 4.54747e-13.

Conversely, printing the corresponding matrix element with

cout << "result(j,i) = " << result.at<double>(matchLoc.x, matchLoc.y) << endl;

I get result(j,i) = 4.19707e-07.

Am I reading the matrix element correctly? If not, how should I read (and write) the elements of the result matrix?

Thank you for the help.