OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers (en). Copyright OpenCV foundation (http://www.opencv.org), 2012-2018. Fri, 24 May 2019 06:43:40 -0500

How to get undistorted point
http://answers.opencv.org/question/213439/how-to-get-undistorded-point/

Hello,
I use these lines to undistort fisheye images:
cv::fisheye::estimateNewCameraMatrixForUndistortRectify(
    camera_matrix, distortion_coefficients, image.size(),
    cv::Mat::eye(3, 3, CV_32F), new_camera_matrix,
    static_cast<double>(balance), image.size(),
    static_cast<double>(distance));
cv::fisheye::initUndistortRectifyMap(
    camera_matrix, distortion_coefficients, new_camera_matrix,
    R, image.size(), CV_32F, map1, map2);
So I have my distorted image and my undistorted image. For one coordinate in the distorted image I want to get the corresponding coordinate in the undistorted image.
How can I get that coordinate? Is there a function in OpenCV to do that?
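In C++, cv::fisheye::undistortPoints(distorted, undistorted, K, D, R, P) performs exactly this point-wise mapping. To show what it computes, here is a numpy sketch of the fisheye model and its inversion; the camera matrix, coefficients, and iteration count below are illustrative assumptions, not values from the post:

```python
import numpy as np

def fisheye_distort_point(xu, yu, K, D):
    # Forward fisheye model: pinhole-normalised (xu, yu) -> distorted pixel.
    k1, k2, k3, k4 = D
    r = np.hypot(xu, yu)
    theta = np.arctan(r)
    theta_d = theta * (1 + k1*theta**2 + k2*theta**4 + k3*theta**6 + k4*theta**8)
    scale = theta_d / r if r > 1e-8 else 1.0
    return K[0, 0]*xu*scale + K[0, 2], K[1, 1]*yu*scale + K[1, 2]

def fisheye_undistort_point(u, v, K, D, P):
    # Inverse mapping: distorted pixel -> pixel in the undistorted image
    # built with new camera matrix P. Solves theta_d = theta*(1 + k1*theta^2
    # + ...) for theta by fixed-point iteration, then reprojects with P.
    k1, k2, k3, k4 = D
    xd = (u - K[0, 2]) / K[0, 0]
    yd = (v - K[1, 2]) / K[1, 1]
    theta_d = np.hypot(xd, yd)
    theta = theta_d  # initial guess
    for _ in range(20):
        theta = theta_d / (1 + k1*theta**2 + k2*theta**4 + k3*theta**6 + k4*theta**8)
    scale = np.tan(theta) / theta_d if theta_d > 1e-8 else 1.0
    return P[0, 0]*xd*scale + P[0, 2], P[1, 1]*yd*scale + P[1, 2]
```

With R = identity, passing new_camera_matrix as P should give the coordinate of the point in the image produced by remapping with map1/map2.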
dev4all12358 - Fri, 24 May 2019 06:43:40 -0500 - http://answers.opencv.org/question/213439/

Re-distorting a set of points after camera calibration
http://answers.opencv.org/question/148670/re-distorting-a-set-of-points-after-camera-calibration/

I am working on a project in Python to calibrate a small thermal camera sensor (FLIR Lepton). Because of the limited resolution, the initial distortion removal is not very exact. By using an iterative method I should be able to refine this calibration (for those of you with access to scientific articles, [see this link](http://ieeexplore.ieee.org/abstract/document/6738450/?reload=true)).
This requires me to take the following steps:
1. Use a set of images of a calibration pattern to estimate the initial distortion
2. Undistort the images
3. Apply a perspective correction to the undistorted images
4. Re-estimate the calibration point positions
5. Remap these refined calibration points back to the original images
6. Use the refined points to re-estimate the distortion
7. Repeat until the RMS-error converges
I am stuck at step five. Below are the commands I used to remove the camera distortion from the original image using the camera matrix and the distortion coefficients.
mapx,mapy = cv2.initUndistortRectifyMap(mtx,dist,None,newcameramtx,(w,h),5)
dst = cv2.remap(img,mapx,mapy,cv2.INTER_LINEAR)
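As a side note on what these two calls do: the maps store, for each pixel of the undistorted output, the source coordinate in the distorted input. A minimal numpy sketch of remap with nearest-neighbour lookup (cv2.INTER_LINEAR additionally interpolates between the four neighbouring source pixels):

```python
import numpy as np

def remap_nearest(img, mapx, mapy):
    # dst(i, j) = src(mapy[i, j], mapx[i, j]), rounded to the nearest pixel
    xi = np.clip(np.rint(mapx).astype(int), 0, img.shape[1] - 1)
    yi = np.clip(np.rint(mapy).astype(int), 0, img.shape[0] - 1)
    return img[yi, xi]
```

Because the maps point from output to input, reading the maps at an undistorted pixel directly yields its distorted-image coordinate; that lookup is the forward distortion model evaluated at that pixel.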
I have not been able to figure out how to reverse these commands to remap the new point positions to the original image. So far I am able to do the following (roughly in order of the steps above):
![image description](/upfiles/14948903725868899.png)
![image description](/upfiles/1494890388681659.png)
![image description](/upfiles/14948904019693216.png)
![image description](/upfiles/14948904193841778.png)
![image description](/upfiles/14948904705247035.png)
![image description](/upfiles/1494890482533334.png)
I have looked online and found the same question a bunch of times, with several examples in C++, which I cannot fully comprehend and modify for my purposes. I have tried the solution suggested by [this post](https://stackoverflow.com/questions/21615298/opencv-distort-back/24231047#24231047) but this has not yielded the desired results; see the last image above. Here is my code for that solution:
def distortBackPoints(x, y, cameraMatrix, dist):
    fx = cameraMatrix[0, 0]
    fy = cameraMatrix[1, 1]
    cx = cameraMatrix[0, 2]
    cy = cameraMatrix[1, 2]
    k1 = dist[0][0] * -1
    k2 = dist[0][1] * -1
    k3 = dist[0][4] * -1
    p1 = dist[0][2] * -1
    p2 = dist[0][3] * -1
    x = (x - cx) / fx
    y = (y - cy) / fy
    r2 = x*x + y*y
    xDistort = x * (1 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2)
    yDistort = y * (1 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2)
    xDistort = xDistort + (2 * p1 * x * y + p2 * (r2 + 2 * x * x))
    yDistort = yDistort + (p1 * (r2 + 2 * y * y) + 2 * p2 * x * y)
    xDistort = xDistort * fx + cx
    yDistort = yDistort * fy + cy
    return xDistort, yDistort
Then I use this code to call the function:

corners2 = []
for point in corners:
    x, y = distortBackPoints(point[0][0], point[0][1], newcameramtx, dist)
    corners2.append([x, y])
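For comparison, the approach usually suggested for re-distorting points is not to negate the coefficients but to run the forward distortion model: normalise each refined point with newcameramtx, distort it with the original dist, and project back with mtx. A numpy sketch under that assumption (variable names mirror the post; the function itself is not from it):

```python
import numpy as np

def redistort_points(points, newcameramtx, mtx, dist):
    # Map points from the undistorted image back into the original image
    # by applying the forward (plumb-bob) distortion model.
    k1, k2, p1, p2, k3 = np.asarray(dist).ravel()[:5]
    out = []
    for u, v in points:
        # normalise with the matrix the undistorted image was built with
        x = (u - newcameramtx[0, 2]) / newcameramtx[0, 0]
        y = (v - newcameramtx[1, 2]) / newcameramtx[1, 1]
        r2 = x*x + y*y
        radial = 1 + k1*r2 + k2*r2*r2 + k3*r2*r2*r2
        xd = x*radial + 2*p1*x*y + p2*(r2 + 2*x*x)
        yd = y*radial + p1*(r2 + 2*y*y) + 2*p2*x*y
        # project back with the original camera matrix
        out.append((mtx[0, 0]*xd + mtx[0, 2], mtx[1, 1]*yd + mtx[1, 2]))
    return out
```

The same result should be obtainable from cv2.projectPoints by feeding the normalised coordinates as 3D points with z = 1 and zero rvec/tvec, since that applies the distortion model and then the camera matrix.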
I am new to OpenCV and computer vision, so my knowledge of the algebra behind these solutions is limited. Any hands-on examples or corrections to my current code would be greatly appreciated.
Kind regards,
Bart
bart.p1990 - Mon, 15 May 2017 17:25:29 -0500 - http://answers.opencv.org/question/148670/

Getting a fully blurred undistorted image from cv::undistort
http://answers.opencv.org/question/104083/getting-a-fully-blurred-undistored-image-from-cvundistort/

Hello,
I am new to OpenCV and I am trying to undistort my camera's frames. I previously used the calibration program from OpenCV to find the elements of the CameraMatrix and distCoeffs, and I wrote the following program:
#include <string>
#include <iostream>
#include <opencv/cv.hpp>

using namespace std;
using namespace cv;

const int FRAME_WIDTH = 640;
const int FRAME_HEIGHT = 480;

int main()
{
    Mat frame1, frame2, frame1undist;
    double data[3][3];
    double data2[5];
    data[0][0] = 9.5327626068874099e+02;
    data[0][1] = 0.0;
    data[0][2] = 320.0;
    data[1][0] = 0.0;
    data[1][1] = 9.5327626068874099e+02;
    data[1][2] = 240.0;
    data[2][0] = 0.0;
    data[2][1] = 0.0;
    data[2][2] = 1.0;
    data2[0] = -1.1919013558906022e-01;
    data2[1] = -2.9472820562856015e+00;
    data2[2] = 0.0;
    data2[3] = 0.0;
    data2[4] = -1.9208489842061063e+01;
    double avg_reprojection_error = 3.5640854681839190e-01;
    Mat CameraMatrix(3, 3, CV_64F, &data);
    Mat NewCameraMatrix;
    Mat distCoeffs(1, 5, CV_64F, &data2);
    Mat Result1, Result2;
    double data3[3][3];
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) {
            data3[i][j] = 0;
            if (i == j) data[i][j] = 1;
        }
    }
    //Mat R(3,3,CV_32FC1,&data3);
    //initUndistortRectifyMap(CameraMatrix,distCoeffs,R,NewCameraMatrix,CvSize(FRAME_WIDTH,FRAME_HEIGHT),CV_32FC1,Result1,Result2);
    VideoCapture capture(0);
    if (!capture.isOpened()) return -1;
    capture.set(CV_CAP_PROP_FRAME_WIDTH, FRAME_WIDTH);
    capture.set(CV_CAP_PROP_FRAME_HEIGHT, FRAME_HEIGHT);
    while (true) {
        capture.read(frame1);
        //capture.read(frame2);
        undistort(frame1, frame1undist, CameraMatrix, distCoeffs);
        imshow("Frame 1", frame1);
        imshow("Frame 1 undistored", frame1undist);
        //imshow("Frame 2",frame2);
        waitKey(21);
    }
    return 0;
}
The results can be seen in the following screenshot:
![image description](/upfiles/14759493678538734.png)
As you can see, the undistorted frame is not even close to the "distorted" frame I get from my camera, and it also shows some weird symbols, which disappear if I change the int _type in the Mat() functions I use. I am quite confused about what is really going on. I thought the int _type I use for the Mat() function might be the reason, but I don't know how I could fix that.
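One thing that stands out in the code above (an observation, not a confirmed diagnosis): Mat(3, 3, CV_64F, &data) wraps the existing buffer instead of copying it, so the loop that initialises data3 but writes data[i][j] = 1 on the diagonal also overwrites fx and fy inside CameraMatrix. numpy views share storage the same way, which makes the effect easy to demonstrate:

```python
import numpy as np

# Build the "camera matrix" over an existing buffer, as Mat(3,3,CV_64F,&data) does.
data = np.array([[953.276, 0.0, 320.0],
                 [0.0, 953.276, 240.0],
                 [0.0, 0.0, 1.0]])
camera_matrix = data.view()   # no copy: shares the buffer with data

data[0, 0] = 1.0              # analogue of `if (i == j) data[i][j] = 1;`
print(camera_matrix[0, 0])    # prints 1.0: fx inside the matrix changed too
```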
I am using Code::Blocks on Ubuntu 14.04.
Thank you in advance for your answers and your time,
Chris

patrchri - Sat, 08 Oct 2016 13:01:13 -0500 - http://answers.opencv.org/question/104083/

initundistortrectifymap line 103 and 137 what is going on?
http://answers.opencv.org/question/73533/initundistortrectifymap-line-103-and-137-what-is-going-on/

Hi,
I'm having trouble understanding a line in the original source code of the function initUndistortRectifyMap(..). In the corresponding docs this part doesn't seem to be mentioned.
The code is on lines 103 and 137 of undistort.cpp:
[link text](https://github.com/Itseez/opencv/blob/master/modules/imgproc/src/undistort.cpp#L103)
It takes the product of the camera intrinsic matrix A and the rotation matrix, then inverts the result (all on line 103). That inverse is used on line 137, where its elements are read out. The results I get when using this code are excellent, but I just can't understand it or tie it into the documentation at:
[link text](http://docs.opencv.org/modules/imgproc/doc/geometric_transformations.html#initundistortrectifymap)
In particular, I don't see how the first three lines of equations in the docs correspond to the inverse of the camera matrix A and the rotation matrix:
![image description](http://docs.opencv.org/_images/math/8808430360ef87d99c3a5725cd2ba7d2852ba689.png)
Can some clever person put me right or point me at a doc that just explains that bit?
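The identity behind it is (A'R)^-1 = R^-1 A'^-1: multiplying a homogeneous pixel (u, v, 1) by that inverse first undoes the new camera matrix, which is exactly the doc's x = (u - c'x)/f'x and y = (v - c'y)/f'y, and then applies R^-1, the doc's third line. A numpy check with made-up values for A' and R:

```python
import numpy as np

# Hypothetical new camera matrix A' and rectification rotation R
A = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
c, s = np.cos(0.1), np.sin(0.1)
R = np.array([[c, 0.0, s],
              [0.0, 1.0, 0.0],
              [-s, 0.0, c]])

iR = np.linalg.inv(A @ R)          # what line 103 computes

u, v = 400.0, 300.0
# the documentation's first three equations, applied step by step
x = (u - A[0, 2]) / A[0, 0]
y = (v - A[1, 2]) / A[1, 1]
XYW_doc = np.linalg.inv(R) @ np.array([x, y, 1.0])

# line 137 reads the same numbers straight out of iR @ (u, v, 1)
XYW_code = iR @ np.array([u, v, 1.0])
```

So the single inverted product is just a compact way of evaluating those three documented equations for every output pixel.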
Thanks

ricor29 - Sun, 18 Oct 2015 06:01:19 -0500 - http://answers.opencv.org/question/73533/