OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright OpenCV foundation (http://www.opencv.org), 2012-2018. Thu, 03 Sep 2020 16:20:24 -0500

Omnidir undistort vs projectPoints
http://answers.opencv.org/question/234725/omnidir-undistort-vs-projectpoints/

I work with fisheye images (~180° FOV) and want to be able to convert back and forth between distorted and undistorted image coordinates. With the fisheye camera model I already have this working as expected, but I wanted to have a look at the omnidirectional camera model. However, trying to do a coordinate round trip in Python using
    out1 = cv2.omnidir.projectPoints(points, rvec, tvec, K, xi, D)

followed by

    out2 = cv2.omnidir.undistortPoints(out1, K, D, xi, None)
results in `out2` differing from `points` (beyond the obvious difference in dimensionality). Only if I manually set `xi = 0` do `points` and `out2` agree.
Am I misunderstanding something, is there a bug in my code, or is this an issue in the opencv_contrib implementation?
I already had a look at the source of these functions and the corresponding paper, but couldn't definitively figure out whether something is wrong.
However, comparing https://github.com/opencv/opencv_contrib/blob/master/modules/ccalib/src/omnidir.cpp#L152 and https://github.com/opencv/opencv_contrib/blob/master/modules/ccalib/src/omnidir.cpp#L333 puzzles me a bit. Shouldn't one of them be the inverse operation or am I totally wrong?
Doing the same with the fisheye camera model (where `projectPoints` and `distortPoints` are basically interchangeable) leads to the expected results.
I would be very thankful if anyone has a hint for me regarding this.
Code for reproducibility:

    import numpy as np
    import cv2

    K = np.array([[1.24440479e+03, -1.22794708e-01, 9.60388731e+02],
                  [0.00000000e+00, 1.24469754e+03, 9.59437737e+02],
                  [0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
    D = np.array([[-2.68369102e-01, 3.37814230e-02, 2.31238441e-04,
                   -2.66100513e-04]])
    xi = np.array([1.])  # the round trip works with np.array([0.])

    # pick a pixel and convert it to normalized camera coordinates
    x = np.array([500])
    y = np.array([500])
    x = (x - K[0, 2]) / K[0, 0]
    y = (y - K[1, 2]) / K[1, 1]
    points = np.array([x, y, np.ones_like(x)]).T

    # project with identity pose, then try to undo the projection
    rvec = np.zeros((3, 1))
    tvec = np.zeros((3, 1))
    out1, _ = cv2.omnidir.projectPoints(points.reshape(-1, 1, 3), rvec, tvec, K, xi, D)
    out2 = cv2.omnidir.undistortPoints(out1, K, D, xi, None)
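To pin down where `xi` enters, here is a minimal numpy sketch of my understanding of the unified (Mei) projection step. This is my own simplification: it ignores the distortion polynomial `D` completely, and `mei_project` is a made-up helper, not anything from opencv_contrib:

```python
import numpy as np

def mei_project(points, xi, K):
    """Unified (Mei) projection, distortion polynomial omitted.

    points: (N, 3) array in camera coordinates
    xi:     mirror parameter; xi = 0 collapses to the pinhole model
    K:      3x3 camera matrix (zero skew assumed here)
    """
    # 1. Project the points onto the unit sphere.
    Xs = points / np.linalg.norm(points, axis=1, keepdims=True)
    # 2. Re-project from a center shifted by xi along the z axis.
    x = Xs[:, 0] / (Xs[:, 2] + xi)
    y = Xs[:, 1] / (Xs[:, 2] + xi)
    # 3. Map to pixel coordinates (the distortion step would go between 2 and 3).
    return np.stack([K[0, 0] * x + K[0, 2], K[1, 1] * y + K[1, 2]], axis=1)

K = np.array([[1244.4, 0.0, 960.4],
              [0.0, 1244.7, 959.4],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.1, 0.2, 1.0]])

# xi = 0: the sphere normalization cancels in the division, so the result
# is the plain pinhole projection of (x/z, y/z).
p0 = mei_project(pts, 0.0, K)
pinhole = np.array([[1244.4 * 0.1 + 960.4, 1244.7 * 0.2 + 959.4]])

# xi = 1 gives a genuinely different mapping, so a round trip has to
# undo xi consistently on the way back.
p1 = mei_project(pts, 1.0, K)
```

With `xi = 0` the division by `Xs_z + xi` collapses to the pinhole projection, which would match my observation that the round trip only closes for `xi = 0`.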
pippo - Thu, 03 Sep 2020 16:20:24 -0500 - http://answers.opencv.org/question/234725/

Undistort single coordinates
http://answers.opencv.org/question/231631/undistort-single-coordinates/

Hi,
I am very new to all of this and don't really know where to start:
I have a camera with some sort of radial distortion, and I have a list of coordinates relative to the (distorted) video file (e.g. "at time t there is something happening at position (x, y) in the video").
I don't really care about the video itself, the coordinates (which are stored as (x,y) in a .txt file) are the only important bit to me.
Since I want to "map" those coordinates to a rectangular area (a computer monitor) in the recorded video, I need to undistort those coordinates first.
I read this [guide](https://docs.opencv.org/4.3.0/dc/dbb/tutorial_py_calibration.html) and was able to undistort the images. But how would I go about undistorting single (x,y) coordinates that I have as a list?
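From what I gathered so far, the underlying math seems to be an iterative inversion of the distortion model. Here is my own rough numpy attempt (possibly wrong; `undistort_point` is a made-up helper, and this is only a sketch of the idea, not the OpenCV source):

```python
import numpy as np

def undistort_point(u, v, K, dist, iters=10):
    """Undistort a single pixel under the (k1, k2, p1, p2, k3) model by
    fixed-point iteration -- a sketch of the idea behind cv2.undistortPoints,
    not the actual OpenCV implementation."""
    k1, k2, p1, p2, k3 = dist
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    xd = (u - cx) / fx                   # distorted, normalized
    yd = (v - cy) / fy
    x, y = xd, yd                        # initial guess
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
        dx = 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
        dy = p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
        x = (xd - dx) / radial           # invert xd = x * radial + dx
        y = (yd - dy) / radial
    return fx * x + cx, fy * y + cy      # undistorted pixel coordinates

# Made-up intrinsics; fabricate a distorted pixel from a known ideal point
# so the round trip can be checked.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.2, 0.05, 0.001, 0.001, 0.0])

x, y = 0.1, 0.05                         # ideal normalized point
r2 = x * x + y * y
radial = 1 + dist[0] * r2 + dist[1] * r2**2 + dist[4] * r2**3
xd = x * radial + 2 * dist[2] * x * y + dist[3] * (r2 + 2 * x * x)
yd = y * radial + dist[2] * (r2 + 2 * y * y) + 2 * dist[3] * x * y
u_dist, v_dist = 800.0 * xd + 320.0, 800.0 * yd + 240.0

u_undist, v_undist = undistort_point(u_dist, v_dist, K, dist)
# should recover roughly (800*0.1 + 320, 800*0.05 + 240) = (400, 280)
```

If that is right, each (x, y) from my .txt file could be pushed through something like this one at a time.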
Thanks in advance,
F.

umb - Thu, 25 Jun 2020 05:03:37 -0500 - http://answers.opencv.org/question/231631/

Camera calibration: Can it be "over-undistorted"?
http://answers.opencv.org/question/229530/camera-calibration-can-it-be-over-undistorted/

My earlier camera calibration attempts went very smoothly and gave good results. Now I wanted to get serious with it and calibrate each of my cameras (I'm using 3 of the same type in my system), each at its specific focus.
What I did before:
- used a 7x9 (inner corners) squares chessboard pattern with square size 10 mm
- printed on paper
- put on the floor to be hopefully very flat
- took ~20 pictures from various angles, trying to have the pattern appear in all areas of the image
- selected 12 best pictures (with respect to focus, movement, light)
- calibrated and created camera matrix and distortion vector
The result was good: the RMS reprojection error was `0.425` (which I believed meant "good"), and the undistorted images looked nice and undistorted (straight lines were straight).
Now I got my chessboard pattern printed on aluminium dibond by an online photo printing service, hoping to optimize the procedure. I used a 20 mm 7x9 pattern, but because I wanted to make it "beautiful" and gave the outer squares rounded ends (as I had seen in some example chessboard patterns), my high-resolution camera only recognizes a 5x7 pattern in it (my lower-resolution cameras somehow recognize 7x9 easily). Also, because of how the photo printing service works, my square size is not 20 mm but 17.08 mm, which I don't care much about.
![image description](/upfiles/15879344481160248.png)
Anyway, I put my perfectly straight 17.x mm 5x7 aluminium chessboard pattern under absolutely perfect surround LED lighting and did my photo session. I thought "more is more" and took many pictures, from which I selected 20.
**The result is devastating!**
The RMS value is between `0.95` and `1.15` for my 3 cameras (maybe not too bad?). But for 2 of the 3 results, the undistortion produces heavily warped images (of size 2048x1536) in which the actual undistorted part is concentrated in an area of around 300x200 pixels somewhere in the bottom-right or top-right corner. The undistortion there looks about right, but obviously something is wrong. This is what I would perhaps call **"over-undistorted"**.
![image description](/upfiles/15879335804569709.png)
Just by luck, I selected only 12 images to calculate the camera matrix and distortion vector and received a fairly good result. But there the RMS reprojection error was `~4.5` (which I believed was worse than the `0.4` I had with my first setup).
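As an aside, here is my understanding of what that RMS number measures, as a small numpy sketch (the corner values below are made up):

```python
import numpy as np

# Made-up detected vs. reprojected corner positions (pixels) for one view.
detected    = np.array([[100.0, 200.0], [150.0, 200.0], [100.0, 250.0]])
reprojected = np.array([[100.3, 199.8], [149.6, 200.1], [100.2, 250.4]])

# RMS reprojection error: root mean square of the per-corner Euclidean
# residuals -- the single number calibrateCamera returns.
residuals = detected - reprojected
rms = np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))
print(rms)   # about 0.41 px for these made-up residuals
```

So `0.425` would mean the model reprojects corners to within about half a pixel on average, while `~4.5` would be off by several pixels, if I read it correctly.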
I went ahead and selected different combinations of chessboard recordings, trying to find whether there was one that destroyed the result. But there wasn't: any combination would lead to more or less over-undistorted results. Something in between, with 17 images:
![image description](/upfiles/15879342081651995.png)
So obviously I am very confused by the results I'm getting, especially compared with my first results, where everything was pretty easy and straightforward.
I'm always reading that we need at least 10 images to get a good calibration. In my case it feels like the higher number gets me into trouble. Then again, that can't be the whole story, because I am also in trouble with a lower number of samples.
George Lecakes says in https://www.youtube.com/watch?v=v7jutAmWJVQ that he takes around 50 images.
I am wondering: do I have a chance of getting better undistortion results if I increase the number of samples?

noshky - Sun, 26 Apr 2020 16:19:36 -0500 - http://answers.opencv.org/question/229530/

cv::undistortPoints not working for me ....
http://answers.opencv.org/question/207434/cvundistortpoints-not-working-for-me/

I am trying to undistort the pixel coordinates of two points using the code below, but I am not successful.
    vector<Point2f> pts;
    cv::Mat upts;

    pts.push_back(diagStartPnt);
    pts.push_back(diagEndPnt);

    cerr << "m_cameraMat = \n" << m_cameraMat << endl;
    cerr << "m_distortionCoefMat = \n" << m_distortionCoefMat << endl;
    cerr << "pts = \n" << pts << endl;

    undistortPoints(Mat(pts), upts, m_cameraMat, m_distortionCoefMat);
    cerr << "upts = " << upts << endl;
But it doesn't seem to be working; I get output I don't understand:
    m_cameraMat =
    [606.184487913021, 0, 320;
     0, 606.184487913021, 240;
     0, 0, 1]
    m_distortionCoefMat =
    [-0.02448317075341529;
     0.3189340323130755;
     0;
     0;
     -1.013832345303472]
    pts =
    [533.44543, 347.06061;
     529.32397, 234.65332]
    upts =
    [0.35208049, 0.1765976;
     0.34534943, -0.0088211242]
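The only pattern I can see: multiplying the output by the camera matrix lands near my input points, as if the output were in normalized coordinates. A numpy sketch of that guess:

```python
import numpy as np

# Camera matrix from the output above.
K = np.array([[606.184487913021, 0.0, 320.0],
              [0.0, 606.184487913021, 240.0],
              [0.0, 0.0, 1.0]])

# First row of upts, treated as a normalized point (x', y', 1).
xn = np.array([0.35208049, 0.1765976, 1.0])

# K * (x', y', 1) converts normalized coordinates back to pixels.
pixel = K @ xn
print(pixel[:2])   # lands near the original input point (533.44543, 347.06061)
```

Is that a coincidence, or is the output really normalized?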
I am totally clueless ... :|

mikeitexpert - Sat, 19 Jan 2019 02:51:40 -0600 - http://answers.opencv.org/question/207434/

RaspiCam fisheye calibration with OpenCV
http://answers.opencv.org/question/193802/raspicam-fisheye-calibration-with-opencv/

Hello everyone,
I am trying to calibrate a RaspiCam fisheye lens camera with OpenCV. I am using the Python example code, and the chessboard row and column numbers are correct, but somehow I cannot get a successful result. I have tested with quite a lot of photos; you can see some of them below.
My source code: [https://github.com/jagracar/OpenCV-python-tests/blob/master/OpenCV-tutorials/cameraCalibration/cameraCalibration.py](https://github.com/jagracar/OpenCV-python-tests/blob/master/OpenCV-tutorials/cameraCalibration/cameraCalibration.py)
My chessboard rows and columns: rows = 9, cols = 6.
![image description](/upfiles/1528974465930094.png)
but I do not get a successful result:
![image description](/upfiles/15289746429895581.png)
I found the solution.
https://gist.github.com/mesutpiskin/0ced27981487491403610324fea55038
![image description](/upfiles/15291476076591841.png)

mesutpiskin - Thu, 14 Jun 2018 06:12:52 -0500 - http://answers.opencv.org/question/193802/

Bird's eye view perspective transform from camera calibration
http://answers.opencv.org/question/183753/birds-eye-view-perspectivetransform-from-camera-calibration/

I am trying to get the bird's eye view perspective transform from the camera intrinsic and extrinsic matrices and distortion coefficients.
I tried using the answer from [this][1] question.
The image used is the sample image left02.jpg from the official OpenCV GitHub repo.
[![The image to be prospectively un-distored left02.jpg image from opencv sample images i.e get the bird's eye view of the image][2]][2]
I calibrated the camera and found the intrinsic and extrinsic matrices and the distortion coefficients.
I undistorted the image and found the pose, to check whether the parameters are right.
[![Image after un-distortion and visualising pose][3]][3]
The equations I used to find the perspective transformation matrix are (see the link above):

`Hr = K * R.inv() * K.inv()`, where R is the rotation matrix (from `cv2.Rodrigues()`) and K is obtained from `cv2.getOptimalNewCameraMatrix()`.
    Ht = [ 1  0  |         ]
         [ 0  1  | -K*C/Cz ]
         [ 0  0  |         ]

i.e. the identity matrix with its third column replaced by the 3-vector `-K*C/Cz`, where `C = -R.inv() * T` (T is the translation vector from `cv2.solvePnP()`) and Cz is the 3rd component of C.
The required transformation is: `H = Ht * Hr`
The code I used to construct the above equation is:
    K = newcameramtx  # from cv2.getOptimalNewCameraMatrix()
    ret, rvec, tvec = cv2.solvePnP(world_points, corners2, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    _, R_inv = cv2.invert(R)
    _, K_inv = cv2.invert(K)

    Hr = np.matmul(K, np.matmul(R_inv, K_inv))

    C = np.matmul(-R_inv, tvec)
    Cz = C[2]
    temp_vector = np.matmul(-K, C / Cz)
    Ht = np.identity(3)
    for i, val in enumerate(temp_vector):
        Ht[i][2] = val

    homography = np.matmul(Ht, Hr)
    warped_img = cv2.warpPerspective(img, homography, (img.shape[1], img.shape[0]))
    # where img is the undistorted image with the visualized pose from above
The resulting warped image is not correct.
[![With homographic matrix = Ht*Hr][4]][4]
If I remove the translation from the homography using

    homography = Hr.copy()
    warped_img = cv2.warpPerspective(img, homography, (img.shape[1], img.shape[0]))
I am getting the following image
[![With homographic matrix = Hr][5]][5]
I think the above image shows that my rotational part is correct but my translation is wrong.
Since the translation matrix (Ht) is an augmented matrix, I am unsure whether my construction of it is correct.
I specifically want to figure out the bird's eye perspective transformation from the camera calibration.
So, how do I correct the above equations so that I get a perfect bird's eye view of the chessboard image?
Could anyone also please explain the math behind how the above equations for Ht and Hr are derived? I don't have much exposure to linear algebra, so these equations are not very obvious to me.
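For completeness, here is my own rough attempt at the derivation (this is just my reasoning, so please correct it if it is wrong):

```latex
\[
  d \;=\; K^{-1}x
  \quad\Longrightarrow\quad
  x' \;=\; K\,R^{-1}K^{-1}\,x \;=:\; H_r\,x
\]
% A pixel x corresponds to the viewing ray d = K^{-1} x in camera
% coordinates. Rotating that ray into the chessboard's orientation with
% R^{-1} and re-projecting with K removes the camera tilt, which gives
% the rotation-only warp H_r.
\[
  C \;=\; -R^{-1}T, \qquad
  H_t \;=\; \bigl[\, e_1 \;\; e_2 \;\; -KC/C_z \,\bigr], \qquad
  H \;=\; H_t H_r
\]
% C = -R^{-1} T is the camera centre in world coordinates; dividing by its
% height C_z normalises it to unit depth, and -K C / C_z is then the pixel
% shift that recentres the warped plane. It replaces the third column of
% the identity (e_1, e_2 are the first two identity columns).
\]
```

But as said, I would appreciate a proper explanation.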
**UPDATE:**
    homography = np.matmul(Ht, Hr)
    warped_img = cv2.warpPerspective(img, homography, (img.shape[1], img.shape[0]),
                                     flags=cv2.WARP_INVERSE_MAP)

The cv2.WARP_INVERSE_MAP flag gave me a different result:
[![][6]][6]
Still not the result I am looking for!
[1]: https://stackoverflow.com/questions/23275877/opencv-get-perspective-matrix-from-translation-rotation
[2]: https://i.stack.imgur.com/vUmcl.png
[3]: https://i.stack.imgur.com/mNLBy.png
[4]: https://i.stack.imgur.com/PnO3L.png
[5]: https://i.stack.imgur.com/bLlYD.png
[6]: https://i.stack.imgur.com/Y4GqK.png

abhijit - Thu, 01 Feb 2018 23:15:21 -0600 - http://answers.opencv.org/question/183753/

Why do both stereoRectify workflows give different results?
http://answers.opencv.org/question/145307/why-both-stereorectify-workflows-give-different-result/

I get different results from the two workflows below, which in my opinion should be equal.
1. ***First workflow***: First I remove the distortion in the images, and then in the subsequent functions I use zero distortion (e.g. `Mat()` as the parameter).
2. ***Second workflow***: I don't remove the distortion, but instead pass the distortion coefficients to the subsequent functions (`stereoRectify()` and `initUndistortRectifyMap()`).
**First workflow (with initial undistortion):**

    undistort(image1, image1, camera_matrix1, distCoeffs1);
    undistort(image2, image2, camera_matrix2, distCoeffs2);
    ....
    E = findEssentialMat(image_points1, image_points2, camera_matrix1, RANSAC);
    recoverPose(E, image_points1, image_points2, camera_matrix1, R, T);

    stereoRectify(camera_matrix1, Mat(), camera_matrix2, Mat(), image1.size(), R, T, R1, R2, Proj1, Proj2, Q);

    Mat mapx1, mapy1;
    initUndistortRectifyMap(camera_matrix1, Mat(), R1, Proj1, image1.size(), CV_16SC2, mapx1, mapy1);
    remap(image1, image1_rectified, mapx1, mapy1, INTER_LINEAR);

    Mat mapx2, mapy2;
    initUndistortRectifyMap(camera_matrix2, Mat(), R2, Proj2, image2.size(), CV_16SC2, mapx2, mapy2);
    remap(image2, image2_rectified, mapx2, mapy2, INTER_LINEAR);
Resulting disparity map:
![image description](/upfiles/14937273029084983.png)
**Second workflow (without initial undistortion):**

    E = findEssentialMat(image_points1, image_points2, camera_matrix1, RANSAC);
    recoverPose(E, image_points1, image_points2, camera_matrix1, R, T);

    stereoRectify(camera_matrix1, distCoeffs1, camera_matrix2, distCoeffs2, image1.size(), R, T, R1, R2, Proj1, Proj2, Q);

    Mat mapx1, mapy1;
    initUndistortRectifyMap(camera_matrix1, distCoeffs1, R1, Proj1, image1.size(), CV_16SC2, mapx1, mapy1);
    remap(image1, image1_rectified, mapx1, mapy1, INTER_LINEAR);

    Mat mapx2, mapy2;
    initUndistortRectifyMap(camera_matrix2, distCoeffs2, R2, Proj2, image2.size(), CV_16SC2, mapx2, mapy2);
    remap(image2, image2_rectified, mapx2, mapy2, INTER_LINEAR);
Resulting disparity map:
![image description](/upfiles/14937273157034368.png)
Then I use these rectified images to calculate the disparity map, but I get different results from the two workflows.
The second workflow seems to give better results (at least on the wall on the right side).
I would expect both workflows to give the same results...
mirnyy - Tue, 02 May 2017 07:16:38 -0500 - http://answers.opencv.org/question/145307/

Does the resolution of an image affect the distortion coefficients?
http://answers.opencv.org/question/118918/does-the-resolution-of-an-image-affect-the-distortion-co-efficients/

What parameters do the distortion coefficients depend on? If I take one image at 2 MP and another at 12 MP with the same camera, will the distortion coefficients change?
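My guess, which I'd like confirmed: the coefficients act on normalized coordinates, so changing the resolution should only scale the camera matrix, not the coefficients. A numpy sketch of that guess (radial terms only, made-up numbers; `distort_pixel` is a made-up helper):

```python
import numpy as np

def distort_pixel(u, v, K, k1, k2):
    """Apply radial distortion (k1, k2 only) to an ideal pixel -- a sketch
    of the standard model, tangential terms omitted."""
    x = (u - K[0, 2]) / K[0, 0]      # pixel -> normalized coordinates
    y = (v - K[1, 2]) / K[1, 1]
    r2 = x * x + y * y
    s = 1 + k1 * r2 + k2 * r2 ** 2   # coefficients act on normalized coords
    return K[0, 0] * x * s + K[0, 2], K[1, 1] * y * s + K[1, 2]

K_2mp = np.array([[800.0, 0.0, 640.0],
                  [0.0, 800.0, 480.0],
                  [0.0, 0.0, 1.0]])

S = 2.45                             # approx. linear scale from 2 MP to 12 MP
K_12mp = K_2mp.copy()
K_12mp[:2, :] *= S                   # fx, fy, cx, cy all scale with resolution

# Same physical ray, same coefficients, two resolutions: the distorted
# pixels differ by exactly the resolution scale factor S.
u1, v1 = distort_pixel(700.0, 500.0, K_2mp, -0.2, 0.05)
u2, v2 = distort_pixel(700.0 * S, 500.0 * S, K_12mp, -0.2, 0.05)
```

If this is right, the same coefficients describe the lens at any resolution of the same sensor area, and only fx, fy, cx, cy change.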
Prototype - Wed, 14 Dec 2016 06:14:28 -0600 - http://answers.opencv.org/question/118918/

How does undistortion work?
http://answers.opencv.org/question/115064/how-does-undistortion-works/

I am reading the book Learning OpenCV (O'Reilly) and I can't seem to figure out how the distortion coefficients are computed.
![image description](/upfiles/1480335355580217.png)
1) How are the coefficients computed? Is it through a least-squares fit, like the way the homography matrix can be computed?
2) How are the points even established? If we use a chessboard, we would only be able to obtain xd and yd, i.e. the distorted points. How do we get xp and yp?

Nbb - Mon, 28 Nov 2016 06:38:29 -0600 - http://answers.opencv.org/question/115064/

Undistort images or not before finding the Fundamental/Essential Matrix?
http://answers.opencv.org/question/114828/undistort-images-or-not-before-finding-the-fundamentalessential-matrix/

I am quite confused right now. To find the Fundamental Matrix and the Essential Matrix, my usual approach is to first undistort the images before doing the other steps: detecting keypoints, matching the keypoints, finding the Fundamental Matrix and then the Essential Matrix. Is this correct? Can I find the Fundamental Matrix and the Essential Matrix **without** undistorting the images?
Another question: as for OpenCV's `findEssentialMat`, does it operate on undistorted points, distorted points, or both?

Hilman - Sat, 26 Nov 2016 17:09:05 -0600 - http://answers.opencv.org/question/114828/

initUndistortRectifyMap lines 103 and 137: what is going on?
http://answers.opencv.org/question/73533/initundistortrectifymap-line-103-and-137-what-is-going-on/

Hi,
I'm having trouble understanding two lines in the original source code of the function initUndistortRectifyMap(); this part doesn't seem to be mentioned in the corresponding docs.
The code is on lines 103 and 137 of the following undistort.cpp file:
[link text](https://github.com/Itseez/opencv/blob/master/modules/imgproc/src/undistort.cpp#L103)
It appears to take the product of the camera intrinsic matrix A and the rotation matrix, and then invert the result (all on line 103). This inverse is then used on line 137, where individual elements are extracted from it. The results I get when using this code are excellent, but I just can't understand it or tie it to the documentation at:
[link text](http://docs.opencv.org/modules/imgproc/doc/geometric_transformations.html#initundistortrectifymap)
In particular, I don't see how the first three lines of equations in the doc correspond to the inverse of the camera matrix A and the rotation matrix.
![image description](http://docs.opencv.org/_images/math/8808430360ef87d99c3a5725cd2ba7d2852ba689.png)
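Here is how I currently read those equations, as a numpy sketch for a single destination pixel (my interpretation of the docs, not the actual source; `map_one_pixel` is a made-up helper):

```python
import numpy as np

def map_one_pixel(u, v, newK, R, K, dist):
    """Follow one destination pixel through the documented equations:
    back-project with inv(newK * R), apply the distortion model, then the
    original camera matrix, yielding the source pixel to sample from."""
    k1, k2, p1, p2, k3 = dist
    iR = np.linalg.inv(newK @ R)          # the product inverted on line 103
    x, y, w = iR @ np.array([u, v, 1.0])  # back-project the destination pixel
    x, y = x / w, y / w                   # ideal (undistorted) normalized coords
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return K[0, 0] * xd + K[0, 2], K[1, 1] * yd + K[1, 2]

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

# Sanity check: with R = I, newK = K and zero distortion the map is the identity.
u_src, v_src = map_one_pixel(100.0, 50.0, K, np.eye(3), K, np.zeros(5))
```

If that reading is right, the single inverse of (A * R) on line 103 combines "undo the new camera matrix" and "undo the rectifying rotation" into one matrix, and line 137 just pulls its elements out for the per-pixel loop.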
Can some clever person put me right, or point me at a doc that explains that bit?
Thanks

ricor29 - Sun, 18 Oct 2015 06:01:19 -0500 - http://answers.opencv.org/question/73533/

Undistortion of a camera image with remap
http://answers.opencv.org/question/64343/undistortion-of-a-camera-image-with-remap/

Hi,
I need an undistorted camera image for an AR application. cv::undistort is too slow for my purpose, so I want to try initUndistortRectifyMap and remap, to do the initialization only once and save computation time. Here is my first test:
    // create source matrix from the raw image buffer
    cv::Mat srcImg(res.first, res.second, cvFormat, const_cast<char*>(pImg));

    // camera matrix
    cv::Mat cam(3, 3, cv::DataType<float>::type);
    cam.at<float>(0, 0) = 528.53618582196384f;
    cam.at<float>(0, 1) = 0.0f;
    cam.at<float>(0, 2) = 314.01736116032430f;
    cam.at<float>(1, 0) = 0.0f;
    cam.at<float>(1, 1) = 532.01912214324500f;
    cam.at<float>(1, 2) = 231.43930864205211f;
    cam.at<float>(2, 0) = 0.0f;
    cam.at<float>(2, 1) = 0.0f;
    cam.at<float>(2, 2) = 1.0f;

    // distortion coefficients (k1, k2, p1, p2, k3)
    cv::Mat dist(5, 1, cv::DataType<float>::type);
    dist.at<float>(0, 0) = -0.11839989180635836f;
    dist.at<float>(1, 0) = 0.25425420873955445f;
    dist.at<float>(2, 0) = 0.0013269901775205413f;
    dist.at<float>(3, 0) = 0.0015787467748277866f;
    dist.at<float>(4, 0) = -0.11567938093172066f;

    cv::Mat map1, map2;
    cv::initUndistortRectifyMap(cam, dist, cv::Mat(), cam, cv::Size(res.second, res.first), CV_32FC1, map1, map2);
    cv::remap(srcImg, *m_undistImg, map1, map2, cv::INTER_CUBIC);
First I create an OpenCV matrix from my image (format is BGRA), then I create the camera and distortion matrices. After this, I call initUndistortRectifyMap and then remap.
As you can see in [screen.jpg](/upfiles/1434548664747827.jpg), the camera image is wrong. I have no idea what the problem is. Any suggestions? What's wrong in my code?
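For debugging, I also rebuilt the remap lookup in a tiny numpy sketch (nearest-neighbour only; `remap_nearest` is my own helper, not OpenCV code) to convince myself of the principle dst(v, u) = src(map_y(v, u), map_x(v, u)):

```python
import numpy as np

def remap_nearest(src, map_x, map_y):
    """Nearest-neighbour version of the lookup cv::remap performs:
    dst(v, u) = src(map_y(v, u), map_x(v, u)), interpolation omitted."""
    h, w = map_x.shape
    dst = np.zeros((h, w), dtype=src.dtype)
    for v in range(h):
        for u in range(w):
            sx = int(round(float(map_x[v, u])))
            sy = int(round(float(map_y[v, u])))
            if 0 <= sx < src.shape[1] and 0 <= sy < src.shape[0]:
                dst[v, u] = src[sy, sx]
    return dst

src = np.arange(12, dtype=np.float32).reshape(3, 4)

# Identity maps reproduce the source -- a quick way to rule out
# map-construction problems before blaming the camera parameters.
map_x, map_y = np.meshgrid(np.arange(4, dtype=np.float32),
                           np.arange(3, dtype=np.float32))
out = remap_nearest(src, map_x, map_y)
```

With identity maps my real pipeline also reproduces the input, so I suspect the problem is in how I build map1/map2, not in remap itself.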
Best regards
Pellaeon

Pellaeon - Wed, 17 Jun 2015 08:46:47 -0500 - http://answers.opencv.org/question/64343/