OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright [OpenCV Foundation](http://www.opencv.org), 2012-2018. Sat, 28 Sep 2019 09:38:48 -0500

## getOptimalNewCameraMatrix function for omnidirectional camera?
http://answers.opencv.org/question/218990/getoptimalnewcameramatrix-function-for-omndirectional-camera/

I am working on stereo rectification for a wide-FOV fisheye lens. I want to rectify in latitude-longitude space so that stretching at the edges is avoided, so I am using the OpenCV omnidir module with the RECTIFY_LONGLATI flag. I get good results, but most of the image is cropped. I want to scale the camera matrix so that no region is cropped during stereo rectification, but I don't know how to scale it while keeping the epipolar constraint. I found functions for estimating a new camera matrix in the calib3d and fisheye modules, but not in the omnidir module.
I tried doubling the focal length to check whether I could get the whole image without cropping. I do get both left and right images without cropping, but the stereo rectification is disturbed. I know that just scaling the focal length will break the rectification; the principal point also needs to be adjusted, taking the distortion coefficients into account. I don't know how to do this, which is why I was looking for a function similar to getOptimalNewCameraMatrix in the omnidir module. It would be helpful if anyone could explain the maths behind scaling the camera matrix while keeping the epipolar constraint in stereo rectification.
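For the pinhole part of the model (ignoring the Mei distortion entirely), the effect of scaling the intrinsics can be sketched without OpenCV. This is a toy example with made-up numbers; it only illustrates why a pure focal scale moves pixels away from the principal point, and therefore must be applied identically to both cameras to keep rectified rows aligned:

```python
# Toy pinhole sketch (made-up numbers, no distortion). A 3x3 camera matrix
# K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] maps a normalized ray (x, y, 1)
# to the pixel (fx*x + cx, fy*y + cy).

def project(K, ray):
    """Apply a 3x3 camera matrix to a normalized ray (x, y, 1)."""
    x, y, _ = ray
    fx, cx = K[0][0], K[0][2]
    fy, cy = K[1][1], K[1][2]
    return (fx * x + cx, fy * y + cy)

K = [[400.0, 0.0, 320.0],
     [0.0, 400.0, 240.0],
     [0.0, 0.0, 1.0]]

ray = (0.5, -0.25, 1.0)
u, v = project(K, ray)

# Doubling only fx and fy scales every pixel offset about the principal
# point. If the two cameras' intrinsics are not scaled consistently, the
# vertical offsets differ and epipolar rows no longer line up.
K2 = [row[:] for row in K]
K2[0][0] *= 2.0
K2[1][1] *= 2.0
u2, v2 = project(K2, ray)

# Offsets from the principal point double exactly:
assert (u2 - 320.0) == 2.0 * (u - 320.0)
assert (v2 - 240.0) == 2.0 * (v - 240.0)
```

This is only the pinhole algebra; with the Mei model the distortion terms act on the normalized coordinates before K is applied, which is why a getOptimalNewCameraMatrix-style helper has to account for them.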
I am attaching a small portion of my code, the camera calibration output file, and the results below.
[raw left image](https://drive.google.com/open?id=16xaCHDxJTAwYZUvqedKo2Nwi0d8ukaNF)
[raw right image](https://drive.google.com/open?id=1AH_6eGgEWC6ROcaWjYN-5kBmXQXpaQPe)
[rectified image without scaling camera matrix](https://drive.google.com/file/d/1oZplAxtfQoJZeU9Kzf5C4DRbfqoCo_YY/view?usp=sharing)
[non rectified image with focal length scaling](https://drive.google.com/open?id=1DX_aRibUG6JRZ2DXI1RXcoRn3cK8J1qf)
[output calibration file](https://drive.google.com/open?id=1sGenGRNfSAg5asxPAsz0brTfnWcspjMa)
Code snippet:

```cpp
Mat left_img = imread("../data/left42.png");
Mat right_img = imread("../data/right42.png");
Mat left_img_undistorted, right_img_undistorted;
Mat R1, R2;
cv::omnidir::stereoRectify(rvec, tvec, R1, R2);
int flags_ = cv::omnidir::RECTIFY_LONGLATI;
cv::Size imgSize = left_img.size();
// K1 and K2 are the camera matrices estimated by calibration.
// D1 and D2 are the distortion coefficients estimated by calibration.
// xi1 and xi2 are the Mei coefficients estimated by calibration.
Mat K1_ = K1.clone();
Mat K2_ = K2.clone();
// Scale the focal lengths of both cameras.
K1_.at<double>(0, 0) *= 2;
K1_.at<double>(1, 1) *= 2;
K2_.at<double>(0, 0) *= 2;
K2_.at<double>(1, 1) *= 2;
cv::Matx33d Knew(imgSize.width / 3.1415, 0, 0,
                 0, imgSize.height / 3.1415, 0,
                 0, 0, 1);
cv::omnidir::undistortImage(left_img, left_img_undistorted, K1_, D1, xi1, flags_, Knew, imgSize, R1);
cv::omnidir::undistortImage(right_img, right_img_undistorted, K2_, D2, xi2, flags_, Knew, imgSize, R2);
Mat final;
hconcat(left_img_undistorted, right_img_undistorted, final);
// Draw horizontal lines to check the rectification visually.
for (int j = 0; j < final.rows; j += 40)
    line(final, Point(0, j), Point(final.cols, j), Scalar(0, 255, 0), 1, 8);
imwrite("../out/rectified.png", final);
```

*ak1, Sat, 28 Sep 2019 09:38:48 -0500*

## problem with stereoRectify() results [magnified output]
http://answers.opencv.org/question/211666/problem-with-stereorectify-results-magnified-output/

Hello, newbie here.
I have problems concerning the rectification of a stereo image pair.
My input is video from a pair of cameras attached to a stereo microscope.
My final task is to 3D-reconstruct the result.
At this point I have stereo-calibrated them with a reprojection error of about 1.1 per camera and 4 for the stereo result.
Then I pass the calibration results to stereoRectify() and remap:
```cpp
stereoRectify(KL, DL, KR, DR, image_size, R, T, R1, R2, P1, P2, Q, 0, 0.01);
initUndistortRectifyMap(KL, DL, R1, P1, image_size, CV_16SC2, map11, map12);
initUndistortRectifyMap(KR, DR, R2, P2, image_size, CV_16SC2, map21, map22);
remap(initial_image[0], remappedImage0, map11, map12, INTER_LINEAR, BORDER_CONSTANT, Scalar());
remap(initial_image[1], remappedImage1, map21, map22, INTER_LINEAR, BORDER_CONSTANT, Scalar());
```
Now the output image pair looks rectified, but it's a scaled-up version of the input: I see only a magnified region of the center in both final images. I'm trying to understand why this happens, but I'm still missing something and I'm stuck.
Does anyone have a hint, a reference, or experience with a similar problem?
I attach a couple of images for reference: one is the pair of the original and the rectified image (where the zoom-in is obvious), and the second is the rectification result.
Any help would be invaluable.
My OpenCV version is 3.4.5 on Linux Mint 18.3.
- A pair of the original and the rectified image
![A pair of the original and the rectified image](/upfiles/1555429439424476.jpg)
- Stereo rectification image pair
![Stereo rectification image pair](/upfiles/15554295311831289.jpg)

*3cc3bc3b1, Tue, 16 Apr 2019 11:00:10 -0500*

## Image transformation after cv2.stereoRectifyUncalibrated
http://answers.opencv.org/question/179076/imagetransformation-after-cv2stereorectifyuncalibrated/

I am using Python 2.7 and OpenCV 3.2.0 for uncalibrated image rectification and dense stereo matching.
My problem is that after getting my transformation matrices with `cv2.stereoRectifyUncalibrated()`, I don't know how to proceed to create my rectification maps and do the remapping. I know that in the calibrated case I could use `cv2.initUndistortRectifyMap()` and `cv2.remap()`.
The problem is that I don't have the calibration matrix that `cv2.initUndistortRectifyMap()` needs, so my question is: is there a workaround in OpenCV, or do I have to write a function myself?

*Karido, Fri, 24 Nov 2017 10:09:11 -0600*

## Camera calibration distorted output
http://answers.opencv.org/question/90246/camera-calibration-distorted-output/

Hey everyone,
I've been trying for weeks to get a good calibration running, and I've read through the forums about how to get a good setup. I think I'm close, but I have no idea what is causing this type of distortion in my undistorted images.
![image description](/upfiles/14581682203027546.png)
The image bounded by the red box looks like the correct undistorted image, but I don't know what all the noise outside the red box is. Any help would be appreciated. I used 13 images in this example; I've tried using more, but the red box just gets farther away and the noise around it grows. Below are the results of the output:
![image description](/upfiles/14581683138795368.png)

*ems316, Wed, 16 Mar 2016 17:46:53 -0500*

## How to derive relative R and T from camera extrinsics
http://answers.opencv.org/question/89968/how-to-derive-relative-r-and-t-from-camera-extrinsics/

Hi,
I have an array of cameras capturing a scene. They are pre-calibrated, and their coordinates are stored as a translation from the scene origin plus 3 Euler angles describing each camera's orientation.
I need to supply stereoRectify() the **relative** translation and rotation of the second camera with respect to the first camera. I have found several contradictory definitions of R and T, none of which seem to give me a correct rectified image (epipolar lines are not horizontal).
With trial and error, the following (still probably incorrect) is the closest I've been able to get:
R = R<sub>1</sub> * R<sub>2</sub><sup>T</sup>
T = R<sup>T</sup> * ( T<sub>1</sub> - T<sub>2</sub> )
Where R<sub>1</sub> and R<sub>2</sub> are 3x3 rotation matrices formed from the Euler angles, and T<sub>1</sub> and T<sub>2</sub> are translation vectors from the scene origin. R and T are then sent to stereoRectify() once R has been converted to axis-angle notation with Rodrigues().
I have also tried R = R<sub>2</sub> * R<sub>1</sub><sup>T</sup> with
T = R<sub>1</sub> * ( T<sub>2</sub> - T<sub>1</sub> ), along with a few other permutations. All incorrect.
If I could get the real way to obtain R and T from world-space, that would help immensely with identifying the source of the incorrect outputs I'm getting.
I have taken into account the z-forward and -y-up coordinate system of OpenCV. I have also rendered a pair of CGI images to verify that incorrect calibration was not the issue. (My goal is to compute the disparity map between each camera pair, in order to later derive depth maps and perform image-based view synthesis.)
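For reference, under the convention where each extrinsic maps world points into camera coordinates (x<sub>i</sub> = R<sub>i</sub>·X + t<sub>i</sub>, which is what OpenCV's calibration functions return), the relative pose that stereoRectify() expects (x<sub>2</sub> = R·x<sub>1</sub> + T) works out to R = R<sub>2</sub>·R<sub>1</sub><sup>T</sup> and T = t<sub>2</sub> − R·t<sub>1</sub>. If the stored pose is camera-to-world instead, the matrices must be inverted first. A pure-Python sketch with made-up poses (no OpenCV) verifying this algebra numerically:

```python
import math

# Check the relative-pose algebra, assuming world-to-camera extrinsics
# x_i = R_i @ X + t_i. (If your stored translations are camera centers C_i
# with camera-to-world rotations R_cw, convert first: R_i = R_cw^T,
# t_i = -R_cw^T @ C_i.)

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def rot_y(a):  # rotation about the y axis
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

# Two arbitrary world-to-camera poses.
R1, t1 = rot_y(0.2), [0.1, -0.3, 2.0]
R2, t2 = rot_y(-0.15), [1.4, 0.0, 2.2]

# Relative pose: x2 = R @ x1 + T
R = mat_mul(R2, transpose(R1))
T = [t2[i] - mat_vec(R, t1)[i] for i in range(3)]

# Verify: mapping a world point through camera 1 and then through (R, T)
# must equal mapping it directly through camera 2.
X = [0.7, -1.2, 5.0]
x1 = [mat_vec(R1, X)[i] + t1[i] for i in range(3)]
x2 = [mat_vec(R2, X)[i] + t2[i] for i in range(3)]
x2_from_x1 = [mat_vec(R, x1)[i] + T[i] for i in range(3)]
assert all(abs(a - b) < 1e-12 for a, b in zip(x2, x2_from_x1))
```

The check passes because R·x<sub>1</sub> + T = R<sub>2</sub>R<sub>1</sub><sup>T</sup>(R<sub>1</sub>X + t<sub>1</sub>) + t<sub>2</sub> − R·t<sub>1</sub> = R<sub>2</sub>X + t<sub>2</sub>.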
Thanks a million!
*adrienc, Fri, 11 Mar 2016 22:08:20 -0600*

## stereoRectify problems
http://answers.opencv.org/question/69544/stereorectify-problems/

Hi.
I'm working with SfM results. I've calculated orientation parameters for each camera, undistorted the images, and selected some stereo pairs. Now I would like to compute a rectification for each stereo pair.
left and right images:
![image description](http://i011.radikal.ru/1508/13/b9d4b397d1ff.jpg)
![image description](http://s019.radikal.ru/i601/1508/1a/ef107a09cc17.jpg)
First, the rotation between cameras 1 and 2 using the quaternion approach: rot_q = q2 * q1.inverse();
and the translation T = C2 - C1, where Cx are the camera centers.
Then

```cpp
stereoRectify(K1, D, K2, D, img1.size(), R12, T, OR1, OR2, OP1, OP2, OQ, 0);
```

where Kx are the camera matrices and D is a distortion vector of zeros, then

```cpp
initUndistortRectifyMap(K1, D, OR1, OP1, img1.size(), CV_32FC1, MX1, MY1);
initUndistortRectifyMap(K2, D, OR2, OP2, img2.size(), CV_32FC1, MX2, MY2);
```

and finally

```cpp
remap(img1, rect_img_left, MX1, MY1, INTER_CUBIC);
remap(img2, rect_img_right, MX2, MY2, INTER_CUBIC);
```
But the result does not seem entirely correct:
![image description](http://s020.radikal.ru/i704/1508/df/f55e232588dc.jpg)
As you can see, it was cropped:
![image description](http://s011.radikal.ru/i315/1508/b8/a8b315aceadd.jpg)
![image description](http://s015.radikal.ru/i330/1508/5c/6e5f0d676132.jpg)
and there is a Y shift of approximately 5 pixels between corresponding epipolar lines.
Also, I've tried building the rectification using the fundamental matrix and stereoRectifyUncalibrated, because I know corresponding points between the images. For this scene it works fine:
![image description](http://s011.radikal.ru/i315/1508/07/340ded7991ed.jpg)
But for some projects (UAV images, for example) I get heavy distortions after warping, and in general it should be more correct to use the first approach. Any ideas?
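For what it's worth, the T = C2 - C1 step may be where the residual Y shift comes from: if the extrinsics are stored as orientation plus camera center (x<sub>i</sub> = R<sub>i</sub>(X − C<sub>i</sub>)), the translation stereoRectify() expects has to be expressed in the second camera's frame, i.e. T = R<sub>2</sub>(C<sub>1</sub> − C<sub>2</sub>). A pure-Python sketch with made-up poses (no OpenCV) checking the algebra:

```python
import math

# Sketch with made-up poses: with camera-center extrinsics
# x_i = R_i @ (X - C_i), the relative pose stereoRectify() expects
# (x2 = R @ x1 + T) is R = R2 @ R1^T and T = R2 @ (C1 - C2),
# not the raw difference T = C2 - C1.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def rot_z(a):  # rotation about the z axis
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

# Two arbitrary camera poses given as orientation + camera center.
R1, C1 = rot_z(0.10), [0.0, 0.0, 0.0]
R2, C2 = rot_z(-0.05), [0.5, 0.02, 0.0]

# Relative pose for stereoRectify():
R = mat_mul(R2, transpose(R1))
T = mat_vec(R2, [C1[i] - C2[i] for i in range(3)])

# Check on an arbitrary world point: mapping through camera 1 and then
# through (R, T) must equal mapping directly through camera 2.
X = [1.0, 2.0, 10.0]
x1 = mat_vec(R1, [X[i] - C1[i] for i in range(3)])
x2 = mat_vec(R2, [X[i] - C2[i] for i in range(3)])
x2_check = [mat_vec(R, x1)[i] + T[i] for i in range(3)]
assert all(abs(a - b) < 1e-12 for a, b in zip(x2, x2_check))
```

Since stereoRectify only uses the direction of T up to the rectifying rotation, a translation left in the world frame can still look "almost" rectified while leaving a small vertical residual like the one described.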
Update: input matrices at this link:
https://www.dropbox.com/s/aryl0k5j4o0uz2g/mat.yml?dl=0

*mrhxxh, Thu, 27 Aug 2015 09:57:35 -0500*

## Effect of flag parameter in stereoRectify()
http://answers.opencv.org/question/43465/effect-of-flag-parameter-in-stereorectify/

I'm using the `stereoRectify()` function to rectify stereo image pairs before doing stereo matching. However, I'm struggling to understand exactly what the `flags` parameter does. The documentation has the following to say:
> "Operation flags that may be 0 or `CV_CALIB_ZERO_DISPARITY`. If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area."
From this, and the explanation in the Learning OpenCV book, my understanding is that passing the `CV_CALIB_ZERO_DISPARITY` flag leads to the canonical geometry where the 2 image planes are co-planar, the optical axes are parallel, and a disparity of 0 occurs for points at infinity. On the other hand, passing a flag of 0 means that the rectification leaves the virtual rectified cameras pointing slightly inwards, and hence having their optical axes intersect at a finite point.
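The "zero disparity at infinity" property of the canonical geometry can be checked with a toy rectified pair (made-up focal length and baseline, pure pinhole model): with co-planar image planes, parallel optical axes, and identical principal points, disparity is d = f·B/Z, which vanishes as Z goes to infinity.

```python
# Toy rectified stereo pair: disparity d = f * B / Z (made-up numbers).
f = 500.0   # focal length in pixels
B = 0.1     # baseline in meters
cx = 320.0  # shared principal point (the CV_CALIB_ZERO_DISPARITY case)

def disparity(Z):
    # Pixel x-coordinates of a point at depth Z on the left optical axis:
    # the left camera sees it at cx, the right camera at cx - f*B/Z
    # (shifted by the baseline), so disparity = x_left - x_right = f*B/Z.
    x_left = cx
    x_right = cx - f * B / Z
    return x_left - x_right

assert disparity(1.0) == 50.0    # near point: large disparity
assert disparity(10.0) == 5.0    # farther point: smaller disparity
assert disparity(1e9) < 1e-6     # approaches zero at infinity
```

Without the flag, the rectified principal points may differ, which adds a constant offset to all disparities but does not change the epipolar alignment itself.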
My questions are:
1. When a flag of 0 is passed, what determines the angling of the rectified cameras (how is the finite intersection point determined)?
2. If the cameras are angled slightly inwards, will this lead to poorer stereo matching accuracy, because the epipolar lines are no longer horizontal (and hence aligned to image rows)?

*AJW, Thu, 02 Oct 2014 18:07:06 -0500*

## Results of stereoRectify() not as expected
http://answers.opencv.org/question/41520/results-of-stereorectify-not-as-expected/

Hi,
I've got a little problem with the results of `stereoRectify()` when using the `cv2` python module. I calculated the matrix `R` and the vector `T` using `stereoCalibrate()` for 2 converging horizontally aligned cameras (rotated inwards by 15 degrees each). `R` is the rotation matrix that makes the right camera's principal ray parallel to the principal ray of the left camera (when applied in the right camera's coordinate system). It rotates the principal ray by 30 degrees (clockwise if you're looking on top of the two cameras).
What I'd expect the matrices `R1` and `R2` (calculated by `stereoRectify()`) to be is a rotation by -15 degrees for `R1` and a rotation by 15 degrees for `R2`, to make the principal rays parallel. But it's the other way round: `R1` is a rotation by 15 degrees and `R2` is a rotation by -15 degrees around the coordinate system's y-axis.
Why is that? What am I missing? Are these matrices not meant to be applied to the cameras but the (virtual) image planes? Note that the rectification does indeed work as intended and I get correctly rectified images when I pass R1 or R2 to `initUndistortRectifyMap()` and use `remap()` after that. I'm just trying to figure out what `R1` and `R2` actually mean and the [documentation](http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#stereorectify) of `stereoRectify` does not really help.
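One way to make the sign flip concrete (a sketch with made-up numbers, not OpenCV output): R1 and R2 are applied to point coordinates in the old camera frame to obtain coordinates in the rectified frame, and rotating a coordinate frame by some angle rotates the coordinates of a fixed point by the opposite angle:

```python
import math

def rot_y(a):  # rotation about the y axis
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

theta = math.radians(15)

# A point sitting on the old camera's principal ray (its optical axis).
p = [0.0, 0.0, 1.0]

# "Rotate the camera by -15 degrees" means the rectified frame's axes are
# the old axes rotated by -15 degrees; the coordinates of a fixed point in
# the new frame are obtained with the INVERSE rotation, i.e. +15 degrees.
R_frame = rot_y(-theta)   # how the camera axes move
R_points = rot_y(theta)   # what gets applied to point coordinates

p_new = mat_vec(R_points, p)

# The old principal ray now points off-axis by +15 degrees in the x-z plane:
assert abs(p_new[0] - math.sin(theta)) < 1e-12
assert abs(p_new[2] - math.cos(theta)) < 1e-12
```

So matrices that look like "+15 for R1, -15 for R2" are consistent with the cameras themselves being virtually rotated by -15 and +15 degrees respectively.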
I appreciate any hints. Thanks a lot.

*cv4012, Tue, 09 Sep 2014 14:35:29 -0500*

## strange stereoRectify error with rotation matrix
http://answers.opencv.org/question/3441/strange-stereorectify-error-with-rotation-matrix/

Hi,
I am doing stereo rectification using calibration data for a stereo pair in standard form (intrinsics as a 3x3 Mat, distortion as a 5x1 Mat, rotation as a 3x3 Mat, and translation as a 3x1 Mat), but I keep getting the following exception:
```
OpenCV Error: Formats of input arguments do not match (All the matrices must have
the same data type) in cvRodrigues2,
file /build/buildd/opencv-2.3.1/modules/calib3d/src/calibration.cpp, line 507
terminate called after throwing an instance of 'cv::Exception'  what():
/build/buildd/opencv-2.3.1/modules/calib3d/src/calibration.cpp:507: error: (-205)
All the matrices must have the same data type in function cvRodrigues2
```
Here is the calibration data [http://pastebin.com/uGLzjYqx](http://pastebin.com/uGLzjYqx)
I call `stereoRectify` with the following:

```cpp
cv::stereoRectify(ints_left_, ints_right_, dtcs_left_, dtcs_right_,
                  cv::Size(640, 480), rotation_, translation_,
                  homography_left_, homography_right_,
                  rppm_left_, rppm_right_, qmat_);
```
Thanks

*makokal, Wed, 24 Oct 2012 08:52:15 -0500*

## Having trouble with stereoRectify
http://answers.opencv.org/question/20805/having-trouble-with-stereorectify/

I ran Bouguet's calibration toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html) in Matlab and have the parameters from the calibration: intrinsic (focal lengths and principal-point offsets) and extrinsic (rotations and translations of the checkerboard with respect to the camera).
Feature coordinate points of the checkerboard on my images are also known.
I want to obtain rectified images so that I can make a disparity map (for which I have the code) from each pair of rectified images.
Here is my code, which keeps failing in the stereoRectify function with an "unhandled exception": https://gist.github.com/anonymous/6586653
It might be worth noting that I did not use stereo cameras. It was just one camera, and the images were taken with the camera moving relative to the scene. Is stereoRectify still applicable here?

*thevidyy, Mon, 16 Sep 2013 18:46:37 -0500*