# OpenCV Q&A Forum - RSS feed
Copyright [OpenCV foundation](http://www.opencv.org), 2012-2018.

## ROI with stereo calibration
[question/229659](http://answers.opencv.org/question/229659/roi-with-stereo-calibration/)

My task is to calibrate a stereo camera system at full resolution and then use an ROI of the image sensor for rectification and stereo matching. The stereo camera system is very slow at full resolution (~10 FPS), and by reading out only an ROI it can achieve very good performance (>60 FPS). Because the ROI can be moved arbitrarily, it is impossible to calibrate every possible ROI with the checkerboard.
When I calibrate at the full resolution of the camera system, I cannot use the result directly to rectify the ROI images unless the cameras are read out at full resolution, which would negate the performance benefit.
To my understanding, I need to apply transformations to the camera matrices that I supply to `initUndistortRectifyMap`. Doing this, I get some artifacts at the edges, and sometimes the stereo block matcher cannot find a match, probably because the epipolar lines are not aligned. My guess is that the same transformation should also be applied to the camera matrices supplied to `stereoRectify`.
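To make the intended transformation concrete: cropping to an ROI does not change the focal length, it only shifts where the optical axis lands in pixel coordinates. A minimal sketch of the principal-point shift (the intrinsics and ROI offset below are made-up values):

```python
import numpy as np

# Full-resolution intrinsics from calibration (made-up values).
K_full = np.array([[1400.0, 0.0, 960.0],
                   [0.0, 1400.0, 540.0],
                   [0.0, 0.0, 1.0]])

def shift_camera_matrix(K, roi_xy):
    """Adjust the principal point for an image cropped at roi_xy = (x0, y0).

    Cropping leaves the focal length untouched; only the principal
    point moves, by the ROI's top-left corner.
    """
    x0, y0 = roi_xy
    K_roi = K.copy()
    K_roi[0, 2] -= x0
    K_roi[1, 2] -= y0
    return K_roi

# ROI starting at (600, 400) in the full-resolution image.
K_roi = shift_camera_matrix(K_full, (600, 400))
print(K_roi[0, 2], K_roi[1, 2])  # 360.0 140.0
```

The shifted matrix would then be the one handed to `initUndistortRectifyMap` for the ROI-sized image.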
There is another question [here](https://answers.opencv.org/question/225915/stereo-calibration-intrinsic-parameters-estimation-at-different-resolutions/) that is similar but deals with scaling the resolution.

*Asked by blubbaer on Wed, 29 Apr 2020*

## Finding depth of an object using 2 cameras
[question/228501](http://answers.opencv.org/question/228501/finding-depth-of-an-object-using-2-cameras/)

Hi,
I am new to stereo imaging and am learning to find the depth of an object.
I have 2 cameras, set up separately, looking at a cardboard surface.
**Given**:
- 8 points marked on the cardboard surface.
- One image captured from each of the two cameras.
- The (x, y) coordinates of all 8 points identified in both images.

**Problem**: Find the depth of each point, i.e. the distance of each point from the cameras.
*I tried solving it with the following approach but got a weird result*:
1. Noted down 8 common points in both the left and right images captured from the 2 cameras.
2. Determined the fundamental matrix between both images using the 8 points.
3. The fundamental matrix F relates points on the image plane of one camera, in image coordinates (pixels), to points on the image plane of the other camera.
   - OpenCV function: cv::findFundamentalMat()
   - Input: the 8 common points from both images
   - Output: a 3x3 fundamental matrix
4. Performed stereo rectification.
   - This reprojects the image planes of the two cameras so that they lie in exactly the same plane, with image rows aligned in a frontal-parallel configuration.
   - OpenCV function: cv::stereoRectifyUncalibrated()
   - Input: the 8 common points from both images and the fundamental matrix
   - Output: rectification homographies H1 and H2 for the two images
5. Determined the depth of a point.
   - Trying to find the depth of a point that is approx. 39 feet (468 inches) away from the camera.
   - The depth formula is Z = (f * T) / (xl - xr)
   - Z is the depth, f is the focal length, T is the distance between the cameras, and xl and xr are the x coordinates of the point in the left and right images respectively.
   - The following values were used:
     - f = from the determined camera intrinsics I got fx and fy, so I used f = sqrt(fx*fx + fy*fy)
     - T = the 2 cameras are kept 36 feet (432 inches) apart, so T = 432
     - xl and xr are the x values of the point in the left and right images, perspective-transformed using the rectification matrices H1 and H2.
   - But I got a very weird result.
You can look at the screenshot of my experimentation and result.
![image description](/upfiles/158601511677758.png)
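For step 5, the disparity-to-depth relation can be checked numerically. The numbers below are purely illustrative; note that f must be in pixels, and the usual convention for horizontal disparity is f = fx straight from the camera matrix rather than sqrt(fx*fx + fy*fy):

```python
# Z = (f * T) / (xl - xr): f in pixels, T and Z in the same length unit.
f = 1200.0               # focal length in pixels (illustrative)
T = 432.0                # baseline in inches (36 feet)
xl, xr = 1500.0, 392.31  # rectified x coordinates of one matched point

disparity = xl - xr      # 1107.69 px
Z = (f * T) / disparity
print(round(Z, 1))       # ~468.0 inches, i.e. the expected 39 feet
```

If the units of f (pixels) or T (length) are mixed up, Z comes out scaled by the same mistaken factor, which is one common source of "weird" depth values.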
So could someone tell me whether the approach I am taking is right or wrong?

*Asked by cvsolver on Sat, 04 Apr 2020*

## Stereo Camera Calibration Problems
[question/182099](http://answers.opencv.org/question/182099/stereo-camera-calibration-problems/)

I am trying to calibrate my stereo camera with the OpenCV samples/stereo_calib.cpp file. I am getting a high RMS error as well as a nonsensical rectified image output. I am entering the width, height, square size (side length in cm), and the list of input left and right images. Is there any other parameter I should be wary of? What is the best procedure to calibrate it?
![image description](/upfiles/15155858618438808.png)![image description](/upfiles/15155858765541174.png)

*Asked by leafdet on Wed, 10 Jan 2018*

## Orientation of stereo camera translation vector
[question/208503](http://answers.opencv.org/question/208503/orientation-of-stereo-camera-translation-vector/)

Hello,
I have 2 cameras and want to calculate the disparity between them.
The position and orientation of the cameras are given in arbitrary world coordinates. For example (10, 20, 5) and (20, 25, 5).
Calculating the relative rotation between the two cameras is easy, but I cannot figure out how to transform the world coordinates so that my translation vector is in the coordinate system expected by stereoRectify().
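To make this concrete, here is a sketch of what I believe the transformation should be, using the example positions above. It assumes each camera's world pose is given as a rotation R_i (world axes into camera-i axes) and a center C_i in world coordinates, and that stereoRectify() expects R, T mapping camera-1 coordinates into camera-2 coordinates, i.e. x2 = R x1 + T:

```python
import numpy as np

# World poses (illustrative): R_i rotates world axes into camera i's
# axes, C_i is the camera center in world coordinates.
R1 = np.eye(3)
R2 = np.eye(3)
C1 = np.array([10.0, 20.0, 5.0])
C2 = np.array([20.0, 25.0, 5.0])

# x_cam_i = R_i @ (x_world - C_i); composing the two mappings gives:
#   x2 = (R2 @ R1.T) @ x1 + R2 @ (C1 - C2)
R_rel = R2 @ R1.T
T_rel = R2 @ (C1 - C2)
print(T_rel)  # [-10.  -5.   0.] for this identity-rotation example
```

With both rotations set to identity, the translation reduces to C1 - C2, which matches the intuition that the vector points from camera 2's center toward camera 1's, expressed in camera 2's frame.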
Can someone help me with that?

*Asked by AVK369 on Thu, 07 Feb 2019*

## Why does the optical center stay unchanged in stereoRectify?
[question/200777](http://answers.opencv.org/question/200777/why-the-optical-center-stay-unchanged-when-do-stererectify/)

I find that the stereo rectification algorithm always keeps the optical centers unchanged and just rotates the two cameras. Is it possible to translate the cameras during stereo rectification?

*Asked by german_iris on Tue, 09 Oct 2018*

## How to correctly triangulate points with a stereo camera setup
[question/200756](http://answers.opencv.org/question/200756/how-to-correctly-triangluate-points-with-a-stereo-camera-setup/)

Hello,
I need to triangulate points from two stereo cameras. I have a setup with two cameras whose image planes are tilted at an angle of 40° to each other. As far as I understand the process, these are the steps to get the triangulated points:
1. Rectify the input images using stereoRectify with the camera matrices and distortion vectors obtained from stereo calibration.
2. Use initUndistortRectifyMap to calculate the maps for remapping; here the camera matrices and distortion coefficients from stereo calibration and the rotation and projection matrices from stereoRectify are used.
3. Remap the input images using the remap function and the maps calculated in step 2.
4. Finally, triangulate the matching points from both images using triangulatePoints and the projection matrices from stereoRectify.
Did I miss something? I do get a result from these steps, but I suppose something goes wrong, as I get inconsistent output in my further processing.
I suspect the rectification does not work correctly. What I get after rectification is the following for the two cameras.
![Rectified image for first camera](/upfiles/1539017974865263.jpg)
![Rectified image for second camera](/upfiles/15390179854216488.jpg)
I would appreciate any advice, and I hope I didn't forget to mention a crucial point.
Kind regards,
David

*Asked by kraagg on Mon, 08 Oct 2018*

## How can I use triangulatePoints() if I have only CameraParams?
[question/75749](http://answers.opencv.org/question/75749/how-can-i-use-triangulatepoints-if-i-have-only-cameraparams/)

How can I get the 3x4 projection matrices of the first and second cameras to use [`triangulatePoints()`](http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html?highlight=calibratecamera#triangulatepoints) if I only have a [`struct detail::CameraParams`](http://docs.opencv.org/2.4/modules/stitching/doc/camera.html?highlight=cameraparams#detail::CameraParams) for each of the two cameras, which I got from [`detail::Estimator`](http://docs.opencv.org/2.4/modules/stitching/doc/motion_estimation.html?highlight=homographybasedestimator#)?

*Asked by AlexB on Sun, 08 Nov 2015*
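A sketch of the construction being asked about: if CameraParams yields an intrinsic matrix K (built from its focal, ppx, ppy, and aspect fields) plus a rotation R and translation t per camera, the 3x4 projection matrix is P = K [R | t]. The field names and K layout below follow my reading of the stitching module and may need adjusting:

```python
import numpy as np

def projection_matrix(focal, ppx, ppy, aspect, R, t):
    """Build P = K [R | t] from detail::CameraParams-style fields
    (field names assumed, not taken verbatim from the struct)."""
    K = np.array([[focal, 0.0, ppx],
                  [0.0, focal * aspect, ppy],
                  [0.0, 0.0, 1.0]])
    return K @ np.hstack([R, t.reshape(3, 1)])

# First camera at the origin, second translated along x (made-up values).
P1 = projection_matrix(800.0, 320.0, 240.0, 1.0, np.eye(3), np.zeros(3))
P2 = projection_matrix(800.0, 320.0, 240.0, 1.0, np.eye(3),
                       np.array([-0.1, 0.0, 0.0]))

# Sanity check: a point on camera 1's optical axis projects to the
# principal point (ppx, ppy).
x = P1 @ np.array([0.0, 0.0, 1.0, 1.0])
print(x[:2] / x[2])  # [320. 240.]
```

The resulting P1 and P2 are exactly the shape `triangulatePoints()` expects for its first two arguments, provided R and t express the same world-to-camera convention for both cameras.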