OpenCV Q&A Forum (http://answers.opencv.org/questions/), © OpenCV foundation, 2012-2018

**projectpoints distortion** (http://answers.opencv.org/question/231939/projectpoints-distortion/)

Hi everybody, I am opening a new question so that I can add images and pull together several other questions:
- https://answers.opencv.org/question/227094/cvprojectpoints-giving-bad-results-when-undistorted-2d-projections-are-far/
- https://answers.opencv.org/question/197614/are-lens-distortion-coefficients-inverted-for-projectpoints/
To recap, I'm trying to project 3D points onto a raw (distorted) image, in particular onto the images from the [KITTI raw dataset](http://cvlibs.net/datasets/kitti/).
For those who do not know it: for each sequence in the dataset, a calibration file is provided with both intrinsic and extrinsic parameters; the dataset contains both camera images and lidar 3D points. A tutorial for projecting the 3D points is also provided, but everything is demonstrated on the undistorted/rectified images that the dataset also ships.
However, since the intrinsic/extrinsic matrices are also provided, in general I should be able to project the 3D points directly onto the raw image, without using their rectification or applying any undistortion (raw meaning without any processing, in this case).
So let's start with their example: a projection of the points directly onto the rectified image (please note, I'm saying rectified and not undistorted because the dataset has four cameras, so they provide undistorted *and* rectified images).
![image description](/upfiles/15936456583289347.jpg)
As you can see, this image is pretty much undistorted and the lidar points are correctly aligned (produced here with their MATLAB demo toolkit).
Next, using the info provided:
> - S_xx: 1x2 size of image xx before rectification
> - K_xx: 3x3 calibration matrix of camera xx before rectification
> - D_xx: 1x5 distortion vector of camera xx before rectification
> - R_xx: 3x3 rotation matrix of camera xx (extrinsic)
> - T_xx: 3x1 translation vector of camera xx (extrinsic)
together with the info about where the lidar sensor is, I can easily prepare an example to:
- read the lidar data
- rotate the points into the camera coordinate frame (this is the same as giving projectPoints that transformation via rvec/tvec; I pass identity/zero values for those since the first camera is the origin of all the coordinate systems, but this is a detail)
- project the points using projectPoints
OK, let's try to do this, using
    D_00 = np.array([ -3.745594e-01, 2.049385e-01, 1.110145e-03, 1.379375e-03, -7.084798e-02], dtype=np.float64)
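The steps above can be sketched in plain NumPy, reproducing what projectPoints computes for a 5-element distortion vector (the K_00 values and the points below are hypothetical placeholders, not the real KITTI calibration):

```python
import numpy as np

def project(pts_cam, K, D):
    """Pinhole projection with the Brown radial/tangential model,
    mirroring what cv2.projectPoints does for a 5-element D."""
    k1, k2, p1, p2, k3 = D
    x = pts_cam[:, 0] / pts_cam[:, 2]
    y = pts_cam[:, 1] / pts_cam[:, 2]
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.stack([K[0, 0] * xd + K[0, 2], K[1, 1] * yd + K[1, 2]], axis=1)

# HYPOTHETICAL intrinsics; in practice read K_00 from calib_cam_to_cam.txt
K_00 = np.array([[984.2, 0.0, 690.0],
                 [0.0, 980.8, 233.2],
                 [0.0, 0.0, 1.0]])
D_00 = np.array([-3.745594e-01, 2.049385e-01, 1.110145e-03,
                 1.379375e-03, -7.084798e-02])

# A few hypothetical lidar points already rotated into the camera frame
# (step 2 above), so no extra rvec/tvec is needed here
pts = np.array([[1.0, 0.5, 9.1], [-2.0, 0.3, 9.1], [0.0, 0.0, 9.1]])
uv = project(pts, K_00, D_00)
```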
![image description](/upfiles/159364653919928.jpg)
I hope you can see the image! There are points above the actual limit of the lidar points, as well as some "noise" in between the other points (to be fair, in the first image I didn't draw all the points, but you can still see a different "noise pattern" in this second one).
At the beginning I thought it was due to a stupid error in my code, in the parsing, or whatever... but then I tried to do the same on the undistorted image, setting the distortion coefficients to zero, and... magic:
    D_00_zeros = np.array([ 0.0, 0.0, 0.0, 0.0, 0.0], dtype=np.float64)
![image description](/upfiles/15936469688493019.jpg)
Could this still be my fault in parsing the data? Hmm... so I decided to create a "virtual plane" and move it through 3D space, bringing it close to the camera.
![image description](/upfiles/1593647290947237.png)
![image description](/upfiles/1593647369769384.png)
![image description](/upfiles/15936473845954262.png)
![image description](/upfiles/15936473993019015.png)
And... whaaat?! What's going on here?! Then I started to look around the internet and experiment with solutions; one of the most "inspiring" was [this one](https://answers.opencv.org/question/227094/cvprojectpoints-giving-bad-results-when-undistorted-2d-projections-are-far/), the first link above, where @HYPEREGO said *I read somewhere that using more than the usual 4/5 distortion coefficients in this function sometimes leads to bad results*... So, just for the sake of argument, I tried to use
    D_00 = np.array([ -3.745594e-01, 2.049385e-01, 1.110145e-03, 1.379375e-03], dtype=np.float64)
instead of
    D_00 = np.array([ -3.745594e-01, 2.049385e-01, 1.110145e-03, 1.379375e-03, -7.084798e-02], dtype=np.float64)
and again... magic. Here is the same distance as before (9.1 m):
![image description](/upfiles/15936479356358398.png)
and even close to the camera:
![image description](/upfiles/1593647955745727.png)
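For what it's worth, here is a numeric look at the radial factor for this D_00 (my own experiment, not an explanation from any docs): with all five coefficients, the polynomial `1 + k1*r² + k2*r⁴ + k3*r⁶` goes negative for large normalized radii, mirroring those points through the principal point, while the truncated 4-coefficient version stays positive there:

```python
# KITTI cam 00 distortion coefficients from the question
k1, k2, p1, p2, k3 = (-3.745594e-01, 2.049385e-01, 1.110145e-03,
                      1.379375e-03, -7.084798e-02)

def radial_factor(r2, use_k3=True):
    """Radial scaling applied to a normalized point at squared radius r2."""
    f = 1 + k1 * r2 + k2 * r2 ** 2
    if use_k3:
        f += k3 * r2 ** 3
    return f

for r2 in (0.5, 1.0, 2.0, 3.0):
    # with k3 the factor shrinks and eventually turns negative;
    # without k3 it stays positive and monotonic over this range
    print(r2, radial_factor(r2, True), radial_factor(r2, False))
```

So a 3D point whose ideal projection falls far enough outside the calibrated field of view (around r² ≈ 3 for these coefficients) gets a negative radial factor and is projected back *into* the image, which would be consistent with the stray points above the lidar limit.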
So what's going on here? Moreover, with respect to the second link: the k1 value is negative, and I'm getting barrel distortion instead of the pincushion distortion the wiki suggests
![image description](https://docs.opencv.org/2.4/_images/distortion_examples.png)
in particular
with `k1 > 0`
![image description](/upfiles/1593643837616328.png)
with `k1 < 0`
![image description](/upfiles/15936437527542456.png)
Following the wiki, the effect should be the opposite, as @Stringweasel pointed out.
The vectors I used are, respectively,
    D_00 = np.array([ 0.25, 0.0, 0.0, 0.0, 0.0], dtype=np.float64)
    D_00 = np.array([ -0.25, 0.0, 0.0, 0.0, 0.0], dtype=np.float64)
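Regarding the sign question, a quick numeric check of which way the forward model (the one projectPoints applies) moves a projected point; my interpretation is that the wiki figures illustrate how a captured image looks, i.e. the inverse direction, which may be the source of the confusion:

```python
def distort_x(x, k1):
    # Radial model with only k1, for a normalized point on the x axis (y = 0):
    # x_distorted = x * (1 + k1 * r^2), with r^2 = x^2 here
    return x * (1 + k1 * x * x)

# normalized point at x = 0.5
print(distort_x(0.5, +0.25))  # 0.53125 -> pushed AWAY from the principal point
print(distort_x(0.5, -0.25))  # 0.46875 -> pulled TOWARDS it
```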
---
So, to conclude: I think I'm completely missing something here, OR there is a bug from the Middle Ages of OpenCV... but that would be weird.
Hope that some OpenCV *guru* can help me!
Bests,
Augustotrigal, Wed, 01 Jul 2020 19:05:12 -0500

---

**OpenCV optical flow implementations evaluation on KITTI 2015 benchmark** (http://answers.opencv.org/question/214666/opencv-optical-flow-implementations-evaluation-on-kitti-2015-benchmark/)

I am trying to evaluate the CUDA-based dual TV-L1 dense optical flow algorithm on the KITTI 2015 training set, and I observed that the quality is much lower than that of the reference dual TV-L1 implementation (http://www.ipol.im/pub/art/2013/26/?utm_source=doi).
I am trying to tune the parameters to match the reference dual TV-L1 implementation but am not getting any quality gains. What are good parameter values for the OpenCV dual TV-L1 implementation?
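If it helps, these are the defaults of the IPOL reference implementation as I remember them; treat them as a hypothetical starting point and verify against the paper before relying on them:

```python
# Defaults of the IPOL dual TV-L1 reference implementation
# (Sanchez et al., IPOL 2013) -- from memory, so double-check them.
ipol_defaults = {
    "tau": 0.25,      # time step of the numerical scheme
    "lambda": 0.15,   # data attachment weight (lower = smoother flow)
    "theta": 0.3,     # tightness of the coupling term
    "nscales": 5,     # number of pyramid scales
    "warps": 5,       # warps per scale
    "epsilon": 0.01,  # stopping threshold
}
```

These should map onto the corresponding setters of the OpenCV TV-L1 classes (setTau, setLambda, setTheta, setNumScales, setNumWarps, setEpsilon), if I read the headers right.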
I am also seeing that the quality of CUDA pyramidal LK is better than CUDA Farneback, which in turn is better than OpenCV's CUDA dual TV-L1. Any clues about the parameters?

Aallex, Mon, 24 Jun 2019 00:32:48 -0500

---

**Extracting the Essential matrix from the Fundamental matrix** (http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/)

Hello everybody,
today I have a question for you all.
First of all, I've searched across this forum, the OpenCV forum, and so on. The answer is probably in one of those threads, but at this point I need some clarification; that's why I'm here with my question.
**INTRODUCTION**
I'm implementing an algorithm able to recover the **calibration** of the cameras and rectify the images properly (to be more clear, estimating the extrinsic parameters). Most of my pipeline is pretty standard and can be found around the web. Obviously, I don't want to recover the full calibration, only most of it. Since I'm currently working with the KITTI dataset (http://www.cvlibs.net/publications/Geiger2013IJRR.pdf), I assume I know **K_00**, **K_01**, **D_00**, **D_01** (the camera intrinsics, given in their calibration file), so the camera matrices and the distortion coefficients are known.
I do the following:
- Starting from the raw distorted images, I apply the undistortion using the intrinsics.
- Extract corresponding points from the **Left** and **Right** images
- Match them using a matcher (FLANN or BFMatcher or whatever)
- Filter the matched points with an outlier rejection algorithm (I checked the result visually)
- Call **findFundamentalMat** to retrieve the fundamental matrix (I call with LMedS since I've already filtered most of the outliers in the previous step)
If I try to compute the error of the point correspondences by applying `x'^T * F * x = 0`, the result seems good (less than 0.1), and I suppose everything is OK, since there are plenty of examples around the web doing exactly this, so nothing new.
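One way to convince yourself that the residual computation itself is right is to run it on synthetic data where F is known exactly (all values below are made up for the example, not KITTI calibration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical intrinsics for the left/right cameras
K_00 = np.array([[700.0, 0.0, 600.0], [0.0, 700.0, 180.0], [0.0, 0.0, 1.0]])
K_01 = K_00.copy()

# A known relative pose: pure baseline along x (KITTI-like stereo)
R = np.eye(3)
t = np.array([0.54, 0.0, 0.0])
tx = np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])

# Ground-truth fundamental matrix: F = K_01^-T [t]_x R K_00^-1
F = np.linalg.inv(K_01).T @ tx @ R @ np.linalg.inv(K_00)
F /= np.linalg.norm(F)

# Random 3D points in front of both cameras, projected into each image
X = np.column_stack([rng.uniform(-5, 5, 50),
                     rng.uniform(-2, 2, 50),
                     rng.uniform(5, 30, 50)])
x0 = (K_00 @ X.T).T
x0 /= x0[:, 2:3]
x1 = (K_01 @ (R @ X.T + t[:, None])).T
x1 /= x1[:, 2:3]

# Epipolar residuals x'^T F x: essentially zero for perfect matches
res = np.abs(np.einsum('ij,jk,ik->i', x1, F, x0))
print(res.mean())
```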
Since I want to rectify the images, I need the essential matrix.
**THE PROBLEM**
First of all, I obtain the Essential matrix by simply applying formula (9.12) from the HZ book (page 257):
    cv::Mat E = K_01.t() * fundamentalMat * K_00;
I then normalize the coordinates to verify the quality of E.
Given two corresponding points (matched1 and matched2), the normalization goes as follows (obviously I apply this to the two sets of inliers found earlier; this is an example of what I do):
    cv::Mat _1 = cv::Mat(3, 1, CV_32F);  // homogeneous pixel coordinates
    _1.at<float>(0,0) = matched1.x;
    _1.at<float>(1,0) = matched1.y;
    _1.at<float>(2,0) = 1;
    cv::Mat normalized_1 = K_00.inv() * _1;  // x_hat = K^-1 * x
So now I have the Essential matrix and the normalized coordinates (I could eventually convert them to Point3f or other structures), so I can verify the relationship `x'^T * E * x = 0` *(HZ page 257, formula 9.11)*, iterating over all the normalized coordinates:
    cv::Mat residual = normalized_2.t() * E * normalized_1;
    residual_value += cv::sum(residual)[0];
On every execution of the algorithm, the Fundamental matrix changes **slightly**, as expected (and the mean error, as mentioned above, is always around 0.01), while the Essential matrix... changes a lot!
I tried to decompose the matrix using the OpenCV SVD implementation (I understand it's not the best; for that reason I'll probably switch to LAPACK for this, any suggestions?), and again the constraint that the two singular values must be equal is not respected, which drives my whole algorithm into a completely wrong estimation of the rectification.
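For reference, a numeric illustration of the constraint: a true essential matrix E = [t]_x R has singular values (s, s, 0), and an estimate that violates this can be projected back by replacing the singular values with (1, 1, 0), since the scale of E is arbitrary anyway. This is a sketch of the standard correction (as in HZ), not a fix for the underlying estimation problem:

```python
import numpy as np

# A true essential matrix: E = [t]_x R. [t]_x has singular values
# (||t||, ||t||, 0), and multiplying by a rotation does not change them.
t = np.array([0.54, 0.02, 0.01])
c, s = np.cos(0.05), np.sin(0.05)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
tx = np.array([[0, -t[2], t[1]],
               [t[2], 0, -t[0]],
               [-t[1], t[0], 0]])
E = tx @ R
S_clean = np.linalg.svd(E, compute_uv=False)  # (s, s, 0)

# A noisy estimate generally violates the constraint; enforce it by
# rebuilding E with singular values (1, 1, 0)
E_noisy = E + 1e-3 * np.random.default_rng(1).normal(size=(3, 3))
U, S, Vt = np.linalg.svd(E_noisy)
E_fixed = U @ np.diag([1.0, 1.0, 0.0]) @ Vt
S_fixed = np.linalg.svd(E_fixed, compute_uv=False)
```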
I would like to test this algorithm with images from my own cameras too (I have two Allied Vision cameras), but I'm waiting for a high-quality chessboard, so the KITTI dataset is my starting point.
**EDIT** One previous error was in the formula: I had calculated the residual of E as `x^T * E * x' = 0` instead of `x'^T * E * x = 0`. This is now fixed and the residual error of E seems good, but the Essential matrix I get is very different every time... and after the SVD, the two singular values are not as similar as they should be.
**EDIT** Here are the differing SVD singular values:
cv::SVD produces this result:
>133.70399
>127.47910
>0.00000
while Eigen::SVD produces the following:
>1.00777
>0.00778
>0.00000
Okay, maybe this is not an OpenCV-related problem, for sure, but any help is more than welcome!

HYPEREGO, Mon, 04 Mar 2019 11:56:14 -0600