OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright [OpenCV foundation](http://www.opencv.org), 2012-2018.

## Inverse Matrix from a Decomposehomography
http://answers.opencv.org/question/228224/inverse-matrix-from-a-decomposehomography/

I need to obtain the inverse of a rotation matrix I get from `decomposeHomographyMat()`, but I'm having some trouble: the type of the matrix returned by that function does not seem to work with `.inv()`. Here's an example, where `Prev_rot_matrix` is another matrix:
```cpp
int solutions = decomposeHomographyMat(H, cameraMatrix, Rs_decomp, ts_decomp, normals_decomp);
for (int i = 0; i < solutions; i++)
{
    if (normals_decomp[i].at<double>(2) > 0)
    {
        aux = FrameVar(Rs_decomp[i], Prev_rot_matrix); /* Prev_rot_matrix has the same structure as Rs_decomp[i] */
        if (aux < Var)
        {
            Var = aux;
            SOL = i;
        }
    }
}
```
```cpp
double FrameVar(Mat Rot_Curr, Mat Rot_Prev)
{
    // Previous rotation: Rot_Prev; current rotation: Rot_Curr
    double NewAngle, OldAngle, aux;
    Mat Rot_Curr_vect, Rot_Prev_vect;
    Mat VarAngle(cv::Size(1, 1), CV_64FC1);
    Rodrigues(Rot_Curr, Rot_Curr_vect);
    Rot_Prev_vect = Rot_Prev.inv() * Rodrigues(Rot_Prev, Rot_Prev_vect);
    ...
```
So when I try to compile it I get:

> `'cv::MatExpr' is not derived from 'const cv::Affine3<T>'`
> `Rot_Prev_vect = Rot_Prev.inv() * Rodrigues(Rot_Prev, Rot_Prev_vect);`

and a bunch of other errors.
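For context, an editor's note (not from the original thread): the compile error comes from `Rodrigues()`, which returns `void` and therefore cannot appear inside a matrix expression. Separately, because a rotation matrix is orthonormal, its inverse is just its transpose. The underlying math in plain C++, with names of my own choosing:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// A rotation matrix is orthonormal, so its inverse is simply its transpose.
// (In OpenCV terms: Rot_Prev.t() works just as well as Rot_Prev.inv().)
inline Mat3 rotationInverse(const Mat3& R) {
    Mat3 Rt{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            Rt[i][j] = R[j][i];
    return Rt;
}

inline Mat3 matMul(const Mat3& A, const Mat3& B) {
    Mat3 C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}
```

In OpenCV terms that is simply `Rot_Prev.t()`; and call `Rodrigues(Rot_Prev, Rot_Prev_vect);` as a standalone statement first, then use `Rot_Prev_vect` in further expressions.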
If that means the `Rot_Prev` class does not have `.inv()`, how can I obtain its inverse matrix? I want to get the previous rotation vector in the new frame's coordinates.

*Aquas, Sun, 29 Mar 2020 06:33:36 -0500*
http://answers.opencv.org/question/228224/

## Inverse Flow - Forward Warping or Bilinear "Splatting"
http://answers.opencv.org/question/211672/inverse-flow-forward-warping-or-bilinear-splatting/

I am interested in the inverse or backward flow given the forward flow. The following code does this forward warping or bilinear splatting, but it is annoyingly slow (~8 ms at VGA resolution on my i7-7820HK). It seems likely to me that this could/should be closer to 1-2 ms. Any insights into speeding this up?
```cpp
inline bool isOnImage(const cv::Point& pt, const cv::Size& size)
{
    return pt.x >= 0 && pt.x < size.width && pt.y >= 0 && pt.y < size.height;
}

cv::Mat img_proc::inverseFlow(const cv::Mat& flow)
{
    cv::Mat inverse_flow = cv::Mat::zeros(flow.size(), CV_32FC2);
    cv::Mat weights = cv::Mat::zeros(flow.size(), CV_32FC2);
    const int rows = flow.rows;
    const int cols = flow.cols;
    for (int i = 0; i < rows; ++i)
    {
        auto flow_ptr = flow.ptr<cv::Vec2f>(i);
        for (int j = 0; j < cols; ++j)
        {
            const float du = flow_ptr[j][0];
            const float dv = flow_ptr[j][1];
            const int u = j + std::round(du);
            const int v = i + std::round(dv);
            if (!isOnImage({u, v}, flow.size()))
            {
                continue;
            }
            const int du_floor = (int) std::floor(du);
            const int du_ceil  = (int) std::ceil(du);
            const int dv_floor = (int) std::floor(dv);
            const int dv_ceil  = (int) std::ceil(dv);
            const int u_min = std::min(cols - 1, std::max(0, j + du_floor));
            const int u_max = std::min(cols - 1, std::max(0, j + du_ceil));
            const int v_min = std::min(rows - 1, std::max(0, i + dv_floor));
            const int v_max = std::min(rows - 1, std::max(0, i + dv_ceil));
            const float uf = j + du;
            const float vf = i + dv;
            const float w0 = (u_max - uf) * (v_max - vf); // TL
            const float w1 = (uf - u_min) * (v_max - vf); // TR
            const float w2 = (uf - u_min) * (vf - v_min); // BR
            const float w3 = (u_max - uf) * (vf - v_min); // BL
            weights.at<cv::Vec2f>(v_min, u_min) += cv::Vec2f{w0, w0};
            weights.at<cv::Vec2f>(v_min, u_max) += cv::Vec2f{w1, w1};
            weights.at<cv::Vec2f>(v_max, u_min) += cv::Vec2f{w3, w3};
            weights.at<cv::Vec2f>(v_max, u_max) += cv::Vec2f{w2, w2};
            inverse_flow.at<cv::Vec2f>(v_min, u_min) += w0 * cv::Vec2f{-du, -dv};
            inverse_flow.at<cv::Vec2f>(v_min, u_max) += w1 * cv::Vec2f{-du, -dv};
            inverse_flow.at<cv::Vec2f>(v_max, u_min) += w3 * cv::Vec2f{-du, -dv};
            inverse_flow.at<cv::Vec2f>(v_max, u_max) += w2 * cv::Vec2f{-du, -dv};
        }
    }
    cv::divide(inverse_flow, weights, inverse_flow);
    return inverse_flow;
}
```

*Der Luftmensch, Tue, 16 Apr 2019 12:45:19 -0500*
http://answers.opencv.org/question/211672/

## What does wiki mean by "inverse of original system"? (Wiener Deconvolution)
http://answers.opencv.org/question/192939/what-does-wiki-means-by-inverse-of-original-systemwiener-deconvolution/

[Can anyone explain how to calculate this in C++ OpenCV?](https://en.wikipedia.org/wiki/Wiener_deconvolution#Interpretation)
I have been able to calculate the signal-to-noise ratio and the squared magnitude of the blurring filter.
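As an editorial aside: the interpretation formula never requires forming 1/H(f) on its own. The Wiener filter combines exactly the two quantities above as G(f) = conj(H(f)) / (|H(f)|² + 1/SNR(f)). A stdlib-only sketch for a single frequency bin (names are mine):

```cpp
#include <cassert>
#include <cmath>
#include <complex>

// One frequency bin of the Wiener filter:
//   G(f) = conj(H(f)) / ( |H(f)|^2 + 1/SNR(f) )
// As SNR -> infinity this tends to the naive inverse 1/H(f), but it stays
// finite where H(f) ~ 0, which is the whole point of the formulation.
inline std::complex<double> wienerFilter(std::complex<double> H, double snr) {
    return std::conj(H) / (std::norm(H) + 1.0 / snr);  // std::norm(H) == |H|^2
}
```

Over a whole spectrum, the numerator can be built with `mulSpectrums(..., /*conjB=*/true)` and the denominator with a per-element `divide`; near zeros of H(f) the 1/SNR term keeps the filter finite, which is why the naive inverse is avoided.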
I am not sure how to calculate 1/H(f).

*vaibhav_wimpsta, Mon, 04 Jun 2018 01:11:05 -0500*
http://answers.opencv.org/question/192939/

## Inverse Perspective Mapping (newbie)
http://answers.opencv.org/question/77262/inverse-perspective-mapping-newbie/

Hello all,
I have a picture containing two geometric shapes on the same plane. The picture is taken from some unknown point of view. One shape is a square of known size; the other is unknown. Is it possible to revert the perspective transform and measure the size of the unknown shape? I am new to OpenCV, and I've only understood that this has to do with Inverse Perspective Mapping. What is the sequence of function calls?
![image description](/upfiles/14484690065764264.jpg)
Thank you
I've tried both affine and perspective transforms, but the result is not what I want. The arch is still distorted, even if the square is not.
ORIGINAL
![image description](/upfiles/144856172568003.jpg)
AFFINE
![image description](/upfiles/14485617768689178.jpg)
PERSPECTIVE
![image description](/upfiles/1448561791961170.jpg)
Any idea?
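One way to attack this, sketched by the editor (not part of the original post): use the square's four image corners and its known side length to build a homography onto a metric plane with `getPerspectiveTransform`, then push the unknown shape's points through it with `perspectiveTransform` and measure them in metric units. Note this only works if the second shape really is coplanar with the square; a non-coplanar arch will stay distorted no matter which 2D transform is used. The math that `getPerspectiveTransform` solves, in plain C++:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Solve the 8x8 linear system M*h = rhs (rhs stored as column 8) by
// Gauss-Jordan elimination with partial pivoting.
std::array<double, 8> solve8(std::array<std::array<double, 9>, 8> M) {
    for (int c = 0; c < 8; ++c) {
        int piv = c;
        for (int r = c + 1; r < 8; ++r)
            if (std::abs(M[r][c]) > std::abs(M[piv][c])) piv = r;
        std::swap(M[c], M[piv]);
        for (int r = 0; r < 8; ++r) {
            if (r == c) continue;
            double f = M[r][c] / M[c][c];
            for (int k = c; k < 9; ++k) M[r][k] -= f * M[c][k];
        }
    }
    std::array<double, 8> h{};
    for (int i = 0; i < 8; ++i) h[i] = M[i][8] / M[i][i];
    return h;
}

// Homography mapping (x[i], y[i]) -> (u[i], v[i]) for i = 0..3, with h33
// normalized to 1 -- this is what cv::getPerspectiveTransform computes.
std::array<double, 9> homography4(const double x[4], const double y[4],
                                  const double u[4], const double v[4]) {
    std::array<std::array<double, 9>, 8> M{};
    for (int i = 0; i < 4; ++i) {
        M[2*i]     = {x[i], y[i], 1, 0, 0, 0, -u[i]*x[i], -u[i]*y[i], u[i]};
        M[2*i + 1] = {0, 0, 0, x[i], y[i], 1, -v[i]*x[i], -v[i]*y[i], v[i]};
    }
    auto h = solve8(M);
    return {h[0], h[1], h[2], h[3], h[4], h[5], h[6], h[7], 1.0};
}

// Apply H to one point (what cv::perspectiveTransform does per point).
void applyH(const std::array<double, 9>& H, double x, double y, double& u, double& v) {
    double w = H[6]*x + H[7]*y + H[8];
    u = (H[0]*x + H[1]*y + H[2]) / w;
    v = (H[3]*x + H[4]*y + H[5]) / w;
}
```

Distances between points mapped through H are then in the square's units (the 100 mm side used in the test below is a made-up value).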
*johnfulgor, Wed, 25 Nov 2015 02:44:21 -0600*
http://answers.opencv.org/question/77262/

## Perspective Transform using Chessboard
http://answers.opencv.org/question/62956/perspective-transform-using-chessboard/

Hey,
I need some help with this problem:
I have a camera that takes a picture of something on a horizontal plane at a specific angle.
That creates a perspective distortion of this "something", and I would like to get this picture as if I were looking straight down at it from above.
What I did already and one thing that I don't know how to do:
1. I placed a chessboard there.
2. I find the corners of the chessboard.
3. ???
4. cvGetPerspectiveTransform
5. cvWarpPerspective
My problem is point 3.
I have to find the source and destination points, which depend on the corners of the chessboard and on the size of the picture that was taken, because they define the transformation.
Source is easy: (0,0), (Width,0), (0,Height) and (Width,Height), because I want the whole picture to be transformed.
Destination, however, is difficult for me. I don't know how to find those points.
I want the whole picture (not just the part with the chessboard in it) to be transformed in a single step.
Like in the picture below.
I would appreciate any help.
Greetings and my thanks in advance,
Phanta
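A possible answer to step 3, added by the editor: the destination points are the corners of an ideal, fronto-parallel chessboard, i.e. a regular grid with a cell size and offset you pick yourself. Take 4 found corners (e.g. the outer ones) as source, their ideal grid positions as destination, and the resulting warp applies to the whole picture, not just the board. A sketch with hypothetical parameters:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Ideal top-down positions for the inner corners of a cols x rows chessboard,
// cellSize pixels per square, shifted by (offsetX, offsetY) so that the part
// of the image around the board stays in frame after warping.
std::vector<std::pair<float, float>> idealCorners(int cols, int rows, float cellSize,
                                                  float offsetX, float offsetY) {
    std::vector<std::pair<float, float>> pts;
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            pts.push_back({offsetX + c * cellSize, offsetY + r * cellSize});
    return pts;
}
```

The offset matters: without it, everything left of and above the board would be warped out of the frame.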
![image description](/upfiles/14331120304698347.png)

*Phanta, Sun, 31 May 2015 07:20:48 -0500*
http://answers.opencv.org/question/62956/

## opencv idft output and calculation of phase
http://answers.opencv.org/question/56345/opencv-idft-output-and-calculation-of-phase/

I have been trying to calculate the phase information of a complex matrix in OpenCV. As I am new to OpenCV, I am sure I am failing to look for the correct answer. So, I have this program.
I am sure the matrix invDFT holds complex values.
So what is the easiest way of calculating the phase of the whole matrix? And how can I `imshow` the DFT output for this program? I have used `phase()` and I am not sure if it's correct.
Thanks. I already said I am new in opencv. So please pardon if my questions are too basic. Thanks once again.
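An editor's sketch of the phase math (not the asker's code): phase must be read off the *forward* DFT's real and imaginary channels via atan2. After `idft` with `DFT_REAL_OUTPUT` the imaginary channel has been discarded, so computing phase there is meaningless. In plain C++:

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

// Naive forward DFT of a real signal (illustration only; use cv::dft for real work).
inline std::vector<std::complex<double>> naiveDft(const std::vector<double>& x) {
    const double PI = std::acos(-1.0);
    const std::size_t N = x.size();
    std::vector<std::complex<double>> X(N);
    for (std::size_t k = 0; k < N; ++k)
        for (std::size_t n = 0; n < N; ++n)
            X[k] += x[n] * std::polar(1.0, -2.0 * PI * double(k) * double(n) / double(N));
    return X;
}

// Per-element phase: atan2(imag, real) -- exactly what cv::phase computes
// from the two channels of the forward transform.
inline std::vector<double> phaseOf(const std::vector<std::complex<double>>& X) {
    std::vector<double> ph(X.size());
    for (std::size_t k = 0; k < X.size(); ++k)
        ph[k] = std::atan2(X[k].imag(), X[k].real());
    return ph;
}
```

Applied to the program that follows: split the forward-transformed `complexI` (not `invDFT`), call `phase(planes[0], planes[1], ph)`, and normalize `ph` to [0, 1] before `imshow`, since `imshow` expects float images in that range.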
```cpp
int main()
{
    // Read image from file
    // Make sure that the image is in grayscale
    Mat img = imread("input.bmp", 0);

    Mat mag, ph;
    Mat planes[] = { Mat_<float>(img), Mat::zeros(img.size(), CV_32F) };
    Mat complexI; // Complex plane to contain the DFT coefficients {[0]-Real, [1]-Imag}
    merge(planes, 2, complexI);
    dft(complexI, complexI); // Applying DFT

    // Reconstructing the original image from the DFT coefficients
    Mat invDFT, invDFTcvt;
    idft(complexI, invDFT, DFT_SCALE | DFT_REAL_OUTPUT); // Applying IDFT
    Mat planesi[] = { Mat_<float>(invDFT), Mat::zeros(invDFT.size(), CV_32F) };
    split(invDFT, planesi);
    invDFT.convertTo(invDFTcvt, CV_8U);
    imshow("Output", invDFTcvt);

    phase(planesi[0], planesi[1], ph, false);
    namedWindow("phase image", CV_WINDOW_AUTOSIZE);
    imshow("phase image", ph);

    // Show the original image
    imshow("Original Image", img);

    // Wait until the user presses a key
    waitKey(0);
    return 0;
}
```

*tahseen_kamal, Thu, 26 Feb 2015 22:20:13 -0600*
http://answers.opencv.org/question/56345/

## How to Invert 3x2 Transformation Matrix
http://answers.opencv.org/question/39311/how-to-invert-3x2-transformation-matrix/

Hi,
I am having trouble inverting a 3x2 transformation matrix. If my original transformation is a rotation by +5°, I want the inverse, whose rotation is -5°.
I then want to transform some points with the new inverse matrix.
If I use

```cpp
cv::Mat inverse;
inverse = H.inv(cv::DECOMP_SVD);
```

I get back a matrix, but it is 2x3 instead of 3x2, and then I can't use cv::transform anymore because it gets a SIGABRT.
What am I doing wrong?
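For reference, an editor's note: OpenCV's affine warps are 2x3 matrices of the form [A | b], and there is a dedicated `cv::invertAffineTransform` for exactly this case; `H.inv(cv::DECOMP_SVD)` instead computes a pseudo-inverse of the rectangular matrix, which is why the shape comes back transposed. The underlying identity, [A | b]⁻¹ = [A⁻¹ | -A⁻¹b], in plain C++:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Affine = std::array<std::array<double, 3>, 2>; // 2x3: [A | b]

// Inverse of an affine transform: x' = A x + b  =>  x = A^{-1} x' - A^{-1} b.
// This is what cv::invertAffineTransform computes for you.
Affine invertAffine(const Affine& M) {
    const double det = M[0][0] * M[1][1] - M[0][1] * M[1][0];
    const double ia =  M[1][1] / det, ib = -M[0][1] / det;
    const double ic = -M[1][0] / det, id =  M[0][0] / det;
    return {{{ia, ib, -(ia * M[0][2] + ib * M[1][2])},
             {ic, id, -(ic * M[0][2] + id * M[1][2])}}};
}

void applyAffine(const Affine& M, double x, double y, double& u, double& v) {
    u = M[0][0] * x + M[0][1] * y + M[0][2];
    v = M[1][0] * x + M[1][1] * y + M[1][2];
}
```

With OpenCV: `cv::invertAffineTransform(H, Hinv); cv::transform(points, out, Hinv);` and the SIGABRT goes away, because `Hinv` is again 2x3.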
Regards, Peter

*pkohout, Wed, 13 Aug 2014 03:35:28 -0500*
http://answers.opencv.org/question/39311/

## Completely black image after inverse DFT on GPU
http://answers.opencv.org/question/34065/completely-black-image-after-inverse-dft-on-gpu/

Hello,
I'm doing some image deconvolution using the OpenCV GPU libraries. I successfully perform DFTs on my image and my kernel and then use the `mulSpectrums` function for the deconvolution. I'm now attempting to recover the deconvolved image so I can write it back out to a file, but the result of doing an inverse DFT on the deconvolved image gives me a completely black image. I've had a look at [this question](http://answers.opencv.org/question/11485/opencv-gpudft-distorted-image-after-inverse/) and tried the solutions in there, but alas with no results. The relevant part of my code is as below. If anyone could take a look and help me figure out what I'm doing wrong, that'd be great!
```cpp
// DFT the image
cout << filepath << ": performing DFT" << endl;
GpuMat dftImageG = GpuMat(Mat::zeros(complexImageG.size(), CV_32FC2));
gpu::dft(complexImageG, dftImageG, dftImageG.size());
complexImageG.release();

GpuMat newKernelG;
newKernelG.upload(tempKernel);

// Perform the deconvolution.
cout << filepath << ": deconvolving" << endl;
gpu::mulSpectrums(dftImageG, newKernelG, dftImageG, 0);
newKernelG.release();

// Perform an inverse DFT to get an image back.
cout << filepath << ": performing inverse DFT" << endl;
GpuMat inverseDFT = GpuMat(Mat::zeros(dftImageG.size(), CV_32FC1));
gpu::dft(dftImageG, inverseDFT, inverseDFT.size(), DFT_REAL_OUTPUT | DFT_SCALE | DFT_INVERSE);
dftImageG.release();
```
Edit: The kernel is already DFTed, even if that's not shown here.

*CommanderDJ, Sun, 25 May 2014 20:12:05 -0500*
http://answers.opencv.org/question/34065/

## Inverse Perspective Mapping with Known Rotation and Translation
http://answers.opencv.org/question/33267/inverse-perspective-mapping-with-known-rotation-and-translation/

Hi,
I need to obtain a new view of an image from a desired point of view (a general case of bird's eye view).
Imagine we change the camera's pose by a **known rotation and translation**. What would be the new image of the same scene?
We may put it another way: how can we compute the **homography matrix** from the rotation and translation matrices?
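A sketch of the standard formula, added by the editor: for a scene plane with unit normal n at distance d from the first camera, a motion (R, t) induces the homography H = K (R - t nᵀ / d) K⁻¹, where K is the intrinsic matrix. In plain C++ 3x3 arithmetic (all names mine):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

Mat3 mul3(const Mat3& A, const Mat3& B) {
    Mat3 C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

// Plane-induced homography: H = K * (R - t * n^T / d) * K^{-1}
// n is the plane's unit normal, d its distance from the first camera.
Mat3 planeHomography(const Mat3& K, const Mat3& Kinv, const Mat3& R,
                     const std::array<double, 3>& t,
                     const std::array<double, 3>& n, double d) {
    Mat3 M = R;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            M[i][j] -= t[i] * n[j] / d;
    return mul3(K, mul3(M, Kinv));
}
```

Warping the original image with this H (e.g. via `warpPerspective`) renders the plane as the moved camera would see it; the same expression appears in OpenCV's homography tutorial, in the opposite direction of `decomposeHomographyMat`.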
I really appreciate any help!

*gozari, Tue, 13 May 2014 11:25:31 -0500*
http://answers.opencv.org/question/33267/

## Inverse bilinear interpolation (pupil tracker)
http://answers.opencv.org/question/15210/inverse-bilinear-interpolation-pupil-tracker/

I have built an eye tracking application using OpenCV, and I wish to control the location of the mouse pointer using the location of the left eye pupil.
What I have is four pupil positions that correspond to the four screen corners. Now I would like to map the current pupil coordinate, given the four corner positions, to a screen coordinate.
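Inverse bilinear interpolation itself is small enough to write by hand; a stdlib-only sketch from the editor (OpenCV has no direct equivalent; `getPerspectiveTransform` plus `perspectiveTransform` is the closest built-in, though a homography is not quite the same mapping as a bilinear patch). Writing the patch as p = a + u·e + v·f + uv·g and crossing out u leaves a quadratic in v:

```cpp
#include <cassert>
#include <cmath>

struct Pt { double x, y; };
inline double cross2(Pt a, Pt b) { return a.x * b.y - a.y * b.x; }
inline Pt sub(Pt a, Pt b) { return {a.x - b.x, a.y - b.y}; }

// Recover (u, v) in [0,1]^2 such that p is the bilinear blend of the quad
// corners a (u=0,v=0), b (u=1,v=0), c (u=1,v=1), d (u=0,v=1).
bool invBilinear(Pt p, Pt a, Pt b, Pt c, Pt d, double& u, double& v) {
    Pt e = sub(b, a), f = sub(d, a), h = sub(p, a);
    Pt g = {a.x - b.x + c.x - d.x, a.y - b.y + c.y - d.y};
    double k2 = cross2(g, f);
    double k1 = cross2(e, f) + cross2(h, g);
    double k0 = cross2(h, e);
    if (std::abs(k2) < 1e-12) {          // parallelogram: the quadratic degenerates
        v = -k0 / k1;
    } else {
        double disc = k1 * k1 - 4.0 * k2 * k0;
        if (disc < 0) return false;      // p is outside the bilinear patch
        double r = std::sqrt(disc);
        v = (-k1 - r) / (2.0 * k2);
        if (v < 0.0 || v > 1.0) v = (-k1 + r) / (2.0 * k2);
    }
    u = (h.x - f.x * v) / (e.x + g.x * v);
    return true;
}
```

Scaling the recovered (u, v) by the screen resolution gives the cursor position.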
Are there any built-in functions in OpenCV that would let me do this? I already did some research and found that inverse bilinear interpolation would allow me to do this. However, I can't seem to find this functionality in OpenCV for Point2f types.

*napoleon, Sat, 15 Jun 2013 06:08:02 -0500*
http://answers.opencv.org/question/15210/

## Inverse Perspective Mapping -> When to undistort?
http://answers.opencv.org/question/15526/inverse-perspective-mapping-when-to-undistort/

BACKGROUND:
I have a camera mounted on a car, facing forward, and I want to find the road markings. Hence I'm trying to transform the image into a bird's eye view image, as viewed from a virtual camera placed 15 m in front of the camera and 20 m above the ground. I implemented a prototype that uses OpenCV's warpPerspective function. The perspective transformation matrix is obtained by defining a region of interest on the road and by calculating where the 4 corners of the ROI are projected in both the front and the bird's eye view cameras. I then use these two sets of 4 points with the getPerspectiveTransform function to compute the matrix. This successfully transforms the image into a top view.
QUESTION:
When should I undistort the front-facing camera image? Should I first undistort and then do this transform, or should I first transform and then undistort?
If you are suggesting the first case, then what camera matrix should I use to project the points onto the bird's eye view camera? Currently I use the same raw camera matrix for both projections.
Please ask for more details if my description is confusing!

*Ashok Elluswamy, Thu, 20 Jun 2013 19:33:57 -0500*
http://answers.opencv.org/question/15526/

## Compute Affine transform like with Active Appearance Models
http://answers.opencv.org/question/13951/compute-affine-transform-like-with-active-appearance-models/

Hello, I am looking into computing the affine transformation of one image onto another image, as is done using the Lucas-Kanade algorithm or the inverse compositional algorithm in Active Appearance Models. The difference being, of course, that I am not interested in warping many points in order to match faces, but in matching just one template image with another image with an affine transform in real time (the template and image are really small). Is there any built-in functionality in OpenCV I can use to do this?
Thanks

*napoleon, Sun, 26 May 2013 04:36:42 -0500*
http://answers.opencv.org/question/13951/

## How to do inverse on complex matrix in OpenCV?
http://answers.opencv.org/question/10328/how-to-do-inverse-on-complex-matrix-in-opencv/

I have trouble doing the inverse of a complex matrix. As far as I know, a complex matrix is simply a two-channel matrix (CV_32FC2 / CV_64FC2).
(Written in C++)
Let's say I have a matrix C:

```cpp
Mat C(2, 2, CV_64FC2);
C.at<Vec2d>(0,0)[0] = 1;
C.at<Vec2d>(0,0)[1] = 1;
C.at<Vec2d>(0,1)[0] = 3;
C.at<Vec2d>(0,1)[1] = 4;
C.at<Vec2d>(1,0)[0] = 2;
C.at<Vec2d>(1,0)[1] = -1;
C.at<Vec2d>(1,1)[0] = 5;
C.at<Vec2d>(1,1)[1] = 2;

Mat InverseMat;
invert(C, InverseMat, DECOMP_SVD);
```
After I perform the invert function, I keep getting this error:

```
OpenCV Error: Assertion failed (type == CV_32F || type == CV_64F) in invert
```
The invert function works well with a grayscale loaded image (1 channel), but I have a hard time doing the inverse of a complex matrix which contains a real and an imaginary part.
Can someone please tell me how to solve the inverse problem of a complex matrix? Preferably using the DECOMP_SVD method, as I couldn't get the desired result using the DECOMP_LU or DECOMP_CHOLESKY methods when I tried with a single-channel image, probably because the matrix is singular. Thanks.
From the solution I received, it's something like this:
```cpp
void invComplex(const cv::Mat& m, cv::Mat& inverse)
{
    // Embed the complex matrix M = A + iB into the real block matrix [A B; -B A]
    cv::Mat twiceM = cv::Mat(m.rows * 2, m.cols * 2, CV_64FC1);
    std::vector<cv::Mat> comp;
    cv::split(m, comp);
    cv::Mat real = comp[0];
    cv::Mat imag = comp[1];
    for (int i = 0; i < m.rows; i++)
    {
        for (int j = 0; j < m.cols; j++)
        {
            twiceM.at<double>(i, j) = real.at<double>(i, j);
            twiceM.at<double>(i, j + m.cols) = imag.at<double>(i, j);
            twiceM.at<double>(i + m.rows, j) = -imag.at<double>(i, j);
            twiceM.at<double>(i + m.rows, j + m.cols) = real.at<double>(i, j);
        }
    }
    cv::Mat twiceInv;
    cv::invert(twiceM, twiceInv);
    // Read the complex inverse back out of the top block row of the embedding
    inverse = cv::Mat(m.rows, m.cols, m.type());
    for (int i = 0; i < inverse.rows; i++)
    {
        for (int j = 0; j < inverse.cols; j++)
        {
            double re = twiceInv.at<double>(i, j);
            double im = twiceInv.at<double>(i, j + inverse.cols);
            cv::Vec2d val(re, im);
            inverse.at<cv::Vec2d>(i, j) = val;
        }
    }
}
```

*New user, Fri, 29 Mar 2013 01:51:18 -0500*
http://answers.opencv.org/question/10328/
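To close the loop on why the embedding works, an editor's addition: M = A + iB is invertible exactly when the real block matrix [A B; -B A] is, and the first block row of that matrix's inverse contains the real and imaginary parts of M⁻¹. For a 2x2 complex matrix, the same inverse can also be checked in closed form with std::complex:

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <complex>

using C = std::complex<double>;
using CMat2 = std::array<std::array<C, 2>, 2>;

// Closed-form inverse of a 2x2 complex matrix -- the same result the
// 2Nx2N real embedding in invComplex() produces.
CMat2 inv2(const CMat2& M) {
    C det = M[0][0] * M[1][1] - M[0][1] * M[1][0];
    return {{{ M[1][1] / det, -M[0][1] / det},
             {-M[1][0] / det,  M[0][0] / det}}};
}
```

Multiplying the question's matrix C, i.e. [[1+i, 3+4i], [2-i, 5+2i]], by this inverse gives the identity to machine precision, which is a quick sanity check for the invComplex() routine above.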