OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright <a href="http://www.opencv.org">OpenCV foundation</a>, 2012-2018.

Accuracy of OpenCV's DFT (C++ implementation)
http://answers.opencv.org/question/182729/accuracy-of-opencvs-dft-c-implementation/

I'm coding with C++ and OpenCV, comparing the outputs (complex numbers) of MATLAB's FFT function and OpenCV's DFT function. I am able to get identical results for normal images such as the one below (Fig. 1). However, after some processing to get the image in Fig. 2, the values differ by orders of magnitude, and I think OpenCV's output is wrong.
Is this expected of OpenCV's DFT function? I've tried all sorts of things, like adding the DFT_SCALE flag as well. Any suggestions, please? Thanks!
![Fig.1](/upfiles/15162731822390162.jpg)
![Fig.2](/upfiles/15162732072692394.jpg)
I've also included my MATLAB code. Fig. 1 is the variable assigned "S" and Fig. 2 is the variable "Normin2" (the output of imshow(Normin2)). Normin2 has a maximum value of 1.359 and a minimum value of -1.2667.
![MATLAB code](/upfiles/15162755494912427.jpg)

Colin Peeris, Thu, 18 Jan 2018 05:08:08 -0600
http://answers.opencv.org/question/182729/

complex conjugate
http://answers.opencv.org/question/98003/complex-conjugate/

How do I get the complex conjugate?
Is the result of cvDFT complex?
```cpp
IplImage *src_img = cvLoadImage("for_fft.jpg", 0);
IplImage *dst_inverse = cvCreateImage(cvGetSize(src_img), IPL_DEPTH_8U, src_img->nChannels);
IplImage *dst_freq = cvCreateImage(cvGetSize(src_img), IPL_DEPTH_8U, src_img->nChannels);
IplImage *dst_swap = cvCreateImage(cvGetSize(src_img), IPL_DEPTH_8U, src_img->nChannels);

// spatial: input, freq: frequency domain.
// Note: the matrices should be height x width, not height x widthStep
// (widthStep includes row padding); the loops below assume no padding,
// i.e. imageSize == width * height.
CvMat *spatial = cvCreateMat(src_img->height, src_img->width, CV_64FC2);
CvMat *freq = cvCreateMat(src_img->height, src_img->width, CV_64FC2);

// Fill the real channel from the image; zero the imaginary channel.
int i;
for (i = 0; i < src_img->imageSize; i++) {
    spatial->data.db[i * 2] = (double)(unsigned char)src_img->imageData[i];
    spatial->data.db[i * 2 + 1] = 0.0;  // imaginary part (the original line assigned nothing)
}
cvDFT(spatial, freq, CV_DXT_FORWARD);  // forward DFT

// Print the log-scaled magnitude, normalized to [0, 255].
// SQUARE and FreqShift are the poster's own helpers.
double tmp = 0;
double max_f = -DBL_MAX;  // INT_MIN/INT_MAX are wrong bounds for doubles
double min_f = DBL_MAX;
for (i = 0; i < src_img->imageSize; i++) {
    tmp = log10(1 + sqrt(SQUARE(freq->data.db[i * 2]) + SQUARE(freq->data.db[i * 2 + 1])));
    if (tmp < min_f) min_f = tmp;
    if (tmp > max_f) max_f = tmp;
}
for (i = 0; i < src_img->imageSize; i++) {
    tmp = log10(1 + sqrt(SQUARE(freq->data.db[i * 2]) + SQUARE(freq->data.db[i * 2 + 1])));
    dst_freq->imageData[i] = (unsigned char)(255.0 * (tmp - min_f) / (max_f - min_f));
}
FreqShift(dst_freq, dst_swap);  // center the zero-frequency component

// Inverse DFT with 1/N scaling; take the real channel back.
cvDFT(freq, spatial, CV_DXT_INVERSE_SCALE);
for (i = 0; i < src_img->imageSize; i++) {
    dst_inverse->imageData[i] = (char)spatial->data.db[i * 2];
}
```
This is what I did for a 2D image DFT. I want to get the complex conjugate; what should I do? I just started OpenCV 5 days ago. Please let me know.
HiHello, Wed, 06 Jul 2016 18:54:54 -0500
http://answers.opencv.org/question/98003/

Replace dft in seamless cloning with directx fft
http://answers.opencv.org/question/90945/replace-dft-in-seamless-cloning-with-directx-fft/

Is there an easy way to replace the calculation of the Fourier transform in https://github.com/Itseez/opencv/blob/master/modules/photo/src/seamless_cloning_impl.cpp#L128 with the DirectX FFT implementation (https://msdn.microsoft.com/en-us/library/windows/desktop/ff476277%28v=vs.85%29.aspx)? I'm writing a Windows Store app, so I cannot use CUDA or OpenCL, but I can use DirectX for hardware acceleration. Are there faster ways to solve the Poisson equation without using an FFT? Any suggestions are welcome.

ebroglio, Thu, 24 Mar 2016 16:24:46 -0500
http://answers.opencv.org/question/90945/

Meaning of cv::idft() with DFT_REAL_OUTPUT option
http://answers.opencv.org/question/80387/meaning-of-cvidft-with-dft_real_output-option/

I would like to ask about cv::idft() with DFT_REAL_OUTPUT for a non-symmetric matrix. Let me explain in detail.
If an input matrix in the time domain has only a real component, the DFT of that matrix has a conjugate-symmetric structure.
For example,
```cpp
const int width = 4;
const int height = 4;
// Note: cv::Mat_ takes (rows, cols), i.e. (height, width); they are equal here.
cv::Mat in1ch = (cv::Mat_<double>(height, width)
    << 1.0, 0.2, 0.4, 0.5,
       0.4, 0.3, 0.2, 0.1,
       0.5, 0.7, 0.3, 0.8,
       0.2, 0.9, 0.8, 0.1);

// dfted: DFT result of in1ch
// [ 7.4     0.4-0.6i  0.2      0.4+0.6i;
//  -0.2+i   1-0.6i    1.4-0.2i -0.2-i;
//   1.4     1.2+1.4i -0.2      1.2-1.4i;
//  -0.2-i  -0.2+i     1.4+0.2i 1+0.6i ]
cv::Mat dfted;
cv::dft(in1ch, dfted, cv::DFT_COMPLEX_OUTPUT);
```
And the IDFT of such a conjugate-symmetric matrix has only a real component.
```cpp
// idfted: IDFT result of dfted; it is the same as in1ch
cv::Mat idfted;
cv::idft(dfted, idfted, cv::DFT_REAL_OUTPUT | cv::DFT_SCALE);
```
Conversely, the full complex IDFT of a non-symmetric matrix has both real and imaginary components. However, if I give a non-symmetric matrix to cv::idft() with DFT_REAL_OUTPUT, I still get a matrix with only a real component.
```cpp
// notSym: non-symmetric matrix, almost the same as dfted
// [ 7.4     1.3+0.2i  0.2      0.4+0.6i;
//  -0.2+i   1-0.6i    1.4-0.2i -0.2-i;
//   1.4     1.2+1.4i -0.2      0.4+i;
//  -0.2-i  -0.2+i     1.4+0.2i 1+0.6i ]
cv::Mat notSym = dfted.clone();
// dfted is CV_64FC2 (double input), so the element type is Vec2d, not Vec2f.
notSym.at<cv::Vec2d>(0, 1) = cv::Vec2d(1.3, 0.2);
notSym.at<cv::Vec2d>(2, 3) = cv::Vec2d(0.4, 1.0);

// idfted2:
// [1.1125 0.1 0.2875 0.6;
//  0.5125 0.2 0.0875 0.2;
//  0.6125 0.6 0.1875 0.9;
//  0.3125 0.8 0.6875 0.2]
cv::Mat idfted2;
cv::idft(notSym, idfted2, cv::DFT_REAL_OUTPUT | cv::DFT_SCALE);
```
Here are the questions:
- Q1. Is applying cv::idft() to a non-symmetric matrix a meaningful procedure? If so, what is the meaning of the result?
- Q2. How is cv::idft() implemented when DFT_REAL_OUTPUT is set? Is there a reference?
polycarbonate, Wed, 23 Dec 2015 04:38:03 -0600
http://answers.opencv.org/question/80387/

Would subtracting the phases of two images be a superior difference metric than subtracting the images directly?
http://answers.opencv.org/question/73745/would-subtracting-the-phases-of-two-images-be-a-superior-difference-metric-than-subtracting-the-images-directly/

I'm hoping someone can sanity-check this idea, as I am admittedly a bit of a noob when it comes to working with FFTs.
Say I have two (registered) images of the same object and want to use one as a baseline to check for differences in quality control (looking for scratches and the like). My initial naive approach is to subtract the two images directly and treat whatever remains as defects. However, this method is subject to error in the presence of illumination differences.
I'm thinking I would be better served by taking the FFT of each image, subtracting only the phase information, and using the IFFT of that result as the defect map; this should hopefully eliminate false positives due to lighting.
Does this seem like a reasonable assumption, or is there some detail I'm overlooking? Thanks for any advice you can offer!

Brandon212, Tue, 20 Oct 2015 11:10:46 -0500
http://answers.opencv.org/question/73745/