OpenCV Q&A Forum
Copyright OpenCV foundation, 2012-2018.

Phase correlation for template matching
http://answers.opencv.org/question/201054/phase-correlation-for-template-matching/

I am trying to implement template matching using phase correlation. I have already done it in the spatial domain ([you can see here][1]).
My template image is `imagePart` and the image in which I am finding it is `imageBig`.
Now I am trying it with the DFT to speed it up, following these steps as suggested by Cris Luengo:
1. Pad the template (floating image) to the size of the other image (with zeros).
2. Compute the FFT of both.
3. Flip the sign of the imaginary component of one of the results (complex conjugate).
4. Multiply the two.
5. Compute the IFFT of the result.
6. Find the location of the pixel with the largest value.
My code is:
    int r_big, c_big, r_part, c_part;
    Mat imagePart_pad, imageBig_padded, imagePart_padded, mul_output;
    int m, n;
    Mat complexI_big, complexI_part;

    void corr_frq()
    {
        r_big  = imageBig.rows;
        c_big  = imageBig.cols;
        r_part = imagePart.rows;
        c_part = imagePart.cols;

        // Step 1: pad the template to the size of the big image.
        copyMakeBorder(imagePart, imagePart_pad, 0, (r_big - r_part), 0, (c_big - c_part), BORDER_CONSTANT, Scalar(0));

        // Pad both to the optimal DFT size.
        m = getOptimalDFTSize(imageBig.rows);
        n = getOptimalDFTSize(imageBig.cols);
        copyMakeBorder(imageBig, imageBig_padded, 0, m - imageBig.rows, 0, n - imageBig.cols, BORDER_CONSTANT, Scalar::all(0));
        copyMakeBorder(imagePart_pad, imagePart_padded, 0, m - imageBig.rows, 0, n - imageBig.cols, BORDER_CONSTANT, Scalar::all(0));

        // Step 2: compute the DFT of both (as two-channel complex images).
        Mat planes[]  = {Mat_<float>(imageBig_padded), Mat::zeros(imageBig_padded.size(), CV_32F)};
        Mat planes2[] = {Mat_<float>(imagePart_padded), Mat::zeros(imagePart_padded.size(), CV_32F)};
        merge(planes, 2, complexI_big);
        merge(planes2, 2, complexI_part);
        dft(complexI_big, complexI_big);
        dft(complexI_part, complexI_part);

        // Steps 3-4: multiply one spectrum by the conjugate of the other (conjB = true).
        mulSpectrums(complexI_big, complexI_part, mul_output, 0, true);

        // Step 5: compute the inverse DFT.
        cv::Mat inverseTransform;
        cv::dft(mul_output, inverseTransform, cv::DFT_INVERSE | cv::DFT_REAL_OUTPUT);

        normalize(inverseTransform, inverseTransform, 0, 1, CV_MINMAX);
        imshow("Reconstructed", inverseTransform);
        waitKey(0);
        imshow("image part pad", imagePart_pad);
        // waitKey(0);
    }
After performing the above operations I get this: [![output][2]][2]

Shouldn't the output be maximal at the location of `imagePart` in the image (where the tick sign is)? Am I doing something wrong?
[1]: https://stackoverflow.com/questions/52753902/tackle-low-fps-for-correlation-code-to-compute-shift-in-image
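For reference, the six steps above can be sketched in a few lines of NumPy (no OpenCV; the array sizes, seed, and template location are arbitrary made-up values). Note that this is plain cross-correlation in the frequency domain; true *phase* correlation additionally normalizes the cross-power spectrum by its magnitude before the inverse transform:

```python
import numpy as np

# Sketch of frequency-domain template matching (steps 1-6 above),
# using NumPy only. Sizes and random content are arbitrary.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
top, left = 20, 30
template = image[top:top + 32, left:left + 32]  # 32x32 patch to find

# Step 1: zero-pad the template to the size of the big image.
padded = np.zeros_like(image)
padded[:32, :32] = template

# Steps 2-5: FFT both, multiply one by the conjugate of the other
# (sign-flipped imaginary part), then inverse FFT.
F = np.fft.fft2(image)
G = np.fft.fft2(padded)
corr = np.fft.ifft2(F * np.conj(G)).real

# Step 6: the peak marks the template's top-left corner.
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)  # (20, 30)
```

This mirrors what `mulSpectrums(..., conjB = true)` followed by an inverse `dft` computes, up to scaling.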
[2]: https://i.stack.imgur.com/mjZyR.jpg

Anurag, Sat, 13 Oct 2018 01:50:54 -0500
http://answers.opencv.org/question/201054/

Stitching images with phase correlation: how to get overlap area position?
http://answers.opencv.org/question/189755/stitching-images-with-phase-correlation-how-to-get-overlap-area-position/

I'm trying to stitch images that are screenshots of a long webpage scrolled from top to bottom (they also contain the browser title bar at the top and the Windows taskbar at the bottom). Since they are flat and their overlap areas are totally identical, according to this question (http://answers.opencv.org/question/96464/simple-image-stitching-c/), phase correlation is a suitable method. I tried the `phaseCorrelate` method (https://docs.opencv.org/2.4/modules/imgproc/doc/motion_analysis_and_object_tracking.html#phasecorrelate) and it works fine. But to stitch the images, I also need to know the initial or final positions of the overlap areas. How can I accomplish that? I'm new to OpenCV, so maybe my question is silly. Thanks for your help.
More detail:

Every two images consist of four parts, and the height of each part is unknown. Part C of img1 is identical to part C of img2, and the position of part C is what I want to find.
img1:

    AAAAA
    BBBBB
    BBBBB
    BBBBB
    BBBBB
    CCCCC
    CCCCC
    DDDDD

img2:

    AAAAA
    CCCCC
    CCCCC
    EEEEE
    EEEEE
    EEEEE
    EEEEE
    DDDDD
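A minimal sketch of how the shift found by phase correlation maps to the overlap position, using synthetic NumPy data (the page and screenshot sizes are made up, and the two "screenshots" here are exact crops of one long page):

```python
import numpy as np

# Synthetic data: img1 and img2 are consecutive "screenshots" of one
# long page, with img2 starting 20 rows below img1 (overlap of 40 rows).
rng = np.random.default_rng(1)
page = rng.random((80, 32))
img1, img2 = page[0:60], page[20:80]

# Phase correlation: normalized cross-power spectrum, inverse FFT, peak.
F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
R = F1 * np.conj(F2)
R /= np.abs(R) + 1e-12
dy, dx = np.unravel_index(np.argmax(np.fft.ifft2(R).real), img1.shape)

# img2's content starts dy rows below the top of img1, so the overlap
# area is img1[dy:] == img2[:img1.shape[0] - dy].
print(dy, dx)  # 20 0
assert np.allclose(img1[dy:], img2[:img1.shape[0] - dy])
```

`cv::phaseCorrelate(img1, img2)` returns the equivalent (sub-pixel) `(x, y)` shift directly; the sign convention is worth checking once on a known pair, but the overlap position then follows from the shift in the same way.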
L.Rosen, Thu, 19 Apr 2018 09:31:50 -0500
http://answers.opencv.org/question/189755/