OpenCV Q&A Forum (http://answers.opencv.org/questions/), © OpenCV foundation, 2012-2018

---

**Shape Matching using Discrete Fourier Transform** (http://answers.opencv.org/question/177945/shape-matching-using-discrete-fourier-transform/)

This is the very first time I am messing around with DFTs, so apologies in advance if the math just flies over my head or my question sounds rather stupid.
**Disclaimer:** I have not used the [OpenCV API][1] for any of the DFT-related steps stated below.
---
What I have done so far
Let *T* represent the template image and let *S* be an image I get from the camera feed.
1. Grabbed the contours from *T*
2. Applied DFT on them
3. Made the attained DFTs scale invariant by dividing them with their magnitude i.e. `F[i] = F[i]/|F[i]|`
4. Computed their magnitude spectrum
5. Log-polar transformed the magnitudes and calculated the phases, i.e. the angles
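Steps 2 and 3 can be sketched without any library support. The following is a minimal pure-Python illustration (my own sketch, not the OpenCV API; `dft` and `normalize` are hypothetical helper names), treating each contour point (x, y) as the complex number x + iy:

```python
import cmath

def dft(points):
    """Step 2: plain DFT of a contour given as complex numbers x + iy."""
    n = len(points)
    return [sum(z * cmath.exp(-2j * cmath.pi * k * m / n)
                for m, z in enumerate(points)) / n
            for k in range(n)]

def normalize(coeffs, eps=1e-12):
    """Step 3: divide every coefficient by its magnitude, F[i] = F[i]/|F[i]|."""
    return [c / abs(c) if abs(c) > eps else 0j for c in coeffs]

# a small square contour as complex points
square = [0 + 0j, 1 + 0j, 1 + 1j, 0 + 1j]
descriptors = normalize(dft(square))
```

After this normalization every non-zero coefficient has unit magnitude, which is exactly why the step removes scale.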
---
My questions are:
1. Just to confirm: after performing Step 2, my attained descriptors are translation invariant, correct?
2. How do I make them rotation invariant?
3. Now this is where I get really confused. After reading a couple of papers on image registration, it seems Steps 4 and 5 are needed in order to compute the phase correlation and then maximize that value. For my application, is this really necessary? Or should my final step just be to make the descriptors rotation invariant, then compute the Euclidean distance to the descriptors of *S* and minimize that value?
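On question 2, one common route (sketched here in plain Python as my own illustration, not necessarily what the registration papers do) rests on two facts: rotating the shape by an angle θ multiplies every Fourier coefficient by e^(iθ), and shifting the contour's starting point multiplies F[k] by another unit-magnitude phase, so keeping only the magnitudes |F[k]| yields rotation-invariant descriptors. Dropping F[0] handles translation and dividing by |F[1]| handles scale, after which a plain Euclidean distance can be minimized:

```python
import cmath, math

def fourier_descriptors(points):
    """Translation-, scale- and rotation-invariant descriptors
    (a common textbook normalization, not necessarily the papers' method)."""
    n = len(points)
    F = [sum(z * cmath.exp(-2j * cmath.pi * k * m / n)
             for m, z in enumerate(points)) / n
         for k in range(n)]
    mags = [abs(c) for c in F[1:]]      # drop F[0]: translation invariance
    scale = mags[0] if mags[0] > 1e-12 else 1.0
    return [m / scale for m in mags]    # divide by |F[1]|: scale invariance

def descriptor_distance(d1, d2):
    """Plain Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))
```

Rotating, translating, or uniformly scaling the input contour leaves these descriptors unchanged, so the distance between a shape and its transformed copy is (numerically) zero.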
---
Any help/suggestions are sincerely appreciated!
[1]: https://docs.opencv.org/3.2.0/d2/de8/group__core__array.html#gadd6cf9baf2b8b704a11b5f04aaf4f39d

*Asked by eshirima, Wed, 08 Nov 2017 17:22:29 -0600*

---

**Calculate the distortion of two nearly identical contours with `matchShapes()` and rotation of one contour** (http://answers.opencv.org/question/111814/calculate-the-distortion-of-two-nearly-identical-contours-with-matchshapes-and-rotation-of-one-contour/)

I have two files:
1. A .bmp file which shows a contour of an object created in AutoCAD
2. An image of the object taken by a camera

From both files I extracted the contour of the object with `findContours()`, so I got two contours:

1. The contour from the .bmp file
2. The contour from the camera image of the object
My problem is that the object can be distorted in comparison to the contour from the .bmp file. My task is to get the angle between the two contours.

I tried to show my problem with [this picture][1]: the lightning shape in the image is distorted in comparison to the contour from the .bmp file.

My first approach was a bounding rectangle around both contours. This worked well, but in some cases the bounding rectangle isn't unique.

In my second approach I rotate one contour by a specific delta angle and then read the value from `matchShapes()`. I expected to get the smallest value when I have rotated by exactly the angle between the contours, but this didn't work. Here is my code for the rotation:
    int index_contour = 0;
    float aret_contour_rot = 10.0f;   // best (smallest) matchShapes value so far
    double abest_angle = 0.0;         // angle that produced it
    // working copy of the contour that gets rotated
    vector<vector<Point> > mcontours_bmp_rotated = mcontours_bmp;

    // center of gravity of the template contour (constant, so compute it once)
    Moments amoments = moments( mcontours_bmp[ index_contour ], false );
    Point2f aSWP( amoments.m10 / amoments.m00, amoments.m01 / amoments.m00 );

    for( double aDelAngle = 0; aDelAngle <= 2 * CV_PI; aDelAngle += 0.01 )
    {
        for( size_t i = 0; i < mcontours_bmp[ index_contour ].size(); i++ )
        {
            // contour point
            Point2f aCP( mcontours_bmp[ index_contour ][ i ].x,
                         mcontours_bmp[ index_contour ][ i ].y );
            // vector between contour point and center of gravity
            Point2f atranspt( aCP.x - aSWP.x, aCP.y - aSWP.y );
            // rotated contour point
            Point2f aCP_rot( cos( aDelAngle ) * atranspt.x - sin( aDelAngle ) * atranspt.y + aSWP.x,
                             sin( aDelAngle ) * atranspt.x + cos( aDelAngle ) * atranspt.y + aSWP.y );
            // copy the point into the rotated contour
            mcontours_bmp_rotated[ index_contour ][ i ] = Point( cvRound( aCP_rot.x ),
                                                                 cvRound( aCP_rot.y ) );
        }
        // method 1 = CV_CONTOURS_MATCH_I1; the last parameter is unused
        float aret_contour_rot_act = matchShapes( mcontours_bmp_rotated[ index_contour ],
                                                  mcontours_image[ index_contour ], 1, 0.0 );
        // keep the smallest matchShapes value and the corresponding angle
        if( aret_contour_rot_act < aret_contour_rot )
        {
            aret_contour_rot = aret_contour_rot_act;
            abest_angle = aDelAngle;
        }
        cout << "rot ret: " << aret_contour_rot_act << endl;
        cout << "rot deg: " << aDelAngle * 180.0 / CV_PI << endl;
    }
    cout << "rot ret end: " << aret_contour_rot << endl;
    cout << "rot deg end: " << abest_angle * 180.0 / CV_PI << endl;
[1]: https://i.stack.imgur.com/O7EAk.jpg

*Asked by sjanko, Thu, 10 Nov 2016 08:36:15 -0600*

---

**Matching shapes (especially characters) based on their histograms** (http://answers.opencv.org/question/61049/matching-shapes-especially-characters-based-on-their-histograms/)

I would like to find a similarity measure between two images showing characters, taking into account their histograms. An example image may contain just an individual letter, or it could contain several characters and look like this:
![image description](/upfiles/14305586551268872.jpg).
Can anyone please guide me through how to use the `calcHist` function so that its output can be fed to the `compareHist` function to obtain a similarity measure between such images? The important point is that the characters may not look the same and may be a little bit distorted while still remaining similar.
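For the idea behind `calcHist` followed by `compareHist` with the correlation method, here is a plain-Python sketch; the binning and the correlation formula below are my own minimal stand-ins for what the OpenCV functions compute on an 8-bit grayscale image:

```python
def histogram(pixels, bins=16):
    """Normalized intensity histogram of 8-bit grayscale pixel values (0..255)."""
    h = [0] * bins
    for p in pixels:
        h[min(p * bins // 256, bins - 1)] += 1
    total = float(len(pixels))
    return [c / total for c in h]

def correlation(h1, h2):
    """Pearson correlation of two histograms (the HISTCMP_CORREL idea)."""
    n = len(h1)
    m1, m2 = sum(h1) / n, sum(h2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(h1, h2))
    den = (sum((a - m1) ** 2 for a in h1) *
           sum((b - m2) ** 2 for b in h2)) ** 0.5
    return num / den if den > 1e-12 else 1.0
```

A value near 1 means similar intensity distributions. Note that histograms discard spatial layout entirely, so distorted but similar characters can still score high, at the cost of also ignoring orientation.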
I tried other approaches such as `matchShapes`, but because they are quite often rotation invariant, they sometimes do not provide satisfactory results; the letter p then looks similar to the letter d.

*Asked by cza-57, Sat, 02 May 2015 04:33:26 -0500*