Hey algorithm enthusiasts,
Here is an implementation I am trying to get working for shape matching that, in theory, should be invariant to rotation, translation and scaling.
Reference Links
- http://www.isy.liu.se/cvl/edu/TSBB08/lectures/DBgrkX1.pdf
- http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/MORSE/boundary-rep-desc.pdf
- http://users.monash.edu.au/~dengs/resource/papers/pcm01.pdf
- http://users.monash.edu.au/~dengs/resource/papers/vcir.pdf
- https://homepages.cae.wisc.edu/~ece533/project/f06/karrels_ppt.pdf
- http://www.codeproject.com/Articles/196168/Contour-Analysis-for-Image-Recognition-in-C (a sample application in the spatial domain)
The process I tried
Take the RGB input image and convert it to grayscale.
Gaussian-blur the image with a 3x3 kernel and threshold it.
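A minimal sketch of these first two steps, assuming OpenCV in Python (the library, the function names and the Otsu thresholding are my assumptions, not taken from your post):

```python
import cv2

def preprocess(bgr_image):
    """Grayscale conversion, 3x3 Gaussian blur and thresholding.
    Otsu's method is assumed here; a fixed threshold works the same way."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (3, 3), 0)
    _, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```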
Apply a morphological closing to close gaps in the edges, then find the external contours of the thresholded image.
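Roughly, this step could look like the sketch below (the 5x5 kernel and the OpenCV 4.x findContours return signature are assumptions):

```python
import cv2

def external_contours(binary):
    """Close small gaps in the binary mask, then keep only external contours."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))  # 5x5 is an arbitrary choice
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # OpenCV 4.x returns (contours, hierarchy); 3.x returns an extra value first.
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return contours
```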
Find the largest contour, which forms the boundary of the desired object. The returned contour points are in counter-clockwise order, so reverse them to get clockwise order (as explained in some of the papers above).
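One way to pick the largest contour and check its orientation is the shoelace (signed area) formula; this is only a sketch of the idea, not your code, and the sign convention depends on whether you treat image coordinates as y-down:

```python
import cv2
import numpy as np

def largest_contour_clockwise(contours):
    """Select the contour with the largest area and reverse it if its
    signed area says it is counter-clockwise (y-up convention assumed;
    verify the sign against your own data)."""
    largest = max(contours, key=cv2.contourArea)
    pts = largest.reshape(-1, 2).astype(np.float64)
    x, y = pts[:, 0], pts[:, 1]
    # Shoelace formula: positive signed area = counter-clockwise in a y-up frame.
    signed_area = 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)
    if signed_area > 0:
        pts = pts[::-1]
    return pts
```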
Differentially code the contour coordinates (current coordinate minus next coordinate) and store the result as a vector of new points, then split it into two planes (one for x and one for y).
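The differential coding as you describe it (current minus next, wrapping around at the end of the closed contour) might be written like this:

```python
import numpy as np

def differential_code(points):
    """First difference of the closed contour: each point minus the next one,
    with wrap-around at the end. Returns the x and y planes separately."""
    diffs = points - np.roll(points, -1, axis=0)
    return diffs[:, 0], diffs[:, 1]
```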
Take the DFT of these differentially coded contour points, compute the magnitude of the result, discard the DC coefficient and normalize the remaining Fourier coefficients by the first harmonic (for scale invariance).
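One common formulation packs the two planes into a single complex signal before the DFT; if you run two separate real DFTs instead, the same "drop DC, divide by the first harmonic" idea applies. A sketch under that complex-signal assumption:

```python
import numpy as np

def fourier_descriptor(dx, dy):
    """DFT of the differentially coded contour treated as a complex signal.
    Taking the magnitude discards phase (rotation and starting point);
    dividing by the first harmonic removes the scale factor."""
    spectrum = np.fft.fft(dx + 1j * dy)
    magnitude = np.abs(spectrum)
    return magnitude[1:] / magnitude[1]  # drop DC, normalize by the first harmonic
```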
Compute the Euclidean distance between the descriptors of the template image and the input test image.
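For the comparison itself: in most descriptions the two contours are resampled to the same number of points before the DFT so the harmonics line up element by element; if your contours have different lengths, truncating to a common length as below is only a rough workaround.

```python
import numpy as np

def descriptor_distance(desc_a, desc_b):
    """Euclidean distance between two Fourier descriptors, truncated to a
    common length. Resampling both contours to the same number of points
    beforehand is the cleaner option."""
    n = min(len(desc_a), len(desc_b))
    return float(np.linalg.norm(desc_a[:n] - desc_b[:n]))
```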
Where I am stuck
The calculated Euclidean distance comes out nearly the same for every test sample I feed in, and I am not sure where my mistake is. I could not find any sample implementation of frequency-domain shape matching.
Please help me understand what I am doing wrong.
Sample image used in recognition