maephisto's profile - activity

2020-12-01 15:33:17 -0600 received badge  Student (source)
2020-12-01 15:30:40 -0600 received badge  Notable Question (source)
2020-04-21 02:16:28 -0600 received badge  Popular Question (source)
2017-09-15 03:12:01 -0600 commented answer Extracting A4 sheet out of troubling backgrounds

Thank you for the advice, Ziri, it sounds like a good approach! Can I ask what ops/steps you applied to get to the tow

2017-09-14 02:34:42 -0600 edited question Extracting A4 sheet out of troubling backgrounds

Extracting A4 sheet out of troubling backgrounds So, my challenge is to extract an A4 paper document out of a mobile phon

2017-09-14 02:34:42 -0600 received badge  Editor (source)
2017-09-14 02:33:12 -0600 asked a question Extracting A4 sheet out of troubling backgrounds

Extracting A4 sheet out of troubling backgrounds So, my challenge is to extract an A4 paper document out of a mobile phon

2017-05-19 09:07:22 -0600 commented question Image matching - form photo vs form template

@berak That's probably true, as I said, I'm a beginner. Any tips on how to solve my problem?

2017-05-19 08:19:15 -0600 asked a question Image matching - form photo vs form template

I'm trying to detect whether certain photos represent a predefined form template, but filled in with some data. I'm new to image processing and OpenCV, but my first attempt is to use FlannBasedMatcher and compare the count of keypoints detected.

However, I'm having trouble finding a way to calculate the confidence of the match based on that. First idea: the number of keypoints len(kp1) should be roughly the same as len(kp2).

Is there a better way to calculate it? Or, is there a better way to do this matching?

import numpy as np
import cv2
from matplotlib import pyplot as plt

# Load both images in grayscale
img1 = cv2.imread('filled-form.jpg', 0)  # queryImage
img2 = cv2.imread('template.jpg', 0)     # trainImage

# Initiate SIFT detector
# (cv2.SIFT_create() on OpenCV >= 4.4; older builds with the contrib
# modules use cv2.xfeatures2d.SIFT_create())
sift = cv2.SIFT_create()

# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN parameters
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)  # or pass an empty dictionary
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)

# Need to draw only good matches, so create a mask
matchesMask = [[0, 0] for i in range(len(matches))]

# ratio test as per Lowe's paper
for i, (m, n) in enumerate(matches):
    if m.distance < 0.7 * n.distance:
        matchesMask[i] = [1, 0]

draw_params = dict(matchColor=(0, 255, 0),
                   singlePointColor=(255, 0, 0),
                   matchesMask=matchesMask,
                   flags=0)
img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, matches, None, **draw_params)
print("KP1", len(kp1), "KP2", len(kp2), "matches", len(matchesMask))
plt.imshow(img3), plt.show()
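One way to turn the matching above into a confidence score is to count how many query descriptors survive Lowe's ratio test and divide by the total number of query descriptors, instead of just comparing len(kp1) with len(kp2). The sketch below is a hypothetical helper (match_confidence is not an OpenCV API): it uses brute-force NumPy distances in place of FLANN's approximate search so it is self-contained, but the same counting works on knnMatch output.

```python
import numpy as np

def match_confidence(des1, des2, ratio=0.7):
    """Fraction of query descriptors that pass Lowe's ratio test.

    Hypothetical confidence score in [0, 1]; brute-force nearest
    neighbours stand in for FLANN's approximate search.
    """
    if des1 is None or des2 is None or len(des2) < 2:
        return 0.0
    # Pairwise Euclidean distances: one row per des1 descriptor,
    # one column per des2 descriptor.
    d = np.linalg.norm(des1[:, None, :] - des2[None, :, :], axis=2)
    d.sort(axis=1)                    # per query: nearest, second nearest, ...
    good = d[:, 0] < ratio * d[:, 1]  # Lowe's ratio test
    return float(good.mean())
```

A score near 1.0 means almost every keypoint in the photo found an unambiguous counterpart in the template; a filled-in copy of the same form should score much higher than an unrelated document, which makes a simple threshold usable.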