
What is the equivalent of Matlab's estimateGeometricTransform in OpenCV?

asked 2018-02-09 13:29:37 -0600 by Riya208

Hello,

I am trying to rewrite the Matlab example "Object Detection in a Cluttered Scene Using Point Feature Matching" in OpenCV using Python.

It would be great if somebody could explain how estimateGeometricTransform works in the Matlab code, and whether there is an equivalent OpenCV function. I have seen people say that getAffineTransform is equivalent to estimateGeometricTransform, but I am not sure.
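From what I have read so far, estimateGeometricTransform fits a 2-D transform (similarity, affine, or projective) to matched point pairs while rejecting outliers with MSAC, so the closest OpenCV analogues seem to be cv2.findHomography with cv2.RANSAC (projective) and cv2.estimateAffine2D (affine), rather than getAffineTransform. A minimal sketch of what I mean, where srcPts and dstPts are hypothetical N×2 float32 arrays of matched coordinates:

import numpy as np
import cv2

# Hypothetical matched coordinates; in practice these would come from the
# SURF matches below (scene points and template points, as Nx2 float32).
srcPts = np.float32([[10, 12], [54, 80], [200, 33], [150, 120], [90, 60]])
dstPts = np.float32([[15, 18], [60, 85], [205, 40], [156, 126], [95, 66]])

# Projective transform with RANSAC outlier rejection; mask flags the inliers.
H, mask = cv2.findHomography(srcPts, dstPts, cv2.RANSAC, 5.0)

# Affine transform with RANSAC (available in OpenCV >= 3.2).
A, inliers = cv2.estimateAffine2D(srcPts, dstPts, method=cv2.RANSAC)

Is that the right direction?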

So far the Python code is:

import numpy as np
import cv2

# Read the template

tramTemplate = cv2.imread('template.jpg')

# Show the template

cv2.imshow("Template", tramTemplate)

# Read the input Image

inputImage = cv2.imread('Main.jpg')

# Show the input Image

cv2.imshow("Main Image",inputImage)

# Create SURF object.

surf = cv2.xfeatures2d.SURF_create(20000)

# Find keypoints and descriptors directly

kp1, des1 = surf.detectAndCompute(inputImage,None)
kp2, des2 = surf.detectAndCompute(tramTemplate,None)
print("Key points of an Input Image, Descriptors of an Input Image", len(kp1), len(des1))
print("Key points of Tram Template, Descriptors of Tram Template", len(kp2), len(des2))

#Detect feature points in both images.

inputImagePoint = cv2.drawKeypoints(inputImage,kp1,None,(255,0,0),4)
tramTemplatePoint = cv2.drawKeypoints(tramTemplate,kp2,None,(255,0,0),4)
cv2.imshow("Input Image Key Point", inputImagePoint)
cv2.imshow("Tram Template Key Point", tramTemplatePoint)

# Match the features using their descriptors.

bf = cv2.BFMatcher()
matches = bf.knnMatch(des1,des2, k=2)

# Show Matched features

M = np.array(matches)

M1 = M[:, 0]
M2 = M[:, 1]

# Apply ratio test

good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good.append([m])

matchedFeatures = cv2.drawMatchesKnn(inputImage, kp1, tramTemplate, kp2, good, None, flags=2)
cv2.imshow("Matched Features", matchedFeatures)

# Part of code is missing

aff = cv2.getAffineTransform(M1, M2)
cv2.imshow("Affine Transformed Image", aff)

# Get the bounding polygon of the reference image.

fromCenter = False
rectangleBox = cv2.selectROI(tramTemplate, fromCenter)
cv2.waitKey()

In the Matlab code, I don't understand what the following lines mean. Can somebody please explain them to me? The comment says "Display putatively matched features.", but I don't get how that works.

matchedBoxPoints = boxPoints(boxPairs(:, 1), :);
matchedScenePoints = scenePoints(boxPairs(:, 2), :);

I am kind of stuck at this point. I believe that boxPoints holds the key points of the template and boxPairs holds the indices of the features matched using their descriptors, right?
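If my understanding is right, those two lines simply index the keypoint coordinates by the match pairs, i.e. they pull out the (x, y) locations of the putatively matched features. A rough Python equivalent, assuming the kp1, kp2 and good variables from my code above (queryIdx refers to the first set passed to knnMatch, trainIdx to the second):

# Coordinates of the putatively matched features, one row per match.
matchedScenePoints = np.float32([kp1[m.queryIdx].pt for [m] in good])
matchedBoxPoints = np.float32([kp2[m.trainIdx].pt for [m] in good])

These N×2 arrays are presumably what should be passed to cv2.findHomography / cv2.estimateAffine2D rather than the raw match objects.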

Also, getAffineTransform gives me an error: "src data type = 17 is not supported".
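My guess is that this happens because M1 and M2 are arrays of DMatch objects rather than point coordinates, and (as far as I can tell) getAffineTransform expects exactly three (x, y) pairs as float32 arrays, something like:

# Three hypothetical point pairs; getAffineTransform needs exactly three of each.
pts1 = np.float32([[0, 0], [100, 0], [0, 100]])
pts2 = np.float32([[10, 5], [112, 8], [6, 108]])
aff = cv2.getAffineTransform(pts1, pts2)  # 2x3 affine matrix
warped = cv2.warpAffine(inputImage, aff, (inputImage.shape[1], inputImage.shape[0]))
cv2.imshow("Affine Transformed Image", warped)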

I kind of need this for my project.

Thank you very much.


1 answer


answered 2018-11-28 17:32:27 -0600 by nikitha

You could try cv2.estimateRigidTransform(), which (with fullAffine=True) estimates a 6-DoF affine transformation.
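A minimal sketch of how that might look (srcPts, dstPts and img are hypothetical stand-ins for the matched coordinates and the image to warp; note that in OpenCV 4.x this function is deprecated in favour of cv2.estimateAffine2D / cv2.estimateAffinePartial2D):

import numpy as np
import cv2

# Hypothetical matched point sets (Nx2 float32) and an image to warp.
srcPts = np.float32([[10, 12], [54, 80], [200, 33], [150, 120]])
dstPts = np.float32([[15, 18], [60, 85], [205, 40], [156, 126]])
img = np.zeros((240, 320, 3), np.uint8)

# True -> full 6-DoF affine; False -> rotation + uniform scale + translation only.
M = cv2.estimateRigidTransform(srcPts, dstPts, True)
if M is not None:  # returns None when no transform could be estimated
    warped = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))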

