
Stitch images with OpenCV 3 (+contrib) and Python 2.7

asked 2015-11-09 13:15:02 -0500

Dansag

I want to stitch two images using OpenCV 3.0 (with contrib) and Python 2.7.

I have written a program to do this, but the result is very bad. I'm using SIFT features. I have displayed the detected features and I think they are very good; the problem is the homography. The transformation applied to the images is completely wrong and I don't know why.

Here are the two images that I want to stitch:

[image 1]

[image 2]

Here are the keypoints:

[keypoints image]

And here are the results after applying the homography (and its inverse):

[result of H] [result of inverse H]

What I have tested

Here is my program; 90% of it is copied from this post. I have added two methods, explained below.


import cv2
import numpy as np

# SIFT lives in the contrib xfeatures2d module; alias it as X2D
X2D = cv2.xfeatures2d

# Load the two images
img1 = cv2.imread(PATH + "image1.jpg", -1)
img2 = cv2.imread(PATH + "image2.jpg", -1)

# Transform to RGB
img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2RGB)
img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2RGB)

# Get their dimensions
height, width = img1.shape[:2]

# Resize them (they are too big)
img1 = cv2.resize(img1, (width // 4, height // 4))
img2 = cv2.resize(img2, (width // 4, height // 4))

# Get the resized image's dimensions
height, width = img1.shape[:2]

# Initiate SIFT detector
sift = X2D.SIFT_create()

# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)

# BFMatcher with default params
bf = cv2.BFMatcher()

# Note: 'k=2' -> for each descriptor, return the two
# best matches (fewer when not available).
matches = bf.knnMatch(des1,des2, k=2)

###### Here is the filter that I have added:
# Filter the matches to remove wrong ones. This filter keeps
# only pairs of keypoints whose y-coordinate difference is
# lower than the image's height * 0.1 (wrong matches tend to
# link points with very different y-coordinates).
# The method is implemented below.
matches = filterMatches(kp1, kp2, matches, height, 0.1)

# Apply ratio test
good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good.append(m)

# Get the coordinates of the matching keypoints.
# This method is implemented below.
sP, dP = Tools.pointsFromMatches(kp1, kp2, matches)

# Get the homography and its inverse:
H, mask = cv2.findHomography(sP, dP, cv2.RANSAC)
iH = np.linalg.inv(H)

# Apply the homography to both images.
# Note: BORDER_CONSTANT is a border mode, not an interpolation
# flag, so it goes in the borderMode parameter:
alignedImg1 = cv2.warpPerspective(img1, H, (width, height),
                                  flags=cv2.INTER_LINEAR,
                                  borderMode=cv2.BORDER_CONSTANT)
alignedImg2ToImg1 = cv2.warpPerspective(img2, iH, (width, height),
                                        flags=cv2.INTER_LINEAR,
                                        borderMode=cv2.BORDER_CONSTANT)

# Show the results:
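Before warping, it can help to sanity-check H numerically: a correct homography should map the source points roughly onto the destination points. The helper below is not part of the original post, just a hypothetical sketch of how a 3x3 homography acts on 2D points via homogeneous coordinates; the example matrix is a pure translation, chosen so the expected result is obvious.

```python
import numpy as np

def apply_homography(H, pts):
    """Apply a 3x3 homography to an (N, 2) array of 2D points."""
    # Lift to homogeneous coordinates: (x, y) -> (x, y, 1)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h.dot(H.T)
    # Divide by the third coordinate to come back to Cartesian
    return mapped[:, :2] / mapped[:, 2:3]

# A pure-translation homography: shift by (+5, -3)
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
src = np.array([[0.0, 0.0], [10.0, 20.0]])
print(apply_homography(H, src))  # maps (0,0)->(5,-3) and (10,20)->(15,17)
```

With the real H from `cv2.findHomography`, feeding in `sP` and comparing the output against `dP` (e.g. with a median reprojection error) quickly shows whether the estimate is sane before any warping.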

The implementation of my methods

def filterMatches(kp1, kp2, matches, imgHeight, thresFactor = 0.4):
    """Removes the matches whose pair of keypoints (kp1, kp2)
    have a y-coordinate difference greater than imgHeight * thresFactor.

    Args:
        kp1 (array of cv2.KeyPoint): Key Points.

        kp2 (array of cv2.KeyPoint): Key Points.

        matches (array of cv2.DMatch): Matches between kp1 and kp2.

        imgHeight (Integer): height of the image that has produced kp1 or kp2.

        thresFactor (Float): Used to calculate the threshold. Threshold is
            imgHeight * thresFactor.

    Returns:
        array of cv2.DMatch: filtered matches.
    """
    threshold = imgHeight * thresFactor
    filteredMatches = []
    for m, n in matches:
        yDiff = abs(kp1[m.queryIdx].pt[1] - kp2[m.trainIdx].pt[1])
        if yDiff < threshold:
            filteredMatches.append((m, n))
    return filteredMatches
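The filter can be exercised without OpenCV by standing in lightweight namedtuples for `cv2.KeyPoint` and `cv2.DMatch`. Everything below is hypothetical test scaffolding, not part of the original program; it just demonstrates the keep/drop rule on two hand-made knn pairs, one with a small y-difference and one with a large one.

```python
from collections import namedtuple

# Minimal stand-ins for cv2.KeyPoint and cv2.DMatch (scaffolding only)
KeyPoint = namedtuple('KeyPoint', 'pt')
DMatch = namedtuple('DMatch', 'queryIdx trainIdx distance')

def filter_by_y(kp1, kp2, matches, img_height, thres_factor=0.4):
    """Keep only knn pairs whose best match links keypoints with a
    y-coordinate difference below img_height * thres_factor."""
    threshold = img_height * thres_factor
    return [pair for pair in matches
            if abs(kp1[pair[0].queryIdx].pt[1]
                   - kp2[pair[0].trainIdx].pt[1]) < threshold]

kp1 = [KeyPoint((10.0, 50.0)), KeyPoint((30.0, 200.0))]
kp2 = [KeyPoint((12.0, 55.0)), KeyPoint((31.0, 20.0))]
matches = [
    (DMatch(0, 0, 10.0), DMatch(0, 1, 40.0)),  # dy = 5   -> kept
    (DMatch(1, 1, 12.0), DMatch(1, 0, 30.0)),  # dy = 180 -> dropped
]
print(len(filter_by_y(kp1, kp2, matches, img_height=400, thres_factor=0.1)))  # 1
```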


Did you ever manage to solve this problem? I'm having a similar issue.

bprodz ( 2017-06-07 23:49:10 -0500 )

1 answer


answered 2018-06-03 18:22:24 -0500

Hi, I can't find the module X2D referenced in these lines:

# Initiate SIFT detector
sift = X2D.SIFT_create()

Can anyone help me?
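`X2D` is not a standard module name; in the question's code it is presumably just a local alias for `cv2.xfeatures2d`, the contrib module where `SIFT_create` lives in OpenCV 3. If `cv2.xfeatures2d` is missing from your build, installing the contrib package is the usual fix (a hedged suggestion, assuming a pip-based setup; self-compiled builds need `OPENCV_EXTRA_MODULES_PATH` instead):

```shell
# opencv-contrib-python bundles the xfeatures2d module that provides SIFT
pip install opencv-contrib-python
```

After that, `X2D = cv2.xfeatures2d` followed by `sift = X2D.SIFT_create()` should work as in the question.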



Seen: 4,745 times

Last updated: Nov 09 '15