ORB feature detection is being confused by repetitive elements (black bars). How do I fix this?

asked 2019-09-18 11:45:17 -0600

cccoleman

I'm trying to align scanned forms to the blank template so we can auto-score them. I'm doing the image registration (alignment) using ORB feature detection, findHomography, and warpPerspective, as described in Image Alignment (Feature Based) using OpenCV.

Unfortunately, my form has a series of black bars printed along the side. Though one might expect these to help registration, OpenCV's drawMatches shows me that the black bars are throwing off the registration:

(image: drawMatches output)

Because each black bar is identical, the matcher spends all its time matching black bars to each other and ignores the rest of the document. The resulting homography is bad, and the warped images are so distorted they resemble abstract art:
(image: badly warped output)

If I manually crop the bars out, the registration goes smoothly, but obviously I need a more robust solution going forward (a sketch of a masking variant of that workaround is below).

Any idea why ORB is ignoring everything but the black bars?
Any idea how I can force it to look for features elsewhere in the document, or how to fix this in general?
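
For reference, here's a minimal sketch of the masking variant of the manual workaround, assuming the bars run in a strip down the left edge; the 100-pixel strip width is a placeholder, not measured from my actual form:

import cv2
import numpy as np

im1Gray = cv2.imread("scanned-form.jpg", cv2.IMREAD_GRAYSCALE)

# Mask is 255 where ORB may detect keypoints and 0 over the bar strip.
# The 100-pixel strip width is a hypothetical placeholder for my layout.
mask = np.full(im1Gray.shape, 255, dtype=np.uint8)
mask[:, :100] = 0

orb = cv2.ORB_create(500)
keypoints1, descriptors1 = orb.detectAndCompute(im1Gray, mask)

Unlike cropping, masking keeps the original pixel coordinates, so the resulting homography still applies to the full page. But I'd rather not hand-tune the bar region per form.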

Code below taken from Image Alignment (Feature Based) using OpenCV.

from __future__ import print_function
import cv2
import numpy as np


MAX_FEATURES = 500
GOOD_MATCH_PERCENT = 0.15


def alignImages(im1, im2):

  # Convert images to grayscale
  im1Gray = cv2.cvtColor(im1, cv2.COLOR_BGR2GRAY)
  im2Gray = cv2.cvtColor(im2, cv2.COLOR_BGR2GRAY)

  # Detect ORB features and compute descriptors.
  orb = cv2.ORB_create(MAX_FEATURES)
  keypoints1, descriptors1 = orb.detectAndCompute(im1Gray, None)
  keypoints2, descriptors2 = orb.detectAndCompute(im2Gray, None)

  # Match features.
  matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
  matches = matcher.match(descriptors1, descriptors2, None)

  # Sort matches by ascending distance (best matches first)
  matches = sorted(matches, key=lambda x: x.distance)

  # Remove not so good matches
  numGoodMatches = int(len(matches) * GOOD_MATCH_PERCENT)
  matches = matches[:numGoodMatches]

  # Draw top matches
  imMatches = cv2.drawMatches(im1, keypoints1, im2, keypoints2, matches, None)
  cv2.imwrite("matches.jpg", imMatches)

  # Extract location of good matches
  points1 = np.zeros((len(matches), 2), dtype=np.float32)
  points2 = np.zeros((len(matches), 2), dtype=np.float32)

  for i, match in enumerate(matches):
    points1[i, :] = keypoints1[match.queryIdx].pt
    points2[i, :] = keypoints2[match.trainIdx].pt

  # Find homography
  h, mask = cv2.findHomography(points1, points2, cv2.RANSAC)

  # Use homography
  height, width, channels = im2.shape
  im1Reg = cv2.warpPerspective(im1, h, (width, height))

  return im1Reg, h


if __name__ == '__main__':

  # Read reference image
  refFilename = "form.jpg"
  print("Reading reference image : ", refFilename)
  imReference = cv2.imread(refFilename, cv2.IMREAD_COLOR)

  # Read image to be aligned
  imFilename = "scanned-form.jpg"
  print("Reading image to align : ", imFilename);  
  im = cv2.imread(imFilename, cv2.IMREAD_COLOR)

  print("Aligning images ...")
  # Registered image will be stored in imReg.
  # The estimated homography will be stored in h. 
  imReg, h = alignImages(im, imReference)

  # Write aligned image to disk. 
  outFilename = "aligned.jpg"
  print("Saving aligned image : ", outFilename); 
  cv2.imwrite(outFilename, imReg)

  # Print estimated homography
  print("Estimated homography : \n",  h)

Comments

Try the Lowe ratio test like here or here.

Eduardo ( 2019-09-19 08:53:50 -0600 )
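
A minimal sketch of that ratio test, assuming ORB descriptors computed as in the question; the 0.75 threshold is Lowe's conventional value, not tuned for this form:

import cv2

im1Gray = cv2.imread("scanned-form.jpg", cv2.IMREAD_GRAYSCALE)
im2Gray = cv2.imread("form.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(500)
keypoints1, descriptors1 = orb.detectAndCompute(im1Gray, None)
keypoints2, descriptors2 = orb.detectAndCompute(im2Gray, None)

# Ask for the two nearest neighbours of each descriptor, then keep a
# match only if it is clearly better than the runner-up. Matches on the
# identical black bars have near-equal best and second-best distances,
# so the ratio test discards them.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
knnMatches = matcher.knnMatch(descriptors1, descriptors2, k=2)

goodMatches = []
for pair in knnMatches:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        goodMatches.append(pair[0])

goodMatches would then replace matches in the homography step of the question's code.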