
How to exclude outliers from detected ORB features?

asked 2019-11-18 12:55:25 -0500

postlude

updated 2019-11-19 03:01:39 -0500

I am using the approach shown below (see bottom of post) to detect an object within a video stream.

As can be seen from the image below (red arrows), I get a number of false points / outliers outside the detected area, especially if I move the detected object. What I would like to do is draw a rectangle around the main cluster of points as returned by cv2.perspectiveTransform(), excluding the outlying points. What is the best way to achieve this?

UPDATE: I have updated the image below to hopefully show more clearly what I'm trying to achieve

[image]

# 2017.11.26 23:27:12 CST

## Find object by orb features matching

import numpy as np
import cv2
imgname = "box.png"          # query image (small object)
imgname2 = "box_in_scene.png" # train image (large scene)


## Create ORB object and BF object (using HAMMING)
orb = cv2.ORB_create()
img1 = cv2.imread(imgname)
img2 = cv2.imread(imgname2)

gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)

## Find the keypoints and descriptors with ORB
kpts1, descs1 = orb.detectAndCompute(gray1,None)
kpts2, descs2 = orb.detectAndCompute(gray2,None)

## match descriptors and sort them in the order of their distance
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(descs1, descs2)
dmatches = sorted(matches, key = lambda x:x.distance)

## extract the matched keypoints
src_pts  = np.float32([kpts1[m.queryIdx].pt for m in dmatches]).reshape(-1,1,2)
dst_pts  = np.float32([kpts2[m.trainIdx].pt for m in dmatches]).reshape(-1,1,2)

## find homography matrix and do perspective transform
M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC,5.0)
h,w = img1.shape[:2]
pts = np.float32([ [0,0],[0,h-1],[w-1,h-1],[w-1,0] ]).reshape(-1,1,2)
dst = cv2.perspectiveTransform(pts,M)

## draw found regions
img2 = cv2.polylines(img2, [np.int32(dst)], True, (0,0,255), 1, cv2.LINE_AA)
cv2.imshow("found", img2)

## draw match lines
res = cv2.drawMatches(img1, kpts1, img2, kpts2, dmatches[:20],None,flags=2)

cv2.imshow("orb_match", res)
cv2.waitKey(0)
cv2.destroyAllWindows()



I don't see anything wrong. I used the same ORB approach. Can you post the original image?

supra56 ( 2019-11-18 14:52:20 -0500 )

"to detect an object"

mind you, this will only work IF your object is actually in the scene. you cannot detect absence of it like that.

(this is NOT "object-detection")

berak ( 2019-11-19 01:26:14 -0500 )

@supra56 the code is working as expected, I simply want to add an additional step to remove points from the matches. I've updated the image to hopefully explain this better.

postlude ( 2019-11-19 03:04:09 -0500 )

@postlude. I tested it and it is now excluding the points you mentioned. But I need the 2 original images so I can test with your code before I post an answer.

supra56 ( 2019-11-19 06:15:17 -0500 )

Thanks for the images. I will use them for cv2.perspectiveTransform, etc.

supra56 ( 2019-11-19 08:58:53 -0500 )

2 answers


answered 2019-11-19 01:22:46 -0500

berak

You can use the "ratio test".

From the tutorial:

# Apply ratio test (requires matches = bf.knnMatch(descs1, descs2, k=2)
# with crossCheck=False, instead of bf.match)
good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good.append(m)

# now use the 'good' matches ...


Thanks, but isn't this the purpose of dmatches = sorted(matches, key=lambda x: x.distance)? I'm not looking for the closest feature matches to the original image, I'm looking for the n closest points in dst to the cluster centre.

postlude ( 2019-11-19 02:42:24 -0500 )

that only sorts by distance, but does not exclude outliers at all (you'd have to manually chop off items at one end)

have another look at the ratio test: it compares the distance of the best match to that of the second-best one, which is not what your sorting does, right?

"I'm looking for the n closest points in dst to the cluster centre."

hehe, you don't have a cluster centre yet.

berak ( 2019-11-19 02:50:44 -0500 )

@berak OK, thanks! I assumed that crossCheck=True with dmatches = sorted and e.g. matches[:10] would work, but I just tried with crossCheck=False and the ratio test, and it does work better.

postlude ( 2019-11-19 03:30:09 -0500 )

@berak out of interest, how would I approach finding a cluster centre if I wanted to do that? Would kmeans be overkill?

postlude ( 2019-11-19 03:31:35 -0500 )

"finding a cluster centre"

center of mass: just sum up all point coords and divide by the number of points

geometric centre: find a bounding box and take the centre of that

but again, you'll have to remove the outliers first, so do this after the matching.

berak ( 2019-11-19 03:56:00 -0500 )
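The two options above can be sketched in a few lines of numpy (the coordinates below are made-up stand-ins for the filtered match points):

```python
import numpy as np

# Made-up inlier points; in the question's code these would be the
# dst points that survive outlier removal.
pts = np.array([[10.0, 10.0], [11.0, 11.0], [12.0, 12.0], [20.0, 10.0]])

center_of_mass = pts.mean(axis=0)            # sum of coords / number of points
mn, mx = pts.min(axis=0), pts.max(axis=0)    # axis-aligned bounding box
geometric_centre = (mn + mx) / 2.0           # centre of that box

print("center of mass:", center_of_mass)
print("geometric centre:", geometric_centre)
```

Note the two centres differ as soon as the points are unevenly distributed, as here.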

answered 2019-11-19 08:54:46 -0500

supra56

updated 2019-11-19 09:01:32 -0500

Don't use a value greater than 20; I selected between 8 and 14. If you want to use cv2.perspectiveTransform, you need a value greater than 12. Change the cv2.drawMatches line in your code like this:

res = cv2.drawMatches(img1, kpts1, img2, kpts2, dmatches[:20], None, flags=2)  # before
res = cv2.drawMatches(img1, kpts1, img2, kpts2, dmatches[:12], None, flags=2)  # after

Output: [image]

Here is another version, in 20 lines:

#!/usr/bin/env python3
# Raspberry Pi 3/4, OpenCV 4.1.2
# Date: November 19th 2019
import cv2 as cv

img1 = cv.imread('mt_image.jpg')
img2 = cv.imread('mt_template.jpg')

orb = cv.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
bf = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)
matches = sorted(matches, key=lambda x: x.distance)

img3 = cv.drawMatches(img1, kp1, img2, kp2, matches[:8], None, flags=2)
cv.imwrite('feature_orb.jpg', img3)
cv.imshow('Feature Matching', img3)
cv.waitKey(0)
cv.destroyAllWindows()


Output: [image] (for comparison). Unfortunately, I'm running out of time and haven't attempted cv2.perspectiveTransform.
