How should I use OpenCV 3.1 ORB in Python3?

asked 2016-12-08 04:00:06 -0500


updated 2016-12-08 04:20:21 -0500

I'm new to CV. I have a lot of images and I want to compare one image with the others in my image dataset, so I decided to index all the images. After some searching I learned that ORB, SIFT and SURF are what I was looking for. But I don't know how to use the keypoints and descriptors; below is my code:

import cv2

nfeatures = 1
cv2.ocl.setUseOpenCL(False)  # work around an OpenCL-related crash in OpenCV 3.1
img = cv2.imread('images/forest-copyright.jpg', 0)   # 0 = load as grayscale
img2 = cv2.imread('images/forest-high.jpg', 0)

def kpdes(img):
    orb = cv2.ORB_create(nfeatures=nfeatures)
    kp = orb.detect(img, None)       # detect keypoints
    kp, des = orb.compute(img, kp)   # compute one 32-byte descriptor per keypoint
    print(kp, des)

kpdes(img)
kpdes(img2)

Some parts of output:

[KeyPoint 0000000002A2EF00] [[252 48 188 124 41 124 81 184 161 63 167 25 87 63 74 91 192 213 237 0 60 79 243 0 219 235 112 93 224 225 78 67]]

How should I use a descriptor like "[[252 48 188 124 41 124 81 184 161 63 167 25 87 63 74 91 192 213 237 0 60 79 243 0 219 235 112 93 224 225 78 67]]", and what does it mean? How can I store the descriptors in Elasticsearch and query them? I also found that the descriptors change when I increase nfeatures. Yes, there are a lot of questions here, waiting for a helper! After reading some docs, I now convert each descriptor to a 256-bit binary string like "00101000010111000111110101111...", and calculate the Hamming distance. Below is my current code:
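(Side note on those numbers: each ORB descriptor row is 32 uint8 bytes, i.e. 256 bits. The string round-trip in the code below works, but the same Hamming distance can be computed directly on the raw byte rows; a minimal NumPy sketch, not from the original post:)

    import numpy as np

    def hamming_distance(row1, row2):
        # XOR the 32 bytes, then count the set bits -- equivalent to
        # comparing the two 256-bit binary strings position by position.
        return int(np.unpackbits(np.bitwise_xor(row1, row2)).sum())

    # Toy example: two 32-byte descriptors differing in a single bit.
    a = np.zeros(32, dtype=np.uint8)
    b = a.copy()
    b[0] = 1                         # flips exactly one bit
    print(hamming_distance(a, b))    # -> 1
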

import numpy as np
import cv2


nfeatures = 1000
threshold = 150
cv2.ocl.setUseOpenCL(False)  # work around an OpenCL-related crash in OpenCV 3.1
img = cv2.imread('images/forest-copyright.jpg', 0)   # 0 = load as grayscale
img2 = cv2.imread('images/forest-high.jpg', 0)


def kpdes(img):
    """Detect ORB keypoints and return the descriptors as 256-bit binary strings."""
    orb = cv2.ORB_create(nfeatures=nfeatures)
    kp = orb.detect(img, None)
    kp, des = orb.compute(img, kp)
    des_bin_list = []
    for row in des:                  # each row is 32 uint8 bytes
        char_list = []
        for byte in row:
            char_list.append(np.binary_repr(byte, 8))
        des_bin_list.append(''.join(char_list))
    return des_bin_list


def get_ham_dis(str1, str2):
    """Hamming distance: count positions where the two bit strings differ."""
    distance = 0
    for char1, char2 in zip(str1, str2):
        if char1 != char2:
            distance += 1
    return distance


def dis(des1, des2):
    # Note: this pairs descriptors by index only, which implicitly assumes
    # both images produced their keypoints in a comparable order.
    bad_points = 0
    for des_bin1, des_bin2 in zip(des1, des2):
        ham_dis = get_ham_dis(des_bin1, des_bin2)
        if ham_dis > threshold:
            bad_points += 1
    print(bad_points)


des_1 = kpdes(img)
des_2 = kpdes(img2)
dis(des_1, des_2)
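(For reference, OpenCV ships a brute-force matcher that scores binary descriptors with this same Hamming metric, and it searches all pairs rather than pairing by index. A sketch on synthetic descriptors, since I'm using random bytes in place of the forest images here:)

    import numpy as np
    import cv2

    rng = np.random.RandomState(0)
    # Fake 32-byte ORB-style descriptors: 100 rows each, with the last
    # 50 rows of des2 identical to des1 and the first 50 perturbed.
    des1 = rng.randint(0, 256, (100, 32), dtype=np.uint8)
    des2 = des1.copy()
    des2[:50] = rng.randint(0, 256, (50, 32), dtype=np.uint8)

    # NORM_HAMMING is the right norm for binary descriptors like ORB.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = bf.match(des1, des2)
    good = [m for m in matches if m.distance < 50]
    print(len(matches), len(good))
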

Comments

have a look at the tutorials, and refine your question?

berak ( 2016-12-08 04:20:42 -0500 )

@berak I really don't understand how to accomplish my task, even after reading almost all the related tutorials. I don't know how to use the descriptors, and what should I do when the descriptor count differs between two images, etc.? ORZ

ddddn ( 2016-12-09 04:18:36 -0500 )

I want to compare one image with the others in my images dataset

usually, keypoints/2d-features are used to match a known scene between 2 images, with the intent to derive the pose from one image to the other.

now, using ORB (or any other feature2d thing) features for your task "as is" might be a bit unlucky, because each image will have a different count of keypoints/descriptors.

you can overcome that by

a:) using a dense, fixed grid of keypoints (say, 20x20)

b:) clustering your descriptors (aka BagOfWords) (which again is not easily doable with ORB, since you would need float descs for kmeans based clustering)

in the last stage (comparing), you'll probably need something better than the "1-nearest-neighbour" search used now, like a knn search (or a flann::index in c++)
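(Option a:) could be sketched like this: build the keypoint grid yourself and call only orb.compute(), skipping detection, so every same-sized image yields a comparable descriptor layout. My example; the synthetic noise image stands in for the poster's files, and note that compute() may drop grid points too close to the border:)

    import numpy as np
    import cv2

    def dense_keypoints(w, h, step=20, size=20.0):
        # A fixed grid of keypoints, identical for every image of this size.
        return [cv2.KeyPoint(float(x), float(y), size)
                for y in range(step // 2, h, step)
                for x in range(step // 2, w, step)]

    img = np.random.RandomState(0).randint(0, 256, (200, 200), dtype=np.uint8)

    orb = cv2.ORB_create()
    kps = dense_keypoints(200, 200)
    kps, des = orb.compute(img, kps)   # compute only, no detection
    print(des.shape)                   # one 32-byte row per surviving grid point
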

berak ( 2016-12-09 04:50:01 -0500 )

@berak "using a dense, fixed grid of keypoints" -- what should I do for that? Could you give me a sample? Thank you :) And can you compare the two images below with ORB and conclude whether they are similar or not?

ddddn ( 2016-12-09 19:16:38 -0500 )