I have a little experience with OpenCV (Python) and just received a Jetson Nano, so I want to test OpenCV with CUDA. I tried an ORB example but am stuck converting the GPU keypoints to CPU keypoints and cannot get any result.
import cv2

def orb_with_cuda():
    MAX_FEATURES = 100
    GOOD_MATCH_PERCENT = 0.15
    # load images into numpy (get_sample is my helper that returns the image as a numpy array)
    npMat1 = get_sample("im1.jpg")
    npMat2 = get_sample("im2.jpg")
    # upload into CUDA
    cuMat1 = cv2.cuda_GpuMat()
    cuMat2 = cv2.cuda_GpuMat()
    cuMat1.upload(npMat1)
    cuMat2.upload(npMat2)
    # convert to gray
    cuMat1g = cv2.cuda.cvtColor(cuMat1, cv2.COLOR_RGB2GRAY)
    cuMat2g = cv2.cuda.cvtColor(cuMat2, cv2.COLOR_RGB2GRAY)
    # ORB on the GPU; detectAndComputeAsync returns keypoints/descriptors as GpuMats
    corb = cv2.cuda_ORB.create(MAX_FEATURES)
    _kps1, _descs1 = corb.detectAndComputeAsync(cuMat1g, None)
    _kps2, _descs2 = corb.detectAndComputeAsync(cuMat2g, None)
    # convert keypoints to CPU (this is the step that seems to leave the keypoints empty)
    kps1 = [cv2.KeyPoint() for i in range(MAX_FEATURES)]
    kps2 = [cv2.KeyPoint() for i in range(MAX_FEATURES)]
    corb.convert(_kps1, kps1)
    corb.convert(_kps2, kps2)
    # brute-force matching on the GPU descriptors
    cbf = cv2.cuda_DescriptorMatcher.createBFMatcher(cv2.NORM_HAMMING)
    cmatches = cbf.match(_descs1, _descs2)
    # sort matches by score
    cmatches.sort(key=lambda x: x.distance, reverse=False)
    # remove not-so-good matches
    numGoodMatches = int(len(cmatches) * GOOD_MATCH_PERCENT)
    cmatches = cmatches[:numGoodMatches]
    # draw top matches
    imMatches = cv2.drawMatches(npMat1, kps1, npMat2, kps2, cmatches, None)
    cv2.imwrite("gpu_matches.jpg", imMatches)
    return()
The code above runs without any errors, but the CPU keypoints all seem to be empty (the created image does not show any matching lines). The equivalent non-CUDA code runs fine; a rough sketch of that CPU version is included below for comparison.
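For reference, the CPU-only version I am replicating looks roughly like this (trimmed down, and using cv2.imread here instead of my get_sample helper). This version writes an image with the expected matching lines:

import cv2

def orb_cpu_reference():
    MAX_FEATURES = 100
    GOOD_MATCH_PERCENT = 0.15
    # read the same test images (cv2.imread returns BGR, hence COLOR_BGR2GRAY below)
    img1 = cv2.imread("im1.jpg")
    img2 = cv2.imread("im2.jpg")
    gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    # plain CPU ORB: detectAndCompute returns the keypoint list directly
    orb = cv2.ORB_create(MAX_FEATURES)
    kps1, descs1 = orb.detectAndCompute(gray1, None)
    kps2, descs2 = orb.detectAndCompute(gray2, None)
    # brute-force Hamming matching, keep the best GOOD_MATCH_PERCENT of matches
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(descs1, descs2), key=lambda m: m.distance)
    matches = matches[:int(len(matches) * GOOD_MATCH_PERCENT)]
    # draw and save the top matches
    imMatches = cv2.drawMatches(img1, kps1, img2, kps2, matches, None)
    cv2.imwrite("cpu_matches.jpg", imMatches)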
Any idea what I am doing wrong? I tried several conversion methods, and this is the 'best' I could find. Thanks for your help!
(Using OpenCV 4.1.2 on a Jetson Nano with Python 3.6.9.)