
Christopher's profile - activity

2019-01-25 07:00:29 -0600 received badge  Famous Question (source)
2016-12-08 11:53:02 -0600 received badge  Notable Question (source)
2016-01-24 15:24:04 -0600 received badge  Popular Question (source)
2014-10-29 13:45:20 -0600 received badge  Supporter (source)
2014-10-29 11:25:27 -0600 received badge  Critic (source)
2014-10-19 11:16:30 -0600 commented answer Image matching problem

I expect this would not be an issue, as the OCR would need to be trained to the particular font and would under no circumstances pick up common handwriting.

2014-10-17 15:06:56 -0600 commented question FLANN Index in Python - Training Fails with Segfault

To answer your questions:

Matching one to one worked just fine. In fact, I tested that with find_obj.py!

But then I came upon something else. http://stackoverflow.com/questions/25781782/making-flann-matcher-editable-and-savable-to-disk ...which passed descriptors inside a list to add().

So I made a switch that no longer causes the segfault:

This:
flann.add([des2])
instead of:
flann.add(des2)

It seems it needed a list of ndarrays, rather than an ndarray. Of course. The perils of using undocumented things, I guess.
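For anyone hitting the same segfault, here's a minimal sketch of the pattern that ended up working for me (BRISK descriptors as in my question; the training image filename is just a placeholder):

import cv2

FLANN_INDEX_LSH = 6
index_params = dict(algorithm = FLANN_INDEX_LSH,
                    table_number = 6,
                    key_size = 12,
                    multi_probe_level = 1)
search_params = dict(checks = 50)

brisk = cv2.BRISK()
flann = cv2.FlannBasedMatcher(index_params, search_params)

# detectAndCompute returns an ndarray of descriptors;
# add() wants a *list* of such ndarrays, even for a single image
img = cv2.imread("train_image.png", 0)
kp, des = brisk.detectAndCompute(img, None)
flann.add([des])

flann.train()  # no longer segfaults once the descriptors are wrapped in a list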

2014-10-17 14:53:25 -0600 received badge  Editor (source)
2014-10-17 02:51:28 -0600 received badge  Student (source)
2014-10-16 22:17:43 -0600 asked a question FLANN Index in Python - Training Fails with Segfault

Dear Internet,

I'm trying to add a number of images to a FLANN index (thousands in reality, once I have this working) and then find the closest match in the index to a query image. But it segfaults. :(

Bare essentials code:

import numpy as np
import cv2
import sys

FLANN_INDEX_LSH = 6

# Query image, loaded as grayscale
img1 = cv2.imread(sys.argv[1], 0)

brisk = cv2.BRISK()
kp1, des1 = brisk.detectAndCompute(img1, None)

index_params = dict(algorithm = FLANN_INDEX_LSH,
                    table_number = 6,      # 12
                    key_size = 12,         # 20
                    multi_probe_level = 1) # 2
search_params = dict(checks = 50)  # or pass empty dictionary

flann = cv2.FlannBasedMatcher(index_params, search_params)

# Compute descriptors for each training image and add them to the matcher
for filename in sys.argv[2:]:
    img2 = cv2.imread(filename, 0)
    print "Detecting and computing {0}".format(filename)
    kp2, des2 = brisk.detectAndCompute(img2, None)
    print "Adding..."
    flann.add(des2)

print len(flann.getTrainDescriptors())  # verify that it actually took the descriptors in

print "Training..."
flann.train()

print "Matching..."
matches = flann.knnMatch(des1, k=2)

(I've tried this with both 2.4.9 and a pull of 3.0 alpha as of the day I'm posting this; both had the same result.)

Problem: Here's the output where it fails:

Training...

Segmentation fault (core dumped)

So it dies at training, or at knnMatch() if I skip training.

What has me stuck:

  1. I can't find this functionality documented for Python. Am I looking in the wrong places?
    http://docs.opencv.org/trunk/modules/flann/doc/flann_fast_approximate_nearest_neighbor_search.html
    http://docs.opencv.org/trunk/modules/features2d/doc/common_interfaces_of_descriptor_matchers.html

  2. For lack of docs, I'm trying to emulate in Python what I see in matching_to_many_images.cpp from the samples, since that seems to be the closest thing to what I'm trying to do. That's where the code above comes from.

Now what? Kind of lost. Sorry if this is actually clearly documented and I just couldn't find it.

Thanks!

2014-05-11 07:52:59 -0600 commented question Python Feature Matching Speed

CUDA may well be it. I've become aware of that since reading more... unfortunately it is not available in the Python wrapper. :(

I'll try the C++ samples and see what kind of times I get.

2014-05-10 22:49:48 -0600 commented question Python Feature Matching Speed

I'm getting these results with 2.4.8, 2.4.9, and 3.0.

2014-05-10 22:43:12 -0600 asked a question Python Feature Matching Speed

I'm looking to find which object from a large database of images (I'll precompute and store the feature descriptors) appears in a video feed in real time. I see people pulling this off in C++, like in the following video I found: https://www.youtube.com/watch?v=kbYDjBa3Lyk

But my matching times in Python are way too high. I'm seeing 300-1000 ms with FLANN for ONE object, depending on the detection method, after trying all of the feature matching sample code that came with the OpenCV source. Is this just the way of things? Might I have done something wrong when compiling OpenCV?

Basically, are speeds like those in the video attainable in Python, or do I need to go learn C++ to make my project work? I'd love any ideas about what I might be doing wrong, if I am the problem. :)
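In case it helps, here is a minimal sketch of roughly how I'm timing a single match, assuming ORB descriptors and the LSH-based FLANN matcher as in the samples; the filenames are just placeholders:

import time
import cv2

img1 = cv2.imread("query.png", 0)   # query image, grayscale
img2 = cv2.imread("object.png", 0)  # one object image from the database

orb = cv2.ORB()  # 2.4-style constructor
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

FLANN_INDEX_LSH = 6
matcher = cv2.FlannBasedMatcher(dict(algorithm = FLANN_INDEX_LSH,
                                     table_number = 6,
                                     key_size = 12,
                                     multi_probe_level = 1),
                                dict(checks = 50))

# Time only the matching step, since detection/description are precomputed above
start = time.time()
matches = matcher.knnMatch(des1, des2, k=2)
print "knnMatch took {0:.1f} ms".format((time.time() - start) * 1000.0)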