
hjf's profile - activity

2015-08-09 19:24:08 -0600 received badge  Nice Question (source)
2014-07-16 09:39:52 -0600 received badge  Famous Question (source)
2014-03-04 11:03:34 -0600 received badge  Nice Answer (source)
2014-02-25 09:07:23 -0600 received badge  Teacher (source)
2014-01-07 08:16:49 -0600 received badge  Notable Question (source)
2013-10-15 16:21:40 -0600 received badge  Popular Question (source)
2013-04-12 09:29:13 -0600 commented answer parallel processing in Windows

Actually, I'm using 2.4.5.

2013-04-12 09:25:40 -0600 answered a question Decreasing capture resolution of webcam

This is a limitation of OpenCV's Linux video-capture backend. I'm using a Logitech C170; lsusb -v shows that it supports a lot of resolutions and formats (it can even do BGR, which would be good for feeding OpenCV).

On Windows I can set the resolution and FPS just fine with this camera. For Linux I had to change the OpenCV source (there's a define somewhere, I think DEFAULT_WIDTH or something like that) so that you can set 320x240, for example. But it's a hardcoded value.
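Not part of the original answer: in more recent OpenCV builds the resolution can usually be requested through capture properties instead of patching the source. A minimal sketch (Python here for brevity; the C++ API is analogous) -- note the driver may still ignore the request, so always check what you actually got back:

```python
# Numeric values of the standard OpenCV capture properties; with the real
# library you would use cv2.CAP_PROP_FRAME_WIDTH / cv2.CAP_PROP_FRAME_HEIGHT.
CAP_PROP_FRAME_WIDTH = 3
CAP_PROP_FRAME_HEIGHT = 4

def request_resolution(cap, width, height):
    """Ask the capture backend for width x height, return what it reports.

    `cap` is any VideoCapture-like object with set(prop, value) and
    get(prop) methods.
    """
    cap.set(CAP_PROP_FRAME_WIDTH, width)
    cap.set(CAP_PROP_FRAME_HEIGHT, height)
    # The backend is free to clamp or ignore the request, so read it back.
    return cap.get(CAP_PROP_FRAME_WIDTH), cap.get(CAP_PROP_FRAME_HEIGHT)

# With a real camera this would be:
#   cap = cv2.VideoCapture(0)
#   w, h = request_resolution(cap, 320, 240)
```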

2013-04-12 00:24:57 -0600 received badge  Student (source)
2013-04-11 23:45:56 -0600 answered a question Prebuilt Windows binaries, CUDA

cv::getBuildInformation() can tell you that. In my case it reads:

  Other third-party libraries:
    Use IPP:                     NO
    Use Eigen:                   NO
    Use TBB:                     NO
    Use OpenMP:                  NO
    Use GCD:                     NO
    Use Concurrency:             YES
    Use C=:                      NO
    Use Cuda:                    NO
    Use OpenCL:                  NO

So there's your answer: the prebuilt binaries were not compiled with CUDA.
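Not from the original answer: if you need to check this programmatically rather than by eye, a small helper can scan the text returned by getBuildInformation() for a given "Use X: YES/NO" line. A sketch in Python (the function name is my own; with the real library you'd pass it cv2.getBuildInformation()):

```python
import re

def build_flag(info, name):
    """Return True/False for a 'Use <name>: YES/NO' line, or None if absent.

    The colon is optional because some build-info lines omit it.
    """
    m = re.search(r"Use\s+%s\s*:?\s+(YES|NO)" % re.escape(name), info)
    if m is None:
        return None
    return m.group(1) == "YES"

# With the real library: build_flag(cv2.getBuildInformation(), "Cuda")
```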

2013-04-11 21:07:56 -0600 received badge  Editor (source)
2013-04-11 21:07:09 -0600 asked a question parallel processing in Windows

I developed an application with OpenCV on Linux, using TBB there to exploit OpenCV's parallel capabilities (mostly knnSearch in FLANN and SURF detection). It uses 100% of my CPU when needed, as expected.

I then ported this application to Windows, but there it only uses 25% of my quad-core CPU. I'm using the prebuilt OpenCV binaries, and I read that since 2.4.3 these are compiled against the Concurrency framework. Aren't SURF and FLANN "Concurrency framework aware"?

I don't have a problem compiling from source with TBB if needed, but I'd like to know whether this functionality should work out of the box.
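If compiling from source does turn out to be necessary, the usual switch is CMake's WITH_TBB option. An illustrative invocation for a Windows build (the paths are placeholders for your own layout, not from the original post):

```shell
cmake -D WITH_TBB=ON ^
      -D CMAKE_BUILD_TYPE=Release ^
      C:/opencv/sources
```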

2013-04-01 11:32:46 -0600 commented answer SURF matching against a database of images

Sorry, I don't seem to be able to format comments. Yes, if I use size (observedDescriptors.rows, k) it works, but I only get one result (always the same, and it's not the right one). I'm very confused :P

2013-04-01 09:15:25 -0600 received badge  Supporter (source)
2013-04-01 09:13:54 -0600 commented answer SURF matching against a database of images

Part 2: query

  Matrix<int> indices = new Matrix<int>(supermatrix.Rows, k);
  Matrix<float> dist = new Matrix<float>(supermatrix.Rows, k);
  fln.KnnSearch(observedDescriptors, indices, dist, k, 12);

The result here is that "dist" (the output) is a Matrix<float> with every row and column equal to 0.0.

Now, if I do the opposite (note that the FLANN index is built from observedDescriptors, which is N times smaller than supermatrix, where N is the number of images):

  Emgu.CV.Flann.Index fln = new Index(observedDescriptors, 4);
  Matrix<int> indices = new Matrix<int>(supermatrix.Rows, k);
  Matrix<float> dist = new Matrix<float>(supermatrix.Rows, k);
  fln.KnnSearch(supermatrix, indices, dist, k, 12);

The result is correct, but the FLANN index isn't really exploited; this is also slower than a linear search.

2013-04-01 09:08:29 -0600 commented answer SURF matching against a database of images

What I'm doing currently is the following:

Part 1: initialization

  1. Read all the images in a directory; for each, use surfCPU.DetectKeyPointsRaw and surfCPU.ComputeDescriptorsRaw (where SURFDetector surfCPU = new SURFDetector(500, false);).

  2. Append each image's descriptors to one big matrix:

  supermatrix = surfCPU.ComputeDescriptorsRaw(img, null, add.modelKeyPoints);

  3. Also keep a List<int> of keypoint end indices:

  imageLastIndex.Add(supermatrix.Rows);

  (assuming each image has 100 points, this gives 100, 200, 300: image 0 is points 0 to 99, image 1 is points 100 to 199, etc.)

  4. Once all the images have been analyzed, build the index:

  Emgu.CV.Flann.Index fln = new Index(supermatrix, 4);

(followed in next comment)

2013-03-31 13:51:33 -0600 asked a question SURF matching against a database of images

I'd like to compare an image taken with a webcam against a database of known images. I've been trying to use SURF for this, and it works OK when the number of images is small. With BruteForceMatcher I get good, repeatable results, but it is very slow: comparing against a database of 3000 images would take several minutes.

A post on Stack Overflow led me to a paper claiming that a FLANN index over the images is very, very fast (down to a few milliseconds for thousands of images). A poster in that thread says he builds one big FLANN index from the SURF features of all the images and uses it to match the current image. I've tried to do what he describes. I'm using EmguCV to access OpenCV from C#. I get a Matrix<float> of features for each image and concatenate all the matrices into one big matrix of all features. With this I build a FLANN index and compare the current observation against this big database with KnnSearch.

It returns in about 25 ms, but with every row filled with zeros.

Now, if I reverse this and build the FLANN index from the points of the observed image, then query it with the big matrix of all features, it works: for 100 images it takes about 300 ms.

Has anyone tried this approach successfully? Is it even possible to do this with OpenCV?

Thanks


EDIT: added the code from the comments.


What I'm doing currently is the following:

Part 1: initialization

  1. Read all the images in a directory; for each, use surfCPU.DetectKeyPointsRaw and surfCPU.ComputeDescriptorsRaw (where SURFDetector surfCPU = new SURFDetector(500, false);).

  2. Append each image's descriptors to one big matrix:

  supermatrix = surfCPU.ComputeDescriptorsRaw(img, null, add.modelKeyPoints);

  3. Also keep a List<int> of keypoint end indices:

  imageLastIndex.Add(supermatrix.Rows);

  (assuming each image has 100 points, this gives 100, 200, 300: image 0 is points 0 to 99, image 1 is points 100 to 199, etc.)

  4. Once all the images have been analyzed, build the index:

  Emgu.CV.Flann.Index fln = new Index(supermatrix, 4);
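The imageLastIndex bookkeeping above can be sketched language-neutrally (Python here, standing in for the C#/EmguCV of the post): since each entry holds the cumulative number of descriptor rows after an image is appended, a binary search maps any row of the stacked matrix back to its source image.

```python
import bisect

def image_for_row(image_last_index, row):
    """Map a row of the stacked descriptor matrix to its image number.

    image_last_index[i] is the total number of rows after image i has been
    appended, so the owning image is found with one binary search.
    """
    return bisect.bisect_right(image_last_index, row)

# Three images with 100 descriptors each -> cumulative ends [100, 200, 300]:
# rows 0..99 belong to image 0, rows 100..199 to image 1, and so on.
```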

Part 2: query

  Matrix<int> indices = new Matrix<int>(supermatrix.Rows, k);
  Matrix<float> dist = new Matrix<float>(supermatrix.Rows, k);
  fln.KnnSearch(observedDescriptors, indices, dist, k, 12);

The result here is that "dist" (the output) is a Matrix<float> with every row and column equal to 0.0.

Now, if I do the opposite (note that the FLANN index is built from observedDescriptors, which is N times smaller than supermatrix, where N is the number of images):

  Emgu.CV.Flann.Index fln = new Index(observedDescriptors, 4);
  Matrix<int> indices = new Matrix<int>(supermatrix.Rows, k);
  Matrix<float> dist = new Matrix<float>(supermatrix.Rows, k);
  fln.KnnSearch(supermatrix, indices, dist, k, 12);

The result is correct, but the FLANN index isn't really exploited; this is also slower than a linear search.
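For reference, here is what the forward direction is meant to produce, as a brute-force sketch (Python, standing in for the C#/EmguCV code above; a real FLANN index would replace the linear scan): the index is built over the stacked database descriptors, each observed descriptor queries it, and the neighbours vote for their source image.

```python
import bisect
from collections import Counter

def knn(database, query, k):
    """Indices of the k database rows nearest to `query` (squared L2)."""
    order = sorted(range(len(database)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(database[i], query)))
    return order[:k]

def vote(database, image_last_index, observed, k=2):
    """Tally, per image, how many k-NN hits its descriptors receive."""
    votes = Counter()
    for q in observed:
        for row in knn(database, q, k):
            # Map the flat row index back to the image that owns it.
            votes[bisect.bisect_right(image_last_index, row)] += 1
    return votes

# Two images with two 2-D descriptors each, stacked in order:
db = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
ends = [2, 4]  # image 0 owns rows 0-1, image 1 owns rows 2-3
print(vote(db, ends, [[0.05, 0.0]], k=2))  # -> Counter({0: 2})
```

The image with the most votes is the best match; FLANN only changes how `knn` finds the neighbours, not this direction of the search.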