Nyenna's profile - activity

2016-05-18 04:37:03 -0600 received badge  Notable Question (source)
2015-01-07 02:47:04 -0600 received badge  Popular Question (source)
2014-10-31 15:12:23 -0600 marked best answer Compute inliers for a homography

I am trying to compute, by myself, the inliers of a homography found using RANSAC, but I never get exactly the same list as the one given by the mask. Here are the two pieces of code I wrote. Both use the same homography matrix cv::Mat H.

Using findHomography(), I get a mask as output and I can print the ids of the inliers using the following code:

// Somewhere before I did: findHomography(points1, points2, CV_RANSAC, 3, mask)
for (int i = 0; i < nbMatchings; i++)
{
    // Select only the inliers (mask entry set to 1)
    if ((int)mask.at<uchar>(i, 0) == 1)
    {
        std::cout << i << ", ";
    }
}
std::cout << std::endl;

Then, suppose I want to compute the inliers myself (without using cv::Mat mask) by computing the distance between each point and its correspondence, using the very same code as in modules/calib3d/src/fundam.cpp:

for (int i = 0; i < nbMatchings; i++)
{
    // cv::Mat H is filled by findHomography(points1, points2, CV_RANSAC, 3, mask)
    const double* H2 = H.ptr<double>();

    float Hf[] = { (float)H2[0], (float)H2[1], (float)H2[2], (float)H2[3], (float)H2[4], (float)H2[5], (float)H2[6], (float)H2[7] };
    float ww = 1.f/(Hf[6]*points1[i].x + Hf[7]*points1[i].y + 1.f);
    float dx = (Hf[0]*points1[i].x + Hf[1]*points1[i].y + Hf[2])*ww - points2[i].x;
    float dy = (Hf[3]*points1[i].x + Hf[4]*points1[i].y + Hf[5])*ww - points2[i].y;
    float dist = (float)(dx*dx + dy*dy);

    if (dist <= 9) // The threshold used for RANSAC is 3, so the squared distance is 9
    {
        std::cout << i << ", ";
    }
}
std::cout << std::endl;

Both pieces of code work and give almost the same results. The point is that, in my understanding, they should produce exactly the same list, and they obviously do not. Usually, the second code outputs up to 10% more inliers than the first. Moreover, some inliers appear in the first list but not in the second, while some appear in the second but not in the first.

Does anybody have an idea about this problem? Why is it that, using the very same homography matrix (it is computed only once), I don't get the same set of inliers twice?
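
For reference, here is a minimal sketch of how I compare the two lists, assuming the two loops above are changed to push the indices into std::vector<int> variables (maskInliers and distInliers are names introduced only for this comparison):

#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

// Report the indices found by only one of the two methods.
// Both vectors are sorted, since the loops fill them in increasing index order.
void printDifference(const std::vector<int>& maskInliers,
                     const std::vector<int>& distInliers)
{
    std::vector<int> onlyInMask, onlyInDist;
    std::set_difference(maskInliers.begin(), maskInliers.end(),
                        distInliers.begin(), distInliers.end(),
                        std::back_inserter(onlyInMask));
    std::set_difference(distInliers.begin(), distInliers.end(),
                        maskInliers.begin(), maskInliers.end(),
                        std::back_inserter(onlyInDist));
    std::cout << "Only in the mask list: " << onlyInMask.size() << std::endl;
    std::cout << "Only in the distance list: " << onlyInDist.size() << std::endl;
}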

2014-03-01 17:33:00 -0600 received badge  Student (source)
2014-02-06 15:20:09 -0600 received badge  Nice Answer (source)
2013-08-01 06:48:37 -0600 answered a question Opencv4Android: NDK error

I would have a look at this answer

2013-07-26 03:58:59 -0600 edited answer Is there a way of 'hiding' the opencvManager icon from the user?

Of course, you can build OpenCV with static linking so that no manager is needed (OpenCV will be integrated into your .apk).

Check this out here.

However, keep in mind that this will produce a huge .apk.

2013-07-25 07:05:28 -0600 received badge  Nice Answer (source)
2013-07-25 06:42:30 -0600 answered a question how works bruteforcematcher?

You can compute the distance between two descriptors according to different metrics (e.g. the L2 distance for floating-point descriptors). The smaller the distance, the more similar the two descriptors.

Assuming that you use the brute-force matcher to match the descriptors from one image (called the "query" image) to another image (the "train" image), here is what it does:

For each descriptor of the query image, compute the distance to each descriptor of the train image and keep the smallest one.
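
A minimal sketch of that behaviour with cv::BFMatcher, assuming floating-point descriptors such as SURF (hence the L2 distance); queryDescriptors and trainDescriptors are placeholders for the output of a descriptor extractor:

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

// For each query descriptor, keep the train descriptor with the smallest L2 distance.
std::vector<cv::DMatch> bruteForceMatch(const cv::Mat& queryDescriptors,
                                        const cv::Mat& trainDescriptors)
{
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<cv::DMatch> matches;
    matcher.match(queryDescriptors, trainDescriptors, matches);
    return matches; // one DMatch per row of queryDescriptors
}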

2013-07-23 02:44:46 -0600 commented answer Does using FLANN-Matcher for every frame make sense?

What improvements are you talking about? The hierarchical clustering trees search is implemented both in OpenCV and in the author's code...

2013-07-17 02:11:13 -0600 commented question Access feature in OpenCV4Android 2.4.9 (built from trunk)

It is now in the master branch on github.

2013-07-09 09:39:15 -0600 commented question Access feature in OpenCV4Android 2.4.9 (built from trunk)

Do you mean I should open an issue and send a pull request?

2013-07-09 02:50:51 -0600 answered a question Access feature in OpenCV4Android 2.4.9 (built from trunk)

I finally found a way, apparently, by adding the prototype of the function to calib3d.hpp. I am not sure whether it is correct and will run further tests.

2013-07-09 02:48:30 -0600 commented question Access feature in OpenCV4Android 2.4.9 (built from trunk)

No, from C++ actually...

2013-07-08 15:03:46 -0600 asked a question Access feature in OpenCV4Android 2.4.9 (built from trunk)

Using the NDK, I was able to build the master branch of OpenCV for Android and it seems to work. However, I would really like to use the five-point algorithm, available here:

`modules/calib3d/src/five-points.cpp`

It does not work if I include calib3d:

`#include <opencv2/calib3d/calib3d.hpp>` (i.e. I cannot use `cv::findEssentialMat(...)`).

Yet it seems to be implemented! How can I access it in my OpenCV4Android build from trunk?

2013-06-25 02:09:03 -0600 answered a question OCR support in OpenCV?

I don't think so, but I would advise you to have a look at Tesseract OCR.

2013-06-08 03:58:31 -0600 received badge  Scholar (source)
2013-06-03 03:14:57 -0600 commented question When does a feature in 2.4.9 become stable?

Because the unstable version is on github and does not appear in the roadmap (which makes sense =))...

2013-06-03 03:08:45 -0600 answered a question How to scale an 640x480pixels down to 8x4pixels?

In C++, the function is called cv::resize. Have a look at the documentation here.
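
A minimal sketch, assuming a 640x480 input image as in the question (INTER_AREA is generally a good choice when shrinking):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Scale a 640x480 image down to 8x4 pixels.
cv::Mat shrinkTo8x4(const cv::Mat& src)
{
    cv::Mat dst;
    cv::resize(src, dst, cv::Size(8, 4), 0, 0, cv::INTER_AREA);
    return dst;
}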

2013-05-31 02:50:07 -0600 commented question When does a feature in 2.4.9 become stable?

@RaulPL: I indeed had problems installing 2.4.9. But how can I know when it will be stable and included, say, in the OpenCV4Android library? Because I couldn't successfully compile 2.4.9 for Android... @yes123: Version 2.4.9 is the version on GitHub. I believe it is the "unstable" version.

2013-05-30 09:39:21 -0600 asked a question When does a feature in 2.4.9 become stable?

I am interested in using a functionality that apparently exists on github (OpenCV 2.4.9). I would like to use it on Android, and therefore I am waiting for the stable release including this feature. How can I know when it will be included in the stable versions and in the OpenCV4Android version?

To be precise: I am interested in the function called findEssentialMat (the five-point algorithm).

2013-05-22 02:08:47 -0600 commented answer Why does the BestOf2NearestMatcher's result depend on the image features input order?

That's exactly why I asked.

2013-05-17 08:08:35 -0600 commented answer Why does the BestOf2NearestMatcher's result depend on the image features input order?

Do you always get the same number of matches? It is apparently using the FLANN library, which approximates the best match. Depending on the algorithm that is used, maybe it is simply not deterministic? More precisely, maybe it does not always find two matches for a feature and, in that case, does a continue; and ignores the feature. Is that possible?

2013-05-17 07:51:18 -0600 commented answer Why does the BestOf2NearestMatcher's result depend on the image features input order?

You posted the number of matches. My reasoning is that if it is also the number of features, then your code is computing the matches in one direction only...

2013-05-16 09:46:53 -0600 commented answer Why does the BestOf2NearestMatcher's result depend on the image features input order?

My mistake. But is it true that image1 has 1095 features and image2 has 1107?

2013-05-16 09:09:14 -0600 answered a question Why does the BestOf2NearestMatcher's result depend on the image features input order?

When you match, say, image A to image B, you actually find the best correspondence in B for each feature in A.

And this is definitely not symmetric:

  • Your first image (corresponding to imgFeat1) has 1095 features. For each of those features, the algorithm gives you the closest feature in imgFeat2. It might happen that two features in imgFeat1 are matched to the same feature in imgFeat2.
  • Conversely, your second image has 1107 features that are matched to the features of the first image (a minimal sketch of this asymmetry follows below).
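
A minimal sketch of this asymmetry, where desc1 and desc2 are hypothetical descriptor matrices for the two images (BestOf2NearestMatcher uses its own matcher internally; a plain cv::BFMatcher is used here only for illustration):

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <iostream>
#include <vector>

void showAsymmetry(const cv::Mat& desc1, const cv::Mat& desc2)
{
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<cv::DMatch> matches12, matches21;
    matcher.match(desc1, desc2, matches12); // one match per feature of image 1
    matcher.match(desc2, desc1, matches21); // one match per feature of image 2
    // With 1095 and 1107 features respectively, this prints "1095 vs 1107".
    std::cout << matches12.size() << " vs " << matches21.size() << std::endl;
}
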
2013-05-14 02:08:29 -0600 answered a question [Features2D] implementation differences of different Features Detectors/Extractors/Matchers

First of all, ORB descriptors are binary descriptors and therefore require the Hamming distance (the Euclidean distance makes no sense in this case). Your wrong results most likely come from there.

Then, I would say that matcher.add() adds descriptors to the index. When you match() your descriptors_scene, it actually matches descriptors_scene to the index (i.e. descriptors here).

So it is not wrong that ORB matches the query to the train image. And I believe that SURF does the same.
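
A minimal sketch of that setup, assuming descriptors (train image) and descriptors_scene (query image) are ORB descriptors as in the question:

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

// Match binary descriptors with the Hamming distance, via an index built with add().
std::vector<cv::DMatch> matchOrb(const cv::Mat& descriptors,
                                 const cv::Mat& descriptors_scene)
{
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<cv::DMatch> matches;
    matcher.add(std::vector<cv::Mat>(1, descriptors)); // build the "train" index
    matcher.match(descriptors_scene, matches);         // match the query against the index
    return matches;
}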

2013-05-14 01:58:16 -0600 commented answer Does the matching work in OpenCV 2.4.4?

If you want to draw lines between your images, you will need to copy both of them into the same matrix first. I don't see why your results would be incorrect. The matches you get from the original image to the similar one are probably quite coherent, while those with the other images aren't (but you would need to draw the lines to see this). As I said, the algorithm gives the _best possible match w.r.t. the Hamming distance_, even when that match is really bad. Your filtering step is also very naive; you should prefer a geometric validation using e.g. a homography matrix (a sketch follows below). But the first step would be to draw the lines to convince yourself that the algorithms work.
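
A minimal sketch of such a geometric validation, assuming matches comes from a descriptor matcher and kp1/kp2 are the keypoints of the two images (names introduced here only for illustration):

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

// Keep only the matches consistent with a single homography (RANSAC rejects the rest).
std::vector<cv::DMatch> keepGeometricInliers(const std::vector<cv::KeyPoint>& kp1,
                                             const std::vector<cv::KeyPoint>& kp2,
                                             const std::vector<cv::DMatch>& matches)
{
    if (matches.size() < 4) return matches; // findHomography needs at least 4 pairs
    std::vector<cv::Point2f> pts1, pts2;
    for (size_t i = 0; i < matches.size(); i++)
    {
        pts1.push_back(kp1[matches[i].queryIdx].pt);
        pts2.push_back(kp2[matches[i].trainIdx].pt);
    }
    std::vector<uchar> mask;
    cv::findHomography(pts1, pts2, CV_RANSAC, 3, mask);
    std::vector<cv::DMatch> inliers;
    for (size_t i = 0; i < matches.size(); i++)
        if (mask[i])
            inliers.push_back(matches[i]);
    return inliers;
}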

2013-05-08 09:02:37 -0600 received badge  Nice Answer (source)
2013-05-08 08:47:53 -0600 received badge  Teacher (source)
2013-05-08 07:24:46 -0600 commented answer Does the matching work in OpenCV 2.4.4?

No. As I said in at least three answers to your posts: you will always get "good matches" (inliers) and "bad matches" (outliers). First of all, you should draw the matches to see whether you have good matches or not (but apparently you haven't done this, or haven't shared the result with us). I don't know how you can say that this is not working if you haven't even drawn the matches in a comprehensible way...

2013-05-08 06:37:37 -0600 commented answer Does the matching work in OpenCV 2.4.4?

Except it does not do a two-way matching. It matches the first set of descriptors to the second. In other words: matching A to B won't give exactly the same results as matching B to A.

2013-05-08 06:36:15 -0600 answered a question Does the matching work in OpenCV 2.4.4?

It matches the features of the first image to their best correspondences in the second image, which means that if your first image has 241 descriptors, you will get 241 matches regardless of the second image.

Try swapping your descriptors:

matcher.match(descriptorsa, descriptorsb, matches);

should not return the same number of matches as:

matcher.match(descriptorsb, descriptorsa, matches);

If the first image has 200 descriptors and the second only one, then you will get 200 matches: the best solution for each will be the unique descriptor of the second image!

2013-05-02 09:10:55 -0600 commented question Bruteforce and matching giving reverse results. Why?

The code in your question is using BRUTEFORCE_SL2, and you should use BRUTEFORCE_HAMMING. Did you change this? How many matches do you have before the filtering?

2013-05-02 01:04:31 -0600 answered a question OpenCV error unsupported format or comibation of formats when doing FlannBased matching

I don't know the Java API, but the following line:

[with Distance = cvflann::L2<float>, IndexType = cvflann::Index<cvflann::L2<float> >]

suggests that you are using the L2 distance. FREAK produces binary descriptors and thus requires the Hamming distance.
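
A minimal sketch of the two usual options for binary descriptors such as FREAK, in C++ since I don't know the Java API (the LSH parameter values are only illustrative):

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

std::vector<cv::DMatch> matchBinaryDescriptors(const cv::Mat& queryDesc,
                                               const cv::Mat& trainDesc)
{
    std::vector<cv::DMatch> matches;
    // Option 1: brute force with the Hamming distance.
    cv::BFMatcher bf(cv::NORM_HAMMING);
    bf.match(queryDesc, trainDesc, matches);
    // Option 2: FLANN with an LSH index, which also handles binary descriptors.
    // cv::FlannBasedMatcher flann(new cv::flann::LshIndexParams(12, 20, 2));
    // flann.match(queryDesc, trainDesc, matches);
    return matches;
}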

2013-05-02 00:57:57 -0600 commented question Bruteforce and matching giving reverse results. Why?

I believe you should use the Hamming distance with the BFMatcher (since you are using binary descriptors). Have you tried to draw the matches? If you believe that your matching is incorrect, you might want to share a picture of your results with us.

2013-05-01 12:25:19 -0600 commented question Bruteforce and matching giving reverse results. Why?

You might want to link the StackOverflow question you also posted...

2013-04-29 15:54:01 -0600 commented question How to keep the best 500 Keypoints from an Arraylist in Java/Android?

I think you could add a reference to the answered copy of your question on StackOverflow.