valgussev's profile - activity

2017-09-02 21:12:47 -0600 received badge  Popular Question (source)
2013-02-24 09:07:24 -0600 commented answer How to use SIFT with a color inverted image

It looks the same (http://i.stack.imgur.com/fY2PR.png). But when I subtract or add 180 (staying within [0, 360]), it looks like http://i.stack.imgur.com/Oniad.png , which is interesting.

2013-02-24 06:58:08 -0600 commented answer How to use SIFT with a color inverted image

Thanks for the explanation, but as you can see here (i.stack.imgur.com/XRfdh.png), the gradients are not exactly opposite.

2013-02-24 06:37:24 -0600 received badge  Scholar (source)
2013-02-23 11:08:49 -0600 commented answer How to use SIFT with a color inverted image

By "gradient direction", do you mean the keypoint orientation on which the descriptor orientation depends? And what do you mean by "swap the contents"? We have an 8-bin histogram for each of the 16 regions (8x4x4 = 128); do you mean swapping the values of opposite bins, for example the 1st with the 5th and the 2nd with the 6th?
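
If "swap the contents" means rotating each orientation histogram by 180 degrees, here is a minimal sketch of that idea (my illustration, not something confirmed in the thread; it assumes the standard 4x4x8 SIFT layout of 16 consecutive 8-bin histograms stored in a 1x128 CV_32F row):

    #include <utility>
    #include <opencv2/core/core.hpp>

    // Hypothetical helper: rotate every 8-bin orientation histogram of a
    // 128-element SIFT descriptor by 180 degrees, i.e. swap each bin with
    // the bin 4 positions away.
    void flipDescriptorOrientation(cv::Mat& desc)   // 1x128 row, CV_32F
    {
        float* d = desc.ptr<float>(0);
        for (int region = 0; region < 16; ++region)
            for (int bin = 0; bin < 4; ++bin)
                std::swap(d[region * 8 + bin], d[region * 8 + bin + 4]);
    }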

2013-02-23 03:23:15 -0600 commented answer How does the SiftDescriptorExtractor convert descriptor values?
2013-02-23 02:50:29 -0600 asked a question How to use SIFT with a color inverted image

For example, I have two images: the first one is regular and the second one is color-inverted (I mean 255 minus the pixel value).

I've applied the SIFT algorithm to both of them, so now I have the keypoints and descriptors of each image.

The keypoint positions match, but the keypoint orientations and descriptor values do not, because of the color inversion.

I'm curious: has anybody tried to solve such a problem?
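
A minimal sketch of this setup (the file name is hypothetical, and I'm assuming the OpenCV 2.4-era nonfree module where SIFT lives):

    #include <vector>
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/nonfree/features2d.hpp>

    int main()
    {
        cv::Mat img = cv::imread("object.png", 0);   // hypothetical file
        cv::Mat inverted;
        cv::bitwise_not(img, inverted);              // 255 - pixel value

        cv::SiftFeatureDetector detector;
        cv::SiftDescriptorExtractor extractor;

        std::vector<cv::KeyPoint> kp, kpInv;
        detector.detect(img, kp);
        detector.detect(inverted, kpInv);

        cv::Mat desc, descInv;                       // these will differ
        extractor.compute(img, kp, desc);
        extractor.compute(inverted, kpInv, descInv);
        return 0;
    }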

2013-02-17 11:50:37 -0600 commented answer How does the SiftDescriptorExtractor convert descriptor values?

Now I got it, thank you!

2013-02-17 10:09:53 -0600 commented answer How does the SiftDescriptorExtractor convert descriptor values?

No, no, I'm not talking about values between 0 and 1; let's just forget about them. Now I'm asking about lines 634-638 in sift.cpp. After those lines we have dst with 128 values in it, ranging between 0 and 255. My question is: how do I put these 128 values into a descriptors_object?
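
A minimal sketch of one way to do that, assuming dst is the float[128] buffer from sift.cpp and that the matrix was preallocated as CV_32F (which is also the type OpenCV itself uses for the result):

    #include <cstring>
    #include <opencv2/core/core.hpp>

    // Copy one computed 128-float descriptor into row i of a
    // preallocated (numKeypoints x 128, CV_32F) descriptor matrix.
    void storeDescriptor(const float* dst, cv::Mat& descriptors, int i)
    {
        std::memcpy(descriptors.ptr<float>(i), dst, 128 * sizeof(float));
    }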

2013-02-17 09:19:20 -0600 commented answer How does the SiftDescriptorExtractor convert descriptor values?

But descriptors_object should be a Mat object; how do I convert the float values into a cv::Mat?

2013-02-16 12:30:00 -0600 commented answer How does the SiftDescriptorExtractor convert descriptor values?

Thanks for the answer! Still, suppose I have n descriptors with 128 values (between 0 and 1) each. How can I put them into a Mat object? Like this: mat_descriptor_object.at<uchar>(n, j) = value * 255; ?

2013-02-16 12:21:52 -0600 received badge  Supporter (source)
2013-02-16 09:11:14 -0600 asked a question How does the SiftDescriptorExtractor convert descriptor values?

I have a question about the last part of the SiftDescriptorExtractor's job.

I'm doing the following:

    SiftDescriptorExtractor extractor;
    Mat descriptors_object;
    extractor.compute( img_object, keypoints_object, descriptors_object );

Now I want to check the elements of the descriptors_object Mat:

    std::cout << descriptors_object.row(1) << std::endl;

The output looks like:

[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 0, 0, 0, 0, 0, 0, 32, 15, 0, 0, 0, 0, 0, 0, 73, 33, 11, 0, 0, 0, 0, 0, 0, 5, 114, 1, 0, 0, 0, 0, 51, 154, 20, 0, 0, 0, 0, 0, 154, 154, 1, 2, 1, 0, 0, 0, 154, 148, 18, 1, 0, 0, 0, 0, 0, 2, 154, 61, 0, 0, 0, 0, 5, 60, 154, 30, 0, 0, 0, 0, 34, 70, 6, 15, 3, 2, 1, 0, 14, 16, 2, 0, 0, 0, 0, 0, 0, 0, 154, 84, 0, 0, 0, 0, 0, 0, 154, 64, 0, 0, 0, 0, 0, 0, 6, 6, 1, 0, 1, 0, 0, 0]

But in Lowe's paper it is stated that:

Therefore, we reduce the influence of large gradient magnitudes by thresholding the values in the unit feature vector to each be no larger than 0.2, and then renormalizing to unit length. This means that matching the magnitudes for large gradients is no longer as important, and that the distribution of orientations has greater emphasis. The value of 0.2 was determined experimentally using images containing differing illuminations for the same 3D objects.

So the numbers in the feature vector should be no larger than 0.2.

The question is: how have these values been converted into the Mat object?
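
For reference, a minimal sketch of the conversion as I understand it from OpenCV's sift.cpp (the 512 scale factor corresponds to SIFT_INT_DESCR_FCTR there; treat the exact constants as an assumption to check against your OpenCV version):

    #include <cfloat>
    #include <cmath>
    #include <algorithm>
    #include <opencv2/core/core.hpp>

    // Normalize the raw 128-float histogram, clip each component at 0.2,
    // renormalize, then scale by 512 and saturate to the uchar range.
    // The result is what gets written into the (CV_32F) descriptor Mat.
    void convertDescriptor(float* dst, int n = 128)
    {
        float nrm = 0.f;
        for (int k = 0; k < n; ++k) nrm += dst[k] * dst[k];
        float thr = std::sqrt(nrm) * 0.2f;          // Lowe's 0.2 threshold

        nrm = 0.f;
        for (int k = 0; k < n; ++k) {
            dst[k] = std::min(dst[k], thr);         // clip large gradients
            nrm += dst[k] * dst[k];
        }
        float scale = 512.f / std::max(std::sqrt(nrm), FLT_EPSILON);
        for (int k = 0; k < n; ++k)
            dst[k] = (float)cv::saturate_cast<uchar>(dst[k] * scale);
    }

So the printed values are the thresholded, renormalized vector scaled by 512, not the raw 0..1 components; that is why they range up to 255.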

2013-01-28 11:41:19 -0600 asked a question A little bit more about feature detection in opencv?

Hi, Alex,

You've been the only person who has answered my question here.

I have added my code to GitHub. Could you please take a look at it?

Thanks in advance.

2013-01-28 11:35:55 -0600 answered a question How to make my own feature detection method in opencv?

I have added my project to GitHub; could you please take a look at it and tell me what I am doing wrong?

2013-01-17 09:58:30 -0600 commented answer How to make my own feature detection method in opencv?

In this example the SURF feature detection algorithm was used. I have made my own detector (Trajkovic) and it works great: all the corners (image features) are found. Then I tried to use SurfDescriptorExtractor as in the example. The problem is that SurfDescriptorExtractor doesn't use my detected points correctly (the resulting picture shows wrong connections, which means the extractor didn't calculate the vectors correctly). So if detection works, the problem is in the extractor or in FlannBasedMatcher, am I right?
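
One thing worth ruling out first (my guess, not something established in this thread): descriptor extractors read more than the x/y coordinates from each cv::KeyPoint, notably its size. A minimal sketch of wrapping raw corner coordinates with an explicit patch diameter:

    #include <vector>
    #include <opencv2/core/core.hpp>

    // Wrap raw corner coordinates as KeyPoints. The size (patch diameter)
    // matters: extractors sample a region around each point, so leaving
    // it at 0 can yield meaningless descriptors. 16 is an arbitrary,
    // illustrative choice.
    std::vector<cv::KeyPoint> cornersToKeypoints(
        const std::vector<cv::Point2f>& corners, float diameter = 16.f)
    {
        std::vector<cv::KeyPoint> keypoints;
        for (size_t i = 0; i < corners.size(); ++i)
            keypoints.push_back(cv::KeyPoint(corners[i], diameter));
        return keypoints;
    }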

2013-01-16 10:42:35 -0600 asked a question How to make my own feature detection method in opencv?

Let's take a look at this basic tutorial, Features2D + Homography to find a known object. It uses SurfFeatureDetector to detect features:

    SurfFeatureDetector detector( minHessian );
    std::vector<KeyPoint> keypoints_object, keypoints_scene;
    detector.detect( img_object, keypoints_object );
    detector.detect( img_scene, keypoints_scene );

Then it uses SurfDescriptorExtractor to calculate descriptors (feature vectors) from the detected features.

My questions are:

If I want to create my own feature detector (for example with the Trajkovic or Harris algorithm), which descriptor extractor should I use? And are the features found by SurfFeatureDetector just plain points, or areas of points?
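
For what it's worth, the extractor API only consumes a std::vector<cv::KeyPoint>, so a custom detector can sit in front of any extractor. A minimal sketch against the OpenCV 2.4 API (myTrajkovicDetect is a hypothetical placeholder for your own detector):

    #include <vector>
    #include <opencv2/core/core.hpp>
    #include <opencv2/nonfree/features2d.hpp>   // SURF in OpenCV 2.4

    // Hypothetical: your own corner detector, returning KeyPoints.
    std::vector<cv::KeyPoint> myTrajkovicDetect(const cv::Mat& img);

    void describeWithCustomDetector(const cv::Mat& img)
    {
        std::vector<cv::KeyPoint> keypoints = myTrajkovicDetect(img);

        cv::SurfDescriptorExtractor extractor;   // any extractor fits here
        cv::Mat descriptors;
        extractor.compute(img, keypoints, descriptors);
    }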

2012-10-25 11:27:48 -0600 asked a question How to make a feature from a corner?

I'm interested in how to make a feature from a corner. For example, I have found a corner via the Harris or Trajkovic corner detector, but these are not yet the features that I can use in SiftDescriptorExtractor. How can I solve this problem?
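
A minimal sketch of one way to bridge that gap, using cv::goodFeaturesToTrack in Harris mode and cv::KeyPoint::convert (the parameter values are arbitrary illustrations; OpenCV 2.4 API assumed):

    #include <vector>
    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/nonfree/features2d.hpp>

    void harrisToSiftDescriptors(const cv::Mat& gray)
    {
        // Detect Harris corners as plain 2D points.
        std::vector<cv::Point2f> corners;
        cv::goodFeaturesToTrack(gray, corners, 500, 0.01, 10,
                                cv::Mat(), 3, true /* Harris */, 0.04);

        // Promote the points to KeyPoints; 16 is an arbitrary diameter.
        std::vector<cv::KeyPoint> keypoints;
        cv::KeyPoint::convert(corners, keypoints, 16.f);

        // Now the extractor can describe them.
        cv::SiftDescriptorExtractor extractor;
        cv::Mat descriptors;
        extractor.compute(gray, keypoints, descriptors);
    }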

2012-10-22 15:05:39 -0600 received badge  Editor (source)
2012-10-21 14:25:59 -0600 commented answer Which feature descriptor should I use with Harris corner detector?

As I can see in this tutorial (http://morf.lv/modules.php?name=tutorials&lasit=2#.UIRKJLQWFSU), SurfDescriptorExtractor gets just the coordinates of the keypoints and then calculates the descriptors by itself. After the Harris corner detector I will have the same kind of corner coordinates, so basically it should work the same way. Am I wrong?

2012-10-21 11:23:54 -0600 commented answer Which feature descriptor should I use with Harris corner detector?

Thank you! So if I just find corners with the Harris detector and then use SiftDescriptorExtractor, it is going to work? Are "corners" and "features" the same thing in this case?

2012-10-21 10:58:09 -0600 received badge  Student (source)
2012-10-19 17:45:20 -0600 asked a question Which feature descriptor should I use with Harris corner detector?

I'm new to OpenCV and have a question. I'm interested in this tutorial, Features2D + Homography to find a known object; as I can see, it uses SurfFeatureDetector and then SurfDescriptorExtractor. If I use the Harris corner detector, which DescriptorExtractor should I use?
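
One option (a sketch against the OpenCV 2.4 API; the parameter values are illustrative): wrap Harris in GoodFeaturesToTrackDetector, which produces KeyPoints directly, and keep whichever extractor the tutorial uses behind it.

    #include <vector>
    #include <opencv2/core/core.hpp>
    #include <opencv2/features2d/features2d.hpp>
    #include <opencv2/nonfree/features2d.hpp>

    void detectHarrisDescribeSurf(const cv::Mat& img_object)
    {
        // Harris corners exposed through the common FeatureDetector
        // interface, so they slot straight into the tutorial's pipeline.
        cv::GoodFeaturesToTrackDetector detector(500, 0.01, 10.0, 3,
                                                 true /* Harris */, 0.04);
        std::vector<cv::KeyPoint> keypoints_object;
        detector.detect(img_object, keypoints_object);

        cv::SurfDescriptorExtractor extractor;
        cv::Mat descriptors_object;
        extractor.compute(img_object, keypoints_object, descriptors_object);
    }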