
Doombot's profile - activity

2020-08-13 11:07:19 -0500 received badge  Famous Question (source)
2020-08-05 01:28:48 -0500 received badge  Famous Question (source)
2020-05-23 17:17:13 -0500 received badge  Popular Question (source)
2019-05-26 11:13:33 -0500 received badge  Popular Question (source)
2018-11-05 13:58:44 -0500 received badge  Notable Question (source)
2018-08-07 08:18:48 -0500 received badge  Notable Question (source)
2018-05-22 18:18:55 -0500 received badge  Popular Question (source)
2018-03-12 02:59:37 -0500 received badge  Popular Question (source)
2018-01-05 06:45:43 -0500 received badge  Notable Question (source)
2018-01-03 01:39:51 -0500 received badge  Notable Question (source)
2017-06-11 13:08:57 -0500 received badge  Famous Question (source)
2016-12-17 03:52:52 -0500 received badge  Popular Question (source)
2016-10-08 05:52:54 -0500 received badge  Popular Question (source)
2016-08-20 01:27:03 -0500 received badge  Popular Question (source)
2016-04-18 17:51:01 -0500 received badge  Notable Question (source)
2015-12-17 02:25:17 -0500 marked best answer FlannBasedMatcher correct declaration

Is there something fundamentally wrong with this code? (See EDIT.)

// The two images were previously loaded into objectImg and sceneImg.
// objectKeypoints/sceneKeypoints are std::vector<cv::KeyPoint>;
// objectDescriptors/sceneDescriptors are cv::Mat.

cv::Ptr<cv::BRISK> ptrBrisk = cv::BRISK::create();
ptrBrisk->detect(objectImg, objectKeypoints);
ptrBrisk->compute(objectImg, objectKeypoints, objectDescriptors);

ptrBrisk->detect(sceneImg, sceneKeypoints);
ptrBrisk->compute(sceneImg, sceneKeypoints, sceneDescriptors);

int k = 2;

cv::FlannBasedMatcher matcher;
std::vector<cv::DMatch> matches;
matcher.match(objectDescriptors, sceneDescriptors, matches);

On execution, the last line (matcher.match) fails with: "Unsupported format or combination of formats (type = 0) in cv::flann::buildIndex_"

I tried to use:

std::vector<cv::DMatch> matches;
cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("FlannBased");
matcher->match(objectDescriptors, sceneDescriptors, matches);

But it fails at execution with the same error message.

EDIT: I added some code and it now works, but (see after the code):

cv::Ptr<cv::BRISK> ptrBrisk = cv::BRISK::create();
ptrBrisk->detect(objectImg, objectKeypoints);
ptrBrisk->compute(objectImg, objectKeypoints, objectDescriptors);

ptrBrisk->detect(sceneImg, sceneKeypoints);
ptrBrisk->compute(sceneImg, sceneKeypoints, sceneDescriptors);

if (objectDescriptors.type() != CV_32F)
{
    cv::Mat temp;
    objectDescriptors.convertTo(temp, CV_32F);
    objectDescriptors = temp;
}

if (sceneDescriptors.type() != CV_32F)
{
    cv::Mat temp;
    sceneDescriptors.convertTo(temp, CV_32F);
    sceneDescriptors = temp;
}

int k = 2;

std::vector<cv::DMatch> matches;
cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("FlannBased");
matcher->match(objectDescriptors, sceneDescriptors, matches);

Isn't using a float version of the descriptors defeating the purpose of binary descriptors, namely speed? I will test it with a BFMatcher and post an update here!

2015-12-11 23:51:33 -0500 received badge  Popular Question (source)
2015-11-19 04:55:57 -0500 received badge  Nice Question (source)
2015-09-07 11:33:15 -0500 received badge  Famous Question (source)
2015-06-09 23:44:35 -0500 received badge  Notable Question (source)
2015-04-25 02:22:15 -0500 received badge  Popular Question (source)
2015-04-02 12:40:52 -0500 commented question Possible to use OpenCV2.2 alongside 2.4.11 ?

Thanks for the info! :)

2015-04-02 08:58:45 -0500 commented answer findHomography generating an empty matrix

Wow! I read this doc a couple of times and missed that! +1

2015-04-02 08:56:39 -0500 commented question Possible to use OpenCV2.2 alongside 2.4.11 ?

I deduced that there were bugs in it from this: https://github.com/Itseez/opencv/pull... I may reword my question if that is not the case, of course! I recognize that the problems I have personally encountered are with the 3.0 version... However, I am still interested to know whether it is possible to run some 2.2 DLLs alongside 2.4 DLLs.

2015-04-02 08:04:10 -0500 received badge  Self-Learner (source)
2015-04-01 14:43:41 -0500 asked a question Possible to use OpenCV2.2 alongside 2.4.11 ?

Is it possible to use, say, opencv_feature2d220.dll alongside the DLLs from OpenCV 2.4.11 in the same project?

I mean, I would not use opencv_feature2d2411.dll in this project, but only opencv_feature2d220.dll.

The reason is that I prefer the former implementation of a specific method, which in my opinion (and others' too) is broken in 2.4 and 3.0 (bug reports have already been filed for this case, BRISK).

Right now, I plan to simply link to the correct libraries in my project, but I am afraid of hidden bugs or incompatibilities.

Thanks!

2015-04-01 14:17:59 -0500 commented question The program can't start because opencv_core2410d.dll is missing.

From what you show, we cannot tell whether you followed all the correct steps to link a library in Visual Studio. Believe me, I had some problems too the first time I did it. While not 100% up to date: http://docs.opencv.org/doc/tutorials/...

2015-04-01 14:15:27 -0500 commented question Problem during intalling OpenCV at VS2013

Here is something, if it helps; it was for another library: in order to use the library, I specified the path to the .h files in the "Additional Include Directories" of the property page. In the "Additional Library Directories", I specified the path to the folder containing both ".lib" files. In the "Additional Dependencies", I put the names of all the ".lib" files found in the "Additional Library Directories". In "Debug" mode, I used the debug version of the libraries when available.

In the code, I added an #include <brisk.h> to indicate that I want to use that library.

2015-04-01 14:13:30 -0500 commented question Problem during intalling OpenCV at VS2013

From what you show, we cannot tell whether you followed all the correct steps to link a library in Visual Studio. Believe me, I had some problems too the first time I did it. While not 100% up to date: http://docs.opencv.org/doc/tutorials/...

2015-03-23 14:56:31 -0500 commented answer What are the difference between OpenCV 3.0 and OpenCV2.4.10

Thanks! IMHO, there's not enough information available on the differences between the two in general. I understand 3.0 is a beta, but I guess having more info would help people test and experiment with it.

OK, I guess I should write a tutorial about what I've learned while working with 3.0 instead of complaining ;)

2015-03-18 08:08:36 -0500 commented question Very similar images, quite different results (BRISK)

An OpenCV 3 build I made from Github at some point in December...

2015-03-17 12:50:37 -0500 commented question Very similar images, quite different results (BRISK)

Even for the computer-generated images? For the checkerboard (the real images), I agree 100% with you. Oh, I know! I will autocorrelate both images and see if they fit perfectly. Then I'll know whether noise somehow got into the computer-generated images...

2015-03-17 09:24:57 -0500 asked a question Very similar images, quite different results (BRISK)

With the help of Gimp, I generated the following model image:

image description

Now, with the same image, I generated another (scene) image:

image description

Please note that the star in the first image and the star in the second image are identical; they measure exactly the same number of pixels, and the same is true for the other shapes. Truly, since both are exported to a grayscale BMP with Gimp, I would expect the shapes to be identical pixel-for-pixel in both images.

Now, as you may notice, there are coloured points in both images. They are the keypoints detected by the BRISK detector, using the exact same parameters.

In the first image, some keypoints are found on the star and one on the crescent's tip. In the second image, many keypoints are detected, in locations where I would generally expect a keypoint to be.

So, has anybody noticed this behaviour before? You understand that this setup is deliberately simple, but I noticed similar behaviour on real images. I used to blame variability in lighting between my test images, which were taken in real-world conditions, but now it makes me wonder what is really going on...

Example of real life detection where the phenomenon is present (cropped image):

Model: image description

Scene: image description

2015-03-06 11:36:49 -0500 asked a question Imread() and bitmaps

For some reason that eludes me, the imread() function is able to load some grayscale bitmaps but not others. When I manually inspect the properties of the images in Windows, a "good" image looks exactly the same as a "bad" one. I assume it has to do with the bitmap format of the specific image, but I am not sure. I am using the function

source_image = imread(path);

and I manually check the path when it doesn't work. Unlike other questions on this site, it works most of the time...

Now, the doc says:

Currently, the following file formats are supported:

    Windows bitmaps - *.bmp, *.dib (always supported)

Is there some way to know beforehand whether a grayscale bitmap will be read? I mean, is there a bitmap spec for OpenCV?

2015-02-26 09:52:58 -0500 commented question OpenCV3.0 c and c++

My guess would be: for the same reasons that people program in C++ over C in most desktop applications. I know that for embedded systems C is more practical, but... I am not the one responsible for the decision, though ;)

2015-02-23 08:06:10 -0500 commented question findHomography RANSAC should check chosen Points for collinearity?

OK! I was curious since I use findHomography and sometimes it finds results that are pretty much random. I mean, I manually check the matches beforehand to make sure the algorithm should be able to find something, but it sometimes finds impossible shapes and sometimes plainly gives an empty H matrix as an answer.

2015-02-13 10:31:27 -0500 asked a question Subpixel location of keypoints. Why?

When detecting keypoints (with BRISK, ORB, etc.), I get coordinates with subpixel accuracy (e.g. pt.x = 110.645, pt.y = 285.432). While I am familiar with the concept of subpixels, I wonder why the location of the keypoint is a float rather than an int (rounded up/down) value, such as pt.x = 111 and pt.y = 285. OK, I could simply cast the float to an int, but that doesn't answer the why.

I mean, when the detection algorithm searches for a keypoint, it first selects a pixel, then applies various tests to determine whether the pixel and the patch around it really form a keypoint according to the method's criteria. I know it retrieves the orientation of the keypoint, which might be a float in itself. But even looking at the code or at the AGAST or BRISK papers, I humbly don't understand the point of using subpixels for the location of the keypoint.

But since this is the way it is implemented in OpenCV (3 for me, but I guess it is the same in 2.4.x), I assume there is a good reason! I might just have misread portions of the papers or missed something in the comments of the code...

Thanks!

2015-02-13 09:13:34 -0500 commented answer ORB - object needs to be very close to camera

This site (http://www.vision-doctor.co.uk/optics...) has some resources, especially an optical calculator and other tutorials. I would advise looking at various parts of the site and around the internet so you gain more knowledge and can better explain the situation to your colleagues. Also, I would not immediately blame the "quality" of the cameras; see it as choosing the right tool with the right specs for a given task. Finally, depending on your situation, you might want to get in touch with the distributor of the camera (the "seller"). In my experience, they are valuable assets in these situations because they can help you select the right camera with the right parameters (lens, resolution, etc.).

2015-02-12 15:12:12 -0500 commented answer ORB - object needs to be very close to camera

Oh, I completely misread the question!!! I'll let the OP assess whether any of this is relevant, then remove my answer if it is not useful at all...

2015-02-12 14:06:30 -0500 commented question findHomography RANSAC should check chosen Points for collinearity?

I am curious: is that because you have analyzed the code, or is it a deduction from the doc?

2015-02-12 14:03:18 -0500 commented question I have a project, an Android application, to scan barcodes and read the consumption index of meters by extracting the digits from the captured image; how can I use OpenCV for that? Code with Eclipse/Java

Hello, I suggest you use a translation tool to ask your question in English, since this is an English-language site. Also, give your question a concise title and put the substance of the question in the question area.

(Just explaining how the site works.)

2015-02-12 13:55:27 -0500 commented answer knnMatch in version 2.4.10

Well, for knnMatch you effectively need a vector of vectors of DMatch: first you have the best DMatch for a specific keypoint pair, then a vector for all DMatches of the image, then a second vector allowing you to store the second-best matches with the same structure.

That said, besides the change you show in this answer, does the code in the question work correctly now?

2015-02-12 13:47:47 -0500 answered a question ORB - object needs to be very close to camera

Let's do some maths:

Your camera has a 640x480 resolution, is 6 inches away from the scene, and has a lens with some focal length (I don't know; look at the spec). So imagine that, given these parameters, 1 pixel on the image represents a 1x1 mm square (about 0.04'') in reality (this is a random value I chose; it actually depends on the focal length of your lens, the size of the sensor, etc.). (If you don't want to calculate, just print a calibration grid with squares of a known dimension and count how many pixels are used to represent a single square...)

Now, you move the exact same camera to a distance of 48'' (4 feet), which is 8 times farther. Then 1 pixel will represent a region of 8x8 mm (about 0.32''). So your number of pixels per inch has actually decreased! It means you are no longer able to see fine details. So when you run ORB detection on the image, you no longer have the same image as in the first setup, or maybe the number of pixels per inch is too low to represent meaningful keypoints.

So that seems to be the cause of the problem. Now, how to solve it? Well, it is not possible if you keep the same setup. I assume you cannot place the camera close enough (6''), so you may need a lens providing a relevant level of zoom, or another camera with a sensor that allows a higher number of pixels per inch. Note that I assume all of this makes some sense to you. If not, just ask in a comment and I'll try to point you to the right learning resources...

2015-02-05 08:13:08 -0500 answered a question findHomography generating an empty matrix

I asked the question on another site and someone had a good answer, I think, so here is a quick summary:

There is a bug report about using RANSAC or Least-Median with findHomography() in OpenCV 3. Apparently it is common for it to sometimes return an empty matrix. According to the bug report, this is likely related to a "new" implementation of the Levenberg-Marquardt solver.

Meanwhile, I had already implemented the quick fix suggested by the author of the answer on the other site, namely checking whether the matrix is empty after calling findHomography() and then taking appropriate action.

So that's it for now; I'll probably look for another implementation of the solver, maybe the one used in OpenCV 2.4.9-10...

2015-02-04 14:37:43 -0500 asked a question findHomography generating an empty matrix

When using findHomography():

Mat H = findHomography(obj, scene, cv::RANSAC, 3, hom_mask, 2000, 0.995);

sometimes, for some images, the resulting H matrix stays empty (H is a UINT8, 1x0x0). However, there is clearly a match between the two images (and it looks like good keypoint matches are detected), and just a moment before, with two similar images with similar keypoint responses, a relevant matrix was generated. The input parameters "obj" and "scene" are both vectors of Point2f containing various coordinates.

Is this a common issue? Or do you think a bug might lurk somewhere? Personally, I have processed hundreds of images where a match exists, and while I have sometimes seen poor matches, this is the first time I get an empty matrix...

EDIT: This said, even if my eyes think there should be a match in the image pairs, I realize that the algorithm might confuse one portion of the image with another, and that maybe there is indeed no "good" match.

So my question would be: how does findHomography() behave when it is unable to find a suitable homography? Does it return an empty matrix, or will it always give a homography, albeit a very poor one? I just want to know whether I am encountering standard behaviour or whether there is a bug in my own code.