2018-09-01 07:29:15 -0500  received badge ● Notable Question

2017-09-27 19:50:22 -0500  received badge ● Popular Question

2016-04-11 08:56:53 -0500  commented answer  How to write and then read a CV_64FC1 and preserve precision?
I went with the second option. Thank you!

2016-04-11 05:44:18 -0500  asked a question  How to write and then read a CV_64FC1 and preserve precision?
imread and imwrite apparently don't work with floats, so I should convert to integers, but then there is no 64-bit integer type that I could recast to a 64-bit float. So what do I do?

2016-04-11 02:25:29 -0500  commented question  What is the difference between layers and octaves in SIFT/SURF?
When I started reading the quote, I thought: I know this, I've read it like a thousand times (about two times, really). Then you said the same thing in your own words, and I just got it. Then I realized that my question is... unsmart, to say the least, and I now see I was very confused last Friday. I suppose the weekend helped. Thank you!

2016-04-08 08:08:31 -0500  commented question  How can I get an estimate of how good is the homography that I get using findHomography?
Thank you for the hint. So all I have to do now is recalculate the homography on my own using the points in the mask and get the error while I'm doing this. It is at least weird to calculate the homography twice only to get the error, and if I am creating a function that does this, I might as well write it straight into OpenCV and make a pull request.

2016-04-08 05:52:05 -0500  asked a question  What is the difference between layers and octaves in SIFT/SURF?
I read both of Lowe's papers ('99 & '04) and I would say I understood most of them. I watched all the SIFT-related classes on YouTube, but none explicitly says why we would use both octaves and layers.
I understood perfectly that you get more layers in the same octave by ~calculating the ~Laplacian for different sigmas, and then you resample to half the resolution to get the next octave, again ~calculating the ~Laplacian for the same sigmas as in the first octave. And then you repeat this as many times as you feel like. Initially, I thought that you use the layers (multiple sigmas) to find features of different sizes in one image, and that you then resample so you can calculate descriptors on every octave (resampling level) for every feature, so that you get descriptors at different scales that might be better matches for descriptors in the other image at a similar scale. Apparently I was wrong: only one descriptor is calculated for every feature, as it is computed from gradient orientations, so it is ~invariant to scale anyway. But this leaves me wondering: why do we need to resample at all? Why can't or shouldn't we just use a high number of layers and just one octave (no resampling)? Is this just because it is cheaper to resample? If yes, why wouldn't we just resample?

Note: the ~ sign means "sort of". I use it when I know it is not the exact explanation, but the exact one would be longer and wouldn't add any value to the question.

2016-04-08 04:28:05 -0500  commented question  How can I get an estimate of how good is the homography that I get using findHomography?
The algorithm must also calculate the back-projection error, so it would be nice if it would also return it somehow. What do you mean by "I could easily calculate it myself"? I don't know what the final tiepoints selected after RANSAC in findHomography are, so how could I calculate it?

2016-04-07 04:37:17 -0500  asked a question  How can I get an estimate of how good is the homography that I get using findHomography?
The question kind of gives it away, but I use:

    H = findHomography(obj, scene, RANSAC, 1);

and it would be nice if I could get some sort of statistical measure, like the back-projection error.
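[Editor's note] The back-projection error can in fact be computed by hand: findHomography accepts an optional fifth argument that receives the RANSAC inlier mask, and from the homography plus that mask the error follows from a few lines of arithmetic. A minimal, OpenCV-free sketch of that arithmetic, assuming a row-major 3x3 homography in pixel coordinates (meanBackProjectionError is a hypothetical helper, not an OpenCV function):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Apply a row-major 3x3 homography H to point p (homogeneous divide included).
static Pt applyHomography(const double H[9], Pt p) {
    double w = H[6] * p.x + H[7] * p.y + H[8];
    return { (H[0] * p.x + H[1] * p.y + H[2]) / w,
             (H[3] * p.x + H[4] * p.y + H[5]) / w };
}

// Mean Euclidean distance between H*obj[i] and scene[i], counting only
// the points flagged as RANSAC inliers in `mask` (nonzero = inlier).
double meanBackProjectionError(const std::vector<Pt>& obj,
                               const std::vector<Pt>& scene,
                               const double H[9],
                               const std::vector<unsigned char>& mask) {
    double sum = 0.0;
    std::size_t n = 0;
    for (std::size_t i = 0; i < obj.size(); ++i) {
        if (!mask[i]) continue;                     // skip RANSAC outliers
        Pt q = applyHomography(H, obj[i]);
        sum += std::hypot(q.x - scene[i].x, q.y - scene[i].y);
        ++n;
    }
    return n ? sum / n : 0.0;
}
```

With OpenCV itself, the mask comes from cv::findHomography(obj, scene, cv::RANSAC, 1, mask), and applyHomography can be replaced by cv::perspectiveTransform.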
Any ideas?

2016-01-22 10:17:35 -0500  asked a question  Why is there a hardcoded maximum number of descriptors that can be matched with BFMatcher?
In BFMatcher::knnMatchImpl in matchers.cpp there is the following code:

https://github.com/Itseez/opencv/blob...

    const int IMGIDX_SHIFT = 18;
    const int IMGIDX_ONE = (1 << IMGIDX_SHIFT);

https://github.com/Itseez/opencv/blob...

    CV_Assert( trainDescCollection[iIdx].rows < IMGIDX_ONE );

What is the point of this assertion, which only makes sure that the number of training descriptors cannot be larger than precisely 262144 (2^18), independently of the computer used?

2016-01-22 02:33:10 -0500  received badge ● Supporter

2016-01-22 02:32:10 -0500  commented question  How do I convince Visual Studio to go through OpenCV source files while debugging?
@Eduardo, yes, did that, step 4 in my question.

2016-01-22 02:30:21 -0500  commented question  How do I convince Visual Studio to go through OpenCV source files while debugging?
That's what I did the first time, and unfortunately it did not work; that's why I proceeded with other things, like what I described in my question. The thing is that after I did everything I described above and a few more things (disabled optimization, etc.), it started working somehow, but unfortunately I don't know exactly what did the trick. If I am to give any advice to someone who has the same problem, it is "keep trying", because it definitely works with the right incantation. When you start stepping into the code (F11), keep pressing it until you get past the headers; don't just assume it's not working because you've been through 10 headers already and there's no source code yet. Not really a good answer, but it might help someone...

2016-01-21 09:47:55 -0500  commented question  How do I convince Visual Studio to go through OpenCV source files while debugging?
That's quite bad. If there really is no solution, debugging becomes impossible unless you have an actual error.
How do you manage to do any work without this? Also, it would be a lot easier to learn how OpenCV works, and implicitly the concepts behind it, by going through the code.

2016-01-21 09:08:44 -0500  received badge ● Scholar

2016-01-21 08:56:54 -0500  received badge ● Student

2016-01-21 08:52:16 -0500  asked a question  How do I convince Visual Studio to go through OpenCV source files while debugging?
Steps I've taken:

1. Built OpenCV from source, so that I have the *d.pdb and *d.dll files in the same folder (C:\OpenCV\bin).
2. Added C:\OpenCV\bin to Tools -> Options -> Debugging -> Symbols. Note: I never saw it load any .pdb from this folder, but it does load the .pdbs from Microsoft's server.
3. Added C:\OpenCV\bin to the system PATH variable, so that the executable has the libraries available at run time.
4. Added C:\OpenCV\modules to the Solution's Property Pages -> Common Properties -> Debug Source Files (it looks like this would do the trick, but it doesn't).
5. Added C:\OpenCV\install\include to the Project's Property Pages -> Configuration Properties -> C/C++ -> General -> Additional Include Directories, so that I can include OpenCV's header files; for this I also had to build the "install" project, which was not selected by default.
6. Added C:\OpenCV\lib\Debug to the Project's Property Pages -> Configuration Properties -> Linker -> General -> Additional Library Directories, so that it knows where to look for the .lib files.
7. Added all *d.lib files from C:\OpenCV\lib\Debug (where there are also some .pdb files, by the way) to the Project's Property Pages -> Configuration Properties -> Linker -> Additional Dependencies, because otherwise it apparently doesn't know that I'd like to use everything in the folder I just specified.

Everything was done under the Debug configuration. The result is that I can build and I can debug, but if I step into some function, it takes me to the header file instead of the source file, which is not very useful.
Does anyone know what I missed setting up, so that I can step through OpenCV source files while debugging my own project? I run Visual Studio 2015 Community edition on Windows 10, if that is of any importance...

2016-01-21 03:48:26 -0500  asked a question  Why would the SURF detector not compute descriptors?
The problem is that the detector does not compute descriptors, although it does find features and calculate keypoints (a lot of them). What I also know is that it has something to do with the images themselves, because everything works for some images and doesn't for others. The code below doesn't give any errors, but descriptors_2 is empty after it runs, and that's why BFMatcher complains later on.

    Ptr<SURF> detector = SURF::create();
    detector->setHessianThreshold(2000);
    detector->setUpright(true);
    detector->setNOctaves(4);
    detector->setNOctaveLayers(6);
    detector->setExtended(true);
    vector<KeyPoint> keypoints_1, keypoints_2;
    Mat descriptors_1, descriptors_2;
    detector->detectAndCompute(img1, Mat(), keypoints_1, descriptors_1);
    detector->detectAndCompute(img2, Mat(), keypoints_2, descriptors_2);

I know it's hard for someone to test this without having my actual data, but I hope that someone has had something like this before and figured it out, and maybe it's the same reason for me. So, any ideas?

2016-01-13 10:19:55 -0500  asked a question  Does OpenCV support GDAL for writing?
I noticed that there's an example that uses GDAL for reading here: https://github.com/Itseez/opencv/blob... , which is very cool, but what do you do with a 50-channel Mat variable after you're done processing it?

2016-01-13 03:38:53 -0500  asked a question  Why is cv::Mat::data always pointing to a uchar?
I am trying to read a NEF file using LibRaw and then put it in a cv::Mat.
The NEF file stores data as 12-bit, which means I need 16 bits, so I ought to use CV_16UC4, like this:

    Mat img1(height, width, CV_16UC4);

LibRaw stores data as ushort*[4], so I thought that this should work:

    for (i = 0; i < iwidth*height; i++) {
        img1.data[4*i+1] = Processor.imgdata.image[i][0];
        img1.data[4*i+2] = Processor.imgdata.image[i][1];
        img1.data[4*i+3] = Processor.imgdata.image[i][2];
        img1.data[4*i+4] = Processor.imgdata.image[i][3];
    }

I also get a build error saying that data may be lost, since a ushort-to-uchar conversion is going to take place, which makes sense. But still, how do I put data bigger than a uchar into data?
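[Editor's note] cv::Mat::data is a uchar* because it is a raw byte pointer; the element type lives in the Mat's type flag (CV_16UC4 here), not in the pointer. To store 16-bit values, the buffer has to be accessed through a pointer of the element type, which is what cv::Mat::ptr<ushort>() and cv::Mat::at<cv::Vec4w>() do for you. A minimal OpenCV-free sketch of the idea, with a plain byte buffer standing in for Mat::data (storeU16/loadU16 are illustrative helpers, not library functions):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Writing through the byte pointer itself truncates each value to 8 bits
// (the bug in the question). Reinterpreting the byte buffer at the element
// type preserves the full 16-bit value -- this is essentially what
// cv::Mat::ptr<ushort>() hands you.
inline void storeU16(std::uint8_t* bytes, std::size_t elemIndex, std::uint16_t value) {
    reinterpret_cast<std::uint16_t*>(bytes)[elemIndex] = value;
}

inline std::uint16_t loadU16(const std::uint8_t* bytes, std::size_t elemIndex) {
    return reinterpret_cast<const std::uint16_t*>(bytes)[elemIndex];
}
```

With a real cv::Mat, the fix to the loop above would be along these lines (note that channel indices start at 0, not 1): ushort* p = img1.ptr<ushort>(0); p[4*i + 0] = Processor.imgdata.image[i][0]; and so on, assuming a continuous Mat, or per-pixel access via img1.at<cv::Vec4w>(row, col).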