Feature matching on large images [closed]
Hello everyone! I have a little problem with feature matching. I'm creating the detector with Ptr<SurfFeatureDetector> detector = SurfFeatureDetector::create(minHessian);
instead of
SurfFeatureDetector detector(minHessian);
But I'm using it on enormous images (72 MB, 11804 × 5850 px), and when I start the program my computer freezes! It just freezes! Nothing works any more. I waited 30 minutes and nothing happened. I was wondering whether there is a method that can do feature matching on images this big (for example, is L2-SIFT implemented?)
Thanks in advance :) Have a nice day everyone! :)
Edit: Sorry, you're right, I didn't give enough information. I'm on Windows 7 SP1, using a Dell with 4 GB of RAM. I'm working with OpenCV 3.1.0, and the code I have is:
#include <iostream>
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/xfeatures2d.hpp> // SURF lives in opencv_contrib in 3.x

using namespace cv;
using namespace cv::xfeatures2d;

Mat img_1 = imread("..." /*large image*/, IMREAD_GRAYSCALE);
Mat img_2 = imread("..." /*small image*/, IMREAD_GRAYSCALE);
//-- Step 1: Detect the keypoints using the SURF detector
int minHessian = 400;
Ptr<SurfFeatureDetector> detector = SurfFeatureDetector::create(minHessian);
//SurfFeatureDetector detector(minHessian); // pre-3.0 API, no longer compiles
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector->detect(img_1, keypoints_1);
detector->detect(img_2, keypoints_2);
std::cout << " Step 1" << std::endl;
and the program never prints "Step 1" to the console, so I suppose the problem comes from
detector->detect(img_1, keypoints_1);
I know it works when I resize img_1 by a factor of 0.5, so I'm sure it's a memory problem.
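In case it helps, here is roughly what that half-resolution workaround looks like (resize needs opencv2/imgproc.hpp; INTER_AREA is just my choice for downscaling, and the factor 2 undoes the 0.5 resize on the keypoints):

Mat small_1;
resize(img_1, small_1, Size(), 0.5, 0.5, INTER_AREA); // halve both dimensions

std::vector<KeyPoint> keypoints_1;
detector->detect(small_1, keypoints_1);

// Map the keypoints back to full-resolution coordinates.
for (KeyPoint& kp : keypoints_1)
{
    kp.pt *= 2.0f;
    kp.size *= 2.0f;
}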
First set of questions:
Once you have updated your question, things will become clearer!
I have now edited my question; maybe it'll be more understandable.
a Dell with 4 GB of RAM and I know it works when I resize img_1 by a factor of 0.5, so I'm sure it's a memory problem. --> Then I guess the conclusion is pretty straightforward. Computing the features and matching them on such an image needs more than 4 GB of RAM (and not all of it is available to OpenCV anyway), so your PC starts swapping memory to disk, which is VERY slow and clogs up the computer! To put a rough number on it: a single 32-bit float response map at full resolution is already 11804 × 5850 × 4 bytes ≈ 276 MB, and SURF keeps several such layers per octave, so a few gigabytes disappear very quickly.
That's why I'm asking whether there is a function implemented for big images, such as L2-SIFT (that is, SIFT that works on blocks of the big image) or something like that :)
It is not available off the shelf, but how about cutting the image into parts manually and processing them one by one, along the lines of the sketch below?
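A minimal sketch of that idea, assuming OpenCV 3.x; detectTiled is a hypothetical helper, and the tile size and overlap margin are arbitrary starting values you would want to tune:

#include <algorithm>
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>

using namespace cv;

// Detect keypoints tile by tile so only one tile's response maps live in
// memory at a time, instead of the whole image's.
std::vector<KeyPoint> detectTiled(const Mat& img, const Ptr<Feature2D>& detector,
                                  int tileSize = 2048, int margin = 64)
{
    std::vector<KeyPoint> all;
    for (int y = 0; y < img.rows; y += tileSize)
        for (int x = 0; x < img.cols; x += tileSize)
        {
            // Grow the tile by `margin` so keypoints near the tile border
            // still have enough surrounding context for their filters.
            Rect roi(std::max(0, x - margin), std::max(0, y - margin), 0, 0);
            roi.width  = std::min(img.cols, x + tileSize + margin) - roi.x;
            roi.height = std::min(img.rows, y + tileSize + margin) - roi.y;

            std::vector<KeyPoint> kps;
            detector->detect(img(roi), kps);

            for (KeyPoint& kp : kps)
            {
                kp.pt.x += (float)roi.x; // back to full-image coordinates
                kp.pt.y += (float)roi.y;
                // Keep a keypoint only if it falls inside the tile proper,
                // so the overlap region doesn't produce duplicates.
                if (kp.pt.x >= x && kp.pt.x < x + tileSize &&
                    kp.pt.y >= y && kp.pt.y < y + tileSize)
                    all.push_back(kp);
            }
        }
    return all;
}

Keypoints at scales larger than the margin will be clipped at the tile borders, which is the trade-off of this approach; computing descriptors afterwards with detector->compute(img, keypoints, descriptors) on the full image should be much lighter on memory than the detection step.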
Yup, that's right, I could do it myself, and I probably will if there is no other solution :) But actually the results are quite bad :/ I'm trying to match forest images, and the results are unusable ^^ So first I have to solve this problem ^^
match forest images --> Oh boy ... I can imagine about 1000 reasons why this will not work. The content is just way too generic for keypoint matching ... unless there are unique structures in this forest, like castles or towers or so ...
I'm always optimistic when I start a project! :) I always take the worst case to be sure it'll work everywhere ;) But I made my own SIFT implementation and, strangely, it works well in the forest (or maybe it works better precisely because it's less discriminative? I don't know). But it took me a long time to write, so I wanted something I could use faster :) Maybe I'll have to continue down my own path ^^
Good luck!
Thanks! :D May I ask whether you know how to implement a good Best-Bin-First algorithm for 128-dimensional vectors? I can't find any information about this algorithm...
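Not a drop-in BBF, but FLANN's randomized kd-trees with a capped number of leaf checks follow the same priority-search idea as Lowe's Best-Bin-First and handle 128-dimensional descriptors out of the box. A rough sketch, assuming descriptors_1 and descriptors_2 are CV_32F descriptor matrices from your extractor (4 trees and 64 checks are arbitrary starting values):

#include <opencv2/features2d.hpp>
#include <opencv2/flann.hpp>

// Approximate nearest-neighbour matching with FLANN's randomized kd-trees.
// More leaf checks = more accurate but slower -- the classic BBF trade-off.
cv::FlannBasedMatcher matcher(
    cv::makePtr<cv::flann::KDTreeIndexParams>(4),
    cv::makePtr<cv::flann::SearchParams>(64));

std::vector<std::vector<cv::DMatch>> knn;
matcher.knnMatch(descriptors_1, descriptors_2, knn, 2);

// Lowe's ratio test to discard ambiguous matches.
std::vector<cv::DMatch> good;
for (const auto& m : knn)
    if (m.size() == 2 && m[0].distance < 0.7f * m[1].distance)
        good.push_back(m[0]);

The 0.7 ratio threshold is the usual Lowe-style value; tune it for your data.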