dear FLY, do yourself a favour and leave that code alone for a while.
imho, you've got 2 major problems there, and i can only really address the 2nd one here:
you don't have to resize the images. SURF is scale invariant, just trust it.
you've got to find a way to properly use an SVM with your descriptors:
extractor->compute( row_img, keypoints, descriptors_1);
descriptors_1 will be a Mat with n rows, one for each descriptor found in the image. n will vary from image to image (it'll find 3 descriptors in one image, 5 in another). since an SVM expects all training vectors to have the same length, you'll have to find a way to deal with that, e.g. by pooling each image's descriptors into one fixed-length vector, or with a bag-of-words approach.
whatever you choose, you'll need the same procedure in both the training and the prediction stage.
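to see what "fixed-length per image" means in practice, here's a minimal sketch of the pooling idea in python/numpy. it assumes 64-dimensional SURF descriptors (the default) and uses random matrices in place of real extractor output, so it's only about the data layout, not about real features:

```python
import numpy as np

# stand-ins for extractor output: an (n_i x 64) descriptor matrix
# per image, where n_i varies (3 descriptors here, 5 there)
desc_a = np.random.rand(3, 64).astype(np.float32)
desc_b = np.random.rand(5, 64).astype(np.float32)

# mean-pool each image's descriptors into ONE fixed 64-d row,
# then stack the rows into a single training matrix
train = np.vstack([d.mean(axis=0) for d in (desc_a, desc_b)])
print(train.shape)   # (2, 64) -- one fixed-length row per image
```

averaging is crude (a bag-of-words histogram usually works better), but it gives you the one-row-per-image, same-length-everywhere matrix that SVM training needs.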
so, again: start simple this time. take only 2 positive and 2 negative images, and try to train an SVM on just those. (maybe you should even skip the images/descriptors entirely and start with simple mockup data: [2,5,3,4] : label 1; [3,5,3,1] : label -1, etc.)
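that mockup-data step can be checked without OpenCV at all. in the sketch below a plain perceptron stands in for the SVM (it is NOT an SVM, but it consumes exactly the same layout: fixed-length rows, one +1/-1 label per row), so you can verify your data wiring before touching real descriptors. the two extra rows are made up to give each class two samples:

```python
import numpy as np

# mockup training data: one fixed-length row per "image", label +1 / -1
X = np.array([[2, 5, 3, 4],    # label  1 (from the text)
              [2, 6, 3, 5],    # label  1 (made up)
              [3, 5, 3, 1],    # label -1 (from the text)
              [4, 5, 2, 1]],   # label -1 (made up)
             dtype=float)
y = np.array([1, 1, -1, -1])

# perceptron rule: loop until every sample is on the right side
w = np.zeros(X.shape[1]); b = 0.0
changed = True
while changed:
    changed = False
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:   # misclassified -> update
            w += yi * xi
            b += yi
            changed = True

predict = lambda x: 1 if x @ w + b > 0 else -1
print(predict(np.array([2, 5, 3, 4])), predict(np.array([3, 5, 3, 1])))
# prints: 1 -1
```

once this shape of pipeline works, swapping the toy classifier for cv::ml::SVM (or CvSVM in the old API) is just a matter of feeding it the same matrix and label column.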
don't proceed until you get that step right, please.