
Cannot distinguish two insects using SIFT

asked 2013-05-23 16:44:48 -0600

encinaar

updated 2013-05-25 06:22:05 -0600

I want to create a classifier that identifies an insect from its captured image. At first I used Hu moments, but images captured at different resolutions gave incorrect results, as the Hu moments turned out not to be robust to the resolution changes in practice. After some searching on the internet, I found that using SIFT or SURF might solve my problem, so I tried SIFT to see what happens. The first two images below belong to two different insect kinds. The result was bizarre, since all 400 features were matching (see the third image).

(images: keypoints on the first insect, keypoints on the second insect, and the matching result)

#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/features2d.hpp> // SIFT lives in the nonfree module in OpenCV 2.4

using namespace cv;
using namespace std;

// Placeholder paths; substitute your own files
const string firstInsect         = "first_insect.png";
const string secondInsect        = "second_insect.png";
const string firstInsectXML      = "first_insect_keypoints.xml";
const string secondInsectXML     = "second_insect_keypoints.xml";
const string firstInsectPicture  = "first_insect_keypoints.png";
const string secondInsectPicture = "second_insect_keypoints.png";
const string resultPicture       = "result_matches.png";

int main()
{
    Mat src  = imread(firstInsect);
    Mat src2 = imread(secondInsect);

    if (src.empty() || src2.empty())
    {
        printf("Cannot read one of the images\n");
        return -1;
    }

    // Detect keypoints in the first image (retain the 400 strongest)
    SiftFeatureDetector detector(400);
    vector<KeyPoint> keypoints;
    detector.detect(src, keypoints);

    //cout << keypoints.size() << " keypoints were found" << endl;

    // Note: detector.write(fs) would only store the detector's parameters.
    // To store the keypoints themselves, use the global cv::write overload.
    FileStorage fs(firstInsectXML, FileStorage::WRITE);
    write(fs, "keypoints", keypoints);
    fs.release();

    // Detect keypoints in the second image (the same detector can be reused)
    vector<KeyPoint> keypoints2;
    detector.detect(src2, keypoints2);

    FileStorage fs2(secondInsectXML, FileStorage::WRITE);
    write(fs2, "keypoints", keypoints2);
    fs2.release();

    // Compute the SIFT feature descriptors for the keypoints.
    // Each keypoint yields exactly one 128-dimensional descriptor, so row "i"
    // of the resulting matrix is the descriptor of keypoint "i".
    SiftDescriptorExtractor extractor;
    Mat descriptors;
    extractor.compute(src, keypoints, descriptors);

    Mat descriptors2;
    extractor.compute(src2, keypoints2, descriptors2);

    //Print some statistics on the matrices returned
    //Size size = descriptors.size();
    //cout << "Query descriptors height: " << size.height << " width: " << size.width
    //     << " area: " << size.area() << " non-zero: " << countNonZero(descriptors) << endl;

    // Draw the detected keypoints and save the visualisations
    Mat output;
    drawKeypoints(src, keypoints, output, Scalar(0, 0, 255), DrawMatchesFlags::DEFAULT);
    imwrite(firstInsectPicture, output);

    Mat output2;
    drawKeypoints(src2, keypoints2, output2, Scalar(0, 0, 255), DrawMatchesFlags::DEFAULT);
    imwrite(secondInsectPicture, output2);

    // Corresponding points: BFMatcher::match() returns the single nearest
    // neighbour for every query descriptor, so matches.size() always equals
    // descriptors.rows, no matter how dissimilar the two images are.
    BFMatcher matcher(NORM_L2);
    vector<DMatch> matches;
    matcher.match(descriptors, descriptors2, matches);

    cout << "Number of matches: " << matches.size() << endl;

    Mat img_matches;
    drawMatches(src, keypoints, src2, keypoints2, matches, img_matches);
    imwrite(resultPicture, img_matches);

    system("PAUSE"); // Windows-only; keeps the console window open
    waitKey(10000);

    return 0;
}

Question 1: Why are all of the features matching in these two images?

Question 2: How can I store the features of an image (e.g. in an XML file) in a way that they can later be used to train a classifier (e.g. a random tree)?

EDIT:

(images: matching results on the grayscale versions, for a same-kind pair and a different-kind pair)

Working on the grayscale images does not give different results: matching two insects of the same kind and matching two insects of different kinds produces the same number of matches.


1 answer


answered 2013-05-24 03:13:26 -0600

Guanta

updated 2013-05-25 07:19:47 -0600

This result is not strange at all. SIFT and SURF do not work on binary images; there you would need to compare the shapes instead, see e.g. http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=shape#matchshapes, and see http://answers.opencv.org/question/10763/what-kind-of-features-that-can-be-extracted-from/#10774 for other possibilities. If you also have the grayscale versions of your images, your approach would work fine. For comparing against a larger database, a Bag-of-Words (BoW) approach could also be useful, see http://answers.opencv.org/question/8677/image-comparison-with-a-database/#8686. A minimal shape-comparison sketch is shown below.
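For illustration, here is a minimal cv::matchShapes sketch (my own example, not code from this thread; the file names, the polarity of the binary images, and the choice of the first contour are assumptions):

#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int main()
{
    // Placeholder file names for the two binary insect images
    Mat bin1 = imread("first_insect.png", CV_LOAD_IMAGE_GRAYSCALE);
    Mat bin2 = imread("second_insect.png", CV_LOAD_IMAGE_GRAYSCALE);
    if (bin1.empty() || bin2.empty())
        return -1;

    // Force strictly binary images; this assumes a white insect on a black
    // background (use THRESH_BINARY_INV if your images are the other way round)
    threshold(bin1, bin1, 128, 255, THRESH_BINARY);
    threshold(bin2, bin2, 128, 255, THRESH_BINARY);

    // Extract the outer contour of each silhouette
    vector<vector<Point> > contours1, contours2;
    findContours(bin1, contours1, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    findContours(bin2, contours2, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    if (contours1.empty() || contours2.empty())
        return -1;

    // matchShapes compares Hu-moment based signatures of the two contours;
    // 0 means identical shapes, larger values mean more dissimilar shapes.
    // For simplicity this takes the first outer contour of each image; in
    // practice you would pick the largest one.
    double dissimilarity = matchShapes(contours1[0], contours2[0],
                                       CV_CONTOURS_MATCH_I1, 0);
    cout << "Shape dissimilarity: " << dissimilarity << endl;
    return 0;
}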


EDIT (answering your two new questions)

  1. Steven already mentioned it: suppose you detected three keypoints in your first image and six in your second one. Matching all the features of the first image against those of the second always yields the three closest matches, and these are the ones you have drawn.

  2. Each classifier needs the training feature vectors in one matrix and the class labels in a response matrix. So, first create one large matrix (say you have 10 images and always take the 20 best features -> 200 rows, and each SIFT feature has dimension 128 -> a 200x128 matrix), then compute the features for each image, select the 20 best ones, and copy them into your big matrix. This big matrix can be saved and loaded via FileStorage. The response matrix in our example has dimension 200x1, where every feature row gets its class label. A minimal sketch of this construction is shown below.
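A minimal sketch of the "big matrix" idea, assuming OpenCV 2.4's nonfree SIFT; the image list, the labels, and the output file name are placeholders I introduced, not code from the answer:

#include <vector>
#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/features2d.hpp>
using namespace cv;
using namespace std;

// images[i] is a training image, labels[i] its class label
void buildTrainingData(const vector<Mat>& images, const vector<int>& labels,
                       Mat& trainData, Mat& responses)
{
    SiftFeatureDetector detector(20);   // retain only the ~20 strongest keypoints
    SiftDescriptorExtractor extractor;

    trainData = Mat(0, 128, CV_32F);    // SIFT descriptors are 128-dimensional
    responses = Mat(0, 1, CV_32S);

    for (size_t i = 0; i < images.size(); ++i)
    {
        vector<KeyPoint> kps;
        Mat descs;
        detector.detect(images[i], kps);
        extractor.compute(images[i], kps, descs);

        trainData.push_back(descs);          // stack the feature rows
        for (int r = 0; r < descs.rows; ++r)
            responses.push_back(labels[i]);  // one class label per feature row
    }

    // Save both matrices so they can be reloaded for training later
    FileStorage fs("training_data.xml", FileStorage::WRITE);
    fs << "trainData" << trainData << "responses" << responses;
    fs.release();
}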

Note: This approach of taking the 20 best features won't give you good results! You have to unify your features somehow. You can do that either with a Bag-of-Words approach (see my link above) or by switching to texture features (which are basically more primitive features, but applied to the whole image (or a grid over the image) and typically stored in histograms). Then you pass the BoW descriptors or your texture features to your classifier as described above. A rough BoW sketch follows.
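A rough sketch of the BoW route (again my own illustration; the vocabulary size and file names are assumptions, not values from the answer):

#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/nonfree.hpp>
using namespace cv;
using namespace std;

int main()
{
    initModule_nonfree(); // register SIFT with the Algorithm factory

    vector<string> trainFiles; // fill with your training image paths
    trainFiles.push_back("insect1.png");
    trainFiles.push_back("insect2.png");

    Ptr<FeatureDetector> detector = FeatureDetector::create("SIFT");
    Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("SIFT");

    // 1) Collect SIFT descriptors from all training images and cluster them
    //    into a visual vocabulary.
    BOWKMeansTrainer bowTrainer(100); // vocabulary of 100 visual words (assumed)
    for (size_t i = 0; i < trainFiles.size(); ++i)
    {
        Mat img = imread(trainFiles[i], CV_LOAD_IMAGE_GRAYSCALE);
        vector<KeyPoint> kps;
        Mat descs;
        detector->detect(img, kps);
        extractor->compute(img, kps, descs);
        if (!descs.empty())
            bowTrainer.add(descs);
    }
    Mat vocabulary = bowTrainer.cluster();

    // 2) Every image now becomes one fixed-length histogram of visual words
    //    (a 1x100 row), regardless of how many keypoints it has -- a unified
    //    representation you can pass to any OpenCV classifier.
    BOWImgDescriptorExtractor bowExtractor(extractor,
                                           DescriptorMatcher::create("BruteForce"));
    bowExtractor.setVocabulary(vocabulary);

    Mat img = imread(trainFiles[0], CV_LOAD_IMAGE_GRAYSCALE);
    vector<KeyPoint> kps;
    detector->detect(img, kps);
    Mat bowDescriptor;
    bowExtractor.compute(img, kps, bowDescriptor);
    cout << "BoW descriptor: " << bowDescriptor.rows << "x"
         << bowDescriptor.cols << endl;
    return 0;
}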


Comments


As a comment to that: when using the built-in matchers, each feature is always matched to the most probable match in the second image. This means that the matching happens locally, which is actually correct in your case. Take for example the pointy edges that match: locally they look exactly the same, so the matching is correct. Indeed, go for shape based matching.

StevenPuttemans (2013-05-24 04:55:16 -0600)
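As a side note beyond what the thread says: if you do match grayscale images with SIFT, a common way to discard these forced one-to-one nearest-neighbour matches is Lowe's ratio test via knnMatch. A minimal sketch (my own addition, meant as a drop-in continuation of the question's code, where descriptors and descriptors2 are the matrices computed there):

// Ask for the two nearest neighbours of every query descriptor
BFMatcher matcher(NORM_L2);
vector<vector<DMatch> > knnMatches;
matcher.knnMatch(descriptors, descriptors2, knnMatches, 2);

vector<DMatch> goodMatches;
for (size_t i = 0; i < knnMatches.size(); ++i)
{
    // Keep a match only if it is clearly better than the runner-up
    if (knnMatches[i].size() == 2 &&
        knnMatches[i][0].distance < 0.75f * knnMatches[i][1].distance)
        goodMatches.push_back(knnMatches[i][0]);
}
cout << "Matches surviving the ratio test: " << goodMatches.size() << endl;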


Stats

Asked: 2013-05-23 16:44:48 -0600

Seen: 1,568 times

Last updated: May 25 '13