
Multiple object recognition in Video with OpenCV using SURF and FLANN

asked 2013-09-20 07:57:41 -0600 by chicio, updated 2013-09-20 07:59:48 -0600

Hi everyone,

I want to recognize whether an object is contained in a video (this is for an iPhone app). I have many different objects in the training set (300+), and the set will grow in the future. Following the example in the library (matching_to_many_images.cpp) and others on the OpenCV documentation site, I was able to write a simple application that recognizes whether one of two training images appears in the current video frame. I'm using the SURF feature detector and FLANN as the matcher. With this method I noticed that loading and training the matcher is too slow. Here is a snippet of my code:

/******* Load images *********/
UIImage *object1Image = [UIImage imageNamed:@"ticket_medium"];
UIImage *object2Image = [UIImage imageNamed:@"magazine"];
Mat object1;
Mat object2;
UIImageToMat(object1Image, object1);
UIImageToMat(object2Image, object2);

cvtColor(object1, object1, CV_BGR2GRAY);
cvtColor(object2, object2, CV_BGR2GRAY);

vector<Mat> trainImages;
trainImages.push_back(object1);
trainImages.push_back(object2);

/***** Detect keypoints *****/
vector<vector<KeyPoint> > trainKeypoints;
Ptr<FeatureDetector> featureDetector = FeatureDetector::create("SURF");
featureDetector->detect(trainImages, trainKeypoints);

/***** Compute descriptors *****/
vector<Mat> trainDescriptors;
Ptr<DescriptorExtractor> descriptorExtractor = DescriptorExtractor::create("SURF");
descriptorExtractor->compute(trainImages, trainKeypoints, trainDescriptors);

/***** Prepare matcher *****/
Ptr<DescriptorMatcher> descriptorMatcher = DescriptorMatcher::create("FlannBased");

/***** Train matcher *****/
descriptorMatcher->add(trainDescriptors);
descriptorMatcher->train();

Is there a way to improve the speed of the matcher training? Is there an alternative that avoids loading and training the matcher every time the application launches? To train the matcher I currently have to load every training image and, for each one, detect keypoints and compute the corresponding descriptors, which adds extra time and slows the application down. Can I avoid this? Should I save some of the data in a database, and if so, in which format?
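One commonly used option (a sketch, not from the original post) is to compute the descriptors once and serialize them with cv::FileStorage, which writes cv::Mat to YAML or XML; later launches then only deserialize and train the matcher. The file name and node names below are illustrative assumptions:

#include <opencv2/core/core.hpp>
#include <sstream>
#include <vector>
using namespace cv;

// First launch: compute descriptors as in the snippet above, then save them.
void saveDescriptors(const std::vector<Mat>& trainDescriptors)
{
    FileStorage fs("train_data.yml", FileStorage::WRITE);
    fs << "count" << (int)trainDescriptors.size();
    for (size_t i = 0; i < trainDescriptors.size(); i++) {
        std::ostringstream node;
        node << "descriptors_" << i;
        fs << node.str() << trainDescriptors[i];
    }
}

// Later launches: load the descriptors and go straight to
// descriptorMatcher->add(...) / train(), skipping detection entirely.
std::vector<Mat> loadDescriptors()
{
    std::vector<Mat> trainDescriptors;
    FileStorage fs("train_data.yml", FileStorage::READ);
    int count = (int)fs["count"];
    for (int i = 0; i < count; i++) {
        std::ostringstream node;
        node << "descriptors_" << i;
        Mat descriptors;
        fs[node.str()] >> descriptors;
        trainDescriptors.push_back(descriptors);
    }
    return trainDescriptors;
}

The keypoints only need to be stored as well if you later want to compute a homography to localize the object; for pure recognition the descriptors are enough.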


2 answers


answered 2013-09-20 08:11:31 -0600 by Moster

Just in general: you shouldn't expect SURF to be fast on a mobile device. SURF is already an expensive algorithm on a normal PC. Maybe you should look at another feature point algorithm; ORB or FAST + BRIEF may fit your task better.
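For illustration, a minimal sketch of what those faster alternatives could look like in the OpenCV 2.4 C++ API (the parameter values are example values, not tuned, and the frame image is a stand-in for a real video frame):

#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>
using namespace cv;

Mat frame = imread("frame.png", 0);   // stand-in for a grayscale video frame

// ORB combines a detector and a binary descriptor extractor.
OrbFeatureDetector detector(500);     // max number of keypoints (example value)
OrbDescriptorExtractor extractor;

std::vector<KeyPoint> keypoints;
Mat descriptors;                      // CV_8U, 32 bytes per keypoint
detector.detect(frame, keypoints);
extractor.compute(frame, keypoints, descriptors);

// The FAST + BRIEF combination mentioned above:
// FastFeatureDetector fastDetector(20);        // intensity threshold (example)
// BriefDescriptorExtractor briefExtractor(32); // descriptor length in bytes

// Binary descriptors are compared with the Hamming distance, e.g.:
BFMatcher matcher(NORM_HAMMING);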


Comments

Thank you for your answer. Do you know if there's a sample of descriptor matching in the OpenCV library or somewhere? I'm reading the documentation about ORB and I see it uses the hamilton distance, so I think the ORB implementation in OpenCV needs something related to this fact (I mean a dedicated data structure). Can you help me?

chicio (2013-09-20 10:13:53 -0600)

There are implementations of the following binary descriptors in OpenCV: ORB, BRIEF, BRISK and FREAK:

http://docs.opencv.org/modules/features2d/doc/feature_detection_and_description.html

Also, ORB uses the Hamming distance, not the hamilton distance. Tell me if you need more theoretical background on BRIEF and ORB; I can give you links to some posts I wrote.
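Regarding the dedicated data structure: since the original code uses FlannBasedMatcher, it is worth noting that FLANN can index binary descriptors with an LSH index instead of the default KD-tree, which only handles float descriptors such as SURF. A sketch, with typical but untuned LSH parameters:

#include <opencv2/features2d/features2d.hpp>
#include <vector>
using namespace cv;

std::vector<Mat> trainDescriptors;   // CV_8U descriptor matrices from ORB,
                                     // one Mat per training image

// LSH index parameters: table number, key size, multi-probe level.
FlannBasedMatcher matcher(new flann::LshIndexParams(12, 20, 2));
matcher.add(trainDescriptors);
matcher.train();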

GilLevi (2013-09-20 10:46:16 -0600)

Hi GilLevi. Yes, I need some links to improve my theoretical background. In particular I need to understand how I can recognize whether some matches are "good" matches. With SURF it is possible to filter by distance, keeping matches whose distance is below the minimum distance * n (I got this from the OpenCV documentation). How can I get the good matches with ORB?

chicio (2013-09-23 03:11:34 -0600)

Hi,

Here's an introduction to binary descriptors: http://gilscvblog.wordpress.com/2013/08/26/tutorial-on-binary-descriptors-part-1/

and here's a post about the BRIEF descriptor (similar to ORB): http://gilscvblog.wordpress.com/2013/09/19/a-tutorial-on-binary-descriptors-part-2-the-brief-descriptor/

I haven't posted on ORB yet; I'll post it this week.

Note that the distance metric with binary descriptors is the Hamming distance, so good matches are matches with a low Hamming distance.
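A minimal sketch of that filtering step (the threshold is an illustrative value to tune per application, and the descriptor matrices are assumed to have been computed beforehand with ORB):

#include <opencv2/features2d/features2d.hpp>
#include <vector>
using namespace cv;

Mat queryDescriptors, trainDescriptors;   // assumed: ORB descriptors (CV_8U)

BFMatcher matcher(NORM_HAMMING);
std::vector<DMatch> matches, goodMatches;
matcher.match(queryDescriptors, trainDescriptors, matches);

// Keep only matches whose Hamming distance is below a threshold.
const float maxDistance = 40.0f;          // example value, tune for your data
for (size_t i = 0; i < matches.size(); i++) {
    if (matches[i].distance < maxDistance)
        goodMatches.push_back(matches[i]);
}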

GilLevi (2013-09-23 03:33:27 -0600)

Hi GilLevi. I read your posts and tried to get good matches following your advice of keeping the matches with the lowest distances. With a max distance of 40 it seems to work well. Now I'm thinking of a way to avoid reloading the descriptor matcher every time. Is that possible? Are there other ways to check whether a match is a good one (I read somewhere about a ratio test, but I can't find it in the matches)?

chicio (2013-09-24 05:19:09 -0600)

The ratio test means that the ratio between the distance to the nearest neighbor and the distance to the second nearest neighbor has to be small, i.e. the best match must be significantly closer than the second best. I can't see any way to get that value without explicitly computing it.
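Explicitly computing it is straightforward with knnMatch; a sketch (the 0.8 threshold is a commonly used example value, and the descriptor matrices are assumed to have been computed beforehand):

#include <opencv2/features2d/features2d.hpp>
#include <vector>
using namespace cv;

Mat queryDescriptors, trainDescriptors;   // assumed: ORB descriptors (CV_8U)

BFMatcher matcher(NORM_HAMMING);
std::vector<std::vector<DMatch> > knnMatches;
matcher.knnMatch(queryDescriptors, trainDescriptors, knnMatches, 2);  // 2 nearest neighbors

// Lowe-style ratio test: keep a match only if its nearest neighbor is
// significantly closer than the second nearest one.
std::vector<DMatch> goodMatches;
for (size_t i = 0; i < knnMatches.size(); i++) {
    if (knnMatches[i].size() == 2 &&
        knnMatches[i][0].distance < 0.8f * knnMatches[i][1].distance)
        goodMatches.push_back(knnMatches[i][0]);
}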

GilLevi (2013-09-24 06:48:08 -0600)

answered 2013-11-22 05:21:49 -0600 by spawnrider

Hi everyone,

The sample code published by chicio inspired me because I'm trying to do the same thing. I am encountering an issue with the DescriptorExtractor compute method when it is called with a vector of Mat (EXC_BAD_ACCESS, marked below). Did you face the same issue?

By the way, is it possible to see all your code? Did you explore/try the ORB method?

Here is my code:

//-- Step 0: Create image references
UIImage *objectImage1 = [UIImage imageNamed:@"star.png"];
Mat object1 = [self cvMatFromUIImage:objectImage1];
cvtColor(object1, object1, CV_BGRA2BGR);

UIImage *objectImage2 = [UIImage imageNamed:@"spawn.png"];
Mat object2 = [self cvMatFromUIImage:objectImage2];
cvtColor(object2, object2, CV_BGRA2BGR);

vector<Mat> trainImgCollection;
trainImgCollection.push_back(object1);
trainImgCollection.push_back(object2);

//-- Step 1: Detect the keypoints using the ORB detector
NSLog(@"Create ORB Detector");
int minHessian = 400; // note: ORB's first constructor argument is the maximum
                      // number of features, not a Hessian threshold as with SURF

OrbFeatureDetector detector( minHessian );

vector<vector<KeyPoint> > trainPointCollection;
vector<KeyPoint> queryPoints;
detector.detect(queryImage, queryPoints);
detector.detect(trainImgCollection, trainPointCollection);

//-- Step 2: Calculate descriptors (feature vectors)
NSLog(@"Create ORB Extractor");
// note: the detector above is ORB, but the extractor requested here is SURF
DescriptorExtractor *extractor = DescriptorExtractor::create("SURF");

vector<Mat> trainDescCollection;
Mat queryDescriptors;

NSLog(@"Compute descriptors");
//extractor->compute(queryImage, queryPoints, queryDescriptors);
extractor->compute(trainImgCollection, trainPointCollection, trainDescCollection); // <== EXC_BAD_ACCESS HERE

Thanks in advance, Regards
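Two details in the snippet above may explain the crash (hedged observations, not confirmed in the thread): DescriptorExtractor::create returns a cv::Ptr<DescriptorExtractor>, so assigning the result to a raw pointer lets the temporary Ptr destroy the object immediately and leaves a dangling pointer; and create("SURF") returns an empty Ptr when the nonfree module isn't initialized. A sketch of a safer form, reusing the variables from the snippet and using ORB so the extractor matches the detector:

// Hold the extractor in a cv::Ptr so the created object stays alive,
// and check for an empty result before dereferencing it.
Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("ORB");
if (!extractor.empty()) {
    extractor->compute(trainImgCollection, trainPointCollection, trainDescCollection);
}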

