### Multiple object recognition in Video with OpenCV using SURF and FLANN

Hi everyone,

I want to recognize whether an object appears in a video (iPhone app). I have many different objects in the training set (300+), and the set will grow in the future. Following the example included in the library (matching_to_many_images.cpp) and others on the OpenCV documentation site, I was able to write a simple application that recognizes whether one of 2 training images is in the current video frame. I'm using the SURF feature detector and FLANN as a matcher. With this approach I noticed that loading and training the matcher is too slow. Here is a snippet of my code:

/******* Load images *********/
UIImage *object1Image = [UIImage imageNamed:@"ticket_medium"];
UIImage *object2Image = [UIImage imageNamed:@"magazine"];
Mat object1;
Mat object2;
UIImageToMat(object1Image, object1);
UIImageToMat(object2Image, object2);

cvtColor(object1, object1, CV_BGR2GRAY);
cvtColor(object2, object2, CV_BGR2GRAY);

trainImages.push_back(object1);
trainImages.push_back(object2);

/*****Detect keypoints*******/
featureDetector = FeatureDetector::create("SURF");
featureDetector->detect(trainImages, trainKeypoints);

/*****Compute descriptor*****/
descriptorExtractor = DescriptorExtractor::create("SURF");
descriptorExtractor->compute(trainImages, trainKeypoints, trainDescriptors);

/*****prepare matcher*****/
descriptorMatcher = DescriptorMatcher::create("FlannBased");
descriptorMatcher->add(trainDescriptors);  // register training descriptors with the matcher

/******train matcher******/
descriptorMatcher->train();


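For context, the per-frame matching step (not shown in the snippet above) looks roughly like this. This is a sketch: `frameGray` stands for the current video frame already converted to grayscale, and the vote-counting variables are placeholder names.

```cpp
// Sketch of matching the current frame against all trained images.
// "frameGray" is a placeholder for the grayscale video frame.
vector<KeyPoint> queryKeypoints;
Mat queryDescriptors;
featureDetector->detect(frameGray, queryKeypoints);
descriptorExtractor->compute(frameGray, queryKeypoints, queryDescriptors);

// k-nearest-neighbour matching against the trained descriptor collection
vector<vector<DMatch> > knnMatches;
descriptorMatcher->knnMatch(queryDescriptors, knnMatches, 2);

// Lowe's ratio test, then vote per training image via DMatch::imgIdx
vector<int> votes(trainImages.size(), 0);
for (size_t i = 0; i < knnMatches.size(); i++) {
    if (knnMatches[i].size() == 2 &&
        knnMatches[i][0].distance < 0.7f * knnMatches[i][1].distance) {
        votes[knnMatches[i][0].imgIdx]++;  // which training image matched
    }
}
```

The training image with the most votes (above some threshold) is taken as the recognized object.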
Is there a way to improve the speed of the matcher training? Is there an alternative that avoids loading and training the matcher every time I launch the application? To train the matcher I currently have to load every training image and, for each one, extract the keypoints and compute the descriptors, which adds extra time and slows the application down. Can I avoid this? Should I save some of the data in a database? Which format should I choose?
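To illustrate what I mean by saving the data: I imagine computing the descriptors once and serializing them with `cv::FileStorage`, something like the sketch below. The file path and node names (`"count"`, `"descriptors_0"`, ...) are made up for illustration.

```cpp
// Hypothetical sketch: cache computed SURF descriptors to a YAML/XML file
// so they don't have to be recomputed on every app launch.
#include <opencv2/core/core.hpp>
#include <cstdio>
#include <string>
#include <vector>

void saveDescriptors(const std::string& path,
                     const std::vector<cv::Mat>& trainDescriptors) {
    cv::FileStorage fs(path, cv::FileStorage::WRITE);
    fs << "count" << (int)trainDescriptors.size();
    for (size_t i = 0; i < trainDescriptors.size(); i++) {
        char key[32];
        std::sprintf(key, "descriptors_%d", (int)i);  // made-up node name
        fs << key << trainDescriptors[i];
    }
}

std::vector<cv::Mat> loadDescriptors(const std::string& path) {
    cv::FileStorage fs(path, cv::FileStorage::READ);
    int count = (int)fs["count"];
    std::vector<cv::Mat> out(count);
    for (int i = 0; i < count; i++) {
        char key[32];
        std::sprintf(key, "descriptors_%d", i);
        fs[key] >> out[i];
    }
    return out;
}
```

On launch, the loaded descriptors could then be passed to `descriptorMatcher->add(...)` followed by `train()`, skipping image loading and SURF extraction entirely. Is this a reasonable approach, or is there a better storage format?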