
chicio's profile - activity

2017-11-20 19:42:36 -0600 received badge  Notable Question (source)
2016-12-09 06:49:24 -0600 received badge  Notable Question (source)
2016-05-22 15:36:32 -0600 received badge  Popular Question (source)
2015-04-09 04:07:26 -0600 received badge  Popular Question (source)
2013-09-26 05:00:08 -0600 received badge  Scholar (source)
2013-09-26 05:00:01 -0600 received badge  Supporter (source)
2013-09-26 02:17:05 -0600 commented question Same training descriptors across multiple platforms

Hi moster. Yes, it seems to work. I'm storing it with FileStorage and the YAML format provided by the OpenCV framework: http://docs.opencv.org/modules/core/doc/xml_yaml_persistence.html. I'm saving the vector of Mat that contains the training descriptors for ORB in a YAML file generated by an OS X application, then I use this file in a video recognition app for iPhone/iPad. Everything seems fine. Thank you.
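A minimal sketch of that round trip (file and node names are illustrative; this assumes OpenCV 2.x's FileStorage, writing each descriptor Mat under an indexed node):

#include <opencv2/core/core.hpp>
#include <sstream>
#include <vector>

// On OS X: write every training descriptor Mat under an indexed node.
void saveDescriptors(const std::vector<cv::Mat>& descriptors, const std::string& file) {
    cv::FileStorage fs(file, cv::FileStorage::WRITE);
    fs << "count" << (int)descriptors.size();
    for (size_t i = 0; i < descriptors.size(); i++) {
        std::stringstream node;
        node << "descriptor_" << i;
        fs << node.str() << descriptors[i];
    }
    fs.release();
}

// On iOS: read the same nodes back into a vector of Mat.
std::vector<cv::Mat> loadDescriptors(const std::string& file) {
    cv::FileStorage fs(file, cv::FileStorage::READ);
    int count = (int)fs["count"];
    std::vector<cv::Mat> descriptors(count);
    for (int i = 0; i < count; i++) {
        std::stringstream node;
        node << "descriptor_" << i;
        fs[node.str()] >> descriptors[i];
    }
    fs.release();
    return descriptors;
}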

2013-09-25 10:21:36 -0600 asked a question Same training descriptors across multiple platforms

If I generate the training descriptors using ORB on OS X and save them to a file with FileStorage, can I use them in an iOS app to feed a FLANN-based matcher?

2013-09-25 06:04:08 -0600 asked a question How to implement FLANN-based LSH for ORB

I'm trying to implement a FLANN-based matcher and use it with ORB. How can I get it in OpenCV? Right now I'm using brute-force Hamming matching. If I understood correctly, LSH performs better than brute-force matching. Is that right?
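For reference, in OpenCV 2.x a FLANN matcher can be pointed at ORB's binary descriptors by constructing it with LSH index parameters instead of the default KD-tree ones. A sketch (the parameter values are commonly suggested starting points, not tuned ones):

#include <opencv2/features2d/features2d.hpp>

// LSH index over binary descriptors: 12 hash tables, 20-bit keys,
// multi-probe level 2. Starting values, not tuned for any data set.
cv::FlannBasedMatcher matcher(new cv::flann::LshIndexParams(12, 20, 2));

cv::Mat queryDescriptors, trainDescriptors; // CV_8U Mats computed by ORB elsewhere
std::vector<cv::DMatch> matches;
matcher.match(queryDescriptors, trainDescriptors, matches);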

2013-09-24 05:19:09 -0600 commented answer Multiple object recognition in Video with OpenCV using SURF and FLANN

Hi GilLevi. I read your post and, following your advice, tried keeping only the matches with the smallest distances. With a MAX DISTANCE of 40 it seems to work well. Now I'm thinking about a way to avoid reloading the descriptor matcher every time. Is that possible? Are there other ways to check whether a match is a good one? (I read somewhere about a ratio test, but I can't find it in the matches.)
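On the ratio question: the ratio isn't a field of the matches themselves; it's computed from the two nearest neighbours of each query descriptor. A sketch of Lowe's ratio test, assuming descriptorMatcher and queryDescriptors are set up as in the question further down this page:

// Ask for the 2 nearest neighbours per query descriptor and keep a
// match only when it is clearly better than the runner-up.
std::vector<std::vector<cv::DMatch> > knnMatches;
descriptorMatcher->knnMatch(queryDescriptors, knnMatches, 2);

std::vector<cv::DMatch> goodMatches;
for (size_t i = 0; i < knnMatches.size(); i++) {
    if (knnMatches[i].size() == 2 &&
        knnMatches[i][0].distance < 0.8f * knnMatches[i][1].distance) {
        goodMatches.push_back(knnMatches[i][0]);
    }
}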

2013-09-23 03:11:34 -0600 commented answer Multiple object recognition in Video with OpenCV using SURF and FLANN

Hi GilLevi. Yes, I need some links to improve my theoretical background. In particular, I need to understand how I can recognize whether some matches are "good" matches. With SURF it is possible to filter matches by keeping those whose distance is below the minimum distance * n (I got this from the OpenCV documentation). How can I get the good matches with ORB?
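That minimum-distance heuristic from the SURF tutorial carries over to ORB as-is; only the distances are Hamming (integer) values instead of L2 ones. A sketch, assuming matches holds the output of a prior match() call (the factor and floor are illustrative):

#include <algorithm>

// Keep matches within n times the smallest distance found; the floor
// (30 here) avoids discarding everything when minDist is very small.
double minDist = 1e9;
for (size_t i = 0; i < matches.size(); i++) {
    if (matches[i].distance < minDist) minDist = matches[i].distance;
}

std::vector<cv::DMatch> goodMatches;
for (size_t i = 0; i < matches.size(); i++) {
    if (matches[i].distance <= std::max(3.0 * minDist, 30.0)) {
        goodMatches.push_back(matches[i]);
    }
}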

2013-09-20 10:13:53 -0600 commented answer Multiple object recognition in Video with OpenCV using SURF and FLANN

Thank you for your answer. Do you know if there's a sample for these descriptors in the OpenCV library or somewhere else? I'm reading the documentation about ORB and I see it uses the Hamming distance, so I think the ORB implementation in OpenCV needs something related to this fact (I mean a dedicated data structure). Can you help me?
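For what it's worth, OpenCV covers the Hamming-distance requirement with a matcher parameter rather than a dedicated data structure; a brute-force matcher configured for binary descriptors looks like this (variable names are illustrative):

#include <opencv2/features2d/features2d.hpp>

// ORB descriptors are binary, so use the Hamming norm; crossCheck=true
// keeps only matches that agree in both directions.
cv::BFMatcher matcher(cv::NORM_HAMMING, true);

cv::Mat queryDescriptors, trainDescriptors; // CV_8U Mats computed by ORB elsewhere
std::vector<cv::DMatch> matches;
matcher.match(queryDescriptors, trainDescriptors, matches);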

2013-09-20 07:59:48 -0600 received badge  Editor (source)
2013-09-20 07:57:41 -0600 asked a question Multiple object recognition in Video with OpenCV using SURF and FLANN

Hi everyone,

I want to recognize whether an object is contained in a video (iPhone app). I have a lot of different objects in the training set (300+) and the set will grow in the future. Following the example contained in the library (matching_to_many_images.cpp) and others on the OpenCV documentation website, I was able to write a simple application that recognizes whether one of 2 training images appears in the current video frame. I'm using the SURF feature detector and FLANN as a matcher. With this approach I noticed that loading and training the matcher are too slow. Here is a snippet of my code:

// Declarations the original snippet assumed as instance variables
using namespace cv;
using namespace std;

vector<Mat> trainImages;
vector<vector<KeyPoint> > trainKeypoints;
vector<Mat> trainDescriptors;
Ptr<FeatureDetector> featureDetector;
Ptr<DescriptorExtractor> descriptorExtractor;
Ptr<DescriptorMatcher> descriptorMatcher;

/******* Load images *********/
UIImage *object1Image = [UIImage imageNamed:@"ticket_medium"];
UIImage *object2Image = [UIImage imageNamed:@"magazine"];
Mat object1;
Mat object2;
UIImageToMat(object1Image, object1);
UIImageToMat(object2Image, object2);

// SURF expects single-channel input, so convert to grayscale
cvtColor(object1, object1, CV_BGR2GRAY);
cvtColor(object2, object2, CV_BGR2GRAY);

trainImages.push_back(object1);
trainImages.push_back(object2);

/*****Detect keypoints*******/
// "SURF" lives in the nonfree module: initModule_nonfree() must have
// been called once, or create("SURF") returns a null pointer
featureDetector = FeatureDetector::create("SURF");
featureDetector->detect(trainImages, trainKeypoints);

/*****Compute descriptors*****/
descriptorExtractor = DescriptorExtractor::create("SURF");
descriptorExtractor->compute(trainImages, trainKeypoints, trainDescriptors);

/*****Prepare matcher*****/
descriptorMatcher = DescriptorMatcher::create("FlannBased");

/******Train the matcher on the descriptors******/
descriptorMatcher->add(trainDescriptors);
descriptorMatcher->train();

Is there a way to improve the speed of the matcher training? Is there an alternative that avoids loading and training the matcher every time I launch the application? To train the matcher I have to load all the training images every time and, for each one, extract the keypoints and compute the corresponding descriptors, which adds extra time and slows the application down. Can I avoid this? Should I save some of the data in a database? Which format should I choose?
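One possible direction, sketched under the assumption that FileStorage output (YAML/XML) is an acceptable storage format and that the descriptors were serialized once, e.g. with the loop shown earlier on this page: on subsequent launches, load the stored descriptors and train the matcher directly, skipping image loading, detection and extraction (FLANN still rebuilds its index in train(), but the feature-extraction cost disappears).

#include <opencv2/features2d/features2d.hpp>
#include <sstream>

// Sketch: rebuild the matcher from descriptors saved in a previous run.
// File and node names ("train_descriptors.yml", "descriptor_<i>") are
// illustrative; on iOS the path would come from the app bundle.
cv::FileStorage fs("train_descriptors.yml", cv::FileStorage::READ);
int count = (int)fs["count"];
std::vector<cv::Mat> storedDescriptors(count);
for (int i = 0; i < count; i++) {
    std::stringstream node;
    node << "descriptor_" << i;
    fs[node.str()] >> storedDescriptors[i];
}
fs.release();

cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("FlannBased");
matcher->add(storedDescriptors);
matcher->train();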