Abu Gaseem's profile - activity

2018-04-23 15:01:07 -0600 marked best answer Does this lead to a memory leak?

I hope someone can answer my question and clarify the details.

IplImage* out;
out = doPyrDown( in, IPL_GAUSSIAN_5x5 );
out = doPyrDown( out, IPL_GAUSSIAN_5x5 );
out = doCanny( out, 10, 100, 3 );
// do whatever with 'out'
...
cvReleaseImage( &out );

I read in Learning OpenCV that this is not a problem. How can it be fine?! Each of doPyrDown and doCanny allocates an image and returns its address (a pointer). As far as I can tell, this situation leads to memory leaks, unless the IplImage structure keeps a list of addresses that is updated dynamically while the program runs.

2017-10-03 05:05:25 -0600 received badge  Famous Question (source)
2017-10-03 05:05:25 -0600 received badge  Notable Question (source)
2017-10-03 05:05:25 -0600 received badge  Popular Question (source)
2017-10-03 05:05:14 -0600 marked best answer Clarification about the BOW module

I read about the BOW algorithm, which can be used for object recognition and classification. I'm working on an application to recognize shops and buildings based on local feature matching and geo-location. The data set consists of 26 POIs: 23 of them are shops and three are buildings.

The algorithm says that after extracting the descriptors for all features detected in the data set (training images), they should be clustered into k groups/clusters. My first question: should the number of clusters equal the number of POIs in my application, which is 26? My second question: after clustering into k clusters, the algorithm represents each cluster K_i by one center descriptor whose size depends on the descriptor extractor used (for example, 64 reals for SURF). How can that single center descriptor be used in matching, and are the remaining descriptors that belong to center K_i related to each other in some data structure such as a tree? That would make sense to me.

I'm sorry if my question is not clear; I'm confused by this topic. I will appreciate any help.

2015-12-27 13:38:13 -0600 received badge  Student (source)
2015-04-18 05:51:24 -0600 commented question Why doesn't the CvSVM predict function work 100% on the same training set

Guys, I don't know how many test images I need for testing. Could you suggest a number, please?

2015-04-16 12:05:18 -0600 commented question Why doesn't the CvSVM predict function work 100% on the same training set

On Friday and Saturday I will go collect my test images and test the SVM. Can you explain what the C and gamma arguments represent? Also var_all and var_count, what do they represent?

Thank you both, guys.

2015-04-16 06:42:16 -0600 commented question Why doesn't the CvSVM predict function work 100% on the same training set

Vocabulary size: 1k. The number of training examples varies, for classes 1-24: {110, 14, 19, 14, 11, 21, 10, 15, 15, 15, 14, 19, 45, 14, 18, 45, 42, 67, 70, 29, 25, 26, 46, 34, 82}. NOTE: when I use CvSVM::train_auto() with default parameters I get the above result. In contrast, when I use CvSVM::train() with the following params: svm_type: C_SVC, kernel: { type: RBF, gamma: 3.0 }, C: 1.0000000000000000e+10, term_criteria: { epsilon: 1.1920928955078125e-07, iterations: 100000000 }, all the classifiers recognize the training examples 100%.

2015-04-16 06:34:49 -0600 commented question Why doesn't the CvSVM predict function work 100% on the same training set

I retrained the SVM classifiers with C = 10^10 and gamma = 3 and got 100% accuracy on the same training examples. This is expected, right?

2015-04-16 06:11:46 -0600 commented question Why doesn't the CvSVM predict function work 100% on the same training set

I want to retrain the SVM with C = 10^10 and term criteria 10^10; maybe I'll get a better result than with C = 1 and iterations = 1000. I also want to ask: should the number of positive and negative examples be equal in general, and in the 1-vs-n-classes case?

2015-04-16 04:52:04 -0600 received badge  Organizer (source)
2015-04-15 14:32:22 -0600 asked a question Why doesn't the CvSVM predict function work 100% on the same training set

I used OpenCV's CvSVM with bag of visual words to classify objects. Once I finished training, I tested the classifier on the same training set, since this is supposed to give me 100% accuracy, isn't it? But that is not the case. Can someone explain why? Here's the code:

// load some class_1 samples into a matrix
// load the associated SVM parameter file into the SVM
// test (predict)
Mat test_samples;
CvSVM classifier;

FileStorage fs( "train_sample/training_samples-1000.yaml.gz", FileStorage::READ );
if ( !fs.isOpened() ) {
    cerr << "Cannot open file" << endl;
    return;
}

classifier.load( "SVM_parameter_files/svm_1000_auto/SVM_classifier_class_1.yaml" );
string class_ = "class_";
for ( size_t i = 1; i <= 24; i++ ) {
    stringstream ss;
    ss << class_ << i;
    fs[ss.str()] >> test_samples;
    size_t positive = 0;
    size_t negative = 0;
    // test the SVM classifier that labels class_1 as positive and the rest as negative
    for ( int j = 0; j < test_samples.rows; j++ ) {
        float res = classifier.predict( test_samples.row(j), false );
        ( res == 1 ) ? positive++ : negative++;
    }
    cout << ss.str() << " positive examples = " << positive << " , negative examples = " << negative << endl;
}
fs.release();

The output: class 1 vs. the other 24 classes.

2015-04-15 12:29:10 -0600 asked a question SVM predict always returns the 0 (negative) label

I did the training stage, and to check that it works I tested the SVM on the same training set (positive labels), but predict always returns 0, which is the negative label in my case. What is wrong? I use the default parameters with the CvSVM::train() method of the OpenCV 2.4.9 library. Does anyone have an idea what's wrong? Is the problem that the default SVM parameters give an insufficient number of iterations, or something else?

2015-04-11 03:33:40 -0600 commented question I cannot load my training descriptors in memory

I enabled OpenMP when building the OpenCV 2.4.9 library, and I also enabled the OpenMP flags in my application.

2015-04-11 03:01:50 -0600 commented question I cannot load my training descriptors in memory

Yes, you are 100% right, so I don't save them; I go straight to building the dictionary, and the clustering is running now. Just one question: I wonder why the BOWTrainer::cluster() method is using all 4 of my cores. Is that normal (i.e., is the computation multithreaded), given that I enabled OpenMP when compiling my program?

2015-04-11 02:16:25 -0600 commented question I cannot load my training descriptors in memory

How can I load my descriptor file? I think I would try to implement serialization and deserialization to a text file.

2015-04-10 17:15:33 -0600 asked a question I cannot load my training descriptors in memory

I'm working on a bag-of-visual-words module. I extracted SIFT descriptors from 984 training images and stored them on disk as a .yaml file. I wonder why my descriptors .yaml file is so huge: its size is 923 MB, although my whole set of training images is just 146 MB. My problem is that I cannot load the descriptors file, a matrix with 1.4 million rows and 128 columns, into memory; it looks like there's a threshold in OpenCV's alloc.cpp that prevents me from allocating more than 730 MB. Is it normal that the descriptors .yaml file is so huge compared to the 146 MB of training images?

2015-04-05 18:07:13 -0600 asked a question Clarifying the bag-of-keypoints module: which training images for constructing the dictionary, and which for training the SVM classifier

In the bag-of-keypoints approach to object categorization, I'm confused about which training images I should use for constructing the dictionary (code words), and which training images I should use to train the SVM classifier. The same images, different ones, or what? I'm really confused.

2015-03-29 09:25:23 -0600 asked a question What is a 0-based cluster index?

OpenCV's C++ API provides the kmeans interface:

double kmeans( InputArray data, int K, InputOutputArray bestLabels, TermCriteria criteria, int attempts, int flags, OutputArray centers = noArray() )

I've read the description on the kmeans documentation page.

I wonder what the bestLabels argument is; also, I don't get the following paragraph:

The function kmeans implements a k-means algorithm that finds the centers of cluster_count clusters and groups the input samples around the clusters. As an output, labels_i contains a 0-based cluster index for the sample stored in the i-th row of the samples matrix. after every attempt. The best (minimum) value is chosen and the corresponding labels and the compactness value are returned by the function. Basically, you can use only the core of the function, set the number of attempts to 1, initialize labels each time using a custom algorithm, pass them with the ( flags = KMEANS_USE_INITIAL_LABELS ) flag, and then choose the best (most-compact) clustering.

What do they mean by a 0-based cluster index?

2015-03-29 02:59:07 -0600 commented answer Bag of words training Image conditions

Thanks, Mr. Guanta.

2015-03-28 12:36:49 -0600 asked a question Bag of words training Image conditions

I'm planning to use the BOW approach. How much should the angle/view differ between two images of the same scene/object of interest? In other words, considering these two images:

Train image 1

Train image 2

Are the two images good candidates for the training phase, or is one of them sufficient, in the case of BOW with the SURF feature detector & descriptor extractor? Note: to see the difference between the two images, look at the upper right corner.

2015-03-19 13:10:42 -0600 commented question shops and building recognition application

OK, I will try BOW. I wonder if you can guide me on how I should capture POI images, in terms of zooming in (to get rid of outliers), the angle of capture (i.e., varied views of the same POI), and the resolution. My phone can capture photos at a minimum quality of 480x640 pixels.

2015-03-18 18:52:21 -0600 commented question shops and building recognition application

@berak, @Guanta, I think you misunderstand the situation. When you asked me to increase the data set to more than 20 images: the 23 (3 buildings + 20 shops) is not the size of my data set/training images, but the number of classes; each building and shop has a label. As you can see in the third image above, the system should respond with the Nour label. I don't think BOW will be the right choice. I hope my note reaches you, guys.

2015-03-18 03:20:24 -0600 received badge  Enthusiast
2015-03-17 18:01:43 -0600 commented question shops and building recognition application

Also, SIFT has high repeatability on my data sets. Thanks very much @berak, @Guanta.

2015-03-17 17:59:59 -0600 commented question shops and building recognition application

Yes, you are right, berak. I was checking SIFT & SURF with two matcher strategies, brute force and FLANN-based. I got good results (inliers) when the two images were of the same object in my data set; in contrast, there were false inliers in other pairs of images that were not of the same object. The reason is that my data set has a lot of repetitive texture, namely the stone of the buildings, as you can see above. The training images I posted are also quite bad; I'm planning to capture a new data set with as few outliers as possible. I think BOW will do better on my data set because, as you mention, there's a dictionary of words/features and the features are reduced; the problem is that we cannot predict how distinctive the produced dictionary will be. I will take your advice.

2015-03-17 16:38:56 -0600 commented question shops and building recognition application

Sorry for my question, but I'm a newbie in this field. Why the BOW approach instead of the FLANN approach?