
GilLevi's profile - activity

2019-01-17 10:16:50 -0500 received badge  Popular Question (source)
2018-09-28 11:01:54 -0500 received badge  Popular Question (source)
2016-03-13 22:00:06 -0500 received badge  Good Answer (source)
2016-03-13 22:00:06 -0500 received badge  Enlightened (source)
2016-01-26 13:28:07 -0500 received badge  Necromancer (source)
2015-10-23 06:16:31 -0500 received badge  Nice Question (source)
2015-10-05 22:53:45 -0500 received badge  Nice Question (source)
2015-09-30 07:49:53 -0500 received badge  Nice Answer (source)
2015-05-03 09:38:30 -0500 received badge  Nice Question (source)
2015-02-17 05:29:19 -0500 commented question How to get better results with OpenCV face recognition Module
2015-01-04 07:04:22 -0500 commented question Adding rotation invariance to the BRIEF descriptor (contribution to OpenCV)

Thanks @Guanta, I'll try ICIP !

2015-01-03 14:53:21 -0500 commented question Adding rotation invariance to the BRIEF descriptor (contribution to OpenCV)

@Guanta, thanks for your comment! I'm in the process of making a pull request!

Can you recommend a small conference where I could try to publish it? I thought it was too minor to be published. Thanks!

2015-01-02 16:16:26 -0500 commented question Adding rotation invariance to the BRIEF descriptor (contribution to OpenCV)

Not exactly. ORB uses its own mechanism for measuring patch orientation, and it also uses unsupervised learning to select what the authors claim is an optimal set of sampling pairs. I suggest following the original BRIEF implementation (random sampling pairs), but adding rotation invariance using the keypoint detector's estimate of the patch orientation, which proves superior to ORB's estimate.

From experiments that I've conducted, the SIFT detector coupled with the rotation-invariant BRIEF descriptor outperforms the ORB detector coupled with the ORB descriptor.
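For readers curious what "adding rotation invariance" means operationally: the core step is rotating each sampling-pair offset by the keypoint's orientation before sampling intensities. A minimal sketch (hypothetical helper, not the actual patch code):

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Steered BRIEF: rotate a sampling-pair offset (dx, dy) by the keypoint's
// orientation angle (radians) before reading intensities, so the same
// pairs are compared regardless of how the patch is rotated.
std::pair<float, float> rotateOffset(float dx, float dy, float angle) {
    const float c = std::cos(angle);
    const float s = std::sin(angle);
    return { c * dx - s * dy, s * dx + c * dy };
}
```

The binary tests themselves stay unchanged; only the sampling positions are steered by the detector's orientation estimate.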

2015-01-02 13:14:24 -0500 asked a question Adding rotation invariance to the BRIEF descriptor (contribution to OpenCV)


I've implemented code to add rotation invariance to the BRIEF descriptor:




The approach is explained and evaluated in my blog post:

Can someone please review my code and tell me what additional work is required in order to make a pull request?

Thanks! Gil.

2014-12-31 07:13:38 -0500 received badge  Enthusiast
2014-12-09 14:01:14 -0500 marked best answer Opencv_haartraining does not converge

I'm running OpenCV 2.4.7 on Windows 8.

I'm using opencv_haartraining to train a new cascade for faces. I ran the following command:

opencv_haartraining.exe -data -haarcascadeold -vec vector20.vec -bg infofile2.txt -nstages 40 -minhitrate 0.9999999 -maxfalsealarm 0.5 -npos 9000 -nneg 26946 -w 20 -h 20 -mem 1024

However, it seems to get stuck.

This happens every time I run it; I even tried changing the values to -minhitrate 0.8 -maxfalsealarm 0.7. The first time, it ran for 180 iterations, producing the exact same values.

I have about 13,000 positives, but I set -npos to 9000 so I won't run out of positive examples.

I have to use the old function instead of traincascade as my colleague wrote his code using the old C interface.

Can someone please explain the cause of this problem and how to fix it?



2014-12-09 13:56:34 -0500 marked best answer Why are the values returned from Brisk's smoothedIntensity much larger than intensity values?


I have a question regarding Brisk's function "smoothedIntensity".

Why are the values returned from it so large, much larger than intensity values?

Shouldn't they be on the scale of intensity values (since they are smoothed intensities)? And why does Brisk use an integral image?

I replaced the implementation with the following simple implementation that gives the sum of the 3x3 box around the pixel, could you please tell me if it's correct?

inline int
        BRISK4::smoothedIntensity(const cv::Mat& image, const cv::Mat& integral, const float key_x,
        const float key_y, const unsigned int scale, const unsigned int rot,
        const unsigned int point) const
        {
        // get the float position
        const BriskPatternPoint& briskPoint = patternPoints_[scale * n_rot_ * points_ + rot * points_ + point];
        const float xf = briskPoint.x + key_x;
        const float yf = briskPoint.y + key_y;
        const int x = int(xf);
        const int y = int(yf);
        const int& imagecols = image.cols;

        // get the sigma:
        const float sigma_half = briskPoint.sigma;
        const float area = 4.0f * sigma_half * sigma_half;

        // calculate output:
        // Gil: changed here -- the returned value is the sum of the 3x3 patch
        int ret_val =<uchar>(y-1, x-1) +<uchar>(y-1, x) +<uchar>(y-1, x+1) +
            <uchar>(y, x-1)   +<uchar>(y, x)   +<uchar>(y, x+1) +
            <uchar>(y+1, x-1) +<uchar>(y+1, x) +<uchar>(y+1, x+1);

        return ret_val;
        }

The current smoothedIntensity implementation confused me, so I'm really not sure anymore.
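For context on the integral-image part of the question: an integral image makes the sum over any axis-aligned box a constant-time operation (four lookups), regardless of the smoothing area, which is why BRISK builds one. Also note that a box sum must be divided by the box area to land back on the intensity scale. A sketch with plain arrays (illustration only, not BRISK's code):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// After one O(rows*cols) pass, the sum over ANY axis-aligned box costs
// four lookups, independent of the box size.
using Image = std::vector<std::vector<int>>;

// I has one extra row/column of zeros; I[y][x] = sum of img[0..y)[0..x)
Image buildIntegral(const Image& img) {
    const std::size_t rows = img.size(), cols = img[0].size();
    Image I(rows + 1, std::vector<int>(cols + 1, 0));
    for (std::size_t y = 0; y < rows; ++y)
        for (std::size_t x = 0; x < cols; ++x)
            I[y + 1][x + 1] = img[y][x] + I[y][x + 1] + I[y + 1][x] - I[y][x];
    return I;
}

// Sum of img over the half-open box [x0, x1) x [y0, y1)
int boxSum(const Image& I, std::size_t x0, std::size_t y0,
           std::size_t x1, std::size_t y1) {
    return I[y1][x1] - I[y0][x1] - I[y1][x0] + I[y0][x0];
}
```

Divide `boxSum` by `(x1 - x0) * (y1 - y0)` to get a mean on the intensity scale.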



2014-12-09 13:55:23 -0500 marked best answer Flower Detection


I'm developing a flower detector and would be glad if anyone has some ideas I could try.

Current directions I was thinking of:

  1. Training Viola & Jones
  2. HOG + SVM
  3. Object detection by parts - I tried it and didn't get good results.

Any other directions you can suggest?

Thanks in advance, Gil.

2014-12-09 13:53:57 -0500 marked best answer latentsvm_multidetect sample gives very bad results


I'm using the sample file latentsvm_multidetect to test LatentSvmDetector.

I'm using the models provided in OpenCVextra ("opencv_extra/testdata/cv/latentsvmdetector/models_VOC2007") and also the images provided there - one is of cars and the other is of a cat.

The code compiles and runs, but I'm getting very bad results: detections of all kinds of objects in the images (for example, in the cars image I'm getting about 80 detections of various objects, when there are only six cars in the image).

I'm running the code "as is", so I don't understand why this happens. Is there any flag I need to turn on/off, or anything like that? Am I supposed to expect such results?

Thank you,


2014-12-09 13:52:30 -0500 marked best answer Using the SIFT and SURF descriptors in detector_descriptor_matcher_evaluation.cpp


I'm conducting a comparison of descriptors using the code in (Example) detector_descriptor_matcher_evaluation.cpp. I managed to get FREAK, ORB, BRISK and BRIEF running, but I can't seem to get SIFT and SURF to work. The problem is that when calling

descriptor = DescriptorExtractor::create(descriptor_name);

The function "create" doesn't include SIFT and SURF in its list of algorithms.

Can someone please explain how I can use SIFT and SURF in that framework?

Thanks in advance!


2014-12-09 13:52:20 -0500 marked best answer Problem accessing Mat


I'm writing a simple program that extracts descriptors from images and writes them to files.

I'm saving the descriptors in a Mat variable, but I'm getting wrong values when trying to access them.

Here is the code:

            string s = format("%s\\%s\\img%d.ppm", dataset_dir.c_str(), dsname, k);
            Mat imgK = imread(s, 0);
            if( imgK.empty() )
                continue;

            detector->detect(imgK, kp);
            descriptor->compute(imgK, kp, desc);

            //writing the descriptors to a file
            char fileName[512];
            FILE * fid;
            for (int ix = 0; ix < kp.size(); ix++){

                fprintf(fid, "%f \t%f", kp[ix].pt.x, kp[ix].pt.y);
                fprintf(fid, "\t1 \t0 \t1");
                //writing the descriptor
                for (int jx = 0; jx < desc.cols; jx++){
                    int gil = desc.at<int>(ix, jx);
                    printf("AAAA %d", gil);
                }
            }
The line where I'm accessing the descriptor matrix is int gil = desc.at<int>(ix, jx); Is there something I'm doing wrong?

Any help will be greatly appreciated, as I'm quite stuck :)
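For anyone hitting the same symptom: cv::Mat::at<T> performs no conversion, it reinterprets the element's raw bytes as T. If desc is a float matrix (CV_32F, as SIFT/SURF produce), reading it with at<int> returns the float's bit pattern, not its numeric value. This plain-C++ sketch reproduces the effect (illustration, not OpenCV code):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// What a type-mismatched at<int> on float data effectively does:
// the same four bytes, reinterpreted as an integer.
int readFloatBytesAsInt(float f) {
    std::int32_t bits;
    std::memcpy(&bits, &f, sizeof bits);  // copy raw bytes, no conversion
    return static_cast<int>(bits);
}
```

So the fix in the snippet above would be to read with at<float> (matching the Mat's actual depth), not at<int>.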



2014-12-09 13:40:47 -0500 marked best answer Haar-cascade training took very little time and no xml was produced

I'm trying to train a new haar-cascade for faces.

I have a positive dataset of 2000 cropped face images (just the face) and 3321 negative random images.

I created positive's list using the following command:

opencv_createsamples.exe -info info.txt -vec vector.vec -num 2000 -w 10 -h 10

Where the file info.txt contains the following lines:

AJ_Cook_0001.ppm 1 0 0 64 64
AJ_Lamas_0001.ppm 1 0 0 64 64
Aaron_Eckhart_0001.ppm 1 0 0 64 64
Aaron_Guiel_0001.ppm 1 0 0 64 64
Aaron_Patterson_0001.ppm 1 0 0 64 64
Aaron_Peirsol_0001.ppm 1 0 0 64 64

Afterwards, I ran opencv_haartraining using the following command:

opencv_haartraining.exe -data harrcascade -vec vector.vec -bg infofile.txt -nstages 20 -minhitrate 0.9999 -maxfalsealarm 0.5 -npos 2000 -nneg 3321 -w 10 -h 10 -nonsym -mem 1024

Where the file infofile.txt contains the names of the background images:


Training took only about two hours and no xml file was generated. The folder harrcascade contains 20 folders, each with a txt file named 'AdaBoostCARTHaarClassifier.txt', but no xml was generated.

I have two questions:

1. Why did training take so little time?

2. Why was no xml file generated?

What am I missing here?



2014-12-09 13:16:25 -0500 marked best answer Cmake error when building OpenCV

I'm trying to build OpenCV with Cmake on Windows 7. I chose to use the Visual Studio 10 compiler.

I'm getting the following error:

CMake Error at C:/Program Files (x86)/CMake 2.8/share/cmake-2.8/Modules/CMakeCXXInformation.cmake:37 (get_filename_component):

get_filename_component called with incorrect number of arguments Call Stack (most recent call first): CMakeLists.txt:2 (PROJECT)

I'm sure the path to OpenCV is correct, and I haven't made any changes to CMakeLists.txt. Can anyone please guide me on how to fix this error?

Thanks in advance!!

2014-12-09 13:09:34 -0500 marked best answer How to filter a single column mat with Gaussian in OpenCV

I have a Mat with only one column and 1600 rows, and I want to filter it with a Gaussian.

I tried the following:

Mat AFilt=Mat(palm_contour.size(),1,CV_32F);

But I get the exact same values in AFilt (the filtered mat) and A. It looks like GaussianBlur has done nothing.

What's the problem here? How can I smooth a single-column mat with a Gaussian kernel?

I read about BaseColumnFilt, but haven't seen any usage examples, so I'm not sure how to use it.

Any help given will be greatly appreciated as I don't have a clue.

I'm working with OpenCV 2.4.5 on windows 8 using Visual Studio 2012.
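For reference, GaussianBlur on a single-column Mat reduces to 1-D Gaussian convolution down the column. A plain-C++ sketch of that operation (illustration only, not OpenCV's implementation): note that with ksize = 1, or a sigma so small the kernel degenerates to a single tap, the output equals the input, which would explain AFilt matching A.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// 1-D Gaussian smoothing of a column of values.
std::vector<float> gaussianSmooth1D(const std::vector<float>& v,
                                    int ksize, float sigma) {
    const int half = ksize / 2;
    std::vector<float> kernel(ksize);
    float sum = 0.0f;
    for (int i = 0; i < ksize; ++i) {
        const float d = float(i - half);
        kernel[i] = std::exp(-d * d / (2.0f * sigma * sigma));
        sum += kernel[i];
    }
    for (float& k : kernel) k /= sum;  // normalize: constant input stays constant

    std::vector<float> out(v.size(), 0.0f);
    const int n = int(v.size());
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < ksize; ++j) {
            int src = i + j - half;
            if (src < 0) src = 0;       // replicate the border,
            if (src >= n) src = n - 1;  // like BORDER_REPLICATE
            out[i] += kernel[j] * v[src];
        }
    return out;
}
```

The practical check: pass an odd ksize greater than 1 (e.g. 5) and a nonzero sigma, and verify a spike in the input actually spreads in the output.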



2014-12-09 13:08:54 -0500 marked best answer Exception when constructing BRISK in debug mode but not in release


I'm running the following simple code:

int main( int argc, const char** argv )
{
    // Creating the BRISK descriptor
    int Threshl=60;
    int Octaves=4; // (pyramid layer) from which the keypoint has been extracted
    float PatternScales=1.0f;
    std::vector<float> rList;
    std::vector<int> nList;

    // this is the standard pattern found to be suitable also
    const double f = 0.85 * PatternScales;

    rList[0] = (float)(f * 0.);
    rList[1] = (float)(f * 2.9);
    rList[2] = (float)(f * 4.9);
    rList[3] = (float)(f * 7.4);
    rList[4] = (float)(f * 10.8);

    nList[0] = 1;
    nList[1] = 10;
    nList[2] = 14;
    nList[3] = 15;
    nList[4] = 20;

    cv::BRISK BRISKD(rList, nList, 1000, (float)(8.2 * PatternScales)); // initialize algorithm

It works fine in release mode, but in debug mode I get an exception

Unhandled exception at 0x000007FF9EA9811C in BriskBoosting1.exe: Microsoft C++ exception: std::length_error at memory location 0x000000A211DA9D70.

How can the code work in release but not in debug? Can someone please shed some light on this problem?

If that makes any difference, I'm using visual studio 2012 and running on windows (x64).

Thanks in advance!
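A likely cause, for readers with the same symptom: rList and nList are default-constructed with size 0, so rList[0] = ... writes out of bounds, which is undefined behavior. Release builds often appear to work by scribbling on nearby memory, while debug builds with checked containers abort or throw. A sketch of the fix under that assumption (illustrative, not the original program):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Size the vector BEFORE indexing into it; operator[] never grows
// a std::vector.
std::vector<float> makeRadiusList(float patternScale) {
    const double f = 0.85 * patternScale;
    std::vector<float> rList(5);  // allocate the 5 slots up front
    rList[0] = float(f * 0.0);
    rList[1] = float(f * 2.9);
    rList[2] = float(f * 4.9);
    rList[3] = float(f * 7.4);
    rList[4] = float(f * 10.8);
    return rList;
}
```

The same sizing (or push_back) would be needed for nList before its five assignments.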

2014-12-09 13:08:21 -0500 marked best answer Brisk does not calculate orientation when keypoints are provided


I encountered something that looks a bit strange to me regarding Brisk's implementation.

A common way to use Brisk is:

    Ptr<FeatureDetector> detector = FeatureDetector::create(detector_name);
    Ptr<DescriptorExtractor> descriptor = DescriptorExtractor::create(descriptor_name);
    string s = format("%s\\%s\\img%d.ppm", dataset_dir.c_str(), dsname, k);
    Mat imgK=imread(s, 0);
    detector->detect(imgK, kp);
    descriptor->compute(imgK, kp, desc);

However, when using it this way, we supply the keypoints to the Brisk descriptor and the flag "useProvidedKeypoints" is true, thus Brisk does not compute orientation:

BRISK::operator()( InputArray _image, InputArray _mask, vector<KeyPoint>& keypoints,
                   OutputArray _descriptors, bool useProvidedKeypoints) const
{
  bool doOrientation = true;
  if (useProvidedKeypoints)
    doOrientation = false;
  computeDescriptorsAndOrOrientation(_image, _mask, keypoints, _descriptors, true, doOrientation,
                                     useProvidedKeypoints);
}

Is that a bug or am I missing something here about Brisk's implementation?

Thanks in advance,


2014-12-09 13:08:15 -0500 marked best answer Training new LatentSVMDetector Models.


I haven't found any method to train new latent SVM detector models using OpenCV. I'm currently using the existing models given in the xml files, but I would like to train my own.

Is there any method for doing so?

Thank you,


2014-12-09 13:05:32 -0500 marked best answer InitModule_nonFree() - unresolved externel symbol.


I'm writing a small program that extracts descriptors from images and writes them to files.

I'm using (example) detector_descriptor_matcher_evaluation as reference.

I guess this is a very simple problem, but I just can't solve it, I'm probably missing something:

Everything compiled and worked fine (I used FAST and ORB), but I had to use SIFT, so I added a call to cv::initModule_nonfree(); and an include: #include "opencv2/nonfree/nonfree.hpp"

But now I'm getting a linker error:
unresolved external symbol initModule_nonfree(void).

I'm quite sure all the definitions in the project properties are ok since it worked with ORB before I added the call to initModule_nonfree().

Can someone please tell me what I might be missing here and what could be the problem?


Also, another small question: what's the purpose of

saveloadDDM( params_filename, detector, descriptor, matcher );

in the example detector_descriptor_matcher_evaluation ?

here's the code

static void saveloadDDM( const string& params_filename,
                        Ptr<FeatureDetector>& detector,
                        Ptr<DescriptorExtractor>& descriptor,
                        Ptr<DescriptorMatcher>& matcher )
{
    FileStorage fs(params_filename, FileStorage::READ);
    if( fs.isOpened() )
    {, FileStorage::WRITE);
        fs << "detector" << "{";
        fs << "}" << "descriptor" << "{";
        fs << "}" << "matcher" << "{";
        fs << "}";
    }
}

I commented it out since it throws an exception. Is it OK not to use it?

Thanks, Gil

2014-11-29 07:29:14 -0500 commented question CVPR15 - OpenCV Vision Challenge.

@StevenPuttemans, thanks for the advice!

2014-11-28 05:20:18 -0500 received badge  Nice Question (source)
2014-11-27 08:33:24 -0500 asked a question CVPR15 - OpenCV Vision Challenge.


OpenCV is sponsoring a vision challenge in the upcoming CVPR convention:

The challenge involves 11 benchmarks covering various computer vision problems, with the goal of contributing state-of-the-art algorithms (and code) to OpenCV.

Is anyone here thinking of participating? I'll be working on some of the "recognition" benchmarks.


2014-11-11 06:13:59 -0500 commented answer Object classification (pedestrian, car, bike)

I would try Caffe.

2014-11-06 02:04:09 -0500 asked a question Regarding AKAZE features - descriptor_type enum


AKAZE features have the following enum that describes the descriptor type:

// AKAZE descriptor type
enum {
    DESCRIPTOR_KAZE_UPRIGHT = 2, ///< Upright descriptors, not invariant to rotation
    DESCRIPTOR_KAZE = 3,
    DESCRIPTOR_MLDB_UPRIGHT = 4, ///< Upright descriptors, not invariant to rotation
    DESCRIPTOR_MLDB = 5
};

Just want to make sure: if I use DESCRIPTOR_MLDB (which is also the default), does that mean AKAZE will be rotation invariant?



2014-10-30 10:15:32 -0500 received badge  Nice Answer (source)
2014-10-30 10:12:44 -0500 marked best answer Problems in adding a new descriptor to OpenCV


I'm trying to add the BinBoost descriptor to OpenCV. The sources can be found here: link text

It's really straightforward, as the authors already implemented the DescriptorExtractor class.

The problem is that the constructors depend on certain binary files as input, which they use to initialize their inner structures. So one can easily construct a BinBoostDescriptorExtractor as:

BinBoostDescriptorExtractor BinBoostDescriptorExtractorInstance("D:\\OpenCV_2_4_9\\opencv\\sources\\modules\\features2d\\src\\binboost_256.bin");

But one cannot use the simpler "create" command as:

Ptr<DescriptorExtractor> descriptor = DescriptorExtractor::create("BinBoost");

What can I do about it? Will OpenCV moderators be willing to accept a new descriptor (or more precisely - a family of 3 descriptors) that can't be initialized using "create"?

Thanks in advance,


2014-10-14 08:02:13 -0500 asked a question Having problems with resize/subsample (without interpolation)


I'm trying to resize/subsample an image without interpolation.

To make things clearer, I would like to replace the following code:

int height_res=55; int width_res=55;

    int start_ind=1;
    Mat channels_sum_reduced=Mat::zeros(height_res,width_res,CV_32FC1);

    for (size_t height_ind=start_ind; height_ind<height_res+start_ind; height_ind++)
        for (size_t width_ind=start_ind; width_ind<width_res+start_ind; width_ind++)

With a simple resize statement.

I tried to following:

  Mat channels_sum_reduced;
  Rect r=Rect(1,1,height-1,width-1);


But it didn't give the exact same results (and I really need it to be precise).

Can someone please correct me?
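For comparison, the manual double loop above amounts to nearest-neighbour subsampling with a start offset. A plain-C++ sketch (illustrative, using std::vector in place of cv::Mat); cv::resize with INTER_NEAREST chooses its own sample positions, which is one reason it may not reproduce a hand-rolled loop bit-for-bit:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Keep every `step`-th pixel starting at `start`, in both dimensions.
using Grid = std::vector<std::vector<float>>;

Grid subsample(const Grid& src, std::size_t start, std::size_t step) {
    Grid dst;
    for (std::size_t y = start; y < src.size(); y += step) {
        std::vector<float> row;
        for (std::size_t x = start; x < src[y].size(); x += step)
            row.push_back(src[y][x]);
        dst.push_back(row);
    }
    return dst;
}
```

When exact reproduction of a loop like this matters, copying pixels explicitly (as above) is safer than hoping a library resize lands on the same sample grid.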



2014-10-09 12:41:25 -0500 commented question Building a simple 3d model : Using build3dmodel.cpp

You can take a look at the blog post that I wrote which explains how to create 3D models using the Bundler and PMVS packages:

2014-09-21 16:47:23 -0500 commented answer How to add an algorithm to OpenCV?

Thanks for your help!

2014-09-19 17:48:19 -0500 asked a question How to add an algorithm to OpenCV?


I'm using OpenCV2.4.9 with Visual Studio 2012 on Windows 8.1

I added a new descriptor to OpenCV and I would like to test it outside of the OpenCV solution (in a new solution).

How do I create updated lib and hpp files? I would like to have a new "build" directory that will contain all the updated files (lib, dll and hpp) according to the updated code.

Do I need to apply CMake again? perhaps building the project "BUILD" in the OpenCV solution?

Thanks in advance,


2014-09-12 10:35:43 -0500 asked a question Extracting SIFT/SURF descriptor from pre-cropped patches


I have a set of 100K 64x64 gray patches (that are already aligned, meaning they all have the same orientation) and I would like to extract a SIFT descriptor from each one.

It is clear to me that all I need to do is define a vector with one keypoint kp such that kp.pt.x = 32, kp.pt.y = 32.

However, I don't know how to set the kp.size parameter. From going over SIFT's code, it looks like it does some non-trivial calculations with that parameter rather than just assuming it's the size of the patch.

Question 1: what should be the kp.size parameter when extracting SIFT descriptors from patches of size 64x64?

Question 2: what should be the kp.size parameter when extracting SURF descriptors from patches of size 64x64?

Thanks in advance,


2014-09-03 14:00:26 -0500 received badge  Nice Question (source)