
shincodex's profile - activity

2017-07-11 08:55:47 -0600 commented question void CascadeClassifier::detectMultiScale opencv 3.2-dev parameter assumption

I'm going to post the full signature of detectMultiScale to help you.
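
From memory, the relevant overloads in opencv2/objdetect.hpp look roughly like this (double-check your local 3.2-dev header, since defaults and wrapper macros may differ slightly):

    virtual void detectMultiScale( InputArray image,
                                   std::vector<Rect>& objects,
                                   double scaleFactor = 1.1,
                                   int minNeighbors = 3, int flags = 0,
                                   Size minSize = Size(),
                                   Size maxSize = Size() );

    virtual void detectMultiScale( InputArray image,
                                   std::vector<Rect>& objects,
                                   std::vector<int>& rejectLevels,
                                   std::vector<double>& levelWeights,
                                   double scaleFactor = 1.1,
                                   int minNeighbors = 3, int flags = 0,
                                   Size minSize = Size(),
                                   Size maxSize = Size(),
                                   bool outputRejectLevels = false );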

2017-07-11 08:50:36 -0600 asked a question cv::cuda::CascadeClassifier OpenCV 3.2 using data trained in OpenCV 2.4.11

I have run the cascade classifier in OpenCV 2.4.11 on both the CPU and, via the CUDA version, on an 860M GPU. The results can be wildly different in terms of accuracy, to the point where an object detected on the CPU has the same rectangle size in the GPU version but a location that is way off. I need to know if anyone has experienced this. If you have, did you retrain the classifier for that GPU?

I have tested my trained data (2.4.11) with the OpenCV 2.4.11 CPU path. I then updated the code base to use 3.2, and it's the same thing: wildly different results. Sometimes it's correct, sometimes it isn't.

I'm using LBP and feeding it full 1920x1080 images.
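
For reference, the comparison on my end looks roughly like this (cascade path, image name, and parameters are placeholders, written from memory):

    #include <opencv2/objdetect.hpp>
    #include <opencv2/cudaobjdetect.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/imgproc.hpp>
    #include <vector>
    #include <string>

    int main()
    {
        // Same LBP cascade trained with opencv_traincascade in 2.4.11 (placeholder path)
        const std::string cascadePath = "lbp_cascade.xml";

        cv::Mat frame = cv::imread("frame_1920x1080.png");
        cv::Mat gray;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        // CPU path
        cv::CascadeClassifier cpuCascade(cascadePath);
        std::vector<cv::Rect> cpuObjects;
        cpuCascade.detectMultiScale(gray, cpuObjects, 1.1, 3);

        // CUDA path (3.2 API)
        cv::Ptr<cv::cuda::CascadeClassifier> gpuCascade =
            cv::cuda::CascadeClassifier::create(cascadePath);
        gpuCascade->setScaleFactor(1.1);
        gpuCascade->setMinNeighbors(3);

        cv::cuda::GpuMat grayGpu(gray), objectsGpu;
        gpuCascade->detectMultiScale(grayGpu, objectsGpu);

        std::vector<cv::Rect> gpuObjects;
        gpuCascade->convert(objectsGpu, gpuObjects);

        // Rectangles from the two paths often have the same size
        // but land in very different locations.
        return 0;
    }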

This is both a bug (I've filed a GitHub issue) and a question.

2017-06-14 09:35:37 -0600 asked a question OpenCV 3.2 vs OpenCV 2.4.11 SVM trainer files and predict

I was wondering whether the predict function and trainer files trained in 2.4 are incompatible with OpenCV 3.2.

In the code below:

        svm->kernel->calc(sv_total, svm->var_count, svm->sv.ptr<float>(), row_sample, buffer);

which calls

    case SVM::RBF:
        calc_rbf(vcount, var_count, vecs, another, results);

which is:

    void calc_rbf( int vcount, int var_count, const float* vecs,
                   const float* another, Qfloat* results )
    {
        double gamma = -params.gamma;
        int j, k;

        // For every support vector in 'vecs', accumulate the squared Euclidean
        // distance to the input sample 'another', four components at a time.
        for( j = 0; j < vcount; j++ )
        {
            const float* sample = &vecs[j*var_count];
            double s = 0;

            for( k = 0; k <= var_count - 4; k += 4 )
            {
                double t0 = sample[k] - another[k];
                double t1 = sample[k+1] - another[k+1];

                s += t0*t0 + t1*t1;

                t0 = sample[k+2] - another[k+2];
                t1 = sample[k+3] - another[k+3];

                s += t0*t0 + t1*t1;
            }

            // Remaining components when var_count is not a multiple of 4.
            for( ; k < var_count; k++ )
            {
                double t0 = sample[k] - another[k];
                s += t0*t0;
            }
            results[j] = (Qfloat)(s*gamma);
        }

        // Final kernel value: results[j] = exp(-params.gamma * ||sample_j - another||^2)
        if( vcount > 0 )
        {
            Mat R( 1, vcount, QFLOAT_TYPE, results );
            exp( R, R );
        }
    }

It seems like the buffer or the samples I obtain differ in precision between OpenCV 3.2 and 2.4.
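
For context, this is roughly how I load the 2.4-trained file and call predict in 3.2 (the file name and sample data are placeholders):

    #include <opencv2/core.hpp>
    #include <opencv2/ml.hpp>

    int main()
    {
        // SVM trained and saved with CvSVM::save() in OpenCV 2.4.11 (placeholder path)
        cv::Ptr<cv::ml::SVM> svm = cv::Algorithm::load<cv::ml::SVM>("svm_2_4_11.xml");

        // One feature vector with the same length and ordering as the training samples
        cv::Mat sample(1, svm->getVarCount(), CV_32F, cv::Scalar(0));

        float response = svm->predict(sample);

        // The 2.4 equivalent was:
        //   CvSVM svm; svm.load("svm_2_4_11.xml");
        //   float response = svm.predict(sample);
        return 0;
    }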

2017-06-12 10:16:08 -0600 asked a question void CascadeClassifier::detectMultiScale opencv 3.2-dev parameter assumption
    void CascadeClassifier::detectMultiScale( InputArray image, ...

calls

    void clipObjects(Size sz, std::vector<Rect>& objects,
                     std::vector<int>* a, std::vector<double>* b)
    {
        size_t i, j = 0, n = objects.size();
        Rect win0 = Rect(0, 0, sz.width, sz.height);
        if(a)
        {
            CV_Assert(a->size() == n);
        }
        if(b)
        {
            CV_Assert(b->size() == n);
        }

        // ...

Here a and b are rejectLevels and levelWeights. It looks like this code is trying to clip rectangles that go outside the image bounds (in 2.4 I remember illegal rects simply being tossed out). The assert throws when I don't use rejectLevels/levelWeights but detectedObjects is non-empty and I use detectedObjects. I don't think this is right.
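
For context, the call I'm making is roughly this (cascade path, image, and parameters are placeholders); no rejectLevels/levelWeights are passed, yet the assert fires once something is detected:

    #include <opencv2/objdetect.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/imgproc.hpp>
    #include <vector>

    int main()
    {
        // Placeholder cascade and frame
        cv::CascadeClassifier cascade("lbp_cascade.xml");
        cv::Mat frame = cv::imread("frame_1920x1080.png");

        cv::Mat gray;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        // Plain overload: no rejectLevels / levelWeights arguments here,
        // yet the CV_Assert inside clipObjects still throws once
        // detectedObjects is non-empty.
        std::vector<cv::Rect> detectedObjects;
        cascade.detectMultiScale(gray, detectedObjects, 1.1, 3, 0, cv::Size(30, 30));

        return 0;
    }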