blDraX's profile - activity

2018-05-30 21:10:33 -0500 received badge  Good Question (source)
2018-05-09 14:53:43 -0500 received badge  Popular Question (source)
2015-10-24 08:22:38 -0500 received badge  Supporter (source)
2015-10-11 04:33:45 -0500 commented question First use of UMat does not run on GPU?

So I tracked the problem through the debugger.

During the creation of the Kernel in ocl_bilateralFilter_8u (smooth.cpp, line 2965; Kernel is built in line 3030), OpenCV seems to build the OpenCL Program in ocl.cpp, line 3234.

The problem finally occurs in getProg (ocl.cpp, line 2580) in the line

Program prog(src, buildflags, errmsg); //ocl.cpp, line 2589

This finally calls

    retval = clBuildProgram(handle, n,
                            (const cl_device_id*)deviceList,
                            buildflags.c_str(), 0, 0);

in ocl.cpp, line 3499. The debugger won't let me step any deeper, but this call is what takes up all the time. Since I can't step into the function, I have no idea what to do or what's going wrong.

2015-10-11 03:10:38 -0500 commented question First use of UMat does not run on GPU?

Output:

1 GPU devices are detected.
name                 : Pitcairn
available            : 1
imageSupport         : 1
OpenCL_C_Version     : OpenCL C 1.2

I don't really see a problem unfortunately. Do you?

2015-10-11 02:28:31 -0500 commented question First use of UMat does not run on GPU?

Very strange... how does this even happen? I also compiled the program with VS2013 and used OpenCV 3.0 gold. I tested the program on both my desktop PC and my notebook, and on both I get the same result.

I also tested your suggestion, but it didn't change anything. The problem doesn't seem to be the images; it is always the first use of a library function with a specific set of parameters.

I'll try it on a different PC later.

2015-10-10 08:17:31 -0500 received badge  Editor (source)
2015-10-10 08:16:05 -0500 asked a question First use of UMat does not run on GPU?

I'm experiencing some very strange behaviour using UMats. My intent is to speed up my algorithms by running OpenCV library functions on my GPU (AMD HD 7850, OpenCL capable). To test this, I load a set of seven images and perform a bilateral filter or a Sobel operation on them.

However, it seems that every time I use one of those functions with a new set of parameters, it is executed on the CPU first; only from the second use of the same parameters onward does my program use the GPU. I compiled this with VS 2013 and OpenCV 3.0 gold.

For example, using the same bilateral filter on all images:

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/core/ocl.hpp>

#include <iostream>
#include <string>
#include <vector>
#include <chrono>

using namespace std;
using namespace cv;

int main()
{
    cout << "Have OpenCL?: " << cv::ocl::haveOpenCL() << endl; // returns true, OCL available
    ocl::setUseOpenCL(true);

    // Load images test1.jpg, ..., test7.jpg
    vector<UMat> images;
    for (int i = 1; i <= 7; i++)
    {
        string filename = "test" + to_string(i) + ".jpg";
        UMat input = imread(filename).getUMat(ACCESS_READ);
        images.push_back(input);
    }

    for (int i = 0; i < 7; i++)
    {
        chrono::high_resolution_clock::time_point begin = chrono::high_resolution_clock::now();

        // ---------------------- Critical section --------------------------
        UMat result;
        bilateralFilter(images.at(i), result, 0, 10, 3);
        // ------------------------------------------------------------------

        chrono::high_resolution_clock::time_point end = chrono::high_resolution_clock::now();
        long long ms = chrono::duration_cast<chrono::milliseconds>(end - begin).count();
        cout << ms << " ms" << endl;
    }
}

Output:

2251 ms
5 ms
5 ms
5 ms
5 ms
5 ms
5 ms

The GPU utilization does go up, but only after about 2 seconds, i.e. once the first iteration has completed. When using a different set of parameters on each iteration, however:

        // ...
        UMat result;
        bilateralFilter(images.at(i), result, 0, i * 10, 3);
        // ...

Output:

2148 ms
2098 ms
1803 ms
1699 ms
1826 ms
1760 ms
1766 ms

And all of it is executed on my CPU. Worse, those functions run extremely slowly: with Mat instead of UMat the same operations take only about 40 ms. I guess there's some crosstalk between the program and OpenCL before the library decides to fall back to the CPU.

The same behaviour shows when using Sobel:

        // ...
        UMat result;
        if (i == 0)
            cv::Sobel(images.at(i), result, CV_32F, 1, 0, 5);
        else if (i == 1)
            cv::Sobel(images.at(i), result, CV_32F, 0, 1, 5);
        else
            cv::Sobel(images.at(i), result, CV_32F, 1, 1, 5);
        // ...

The first three operations are executed on the CPU. Iterations 4 to 7 then finish on the GPU almost immediately (they use the same parameter set as iteration 3), with the GPU utilization once again going up. Output:

687 ms
567 ms
655 ms
0 ms
0 ms
1 ms
0 ms

Is this a bug? Am I doing something wrong? Applying each operation once at the start of the program just to prevent this feels very hacky. Also, I don't know how long the parameter usages are "cached" (I use this word since I have no idea what happens in the background ... (more)

2015-07-30 07:33:41 -0500 received badge  Enthusiast
2015-07-28 07:50:15 -0500 asked a question Finding Connected Components in Natural Color Images

I've been working on an application that extracts characters from natural images, i.e. color images with a lot of structure. Up to now I've been using Canny Edge Detector and the Stroke Width Transform to extract components from the image.

For comparison I also want to use a different method based on segmentation by color. Basically, what I want is to split my image into different components consisting of neighboring pixels with similar color values. Based on popular approaches for connected component labeling I've iterated through the image and used Union-Find in order to merge similar regions. However, since I have natural images with a lot of structure, there are literally hundreds and hundreds of (mostly very small) components within one image. Note for example the structure of the trees:

(image: natural scene with finely structured trees)

This makes the approach very slow: the first pass doing the raw labeling is fast, but working out which of the up to thousands of regions to merge takes too much time. The problem persists even after filtering and using a coarser quantization.

I also tried OpenCV's flood fill, which usefully supports a mask. Starting a flood fill from each not-yet-assigned pixel was quite fast. However, the mask uses uchar and therefore can't store labels larger than 255, so I had to use multiple masks, which feels quite hacky. Flood fill is also not very flexible regarding its similarity measure.

The connected-components functionality of OpenCV is of course not applicable, since I don't work on binary images.

Does anybody know of a good approach that can be used for my problem? Maybe I just haven't found the right functions in OpenCV yet?

2015-07-20 04:31:31 -0500 received badge  Scholar (source)
2015-07-15 12:31:03 -0500 received badge  Nice Question (source)
2015-07-14 04:11:54 -0500 asked a question Is there any reason not to use UMat?

I'm currently writing a piece of software that uses various different modules of OpenCV (some examples are edge detection via Canny, filtering operators, optical flow and I have some own algorithms that work on the opencv matrices).

My question is: with the introduction of UMat in OpenCV 3, is there any reason to still use Mat? Currently I'm still using Mat everywhere (having only recently moved to 3.0), but trying out Farnebäck's optical flow method, I realized it's much faster with the GPU speedup of UMat. I like uniformity, so ideally I'd use ONLY UMat or ONLY Mat throughout my software. I'm therefore thinking about using UMat everywhere so I won't have to convert between the two.

Is this a good idea? Are there drawbacks to using UMat everywhere? I've read of cases where using the GPU actually led to a loss of speed. Do these problems still exist in the gold release?

2015-06-22 07:32:37 -0500 received badge  Student (source)
2015-06-22 07:24:15 -0500 asked a question NormalBayesClassifier Predict Errors

I'm trying to get a NormalBayesClassifier running, and by now I have the impression that I'm fundamentally misusing this class.

So far, this is the (complete) code:

#include <opencv2/ml/ml.hpp>

int main()
{
    cv::Ptr<cv::ml::NormalBayesClassifier> bayes = cv::ml::NormalBayesClassifier::create();
    cv::Mat_<float> trainFeatures(2, 2);
    cv::Mat_<int> trainClasses(2, 1);

    trainFeatures.at<float>(0, 0) = 0;
    trainFeatures.at<float>(0, 1) = 0;
    trainFeatures.at<float>(1, 0) = 1;
    trainFeatures.at<float>(1, 1) = 1;
    trainClasses.at<int>(0, 0) = 0;
    trainClasses.at<int>(1, 0) = 1;

    bayes->train(trainFeatures, cv::ml::ROW_SAMPLE, trainClasses);

    cv::Mat_<float> test(2, 2);

    test.at<float>(0, 0) = 0;
    test.at<float>(0, 1) = 0;
    test.at<float>(1, 0) = 0;
    test.at<float>(1, 1) = 0;

    bayes->predict(test);
}

If I try to run this, the program crashes (in Debug mode) at the predict statement with the following exception:

Run-Time Check Failure #2 - Stack around the variable 'value' was corrupted.

Ouch.

If I test only a single sample, I don't get a crash, but I still get weird exceptions (debugger output translated from German):

// ...same as above...
cv::Mat_<float> test(1, 2);

test.at<float>(0, 0) = 0;
test.at<float>(0, 1) = 0;

bayes->predict(test);

Results in:

First-chance exception at 0x000007FEFCD6B3DD in Components.exe: Microsoft C++ exception: cv::Exception at memory location 0x000000000013D990.
First-chance exception at 0x000007FEFCD6B3DD in Components.exe: Microsoft C++ exception: cv::Exception at memory location 0x000000000013F460.

Ultimately I want to do something like this:

// ...same as above...
cv::Mat_<float> test(2, 2);
cv::Mat result;
cv::Mat resultP;

test.at<float>(0, 0) = 0;
test.at<float>(0, 1) = 0;
test.at<float>(1, 0) = 0;
test.at<float>(1, 1) = 0;

bayes->predictProb(test, result, resultP);

Needless to say, this doesn't work either. OpenCV prints an error for it:

OpenCV Error: Null pointer (When the number of input samples is >1, the output vector of results must be passed) in cv::ml::NormalBayesClassifierImpl::predictProb, file C:\builds\master_PackSlave-win64-vc12-shared\opencv\modules\ml\src\nbayes.cpp, line 318

Can anyone see what I'm doing wrong? I use an SVM classifier and a KNN classifier in exactly the same way, and they work like a charm. I'm using the OpenCV 3 gold release, by the way (but had the same errors in RC1).