crynsane's profile - activity

2016-02-16 09:27:25 -0600 asked a question OpenCL not using GPU with HAAR evaluator

As the title says, when I use a HAAR cascade (new or old format), the cv::ocl::setUseOpenCL() call has no effect on detection speed or graphics card memory usage. When I switch to an LBP cascade, toggling OpenCL on and off with the same call yields almost a 2x difference in achieved FPS and a significant difference in graphics card memory usage. Another odd thing is that this happens regardless of whether I use cv::Mat or cv::UMat, the latter being slightly slower due to data copying.
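
For context, a simplified sketch of the kind of loop I'm timing (the cascade file and camera index here are placeholders, not my exact setup):

    #include <opencv2/core.hpp>
    #include <opencv2/core/ocl.hpp>
    #include <opencv2/imgproc.hpp>
    #include <opencv2/objdetect.hpp>
    #include <opencv2/videoio.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
        // Placeholder cascade file and camera index.
        cv::CascadeClassifier cascade("lbpcascade_frontalface.xml");
        cv::VideoCapture cap(0);
        if (cascade.empty() || !cap.isOpened())
            return 1;

        for (int useOcl = 0; useOcl <= 1; ++useOcl)
        {
            cv::ocl::setUseOpenCL(useOcl != 0);

            cv::Mat frame, gray;
            cap >> frame;
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

            // cv::UMat variant; with a plain cv::Mat the behavior is the
            // same for me, which is the surprising part.
            cv::UMat ugray;
            gray.copyTo(ugray);

            std::vector<cv::Rect> faces;
            int64 t0 = cv::getTickCount();
            cascade.detectMultiScale(ugray, faces);
            double ms = (cv::getTickCount() - t0) * 1000.0 / cv::getTickFrequency();

            std::cout << "OpenCL " << (useOcl ? "on" : "off") << ": "
                      << ms << " ms, " << faces.size() << " detections\n";
        }
        return 0;
    }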

Specs:

General configuration for OpenCV 3.1.0-dev =====================================
  Version control:               3.1.0-117-g35014a9

  Platform:
    Host:                        Linux 3.19.0-20-lowlatency x86_64
    CMake:                       2.8.12.2
    CMake generator:             Unix Makefiles
    CMake build tool:            /usr/bin/make
    Configuration:               Release
  Other third-party libraries:
    Use IPP:                     9.0.1 [9.0.1]
         at:                     /mnt/OpenCV/opencv/3rdparty/ippicv/unpack/ippicv_lnx
    Use IPP Async:               NO
    Use VA:                      NO
    Use Intel VA-API/OpenCL:     NO
    Use Eigen:                   YES (ver 3.2.0)
    Use Cuda:                    YES (ver 6.5)
    Use OpenCL:                  YES
    Use custom HAL:              NO
  OpenCL:                        <Dynamic loading of OpenCL library>
    Include path:                /mnt/OpenCV/opencv/3rdparty/include/opencl/1.2
    Use AMDFFT:                  NO
    Use AMDBLAS:                 NO
  Graphics card:
    NVidia GTX 760

2016-02-02 02:23:18 -0600 received badge  Enthusiast
2016-02-01 07:59:33 -0600 received badge  Scholar (source)
2016-02-01 07:16:49 -0600 received badge  Student (source)
2016-02-01 07:16:46 -0600 received badge  Teacher (source)
2016-02-01 06:21:34 -0600 received badge  Necromancer (source)
2016-02-01 06:21:34 -0600 received badge  Self-Learner (source)
2016-02-01 05:36:29 -0600 answered a question detectMultiScale speed

I've probably found the reason for this. The two slower versions are using the GPU, while the fast version is using the CPU. The computer I'm using has an Intel HD graphics chip, which probably explains the gap. I had declared the variables as cv::Mat, not cv::UMat, so I wasn't expecting this behavior, and it still somewhat baffles me.
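
For anyone hitting the same thing, a minimal sketch of how to check which device the transparent API picks and how to force the CPU path (standard cv::ocl calls in OpenCV 3.x):

    #include <opencv2/core.hpp>
    #include <opencv2/core/ocl.hpp>
    #include <iostream>

    int main()
    {
        if (cv::ocl::haveOpenCL())
        {
            cv::ocl::Device dev = cv::ocl::Device::getDefault();
            std::cout << "Default OpenCL device: " << dev.name() << std::endl;
        }

        // Even with cv::Mat inputs, many OpenCV 3.x functions dispatch to
        // OpenCL internally through the transparent API; disabling it here
        // forces the pure CPU path for the comparison.
        cv::ocl::setUseOpenCL(false);
        std::cout << "OpenCL in use: " << std::boolalpha
                  << cv::ocl::useOpenCL() << std::endl;
        return 0;
    }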

2015-12-19 10:57:42 -0600 commented question detectMultiScale speed

The results are exactly the same, except for the speed.

2015-12-18 07:36:36 -0600 asked a question detectMultiScale speed

Hello,
I've recently upgraded to OpenCV 3.0 and found that (at least in my case) the detectMultiScale overload that takes the reject_levels and level_weights parameters is twice as fast as the other two (the one taking the num_detections vector and the default one). I've tested this many times on a sufficiently large sample of images. Has this happened to any of you?
I've looked through the source code, and the only difference seems to come down to CascadeClassifierInvoker, which doesn't look like it should affect the speed as much as it does.
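
For reference, a simplified sketch of the timing comparison across the three overloads (the cascade file and test image names are placeholders):

    #include <opencv2/core.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/objdetect.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
        cv::CascadeClassifier cascade("cascade.xml");                    // placeholder
        cv::Mat gray = cv::imread("test.png", cv::IMREAD_GRAYSCALE);     // placeholder
        if (cascade.empty() || gray.empty())
            return 1;

        std::vector<cv::Rect> objects;
        std::vector<int> numDetections, rejectLevels;
        std::vector<double> levelWeights;

        // Overload 1: the default one.
        int64 t0 = cv::getTickCount();
        cascade.detectMultiScale(gray, objects);
        std::cout << "default:       "
                  << (cv::getTickCount() - t0) * 1000.0 / cv::getTickFrequency() << " ms\n";

        // Overload 2: with the num_detections output vector.
        t0 = cv::getTickCount();
        cascade.detectMultiScale(gray, objects, numDetections);
        std::cout << "numDetections: "
                  << (cv::getTickCount() - t0) * 1000.0 / cv::getTickFrequency() << " ms\n";

        // Overload 3: with reject_levels / level_weights (the one that runs
        // roughly twice as fast for me).
        t0 = cv::getTickCount();
        cascade.detectMultiScale(gray, objects, rejectLevels, levelWeights,
                                 1.1, 3, 0, cv::Size(), cv::Size(), true);
        std::cout << "rejectLevels:  "
                  << (cv::getTickCount() - t0) * 1000.0 / cv::getTickFrequency() << " ms\n";
        return 0;
    }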