Any way to have opencv_dnn run on GPU (python)?

asked 2018-02-21 10:54:45 -0600 by giacomo

Hi, I am running some Caffe models on an NVIDIA Jetson TX1, in Python, loading the models via opencv_dnn. This works fine, though it is quite slow.

In order to gain some speed, I recompiled OpenCV 3.4.0 with Halide support on AArch64, and I can activate it by calling setPreferableBackend(dnn.DNN_BACKEND_HALIDE).
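For context, here is a minimal sketch of my setup (the model file names and input image are placeholders, not my actual files):

    import cv2

    # Placeholder model files; substitute your own Caffe deploy files.
    net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "model.caffemodel")

    # Ask for the Halide backend instead of the default one.
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_HALIDE)

    # Preprocess an image into a 4D blob and run a forward pass.
    img = cv2.imread("input.jpg")
    blob = cv2.dnn.blobFromImage(img, 1.0, (224, 224), (104, 117, 123))
    net.setInput(blob)
    out = net.forward()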

Unfortunately, with Halide the performance is approximately 10x slower. Furthermore, it makes no use of the GPU.

If I understand correctly, there is no plan to include cuDNN as a backend: the preferred route is Halide. Halide, however, lacks GPU support, is at an early stage of integration, and performs poorly even on the CPU.

My question is: is there any way to run a Caffe model via opencv_dnn with any form of (NVIDIA) GPU acceleration?

After much investigation, my answer is NO, but can someone confirm or deny this?

Thank you!

Giacomo


Comments

@giacomo, there is one more setting besides the preferable backend: the preferable target device for computations. Look at https://docs.opencv.org/master/db/d30... . You can run OpenCL code on NVIDIA GPUs.
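In code, that suggestion would look roughly like this (model paths are placeholders):

    import cv2

    net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "model.caffemodel")

    # Keep the default backend, but request the OpenCL target;
    # computations may then be dispatched to the GPU via OpenCL.
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_DEFAULT)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_OPENCL)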

dkurt ( 2018-02-22 01:06:02 -0600 )

Hi @dkurt, I had tried DNN_TARGET_OPENCL, with no success. In particular:

  • DNN_TARGET_OPENCL + DNN_BACKEND_DEFAULT behaves exactly like DNN_TARGET_CPU: both use the CPU only and deliver the same performance

  • DNN_TARGET_OPENCL + DNN_BACKEND_HALIDE gives me a runtime error:

"[ INFO:0] Initialize OpenCL runtime... Error: CL: clGetPlatformIDs failed: <unknown error> -1001 Aborted"

I'd appreciate any further pointers on how to run OpenCL code on Nvidia GPUs!

As an additional observation, when I take a look at dir(cv2.dnn), I cannot see the backend "DNN_BACKEND_INFERENCE_ENGINE" mentioned in the docs.
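A quick way to check which constants a given build exposes (this only lists names, it does not prove the backend actually works):

    import cv2

    # Print every DNN_* backend/target constant in this build's bindings;
    # DNN_BACKEND_INFERENCE_ENGINE appears only if OpenCV was built with
    # the Intel Inference Engine.
    print([name for name in dir(cv2.dnn) if name.startswith("DNN_")])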

Giacomo

giacomo ( 2018-02-22 02:41:40 -0600 )