Forward propagation of readNetFromTensorflow on GPU not CPU

asked 2018-06-05 00:30:24 -0500

khaled elmadawi

System information (version)

OpenCV => 4.0
Windows / Platform => Windows 64 Bit
Compiler => Visual Studio 2015

Detailed description

I can forward propagate my TensorFlow DNN using the "readNetFromTensorflow" function, but I want faster forward propagation by running inference on the GPU instead of the CPU.

I recompiled the library with the OpenCL, CUDA, and TBB options set to ON, and in my code I call "net.setPreferableTarget(DNN_TARGET_OPENCL);", but the inference time is unchanged. Can anyone tell me how to run inference on the GPU, or otherwise get faster inference?



@khaled elmadawi , please complete your question with information about GPU, OpenCL drivers. Have you compiled OpenCV with options -DWITH_OPENCL=ON and -DOPENCV_DNN_OPENCL=ON?
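For concreteness, a CMake configuration with those options might look like the following (the generator and source-tree path are assumptions based on the Visual Studio 2015 / 64-bit setup in the question):

```shell
# Run from an empty build directory next to the OpenCV source tree.
cmake -G "Visual Studio 14 2015 Win64" ^
      -DWITH_OPENCL=ON ^
      -DOPENCV_DNN_OPENCL=ON ^
      -DWITH_TBB=ON ^
      ..\opencv
cmake --build . --config Release
```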

dkurt ( 2018-06-05 02:44:23 -0500 )

Thanks for the reply. My GPU is a Quadro M2000. Yes, I did set -DWITH_OPENCL=ON and -DOPENCV_DNN_OPENCL=ON, but I am not sure about the OpenCL drivers. Can you tell me how to check them or where to download them?

khaled elmadawi ( 2018-06-05 06:02:50 -0500 )

I checked the OpenCL platform; it reports version 1.2.

khaled elmadawi ( 2018-06-05 06:34:11 -0500 )

@dkurt any news?

khaled elmadawi ( 2018-06-07 07:44:13 -0500 )