System information (version)
OpenCV => 4.0
Operating System / Platform => Windows 64 Bit
Compiler => Visual Studio 2015
Detailed description
I can forward-propagate my TensorFlow DNN using the `readNetFromTensorflow` function, but I want faster forward propagation by inferring on the GPU instead of the CPU.
I recompiled the library with OpenCL, CUDA, and TBB enabled, and in my code I call `net.setPreferableTarget(DNN_TARGET_OPENCL);`, but the inference time is the same. Can anyone tell me how to run inference on the GPU, or otherwise speed it up?
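For reference, a minimal sketch of my setup (the model path and input size below are placeholders, not my real values). As far as I understand, the OpenCL target is used together with the default OpenCV backend:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>
#include <opencv2/dnn.hpp>
#include <iostream>

int main() {
    // Placeholder path: substitute your own frozen TensorFlow graph.
    cv::dnn::Net net = cv::dnn::readNetFromTensorflow("frozen_graph.pb");

    // Select the built-in OpenCV backend and request the OpenCL target.
    // This only takes effect if OpenCV was built with OpenCL support and
    // an OpenCL-capable device is available at runtime.
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_OPENCL);

    // Dummy 224x224 RGB input; replace with a real image/blob.
    cv::Mat img(cv::Size(224, 224), CV_32FC3);
    cv::randu(img, 0.f, 1.f);
    net.setInput(cv::dnn::blobFromImage(img));
    cv::Mat out = net.forward();

    // Sanity check: report whether OpenCL is actually available/active,
    // since the target request is silently ignored otherwise.
    std::cout << "haveOpenCL: " << cv::ocl::haveOpenCL()
              << ", useOpenCL: " << cv::ocl::useOpenCL() << std::endl;
    return 0;
}
```

One thing worth checking is the output of `cv::ocl::haveOpenCL()` / `cv::ocl::useOpenCL()`: if OpenCL is not detected at runtime, the forward pass falls back to the CPU, which would explain identical timings.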