Forward propagation of readNetFromTensorflow on GPU not CPU

System information (version)

OpenCV => 4.0
Operating System / Platform => Windows 64 Bit
Compiler => Visual Studio 2015

Detailed description

I can forward propagate my TensorFlow DNN using the readNetFromTensorflow function, but I want faster forward propagation by inferring on the GPU instead of the CPU.

I recompiled the library with WITH_OPENCL, WITH_CUDA, and WITH_TBB enabled, and in the code I used net.setPreferableTarget(DNN_TARGET_OPENCL);, but the inference time is the same. Can anyone tell me how to run inference on the GPU, or otherwise get faster inference? A minimal sketch of what I am doing is shown below.
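
For reference, a minimal sketch of how I load the network and select the OpenCL target (the model, config, and image paths are placeholders for my actual files, and the input size depends on the model):

    #include <opencv2/dnn.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/core/ocl.hpp>
    #include <iostream>

    using namespace cv;
    using namespace cv::dnn;

    int main()
    {
        // Check whether OpenCV actually sees an OpenCL device at runtime.
        std::cout << "OpenCL available: " << cv::ocl::haveOpenCL() << std::endl;

        // Placeholder paths for the frozen graph and its text config.
        Net net = readNetFromTensorflow("frozen_graph.pb", "graph.pbtxt");

        // Select the default OpenCV backend and the OpenCL (GPU) target.
        net.setPreferableBackend(DNN_BACKEND_OPENCV);
        net.setPreferableTarget(DNN_TARGET_OPENCL);

        Mat img = imread("input.jpg");                       // placeholder input image
        Mat blob = blobFromImage(img, 1.0, Size(300, 300));  // input size depends on the model
        net.setInput(blob);
        Mat out = net.forward();                             // this is the forward pass I am timing

        return 0;
    }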