Specify GPU device with Python API

I am working on a project that uses TensorFlow (with GPU support) to process features extracted from OpenCV (3.4.3, using the Python API) captures in real time. Whenever I try to read from the capture after starting the TF session, I get the following error from cuDNN:

F1028 02:37:31.456640 xxxxx cudnn_conv_layer.cu:28] Check failed: status == CUDNN_STATUS_SUCCESS (8 vs. 0) CUDNN_STATUS_EXECUTION_FAILED

I suppose the issue is that OpenCV and TensorFlow are both using the GPU via CUDA at the same time and the GPU runs out of memory. My current workaround is to start capturing with OpenCV first and only start the TF session when it is actually needed. That way TensorFlow sees that the GPU is busy and opts to use the CPU only. However, the frame rate drops significantly as a result.

Since I only use OpenCV for capturing and basic preprocessing, I don't think it needs GPU support, and I would prefer to let TensorFlow use the GPU.

Is there a way to specify which GPU device OpenCV should use from the Python API (or whether it should use the GPU at all)? The C++ API has a setDevice() method in the gpu namespace; is there an equivalent in the Python API?
