Decode with multiple GPUs: How can I specify which GPU to use in cv::VideoCapture()?

asked 2020-02-17 03:03:49 -0500

RobinHu

I have 2 videos to be decoded, and I have 2 available gpus.

I want to construct two video captures to decode video 0 with gpu 0 and decode video 1 with gpu 1.

Is there any way to do something like:

cv::VideoCapture vc0 = cv::VideoCapture(name0, "gpu_idx", 0)

cv::VideoCapture vc1 = cv::VideoCapture(name1, "gpu_idx", 1)

How can I achieve this feature? Thanks!

BTW, I set OPENCV_FFMPEG_CAPTURE_OPTIONS="video_codec;h264_cuvid" to use ffmpeg with cuvid to decode videos in opencv.
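For context, OpenCV's FFmpeg backend parses that variable as semicolon-separated key/value pairs, with multiple pairs joined by `|`. A minimal sketch of the format (the specific keys that take effect depend on your FFmpeg build):

```shell
# Single option: decoder selection
export OPENCV_FFMPEG_CAPTURE_OPTIONS="video_codec;h264_cuvid"

# Multiple options: key;value pairs joined by '|'
export OPENCV_FFMPEG_CAPTURE_OPTIONS="video_codec;h264_cuvid|rtsp_transport;tcp"
```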



unfortunately, cv::VideoCapture won't use any gpu for decoding, but maybe you can use something from here (requires building opencv with cuda support / contrib)

berak ( 2020-02-17 03:21:04 -0500 )

Thanks for your response! But I actually do use the GPU for decoding, by setting the environment variable OPENCV_FFMPEG_CAPTURE_OPTIONS="video_codec;h264_cuvid". As for the doc you referred to, I learned from issues on the OpenCV GitHub that cv::cudacodec::VideoReader is no longer supported after CUDA 6.5. That is why I use the environment variable to decode on the GPU instead.

RobinHu ( 2020-02-17 03:39:51 -0500 )

this environment variable allows you to set any key and value and pass it to ffmpeg. now you need to figure out if ffmpeg's cuvid has any way to control which gpu is used.

crackwitz ( 2020-02-19 09:35:15 -0500 )
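One possible direction, sketched below: FFmpeg's cuvid decoders may expose a per-decoder option for selecting the GPU (often listed as `gpu` in the decoder's help output, but verify against your local build). If such an option exists, it could in principle be appended to the same options string. Since the variable is read when a capture is opened, changing it between the two constructions is one conceivable way to steer each capture to a different device; this is an assumption to test, not a confirmed recipe:

```shell
# Inspect what options your build's h264_cuvid decoder actually accepts
# (look for a GPU-selection option in the output):
ffmpeg -h decoder=h264_cuvid

# If a "gpu" option is listed, it might be appended like any other pair.
# Set before opening video 0:
export OPENCV_FFMPEG_CAPTURE_OPTIONS="video_codec;h264_cuvid|gpu;0"
# ...open vc0 here, then switch before opening video 1:
export OPENCV_FFMPEG_CAPTURE_OPTIONS="video_codec;h264_cuvid|gpu;1"
# ...open vc1 here
```

An alternative worth checking is whether `CUDA_VISIBLE_DEVICES` can isolate devices per process, if the two decoders can run in separate processes.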