Hello! I'm trying to run the opencv_test_cudacodec test, but I'm getting errors. First, I get CUDA_ERROR_INVALID_SOURCE in cuvid_video_source.cpp:
CUresult cuRes = cuvidCreateVideoSource(&videoSource_, fname.c_str(), &params);
if (cuRes == CUDA_ERROR_INVALID_SOURCE)
    throw std::runtime_error("");
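For debugging, one can print the exact driver error name before the throw (a minimal debugging sketch, not the upstream OpenCV code; cuGetErrorName is a CUDA driver API call and needs <cuda.h> and <iostream>):

// Debugging sketch: map the CUresult to its name so the actual failure
// reason shows up in the log.
static void logCuError(const char* where, CUresult cuRes)
{
    const char* name = nullptr;
    cuGetErrorName(cuRes, &name); // e.g. "CUDA_ERROR_INVALID_SOURCE"
    std::cerr << where << " failed: " << (name ? name : "unknown CUresult") << std::endl;
}
// usage, just before the throw above:
// logCuError("cuvidCreateVideoSource", cuRes);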
The exception is then caught in video_reader.cpp, which falls back to creating an FFmpegVideoSource:
catch (...)
{
    Ptr<RawVideoSource> source(new FFmpegVideoSource(filename));
    videoSource.reset(new RawVideoSourceWrapper(source));
}
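Since catch (...) discards the reason for the fallback, a narrower handler makes debugging easier (a local sketch, not the upstream code; note that the runtime_error above is thrown with an empty message, so what() may be blank):

// Debugging sketch: log what made the cuvid source fail before falling
// back to FFmpeg, instead of swallowing it (needs <iostream>).
catch (const std::exception& e)
{
    std::cerr << "cuvid source failed (" << e.what() << "), trying FFmpeg" << std::endl;
    Ptr<RawVideoSource> source(new FFmpegVideoSource(filename));
    videoSource.reset(new RawVideoSourceWrapper(source));
}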
But in ffmpeg_video_source.cpp, stream_ comes back null:
stream_ = create_InputMediaStream_FFMPEG_p(fname.c_str(), &codec, &chroma_format, &width, &height);
if (!stream_)
    CV_Error(Error::StsUnsupportedFormat, "Unsupported video source");
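As a sanity check, one can verify that the FFmpeg backend (the opencv_ffmpeg*.dll that create_InputMediaStream_FFMPEG_p is loaded from) can open the file at all through the regular API (a sketch; the video path is an assumption):

// Sanity-check sketch: if this fails too, the problem is the FFmpeg DLL
// or the file path rather than anything cudacodec-specific.
#include <opencv2/videoio.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap("768x576.avi", cv::CAP_FFMPEG); // force the FFmpeg backend
    std::cout << (cap.isOpened() ? "FFmpeg backend opened the file"
                                 : "FFmpeg backend failed") << std::endl;
    return cap.isOpened() ? 0 : 1;
}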
In the console I see the following output:
Available options besides google test option:
Usage: opencv_test_cudacodecd.exe [params]

        --cuda_device (value:-1)
                CUDA device on which tests will be executed (-1 means all devices)
        -h, --help (value:false)
                Print help info

Run tests on all supported CUDA devices
[----------]
[ GPU INFO ] Run on OS Windows x64.
[----------]
*** CUDA Device Query (Runtime API) version (CUDART static linking) ***
Device count: 1
Device 0: "GeForce GTX 1050 Ti"
CUDA Driver Version / Runtime Version 9.20 / 9.20
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 4096 MBytes (4294967296 bytes)
GPU Clock Speed: 1.46 GHz
Max Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072,65536), 3D=(16384,16384,16384)
Max Layered Texture Size (dim) x layers 1D=(32768) x 2048, 2D=(32768,32768) x 2048
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 2147483647 x 65535 x 65535
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Concurrent kernel execution: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support enabled: No
Device is using TCC driver mode: No
Device supports Unified Addressing (UVA): Yes
Device PCI Bus ID / PCI location ID: 1 / 0
Compute Mode:
Default (multiple host threads can use ::cudaSetDevice() with device simultaneously)
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.20, CUDA Runtime Version = 9.20, NumDevs = 1
CTEST_FULL_OUTPUT
OpenCV version: 3.4.2
OpenCV VCS version: unknown
Build type: debug
Parallel framework: ms-concurrency
CPU features:
Intel(R) IPP optimization: disabled
Intel(R) IPP version: ippIP SSE4.2 (y8) 2017.0.3 (-) Jul 31 2017
[ INFO:0] Initialize OpenCL runtime...
OpenCL is disabled
[==========] Running 4 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 4 tests from CUDA_Codec/Video
[ RUN ] CUDA_Codec/Video.Reader/0, where GetParam() = (GeForce GTX 1050 Ti, "768x576.avi")
unknown file: error: C++ exception with description "OpenCV(3.4.2) D:\opencv\sources\modules\cudacodec\src\ffmpeg_video_source.cpp:110: error: (-210:Unsupported format or combination of formats) Unsupported video source in function 'cv::cudacodec::detail::FFmpegVideoSource::FFmpegVideoSource'
" thrown in the test body.
[ FAILED ] CUDA_Codec/Video.Reader/0, where GetParam() = (GeForce GTX 1050 Ti, "768x576.avi") (7076 ms)
Windows 10 x64
Visual Studio 2015 Community
OpenCV 3.4.2
CUDA 9.2
Video Codec SDK 8.2.15
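For what it's worth, the same failure should be triggerable outside the test harness with a minimal program along the lines of the OpenCV video_reader sample (a sketch; it assumes 768x576.avi from opencv_extra sits in the working directory):

// Minimal repro sketch: exercises the same cuvid -> FFmpeg fallback path
// as CUDA_Codec/Video.Reader does.
#include <opencv2/cudacodec.hpp>
#include <iostream>

int main()
{
    try
    {
        cv::Ptr<cv::cudacodec::VideoReader> reader =
            cv::cudacodec::createVideoReader(cv::String("768x576.avi"));
        cv::cuda::GpuMat frame;
        std::cout << (reader->nextFrame(frame) ? "got a frame" : "no frame") << std::endl;
    }
    catch (const cv::Exception& e)
    {
        std::cerr << e.what() << std::endl; // same error as in the test log
        return 1;
    }
    return 0;
}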
Does anybody have any ideas?