OpenCV 2.4.11 GPU features

Hi all,

I have an Alienware laptop with 2 x GTX 880M (SLI) running Windows 10. I recompiled OpenCV 2.4.11 with the CUDA and CUBLAS options enabled, using CUDA 7.0.

I did the same on another computer with a GTX 980 running Windows 8, using the same options and CUDA 7.0.
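
To double-check that CUDA and CUBLAS really ended up in both builds, the configuration can be dumped at runtime with something like this (a minimal sketch; cv::getBuildInformation() is part of the core module):

    #include <iostream>
    #include <opencv2/core/core.hpp>

    int main()
    {
        // The "NVIDIA CUDA" section of this dump shows whether CUDA and CUBLAS
        // were enabled and which architectures (CUDA_ARCH_BIN/PTX) were compiled in.
        std::cout << cv::getBuildInformation() << std::endl;
        return 0;
    }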

On the Alienware, running opencv_test_gpu.exe gives me 1067 failed tests. My own code using the OpenCV CUDA module also crashes and suffers from performance problems, and I suspect this strange behaviour is linked to these test failures.
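
For reference, here is roughly how each device can be checked against the build at runtime (a minimal sketch using the 2.4 gpu module API):

    #include <iostream>
    #include <opencv2/gpu/gpu.hpp>

    int main()
    {
        int count = cv::gpu::getCudaEnabledDeviceCount();   // 0 if OpenCV was built without CUDA
        std::cout << "CUDA devices: " << count << std::endl;

        for (int id = 0; id < count; ++id)
        {
            cv::gpu::DeviceInfo info(id);
            std::cout << id << ": " << info.name()
                      << " (compute " << info.majorVersion() << "." << info.minorVersion() << "), "
                      << "compatible with this OpenCV build: "
                      << (info.isCompatible() ? "yes" : "no") << std::endl;
        }
        return 0;
    }

Here is the test output on the Alienware: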

[----------]
[ GPU INFO ]    Run on OS Windows x64.
[----------]
*** CUDA Device Query (Runtime API) version (CUDART static linking) *** 

Device count: 2

Device 0: "GeForce GTX 880M"
  CUDA Driver Version / Runtime Version          7.50 / 7.0
  CUDA Capability Major/Minor version number:    3.0
  Total amount of global memory:                 8192 MBytes (8589934592 bytes)
  ( 8) Multiprocessors x (192) CUDA Cores/MP:     1536 CUDA Cores
  GPU Clock Speed:                               0.99 GHz
  Max Texture Dimension Size (x,y,z)             1D=(65536), 2D=(65536,65536), 3D=(4096,4096,4096)
  Max Layered Texture Size (dim) x layers        1D=(16384) x 2048, 2D=(16384,16384) x 2048
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per block:           1024
  Maximum sizes of each dimension of a block:    1024 x 1024 x 64
  Maximum sizes of each dimension of a grid:     2147483647 x 65535 x 65535
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and execution:                 Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Concurrent kernel execution:                   Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support enabled:                No
  Device is using TCC driver mode:               No
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Bus ID / PCI location ID:           7 / 0
  Compute Mode:
      Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) 

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version  = 7.50, CUDA Runtime Version = 7.0, NumDevs = 2

[...]

Run tests on all supported devices 

    [==========] Running 51682 tests from 128 test cases.
    [----------] Global test environment set-up.
    [----------] 4 tests from GPU_Video/FGDStatModel
    [ RUN      ] GPU_Video/FGDStatModel.Update/0
    E:\opencv2.4.11-recomp\opencv\sources\modules\gpu\test\test_bgfg.cpp(95): error: Value of: cap.isOpened()
      Actual: false
    Expected: true
    [  FAILED  ] GPU_Video/FGDStatModel.Update/0, where GetParam() = (GeForce GTX 880M, "768x576.avi", Channels(3)) (12 ms)
    [ RUN      ] GPU_Video/FGDStatModel.Update/1
    E:\opencv2.4.11-recomp\opencv\sources\modules\gpu\test\test_bgfg.cpp(95): error: Value of: cap.isOpened()
      Actual: false
    Expected: true
    [  FAILED  ] GPU_Video/FGDStatModel.Update/1, where GetParam() = (GeForce GTX 880M, "768x576.avi", Channels(4)) (3 ms)
    [ RUN      ] GPU_Video/FGDStatModel.Update/2
    E:\opencv2.4.11-recomp\opencv\sources\modules\gpu\test\test_bgfg.cpp(95): error: Value of: cap.isOpened()
      Actual: false
    Expected: true
    [  FAILED  ] GPU_Video/FGDStatModel.Update/2, where GetParam() = (GeForce GTX 880M, "768x576.avi", Channels(3)) (4 ms)
    [ RUN      ] GPU_Video/FGDStatModel.Update/3
    E:\opencv2.4.11-recomp\opencv\sources\modules\gpu\test\test_bgfg.cpp(95): error: Value of: cap.isOpened()
      Actual: false
    Expected: true

...

[  FAILED  ] GPU_ImgProc/WarpPerspectiveNPP.Accuracy/68, where GetParam() = (GeForce GTX 880M, 32FC4, direct, INTER_CUBIC)
[  FAILED  ] GPU_ImgProc/WarpPerspectiveNPP.Accuracy/69, where GetParam() = (GeForce GTX 880M, 32FC4, inverse, INTER_NEAREST)
[  FAILED  ] GPU_ImgProc/WarpPerspectiveNPP.Accuracy/70, where GetParam() = (GeForce GTX 880M, 32FC4, inverse, INTER_LINEAR)
[  FAILED  ] GPU_ImgProc/WarpPerspectiveNPP.Accuracy/71, where GetParam() = (GeForce GTX 880M, 32FC4, inverse, INTER_CUBIC)

1067 FAILED TESTS
  YOU HAVE 302 DISABLED TESTS

Does this look like a known issue?

On the computer with the GTX 980, opencv_test_gpu produces an even stranger result.

[----------]
[ GPU INFO ]    Run on OS Windows x64.
[----------]
*** CUDA Device Query (Runtime API) version (CUDART static linking) ***

Device count: 1

Device 0: "GeForce GTX 980"
  CUDA Driver Version / Runtime Version          7.50 / 7.0
  CUDA Capability Major/Minor version number:    5.2
  Total amount of global memory:                 4096 MBytes (4294967296 bytes)
  GPU Clock Speed:                               1.28 GHz
  Max Texture Dimension Size (x,y,z)             1D=(65536), 2D=(65536,65536), 3D=(4096,4096,4096)
  Max Layered Texture Size (dim) x layers        1D=(16384) x 2048, 2D=(16384,16384) x 2048
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per block:           1024
  Maximum sizes of each dimension of a block:    1024 x 1024 x 64
  Maximum sizes of each dimension of a grid:     2147483647 x 65535 x 65535
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and execution:                 Yes with 2 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Concurrent kernel execution:                   Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support enabled:                No
  Device is using TCC driver mode:               No
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Bus ID / PCI location ID:           1 / 0
  Compute Mode:
      Default (multiple host threads can use ::cudaSetDevice() with device simultaneously)

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version  = 7.50, CUDA Runtime Version = 7.0, NumDevs = 1

Run tests on all supported devices

[==========] Running 0 tests from 0 test cases.
[==========] 0 tests from 0 test cases ran. (1 ms total)
[  PASSED  ] 0 tests.
Press any key to continue . . .



F:\OpenCV 2.4.11\opencv\sources\build64\bin\Release>gpu_perf4au.exe
[----------]
[   INFO   ]    Implementation variant: cuda.
[----------]
[----------]
[ FAILURE  ]    Device GeForce GTX 980 is NOT compatible with current GPU module build.
[----------]
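
For what it's worth, here is a minimal sketch (using cv::gpu::TargetArchs from the 2.4 gpu module) that reports whether the library contains code for a given compute capability; 5.2 and 3.0 are the values reported by deviceQuery above:

    #include <iostream>
    #include <opencv2/gpu/gpu.hpp>

    int main()
    {
        std::cout << std::boolalpha;
        // The gpu module can only run on a device if the library contains a
        // matching cubin, or PTX that the driver can JIT-compile for that device.
        std::cout << "cubin for 3.0 (GTX 880M): " << cv::gpu::TargetArchs::hasBin(3, 0) << std::endl;
        std::cout << "cubin for 5.2 (GTX 980):  " << cv::gpu::TargetArchs::hasBin(5, 2) << std::endl;
        std::cout << "PTX usable on 5.2:        " << cv::gpu::TargetArchs::hasEqualOrLessPtx(5, 2) << std::endl;
        return 0;
    }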

Can someone help me understand what is happening?

Thanks in advance.
