2020-09-23 04:03:55 -0600
| received badge | ● Popular Question
|
2015-08-19 07:57:42 -0600
| received badge | ● Enthusiast
|
2015-08-18 07:36:09 -0600
| commented answer | OpenCV 2.4.11 Windows VS Linux We use a very modern computer (less than 6 months old), and the two operating systems are installed on the same machine, so it is exactly the same hardware.
Since CUDA uses the hardware directly, small variations are understandable, but such a big one looks strange |
2015-08-18 07:33:43 -0600
| commented question | OpenCV 2.4.11 Windows VS Linux I can run my code any time I want; the results are still the same. |
2015-08-14 15:47:18 -0600
| answered a question | Visual Studio Reference In Visual Studio, references are for assemblies (.NET DLLs) or COM objects.
You can't reference your C++ dependencies that way. For information about how Windows locates and loads DLLs, you should read this article from Microsoft:
Article |
2015-08-14 05:52:37 -0600
| commented answer | OpenCV 2.4.11 Windows VS Linux I think such a big difference deserves a deeper analysis.
50% more performance is really too much to attribute to the operating system and its drivers.
The drivers are also up to date. |
2015-08-14 05:49:58 -0600
| commented question | OpenCV 2.4.11 Windows VS Linux LBerger, thank you for this interesting link, but I'm not sure I have exactly the same problem.
I'm aware of the OpenCL issues, and my OpenCV version is built without any OpenCL parts. To me, the difference is so big that it can't just be operating-system overhead.
These are two operating systems on the same computer, so it's exactly the same hardware.
A 50% difference is far too large. |
2015-08-13 11:54:19 -0600
| asked a question | OpenCV 2.4.11 Windows VS Linux It seems OpenCV with CUDA has better performance on Linux than on Windows. I have compared, on the same computer, the performance of a GTX 980 under Windows 10 and Ubuntu 15.04.
The perf results on Linux are better than on Windows.
I compiled OpenCV with the same compilation flags on each operating system. Why is there such a big difference between the two OSes on the same hardware? For example: Sz_Type_KernelSz_Filters_Blur.Filters_Blur/11:
Windows 10: 295ms
Ubuntu 15.04: 281ms
Sz_Type_KernelSz_Filters_Sobel.Filters_Sobel/13:
Windows 10: 120ms
Ubuntu 15.04: 53ms
Sz_Type_Filters_Scharr.Filters_Scharr/7:
Windows 10: 170ms
Ubuntu 15.04: 91ms
Sz_Type_KernelSz_Filters_GaussianBlur.Filters_GaussianBlur/13:
Windows 10: 105ms
Ubuntu 15.04: 53ms
Sz_Type_KernelSz_Filters_Filter2D.Filters_Filter2D/41:
Windows 10: 1130ms
Ubuntu 15.04: 1075ms
Sz_Depth_Cn_MatOp_SetToMasked.MatOp_SetToMasked/32:
Windows 10: 220ms
Ubuntu 15.04: 183ms
Sz_Depth_Cn_Inter_Border_Mode_ImgProc_Remap.ImgProc_Remap/658:
Windows 10: 215ms
Ubuntu 15.04: 198ms
Sz_Depth_Cn_Inter_Scale_ImgProc_Resize.ImgProc_Resize/44:
Windows 10: 255ms
Ubuntu 15.04: 231ms
|
2015-08-12 08:25:07 -0600
| commented answer | Device GeForce GTX 980 is NOT compatible with current GPU module build When using cmake-gui on Windows, I have 3 variables:
- CUDA_ARCH_BIN: 3.0 3.5 5.0
- CUDA_ARCH_PTX: 5.0
- CUDA_GENERATION: can be Auto, Fermi, or Kepler; I left it empty.
I compiled with these values, but you are saying that for Maxwell cards I have to use 5.2 instead of 5.0? |
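For reference, a configuration that also targets Maxwell might look like the sketch below. The source path and Visual Studio generator name are placeholders; the key point is that a GTX 980 reports compute capability 5.2, so 5.2 should appear in `CUDA_ARCH_BIN` (or at least in `CUDA_ARCH_PTX`), and `CUDA_GENERATION` is left empty so the explicit arch lists are used.

```shell
# Sketch: configure OpenCV 2.4.x so the CUDA module includes Maxwell code
# (GTX 980 = compute capability 5.2). Run from an empty build directory;
# the source path and generator are placeholders for your own setup.
cmake ../opencv \
  -G "Visual Studio 12 2013 Win64" \
  -D WITH_CUDA=ON \
  -D WITH_CUBLAS=ON \
  -D CUDA_ARCH_BIN="3.0 3.5 5.0 5.2" \
  -D CUDA_ARCH_PTX="5.2" \
  -D CUDA_GENERATION=""
```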
2015-08-10 10:45:58 -0600
| asked a question | Device GeForce GTX 980 is NOT compatible with current GPU module build Hi all, I compiled OpenCV 2.4.11 with CUDA 7 and cuBLAS, and there was no error at compile time.
But I still don't understand why no tests are available when running opencv_test_gpu: [----------]
[ GPU INFO ] Run on OS Windows x64.
[----------]
*** CUDA Device Query (Runtime API) version (CUDART static linking) ***
Device count: 1
Device 0: "GeForce GTX 980"
CUDA Driver Version / Runtime Version 7.0 / 7.0
CUDA Capability Major/Minor version number: 5.2
Total amount of global memory: 4096 MBytes (4294967296 bytes)
GPU Clock Speed: 1.28 GHz
Max Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536,65536), 3D=(4096,4096,4096)
Max Layered Texture Size (dim) x layers 1D=(16384) x 2048, 2D=(16384,16384) x 2048
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 2147483647 x 65535 x 65535
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Concurrent kernel execution: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support enabled: No
Device is using TCC driver mode: No
Device supports Unified Addressing (UVA): Yes
Device PCI Bus ID / PCI location ID: 1 / 0
Compute Mode:
Default (multiple host threads can use ::cudaSetDevice() with device simultaneously)
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 7.0, CUDA Runtime Version = 7.0, NumDevs = 1
Run tests on all supported devices
[==========] Running 0 tests from 0 test cases.
[==========] 0 tests from 0 test cases ran. (0 ms total)
[ PASSED ] 0 tests.
Running opencv_perf_gpu.exe also produces a strange, unexpected result: [----------]
[ INFO ] Implementation variant: cuda.
[----------]
[----------]
[ FAILURE ] Device GeForce GTX 980 is NOT compatible with current GPU module build.
[----------]
Could someone help me find an explanation?
Thanks. |
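The perf binary's compatibility check can be reproduced in a few lines with the OpenCV 2.4 gpu API: `DeviceInfo::isCompatible()` returns false when the binary contains no code (binary or PTX) matching the device's compute capability, which is exactly the "NOT compatible" situation above. This is a minimal sketch, assuming an OpenCV 2.4.x build with the gpu module available:

```cpp
// Sketch: check whether the current OpenCV gpu-module build can run on
// each CUDA device, mirroring the check done by opencv_perf_gpu.
#include <iostream>
#include <opencv2/gpu/gpu.hpp>

int main() {
    int count = cv::gpu::getCudaEnabledDeviceCount();
    std::cout << "CUDA devices: " << count << std::endl;
    for (int i = 0; i < count; ++i) {
        cv::gpu::DeviceInfo info(i);
        std::cout << info.name() << " (CC " << info.majorVersion() << "."
                  << info.minorVersion() << "): "
                  << (info.isCompatible() ? "compatible" : "NOT compatible")
                  << std::endl;
    }
    return 0;
}
```

If this prints "NOT compatible" for the GTX 980, the build was configured without code for compute capability 5.2 and needs to be reconfigured and recompiled.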
2015-08-07 04:51:10 -0600
| received badge | ● Scholar
|
2015-08-06 08:34:33 -0600
| commented answer | OpenCV 2.4.11 GPU features Are you using OpenCV only on Linux, or also under Windows? |
2015-08-06 08:33:21 -0600
| commented question | OpenCV 2.4.11 GPU features SLI was deactivated when the tests ran ... |
2015-08-06 08:31:28 -0600
| commented answer | OpenCV 2.4.11 GPU features If I understand your post correctly, you mean that all the tests failed because they can't find the location of the test data? That may be true for the Alienware computer.
But what about "Device GeForce GTX 980 is NOT compatible with current GPU module" when trying to execute opencv_perf_gpu? Which options are you using when compiling under Linux?
I set WITH_CUDA and WITH_CUBLAS to true and CUDA_GENERATION=Kepler, then ran the build.
Unfortunately, the compilation stopped with this error on Ubuntu 15: ../../lib/libopencv_core.so.2.4.11: undefined reference to `__cudaRegisterLinkedBinary_64_tmpxft_00000a3d_00000000_10_matrix_operations_compute_35_cpp1_ii_332650c4'
|
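A commonly suggested first step for this kind of undefined-reference error is to reconfigure from a completely fresh build tree, since object files compiled for a previous CUDA_GENERATION or arch list can linger in the CMake cache and build directory. A sketch of a clean Linux reconfigure, with placeholder paths, is:

```shell
# Sketch: rebuild OpenCV 2.4.x from a fresh build directory so no stale
# CUDA object files from an earlier configuration remain.
rm -rf build && mkdir build && cd build
cmake ../opencv \
  -D CMAKE_BUILD_TYPE=Release \
  -D WITH_CUDA=ON \
  -D WITH_CUBLAS=ON \
  -D CUDA_GENERATION=Kepler
make -j"$(nproc)"
```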
2015-08-06 08:20:53 -0600
| received badge | ● Editor
|
2015-08-06 04:36:05 -0600
| commented question | OpenCV 2.4.11 GPU features @StevenPuttemans Do you have any idea where the problem comes from? As a reminder, with OpenCV 2.4.11:
- GTX 880M / Windows 10 / >1000 GPU tests failed, but drivers are up to date
- GTX 980 / Windows 8 / no compatible GPU module; drivers are also up to date
- GTX 980 / Ubuntu 15 / I also have a problem, to investigate ...
|
2015-08-05 08:36:29 -0600
| asked a question | OpenCV 2.4.11 GPU features Hi all, I have an Alienware laptop with 2 x GTX 880M (SLI) running Windows 10.
I recompiled OpenCV 2.4.11 with the CUDA and cuBLAS options, using CUDA 7.0. I did the same on another computer, which has a GTX 980 on Windows 8, with the same options and also CUDA 7.0. On my Alienware, when running opencv_test_gpu.exe, I get 1067 failed tests.
With my own code using OpenCV CUDA, I'm experiencing crashes and performance issues; I think these strange behaviours may be linked to these test failures. Run tests on all supported devices
[==========] Running 51682 tests from 128 test cases.
[----------] Global test environment set-up.
[----------] 4 tests from GPU_Video/FGDStatModel
[ RUN ] GPU_Video/FGDStatModel.Update/0
E:\opencv2.4.11-recomp\opencv\sources\modules\gpu\test\test_bgfg.cpp(95): error: Value of: cap.isOpened()
Actual: false
Expected: true
[ FAILED ] GPU_Video/FGDStatModel.Update/0, where GetParam() = (GeForce GTX 880M, "768x576.avi", Channels(3)) (12 ms)
[ RUN ] GPU_Video/FGDStatModel.Update/1
E:\opencv2.4.11-recomp\opencv\sources\modules\gpu\test\test_bgfg.cpp(95): error: Value of: cap.isOpened()
Actual: false
Expected: true
[ FAILED ] GPU_Video/FGDStatModel.Update/1, where GetParam() = (GeForce GTX 880M, "768x576.avi", Channels(4)) (3 ms)
[ RUN ] GPU_Video/FGDStatModel.Update/2
E:\opencv2.4.11-recomp\opencv\sources\modules\gpu\test\test_bgfg.cpp(95): error: Value of: cap.isOpened()
Actual: false
Expected: true
[ FAILED ] GPU_Video/FGDStatModel.Update/2, where GetParam() = (GeForce GTX 880M, "768x576.avi", Channels(3)) (4 ms)
[ RUN ] GPU_Video/FGDStatModel.Update/3
E:\opencv2.4.11-recomp\opencv\sources\modules\gpu\test\test_bgfg.cpp(95): error: Value of: cap.isOpened()
Actual: false
Expected: true
... [ FAILED ] GPU_ImgProc/WarpPerspectiveNPP.Accuracy/68, where GetParam() = (GeForce GTX 880M, 32FC4, direct, INTER_CUBIC)
[ FAILED ] GPU_ImgProc/WarpPerspectiveNPP.Accuracy/69, where GetParam() = (GeForce GTX 880M, 32FC4, inverse, INTER_NEAREST)
[ FAILED ] GPU_ImgProc/WarpPerspectiveNPP.Accuracy/70, where GetParam() = (GeForce GTX 880M, 32FC4, inverse, INTER_LINEAR)
[ FAILED ] GPU_ImgProc/WarpPerspectiveNPP.Accuracy/71, where GetParam() = (GeForce GTX 880M, 32FC4, inverse, INTER_CUBIC)
1067 FAILED TESTS
YOU HAVE 302 DISABLED TESTS
Does this look like a normal issue? On the computer with the GTX 980, opencv_test_gpu produces an even stranger result. Run tests on all supported devices
[==========] Running 0 tests from 0 test cases.
[==========] 0 tests from 0 test cases ran. (1 ms total)
[ PASSED ] 0 tests.
Press any key to continue . . .
F:\OpenCV 2.4.11\opencv\sources\build64\bin\Release>gpu_perf4au.exe
[----------]
[ INFO ] Implementation variant: cuda.
[----------]
[----------]
[ FAILURE ] Device GeForce GTX 980 is NOT compatible with current GPU module
build.
[----------]
Can someone help me understand what is happening? Thanks. |
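The `cap.isOpened()` failures on `768x576.avi` suggest the test binaries simply cannot find their sample data: OpenCV's test and perf executables locate it via the `OPENCV_TEST_DATA_PATH` environment variable. A sketch of setting it on Windows before rerunning (the directory is a placeholder for a local checkout of the `opencv_extra` repository):

```shell
rem Sketch: point the OpenCV test binaries at the opencv_extra test data,
rem then rerun the gpu test suite. The path below is a placeholder.
set OPENCV_TEST_DATA_PATH=E:\opencv_extra\testdata
opencv_test_gpu.exe
```

This would explain the video-related failures on the Alienware machine; the "NOT compatible" failure on the GTX 980 is a separate issue (missing compute-capability 5.2 code in the build).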