2013-10-02 02:33:11 -0600 | received badge | ● Supporter (source) |
2013-10-02 02:30:10 -0600 | asked a question | GPU cascade classifier and cascade format Hi, in the OpenCV documentation I have come across the following statement about loading a cascade with the GPU classifier: "Only the old haar classifier (trained by the haar training application) and NVIDIA’s nvbin are supported for HAAR and only new type of OpenCV XML cascade supported for LBP." Is this true, or is the documentation simply outdated? I would try it myself, but I am still waiting for my NVIDIA card, so the question I am facing now is whether to tackle this problem before my card arrives. If the statement is true, is there any simple way to convert a "new" cascade based on Haar features (trained with traincascade) into the old-format cascade (as produced by the Haar training application)? Any other suggestions are much appreciated. Thanks in advance! |
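One possible route for the conversion question above (worth verifying against the traincascade documentation, since it requires retraining rather than converting an existing XML): opencv_traincascade accepts a `-baseFormatSave` flag for HAAR cascades, which saves the result in the old haartraining-compatible format that the GPU classifier expects. A minimal sketch, where all paths and data files are illustrative placeholders:

```shell
# Hedged sketch: retrain with opencv_traincascade and ask it to write the
# HAAR cascade in the old (haartraining-style) format via -baseFormatSave.
# cascade_dir, positives.vec, and negatives.txt are placeholder names.
opencv_traincascade -data cascade_dir \
                    -vec positives.vec \
                    -bg negatives.txt \
                    -featureType HAAR \
                    -baseFormatSave
```

Note that `-baseFormatSave` only applies to HAAR-type cascades; LBP cascades cannot be saved in the old format.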
2013-10-02 01:41:00 -0600 | commented answer | OpenCV with CUDA: Large opencv_gpu246.dll file Thanks, that makes a lot of sense. |
2013-10-01 10:02:43 -0600 | received badge | ● Editor (source) |
2013-10-01 09:47:42 -0600 | asked a question | OpenCV with CUDA: Large opencv_gpu246.dll file Hi, I am trying to compile OpenCV 2.4.6 with CUDA. The compilation passes fine; however, I get a very large opencv_gpu246.dll output file (around 300 MB). I left the options regarding GPU architectures unchanged in the CMake GUI, i.e.: CUDA_ARCH_BIN 1.1 1.2 1.3 2.0 2.1(2.0) 3.0, CUDA_ARCH_PTX 2.0. Even if I compile for only one architecture, e.g. CUDA_ARCH_BIN 3.0, CUDA_ARCH_PTX <empty>, I still get quite a large file (around 60 MB). Compared to the pre-built DLL that ships with OpenCV (less than 1 MB), the difference is huge. Has anyone observed the same behavior, and what may be the cause? Thanks in advance! |
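For reference, the architecture options described above can also be narrowed from the command line instead of the CMake GUI. A minimal configure sketch, assuming an out-of-source build where the source path `../opencv` is a placeholder (each binary architecture in CUDA_ARCH_BIN adds its own compiled GPU code to the DLL, so shortening the list shrinks the output):

```shell
# Hedged sketch: configure OpenCV 2.4.6 with CUDA, building binary GPU code
# for a single architecture (3.0) and no PTX fallback, to reduce the size
# of opencv_gpu246.dll. ../opencv is a placeholder for the source directory.
cmake -D WITH_CUDA=ON \
      -D CUDA_ARCH_BIN=3.0 \
      -D CUDA_ARCH_PTX="" \
      ../opencv
```

Dropping CUDA_ARCH_PTX means no forward compatibility with newer GPUs, so this trade-off is only appropriate when the target hardware is known.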