Running into memory issues when using cv::cuda::sum

asked 2016-02-21 09:28:09 -0600

liquidmetal

I want to run cv::sum on the GPU, so I looked at cv::cuda::sum. However, I'm running into memory issues. This GitHub gist has all the details:

https://gist.github.com/liquidmetal/9...

Any help on how to fix this would be appreciated. I suspect I may be doing something wrong.


Comments

On my computer, the following code works in a .cpp file:

#include <iostream>

#include <opencv2/core.hpp>        // cv::Mat
#include <opencv2/core/cuda.hpp>   // cv::cuda::GpuMat
#include <opencv2/cudaarithm.hpp>  // cv::cuda::sum

int main()
{
    // Create an identity matrix:
    // 1 0 0
    // 0 1 0
    // 0 0 1
    cv::Mat identity = cv::Mat::eye(3, 3, CV_32FC1);

    // Upload it to the GPU
    cv::cuda::GpuMat identity_gpu(identity);

    // Use the OpenCV CUDA method for summation
    cv::Scalar res = cv::cuda::sum(identity_gpu);

    std::cout << "res=" << res << std::endl;

    // Be a good citizen
    return 0;
}

The result is:

res=[3, 0, 0, 0]
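For reference, cv::cuda::sum returns a cv::Scalar, which always holds four values (one per channel). The matrix here is single-channel, so only the first element carries the actual sum (3) and the remaining entries stay 0. If you just need the number, a minimal way to read it out could be:

double total = res[0];  // 3.0 for the 3x3 identity matrix above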

But as I have never used OpenCV's CUDA module before, I cannot help you any further.

Eduardo ( 2016-02-21 13:28:36 -0600 )

Also, I tested your gist code without any problem (Windows 7 x64, VS2010). The output:

Cuda runtime version = 7000
Cuda driver version = 7000
Device count = 1
CUDA initialized and ready to go
res=[3, 0, 0, 0]

PS:

I had to add the include #include "cuda_runtime.h". It also works without calling initialize_cuda();.
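For context, the version/device lines in the output above come straight from the CUDA runtime API, which is why cuda_runtime.h is needed; since those calls are purely diagnostic, cv::cuda::sum itself works without them. A rough sketch of what such an initialize_cuda() helper might look like (the actual code in the gist may differ):

// Hypothetical diagnostic helper producing output like the lines above.
#include <cstdio>
#include "cuda_runtime.h"

void initialize_cuda()
{
    int runtime_version = 0, driver_version = 0, device_count = 0;

    cudaRuntimeGetVersion(&runtime_version);  // e.g. 7000 for CUDA 7.0
    cudaDriverGetVersion(&driver_version);
    cudaGetDeviceCount(&device_count);

    printf("Cuda runtime version = %d\n", runtime_version);
    printf("Cuda driver version = %d\n", driver_version);
    printf("Device count = %d\n", device_count);
    printf("CUDA initialized and ready to go\n");
}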

Eduardo ( 2016-02-21 13:39:04 -0600 )