
Birky's profile - activity

2015-01-07 07:22:48 -0500 received badge  Editor (source)
2015-01-07 06:28:14 -0500 asked a question What type of gpuMat is needed for gpu version of erode function?

I have the following code:

    cv::Mat klabelsMat;
    cv::Mat klabelsMat32(height, width, CV_32SC1, segMask);
    klabelsMat32.convertTo(klabelsMat, CV_8UC1);
    cv::gpu::GpuMat gpuMat;
    gpuMat.upload(klabelsMat);

    int SizeStrElem3 = 3;
    cv::Mat strElem3 = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(SizeStrElem3, SizeStrElem3));

    cv::gpu::erode(gpuMat, gpuMat, strElem3);
    cv::gpu::dilate(gpuMat, gpuMat, strElem3);
    cv::gpu::dilate(gpuMat, gpuMat, strElem3);
    cv::gpu::erode(gpuMat, gpuMat, strElem3);

    gpuMat.download(klabelsMat);
    finish = clock();  // 'finish' is declared earlier in my code (timing only)

    klabelsMat.convertTo(klabelsMat32, CV_32SC1);
    memcpy((char*)segMask, (char*)klabelsMat32.data, width*height*4 );

Here, segMask is an integer array containing values from 0 to 150. I want to apply the GPU versions of erode and dilate to it, but it doesn't work: CUDA crashes at the first erode call. Why does it not work?