What type of GpuMat is needed for the gpu version of the erode function?

I have the following code:

    cv::Mat klabelsMat;
    cv::Mat klabelsMat32(height, width, CV_32SC1, segMask);  // wrap the int label buffer
    klabelsMat32.convertTo(klabelsMat, CV_8UC1);             // labels are 0..150, so they fit in 8 bits
    cv::gpu::GpuMat gpuMat;
    gpuMat.upload(klabelsMat);

    int SizeStrElem3 = 3;
    cv::Mat strElem3 = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(SizeStrElem3, SizeStrElem3));

    // morphological opening (erode + dilate) followed by closing (dilate + erode)
    cv::gpu::erode(gpuMat, gpuMat, strElem3);
    cv::gpu::dilate(gpuMat, gpuMat, strElem3);
    cv::gpu::dilate(gpuMat, gpuMat, strElem3);
    cv::gpu::erode(gpuMat, gpuMat, strElem3);

    gpuMat.download(klabelsMat);
    finish = clock();

    klabelsMat.convertTo(klabelsMat32, CV_32SC1);            // back to 32-bit labels
    memcpy((char*)segMask, (char*)klabelsMat32.data, width * height * 4);

segMask is an integer array containing numbers from 0 to 150. I want to apply the gpu versions of erode and dilate to it, but it doesn't work: CUDA crashes at the first erode. Why does it not work?
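
In case it helps, here is a self-contained sketch of the same pipeline (a minimal repro under assumptions: OpenCV 2.x built with the gpu module, made-up width/height, and a dummy segMask standing in for my real segmentation output). I also added a type check before the first erode, since, if I read the gpu module documentation correctly, gpu::erode only supports CV_8UC1 and CV_8UC4 sources:

    // Minimal, self-contained sketch of the pipeline above (OpenCV 2.x).
    // The dummy segMask below just stands in for my real segmentation output.
    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/gpu/gpu.hpp>
    #include <cstring>
    #include <vector>

    int main()
    {
        const int width = 640, height = 480;        // made-up size for the sketch
        std::vector<int> segMask(width * height);
        for (size_t i = 0; i < segMask.size(); ++i)
            segMask[i] = static_cast<int>(i % 151); // dummy labels 0..150

        cv::Mat klabelsMat;
        cv::Mat klabelsMat32(height, width, CV_32SC1, &segMask[0]);
        klabelsMat32.convertTo(klabelsMat, CV_8UC1);   // labels fit in 8 bits (0..150)

        cv::gpu::GpuMat gpuMat;
        gpuMat.upload(klabelsMat);
        CV_Assert(gpuMat.type() == CV_8UC1);           // sanity check before erode

        cv::Mat strElem3 = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));

        cv::gpu::erode(gpuMat, gpuMat, strElem3);      // this is where it crashes for me
        cv::gpu::dilate(gpuMat, gpuMat, strElem3);
        cv::gpu::dilate(gpuMat, gpuMat, strElem3);
        cv::gpu::erode(gpuMat, gpuMat, strElem3);

        gpuMat.download(klabelsMat);
        klabelsMat.convertTo(klabelsMat32, CV_32SC1);  // back to 32-bit labels
        std::memcpy(&segMask[0], klabelsMat32.data, width * height * 4);
        return 0;
    }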