
pcardoso's profile - activity

2019-10-22 04:09:30 -0500 received badge  Popular Question (source)
2017-03-30 10:56:02 -0500 commented answer Blender with transparent background

Thanks, but I ended up not using FeatherBlender at all.

I used split and merge to set my mask (after a little dilation and blurring) as the alpha channel of the source image. Seems to work well enough.

    // Grow the mask slightly, then soften its edges
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::Mat dilatedMask;
    cv::dilate(mask, dilatedMask, kernel);

    cv::GaussianBlur(dilatedMask, mask, cv::Size(11, 11), 0, 0, cv::BORDER_REFLECT);

    // Use the softened mask as the alpha channel of the source image
    std::vector<cv::Mat> imageChannels;
    cv::split(foreground, imageChannels); // foreground is assumed to be CV_8UC4 (BGRA)
    imageChannels[3] = mask;

    cv::Mat res;
    cv::merge(imageChannels, res);
2017-03-30 05:36:38 -0500 asked a question Blender with transparent background

Is there a way to use a blender (e.g.: the FeatherBlender) but generating an output image with a transparent background?

I am trying to use (or abuse) the FeatherBlender to copy an image with a mask, but I want to smooth the edges a bit, like the feather tool in Photoshop.

The mask was generated with the grabcut algorithm; its edges are somewhat jagged and I would like to soften them.

Thanks

2017-03-30 05:32:23 -0500 received badge  Enthusiast
2017-03-10 05:00:13 -0500 commented question Grabcut mask values

Turns out the cause of most of my issues was the coordinates I was feeding to grabCut; once I fixed them, the problems went away.

Other lessons learned:

  • No need to reset the models on each iteration, in my experience
  • The comment about the initial values of the mask is indeed a typo
  • The 255 values in the mask come from the way the Python example works: they make the mask visible for display
  • And of course, I need to check my inputs properly
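Since bad ROI coordinates were the root cause here, a cheap bounds check before calling grabCut catches this class of problem early. A minimal sketch (`rect_inside` is a hypothetical helper, not part of OpenCV):

```python
def rect_inside(img_w, img_h, x, y, w, h):
    """Return True if the ROI (x, y, w, h) lies fully inside an img_w x img_h image."""
    return x >= 0 and y >= 0 and w > 0 and h > 0 and x + w <= img_w and y + h <= img_h

assert rect_inside(640, 480, 10, 10, 100, 100)       # fully inside
assert not rect_inside(640, 480, 600, 10, 100, 100)  # runs past the right edge
assert not rect_inside(640, 480, -5, 10, 100, 100)   # negative origin
```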

Thanks for the help @LBerger, it helped me a lot.

Here is my updated block from above: https://gist.github.com/pcardoso/5ecc901062b3ac7e5160bfb308f3fd5f

2017-03-09 05:47:06 -0500 commented question Grabcut mask values

I'm sure I'm missing a lot here; there are some things I cannot understand in the Python example (grabcut.py).

First, the mask seems to be initialized to zeros (BG), but the comment says PR_BG:

    mask = np.zeros(img.shape[:2],dtype = np.uint8) # mask initialized to PR_BG

Then, on each iteration, the bgdmodel and fgdmodel are cleared. Why?

    bgdmodel = np.zeros((1,65),np.float64)
    fgdmodel = np.zeros((1,65),np.float64)
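
For what it's worth, the numeric class values make the first point concrete: np.zeros fills the mask with GC_BGD (0), not GC_PR_BGD (2), so the comment in the example looks wrong. A NumPy sketch (the constants below mirror OpenCV's cv::GrabCutClasses values):

```python
import numpy as np

# These values mirror OpenCV's cv::GrabCutClasses
GC_BGD, GC_FGD, GC_PR_BGD, GC_PR_FGD = 0, 1, 2, 3

# np.zeros produces an all-"sure background" mask, not "probable background"
mask = np.zeros((480, 640), dtype=np.uint8)
print((mask == GC_BGD).all())  # prints True

# To genuinely start from "probable background", fill with GC_PR_BGD instead:
pr_bg_mask = np.full((480, 640), GC_PR_BGD, dtype=np.uint8)
```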

Thanks for the help!

2017-03-09 04:44:10 -0500 commented question Grabcut mask values

Updated the question with more information.

2017-03-09 04:43:48 -0500 received badge  Editor (source)
2017-03-08 12:41:15 -0500 asked a question Grabcut mask values

The documentation for GrabCut says this about the mask parameter:

@param mask Input/output 8-bit single-channel mask. The mask is initialized by the function when mode is set to GC_INIT_WITH_RECT. Its elements may have one of the cv::GrabCutClasses.

The first time I invoke grabCut it runs successfully and the mask is initialised. But the second time I do it, using the mask and mode == GC_INIT_WITH_MASK, I get the following error:

OpenCV Error: Bad argument (mask element value must be equal GC_BGD or GC_FGD or GC_PR_BGD or GC_PR_FGD) in checkMask, file /Volumes/build-storage/build/master_iOS-mac/opencv/modules/imgproc/src/grabcut.cpp, line 337

The values in the mask matrix are either 0 or 255, when I was expecting 0-3. Does this make sense? Am I doing anything wrong?

Update: I am using version 3.2.0 on iOS. The crash does not happen if I replace the 255 pixels with 1 (GC_FGD), but then all I get is a completely transparent image.
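One likely fix, since grabCut only accepts mask values 0-3: map the 0/255 display-style mask back to GrabCut classes before the second call. Mapping 255 to GC_PR_FGD rather than GC_FGD may also explain the transparent result, because the extraction step in the code below keeps only GC_PR_FGD pixels. A NumPy sketch (the constants mirror OpenCV's values; `normalize_display_mask` is a hypothetical helper):

```python
import numpy as np

# These values mirror OpenCV's cv::GrabCutClasses
GC_BGD, GC_FGD, GC_PR_BGD, GC_PR_FGD = 0, 1, 2, 3

def normalize_display_mask(display_mask):
    """Map a 0/255 display mask to valid grabCut input classes:
    255 -> probable foreground, 0 -> probable background."""
    return np.where(display_mask == 255, GC_PR_FGD, GC_PR_BGD).astype(np.uint8)

display = np.array([[0, 255], [255, 0]], dtype=np.uint8)
print(normalize_display_mask(display))  # [[2 3] [3 2]]
```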

Here is my main code block, in Objective-C. Some ivars and properties refer to the same thing (e.g., _mask and self.mask).

    - (UIImage *)processImage {
        if (!self.firstRun) {
            // grabCut only accepts 0-3 (the GC_* classes); map 255 values back to GC_FGD
            for (int y = 0; y < _mask.rows; y++) {
                for (int x = 0; x < _mask.cols; x++) {
                    uchar val = _mask.at<uchar>(y, x);
                    if (val == 255) {
                        _mask.at<uchar>(y, x) = GC_FGD;
                    }
                }
            }
            for (NSValue *r in self.foregroundPoints) {
                // note: the default thickness draws only the rectangle outline;
                // pass cv::FILLED to mark the whole region
                rectangle(_mask, cvRectFromCGRect(r.CGRectValue), GC_FGD);
            }
            for (NSValue *r in self.backgroundPoints) {
                rectangle(_mask, cvRectFromCGRect(r.CGRectValue), GC_BGD);
            }
        }

        grabCut(_inputImageMat,
                _mask,
                cvRectFromCGRect(_rect),
                _bgModel,
                _fgModel,
                1, // number of iterations
                _firstRun ? cv::GC_INIT_WITH_RECT : cv::GC_INIT_WITH_MASK);

        self.firstRun = false;

        Mat image = self.inputImageMat.clone();

        // Keep only the pixels marked as likely foreground (GC_PR_FGD);
        // pixels hard-marked GC_FGD are dropped by this comparison
        cv::compare(self.mask, cv::GC_PR_FGD, self.mask, cv::CMP_EQ);
        // Generate output image
        cv::Mat foreground(image.size(), CV_8UC3, cv::Scalar(255, 0, 255));
        image.copyTo(foreground, self.mask); // bg pixels not copied

        // image has magenta on transparent sections; replace before using
        return [UIImage imageWithCVMat:foreground];
    }