 Is there a way to apply a blur or median smoothing filter to an image, while supplying a mask of pixels that should be ignored?

I have a height map from a laser-scanner which I want to smooth. The map is not continuous; wherever the laser was not reflected, the map simply contains no height data.

If I arbitrarily set the height for missing values to zero (or any other value) and then blur the image, this will introduce a lot of error around these missing values, and around all edges and holes of objects.

So I need to supply a binary mask to the blur. Masked pixels should be ignored when calculating the new blur or median value of neighbor pixels.

How can I accomplish this?



assuming mask where 255 = valid pixel, 0 = invalid pixel

pseudo-code written for Python - OpenCV

image[mask == 0] = 0
blurred_image = cv2.blur(image, ksize)
blurred_mask = cv2.blur(mask, ksize)
result = blurred_image / blurred_mask


Although you write "assuming mask where 255 = valid pixel..." the code actually seems to assume that the mask is binary with 0/1. To use 255, the last line should be result = 255 * blurred_image / blurred_mask.

Is there any mathematical explanation for this approach to prove that this is correct?

I previously claimed this didn't work. However, I missed the image[mask == 0] = 0 part at the beginning. I agree that this is mathematically correct.


According to the documentation, for smoothing, you add up all the intensities in the neighborhood of a pixel and divide it by the size of the neighborhood, i.e. you take the average. Pixels that are black contribute 0 to the average. Thus, to adjust the average to only include non-black pixels, you need to adjust the size you used for smoothing.

This adjusted size (actually the scale) is obtained by blurred_mask = cv2.blur(mask). E.g. if you have a 3-by-3 kernel for smoothing and 4 pixels are black, the value at the kernel's center will be 5/9. Dividing by it multiplies your result by 9/5, i.e. effectively you only divided by 5.

image_blurred = image.copy()
image_blurred[mask == 0] = 0
image_blurred = cv2.blur(image_blurred, kernel_size)
mask_blurred = cv2.blur(mask, kernel_size)

image_blurred[mask != 0] = image_blurred[mask != 0] / mask_blurred[mask != 0]


You will have to combine a number of functions to do this, because the existing blur functions don't take a mask. Here is an example (I assume the value of maskImage is 255 if a pixel is relevant, and 0 if it is irrelevant):

erode(maskImage, maskImage, Mat()); // values near irrelevant pixels should not be changed
blur(sourceImage, blurredImage, Size(3,3));
blurredImage = sourceImage + ((blurredImage - sourceImage) & maskImage);


The last operation combines the two images sourceImage and blurredImage: values where maskImage was 255 come from blurredImage, and values where maskImage was 0 come from sourceImage.


Your proposed method is incorrect. This technique is frequently used in the fingerprint field. For each kernel that lies on the mask border, you have to clear the kernel values that do not belong to the mask image (the sum of the coefficients must be one).

You can try it yourself to see that it works properly. After applying erode(), all pixels on the border will get the value 0 in maskImage. And the sum of the coefficients is one, so no problem there either. Also, I am not sure I understand your remark about how it is used for fingerprints.

In the fingerprint field, for noise reduction we have to smooth the orientation map using a mask. This mask is generated in the quality-estimation process, and there are holes in it.

How can you do the & operation when the number of channels is not the same? Assuming that your input image and mask are NumPy arrays, you can do this:

import numpy as np
import cv2

imageAllBlurred = cv2.medianBlur(image, ksize)  # with a kernel size 'ksize' equal to 3, for example


It looks like cv2.medianBlur only works for 8-bit images as of OpenCV 4.1.1.

I think this answer from Alexander Smorkalov at the OpenCV4Android group suits you:

Quoting Alexander Smorkalov, for reference:

You can create a mask for an image in several different ways. If you create the images yourself, you can use the alpha channel as a mask. You need to save the image in *.png format, and then load it with flags=-1 in imread. Then split it with cv::split and pass the alpha channel as the mask.

If you cannot use the alpha channel, you can create the mask using threshold ( http://docs.opencv.org/trunk/modules/imgproc/doc/miscellaneous_transformations.html?highlight=threshold#cv.Threshold )



There is a misunderstanding here: creating the mask is not the problem, I already have the mask. But how can I smooth while using that mask? The cvSmooth function does not take a mask parameter. And simply blurring first and then erasing the masked values does not work: the values of masked pixels will "seep" into neighboring pixels.
