Revision history

initial version

Unwanted preprocessing in filter2D

I am experimenting with an edge detection algorithm using OpenCV in Python. For comparison I also wrote a simple convolution function that slides a kernel over every pixel of the image. However, the results of this 'manual' convolution and cv2.filter2D differ considerably, as can be seen in the attached picture. The filter2D function (middle row) is very fast compared to my implementation (bottom row), but its result misses edges. In the attached example a simple 3x3 horizontal edge kernel, [[1, 2, 1], [0, 0, 0], [-1, -2, -1]], is used, and notable differences are already visible. This might be a result of filter2D flipping the kernel. However, I store the outputs as abs(cv2.filter2D(img, -1, kernel)) and abs((roi*kernel).sum()) to make sure no negative values are present.

Because of these results, I suspect that filter2D performs some filtering or other image processing before or after applying the convolution. It is also possible that negative values are set to zero. I have little ambition to rewrite the whole filter2D function (since my simple implementation is too slow) and wondered whether this behaviour can be turned off, and if so, whether the documentation could be updated.

image description

Negative values in filter2D convolution

I am experimenting with an edge detection algorithm on a .JPG image using OpenCV in Python. For comparison I also wrote a simple convolution function that slides a kernel over every pixel of the image. However, the results of this 'manual' convolution and cv2.filter2D differ considerably, as can be seen in the attached picture. The filter2D function (middle row) is very fast compared to my implementation (bottom row), but its result misses some edges. In the attached example a simple 3x3 horizontal edge kernel, [[1, 2, 1], [0, 0, 0], [-1, -2, -1]], is used, and notable differences are already visible.

This seems to be caused by filter2D suppressing negative values. When I flip the kernel, the missing edges are displayed, and that image equals the difference between the images in row 2 (cv2) and row 3 (my implementation). Is there an option that allows filter2D to also keep negative (or absolute-magnitude) values? A simple workaround would be to run the convolution with both the kernel and its 180-degree flipped version and sum the two results, but this unnecessarily complicates the code.

Since my simple implementation is too slow, I have little ambition to rewrite the filter2D function to suit my needs, so I wondered whether there is an official workaround. Regardless of a solution, I think it would be good to update the documentation.

A minimal example using a .JPG image img and a padded version paddedImg, with kernel as described above, would be:

imgYUV = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)
clr1, clr2, clr3 = cv2.split(imgYUV)
pad = 1
paddedImg = cv2.copyMakeBorder(clr1, pad, pad, pad, pad, cv2.BORDER_REPLICATE)
(iH, iW) = paddedImg.shape[:2]
resultsMyImplementation = np.zeros((iH, iW, 1), dtype='int32')
for y in np.arange(pad, iH - pad):
    for x in np.arange(pad, iW - pad):
        roi = paddedImg[y - pad:y + pad + 1, x - pad:x + pad + 1]
        k = (roi * kernel).sum()
        resultsMyImplementation[y - pad, x - pad] = k
resultsCV2 = cv2.filter2D(clr1, -1, kernel)

image description