
Estimate white background

asked Feb 10 '16

podlipensky

updated Feb 10 '16

Hi,

I have an image with an uneven white background (due to lighting). I'm trying to estimate the background color and transform the image into one with a true white background. For this I estimated the white color for each 15x15-pixel block based on its luminosity, which gave me the following map (shown on the right of the attached image).

Now I want to interpolate the color so there is a smoother transition from each 15x15 block to its neighbors, and I also want to eliminate outliers (the pink dots on the left-hand side). Could anyone suggest a good technique/algorithm for this? (Ideally within the OpenCV library, but not necessarily.)
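For illustration, a minimal sketch of such a per-block estimate, with a median filter over the block map to suppress outlier blocks and a resize back to full resolution for smooth transitions. It works on grayscale luminosity for simplicity (per-channel would be analogous); the file name, 90th-percentile choice, and the final divide-to-white step are assumptions, not something fixed in the question:

import cv2
import numpy as np

img = cv2.imread('page.jpg')                     # hypothetical input path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

block = 15
h, w = gray.shape
bh, bw = h // block, w // block

# Estimate the background brightness of each 15x15 block as a high percentile
# of its luminosity (the near-white paper pixels).
bg = np.zeros((bh, bw), np.uint8)
for i in range(bh):
    for j in range(bw):
        patch = gray[i*block:(i+1)*block, j*block:(j+1)*block]
        bg[i, j] = np.percentile(patch, 90)

# A median filter over the small block map suppresses isolated outlier blocks,
# and a bilinear resize back to full resolution gives smooth transitions.
bg = cv2.medianBlur(bg, 3)
bg_full = cv2.resize(bg, (w, h), interpolation=cv2.INTER_LINEAR)

# Dividing by the estimated background flattens the illumination toward white.
flat = cv2.divide(gray, bg_full, scale=255)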


Comments


2 answers


answered Feb 11 '16

Simplest solution: convert to grayscale and do an Otsu thresholding, which will separate the letters from the background. Then simply replace the background with plain white!
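A minimal sketch of that idea, assuming a BGR input loaded with cv2.imread (file name and variable names are illustrative):

import cv2

img = cv2.imread('page.jpg')                     # hypothetical input path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu picks a global threshold between the dark letters and the bright background.
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Wherever the mask says "background", overwrite the original pixels with white.
result = img.copy()
result[mask == 255] = (255, 255, 255)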


Comments

Otsu thresholding won't work perfectly with uneven illumination: if part of a drawing comes out light in the photo because of the lighting, it will be filtered out by the Otsu threshold, since the thresholding is done in grayscale. Any other ideas?

podlipensky (Feb 21 '16)

answered Feb 10 '16

jeanpat

updated Feb 10 '16

What about top-hat filtering? From an IPython console:

import cv2
kernel18 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (18, 18))
filtered = cv2.morphologyEx(image, cv2.MORPH_TOPHAT, kernel18)

The image may have to be converted to grayscale first.
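For a color input, that conversion could look like this (assuming image was loaded as a BGR array with cv2.imread):

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)             # BGR -> single-channel grayscale
filtered = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel18)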

It's possible to get something like the linked image with a smaller kernel (click on the link):

http://postimg.org/image/k7n2fjrfd/


Comments

If I convert the image to grayscale, I'll lose the ability to detect outliers, but let me take a closer look at this method...

podlipensky (Feb 10 '16)

From your image:

import cv2
import mahotas as mh

board = mh.imread('/home/jeanpat/Images/paperboard.jpg', -1)
cut = board[40:, 10:1000]
cut = cut.max() - cut          # invert so the dark characters become bright
print(cut.shape, cut.max())
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
cut_f = cv2.morphologyEx(cut, cv2.MORPH_TOPHAT, kernel)

(mh is a shortcut for mahotas, but it should be possible to load the image with cv2 itself)

jeanpat (Feb 10 '16)

@jeanpat, unfortunately the operation above makes the resulting image black. I believe this is due to the erosion step. But the overall approach might work just fine if I can use the reverse of erosion... Playing with it.

podlipensky (Feb 10 '16)
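One possible reading of "reverse" here: a black-hat (closing minus image) applied to the non-inverted crop is, for a symmetric structuring element such as the ellipse above, equivalent to a top-hat on the inverted image, so it highlights the dark characters without the explicit inversion. A sketch reusing board and kernel from the snippet above:

cut = board[40:, 10:1000]                                   # same crop, but not inverted
chars = cv2.morphologyEx(cut, cv2.MORPH_BLACKHAT, kernel)   # dark-on-bright features come out bright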

A top hat with a 15x15 structuring element succeeds in extracting the characters (see the link)

jeanpat (Feb 10 '16)


Stats

Asked: Feb 10 '16

Seen: 1,648 times

Last updated: Feb 11 '16