
Estimate white background

asked 2016-02-10 09:36:16 -0600 by podlipensky

updated 2016-02-10 10:12:57 -0600

Hi,

I have an image with a white but uneven background (due to lighting). I'm trying to estimate the background color and transform the image so that it has a true white background. To do this, I estimated the white color for each 15x15-pixel block based on its luminosity, which gives the following map (on the right, in the attached image).

Now I want to interpolate the color so that there is a smoother transition from each 15x15 block to its neighbors, and I also want to eliminate outliers (the pink dots on the left-hand side). Could anyone suggest a good technique/algorithm for this? (Ideally within the OpenCV library, but that's not a requirement.)
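For reference, a rough sketch of the block-wise estimation and the kind of smoothing/outlier removal described above (the filename and parameters are placeholders, not from the original post):

import cv2
import numpy as np

img = cv2.imread('page.jpg')              # placeholder filename
h, w = img.shape[:2]
block = 15

# Per-block "white" estimate: take the brightest pixel (by luminosity)
# in each 15x15 block as the local background color.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
bh, bw = h // block, w // block
bg = np.zeros((bh, bw, 3), np.uint8)
for by in range(bh):
    for bx in range(bw):
        ys, xs = by * block, bx * block
        tile = gray[ys:ys + block, xs:xs + block]
        y, x = np.unravel_index(np.argmax(tile), tile.shape)
        bg[by, bx] = img[ys + y, xs + x]

# Median-filter the small block map to remove isolated outliers,
# then upsample with bilinear interpolation for smooth transitions.
bg = cv2.medianBlur(bg, 3)
bg_full = cv2.resize(bg, (w, h), interpolation=cv2.INTER_LINEAR)

# Flatten the illumination by dividing the image by the estimated background.
flat = cv2.divide(img, bg_full, scale=255)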



2 answers

answered 2016-02-11 08:33:38 -0600 (2 votes)

The simplest solution: convert to grayscale and apply Otsu thresholding, which will separate the letters from the background. Then simply replace the background with plain white!
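A minimal sketch of that idea in Python/OpenCV (the filename is just a placeholder):

import cv2

img = cv2.imread('page.jpg')                      # placeholder input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu picks a global threshold between the dark letters and the bright background.
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Paint every pixel classified as background plain white.
result = img.copy()
result[mask == 255] = (255, 255, 255)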


Comments

Otsu thresholding won't work perfectly with uneven illumination, i.e. if part of a drawing appears light in the photo because of the lighting, it will be filtered out by Otsu thresholding, since that is done in grayscale. Any other ideas?

podlipensky (2016-02-21 12:34:18 -0600)
answered 2016-02-10 10:06:39 -0600 by jeanpat (0 votes)

updated 2016-02-10 13:40:29 -0600

What about top-hat filtering? From an IPython console:

import cv2
# top-hat with an 18x18 elliptical kernel suppresses the slowly varying background
kernel18 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (18, 18))
filtered = cv2.morphologyEx(image, cv2.MORPH_TOPHAT, kernel18)

The image may have to be converted to grayscale first.
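For example (assuming a BGR image named image, as above):

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
filtered = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel18)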

It's possible to get something like this image with a smaller kernel (click on the link):

http://postimg.org/image/k7n2fjrfd/


Comments

If I convert the image to grayscale, I'll lose the ability to detect outliers, but let me take a closer look at this method...

podlipensky (2016-02-10 10:14:01 -0600)

From your image:

import cv2
import mahotas as mh

board = mh.imread('/home/jeanpat/Images/paperboard.jpg', -1)
cut = board[40:, 10:1000]         # crop away the border
cut = cut.max() - cut             # invert so the dark text becomes bright
print(cut.shape, cut.max())
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
cut_f = cv2.morphologyEx(cut, cv2.MORPH_TOPHAT, kernel)

(mh is a shortcut for mahotas, but it should be possible to load the image with cv2 itself.)
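For instance, a rough cv2-only equivalent of the loading and inversion steps (assuming an 8-bit image whose brightest pixel is near 255):

board = cv2.imread('/home/jeanpat/Images/paperboard.jpg', cv2.IMREAD_GRAYSCALE)
cut = board[40:, 10:1000]
cut = cv2.bitwise_not(cut)   # 255 - cut, which is close to cut.max() - cut here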

jeanpat (2016-02-10 10:38:46 -0600)

@jeanpat, unfortunately the operation above makes the resulting image black. I believe this is due to the erosion step. But the overall approach might work just fine if I can use the reverse of erosion... playing with it.

podlipensky (2016-02-10 12:54:01 -0600)

A top hat with a 15x15 structuring element succeeds in extracting the characters (see the link).

jeanpat (2016-02-10 13:42:31 -0600)
