
How to remove these small glare spots from the image?

asked 2017-12-08 00:59:37 -0600

Santhosh1

updated 2017-12-08 01:43:05 -0600

This is my image:

[original image]

I found this MATLAB question: "How to remove the glare and brightness in an image (image preprocessing)?"

I replicated it in Python:

import cv2

img = cv2.imread("grids.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
m_img = cv2.medianBlur(gray, 5)
ret, th1 = cv2.threshold(m_img, 180, 255, cv2.THRESH_BINARY)
timg = cv2.inpaint(img, th1, 9, cv2.INPAINT_NS)

The thresholded image: [image]

This is my result:

[result image]

Not exactly an improvement; I even lose the grid lines.

I also went through this: "How to remove glare from image".

But I couldn't find any image-processing implementation of a polarizer filter.

Can anyone suggest any improvements so that I can remove the glare without losing the grid?


Comments

" polarizer filter" It's a physical filter before lens. You cannot simulate it using numerical filter

LBerger ( 2017-12-08 02:40:21 -0600 )

Can you please tell me what you want to do after this process? i.e. do you want to preserve the surface exactly as it is, or do you just want to filter/segment the blue regions from the image?

Balaji R ( 2017-12-08 02:42:19 -0600 )

@Balaji R Measuring objects placed on the grid. Since the grid surface is giving off glare, the edge of the object can't be detected. I want to remove the glare.

Santhosh1 ( 2017-12-08 02:51:15 -0600 )

@LBerger When the image is converted to HSV, the glare pixels seem to have a Value (V) of more than 90%. How can I use this to remove the glare?

Santhosh1 ( 2017-12-08 02:53:01 -0600 )
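(A minimal sketch of that idea, assuming the image is loaded as BGR and that 90% of 255 is roughly 230; the filename, the dilation, and the inpainting radius are placeholders, not part of the thread:)

import cv2
import numpy as np

img = cv2.imread("grids.png")                      # placeholder filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
v = hsv[:, :, 2]

# mark pixels whose Value is above ~90% of the range as glare
glare_mask = (v > 230).astype(np.uint8) * 255
# grow the mask slightly so the inpainting also covers the glare borders
glare_mask = cv2.dilate(glare_mask, np.ones((3, 3), np.uint8), iterations=2)

repaired = cv2.inpaint(img, glare_mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("repaired.png", repaired)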

3 answers


answered 2017-12-08 09:03:35 -0600

moHe

updated 2017-12-09 01:12:41 -0600

I've solved your problem, but the yellow is not very yellow...

import cv2
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import k_means


img = cv2.imread("./grids.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# per-channel mask: 1.0 where a channel exceeds (0, 0, 230) (for the V channel that means the glare), 0.5 elsewhere
mask = ((img_hsv > np.array([0, 0, 230])).astype(np.float32) + (img_hsv > np.array([0, 0, 230])).astype(np.float32) * (-0.5) + 0.5)
img_partly_darken = cv2.cvtColor(mask * img_hsv, cv2.COLOR_HSV2BGR)
plt.imshow(cv2.cvtColor(img_partly_darken, cv2.COLOR_BGR2RGB))
plt.show()

cv2.imwrite("t.png", img_partly_darken)
# Save the img now, and ... Surprise! You can feel the mystery:
plt.imshow(cv2.cvtColor(cv2.imread("t.png"), cv2.COLOR_BGR2RGB))
plt.show()

# Then, you can just pick out the green ones:
green_mask = img[:, :, 1] > img[:, :, 2]    # value of green channel > that of red channel
# Here is a trick: I use color space conversion to broadcast one channel to three channels
green_mask = (green_mask.astype(np.uint8)) * 255
green_mask = cv2.cvtColor(green_mask, cv2.COLOR_GRAY2BGR)
green3_mask = (green_mask > 0).astype(np.uint8) * 255
img_green = cv2.bitwise_and(green3_mask, img)
plt.imshow(cv2.cvtColor(img_green, cv2.COLOR_BGR2RGB))
plt.show()

# Back to the original img's colors:
ret, thr = cv2.threshold(cv2.cvtColor(img_green, cv2.COLOR_BGR2GRAY), 10, 255, cv2.THRESH_BINARY)
blue_mask = (cv2.cvtColor(thr, cv2.COLOR_GRAY2BGR) > 0).astype(np.uint8) * 255
kernel_open =cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
blue_mask = cv2.morphologyEx(blue_mask, cv2.MORPH_OPEN, kernel_open)
yellow_mask = 255 - blue_mask

# use k-means to get the two main colors -- blue and yellow
pixels = img
pixels = pixels.reshape(pixels.shape[0] * pixels.shape[1], 3)
[centroids, labels, inertia] = k_means(pixels, 2)
centroids = np.array(sorted(centroids.astype(np.uint8).tolist(), key=lambda x: x[0]))       # B channel
blue_centroid = centroids[1]
yellow_centroid = centroids[0]
blue_ones = cv2.bitwise_and(blue_mask, blue_centroid)
yellow_ones = cv2.bitwise_and(yellow_mask, yellow_centroid)
plt.imshow(cv2.cvtColor(cv2.add(blue_ones, yellow_ones), cv2.COLOR_BGR2RGB))
plt.show()

The img_partly_darken image: [image]

Reading back the written file: [image]

The blue_mask: [image]

And the final result: [image]

Hope this can help you :)


Comments

@moHe I have to admit that, as a newbie, every bit of info here is a new thing learned. Though your code isn't exactly the solution I was looking for, it does help me work towards a solution to my problem. Thank you.

Santhosh1 ( 2017-12-09 00:53:24 -0600 )

It should be blue_centroid = np.array([centroids[1]]) and blue_ones = cv2.bitwise_and(blue_mask, blue_centroid), and the same for yellow.

Santhosh1 ( 2017-12-09 00:55:28 -0600 )

Your color space conversion to broadcast one channel to three channels gave me an idea for solving my problem. Hopefully your full code helps other users as well 👍

Santhosh1 ( 2017-12-09 01:05:22 -0600 )

Hi, I've just had some free time to fix the problems.

Very glad that this can help you :)

moHe ( 2017-12-09 01:15:36 -0600 )

@moHe Can you tell me what exactly this code does? mask = ((img_hsv > np.array([0, 0, 230])).astype(np.float32) + (img_hsv > np.array([0, 0, 230])).astype(np.float32) * (-0.5) + 0.5) I'm getting confused here.

Santhosh1 ( 2018-04-28 00:58:57 -0600 )

@Santhosh1 Hi Santhosh, glad to respond. Above all, img_hsv > np.array([0, 0, 230]) means I don't care about the values of Hue and Saturation, I only care about the Value; 230 is a manually chosen threshold.

So mk = img_hsv > np.array([0, 0, 230]) is True on the pixels that are very bright (the glare). You can plt.imshow(mk.astype(np.float32)*255, cmap='gray') to see it more clearly.

Here the expression effectively multiplies mk by 0.5 and then adds 0.5, so the final mask is 1.0 where the comparison holds (the glare) and 0.5 everywhere else. As a result, mask * img_hsv halves the Value of the non-glare pixels, which is what partly darkens the image.

moHe ( 2018-04-28 06:53:19 -0600 )
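(As a side note, the per-channel comparison and the resulting 1.0/0.5 mask can be checked on a tiny hand-made array; the pixel values below are made up purely for illustration:)

import numpy as np

# two fake HSV pixels: one with V=250 (glare), one with V=100
px = np.array([[[30, 40, 250], [30, 200, 100]]], dtype=np.uint8)
mk = px > np.array([0, 0, 230])            # compares H, S and V separately
mask = mk.astype(np.float32) * 0.5 + 0.5
print(mask)                                # [[[1. 1. 1.] [1. 1. 0.5]]]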

@moHe Thank you for making it clearer.

Santhosh1 ( 2018-04-30 02:39:25 -0600 )

answered 2017-12-09 12:38:19 -0600

sjhalayka

updated 2017-12-09 15:42:34 -0600

I found your mask using a very small number of statements. It should be fairly easy to convert it from C++ to Python:

#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    // Load main image
    Mat bgr_frame = imread("glare.png");

    if (bgr_frame.empty())
    {
        cout << "Error loading image file" << endl;
        return -1;
    }

    // Convert from BGR to HSV
    Mat hsv_frame;
    cvtColor(bgr_frame, hsv_frame, COLOR_BGR2HSV);

    // Split HSV into H, S, V channels
    Mat channels[3];
    split(hsv_frame, channels);

    // Get mask by thresholding the hue channel
    threshold(channels[0], channels[0], 63, 255, THRESH_BINARY);

    // Use mask to generate a BGR image
    Mat output(channels[0].rows, channels[0].cols, CV_8UC3);

    for (int j = 0; j < channels[0].rows; j++)
    {
        for (int i = 0; i < channels[0].cols; i++)
        {
            unsigned char val = channels[0].at<unsigned char>(j, i);

            if (255 == val)
            {
                // blue-ish fill where the hue passed the threshold
                output.at<Vec3b>(j, i)[0] = 189;
                output.at<Vec3b>(j, i)[1] = 108;
                output.at<Vec3b>(j, i)[2] = 47;
            }
            else
            {
                // yellow-ish fill elsewhere
                output.at<Vec3b>(j, i)[0] = 94;
                output.at<Vec3b>(j, i)[1] = 206;
                output.at<Vec3b>(j, i)[2] = 236;
            }
        }
    }

    imshow("hue", channels[0]);
    imshow("output", output);
    waitKey();

    return 0;
}

The mask and output look like this: [image]
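(For reference, a rough Python equivalent of the snippet above; an untested sketch that follows the same steps:)

import cv2
import numpy as np

bgr_frame = cv2.imread("glare.png")
if bgr_frame is None:
    raise SystemExit("Error loading image file")

# Convert from BGR to HSV and take the hue channel
hsv_frame = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
hue = hsv_frame[:, :, 0]

# Get mask by thresholding the hue channel
_, mask = cv2.threshold(hue, 63, 255, cv2.THRESH_BINARY)

# Use the mask to paint a two-colour BGR image
output = np.empty_like(bgr_frame)
output[mask == 255] = (189, 108, 47)    # blue-ish
output[mask != 255] = (94, 206, 236)    # yellow-ish

cv2.imshow("hue", hue)
cv2.imshow("output", output)
cv2.waitKey()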


answered 2017-12-08 03:57:25 -0600

If you want to have the final segmented regions, I would go for the following:

  • An Otsu-based segmentation first
  • Then find contours
  • Define the average colour of each contour
  • Inpaint the contours with that colour

If you need the exact data, then search for something like "how to remove specular reflection opencv c++"; I am sure some people have attempted this before. (A rough sketch of the steps above is below.)
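(A minimal sketch of those steps in Python; the Otsu threshold on the grayscale image, the ring-based colour sampling just outside each contour, and the grids.png filename are assumptions of this sketch, not from the answer:)

import cv2
import numpy as np

img = cv2.imread("grids.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 1. Otsu-based segmentation of the bright (glare) regions
_, glare = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 2. Find the contours of the segmented regions
#    ([-2] keeps this working on both OpenCV 3.x and 4.x return signatures)
contours = cv2.findContours(glare, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

result = img.copy()
for cnt in contours:
    # 3. Average colour of a thin ring just outside each contour
    region = np.zeros(gray.shape, np.uint8)
    cv2.drawContours(region, [cnt], -1, 255, -1)
    ring = cv2.dilate(region, np.ones((9, 9), np.uint8)) - region
    mean_col = np.array(cv2.mean(img, mask=ring)[:3], dtype=np.uint8)
    # 4. "Inpaint" (fill) the contour with that colour
    result[region == 255] = mean_col

cv2.imwrite("filled.png", result)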



Stats

Asked: 2017-12-08 00:59:37 -0600

Seen: 22,899 times

Last updated: Dec 09 '17