I am trying to detect an object and draw a red line on it, depending on its horizontal angle. The problem is that the lighting of the images changes drastically: some of them are of good quality and some are not.
Example of good quality Image:
And this is how I want it to look after my program has run:
This is my code:
import cv2
import numpy as np

img = cv2.imread('img_in.png')
img_gs = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_gs = cv2.GaussianBlur(img_gs, (5, 5), 0)

# Fixed threshold -- this is the part that breaks under changing lighting
_, thresh = cv2.threshold(img_gs, 120, 255, cv2.THRESH_BINARY)

# Coordinates of all white pixels, swapped from (row, col) to (x, y)
mat = np.argwhere(thresh == 255)
mat[:, [0, 1]] = mat[:, [1, 0]]
mat = mat.astype(np.float32)

# PCA: the first eigenvector gives the object's main axis
m, e = cv2.PCACompute(mat, mean=np.array([]))

# Drawing functions need integer pixel coordinates, so cast the floats
center = tuple(map(int, m[0]))
endpoint1 = tuple(map(int, m[0] + e[0] * 100))
endpoint2 = tuple(map(int, m[0] - e[0] * 100))

red_color = (0, 0, 255)
cv2.circle(img, center, 3, red_color)
cv2.line(img, center, endpoint1, red_color)
cv2.line(img, center, endpoint2, red_color)
cv2.imwrite('img_out.png', img)
For an image of this quality, it works. The problem is that I have several thousand images that differ in lighting. Two examples:
Mediocre Lighting:
Bad Lighting:
Also, there are images on which the object is brighter than the background (like the good-quality example) and images on which it is darker than the background (like the bad-lighting one). I know I can tune the threshold per image to fix the detection, and that is basically what I have been doing for several hours, but the results are not very pleasing.

I have also been trying to differentiate between images with an if-statement. The problem is that the only parameters I have are the maximum pixel brightness, the minimum pixel brightness, and the mean of the image pixels. With those I can tell images apart to a certain degree, but there are images that are very different despite sharing these parameters. I'd be a happy man if there is a better way to do this.
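For reference, this is essentially how I compute those three parameters (a minimal sketch; the helper name image_stats is mine):

import numpy as np

def image_stats(img_gs):
    """Min, max and mean brightness of a grayscale image (uint8 array),
    e.g. as loaded with cv2.imread(path, cv2.IMREAD_GRAYSCALE)."""
    return int(img_gs.min()), int(img_gs.max()), float(img_gs.mean())

# Toy example: these values summarize the histogram very coarsely,
# so visually different images can end up with similar stats.
a = np.array([[0, 255], [100, 155]], dtype=np.uint8)
print(image_stats(a))  # (0, 255, 127.5)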