How can I improve this pixel-removal process?
I have a numpy binary file (.npy) containing all the BGR values. Example:
[[ 47 65 82]
[ 48 65 84]
[ 49 64 80]
...
[164 170 169]
[164 173 176]
[165 171 170]]
I can remove a specific BGR pixel value from the entire image using
img[np.where((img == [b, g, r]).all(axis=2))] = [255, 255, 255]
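For a self-contained illustration of that single-value replacement (the tiny image and the BGR value here are made-up example data):

```python
import numpy as np

# A tiny 2x2 BGR "image" (made-up example data)
img = np.array([[[47, 65, 82], [48, 65, 84]],
                [[47, 65, 82], [164, 170, 169]]], dtype=np.uint8)

# Replace every pixel equal to [47, 65, 82] with white:
# (img == [b, g, r]) broadcasts to (H, W, 3); .all(axis=2) keeps
# only pixels where all three channels match
b, g, r = 47, 65, 82
img[np.where((img == [b, g, r]).all(axis=2))] = [255, 255, 255]

print(img[0, 0])  # -> [255 255 255]
```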
To do this for all of the BGR values above, this is a very simple loop that filters through each pixel:
# go through each pixel
for index, x in np.ndindex(__pixels.shape):
    if x == 0:
        __b = __pixels[index, x]
    elif x == 1:
        __g = __pixels[index, x]
    elif x == 2:
        __r = __pixels[index, x]
        # Change the color to white once b, g and r are all read
        print(__pixels.shape[0], " ", index)
        img[np.where((img == [__b, __g, __r]).all(axis=2))] = [255, 255, 255]
Is there any other, more efficient way in which I can remove this range of pixels, stored as a numpy array, from the entire image?
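One possible vectorised sketch (not necessarily the fastest approach, and the example data is made up): broadcasting compares every image pixel against every stored BGR row at once, at the cost of an intermediate (H, W, N, 3) comparison array.

```python
import numpy as np

# Tiny BGR image and a list of BGR values to remove (made-up example data)
img = np.array([[[47, 65, 82], [48, 65, 84]],
                [[164, 170, 169], [164, 173, 176]]], dtype=np.uint8)
pixels = np.array([[47, 65, 82], [164, 173, 176]], dtype=np.uint8)

# (H, W, 1, 3) == (N, 3) broadcasts to (H, W, N, 3); a pixel matches
# if all 3 channels equal any one of the N stored rows
match = (img[:, :, None, :] == pixels).all(axis=3).any(axis=2)
img[match] = [255, 255, 255]
```

This replaces the Python-level loop with a single numpy expression, but for a very large palette the (H, W, N, 3) intermediate may use a lot of memory.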
Maybe you can make a mask -- if the BGR value of a pixel is one of the BGR values to be removed, then use cv2.bitwise_and(img, mask) to get the result.
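The masking idea in this comment can be sketched in plain numpy (the image and the value to remove are made-up example data; cv2.bitwise_and(img, img, mask=mask) behaves like the element-wise & used here). Note that this blanks matched pixels to black rather than white:

```python
import numpy as np

# Tiny 2x2 BGR image and one BGR value to remove (made-up example data)
img = np.array([[[47, 65, 82], [48, 65, 84]],
                [[47, 65, 82], [164, 170, 169]]], dtype=np.uint8)
bgr_to_remove = np.array([47, 65, 82], dtype=np.uint8)

# Per-pixel mask: 0 where the pixel matches, 255 where it should be kept
matches = (img == bgr_to_remove).all(axis=2)
mask = np.where(matches, 0, 255).astype(np.uint8)

# Equivalent of cv2.bitwise_and with a mask: AND each pixel with 0 or 255
result = img & mask[:, :, None]
```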
numpy is written in C, so to give you a rough idea: it takes about 1100 milliseconds for one BGR value to be matched against 2.7 million pixels using np.where.
The issue I'm having here is that I don't have a numpy function that lets me iterate through the selected pixels row-wise to extract the BGR values at once. So checking the x value on every pass through the loop delays the entire loop, and I don't know how to optimise it.
Try the inRange function to make a mask, then... In C++ I would use .setTo, but in Python I'm not sure of the next step.