How can I improve this pixel-removal process?

asked 2018-01-24 02:37:19 -0500 by Santhosh1

updated 2018-01-24 03:09:12 -0500

I have a NumPy binary file (.npy) containing BGR pixel values.

Example

[[ 47  65  82]
 [ 48  65  84]
 [ 49  64  80]
 ...
 [164 170 169]
 [164 173 176]
 [165 171 170]]
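For reference, an (N, 3) array of BGR values like the one above can be written and read back with np.save / np.load; a minimal sketch (the filename pixels.npy is a placeholder, not from the question):

```python
import numpy as np

# Save and reload an (N, 3) array of BGR values; "pixels.npy" is a
# hypothetical filename for illustration.
pixels = np.array([[47, 65, 82],
                   [48, 65, 84],
                   [49, 64, 80]], dtype=np.uint8)
np.save("pixels.npy", pixels)

loaded = np.load("pixels.npy")
print(loaded.shape)  # (3, 3): three pixels, three channels each
```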

I can remove a specific BGR pixel value from the entire image using img[np.where((img == [b,g,r]).all(axis=2))] = [255,255,255].
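As a sanity check, here is that one-liner applied to a tiny synthetic image (the 2x2 image and its colours are made up for illustration):

```python
import numpy as np

# 2x2 BGR image; two pixels match the colour we want to remove
img = np.array([[[47, 65, 82], [10, 20, 30]],
                [[47, 65, 82], [40, 50, 60]]], dtype=np.uint8)

b, g, r = 47, 65, 82
# (img == [b, g, r]) compares channel-wise; .all(axis=2) keeps only
# pixels where all three channels match
img[np.where((img == [b, g, r]).all(axis=2))] = [255, 255, 255]

print(img[0, 0])  # [255 255 255]
print(img[0, 1])  # [10 20 30] -- untouched
```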

To do this for all of the BGR values above, I use this very simple loop that filters through each pixel:

# go through each pixel: ndindex over the (N, 3) array of BGR values
# yields (row, channel) index pairs
for index, x in np.ndindex(__pixels.shape):
    if x == 0:
        __b = __pixels[index, x]
    elif x == 1:
        __g = __pixels[index, x]
    elif x == 2:
        __r = __pixels[index, x]
        # once all three channels are collected, change that colour to white
        print(__pixels.shape[0], " ", index)
        img[np.where((img == [__b, __g, __r]).all(axis=2))] = [255, 255, 255]

Is there any more efficient way to remove this range of pixel values, stored as a numpy array, from the entire image?
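One way to avoid the per-colour loop entirely is to broadcast the image against the whole colour list in a single comparison. A sketch of that idea (the helper name remove_colors is invented; memory cost is H*W*N booleans, so it suits modest colour lists):

```python
import numpy as np

def remove_colors(img, colors, fill=(255, 255, 255)):
    """Set every pixel whose BGR value appears in `colors` to `fill`.

    img:    (H, W, 3) uint8 image, modified in place
    colors: (N, 3) array of BGR values to remove
    """
    # Compare every pixel against every colour in one shot:
    # (H, W, 1, 3) == (1, 1, N, 3)  ->  (H, W, N, 3) booleans
    match = (img[:, :, None, :] == colors[None, None, :, :]).all(axis=3)
    # A pixel is removed if it matches any colour in the list
    mask = match.any(axis=2)
    img[mask] = fill
    return img

img = np.array([[[47, 65, 82], [10, 20, 30]],
                [[48, 65, 84], [40, 50, 60]]], dtype=np.uint8)
colors = np.array([[47, 65, 82], [48, 65, 84]])
remove_colors(img, colors)
print(img[0, 0], img[1, 0])  # both become [255 255 255]
```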


Comments

Maybe you can make a mask: wherever a pixel's BGR value is one of the BGRs to be removed, mark it in the mask, then use cv2.bitwise_and(img, mask) to get the result.

moHe ( 2018-01-24 04:06:05 -0500 )
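A pure-NumPy sketch of this masking idea, with made-up colours; cv2.bitwise_and(img, img, mask=mask) would apply the same 8-bit mask (note it blanks removed pixels to black rather than white):

```python
import numpy as np

img = np.array([[[47, 65, 82], [10, 20, 30]]], dtype=np.uint8)

# Mask of pixels to KEEP: 255 where the pixel is NOT the colour to remove
keep = (~(img == [47, 65, 82]).all(axis=2)).astype(np.uint8) * 255

# cv2.bitwise_and(img, img, mask=keep) would zero out the masked-off
# pixels; the NumPy equivalent:
result = img * (keep[:, :, None] // 255)
print(result[0, 0])  # [0 0 0] -- the removed pixel
print(result[0, 1])  # [10 20 30]
```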

numpy is written in C, so to give you a rough idea: it takes 1100 milliseconds for one BGR value to be checked against 2.7 million pixels using np.where.

Santhosh1 ( 2018-01-24 04:26:36 -0500 )

The issue I'm having is that I don't have a numpy function that lets me iterate through the selected pixels row-wise and extract the BGR values at once. Checking the x value on every pass through the loop delays the whole thing, and I don't know how to optimise it.

Santhosh1 ( 2018-01-24 04:30:35 -0500 )
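On the row-wise extraction point: iterating an (N, 3) array yields one row at a time, so the three channels can be unpacked directly in the for statement, removing the per-channel checks. A sketch with invented data:

```python
import numpy as np

pixels = np.array([[47, 65, 82],
                   [48, 65, 84]], dtype=np.uint8)

img = np.array([[[47, 65, 82], [48, 65, 84]]], dtype=np.uint8)

# Each iteration gives one (b, g, r) row -- no channel-index branching
for b, g, r in pixels:
    img[np.where((img == [b, g, r]).all(axis=2))] = [255, 255, 255]

print(img)  # every listed colour is now white
```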

Try the inRange function to make a mask, then... In C++ I would use .setTo, but in Python I'm not sure of the next step.

Tetragramm ( 2018-01-24 17:53:15 -0500 )
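In Python, the .setTo step maps to boolean-mask assignment. A NumPy sketch assuming a lower/upper BGR range (the bounds here are invented; cv2.inRange(img, lo, hi) would produce the same mask as a 0/255 uint8 image):

```python
import numpy as np

img = np.array([[[47, 65, 82], [200, 200, 200]]], dtype=np.uint8)

lo = np.array([40, 60, 75], dtype=np.uint8)
hi = np.array([55, 70, 90], dtype=np.uint8)

# NumPy equivalent of cv2.inRange(img, lo, hi): True where every
# channel falls inside [lo, hi]
mask = ((img >= lo) & (img <= hi)).all(axis=2)

# Python equivalent of the C++ mat.setTo(Scalar(255,255,255), mask)
img[mask] = (255, 255, 255)

print(img[0, 0])  # [255 255 255] -- inside the range
print(img[0, 1])  # [200 200 200] -- outside, untouched
```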