Speeding up size-based filtering of features in a binary image

asked 2018-09-07 22:58:37 -0600

updated 2018-09-07 23:27:58 -0600 by berak

Hello! I have an input image, "img_in", which contains white features on a black background. I'd like to remove features smaller than a certain threshold. I have the code posted below, which works but is by far the slowest part of my script. Any suggestions on how to speed it up?

1: img_out = np.zeros(img_in.shape, dtype=np.uint8)
2: threshold = 20
3:
4: nfeatures, img_labeled = cv.connectedComponents(img_in)
5: for i in range(1, nfeatures):
6:     if np.sum(img_labeled == i) >= threshold:
7:         img_out[img_labeled == i] = 255

For reference, the image size is 968x1424 and I'm processing ~5,000 features. connectedComponents() takes essentially no time, but the loop takes 16s. Timing line 6 by itself, the sum-and-compare operation takes 3s; timing the "img_out" assignment on line 7 gives 1.7s. Timing from just after the if block back around to the top of the loop on line 5 gives 0.03s. I even timed exiting an empty if block 5,000 times and got 0.0s.
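A rough count of the work the loop does (a back-of-the-envelope sketch, not a measurement) suggests why it is slow: every iteration touches the whole image.

```python
# Each loop iteration does full-image work: `img_labeled == i` compares
# every pixel of the 968x1424 label image against i and allocates a fresh
# boolean array, np.sum then reduces it, and the masked assignment on
# line 7 scans the image again when the feature is kept.
h, w, nfeatures = 968, 1424, 5000
per_iter = h * w                    # pixels compared by `img_labeled == i`
total = per_iter * (nfeatures - 1)  # loop runs for labels 1..nfeatures-1
print(total)                        # on the order of billions of comparisons
```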

So I guess I have two related questions: where is all my time being spent, and how do I do this operation faster?
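For comparison, here is one vectorized alternative I'd expect to avoid the per-label scans entirely (a sketch; `filter_small_features` is a hypothetical helper, and it assumes `img_labeled` and `nfeatures` come from `cv.connectedComponents` as in the code above). It counts every label's size in a single `np.bincount` pass and then builds the output with one lookup-table indexing step:

```python
import numpy as np

def filter_small_features(img_labeled, nfeatures, threshold=20):
    # Hypothetical helper: count the pixels of all labels in one pass
    # instead of scanning the full image once per label.
    sizes = np.bincount(img_labeled.ravel(), minlength=nfeatures)
    keep = sizes >= threshold      # boolean mask, one entry per label
    keep[0] = False                # label 0 is the background
    # Index the per-label mask by the label image: a single lookup pass
    # replaces the whole Python loop.
    return np.where(keep[img_labeled], 255, 0).astype(np.uint8)
```

Alternatively, `cv.connectedComponentsWithStats` returns a stats array that already includes each component's area (the `cv.CC_STAT_AREA` column), so the `np.bincount` step could be replaced by reading the sizes directly from that output.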
