
OpenCV Segmentation of Largest contour

asked 2020-11-15 15:59:51 -0500

StefanCepa995

updated 2020-11-15 16:06:25 -0500


This might be a slightly too general question, but how do I perform grayscale image segmentation and keep only the largest contour? I am trying to remove background noise (i.e. orientation labels) from breast mammograms, but I have not been successful. Here is the original image:

(image: original mammogram)

First, I applied the AGCWD algorithm (based on the paper "Efficient Contrast Enhancement Using Adaptive Gamma Correction With Weighting Distribution") to get better contrast, like so:

(image: contrast-enhanced mammogram)

Afterwards, I tried executing the following steps. Image segmentation using OpenCV's k-means clustering algorithm:

import cv2
import numpy as np

enhanced_image_cpy = enhanced_image.copy()
# cv2.kmeans expects an Nx1 float32 array of samples
reshaped_image = np.float32(enhanced_image_cpy.reshape(-1, 1))

number_of_clusters = 10
stop_criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.1)

ret, labels, clusters = cv2.kmeans(reshaped_image, number_of_clusters, None, stop_criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
clusters = np.uint8(clusters)

Canny Edge Detection:

removed_cluster = 1

# zero out every pixel that belongs to the removed cluster
canny_image = np.copy(enhanced_image_cpy).reshape((-1, 1))
canny_image[labels.flatten() == removed_cluster] = [0]

# reshape back to 2D before edge detection; running Canny on an Nx1
# column would compute gradients along a single column of pixels
canny_image = canny_image.reshape(enhanced_image_cpy.shape)
canny_image = cv2.Canny(canny_image, 100, 200)

Find and Draw Contours:

initial_contours_image = np.copy(canny_image)
initial_contours_image_bgr = cv2.cvtColor(initial_contours_image, cv2.COLOR_GRAY2BGR)
_, thresh = cv2.threshold(initial_contours_image, 50, 255, 0)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# the last argument of drawContours is the line thickness,
# not a chain-approximation flag
cv2.drawContours(initial_contours_image_bgr, contours, -1, (255, 0, 0), 2)

Here is how the image looks after drawing all 44004 contours:

(image: result with all 44004 contours drawn)

I am not sure how I can get one big contour instead of 44004 small ones. Any ideas on how to fix my approach, or any alternative approach to get rid of the label in the top-right corner?

Thanks in advance!



You can't write canny_image[labels.flatten() == removed_cluster] = [0] like that. That form is only for an if/else condition, e.g. if canny_image[labels.flatten() == removed_cluster] == [0]:

Found the answer on Stack Overflow.

supra56 ( 2020-11-16 04:38:38 -0500 )

It's easier for you to fix it. You don't need an ROI. Use your code and apply the steps below:

  • threshold
  • findContours
  • grabCut is best

supra56 ( 2020-11-16 06:15:36 -0500 )

ignore those comments. using a mask as an index is valid, and assigning to the result of that is also valid. I'd only question the [0], but numpy's broadcasting rules probably make that do the right thing anyway.

crackwitz ( 2020-11-16 06:49:23 -0500 )

1 answer


answered 2020-11-15 17:00:13 -0500

crackwitz

updated 2020-11-15 17:09:36 -0500

the k-means clustering will only give you grayscale bands. that operation is not helpful here. canny makes no sense either.

now I can think of two ways to handle this. if the orientation label is clearly more dense than the imaged tissue, you can identify the label with a high brightness threshold. then apply some morphological dilate and then erase that area (img[mask] = 0).

your orientation label is not clearly denser/brighter than the imaged tissue (there's a spot in the tissue that rivals the label).

since it isn't clearly denser/brighter, you'll have to go the contours/components way:

  • crop away the white border, if there is any
  • otsu thresholding to binarize
  • morphological close to link up the tissue sufficiently
  • findContours or connectedComponentsWithStats
  • sort the contours/components by area

now you can erase just the label, leaving tissue and background, or select the tissue and erase everything else.

your label also has some kind of faint halo (you can make out a rectangular carrier for the text), so I'd suggest the latter.

all this presupposes that your noise (the orientation labels) aren't near the imaged tissue. if they are, you'd have to identify them some other way and then erase them using inpainting.



Thank you for clarification. This helped me a lot.

StefanCepa995 ( 2020-11-16 15:21:38 -0500 )