
Calculating image moments after connected component labeling function

asked 2017-07-20 15:25:21 -0600 by yulz

I need to calculate the Hu moments from an input image. The input image consists of several objects, so I need to pre-process it with the connected component labeling function:

# input image is thresholded
(T, thresh) = cv2.threshold(input, 90, 255, cv2.THRESH_BINARY)

# getting the labels of the connected components
output = cv2.connectedComponentsWithStats(thresh, 4, cv2.CV_32S)
num_labels = output[0]
labels = output[1]
stats = output[2]
centroids = output[3]

# for every component in the output image
for c in centroids[1:num_labels]:
    img_moments = cv2.moments(c)
    hu = cv2.HuMoments(img_moments)

However, this is not giving me the correct Hu moment values for the components. Originally I used the thresholded image to get the moments, cv2.moments(thresh), but this is not useful when there are multiple components within the image. I'm using Python 2 with OpenCV 3.

Just for the record, I already obtain the correct number of labels for the image: in this case the input has 10 components plus 1 label for the background, i.e. 11 labels. I know the first label corresponds to the background, which is why those array values are all zeros. I want to get the pixels of the remaining labels (from 1 to num_labels - 1) and extract them into a NumPy array so I can compute the moments of each component individually.
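
For reference, labels is an int32 array of the same size as the input, where each pixel stores the index of the component it belongs to (0 being the background). Roughly like this, with the printed values only as an illustration:

import numpy as np

# labels has the same shape as the thresholded image, dtype int32 (CV_32S)
print(labels.shape == thresh.shape)   # True
print(labels.dtype)                   # int32
print(np.unique(labels))              # [ 0  1  2 ... 10] -> background + 10 components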


2 answers

answered 2017-07-23 23:20:20 -0600 by yulz

I managed to solve my problem by using the information provided by the statistics output for each label, stats = output[2].

With this information I created an ROI (region of interest) for every component in the input image, and then calculated the image moments and the Hu moments of each ROI, getting the desired output values.

The solution proposed is the following:

# for every component in the output image (label 0 is the background, so skip it)
for label in range(1, num_labels):

    # width and height of the bounding box of the component
    width = stats[label, cv2.CC_STAT_WIDTH]
    height = stats[label, cv2.CC_STAT_HEIGHT]

    # leftmost and topmost coordinates of the bounding box of the component
    x = stats[label, cv2.CC_STAT_LEFT]
    y = stats[label, cv2.CC_STAT_TOP]

    # creating the ROI using array slicing
    roi = thresh[y:y+height, x:x+width]

    # calculating the image moments and Hu moments of the ROI
    img_moments = cv2.moments(roi)
    hu = cv2.HuMoments(img_moments)
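
As a follow-up sketch (not part of the answer above; the hu_features name is just for illustration): if you want the seven Hu moments of all components collected into a single NumPy array, something along these lines should work, assuming thresh, stats and num_labels from the code above:

import numpy as np

hu_features = []
for label in range(1, num_labels):
    x = stats[label, cv2.CC_STAT_LEFT]
    y = stats[label, cv2.CC_STAT_TOP]
    width = stats[label, cv2.CC_STAT_WIDTH]
    height = stats[label, cv2.CC_STAT_HEIGHT]
    roi = thresh[y:y+height, x:x+width]
    # HuMoments returns a (7, 1) array; flatten it to a row of 7 values
    hu_features.append(cv2.HuMoments(cv2.moments(roi)).flatten())

# one row of 7 Hu moments per component
hu_features = np.vstack(hu_features)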

I’m sure there are other approaches for segmenting the connected components in an input image, but at the moment this one is doing the work for me.

answered 2017-07-20 23:27:30 -0600 by berak

For multiple components / labels you will have to make a binary mask for each single label, as in:

for l in labels[1:]: # ignore background
    mask = labels[labels==l] # select pixels for label l
    cv2.moments(mask)

Comments

cv2.moments(mask) throws an exception: OpenCV Error: Unsupported format or combination of formats () in cv::moments

yulz ( 2017-07-21 11:55:16 -0600 )

apologies, you're right. the labels array is int, and moments only accepts uint8 for images.

import numpy as np

for l in range(1, num_labels):        # ignore background (label 0)
    mask_u = np.uint8(labels == l)    # binary uint8 mask image for label l
    cv2.moments(mask_u)
berak ( 2017-07-23 23:48:17 -0600 )
