I managed to solve my problem by using the per-label statistics returned by cv2.connectedComponentsWithStats (stats = output[2]).
With this information I created a ROI (Region of Interest) for every component in the input image, and then I computed the image moments and the Hu moments of each ROI, which gave me the desired output values.
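For context, here is a minimal setup sketch showing where thresh, num_labels and stats come from; the file name, the Otsu thresholding step and the variable names are assumptions, since my original preprocessing is not shown:

import cv2

# load the input image as grayscale and produce a binary image
# (the file name and the thresholding choice are placeholders)
img = cv2.imread("example.png", cv2.IMREAD_GRAYSCALE)
_, thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# label the connected components; output = (num_labels, labels, stats, centroids)
output = cv2.connectedComponentsWithStats(thresh)
num_labels = output[0]
stats = output[2]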
The solution proposed is the following:
# for every component in the label image
for label in range(num_labels):
    # note: label 0 corresponds to the background
    # width of the bounding box of the component
    width = stats[label, cv2.CC_STAT_WIDTH]
    # height of the bounding box of the component
    height = stats[label, cv2.CC_STAT_HEIGHT]
    # leftmost coordinate of the bounding box
    x = stats[label, cv2.CC_STAT_LEFT]
    # topmost coordinate of the bounding box
    y = stats[label, cv2.CC_STAT_TOP]
    # create the ROI using NumPy slicing
    roi = thresh[y:y+height, x:x+width]
    # compute the image moments and the Hu moments of the ROI
    img_moments = cv2.moments(roi)
    hu = cv2.HuMoments(img_moments)
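If the Hu moments are later compared across components, it is common to log-scale them because their raw magnitudes differ by many orders of magnitude. This helper is only an illustrative sketch (the function name and the epsilon are my own), not part of the original solution:

import numpy as np

def log_scaled_hu(hu):
    # hu is the (7, 1) array returned by cv2.HuMoments
    hu = hu.flatten()
    # the log transform brings the widely different magnitudes onto a comparable scale;
    # the small epsilon avoids log10(0) for degenerate ROIs
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)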
I’m sure there are other approaches for segmenting the connected components of an input image, but for now this one does the job for me.