
Identify object on a conveyor belt

asked 2018-10-26 01:02:17 -0600 by Hatmpatn

updated 2018-10-26 16:39:51 -0600

Hello! I'm thinking of trying out openCV for my robot.

I want the program to identify the metal parts on the conveyor belt that are lying on their own, and ignore the ones lying in clusters.

I will buy a Raspberry Pi with the Raspberry Pi Camera Module (is this a good idea for this project?).

I want the program to return the X-Y coordinate (i.e. the pixel position in the image) of a specific spot on the metal part, so that the robot can lift it where it is supposed to be lifted. I would also want the program to allow an adjustable degree of freedom in the orientation (rotation) of the single metal part being localized.

Where do I even start?

A simple drawing of the robot


An example of the kind of image the program will process (I have not bought the final camera and lighting yet).

Here is the metal part I want to pick up from the conveyor belt.



Comments

Binarize the image with color or greyscale thresholding and then do a cv::findContours(). Loop through the contours, filtering by contour size. For those contours within the accepted size bounds, compare moments, either Hu or Flusser, to the canonical item, maybe using Mahalanobis distance for a controlled false-negative rate.

Der Luftmensch (2018-10-26 18:45:12 -0600)

Thank you! I will start with your advice! Will I need to put a glare filter on the camera lens? Is the Raspberry Pi camera module a good choice?

Hatmpatn (2018-10-29 07:05:23 -0600)

Try to make your lighting more diffuse; there will be much less glare with ambient light rather than a point light source. A single-color, highly saturated background would likely help as well, in which case you could try HSV thresholding.

Der Luftmensch (2018-11-13 12:02:17 -0600)

I'm getting really close now!

I've changed the background to a bright orange.

My code is as follows:

-Take an image
-Convert the image from BGR to HSV
-Threshold the image to filter out the orange
-FindContours
-Filter out the contours that don't match my wanted area
-Compute the Moments and HuMoments of the remaining contours
-Calculate the centroid from the Moments
-Draw the contours and centroid on the original image

In the image I have two objects lying face down and two objects lying face up. I only want the program to recognize the two objects facing upwards. I'm trying to print the Moments and HuMoments arrays, but I don't know how to filter the values so that the face-down ones are excluded.

Hatmpatn (2018-11-27 09:04:12 -0600)

area_min = 3000
area_max = 4000

centres = []  # collect centroids across all contours (was reset inside the loop)
for i, cnt in enumerate(contours):
    area = cv2.contourArea(cnt)
    if area_min < area < area_max:
        moments = cv2.moments(cnt)
        hu = cv2.HuMoments(moments)
        # centroid from the spatial moments
        centres.append((int(moments['m10'] / moments['m00']),
                        int(moments['m01'] / moments['m00'])))
        cv2.circle(image, centres[-1], 3, (0, 0, 255), -1)
        cv2.drawContours(image, contours, i, (0, 255, 0), 2)

        # print(centres)
        print(moments)

Hatmpatn (2018-11-27 09:05:25 -0600)

Resulting image: link text

The returned moment values are too long to attach here, as are the HuMoments values.

Hatmpatn (2018-11-27 09:08:24 -0600)

The sign of the final Hu moment can discriminate mirror images of contours (face-down vs. face-up in your case). Also, have a look at cv::Mahalanobis() for obtaining a probability (0-1) that a contour belongs to the canonical class.

Der Luftmensch (2018-11-27 10:26:23 -0600)

1 answer


answered 2018-11-27 13:45:58 -0600 by Hatmpatn

Thanks for the answers, Der Luftmensch. I solved it by using the 7th value of HuMoments, which has a negative sign for mirror images. Now I just want to move the centroids so that they sit on top of the big flat part, where the parts will be picked up by the robot.



2 followers

Stats

Asked: 2018-10-26 01:02:17 -0600

Seen: 1,927 times

Last updated: Oct 26 '18