
improve my findContours for a square detection?

asked 2017-10-09 10:32:05 -0600 by glukon

Hi guys,

for a robotic project I want to detect a black square in real time (resolution: 720p) using ROS and OpenCV. I'm working with this high resolution because I started with VGA and found that findContours() detects the contour much more reliably beyond 3 meters at 720p: with VGA I can only detect the marker up to about 3 meters, while with 720p I can detect it up to 6 meters and a bit more.

I use Python code to find the contours and then evaluate them. This is where the problems occur: because the background changes dynamically while the robot is driving, it is hard to detect the square cleanly. I get long spurious contour lines, especially from shadows and the like.

Please note that there are no blurry images or similar; I have already sorted those out, so OpenCV gets clean input images.

Here is how I tried to filter out some contours:

    image = cv2_img
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5,5), 0)
    edges = cv2.Canny(gray, 60, 255)
    cnts, hierarchy = cv2.findContours(edges.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(cnts, key=cv2.contourArea, reverse=True)[:10]

    for contour in contours:
        area = cv2.contourArea(contour)
        # skip contours that are too large (very close range) or too small (noise / far away)
        if area > 100000 or area < 1000:
            continue

        perimeter = cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, 0.01*perimeter, True)
        if len(approx) == 4:

            cv2.circle(cv2_img, (720, 360), 5, (255,0,0), 5)
            cv2.drawContours(cv2_img, [approx], -1, (0, 255, 0), 2)
            M = cv2.moments(approx)
            centers = []
            if M["m00"] != 0:
                self.cX = int(M["m10"] / M["m00"])
                self.cY = int(M["m01"] / M["m00"])
            else:
                self.cX, self.cY = 0, 0

            P1 = approx[0]
            P1x = P1[0][0]
            P1y = P1[0][1]

            P2 = approx[1]
            P2x = P2[0][0]
            P2y = P2[0][1]

            P3 = approx[2]
            P3x = P3[0][0]
            P3y = P3[0][1]

            P4 = approx[3]
            P4x = P4[0][0]
            P4y = P4[0][1]

            cv2.circle(cv2_img, (P1x, P1y), 1, (50,0,255), 4)       # left top corner
            cv2.circle(cv2_img, (P2x, P2y), 1, (50,0,255), 4)       # bottom left
            cv2.circle(cv2_img, (P3x, P3y), 1, (50,0,255), 4)       # bottom right
            cv2.circle(cv2_img, (P4x, P4y), 1, (50,0,255), 4)       # top right

            centers.append([self.cX, self.cY])

            cv2.circle(cv2_img, (self.cX, self.cY), 2, (255,0,0), 1)
            cv2.line(cv2_img, (self.cX, self.cY), (1280 // 2, 720 // 2), (255,0,0))   # line from marker centre to image centre


    cv2.imshow("Image window", cv2_img)
    cv2.waitKey(3)

Because contourArea(contour) is about 210000 at close range (< 0.5 meters) and below 1000 at long range, I filter out very small and very large contour areas beforehand. Also, a Gaussian kernel size of (5,5) works better than increasing it to (7,7), which gives even more small noise and distortions.

Please let me know your thoughts, or let's discuss the dynamic background and how to handle the pre-processing in such situations. Thanks!


Comments

Is there a reason why you would want to use a square and not a marker like the aruco markers?

StevenPuttemans (2017-10-11 07:32:34 -0600)

Hi Steven, there is a simple reason: since I'm running OpenCV4Tegra on the NVIDIA Jetson TK1, I'm not using opencv_contrib. Why would you prefer aruco markers? Because they handle the problem of a dynamically changing background in general?

glukon (2017-10-11 09:08:50 -0600)

@StevenPuttemans Right now I'm testing the same marker with another, white square inside it. That way I can detect the 'child' square nested inside the big square I already detected. So verifying the nesting of the two squares should work, given the area of detected_square (as done before) plus detected_child_square. I will test whether I can validate the nesting with something like if detected_square.area && detected_child_square.area; if so, then doStuff(). I will add an answer as soon as I have a solution.
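Roughly, the nesting check could look like this (just a sketch using cv2.pointPolygonTest; the helper and variable names are only illustrative, not my actual code):

    import cv2

    def child_inside_parent(parent_approx, child_approx):
        # centre of the child quad via image moments
        M = cv2.moments(child_approx)
        if M["m00"] == 0:
            return False
        cx = int(M["m10"] / M["m00"])
        cy = int(M["m01"] / M["m00"])
        # > 0 means the point lies strictly inside the parent contour
        return cv2.pointPolygonTest(parent_approx, (cx, cy), False) > 0

    # if child_inside_parent(detected_square, detected_child_square): doStuff()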

glukon (2017-10-11 09:15:58 -0600)

Why would you prefer aruco markers? Because they handle the problem of a dynamically changing background in general? --> Because they are much more robust to scene variation due to their uniqueness. You should read the original paper describing their performance. Good luck with your custom solution!
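For reference, basic detection is only a few calls (just a sketch, and it does need the cv2.aruco module from opencv_contrib; 'frame' stands in for your current camera image):

    import cv2

    # pick a predefined dictionary and default detector parameters
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    parameters = cv2.aruco.DetectorParameters_create()

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, rejected = cv2.aruco.detectMarkers(gray, dictionary, parameters=parameters)
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)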

StevenPuttemans (2017-10-12 04:23:48 -0600)

1 answer


answered 2017-10-12 10:07:36 -0600 by glukon

Hey Steven, I will take a deeper look at this great paper! Today I noticed that I had not thought about colorspace thresholding, so it was time to do so. The mask I get after pre-processing and thresholding is nearly ideal for detection, since I also apply a light dilation. Thanks for now, I think it's getting better.
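Roughly, the pre-processing now looks like this (only a sketch; the HSV bounds are placeholders, not the exact values I tuned for the black marker):

    import cv2
    import numpy as np

    hsv = cv2.cvtColor(cv2_img, cv2.COLOR_BGR2HSV)
    # keep only dark pixels (low V channel); bounds are placeholders to be tuned
    lower = np.array([0, 0, 0])
    upper = np.array([180, 255, 60])
    mask = cv2.inRange(hsv, lower, upper)

    # light dilation to close small gaps in the marker border before findContours
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    mask = cv2.dilate(mask, kernel, iterations=1)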

