Edge-based alignment

asked 2018-03-20 08:48:26 -0600 by Simonn, updated 2018-03-23 10:01:08 -0600

Hi everyone,

I'm currently working on a project where I need to detect a marker in an image with Python and OpenCV, and I can't seem to get the detection right. The marker is used for aligning a substrate in our writing machine. I'm new to computer vision, so I hope that someone here can push me in the right direction.

This is a (raw) camera image with the marker: https://imgur.com/lCATqHK

The problem is that feature matching and template matching don't work well with this kind of marker, at least as far as I have tested.

Marker properties:

  • For good alignment the position of the anchor point/center point needs to be accurate. (1/10th to 1/20th of a pixel)
  • The edges of the marker can erode so the detection still needs to work for small changes in the edge.
  • Scale is constant
  • Rotation can vary ±10°, accuracy should be <0.06° so that the pixels in the marker don't shift more than 0.5 pixel
  • Brightness/Contrast can vary
  • Preferably the marker should be located within a second.

What I tried so far:

  • I tried template matching with bicubic interpolation to get subpixel accuracy (see the sketch after this list): https://imgur.com/0Qm9846
    I compared the results with Cognex: in my tests the difference was at most 1/8th and on average 1/20th of a pixel. The problem with TM is that it doesn't handle rotated markers well. Even when I rotate the template, the matching is very poor and I get a lot of wrong matches.
  • Feature matching doesn't work with the markers because they don't have strong, unique features, so it won't find the same features between images.
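
A minimal sketch of that sub-pixel TM refinement, assuming hypothetical file names (camera.bmp, template.bmp); one simple approach is to upsample the correlation surface around the coarse peak with bicubic interpolation:

    import cv2

    # Hypothetical inputs: full camera frame and a crop of the marker
    img = cv2.imread('camera.bmp', cv2.IMREAD_GRAYSCALE)
    tmpl = cv2.imread('template.bmp', cv2.IMREAD_GRAYSCALE)

    # Coarse match at pixel resolution
    res = cv2.matchTemplate(img, tmpl, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(res)

    # Refine: upsample a small window around the peak with bicubic
    # interpolation and re-locate the maximum on the finer grid
    f = 16  # upsampling factor -> roughly 1/16 px resolution
    x0, y0 = max(x - 2, 0), max(y - 2, 0)
    win = res[y0:y + 3, x0:x + 3]
    win_up = cv2.resize(win, None, fx=f, fy=f, interpolation=cv2.INTER_CUBIC)
    _, _, _, (ux, uy) = cv2.minMaxLoc(win_up)
    print(x0 + ux / f, y0 + uy / f)  # sub-pixel peak position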

Neither TM nor FM seems to work well for my application, so another option is to go for edge detection like in the Cognex software. Edge-trained image: https://imgur.com/OkO4A71 Zoomed in: https://imgur.com/PELw4Bo

The problem with this is that I don't know how to reliably find the same edges as in those two images. One edge detection tool I tried was Canny, but it produces unreliable results and double edges depending on the parameters. As long as I can find the edges to within a pixel of accuracy, I think I should be able to do the rest (image gradient, homography, sub-pixel anchor point).

Tl;dr: I want to locate the marker edges as accurately as possible, but I can't find a way to do this reliably.


Comments

Idle thought before I finished reading your whole post. Not totally worthless, but...

Try phase correlation. There's a nice function for it HERE, and it's simple enough to understand. It doesn't do well with rotation, but if you look up log-polar, there are some things that can help.

Tetragramm ( 2018-03-20 21:11:31 -0600 )
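
A minimal sketch of this phase-correlation idea; the function referred to is presumably cv2.phaseCorrelate (file names here are hypothetical, and the function expects floating-point input):

    import cv2
    import numpy as np

    # Hypothetical inputs: a reference frame and the current frame
    ref = cv2.imread('reference.bmp', cv2.IMREAD_GRAYSCALE).astype(np.float32)
    cur = cv2.imread('current.bmp', cv2.IMREAD_GRAYSCALE).astype(np.float32)

    # A Hanning window suppresses FFT edge effects
    win = cv2.createHanningWindow(ref.shape[::-1], cv2.CV_32F)

    # Returns the sub-pixel (dx, dy) shift between the two images
    (dx, dy), response = cv2.phaseCorrelate(ref, cur, win)
    print(dx, dy, response)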

Better solution. You can threshold the marker reliably, yes? Do that, and perform connectedComponentsWithStats. It should be relatively easy to filter out the noisy specks by shape and location. You can also calculate the moments to get the centers, and from the two centers you get rotation.

Tetragramm ( 2018-03-20 21:15:42 -0600 )

Thank you for your comments! I'll take a look at those two functions. I would upvote you but apparently I can't do that yet.

Filtering is not a problem. I already made a script which can reliably place an ROI and a mask around the marker so that only the marker is checked. The problem is that the result is binary, while the transition from background to marker is about 6 to 7 pixels wide: https://imgur.com/6MdKv9a The threshold doesn't consider the real edge, so it wouldn't reliably pick the right pixel. How accurate would the rotation be from the moments?

Simonn ( 2018-03-22 03:23:30 -0600 )

Well, the threshold should chop off the same amount on either side, so it wouldn't bias the center too much.

The rotation is easy. You've actually got 2 markers, and 2 centers. One is the cross, and the other is the square. Get the centers from both, and then the rotation is simple.

Tetragramm ( 2018-03-22 15:50:41 -0600 )
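
A minimal sketch of that rotation step, assuming the two centroids (one for the cross, one for the square) have already been taken from connectedComponentsWithStats; the coordinate values below are made up for illustration:

    import numpy as np

    # Hypothetical centroids of the cross and the square components
    cross = np.array([412.3, 508.7])
    square = np.array([630.1, 505.2])

    # Orientation of the line joining the two centers; comparing it to the
    # same measurement on a reference image gives the marker rotation
    v = square - cross
    angle = np.degrees(np.arctan2(v[1], v[0]))
    print(angle)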

I've made a script with Otsu binarization and connectedComponentsWithStats to get the angle from the centroids like you said. I did this for 6 images; the centroids seem to vary by at most ±2 pixels and the angle by at most ±0.5 degrees. So the accuracy is decent, but not enough. It might still be useful as a first step, since it only takes 25 ms to run.

Here is the script:

    import numpy as np
    import cv2

    # Load the image as grayscale and invert it so the marker is bright
    img = cv2.imread(r'\Desktop\ttest\22.bmp', 0)
    img = 255 - img

    # Otsu binarization after a light Gaussian blur
    blur = cv2.GaussianBlur(img, (5, 5), 0)
    ret3, img = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Connected components with 4-connectivity; per-label stats and centroids
    output = cv2.connectedComponentsWithStats(img, 4, cv2.CV_32S)
    stats = output[2]
    centroids = output[3]

    print(stats)
    print(centroids)

Simonn ( 2018-03-23 09:54:37 -0600 )

Also, I compared the results from Canny edge detection and the edges from the Otsu threshold with the sub-pixel edge detection from Cognex.

Green is Cognex, white and black are found edges: Canny: https://imgur.com/EyU49Bk Otsu: https://imgur.com/V7GaevW

The edges always seem to be within ±1 pixel. I think I'm going to try iterating over all the edge pixels to find the sub-pixel position and angle of each edge. I'm not yet sure how to do this in a smart way, though.

Simonn ( 2018-03-23 10:08:15 -0600 )

First, that Gaussian Blur might account for most of the difference.

Not sure on subpixel edges.

You can calculate moments from the Canny edges too. You may want to turn them into a contour and have it approximate the line, which should smooth things out. APPROX_SIMPLE should be good, since you only have straight edges.

Tetragramm ( 2018-03-23 14:36:02 -0600 )
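
A minimal sketch of the contour idea above, assuming a hypothetical pre-cropped marker ROI file and hand-picked Canny thresholds:

    import cv2

    img = cv2.imread('marker_roi.bmp', cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)

    # CHAIN_APPROX_SIMPLE collapses straight runs into their endpoints;
    # [-2] keeps this working across OpenCV 3.x and 4.x return signatures
    contours = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    c = max(contours, key=cv2.contourArea)

    # Centroid from the contour moments
    # (assumes the contour encloses a non-zero area)
    m = cv2.moments(c)
    cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
    print(cx, cy)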

I tried the script with median blur and without blur, and the results were about the same (without blur was even slightly worse). The variance in position was actually ±1 pixel instead of ±2, and the angle accuracy is still ±0.5 to 1 degree. I'll give calculating the moments from Canny a go.

Simonn ( 2018-03-26 04:13:34 -0600 )

Initial thoughts on sub-pixel: Sobel X and Y, then use the cartToPolar function to get gradient magnitude and orientation at each location. Figure out the four orientations that matter, then fit a straight line that maximizes the included gradient magnitude, but only for the one orientation. Just be consistent when scoring the lines.

Or perhaps lay a line along the orientation, and figure out where the half-way point between above and below the edge is.

Not an easy problem.

Tetragramm ( 2018-03-26 18:16:04 -0600 )
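
A minimal sketch of the first idea (gradient magnitude and orientation, then a line fit), assuming a hypothetical pre-cropped ROI file and selecting, as an example, the edge pixels whose gradient points within 15 degrees of 0°:

    import cv2
    import numpy as np

    img = cv2.imread('marker_roi.bmp', cv2.IMREAD_GRAYSCALE).astype(np.float32)

    # Gradient magnitude and orientation at every pixel
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)

    # Keep strong edge pixels whose orientation is near one of the four
    # marker edge directions (here: within 15 degrees of 0)
    strong = mag > 0.5 * mag.max()
    near_zero = np.abs(((ang + 180.0) % 360.0) - 180.0) < 15.0
    ys, xs = np.nonzero(strong & near_zero)
    pts = np.column_stack([xs, ys]).astype(np.float32)

    # Least-squares line through the selected pixels; weighting by gradient
    # magnitude would be the next refinement toward sub-pixel accuracy
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    print(vx, vy, x0, y0)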