As the title says, I'm trying to find a way to generate a transformation matrix that best aligns two images (the solution with the smallest error value computed with an arbitrary metric, for example the sum of absolute distances between corresponding points). Example provided below:
This is just an example in the sense that the outer contour can be any shape, the "holes" can be any shape, any size and any number.
The "from" image was drawn by hand in order to show that the shape is not perfect, but rather a contour extracted from a camera acquired image.
The API function that seems closest to what I need is Video.estimateRigidTransform, but I ran into a couple of issues and I'm stuck:
The transformation must be rigid in the strictest sense: no scaling of any kind, only translation and rotation.
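For reference, a purely rigid (rotation plus translation, no scaling) least-squares fit between two sets of corresponding points can be computed directly with the Kabsch method; here is a minimal numpy sketch of that idea (the function name `rigid_align` is my own, not an OpenCV API):

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate rotation R and translation t (no scaling) mapping
    src points onto dst points in the least-squares sense (Kabsch)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Force a proper rotation (det = +1), ruling out reflections
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

This sidesteps the scaling problem entirely, but it still assumes the two point lists are already in corresponding order, which is exactly the difficulty described below.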
Since the shapes in the "from" image are not perfect, the number of points in each contour does not match the number in the corresponding "to" contour, and the function above needs two sets of corresponding points. To work around this I tried another approach: I computed the centroids of the holes and of the outer contour and tried aligning those. There are two issues here:
- I need alignment even if one of the holes is missing in the "from" image.
- The points must be in the same order in both lists passed to Video.estimateRigidTransform, and there is no guarantee that findContours will return them in the same order for both shapes.

I have yet to try running a feature extractor and matcher to obtain corresponding points, but I'm not very confident in that method, especially since the "from" image is a natural image with irregularities.
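One order-independent way to pair the centroids, sketched below under the assumption that the two images are at least roughly pre-aligned (e.g. after translating both sets so their outer-contour centroids coincide), is mutual nearest-neighbour matching. A hole that is missing from one image simply fails to produce a pair, so it addresses both issues at once (the function `match_centroids` and the `max_dist` parameter are my own inventions, not library calls):

```python
import numpy as np

def match_centroids(from_pts, to_pts, max_dist=None):
    """Pair each 'from' centroid with its nearest 'to' centroid,
    keeping only mutual nearest neighbours; unmatched centroids
    (e.g. a missing hole) are silently dropped.
    max_dist optionally rejects pairs farther apart than a threshold."""
    from_pts = np.asarray(from_pts, dtype=float)
    to_pts = np.asarray(to_pts, dtype=float)
    # Pairwise distance matrix: d[i, j] = |from_pts[i] - to_pts[j]|
    d = np.linalg.norm(from_pts[:, None, :] - to_pts[None, :, :], axis=2)
    pairs = []
    for i in range(len(from_pts)):
        j = int(np.argmin(d[i]))
        if int(np.argmin(d[:, j])) != i:
            continue  # not a mutual nearest neighbour, skip
        if max_dist is not None and d[i, j] > max_dist:
            continue  # too far apart to be the same hole
        pairs.append((i, j))
    return pairs
```

The resulting index pairs give two equally ordered point lists that can then be fed to a rigid estimator. If the initial misalignment is large, this simple scheme will mismatch holes; a globally optimal assignment (e.g. the Hungarian algorithm) would be a more robust, if heavier, alternative.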
Any ideas would be greatly appreciated.