
I don't think you can improve your result by fine-tuning the parameters of your edge detection. You would only learn parameters that fit this particular situation, and your algorithm would fail anywhere else.

I would try to do it RANSAC style: randomly choose four lines, compute their intersections, and call T = getPerspectiveTransform(intersections, goal), where intersections are your points and goal the expected new positions (in your case a rectangle).
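
A minimal sketch of one iteration's fitting step, assuming the lines come from cv2.HoughLines in (rho, theta) form; the example lines, the corner ordering and the target size W x H are made-up placeholders:

    import numpy as np
    import cv2

    def intersect(l1, l2):
        """Intersection of two lines given in Hough (rho, theta) form."""
        A = np.array([[np.cos(l1[1]), np.sin(l1[1])],
                      [np.cos(l2[1]), np.sin(l2[1])]])
        b = np.array([l1[0], l2[0]])
        return np.linalg.solve(A, b)   # raises LinAlgError for parallel lines

    # four hypothetical lines (rho, theta), e.g. from cv2.HoughLines,
    # ordered as top, right, bottom, left side of the quadrilateral
    top, right = (50.0, np.pi / 2), (600.0, 0.0)
    bottom, left = (400.0, np.pi / 2), (30.0, 0.0)

    # corner points in a fixed order: TL, TR, BR, BL
    intersections = np.float32([intersect(top, left), intersect(top, right),
                                intersect(bottom, right), intersect(bottom, left)])

    # goal: corners of the target rectangle (the size is an assumption)
    W, H = 600, 400
    goal = np.float32([[0, 0], [W, 0], [W, H], [0, H]])

    T = cv2.getPerspectiveTransform(intersections, goal)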

To check whether you have chosen the right points, compute projected = T*intersections. In the good case, the projected points lie very close to the goal points (1 or 2 pixels). If you have chosen the wrong intersections, these distances are much larger (and the projected points don't form a rectangle), and you start the next iteration with another group of four lines.
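
Continuing the sketch above, the check could look like this. Note that getPerspectiveTransform maps the four fitting points onto the goal exactly by construction, so the distances only become meaningful for points that were not used in the fit (for example intersections of the remaining detected lines); the extra candidates and the threshold below are hypothetical:

    def reprojection_errors(T, candidates, expected):
        """Map candidate points with T and return their distances to the
        positions where they should end up in the goal rectangle."""
        pts = np.float32(candidates).reshape(-1, 1, 2)   # shape cv2 expects
        projected = cv2.perspectiveTransform(pts, T).reshape(-1, 2)
        return np.linalg.norm(projected - np.float32(expected), axis=1)

    # hypothetical candidates not used in the fit, e.g. intersections of the
    # remaining detected lines, together with their expected target positions
    candidates = [(30.5, 49.5), (599.5, 400.5)]
    expected = [(0.0, 0.0), (float(W), float(H))]

    errors = reprojection_errors(T, candidates, expected)
    if np.all(errors < 2.0):   # 1-2 px threshold from above; tune for your images
        print("good choice of lines, keep T:", errors)
    else:
        print("wrong intersections, try another four lines:", errors)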

You don't have too many lines, so this should terminate rather fast, especially if you put some intelligence into the random choice, e.g. adjacent sides should meet at an angle of about 90 degrees (±20, even in the original image).
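
A cheap plausibility filter along those lines might look as follows; the ordering convention (top, right, bottom, left) and the 20-degree tolerance are assumptions:

    import numpy as np

    def plausible_quad(lines, tol_deg=20.0):
        """Pre-check before fitting: adjacent sides of the candidate
        quadrilateral should meet at roughly 90 degrees, even in the
        distorted image. `lines` are four (rho, theta) lines ordered
        top, right, bottom, left."""
        thetas = [theta for _, theta in lines]
        for a, b in zip(thetas, thetas[1:] + thetas[:1]):   # adjacent pairs
            angle = abs(np.degrees(a - b)) % 180.0
            angle = min(angle, 180.0 - angle)               # fold into [0, 90]
            if abs(angle - 90.0) > tol_deg:
                return False
        return True

    # the quadruple from the first sketch passes, a set with two nearly
    # parallel adjacent lines is rejected before the expensive fit
    print(plausible_quad([(50.0, np.pi / 2), (600.0, 0.0),
                          (400.0, np.pi / 2), (30.0, 0.0)]))    # True
    print(plausible_quad([(50.0, np.pi / 2), (60.0, np.pi / 2),
                          (400.0, np.pi / 2), (30.0, 0.0)]))    # False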