Perspective transform without crop

asked 2017-04-27 07:57:27 -0600 by rgov

updated 2017-04-27 08:38:10 -0600

I have two images, src and dst. I'm trying to perform some transformations on src to make it align better with dst.

One of the first transformations I'm applying is a perspective transform. I have some landmark points on both images, and I'm assuming that the landmarks fall on a plane and that all that has changed is the camera's perspective. I'm using cv2.findHomography to find the transformation matrix which represents the change in the camera.
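
For context, the setup looks roughly like this (the file names and landmark coordinates below are placeholders for illustration, not values from my actual data):

import cv2
import numpy as np

src = cv2.imread('src.png')   # hypothetical file names
dst = cv2.imread('dst.png')

# Matching landmark points in each image; cv2.findHomography expects
# float32 arrays of shape (N, 2) or (N, 1, 2), with N >= 4.
pts_src = np.float32([[10, 10], [300, 15], [290, 200], [12, 210]])
pts_dst = np.float32([[0, 0], [320, 0], [320, 240], [0, 240]])

# RANSAC makes the estimate robust to outlier correspondences.
ph, mask = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC)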

However, if I then apply this transformation to src, some of the image might be transformed outside of my viewport, causing the image to be cropped. For instance, the top left corner (0, 0) might be transformed to (-10, 10), which means this part of the image is lost.

So I'm trying to perform the transformation and get an uncropped image.

I've played around with using cv2.perspectiveTransform on a list of points representing the corners of src, and then taking the cv2.boundingRect of the result, which tells me the size of the array I need to store the uncropped image. But I can't figure out how to translate the image so that none of its points get transformed out of bounds.

If I translate src before I apply the transformation, then I think I've "invalidated" the transformation. So I think I have to modify the transformation matrix somehow to apply the translation at the same time.


An answer on StackOverflow from Matt Freeman, on a question titled "OpenCV warpperspective" (I cannot link to it due to stupid karma rules), seemed promising, but didn't quite work.

# Compute the perspective transform matrix for the points
ph, _ = cv2.findHomography(pts_src, pts_dst)

# Find the corners after the transform has been applied
height, width = src.shape[:2]
corners = np.array([
  [0, 0],
  [0, height - 1],
  [width - 1, height - 1],
  [width - 1, 0]
])
corners = cv2.perspectiveTransform(np.float32([corners]), ph)[0]

# Find the bounding rectangle
bx, by, bwidth, bheight = cv2.boundingRect(corners)

# Compute the translation homography that will move (bx, by) to (0, 0)
th = np.array([
  [ 1, 0, -bx ],
  [ 0, 1, -by ],
  [ 0, 0,   1 ]
])

# Combine the homographies
pth = ph.dot(th)

# Apply the transformation to the image
warped = cv2.warpPerspective(src, pth, (bwidth, bheight), flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_CONSTANT)

If this is correct, then I should be able to find the bounding rectangle again and it should be (0, 0, bwidth, bheight):

corners = np.array([
  [0, 0],
  [0, height - 1],
  [width - 1, height - 1],
  [width - 1, 0]
])
corners = cv2.perspectiveTransform(np.float32([corners]), pth)[0]    
bx2, by2, bwidth2, bheight2 = cv2.boundingRect(corners)

print(bx, by, bwidth, bheight)
print(bx2, by2, bwidth2, bheight2)

Instead I get

-229 -82 947 1270
265 134 671 1096

Ugh, due to more stupid karma rules I can't even answer my own question.

Matt Freeman wrote,

Homographies can be combined using matrix multiplication (which is why they are so powerful). If A and B are homographies, then AB represents the homography ...

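That quote is cut off, but it points at the fix: warpPerspective applies the matrix directly to source coordinates, so the translation has to be multiplied on the left of the homography, not the right. A minimal correction to the code above (consistent with the answer below):

# Combine the homographies: th is applied AFTER ph, so it goes on the left.
pth = th.dot(ph)

warped = cv2.warpPerspective(src, pth, (bwidth, bheight),
                             flags=cv2.INTER_CUBIC,
                             borderMode=cv2.BORDER_CONSTANT)

With this order, re-running the verification should print (0, 0, bwidth, bheight) for the second bounding rectangle.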

Comments

Hello! I have the same problem. Did you manage to solve it in Python OpenCV?

guilleeecha (2018-10-03 14:15:08 -0600)

1 answer


answered 2017-04-27 08:43:35 -0600 by kbarni

updated 2017-04-27 08:54:25 -0600

Calculating the bounding box of the transformed image was a good step :)

Now, as you said, you have to "translate" the final image to fit the bounding box. So if the transformed top-left corner is at (-10,-10), you have to translate it by 10 pixels in each direction.

The translation matrix has the following form:

    | 1 0 tx |
A = | 0 1 ty |
    | 0 0  1 |

where tx = -bb.x and ty = -bb.y (bb is the bounding box, so bb.x and bb.y are its left and top edges).

If you have a perspective transformation matrix P, calculate the final matrix F = A x P (the order matters: multiplying by A on the left applies the translation after P). The size of the transformed image is (bb.width, bb.height).

Code:

// bb is the bounding rectangle (cv::Rect) of the transformed corners.
Mat A = Mat::eye(3, 3, CV_64F);
A.at<double>(0, 2) = -bb.x;   // tx: compensate for the overhang on the left
A.at<double>(1, 2) = -bb.y;   // ty: compensate for the overhang on the top
Mat F = A * P;
warpPerspective(image, result, F, Size(bb.width, bb.height));

Note: if there's a problem, double-check the x/y and width/height ordering... (I adapted this from some old code I had lying around).
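
Since the question uses Python, here is the same approach sketched in Python with the question's variable names (an untested translation of the C++ above):

import cv2
import numpy as np

ph, _ = cv2.findHomography(pts_src, pts_dst)

# Transform the source corners and take their bounding rectangle.
height, width = src.shape[:2]
corners = np.float32([[[0, 0], [0, height - 1],
                       [width - 1, height - 1], [width - 1, 0]]])
bx, by, bwidth, bheight = cv2.boundingRect(
    cv2.perspectiveTransform(corners, ph)[0])

# Translation matrix A that moves (bx, by) to the origin.
th = np.array([[1, 0, -bx],
               [0, 1, -by],
               [0, 0,  1]], dtype=np.float64)

# F = A x P: the translation is applied after the perspective transform.
pth = th.dot(ph)
warped = cv2.warpPerspective(src, pth, (bwidth, bheight))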

