# Perspective transform without crop

I have two images, *src* and *dst*. I'm trying to perform some transformations on *src* to make it align better with *dst*.

One of the first transformations I'm applying is a perspective transform. I have some landmark points on both images, and I'm assuming that the landmarks fall on a plane and that all that has changed is the camera's perspective. I'm using `cv2.findHomography` to find the transformation matrix which represents the change in the camera.
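As a side note on what that matrix means: a homography is a 3×3 matrix that maps 2D points in homogeneous coordinates. A minimal NumPy sketch (the matrix here is made up, a pure translation, just to show the mechanics):

```
import numpy as np

# Made-up homography for illustration: a pure translation by (5, -3).
H = np.array([
    [1.0, 0.0,  5.0],
    [0.0, 1.0, -3.0],
    [0.0, 0.0,  1.0],
])

def apply_homography(H, pt):
    """Map a 2D point through H using homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

print(apply_homography(H, (0, 0)))  # (5.0, -3.0)
```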

However, if I then apply this transformation to *src*, some of the image might be transformed outside of my viewport, causing the image to be cropped. For instance, the top left corner (0, 0) might be transformed to (-10, 10), which means this part of the image is lost.

So I'm trying to perform the transformation and get an uncropped image.

I've played around with using `cv2.perspectiveTransform` on a list of points representing the corners of *src*, and then getting the `cv2.boundingRect`, which tells me the size of the array I need to store the uncropped image. But I can't figure out how to translate the image so that none of its points get transformed out of bounds.
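For reference, the bounding rectangle of the transformed corners can also be computed with plain NumPy (the corner values below are made up); this is roughly what `cv2.boundingRect` does for float points:

```
import numpy as np

# Made-up transformed corners; note the negative x values that would
# fall outside the viewport.
corners = np.array([
    [-10.0,  10.0],
    [ -5.0, 400.0],
    [300.0, 390.0],
    [310.0,   8.0],
])

# Tightest upright rectangle containing all corners, as (x, y, w, h).
x0, y0 = np.floor(corners.min(axis=0)).astype(int)
x1, y1 = np.ceil(corners.max(axis=0)).astype(int)
bx, by, bwidth, bheight = x0, y0, x1 - x0, y1 - y0
print(bx, by, bwidth, bheight)  # -10 8 320 392
```

A negative `bx` or `by` is exactly the cropping problem: those pixels have nowhere to go in an output array indexed from (0, 0).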

If I translate *src* before I apply the transformation, then I think I've "invalidated" the transformation. So I think I have to modify the transformation matrix somehow to apply the translation at the same time.
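This intuition can be checked numerically: with the column-vector convention OpenCV uses, the product `A @ B` applies `B` first and `A` second, so where the translation sits in the product decides whether it happens before or after the warp. A sketch with made-up matrices:

```
import numpy as np

# Made-up homography for illustration: uniform scale by 2.
H = np.array([
    [2.0, 0.0, 0.0],
    [0.0, 2.0, 0.0],
    [0.0, 0.0, 1.0],
])
# Translation by (1, 0).
T = np.array([
    [1.0, 0.0, 1.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])

def warp(M, pt):
    """Apply a 3x3 homography M to a 2D point."""
    v = M @ np.array([pt[0], pt[1], 1.0])
    return (v[0] / v[2], v[1] / v[2])

# With column vectors, (A @ B) applies B first, then A:
print(warp(T @ H, (1, 1)))  # scale, then translate -> (3.0, 2.0)
print(warp(H @ T, (1, 1)))  # translate, then scale -> (4.0, 2.0)
```

The two orders give different mappings unless the matrices commute, so which side the translation is multiplied on is not a matter of taste.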

An answer on StackOverflow from Matt Freeman, on a question titled "OpenCV warpperspective" (I cannot link to it due to stupid karma rules), seemed promising, but didn't quite work.

```
import cv2
import numpy as np

# Compute the perspective transform matrix for the points
ph, _ = cv2.findHomography(pts_src, pts_dst)

# Find the corners after the transform has been applied
height, width = src.shape[:2]
corners = np.array([
    [0, 0],
    [0, height - 1],
    [width - 1, height - 1],
    [width - 1, 0]
])
corners = cv2.perspectiveTransform(np.float32([corners]), ph)[0]

# Find the bounding rectangle
bx, by, bwidth, bheight = cv2.boundingRect(corners)

# Compute the translation homography that will move (bx, by) to (0, 0)
th = np.array([
    [1, 0, -bx],
    [0, 1, -by],
    [0, 0, 1]
])

# Combine the homographies
pth = ph.dot(th)

# Apply the transformation to the image
warped = cv2.warpPerspective(src, pth, (bwidth, bheight),
                             flags=cv2.INTER_CUBIC,
                             borderMode=cv2.BORDER_CONSTANT)
```

If this is correct, then I should be able to find the bounding rectangle again and it should be (0, 0, `bwidth`, `bheight`):

```
corners = np.array([
    [0, 0],
    [0, height - 1],
    [width - 1, height - 1],
    [width - 1, 0]
])
corners = cv2.perspectiveTransform(np.float32([corners]), pth)[0]
bx2, by2, bwidth2, bheight2 = cv2.boundingRect(corners)
print(bx, by, bwidth, bheight)
print(bx2, by2, bwidth2, bheight2)
```

Instead I get

```
-229 -82 947 1270
265 134 671 1096
```

Ugh, due to more stupid karma rules I can't even answer my own question.

Matt Freeman wrote,

> Homographies can be combined using matrix multiplication (which is why they are so powerful). If *A* and *B* are homographies, then *AB* represents the homography ...

Hello! I have the same problem. Were you able to solve it in Python OpenCV?