Projecting points onto image generated by warpPerspective

asked 2020-03-25 17:17:49 -0600 by badams

I am working on a project that uses an external program to stitch multiple images into a single mosaic image, applying several types of transformations to approximate a single continuous image as closely as possible.

In addition to basic scale, rotate, and translate operations, the algorithm applies perspective transformations by moving each corner of the source image independently, via getPerspectiveTransform() and warpPerspective(). The program outputs the compiled mosaic image and a CSV file containing, for each source image (a parsing sketch follows this list):

  • The center point of the image in the mosaic
  • X and Y offsets for each corner of the source image, applied before it is rendered into the mosaic
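For illustration, here is a minimal sketch of how such a CSV could be read; the column layout (filename, center X/Y, then dx/dy pairs for the four corners) is purely hypothetical and the actual layout produced by the external program may differ:

import csv

def read_mosaic_metadata(csv_path):
    # Hypothetical layout: name, center_x, center_y, then eight values giving
    # (dx, dy) for each corner in the same order as the source-image quad
    # (top-left, top-right, bottom-right, bottom-left) -- an assumption.
    entries = {}
    with open(csv_path, newline='') as f:
        for row in csv.reader(f):
            name = row[0]
            center = (float(row[1]), float(row[2]))
            values = [float(v) for v in row[3:11]]
            corner_offsets = list(zip(values[0::2], values[1::2]))
            entries[name] = (center, corner_offsets)
    return entries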

The issue is that we have coordinates of features detected in the source images, and we need to map those coordinates onto the resulting mosaic image. It seems the matrix generated by getPerspectiveTransform cannot be applied as-is to map a point, as in my attempt in the code sample below.

import cv2
import numpy as np
from PIL import ImageDraw

def mark_transformed_point(mosaic_image, source_size, point, corner_offsets):
    # Corners of the source image, in order: top-left, top-right,
    # bottom-right, bottom-left.
    image_quad = np.array([
        (0, 0),
        (source_size[0], 0),
        (source_size[0], source_size[1]),
        (0, source_size[1]),
    ], dtype='float32')
    # Apply the per-corner offsets from the CSV to get the distorted quad.
    distorted_quad = np.array([
        (x + dx, y + dy) for (x, y), (dx, dy) in zip(image_quad, corner_offsets)
    ], dtype='float32')

    # 3x3 perspective matrix mapping the source quad onto the distorted quad.
    transform = cv2.getPerspectiveTransform(image_quad, distorted_quad)
    # Attempted mapping: multiply the point, as a homogeneous coordinate,
    # by the transform matrix.
    transformed_point = np.array([point[0], point[1], 1]).dot(transform.T)

    # Mark the mapped location on the mosaic with a red dot.
    draw = ImageDraw.Draw(mosaic_image)
    x = transformed_point[0]
    y = transformed_point[1]
    radius = 10
    draw.ellipse([(x - radius, y - radius), (x + radius, y + radius)], fill='red')
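For comparison, pushing a single point through a 3x3 perspective matrix normally includes a division by the homogeneous coordinate, which the dot product above leaves out; cv2.perspectiveTransform performs this divide internally. A minimal sketch of that step alone (it does not account for where the warped image is placed within the mosaic, which presumably still needs the center point from the CSV):

import cv2
import numpy as np

def project_point(transform, point):
    # cv2.perspectiveTransform expects an array of shape (N, 1, 2) and
    # performs the perspective divide internally.
    pts = np.array([[point]], dtype='float32')
    return tuple(cv2.perspectiveTransform(pts, transform)[0, 0])

def project_point_manual(transform, point):
    # Equivalent manual form: multiply by the 3x3 matrix, then divide by w.
    x, y, w = transform.dot([point[0], point[1], 1.0])
    return (x / w, y / w)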

Can anyone point me in the right direction regarding how I can achieve this mapping? Thanks.
