Related question: https://stackoverflow.com/questions/53907633/how-to-warp-an-image-using-deformed-mesh
I want to deform an image using the process described in Section 3 of this paper.
I'm using the following to generate a randomly deformed mesh:
import numpy as np

def mesh_perturb(image):
    rows = image.shape[0]
    columns = image.shape[1]
    xx = np.arange(0, rows, 1)
    yy = np.arange(0, columns, 1)
    [Y, X] = np.meshgrid(xx, yy)
    # One (x, y) coordinate pair per pixel of the image
    ms = np.transpose(np.asarray([X.flatten('F'), Y.flatten('F')]), (1, 0))

    perturbed_mesh = ms
    nv = np.random.randint(20) - 1
    for k in range(nv):
        # Choosing one vertex randomly
        vidx = np.random.randint(np.shape(ms)[0])
        vtex = ms[vidx, :]
        # Vector between all vertices and the selected one
        xv = perturbed_mesh - vtex
        # Random movement
        mv = (np.random.rand(1, 2) - 0.5) * 20
        # Perpendicular distance from every vertex to the line through vtex along mv
        hxv = np.zeros((np.shape(xv)[0], np.shape(xv)[1] + 1))
        hxv[:, :-1] = xv
        hmv = np.tile(np.append(mv, 0), (np.shape(xv)[0], 1))
        d = np.cross(hxv, hmv)
        d = np.absolute(d[:, 2])
        d = d / (np.linalg.norm(mv, ord=2))
        wt = d
        # Distance-based falloff: vertices far from the chosen one move less
        curve_type = np.random.rand(1)
        if curve_type > 0.3:
            alpha = np.random.rand(1) * 50 + 50
            wt = alpha / (wt + alpha)
        else:
            alpha = np.random.rand(1) + 1
            wt = 1 - (wt / 100)**alpha
        # Apply the weighted displacement to every vertex
        msmv = mv * np.expand_dims(wt, axis=1)
        perturbed_mesh = perturbed_mesh + msmv
    return perturbed_mesh
I used the cv2.remap function with the coordinates generated by the function above, but the warped images get cropped: the perturbed mesh moves coordinates outside the image bounds.
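Roughly, this is what I do with the mesh (simplified sketch; the file name, the reshape back onto the image grid and the interpolation flag are illustrative, not exactly my code):

import cv2
import numpy as np

img = cv2.imread('page.png')          # illustrative input path
perturbed_mesh = mesh_perturb(img)

h, w = img.shape[:2]
# Reshape the flat list of (x, y) pairs back onto the image grid; cv2.remap
# expects float32 maps giving, for each output pixel, the source coordinate.
map_x = perturbed_mesh[:, 0].reshape(h, w).astype(np.float32)
map_y = perturbed_mesh[:, 1].reshape(h, w).astype(np.float32)
warped = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)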
So I used cv2.copyMakeBorder to pad the image before passing it to mesh_perturb. The problem is that I have to use the coordinates as ground truth for a neural network, and training a network on very large (padded) images is impossible with my current hardware, and unnecessary. And if I crop the image afterwards, the ground truth gets polluted.
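For context, the padding step looks roughly like this (the pad size and border colour are placeholders I tune by hand):

import cv2

pad = 100  # arbitrary margin, large enough that the displaced mesh stays in frame
padded = cv2.copyMakeBorder(img, pad, pad, pad, pad,
                            cv2.BORDER_CONSTANT, value=(255, 255, 255))
perturbed_mesh = mesh_perturb(padded)  # then remap as above, on the padded image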
Can someone suggest a way to deform the image without needing to crop it?