# Bilinear sampling from a GpuMat

Hi everyone,

I'm writing a GPU-based shape/appearance model, for which I have to crop patches centered on given key points. The patches are square but not necessarily aligned with the image axes, so I cannot just use a rowRange/colRange. My plan is to create a fixed matrix of coordinate offsets, O:

```
O = [x1, x2, ..., xn;
     y1, y2, ..., yn;
      1,  1, ...,  1]
```

These are homogeneous coordinates. I will store this matrix on the GPU. When I want to sample a patch around X = [x, y, 1]^T, I simply transform the coordinates by a similarity transformation matrix M (which performs translation, rotation, and scaling):

```
P = M * O
```

So P will again have the same layout as O, but with transformed coordinates.
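A minimal CPU sketch of the setup described above, in numpy (the helper names `patch_offsets` and `similarity` are mine, purely illustrative):

```python
import numpy as np

def patch_offsets(size):
    """Homogeneous offset grid O for a size x size patch centered at the origin."""
    half = (size - 1) / 2.0
    xs, ys = np.meshgrid(np.arange(size) - half, np.arange(size) - half)
    return np.vstack([xs.ravel(), ys.ravel(), np.ones(size * size)])

def similarity(cx, cy, angle, scale):
    """Similarity transform M: scale, rotate by angle (radians), translate to (cx, cy)."""
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    return np.array([[c, -s, cx],
                     [s,  c, cy],
                     [0.0, 0.0, 1.0]])

O = patch_offsets(5)                    # 3 x 25 matrix of homogeneous offsets
M = similarity(100, 50, np.pi / 4, 2.0) # patch centered at (100, 50), rotated 45 deg, scale 2
P = M @ O                               # same layout as O, transformed coordinates
```

Row 0 of P then holds the x coordinates of the sample points, row 1 the y coordinates, and the center column is exactly the key point (100, 50).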

**Now for the question**:
Given a matrix P of coordinates, how can I efficiently sample an image f(x, y) at the coordinates in P? The output should be a vector or matrix containing the pixel values at those coordinates. I want bilinear sampling, which is a built-in operation on the GPU (so it should be efficient). I suppose I could write a custom kernel for this, but I would expect OpenCV to provide it already; I searched the documentation but didn't find anything.
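If I understand the docs correctly, `cv::cuda::remap` (in the `cudawarping` module) does this: it takes per-pixel x and y coordinate maps plus an interpolation flag such as `INTER_LINEAR`, so rows 0 and 1 of P, reshaped to the patch size, could serve as the maps directly. To make the operation concrete, here is a CPU sketch in numpy of what bilinear sampling at arbitrary coordinates computes (the function name is mine):

```python
import numpy as np

def bilinear_sample(img, xs, ys):
    """Sample img (H x W float array) at fractional coordinates (xs, ys)
    with bilinear interpolation. Coordinates are clipped to stay in bounds."""
    x0 = np.clip(np.floor(xs).astype(int), 0, img.shape[1] - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, img.shape[0] - 2)
    dx = xs - x0
    dy = ys - y0
    # interpolate along x on the two bracketing rows, then along y
    top = img[y0, x0] * (1 - dx) + img[y0, x0 + 1] * dx
    bot = img[y0 + 1, x0] * (1 - dx) + img[y0 + 1, x0 + 1] * dx
    return top * (1 - dy) + bot * dy

img = np.arange(16, dtype=float).reshape(4, 4)
vals = bilinear_sample(img, np.array([0.5, 1.0]), np.array([0.5, 2.0]))
# (0.5, 0.5) averages the four corner pixels 0, 1, 4, 5 -> 2.5
# (1.0, 2.0) lands exactly on img[2, 1] -> 9.0
```

On the GPU this per-pixel gather is what the texture hardware accelerates, which is presumably why remap with `INTER_LINEAR` is fast.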

Alternatively, I could rotate/scale the whole image and then crop an axis-aligned patch, but this seems less efficient.

Thanks in advance

> "Alternatively, I could rotate/scale the whole image and then crop an axis-aligned patch, but this seems less efficient."

This is exactly what I'm doing in my project. It might not be the most efficient way to do it, but it takes less than 2 milliseconds to execute on my computer for a 640x480 image.
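For what it's worth, the two approaches should give identical values: warping the whole image by the inverse of M and then taking an axis-aligned crop means each output pixel p of the patch pulls from source coordinate M·p, which is exactly sampling at M·O. A small numpy sketch of that per-pixel mapping (the function name is mine, and the loop stands in for what a warp routine would do in parallel):

```python
import numpy as np

def warp_then_crop(img, M, size):
    """Produce a size x size axis-aligned patch whose pixel (i, j) is the
    bilinearly interpolated value of img at M @ [j - half, i - half, 1]^T.
    Assumes all mapped coordinates fall inside the image."""
    half = (size - 1) / 2.0
    out = np.zeros((size, size))
    for i in range(size):
        for j in range(size):
            # patch coordinate in homogeneous form, mapped back into the source
            x, y, _ = M @ np.array([j - half, i - half, 1.0])
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            dx, dy = x - x0, y - y0
            out[i, j] = (img[y0, x0] * (1 - dx) * (1 - dy)
                         + img[y0, x0 + 1] * dx * (1 - dy)
                         + img[y0 + 1, x0] * (1 - dx) * dy
                         + img[y0 + 1, x0 + 1] * dx * dy)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
# pure translation to (2, 2): the patch is just the centered 3x3 sub-block
M = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])
patch = warp_then_crop(img, M, 3)
```

The cost difference is that this touches only size^2 pixels, while warping the full frame first touches every pixel of the image; at 640x480 that overhead is evidently still well under 2 ms.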