# Translation transform with depth image

Hi,

**Summary**:

- Translate a depth map by a known <X, Y, Z>
- The output depth map has the same size as the input
- A smoother estimate than what *subtracting Z and translating with warpPerspective* would give is probably needed

**Explanation**:
I'm trying to perform a translation transformation on a depth map (depth only, no intensities) so that I can zoom in on a particular part of the image while keeping the matrix the same size.

E.g., if my input matrix is mat_inp with size (rows, cols) and type float, then I'd like my output mat_out to have size (rows, cols) and type float, with the origin translated by (X, Y, Z). The translation (X, Y, Z) is known and expressed in the frame of the initial perspective; I'd like to move my viewpoint to that point. Does anyone know of an existing function that does this?
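For reference, here is a minimal sketch of what I mean, assuming a pinhole camera model: back-project each pixel to 3D, shift by (X, Y, Z), and re-project into a same-sized map. The intrinsics fx, fy, cx, cy are placeholders for whatever calibration applies; the z-buffer step is one way to resolve several points landing on the same output pixel.

```python
import numpy as np

def translate_depth(depth, X, Y, Z, fx, fy, cx, cy):
    """Back-project each depth pixel to 3D, shift the origin by
    (X, Y, Z), and re-project into a new depth map of the same size.
    fx, fy, cx, cy are assumed pinhole intrinsics (hypothetical here)."""
    rows, cols = depth.shape
    u, v = np.meshgrid(np.arange(cols), np.arange(rows))
    z = depth
    valid = z > 0
    # back-project pixels to camera coordinates
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # shift the point cloud into the new camera frame
    xn, yn, zn = x - X, y - Y, z - Z
    out = np.zeros_like(depth)
    front = valid & (zn > 0)          # points behind the new origin are dropped
    # re-project into the new image plane
    un = np.round(xn[front] * fx / zn[front] + cx).astype(int)
    vn = np.round(yn[front] * fy / zn[front] + cy).astype(int)
    inside = (un >= 0) & (un < cols) & (vn >= 0) & (vn < rows)
    un, vn, zf = un[inside], vn[inside], zn[front][inside]
    # z-buffer: write far points first so near points overwrite them
    order = np.argsort(-zf)
    out[vn[order], un[order]] = zf[order]
    return out
```

With a zero translation this should reproduce the input wherever the depth is valid.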

I thought of a way to do it, but I'm not sure it's correct:

- Subtract all pixels by Z
- Replace negatives by zero
- Use warpPerspective to translate X,Y,Z

The only problem is that, in the case of occlusions caused by something closer than Z, I don't think I'd get a smooth new depth map. If there were a small object close by, it could make the warped depth map read closer than a smooth estimate should. These warp methods seem to be optimized for intensity values, which makes me wonder whether a function exists that does this translation shift for a depth map.
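One crude way to smooth over the holes and occlusion artifacts left by a warp is to fill unknown (zero) pixels from their valid neighbors. This is only a sketch of that idea, not an occlusion-aware reconstruction; the window size is an arbitrary choice.

```python
import numpy as np

def fill_holes(depth, ksize=3):
    """Fill zero (unknown) pixels with the median of the valid
    neighbors in a ksize x ksize window -- a crude smoothing step."""
    rows, cols = depth.shape
    out = depth.copy()
    r = ksize // 2
    for v, u in np.argwhere(depth == 0):
        win = depth[max(0, v - r):v + r + 1, max(0, u - r):u + r + 1]
        vals = win[win > 0]
        if vals.size:
            out[v, u] = np.median(vals)
    return out
```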

Sorry about the long post, any help is appreciated.