# Create stitched image from known calibrated points

Hello

I'm starting to work with image stitching for the first time.

I'm working on a setup in which two depth sensors are aligned looking down onto a table. The image above shows the infrared images from the two cameras side by side.

The sensors' fields of view overlap a bit in the middle, and in this overlapping area I glued 10 black squares. I know the position of each of these black squares through a calibration process.

My goal is to stitch these two images so I end up with one full image that is consistent with my scenario. I looked a lot into the stitching classes, but everything I could find assumes finding keypoints in a color image to stitch, whereas in my case I can use calibrated points.

Can anyone give me some hints of how to proceed?

Have you tried just using the standard stitching algorithms? I understand that you want to use these calibration points explicitly, but I think the usual algorithm will find them just fine as well, since they are very distinctive.

If the black squares lie on the same plane, you can estimate the homography directly from the 2D image coordinates. If not, but you know the 3D object coordinates, you can estimate the camera displacement and then derive the homography from it.
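For the first (planar) case, the homography can be estimated from at least 4 point correspondences; in OpenCV that would be `findHomography()`, but the underlying DLT fits in a few lines of numpy. A minimal sketch (it skips the Hartley point normalization you would want for real, noisy data, and the check points at the bottom are made up):

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via the DLT.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A)
    # H is the null-space direction of A: the last right-singular vector.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Quick sanity check with a known transform: a pure translation by (5, 3).
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 2]], dtype=float)
H = estimate_homography(pts, pts + [5, 3])
```

With exact correspondences like these, the recovered H matches the true translation matrix to machine precision.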

@KjMag, there would be no need to compute them; it would be a waste of CPU resources, since in my scenario I can calibrate them.

@Eduardo Yes, I can compute the homography from these points, but how do I go from the homography matrix to the stitched image?

I guess I have to transform one of the images to the other's "perspective" using the homography matrix, is that it?
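That is the idea: warp one image into the other's frame, then composite both onto a wider canvas. A rough numpy sketch of that warp-and-paste step (grayscale images, nearest-neighbour sampling, and the "left image wins where non-zero" compositing rule are all simplifying assumptions; in practice `warpPerspective()` does the warp):

```python
import numpy as np

def warp_and_stitch(left, right, H, out_w):
    """Warp `right` into `left`'s frame with homography H (right -> left)
    and paste it onto a canvas that also contains `left`.

    Inverse mapping with nearest-neighbour sampling -- the same idea as
    cv2.warpPerspective(right, H, (out_w, h)) followed by copying `left`
    over the left part of the canvas.
    """
    h, w = left.shape
    canvas = np.zeros((h, out_w), dtype=left.dtype)
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:out_w]
    pts = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    src = Hinv @ pts                          # map canvas pixels back into `right`
    sx, sy = (src[:2] / src[2]).round().astype(int)
    ok = (sx >= 0) & (sx < right.shape[1]) & (sy >= 0) & (sy < right.shape[0])
    canvas.reshape(-1)[ok] = right[sy[ok], sx[ok]]
    # Keep `left` wherever it has data; elsewhere keep the warped `right`.
    canvas[:, :w] = np.where(left > 0, left, canvas[:, :w])
    return canvas

# Toy example: `right` sits 3 pixels to the right of `left` on the canvas.
left = np.zeros((4, 4), dtype=int)
right = np.arange(1, 17).reshape(4, 4)
H = np.array([[1.0, 0, 3], [0, 1, 0], [0, 0, 1]])
canvas = warp_and_stitch(left, right, H, 7)
```

For real sensor images you would replace the hard compositing rule with feathering or multi-band blending in the overlap.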

I know it would be a waste of resources in theory, but unless you care very much about performance, it might just be easier to implement, and in some cases ease of implementation > performance. It depends on the requirements and constraints; I don't know what yours are.

You should be able to use

`warpPerspective()`

for that. Maybe there is a way with the stitching module to reuse the transformation and perform only the warping / blending?

Correction for my first comment: computing the homography from the camera displacement also needs the plane equation (see here). In both cases, the homography transformation relies on the assumption of a planar scene.
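For reference, the homography induced by a plane under a known camera displacement is H = K (R + t nᵀ / d) K⁻¹, where the plane satisfies nᵀX = d in camera-1 coordinates and X₂ = R X₁ + t. A numpy sketch with made-up calibration numbers (K, R, t, n, d below are all hypothetical, not from the poster's setup):

```python
import numpy as np

# Hypothetical calibration data: K is a shared intrinsic matrix, and
# (R, t) is the pose of camera 2 relative to camera 1 (X2 = R @ X1 + t).
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # cameras mounted parallel, looking down
t = np.array([[0.30], [0.0], [0.0]])   # 30 cm baseline along x

# Table plane in camera-1 coordinates: n^T X = d (normal n, distance d).
n = np.array([[0.0], [0.0], [1.0]])    # plane perpendicular to the optical axis
d = 1.0                                # table 1 m below the cameras

# Euclidean homography induced by the plane, then its image-space version.
H_euclid = R + (t @ n.T) / d
H = K @ H_euclid @ np.linalg.inv(K)
H /= H[2, 2]
```

With these toy numbers the result is a pure horizontal pixel shift of fx · 0.30 / 1.0 = 157.5 px, which matches the intuition of two parallel cameras looking straight down at a plane.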

These are my first results using warpPerspective() to apply the transformation to the image on the right. The results are not very good yet, but I guess I can be more careful in selecting the points. I'll add more points and try to get a better result.