Get position of source points in destination after stitching
Hi all,
I have some regions of interest in my source images which I would like to visualise in the final panorama after stitching.
I have tried drawing them on the source images before stitching but they can get covered in the overlap between images and disappear.
So, I tried to warp the points using the warpPoint function so that I could re-draw these ROIs on the final image.
Basically, I am going through the stitching_detailed example and trying to warp the points right before the images themselves are warped:
// Camera intrinsics for this image, converted to CV_32F for the warper
Mat K;
cameras[img_idx].K().convertTo(K, CV_32F);

// Warp the ROI corner points into the warper's coordinate frame
Point2f p1Warp = warper->warpPoint(p1, K, cameras[img_idx].R);
Point2f p2Warp = warper->warpPoint(p2, K, cameras[img_idx].R);
cout << "p1 warp: " << p1Warp << " p2 warp: " << p2Warp << "\n";

// Warp the current image
warper->warp(img, K, cameras[img_idx].R, INTER_LINEAR, BORDER_REFLECT, img_warped);
This also does not seem to work, as I am getting negative coordinates for my points.
TL;DR: how do I find the position of a region of interest in the destination image after stitching?
Thanks!
Huh. That's how I would have done it, but I'm getting negative points too.
I tried to print out the new warped image size, and this is what I get:
orig size: [720 x 1280] warped size: [789 x 1020 from (-704, 372)]
Does this mean the warped photo has an origin different from (0, 0), and so I need to offset the points based on that?
It's worth a shot.
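Something like this might work (just a sketch; the idea is that warp() returns the top-left corner of the warped image in the warper's coordinate frame, which is the same frame warpPoint() maps into):

// warp() returns the top-left corner of img_warped in the warper's frame
Point corner = warper->warp(img, K, cameras[img_idx].R, INTER_LINEAR, BORDER_REFLECT, img_warped);
// warpPoint() maps into that same frame, so shift into img_warped's pixel grid
Point2f p1Local = p1Warp - Point2f((float)corner.x, (float)corner.y);
Point2f p2Local = p2Warp - Point2f((float)corner.x, (float)corner.y);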
But how do I get the coordinates after the stitching has completed? This just gives me the coordinates in the warped image, but if I draw rectangles there they will still disappear during the compositing/blending phase... How can I get the coordinates of a point in the final stitched image?
Hmm. I don't know what's going on. I'm going to have to look at the code to see. It'll take a bit. I'll try to see tomorrow, could be longer.
Thanks a lot! I think I can get the warped point coordinates now by subtracting the offset, but I am still struggling to understand how to map this point after the images get fed into the blender.
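My best guess so far is something like this (untested sketch, based on how the example collects the warped corners and sizes of all images before calling blender->prepare(corners, sizes)):

// the panorama's origin should be the top-left of the union of all warped rectangles
Rect pano_roi = cv::detail::resultRoi(corners, sizes);
// so a point from warpPoint() would land here in the final stitched image
Point2f p1Final = p1Warp - Point2f((float)pano_roi.x, (float)pano_roi.y);

Does that look right, or does the blender shift things further?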
My first guess: when you have your ROI, you have coordinate system 1, and after the warp there is no guarantee that this coordinate system still makes sense. How about trying to find keypoints inside your ROI and using those?
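Something like this (a rough sketch, assuming an ORB detector and your ROI as a cv::Rect called roiRect):

// detect keypoints only inside the ROI by masking everything else out
Ptr<ORB> orb = ORB::create();
Mat mask = Mat::zeros(img.size(), CV_8UC1);
mask(roiRect).setTo(Scalar(255));
vector<KeyPoint> kps;
orb->detect(img, kps, mask);
// then warp each kps[i].pt with warper->warpPoint() as above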
Hi Steven, thanks for replying! I can get the coordinates after the warp, which I could also get with keypoints, good idea. My issue is that when the images go into the blender they get stitched with some variable overlap, so even if I use keypoints after the warp, how can I get the corresponding coordinates in the final stitched image?
Hi all! There is one thing worth mentioning: in the detailed example the images are loaded twice (imread) and resized to different scales along the way, so the image you are warping and drawing on is probably not the one that is actually used for stitching.
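So if your ROI points are given in the original full-resolution image, they probably need to be rescaled before warping, something like this (a sketch, using the compose_scale variable from the example; the compositing images are the originals resized by that factor, with the intrinsics rescaled to match):

// bring the ROI point into the compositing image's scale before warping it
Point2f p1Compose((float)(p1.x * compose_scale), (float)(p1.y * compose_scale));
p1Warp = warper->warpPoint(p1Compose, K, cameras[img_idx].R);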