
aPiso's profile - activity

2017-01-26 20:38:24 -0600 commented answer Applying homography on non planar surface

A homography by itself does not detect any object. The link you posted shows object detection through feature matching, which can also be used to compute a homography. If you want to do object detection, you should look into feature matching, HOG, or the newer and more effective deep-learning methods such as convolutional neural networks.
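
For reference, a minimal sketch of feature matching followed by homography estimation, assuming OpenCV's ORB detector; "object.png" and "scene.png" are placeholder file names, not from the linked example:

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main()
    {
        // Load the object template and the scene to search in
        cv::Mat obj = cv::imread("object.png", cv::IMREAD_GRAYSCALE);
        cv::Mat scene = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);

        // Detect keypoints and compute binary descriptors
        cv::Ptr<cv::ORB> orb = cv::ORB::create();
        std::vector<cv::KeyPoint> kObj, kScene;
        cv::Mat dObj, dScene;
        orb->detectAndCompute(obj, cv::noArray(), kObj, dObj);
        orb->detectAndCompute(scene, cv::noArray(), kScene, dScene);

        // Brute-force matching with cross-check (Hamming distance for ORB)
        cv::BFMatcher matcher(cv::NORM_HAMMING, true);
        std::vector<cv::DMatch> matches;
        matcher.match(dObj, dScene, matches);

        // Estimate the homography from the matched point pairs with RANSAC
        std::vector<cv::Point2f> pObj, pScene;
        for (const auto& m : matches)
        {
            pObj.push_back(kObj[m.queryIdx].pt);
            pScene.push_back(kScene[m.trainIdx].pt);
        }
        cv::Mat H = cv::findHomography(pObj, pScene, cv::RANSAC);
        return 0;
    }

The resulting H maps object coordinates into the scene, which is how the linked tutorial localises the object; the matching step is what actually finds it.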

2017-01-25 09:44:18 -0600 commented question Get position of source points in destination after stitching

Hi Steven, thanks for replying! I can get the coordinates after the warp, which I could get with the keypoints as well; good idea. My issue is that when the images go into the blender they get stitched with some variable overlap, so even if I use keypoints after the warp, how can I get the corresponding coordinates on the final stitched image?

2017-01-25 00:03:59 -0600 commented question Get position of source points in destination after stitching

Thanks a lot! I think I can get the warped point coordinates now by subtracting the offset, but I am still struggling to understand how to map this point after the images are fed into the blender.
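
For reference, a minimal sketch of that offset subtraction, reusing the variable names from the stitching_detailed example (warper, cameras, img_idx); p is an arbitrary point in the source image:

    // warpPoint() returns coordinates in the warper's global plane, while
    // warpRoi() gives the warped image's bounding box in that same plane,
    // so subtracting the ROI's top-left corner yields pixel coordinates
    // inside the warped image itself.
    cv::Mat K;
    cameras[img_idx].K().convertTo(K, CV_32F);
    cv::Point2f pWarp = warper->warpPoint(p, K, cameras[img_idx].R);
    cv::Rect roi = warper->warpRoi(img.size(), K, cameras[img_idx].R);
    cv::Point2f pInWarped = pWarp - cv::Point2f((float)roi.x, (float)roi.y);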

2017-01-24 21:49:37 -0600 commented question Get position of source points in destination after stitching

But how do I get the coordinates after the stitching has completed? This just gives me the coordinates in the warped image, but then if I draw rectangles they will still disappear during the compositing/blending phase... How can I get the coordinates of a point in the final stitched image?

2017-01-24 21:23:18 -0600 commented question Get position of source points in destination after stitching

I tried to print out the new warped image size:

    Rect roi = warper->warpRoi(sz, K, cameras[i].R); // line 802 of stitching_detailed.cpp
    cout << "orig size: " << sz << ", warped size: " << roi << "\n";

This is what I get: orig size: [720 x 1280], warped size: [789 x 1020 from (-704, 372)]

Does this mean the warped photo has an origin different from (0, 0), and so I need to offset the points based on that?

2017-01-24 18:54:02 -0600 received badge Editor
2017-01-24 18:53:39 -0600 asked a question Get position of source points in destination after stitching

Hi all,

I have some regions of interest in my source images which I would like to visualise in the final panorama after stitching.

I have tried drawing them on the source images before stitching, but they can get covered in the overlap between images and disappear.

So, I tried to warp the points using the warpPoint function so that I could re-draw these ROIs on the final image.

Basically, I am going through the stitching_detailed example and trying to warp the points before the images are warped in the example:

    // Build the per-image intrinsics in floating point
    cameras[img_idx].K().convertTo(K, CV_32F);

    // Warp the ROI corner points with the same K and R used for the image
    p1Warp = warper->warpPoint(p1, K, cameras[img_idx].R);
    p2Warp = warper->warpPoint(p2, K, cameras[img_idx].R);
    cout << "p1 warp: " << p1Warp << " p2 warp: " << p2Warp << "\n";

    // Warp the current image
    warper->warp(img, K, cameras[img_idx].R, INTER_LINEAR, BORDER_REFLECT, img_warped);

This also does not seem to work as I am getting negative coordinates for my points.
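
Negative coordinates are expected here: warpPoint returns coordinates in the warper's global plane, not in any single image. A minimal sketch of one way to map a source point into final panorama coordinates, assuming the variable names from the stitching_detailed example (warper, cameras, num_images, full_img_sizes, ignoring any compose-scale resizing) and using cv::detail::resultRoi to find the composite's origin; toPano is a hypothetical helper, not part of the example:

    // Collect every image's warped ROI, as the example does before
    // compositing, so we know where the panorama's origin lands.
    std::vector<cv::Point> corners(num_images);
    std::vector<cv::Size> sizes(num_images);
    for (int i = 0; i < num_images; ++i)
    {
        cv::Mat K;
        cameras[i].K().convertTo(K, CV_32F);
        cv::Rect roi = warper->warpRoi(full_img_sizes[i], K, cameras[i].R);
        corners[i] = roi.tl();
        sizes[i] = roi.size();
    }
    // Top-left corner of the whole composite in the warper's plane
    cv::Point pano_tl = cv::detail::resultRoi(corners, sizes).tl();

    // Hypothetical helper: warp a source-image point and shift it into
    // panorama pixel coordinates by subtracting the composite origin.
    auto toPano = [&](const cv::Point2f& p, const cv::Mat& K, const cv::Mat& R)
    {
        return warper->warpPoint(p, K, R)
               - cv::Point2f((float)pano_tl.x, (float)pano_tl.y);
    };

Note that seam finding and blending only decide which image contributes in the overlap regions; they do not move pixels, so coordinates computed this way should remain valid in the final panorama.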

TL;DR: How can I find the position of a region of interest in the destination image after stitching?

Thanks!