I have identified a quadrilateral area in a live camera feed (programmatically, I have four points in a vector representing the vertices of a rectangle or general quadrilateral). The shape is not known in advance, only that it is a polygon with 4 vertices. I would like to display an image within that area. Here is my code:
vector<Point2f> knownLiveFeedPoints; // the four corners of the quad in the feed
Mat repImage = imread("replace1.jpg");
// Corners of the replacement image, in the same order as knownLiveFeedPoints
vector<Point2f> imagePoints = { Point2f(0, 0), Point2f(repImage.cols, 0),
                                Point2f(repImage.cols, repImage.rows),
                                Point2f(0, repImage.rows) };
Mat transmix = getPerspectiveTransform(imagePoints, knownLiveFeedPoints);
warpPerspective(repImage, cameraFeed, transmix, cameraFeed.size(),
                cv::INTER_LINEAR, cv::BORDER_TRANSPARENT);
This is working fine. I would like to know what getPerspectiveTransform actually does (a sample matrix calculation would be really appreciated) and what warpPerspective does. I went through the OpenCV documentation many times but got lost in the many alternatives and generic explanations. Can I achieve exactly the same functionality using findHomography()? What would be the difference? Thanks in advance.