You need to pass the four pixel coordinates of the region in your original image and the four corresponding pixel coordinates in your goal image. The second (destination) set could look something like this:
vector<Point2f> training_corners;                        // corners in the goal (unwarped) image
training_corners.push_back(Point2f(0, 0));               // top-left
training_corners.push_back(Point2f(rgb.cols, 0));        // top-right
training_corners.push_back(Point2f(rgb.cols, rgb.rows)); // bottom-right
training_corners.push_back(Point2f(0, rgb.rows));        // bottom-left
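The first set, the corner positions as measured or detected in your original image (measured_pixels below), has to come from your own detection step; purely as an illustration with placeholder values, it is built the same way and must use the same point order:
vector<Point2f> measured_pixels;                   // corners detected in the original image (example values only)
measured_pixels.push_back(Point2f(112, 48));       // top-left
measured_pixels.push_back(Point2f(503, 62));       // top-right
measured_pixels.push_back(Point2f(491, 387));      // bottom-right
measured_pixels.push_back(Point2f(98, 371));       // bottom-left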
You then compute the transformation with findHomography:
Mat H2 = findHomography(measured_pixels, training_corners);
Then you use warpPerspective:
warpPerspective(rgb, unwarped_image, H2, cv::Size(rgb.cols, rgb.rows));
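Putting it all together, here is a minimal self-contained sketch; the file names and the corner values in measured_pixels are placeholders, and in practice you would fill measured_pixels from your own corner detection:

#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

using namespace cv;
using std::vector;

int main()
{
    Mat rgb = imread("input.jpg");                     // placeholder input image

    vector<Point2f> measured_pixels;                   // corners found in the original image (placeholder values)
    measured_pixels.push_back(Point2f(112, 48));
    measured_pixels.push_back(Point2f(503, 62));
    measured_pixels.push_back(Point2f(491, 387));
    measured_pixels.push_back(Point2f(98, 371));

    vector<Point2f> training_corners;                  // corresponding corners in the goal image, same order
    training_corners.push_back(Point2f(0, 0));
    training_corners.push_back(Point2f(rgb.cols, 0));
    training_corners.push_back(Point2f(rgb.cols, rgb.rows));
    training_corners.push_back(Point2f(0, rgb.rows));

    // Homography that maps measured_pixels onto training_corners
    Mat H2 = findHomography(measured_pixels, training_corners);

    // Warp the original image into the goal frame
    Mat unwarped_image;
    warpPerspective(rgb, unwarped_image, H2, Size(rgb.cols, rgb.rows));

    imwrite("unwarped.jpg", unwarped_image);           // placeholder output file
    return 0;
}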