Strange result from perspective transform [closed]
I have an Xcode project where I extract features from a reference image and match them against another image. I don't use the standard OpenCV functions for extraction and matching, but my code is heavily based on them (I use FAST detection and SIFT description and match the descriptors through a KD-tree).
As far as I can tell, my extraction and matching work well. Now I want to draw a rectangle (with cv::line) around the detected object using a perspective transform, but that is not working. The code is based on this tutorial.
Before calculating the transformation I do outlier removal through keypoint orientation (1), Line_Check (2), and RANSAC (3).
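Roughly, the orientation check (1) keeps only those pairs whose keypoint angles agree; this is only a sketch with assumed container names and an assumed tolerance, not my exact code:

// Sketch of the orientation check (1): discard pairs whose keypoint
// angles differ too much. The 20 degree tolerance is an assumption.
std::vector<cv::KeyPoint> refKept, matchKept;
for (size_t i = 0; i < reference.size(); i++) {
    float d = std::fabs(reference[i].angle - match[i].angle);
    d = std::min(d, 360.0f - d);          // handle wrap-around at 360 degrees
    if (d < 20.0f) {
        refKept.push_back(reference[i]);
        matchKept.push_back(match[i]);
    }
}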
In the RANSAC outlier removal step I get the homography and then try to use OpenCV's perspectiveTransform() with the object corners and the scene corners. Here is my code:
double ransacThreshold = 0;
int result = 0;
cv::Mat mask;
std::vector<cv::Point2f> points1, points2;

if (reference.size() > 0 && match.size() > 0) {
    // Collect the coordinates of the matched keypoints from both images.
    for (int i = 0; i < reference.size(); i++) {
        points1.push_back(reference[i].pt);
        points2.push_back(match[i].pt);
    }

    cv::Mat h = cv::findHomography(points1, points2, CV_RANSAC, ransacThreshold, mask);

    // Get the resulting inliers from the RANSAC mask.
    for (int i = 0; i < mask.rows; i++) {
        unsigned int inlier = (unsigned int) mask.at<uchar>(i);
        if (inlier) {
            //std::cout << "Reference Point: " << points1[i] << std::endl;
            //std::cout << "Matched Point: " << points2[i] << std::endl;
            result++;
            ransacs.push_back(match[i].pt);
        }
    }

    cv::perspectiveTransform(obj_corners, scene_corners, h);
}
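obj_corners and scene_corners are not shown above; they are built from the corners of the reference image as in the tutorial, roughly like this (sketch; img_object is the full-size reference image):

// Corners of the reference image, in the same order as in the tutorial.
std::vector<cv::Point2f> obj_corners(4);
obj_corners[0] = cv::Point2f(0, 0);
obj_corners[1] = cv::Point2f((float)img_object.cols, 0);
obj_corners[2] = cv::Point2f((float)img_object.cols, (float)img_object.rows);
obj_corners[3] = cv::Point2f(0, (float)img_object.rows);
std::vector<cv::Point2f> scene_corners(4);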
Here is the usage of cv::line:
cv::line(matchImg1, scene_corners[0] + cv::Point2f(img_object.cols, 0), scene_corners[1] + cv::Point2f(img_object.cols, 0), cv::Scalar(0, 255, 0), 4);
cv::line(matchImg1, scene_corners[1] + cv::Point2f(img_object.cols, 0), scene_corners[2] + cv::Point2f(img_object.cols, 0), cv::Scalar(0, 255, 0), 4);
cv::line(matchImg1, scene_corners[2] + cv::Point2f(img_object.cols, 0), scene_corners[3] + cv::Point2f(img_object.cols, 0), cv::Scalar(0, 255, 0), 4);
cv::line(matchImg1, scene_corners[3] + cv::Point2f(img_object.cols, 0), scene_corners[0] + cv::Point2f(img_object.cols, 0), cv::Scalar(0, 255, 0), 4);
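For context, matchImg1 is a side-by-side composite of the reference and scene images (like the drawMatches output in the tutorial), which is why every scene corner is shifted by cv::Point2f(img_object.cols, 0). A minimal way to build such a composite, assuming both images have the same height and type and img_scene is the second image:

// Reference image on the left, scene image on the right.
cv::Mat matchImg1;
cv::hconcat(img_object, img_scene, matchImg1);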
And here is the result image; I also drew the matched keypoints (red circles) into it:
Update: here are the H matrix and the scene corners after the transformation:
Homography
[0.9926559777122818, 0.003679393359410963, -0.3786763959271373;
-0.005592576716175476, 0.9984426374264187, 1.010771765484227;
-1.527591055381564e-05, 3.985719175127628e-06, 1]
scene_corners
[-0.378676, 1.01077]
[1032.25, -4.79097]
[1031.8, 1029.59]
[3.37525, 1019.26]
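To double-check these numbers, H can be applied to one corner by hand. Assuming the top-right object corner is (1024, 0), i.e. obj_corners comes from the full 1024x1024 reference, H maps it to roughly (1032.25, -4.79), which is exactly scene_corners[1] above:

// Apply H to the corner (1024, 0) in homogeneous coordinates.
cv::Mat p = (cv::Mat_<double>(3, 1) << 1024.0, 0.0, 1.0);
cv::Mat q = h * p;
double x = q.at<double>(0, 0) / q.at<double>(2, 0);   // ~ 1032.25
double y = q.at<double>(1, 0) / q.at<double>(2, 0);   // ~ -4.79

Since H is close to the identity, the transformed corners land almost on the object corners themselves.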
Well, I found the fault: I was taking the obj_corners from the initial full-size reference image, which is wrong. In my detection and description logic I extract the keypoints from different scales of the image, so when my matches all lie in the 511x511 version and I take the corners from the 1024x1024 version, the transformation comes out wrong.
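What I changed, roughly: take the corners from the image size the keypoints were actually extracted at, or equivalently scale the full-size corners down. A sketch, with scaleFactor as an assumed name:

// Use corners that match the scale the keypoints were extracted at
// (511x511 here) instead of the full 1024x1024 reference image.
float scaleFactor = 511.0f / 1024.0f;
std::vector<cv::Point2f> obj_corners(4);
obj_corners[0] = cv::Point2f(0, 0);
obj_corners[1] = cv::Point2f(img_object.cols * scaleFactor, 0);
obj_corners[2] = cv::Point2f(img_object.cols * scaleFactor, img_object.rows * scaleFactor);
obj_corners[3] = cv::Point2f(0, img_object.rows * scaleFactor);
cv::perspectiveTransform(obj_corners, scene_corners, h);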