I couldn't fit this in a comment, so I'm posting it as an answer even though it is really a question.
I calibrated my camera and obtained the cameraMatrix and distortion coefficients. I believe they are correct, because when I undistort my image it looks right.
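For reference, this is roughly how I check it (a minimal sketch; "calibration.yml" and "frame.png" are placeholder names for my own files):

```cpp
#include <opencv2/opencv.hpp>

// Quick sanity check of the calibration: undistort a frame and look at it.
cv::Mat cameraMatrix, distCoeffs;
cv::FileStorage fs("calibration.yml", cv::FileStorage::READ);
fs["cameraMatrix"] >> cameraMatrix;
fs["distCoeffs"]   >> distCoeffs;

cv::Mat frame = cv::imread("frame.png");
cv::Mat undistorted;
cv::undistort(frame, undistorted, cameraMatrix, distCoeffs);
// Straight lines in the scene should now look straight in "undistorted".
```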
I'm attempting to transform image coordinates to plane coordinates in the following setup:
Each black square is one corner of my calibrated surface. I know the image coordinates of each of these squares; in my current calibration they are:
TopLeft: [170, 243]
TopRight: [402, 238]
BottomLeft: [82, 383]
BottomRight: [513, 346]
Now I want a new coordinate system in which:
TopLeft: [0, 0]
TopRight: [0, 1]
BottomLeft: [1, 0]
BottomRight: [1, 1]
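In code, these correspondences look roughly like this (a sketch; I treat the new coordinates as X/Y on a plane and set Z = 0 for all four corners):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// 2D image coordinates of the four corners (pixels).
std::vector<cv::Point2f> imagePoints = {
    {170, 243},   // TopLeft
    {402, 238},   // TopRight
    { 82, 383},   // BottomLeft
    {513, 346}    // BottomRight
};

// Corresponding 3D points in the new coordinate system, lying on the Z = 0 plane.
std::vector<cv::Point3f> objectPoints = {
    {0, 0, 0},    // TopLeft
    {0, 1, 0},    // TopRight
    {1, 0, 0},    // BottomLeft
    {1, 1, 0}     // BottomRight
};
```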
I then want to detect objects inside this area and transform their image coordinates into my [0,0] - [1,1] system.
I used these four corners as input for the cv::solvePnP function and then used cv::Rodrigues to obtain the rotation matrix. With my current calibration, these are my results:
translation vec:
[-0.6335161884885361;
0.0327718985712979;
2.090753021694066]
Rotation Mat:
[0.994295199236491, 0.1037446225804736, 0.02478124413546758;
-0.05350435092326501, 0.2841225297901329, 0.9572939321326209;
0.09227318791256028, -0.9531586653602274, 0.2880524560582033]
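The calls themselves look roughly like this (a sketch, using the imagePoints and objectPoints vectors from above and the cameraMatrix/distCoeffs from my calibration):

```cpp
// Estimate the pose of the plane from the four corner correspondences.
cv::Mat rvec, tvec;
cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

// Convert the rotation vector into a 3x3 rotation matrix.
cv::Mat rotationMat;
cv::Rodrigues(rvec, rotationMat);

std::cout << "translation vec:\n" << tvec        << std::endl;
std::cout << "Rotation Mat:\n"    << rotationMat << std::endl;
```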
I would think this is enough to transform any image point inside that area into [0,0] - [1,1] coordinates. To test it, I tried to run the calculation on the image points I already know (the corners).
If I use these matrices to transform the TopLeft corner [170, 243], I should get [0, 0] as the result. But that is not what I am getting, so I am obviously missing something in the process, since I know this should be possible.
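For completeness, this is the kind of computation I am trying to do (a sketch; backProjectToPlane is just a name I made up for this test, and I am assuming the missing piece is the unknown scale factor: a pixel only fixes a ray, so the scale has to be chosen so that the reconstructed point lands back on the Z = 0 plane of the corners):

```cpp
// Back-project an image point onto the Z = 0 plane of the new coordinate system.
// Assumes cameraMatrix, rotationMat and tvec are CV_64F, as produced by
// cv::calibrateCamera / cv::solvePnP.
cv::Point2f backProjectToPlane(const cv::Point2f& imgPt,
                               const cv::Mat& cameraMatrix,
                               const cv::Mat& rotationMat,
                               const cv::Mat& tvec)
{
    // Homogeneous image point.
    cv::Mat uv = (cv::Mat_<double>(3, 1) << imgPt.x, imgPt.y, 1.0);

    cv::Mat invK = cameraMatrix.inv();
    cv::Mat invR = rotationMat.inv();

    cv::Mat lhs = invR * invK * uv;   // ray direction term in plane coordinates
    cv::Mat rhs = invR * tvec;        // translation term in plane coordinates

    // Choose s so that the Z component of (s * lhs - rhs) is zero,
    // i.e. the reconstructed point lies on the plane of the four corners.
    double s = rhs.at<double>(2, 0) / lhs.at<double>(2, 0);

    cv::Mat world = s * lhs - rhs;    // [X, Y, ~0] in plane coordinates
    return cv::Point2f(static_cast<float>(world.at<double>(0, 0)),
                       static_cast<float>(world.at<double>(1, 0)));
}
```

If this is right, backProjectToPlane(cv::Point2f(170, 243), cameraMatrix, rotationMat, tvec) should come out close to [0, 0] for the TopLeft corner.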