I want to get the transformation from the global coordinate system to the coordinate system of the camera image. I have a stationary camera pointing at the ground at an angle of roughly 20 degrees, and I have already obtained the camera's intrinsic and distortion parameters.

My current setup is as follows. I placed the camera at the middle of one edge of a chessboard (my (0,0) point in the global coordinate system) and measured the positions of the chessboard intersections in mm. I then used cv::findChessboardCorners to find those corners in the image, passed them to cv::solvePnP to obtain rvec and tvec, and built the transformation matrix from rvec and tvec.
Mat cameraMatrix = (Mat_<float>(3,3) <<
    715.18604574311325, 0.0, 319.5,
    0.0, 715.18604574311325, 239.5,
    0.0, 0.0, 1.0);
Mat distCoeffs = (Mat_<float>(5,1) <<
    -0.013535583817766943, 0.10657613007692497, 0.0, 0.0, -1.2272218410276732);
vector<Point2f> pointBuf;
vector<Point3f> boardPoints;
Mat rvec, tvec, R; // R will hold the 3x3 rotation matrix
bool found;
//...
//code for declaring intersection coordinates (in mm) in the global coordinate system
//...
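// (for illustration only: if the squares were, say, 25 mm and laid out along the
//  global X and Y axes with Z = 0, the declaration could look like
//      for (int i = 0; i < size.height; i++)
//          for (int j = 0; j < size.width; j++)
//              boardPoints.push_back(Point3f(j * 25.0f, i * 25.0f, 0.0f));
//  my real values are the distances I measured by hand, in mm)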
found = cv::findChessboardCorners(source, size, pointBuf);
if (found) {
    solvePnP(boardPoints, pointBuf, cameraMatrix, distCoeffs, rvec, tvec, false);
    Rodrigues(rvec, R);   // 3x3 rotation matrix from the rotation vector
    R = R.t();            // transpose the rotation
    tvec = -R * tvec;     // tvec becomes -R^T * tvec
    Mat T = cv::Mat::eye(4, 4, R.type());
    T(cv::Range(0,3), cv::Range(0,3)) = R * 1;     // top-left 3x3 block = R
    T(cv::Range(0,3), cv::Range(3,4)) = tvec * 1;  // last column = tvec
}
Am I correct in assuming that if I simply multiply a 4-by-1 vector in global coordinates, for instance
Mat p1 = (Mat_<float>(4, 1) << 100, 200, 0, 1); // the units are mm
by the matrix T, I get the corresponding x,y coordinates on the image plane, in pixels?
Mat result = T * p1;
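To make it concrete what I am after: for that same point, the pixel coordinates I expect should match what a direct call to cv::projectPoints gives when fed rvec and tvec exactly as solvePnP returned them (i.e. before the tvec = -R * tvec line). A minimal sketch, with variable names made up just for the example:
vector<Point3f> worldPts { Point3f(100.0f, 200.0f, 0.0f) }; // the same point, in mm
vector<Point2f> imagePts;
projectPoints(worldPts, rvec, tvec, cameraMatrix, distCoeffs, imagePts);
// imagePts[0] should be the pixel position of the point (100, 200, 0)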
The results I get with the current code are wrong; I just don't know whether I missed something, got the units wrong, or whether my code is simply incorrect.