world coordinates to camera coordinates to pixel coordinates (cv::projectPoints)

asked 2020-03-21 13:26:55 -0500

Suom

Hello, I am trying to project a given 3D point onto the image plane.

I have a 3D point (-455, -150, 0), where x is the depth axis, z is the upward axis, and y is the horizontal one. I have roll (rotation around the front-to-back axis, x), pitch (rotation around the side-to-side axis, y), and yaw (rotation around the vertical axis, z). I also have the camera position (x, y, z) = (-50, 0, 100). So first I transform from world coordinates to camera coordinates using the extrinsic parameters:

double pi = 3.14159265358979323846;
double yp   = 0.033716827630996704 * pi / 180; // roll
double thet = 67.362312316894531   * pi / 180; // pitch
double k    = 89.7135009765625     * pi / 180; // yaw
double rotxm[9] = { 1, 0, 0,   0, cos(yp), -sin(yp),   0, sin(yp), cos(yp) };
double rotym[9] = { cos(thet), 0, sin(thet),   0, 1, 0,   -sin(thet), 0, cos(thet) };
double rotzm[9] = { cos(k), -sin(k), 0,   sin(k), cos(k), 0,   0, 0, 1 };
cv::Mat rotx(3, 3, CV_64F, rotxm);
cv::Mat roty(3, 3, CV_64F, rotym);
cv::Mat rotz(3, 3, CV_64F, rotzm);
cv::Mat rotationm = rotz * roty * rotx;                       // rotation matrix
cv::Mat mpoint3 = (cv::Mat_<double>(1, 3) << -455, -150, 0);  // the 3D point, as a row vector
mpoint3 = mpoint3 * rotationm;                                // rotation
cv::Mat position = (cv::Mat_<double>(1, 3) << -50, 0, 100);   // the camera position
mpoint3 = mpoint3 - position;                                 // translation

Now I want to move from camera coordinates to image coordinates. The first solution, as I read from some sources, was:

Mat myimagepoint3 = mpoint3 * mycameraMatrix;

This didn't work, and I believe that is normal.

The second solution was:

double fx  = mycameraMatrix.at<double>(0, 0);
double fy  = mycameraMatrix.at<double>(1, 1);
double cx1 = mycameraMatrix.at<double>(0, 2);
double cy1 = mycameraMatrix.at<double>(1, 2);
double xt = mpoint3.at<double>(0) / mpoint3.at<double>(2); // perspective divide by z
double yt = mpoint3.at<double>(1) / mpoint3.at<double>(2);
double u = xt * fx + cx1;
double v = yt * fy + cy1;

but that didn't work either.

So now I tried the OpenCV function cv::fisheye::projectPoints (from world to image coordinates):

cv::Mat recv2;
cv::Rodrigues(rotationm, recv2);
// inputpoints: a vector containing one point, the 3D world coordinate of the point
// outputpoints: a vector to store the output point
cv::fisheye::projectPoints(inputpoints, outputpoints, recv2, position, mycameraMatrix, mydiscoff);

but it didn't work. As I read from the documentation, this function should give the 2D image position of a 3D point, or am I wrong?

By "didn't work" I mean: I know where the point should appear in the image, but when I draw it, it always lands somewhere else (not even close); sometimes I even get negative values.

Note: there are no syntax errors or exceptions, but I may have made typos while writing the code here. Can anyone suggest what I am doing wrong?




cv::fisheye:: -- that's really ONLY for fisheye lenses, and you have to do a fisheye calibration beforehand

berak ( 2020-03-24 03:45:42 -0500 )

I already did the calibration, and I am working with fisheye lenses.

Suom ( 2020-03-24 05:58:46 -0500 )