Take a look at THIS unfinished contrib module. It's far enough along to have what you're asking for.
You pass in the 2D image point from each image, along with the camera matrix, distortion coefficients, rvec, and tvec, and you get back the 3D location of the point.
I should really add some sample code to the readme now that I have it working, but for now here's a sample.
vector<Mat> localt;          // vector of tvecs, one per image
vector<Mat> localr;          // vector of rvecs, one per image
vector<Point2f> trackingPts; // location of the point of interest in each image
Mat cameraMatrix;            // for a single-camera application you only need one camera matrix; for multiple cameras use vector<Mat>, one per image
Mat distMatrix;              // for a single-camera application you only need one set of distortion coefficients; for multiple cameras use vector<Mat>, one per image
Mat state;                   // output of the calculation: the 3D position
Mat cov;                     // optional output: uncertainty covariance
mapping3d::calcPosition(localt, localr, trackingPts, cameraMatrix, distMatrix, state, cov);