I have a set of data that is as follows:
std::vector<cv::Point3d> points3d; //3d points in world space
std::vector<cv::Point2d> points2d; //2d points in a camera view
cv::Mat intrinsics; // camera intrinsics
cv::Mat dist; //distortion parameters.
In practice this is a set of white circular markers on a black wall (whose 3D positions I know), plus some camera frames of the markers/wall.
What I want to do is use a PnP solve to localize the camera. However, because the scene has so few image features, I cannot use feature matching or descriptors as I normally would.
How can I establish the 2D-3D correspondences in this case? I was thinking something like:
For every combination of three markers in the camera view, run P3P to get a candidate pose, reproject the 3D points to 2D, and measure the reprojection error; repeat until the error is small enough that we can assume we have the correct pose.
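Here is a rough, untested sketch of what I mean, using OpenCV's cv::solveP3P and cv::projectPoints (the helper names scorePose and bruteForceP3P are just placeholders I made up):

// Rough, untested sketch of the brute-force idea above: for every
// 3-combination of 3D markers and every ordered triple of 2D detections,
// solve P3P, then score each candidate pose by reprojecting ALL 3D markers
// and summing the distance from each reprojection to its nearest detection.
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// Reprojection-based score for one candidate pose (lower is better).
static double scorePose(const std::vector<cv::Point3d>& points3d,
                        const std::vector<cv::Point2d>& points2d,
                        const cv::Mat& rvec, const cv::Mat& tvec,
                        const cv::Mat& intrinsics, const cv::Mat& dist)
{
    std::vector<cv::Point2d> projected;
    cv::projectPoints(points3d, rvec, tvec, intrinsics, dist, projected);

    double total = 0.0;
    for (const auto& p : projected)
    {
        // Distance from this reprojection to the nearest detected 2D point.
        double nearest = std::numeric_limits<double>::max();
        for (const auto& q : points2d)
            nearest = std::min(nearest, std::hypot(p.x - q.x, p.y - q.y));
        total += nearest;
    }
    return total;
}

// Exhaustive search over candidate 3-point correspondences. This is
// O(N^3 * M^3) P3P solves, so it is only feasible for small marker counts.
static bool bruteForceP3P(const std::vector<cv::Point3d>& points3d,
                          const std::vector<cv::Point2d>& points2d,
                          const cv::Mat& intrinsics, const cv::Mat& dist,
                          cv::Mat& bestRvec, cv::Mat& bestTvec)
{
    double bestScore = std::numeric_limits<double>::max();
    const size_t N = points3d.size(), M = points2d.size();

    // Unordered triple of 3D markers...
    for (size_t a = 0; a < N; ++a)
    for (size_t b = a + 1; b < N; ++b)
    for (size_t c = b + 1; c < N; ++c)
    {
        const std::vector<cv::Point3d> obj = { points3d[a], points3d[b], points3d[c] };

        // ...matched against every ordered triple of 2D detections.
        for (size_t i = 0; i < M; ++i)
        for (size_t j = 0; j < M; ++j)
        for (size_t k = 0; k < M; ++k)
        {
            if (i == j || j == k || i == k)
                continue;
            const std::vector<cv::Point2d> img = { points2d[i], points2d[j], points2d[k] };

            std::vector<cv::Mat> rvecs, tvecs;
            const int solutions = cv::solveP3P(obj, img, intrinsics, dist,
                                               rvecs, tvecs, cv::SOLVEPNP_P3P);
            for (int s = 0; s < solutions; ++s)
            {
                const double score = scorePose(points3d, points2d, rvecs[s], tvecs[s],
                                               intrinsics, dist);
                if (score < bestScore)
                {
                    bestScore = score;
                    bestRvec = rvecs[s].clone();
                    bestTvec = tvecs[s].clone();
                }
            }
        }
    }
    return bestScore < std::numeric_limits<double>::max();
}

Even for a handful of markers that is a lot of P3P solves.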
Is there a better way? Can I use RANSAC to establish the correspondences without giving it any prior matches? Can I use KNN somehow?
Thank you.