Yes, it is certainly possible. I'm not quite sure what you want to do with the two points or what measurements you're trying to get, but the first step is getting the depth or 3D coordinates of points in the photos. To do that you need to find the correspondence between one image and the other. Either dense or sparse optical flow can be useful here: sparse if you only need a few specific points, dense if you need the whole image.
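For the sparse case, a rough Python sketch would look something like the following. The file names, point coordinates, and parameters are all placeholders, just to show the shape of the call:

```
import cv2
import numpy as np

# The two photos (placeholder file names), converted to grayscale for LK.
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# The specific points you care about in the first image: shape (N, 1, 2), float32.
pts1 = np.array([[[320.0, 240.0]],
                 [[400.0, 260.0]]], dtype=np.float32)

# Track those points into the second image with pyramidal Lucas-Kanade flow.
pts2, status, err = cv2.calcOpticalFlowPyrLK(img1, img2, pts1, None,
                                             winSize=(21, 21), maxLevel=3)

# Keep only the points that were actually found in the second image.
found = status.ravel() == 1
matches1 = pts1[found].reshape(-1, 2)
matches2 = pts2[found].reshape(-1, 2)
```

For the dense case you'd use something like cv2.calcOpticalFlowFarneback instead, which gives you a per-pixel flow field rather than a handful of tracked points.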
This is similar to a stereo image problem, but it might not be best to treat it that way. Stereo algorithms make assumptions that might not be valid in this case, and you'd be solving for information you already have. That route is certainly available, though (I don't know too much about it); the place to start is the OpenCV stereo module and the calib3d module.
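If you do try the stereo route, the SGBM matcher in calib3d is a typical entry point. This is only a sketch: it assumes a rectified image pair, and the file names and parameters are placeholders, not tuned values:

```
import cv2

# A rectified left/right pair (placeholder file names).
left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# Semi-global block matcher from calib3d.
matcher = cv2.StereoSGBM_create(minDisparity=0,
                                numDisparities=64,  # must be a multiple of 16
                                blockSize=5)

# compute() returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype("float32") / 16.0

# Depth per pixel (where disparity > 0): depth = focal_length * baseline / disparity
```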
Now, OpenCV doesn't include any algorithms that use the information you have directly, but I've been working on a project (only partially done) that does. The mapping3d module can take a set of inputs with camera position and orientation plus a 2D point in each image (as many images as you have) and give you the 3D location of that point. It can also account for different cameras and distortions if your images come from more than one camera.
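The underlying idea, if you want to try it with stock OpenCV in the meantime, is plain triangulation: build a projection matrix from each camera's known pose and intrinsics, then triangulate the matched 2D points from the optical flow step. A rough sketch, where every number below is a placeholder and not part of any real calibration:

```
import cv2
import numpy as np

# Camera intrinsics (placeholder values standing in for a real calibration).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def projection_matrix(K, R, C):
    # R is the world-to-camera rotation, C the camera position in world coords,
    # so the world-to-camera translation is t = -R @ C.
    t = -R @ C.reshape(3, 1)
    return K @ np.hstack([R, t])

# Known poses for the two shots (placeholder: second camera 1 unit to the right).
R1, C1 = np.eye(3), np.zeros(3)
R2, C2 = np.eye(3), np.array([1.0, 0.0, 0.0])

P1 = projection_matrix(K, R1, C1)
P2 = projection_matrix(K, R2, C2)

# The matched 2D point in each image (e.g. from the optical flow step), as 2x1 arrays.
pt1 = np.array([[320.0], [240.0]])
pt2 = np.array([[290.0], [240.0]])

# Triangulate; the result is homogeneous, so divide through by the last row.
X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)
X = (X_h[:3] / X_h[3]).ravel()   # 3D point in world coordinates
```

Once you have the 3D coordinates of your two points, measurements like the distance between them are just ordinary vector math.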