One way to do it would be:

Part 1: Use solvePnP with your known 3D point locations and their image coordinates in one of your two cameras, say cameraA. This gives you an SE3 (a rigid-body transform, i.e. a rotation + translation) from your desired user-defined coordinate system into the coordinate system of cameraA.
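A minimal sketch of Part 1 in Python with OpenCV. The point arrays, intrinsics `K_A`, and zero distortion are placeholder values for illustration; substitute your own measured correspondences and calibration:

```python
import numpy as np
import cv2

# Hypothetical example data: six known 3D points in the user-defined frame
# and their measured pixel positions in cameraA's image.
object_points = np.array([
    [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0],
    [0.0, 1.0, 0.0], [0.5, 0.5, 0.5], [0.0, 0.5, 1.0],
], dtype=np.float64)
image_points = np.array([
    [320.0, 240.0], [480.0, 250.0], [470.0, 390.0],
    [315.0, 380.0], [400.0, 300.0], [330.0, 290.0],
], dtype=np.float64)
K_A = np.array([[800.0,   0.0, 320.0],     # cameraA intrinsics (assumed)
                [  0.0, 800.0, 240.0],
                [  0.0,   0.0,   1.0]])
dist_A = np.zeros(5)                        # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K_A, dist_A)

# Assemble the SE3 (4x4 rigid-body transform) that maps points from the
# user-defined frame into cameraA's frame.
R_A, _ = cv2.Rodrigues(rvec)
T_camA_from_user = np.eye(4)
T_camA_from_user[:3, :3] = R_A
T_camA_from_user[:3, 3] = tvec.ravel()
```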
Part 2: When you do your triangulation, you'll get 3D points in the coordinate system of either cameraA or cameraB. If it's cameraB, use the stereo calibration data to transform them into cameraA's frame. Once the triangulated 3D points are in cameraA's frame, apply the inverse of the Part 1 transform to express them in your user-defined coordinate system, which is the result you want. See the sketch below.
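Continuing the sketch above, a hedged version of Part 2. The triangulated points and the stereo `R_BA`/`T_BA` here are dummy values; I'm assuming a calibration convention where they map cameraA coordinates into cameraB coordinates (as cv2.stereoCalibrate between cameraA and cameraB would give you), so double-check the direction your calibration actually uses:

```python
# Hypothetical triangulated 3D points, expressed in cameraB's frame.
pts_camB = np.array([[0.2, -0.1, 2.5],
                     [0.4,  0.3, 3.1]])

R_BA = np.eye(3)                  # rotation from stereo calibration (assumed)
T_BA = np.array([0.1, 0.0, 0.0])  # translation from stereo calibration (assumed)
T_camB_from_camA = np.eye(4)
T_camB_from_camA[:3, :3] = R_BA
T_camB_from_camA[:3, 3] = T_BA

# Move the points from cameraB's frame into cameraA's frame...
T_camA_from_camB = np.linalg.inv(T_camB_from_camA)

# ...then apply the inverse of the Part 1 transform to land in the
# user-defined coordinate system.
T_user_from_camA = np.linalg.inv(T_camA_from_user)

# Homogeneous coordinates so the 4x4 transforms can be chained directly.
pts_h = np.hstack([pts_camB, np.ones((len(pts_camB), 1))])
pts_user = (T_user_from_camA @ T_camA_from_camB @ pts_h.T).T[:, :3]
print(pts_user)
```

If your triangulation already yields points in cameraA's frame, skip the cameraB-to-cameraA step and apply `T_user_from_camA` directly.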