# Stereo solvePnP routine

Hi all!

I have a question on applied use of OpenCV, I hope it's ok to ask here.

If we have:

• A calibrated stereo camera pair (mounted to a robot arm)
• Corner markers (think 1×1 checkerboards) in the scene, each with a known 3D position, which we can find to subpixel precision.

Is there a routine to find the extrinsics of the stereo pair relative to the scene?

(the extrinsics of the two cameras relative to each other are already known)

With a single camera I could use solvePnP / solvePnPRansac(...), but this wouldn't make the most of the stereo pair.

Is there a routine which could minimize reprojection error across both cameras simultaneously, to find the rotation and translation of the rig relative to the scene? (I presume a stereo SLAM system would use such a routine.)

Template for the function could be:

```cpp
stereoSolvePnp( vector<cv::Point3f> objectPoints1
              , vector<cv::Point2f> imagePoints1
              , vector<cv::Point3f> objectPoints2
              , vector<cv::Point2f> imagePoints2
              , cv::Mat cameraMatrix1               // from calibrateCamera
              , cv::Mat distortionCoefficients1     // from calibrateCamera
              , cv::Mat cameraMatrix2               // from calibrateCamera
              , cv::Mat distortionCoefficients2     // from calibrateCamera
              , cv::Mat translationCamera1ToCamera2 // from stereoCalibrate
              , cv::Mat rotationCamera1ToCamera2    // from stereoCalibrate

              , cv::Mat outputTranslation // object to camera1
              , cv::Mat outputRotation    // object to camera1
              );
```


Bonus points for either:

1. Using extrinsics from previous frames to encourage smooth, filtered motion.
2. A (non-realtime) routine for refining the scene data (3D position of the features).

NB: Presuming I know where all these markers are in 3D space, I can use the robot arm pose to estimate where they will appear in the image space of each camera, and then use cornerSubPix(...) to find them accurately.

Thank you

Elliot

--EDIT--

I believe the pseudocode could be:

```cpp
// refine extrinsic parameters using an iterative (Levenberg-Marquardt) solver
CvLevMarq solver(6 /* parameters: 3 rotation + 3 translation */);

while (solver.update(parameters, error, jacobian) != COMPLETED)
{
    rotationObjectToCamera1    = parameters[0..2];
    translationObjectToCamera1 = parameters[3..5];

    error = 0;

    cvProjectPoints2( objectPoints
                    , rotationObjectToCamera1
                    , translationObjectToCamera1
                    , cameraMatrix1
                    , distortionCoefficients1
                    , calculatedImagePoints1, jacobian);
    error += distance(imagePoints1 - calculatedImagePoints1);

    // chain the object-to-camera1 pose with the fixed stereo extrinsics
    rotationObjectToCamera2    = f(rotationObjectToCamera1, translationObjectToCamera1, rotationStereoPair, translationStereoPair);
    translationObjectToCamera2 = g(rotationObjectToCamera1, translationObjectToCamera1, rotationStereoPair, translationStereoPair);

    cvProjectPoints2( objectPoints
                    , rotationObjectToCamera2
                    , translationObjectToCamera2
                    , cameraMatrix2
                    , distortionCoefficients2
                    , calculatedImagePoints2, jacobian);
    error += distance(imagePoints2 - calculatedImagePoints2);
}
```


The next step is to find the functions f and g (i.e. how to chain together rotations and translations). Perhaps I'll post that as a separate question here.

(more detailed notes at https://paper.dropbox.com/doc/KC35-st... )



This would be the stereoCalibrate function. It doesn't get bonus points, but it does get you what you need.


I'm sorry, but it doesn't get me what I want :). I wish there were a way to hack stereoCalibrate into doing what I need (e.g. if there were flags like USE_STEREO_EXTRINSICS_GUESS | FIX_STEREO_EXTRINSICS, and it output the extrinsics to the object rather than just between the cameras). Or perhaps it wasn't clear from my question that I want the extrinsics of the stereo pair RELATIVE TO THE SCENE, not relative to each other. Thank you

( 2017-03-07 19:43:30 -0500 )

Ah, I see what you mean. The math on this would actually be fairly complicated. Also, I apologize; I thought stereoCalibrate output a vector of rvecs and tvecs along with the R and T between the cameras.

Your best bet is to modify the stereoCalibrate function. It's old C code, though, so it'll be a lot of work. You shouldn't need to add anything but output variables; the rest is just deleting code that doesn't do what you need.

( 2017-03-07 22:32:05 -0500 )

Current thinking: adding another line after https://github.com/opencv/opencv/blob... to include the error from the second camera would result in a simultaneous solve. I would just need to be able to transform _r and _t by the R and T (or E, F) from stereoCalibrate.

( 2017-03-15 20:52:20 -0500 )

Take a look at composeRT.

( 2017-03-20 17:50:50 -0500 )