Online estimation of Euler angles (camera pose) using camera images and the OpenCV library
I'm working on an Android application and I need to estimate the camera's 3D rotation online, using images from the camera and the OpenCV library. I would like to calculate the Euler angles.
I have read this and this page, and I can estimate the homography matrix like here.
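For context, my homography estimation looks roughly like this (just a sketch; referencePointsList and scenePointsList are placeholder names for the matched points from my two frames):

// Sketch: estimate the homography between matched points of two frames
// (uses org.opencv.calib3d.Calib3d and org.opencv.core.*)
MatOfPoint2f referencePoints = new MatOfPoint2f();
referencePoints.fromList(referencePointsList); // matched points from the first frame
MatOfPoint2f scenePoints = new MatOfPoint2f();
scenePoints.fromList(scenePointsList); // corresponding points from the second frame
Mat homography = Calib3d.findHomography(referencePoints, scenePoints, Calib3d.RANSAC, 3.0);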
My first question is: do I really need to know the camera intrinsic matrix from camera calibration, or is the homography matrix (the camera extrinsics) enough to estimate the Euler angles (pitch, roll, yaw)?
If the homography matrix is enough, how exactly can I do it?
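What I imagine the decomposition would look like is something along these lines (only a sketch, assuming Calib3d.decomposeHomographyMat from OpenCV 3.x, which by the way also takes the intrinsic matrix; homography and intrinsics are placeholders for my matrices, and I am not sure this is the right approach):

// Sketch: decompose a homography into candidate rotations/translations (OpenCV 3.x).
// Note that this call also needs the intrinsic camera matrix.
List<Mat> rotations = new ArrayList<Mat>();
List<Mat> translations = new ArrayList<Mat>();
List<Mat> normals = new ArrayList<Mat>();
int solutions = Calib3d.decomposeHomographyMat(homography, intrinsics, rotations, translations, normals);
// Up to 4 candidate solutions are returned; one still has to pick the physically plausible one.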
Sorry, I am a real beginner with OpenCV and I have not managed to decompose the homography "Mat" into a rotation matrix and a translation matrix as described here. How can I compute the Euler angles on Android?
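For the last step, converting a rotation matrix to Euler angles, I have something like this in mind (again only a sketch; it assumes a 3x3 rotation matrix and one particular angle convention, roll/pitch/yaw about x/y/z):

// Sketch: convert a 3x3 rotation matrix to Euler angles (radians).
private double[] rotationMatrixToEulerAngles(Mat r) {
    double r00 = r.get(0, 0)[0], r10 = r.get(1, 0)[0], r20 = r.get(2, 0)[0];
    double r21 = r.get(2, 1)[0], r22 = r.get(2, 2)[0];
    double sy = Math.sqrt(r00 * r00 + r10 * r10);
    double x, y, z;
    if (sy > 1e-6) { // regular case
        x = Math.atan2(r21, r22); // roll (about x)
        y = Math.atan2(-r20, sy); // pitch (about y)
        z = Math.atan2(r10, r00); // yaw (about z)
    } else { // near gimbal lock, yaw is not well defined
        x = Math.atan2(-r.get(1, 2)[0], r.get(1, 1)[0]);
        y = Math.atan2(-r20, sy);
        z = 0;
    }
    return new double[]{x, y, z};
}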
Below you can see my code, which uses solvePnPRansac() and decomposeProjectionMatrix() to calculate the Euler angles. But it returns only a zero vector, double[] eulerArray = {0,0,0}! Can somebody help me? What is wrong there? Thank you very much for any response!
public double[] findEulerAngles(MatOfKeyPoint keypoints1, MatOfKeyPoint keypoints2, MatOfDMatch matches){
KeyPoint[] k1 = keypoints1.toArray();
KeyPoint[] k2 = keypoints2.toArray();
List<DMatch> matchesList = matches.toList();
List<KeyPoint> referenceKeypointsList = keypoints2.toList();
List<KeyPoint> sceneKeypointsList = keypoints1.toList();
// Calculate the max and min distances between keypoints.
double maxDist = 0.0;
double minDist = Double.MAX_VALUE;
for(DMatch match : matchesList) {
double dist = match.distance;
if (dist < minDist) {
minDist = dist;
}
if (dist > maxDist) {
maxDist = dist;
}
}
// Identify "good" keypoints based on match distance.
List<Point3> goodReferencePointsList = new ArrayList<Point3>();
ArrayList<Point> goodScenePointsList = new ArrayList<Point>();
double maxGoodMatchDist = 1.75 * minDist;
for(DMatch match : matchesList) {
if (match.distance < maxGoodMatchDist) {
Point kk2 = k2[match.queryIdx].pt;
Point kk1 = k1[match.trainIdx].pt;
Point3 point3 = new Point3(kk1.x, kk1.y, 0.0);
goodReferencePointsList.add(point3);
goodScenePointsList.add(kk2);
}
}
if (goodReferencePointsList.size() < 4 || goodScenePointsList.size() < 4) {
// There are too few good points to find the pose.
return new double[]{0, 0, 0};
}
MatOfPoint3f goodReferencePoints = new MatOfPoint3f();
goodReferencePoints.fromList(goodReferencePointsList);
MatOfPoint2f goodScenePoints = new MatOfPoint2f();
goodScenePoints.fromList(goodScenePointsList);
MatOfDouble mRMat = new MatOfDouble(3, 3, CvType.CV_32F);
MatOfDouble mTVec = new MatOfDouble(3, 1, CvType.CV_32F);
//TODO: replace with the real camera intrinsic matrix from calibration
Mat intrinsics = Mat.eye(3, 3, CvType.CV_32F); // dummy camera matrix
intrinsics.put(0, 0, 400);
intrinsics.put(1, 1, 400);
intrinsics.put(0, 2, 640 / 2);
intrinsics.put(1, 2, 480 / 2);
Calib3d.solvePnPRansac(goodReferencePoints, goodScenePoints, intrinsics, new MatOfDouble(), mRMat, mTVec);
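// Note: solvePnPRansac writes a 3x1 Rodrigues rotation vector into mRMat and a 3x1 translation vector into mTVec.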
MatOfDouble rotCameraMatrix1 = new MatOfDouble(3, 1, CvType.CV_32F);
double[] rVecArray = mRMat.toArray();
// Calib3d.Rodrigues(mRMat, rotCameraMatrix1);
double[] tVecArray = mTVec.toArray();
MatOfDouble projMatrix = new MatOfDouble(3, 4, CvType.CV_32F); //projMatrix 3x4 input projection matrix P.
projMatrix.put(0, 0, rVecArray[0]);
projMatrix.put(0, 1, rVecArray[1]);
projMatrix.put(0, 2, rVecArray[2]);
projMatrix.put(0, 3, 0);
projMatrix.put(1, 0, rVecArray[3]);
projMatrix.put(1, 1, rVecArray[4]);
projMatrix.put(1, 2, rVecArray[5]);
projMatrix.put(1, 3, 0);
projMatrix.put(2, 0, rVecArray[6]);
projMatrix.put(2, 1, rVecArray[7]);
projMatrix.put(2, 2, rVecArray[8]);
projMatrix.put(2, 3, 0);
MatOfDouble cameraMatrix = new MatOfDouble(3, 3, CvType.CV_32F); //cameraMatrix Output 3x3 camera matrix K.
MatOfDouble ...
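The snippet is cut off above; the rest of the method is essentially just the remaining output Mats and the Calib3d.decomposeProjectionMatrix call. Roughly, it does the following (a sketch, not my exact code):

// Sketch: recover Euler angles from the 3x4 projection matrix built above.
// All output Mats are required by the Java binding; eulerAngles comes back as a 3x1 Mat in degrees.
Mat cameraMatrixOut = new Mat();
Mat rotMatrix = new Mat();
Mat transVect = new Mat();
Mat rotMatrixX = new Mat();
Mat rotMatrixY = new Mat();
Mat rotMatrixZ = new Mat();
Mat eulerAngles = new Mat();
Calib3d.decomposeProjectionMatrix(projMatrix, cameraMatrixOut, rotMatrix, transVect, rotMatrixX, rotMatrixY, rotMatrixZ, eulerAngles);
double[] eulerArray = new double[]{
    eulerAngles.get(0, 0)[0], // rotation about x, degrees
    eulerAngles.get(1, 0)[0], // rotation about y, degrees
    eulerAngles.get(2, 0)[0]  // rotation about z, degrees
};
return eulerArray;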