OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright OpenCV foundation, 2012-2018. Mon, 26 Nov 2018 09:38:03 -0600

creating a rotation matrix in OpenCV
http://answers.opencv.org/question/204092/creating-a-rotation-matrix-in-opencv/

I am trying to recreate a rotation matrix in OpenCV from one of its direction vectors (the z axis), but have not had any success with it. I am also not sure whether this can be done uniquely, but I was hoping to get some help from the forum.
So, initially, I try to create the rotation matrix that I want to recover. I do this as follows:
import cv2
import numpy as np

def generate_fake_opencv_calibrations(num_calibs=10):
    calibs = list()
    # A fixed fake camera matrix (fx, fy, cx, cy)
    cm = np.asarray([[300.0, 0.0, 400.0], [0.0, 300.0, 700.0], [0.0, 0.0, 1.0]])
    for i in range(num_calibs):
        image_c = np.random.rand(6, 2) * 100.0
        world_c = np.random.rand(6, 3)
        _, r, t = cv2.solvePnP(world_c, image_c, cm, None, flags=cv2.SOLVEPNP_ITERATIVE)
        # Convert the rotation vector into a rotation matrix
        r, _ = cv2.Rodrigues(r)
        calibs.append(r)
    return calibs
So, the rotation matrix has the property that its rows and columns are orthonormal with respect to each other. What I was hoping to do is recreate this rotation matrix when I only have the direction vector of the z axis.
So, for one of the calibration matrices c, I have:
z = c[:, 2]
Now I wrote a function to create the other two axes as:
def create_orthonormal_basis(v):
    v = v / np.linalg.norm(v)
    # Pick a seed axis that is not (nearly) parallel to v
    if v[0] > 0.9:
        b1 = np.asarray([0.0, 1.0, 0.0])
    else:
        b1 = np.asarray([1.0, 0.0, 0.0])
    # Remove the component along v and renormalise
    b1 -= v * np.dot(b1, v)
    b1 /= np.linalg.norm(b1)
    b2 = np.cross(v, b1)
    return b1, b2, v
I can then create the matrix as:
x, y, z = create_orthonormal_basis(z)
mat = np.asarray([[x[0], y[0], z[0]],
                  [x[1], y[1], z[1]],
                  [x[2], y[2], z[2]]])
I was expecting this matrix to map a given point to approximately the same location; however, this was not the case. For a random case, I am getting the following:
For the input matrix, given by:
[[-0.5917787  -0.69902414  0.40145141]
 [ 0.76717701 -0.64127655  0.01427625]
 [ 0.24746193  0.31643267  0.91576905]]
The output is:
[[ 0.91588032  0.          0.40145141]
 [-0.00625761  0.99987851  0.01427625]
 [-0.40140263 -0.01558747  0.91576905]]
I take a random input like:
[0.33385406 0.91243684 0.33755828]
and map it using the original and reconstructed matrices; the outputs are quite different:
Original: [-0.69986985 -0.32418012 0.68046643]
Reconstructed: [0.44128362 0.91505592 0.16089295]
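A note on why this happens: the z column only pins down two of the three rotational degrees of freedom, so the basis construction above is free to pick any spin about z for the x and y columns. The two matrices agree in their third column but differ by an in-plane rotation about that shared axis, which is why they map points differently. A runnable numpy check using the matrices above (the helper below mirrors `create_orthonormal_basis` from the question):

```python
import numpy as np

def basis_from_z(v):
    # Mirrors create_orthonormal_basis: seed axis, project out v, cross product.
    v = v / np.linalg.norm(v)
    b1 = np.asarray([0.0, 1.0, 0.0]) if v[0] > 0.9 else np.asarray([1.0, 0.0, 0.0])
    b1 = b1 - v * np.dot(b1, v)
    b1 = b1 / np.linalg.norm(b1)
    return b1, np.cross(v, b1), v

# The input matrix from the example above.
orig = np.array([[-0.5917787, -0.69902414, 0.40145141],
                 [ 0.76717701, -0.64127655, 0.01427625],
                 [ 0.24746193,  0.31643267, 0.91576905]])

x, y, z = basis_from_z(orig[:, 2])
recon = np.column_stack([x, y, z])

# Same third column...
assert np.allclose(recon[:, 2], orig[:, 2], atol=1e-5)
# ...but the two differ by an unknown in-plane rotation about z:
d = recon.T @ orig
print(np.round(d, 4))  # has the shape [[c, -s, 0], [s, c, 0], [0, 0, 1]]
```

So any reconstruction that only sees the z direction can at best recover the pose up to this unknown spin; pinning it down needs at least one more direction, e.g. the x or y column as well.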
pamparana
Mon, 26 Nov 2018 09:38:03 -0600
http://answers.opencv.org/question/204092/

OpenCV SolvePnP strange values
http://answers.opencv.org/question/176922/opencv-solvepnp-strange-values/

Hello,
I am experimenting on a project.
I use **SolvePnP** to find the rotation vector of an object.
Since the values are hard to interpret, I used 3D software to define specific values that I then try to recover with OpenCV.
I've got a plane in the center of my scene, to which I apply rotations about X, Y or Z.
In the example below, the rotations are defined as:
**x=30°
y=0°
z=30°**
I've got good values for focalLength, fov, etc.
![image description](/upfiles/1508940197829013.jpg)
As you can see, the **cv2.projectPoints** works perfectly on my image.
When I call **SolvePnP**, the **rvecs returns strange values**.
For rotation X, I've got 28.939°
For rotation Y, I've got 7.916°
For rotation Z, I've got 29.02031°
So when I try to map a plane with WebGL, I get the result in the image below (red plane)
![image description](/upfiles/15089407414127149.jpg)
**So here is my question.
Why doesn't SolvePnP return x:30°, y:0° and z:30°?
It's very strange, no?**
Do I have to use **Rodrigues** somewhere? If yes, how?
Is there a lack of precision somewhere?
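For what it's worth, the rvec that solvePnP returns is an axis-angle (Rodrigues) vector: its direction is the rotation axis and its magnitude is the angle, so its three components only coincide with per-axis angles when the rotation is about a single axis; for a combined X and Z rotation they won't read as 30/0/30. A numpy-only sketch of converting rvec to a matrix and then to Euler angles (the Rz·Ry·Rx convention below is an assumption; use whichever your 3D software uses):

```python
import numpy as np

def rodrigues_to_matrix(rvec):
    # Rodrigues' formula: R = I + sin(t)*K + (1 - cos(t))*K^2,
    # where t = |rvec| and K is the skew matrix of the unit axis.
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec).ravel() / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def matrix_to_euler_xyz(R):
    # Decompose R = Rz(yaw) @ Ry(pitch) @ Rx(roll) -- one common convention.
    roll = np.arctan2(R[2, 1], R[2, 2])
    pitch = np.arctan2(-R[2, 0], np.hypot(R[0, 0], R[1, 0]))
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return np.degrees([roll, pitch, yaw])

# A pure 30-degree rotation about X: rvec = [0.5236, 0, 0].
rvec = np.array([np.radians(30.0), 0.0, 0.0])
print(matrix_to_euler_xyz(rodrigues_to_matrix(rvec)))  # close to [30, 0, 0]
```

In OpenCV itself the equivalent conversion is `cv2.Rodrigues(rvec)`, which returns the 3x3 rotation matrix; the Euler decomposition you then apply must match the convention your renderer uses.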
Thanks
Loïc
kopacabana73
Wed, 25 Oct 2017 09:18:24 -0500
http://answers.opencv.org/question/176922/

SolvePnP determines the translation well, but not the rotation
http://answers.opencv.org/question/161150/solvepnp-determines-well-the-translation-but-not-the-rotation/

I am trying to extract the pose of a camera knowing five point correspondences between the image and the world.
I know the intrinsics and the distortion coefficients, and to determine the camera pose I am using the solvePnP method.
My problem is that the translation vector is right, but the rotation does not seem to be. I invert the joint transformation that contains tvec and rvec.
If you suspect what could be happening and need more details, please ask. Thank you in advance.
My code:
vector<Point2f> contoursCenter; // image points
RNG rng(12345);
for (size_t i = 0; i < contours.size(); i++)
{
    Moments m = moments(contours[i], false);
    Point2f center = Point2f(m.m10 / m.m00, m.m01 / m.m00);
    contoursCenter.push_back(center);
    Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
    drawContours(drawing, contours, i, color, 2, 8);
    circle(drawing, center, 4, color, -1, 8, 0);
}

vector<Point2f> origCenters;
// convert back from bird's-eye view
perspectiveTransform(contoursCenter, origCenters, homography.inv());
for (auto center : origCenters)
{
    Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
    circle(image, center, 4, color, -1, 8, 0);
}

namedWindow("drawing", WINDOW_NORMAL);
resizeWindow("drawing", 1366, 768);
imshow("drawing", image);
waitKey();
// solvePnP
vector<Point3f> worldCoordinates;
// camera perspective
worldCoordinates.push_back(Point3f(0.995, -0.984, 0.005 + 0.945));  // left down circle
worldCoordinates.push_back(Point3f(-0.954, -0.984, 0.005 + 0.945)); // left upper circle
worldCoordinates.push_back(Point3f(0, 0, 1.4));                     // ball
worldCoordinates.push_back(Point3f(-0.954, 0.987, 0.005 + 0.945));  // right upper circle
worldCoordinates.push_back(Point3f(0.995, 0.987, 0.005 + 0.945));   // right down circle

cv::Mat intrinsic(cameraInfo_.intrinsicMatrix());
cv::Mat distortion(cameraInfo_.distortionCoeffs());

// Initial guess: the known camera pose in the world frame
float x = 7;
float y = 0;
float z = 4.5;
tf::Vector3 pos(x, y, z);
double roll = -TFSIMD_HALF_PI - 0.39;
double pitch = 0;
double yaw = TFSIMD_HALF_PI;
tf::Quaternion quat;
quat.setRPY(roll, pitch, yaw);
tf::Transform transform;
transform.setOrigin(pos);
transform.setRotation(quat);

// Invert, since solvePnP works with the world-to-camera transform
tf::Transform invTrans = transform.inverse();
tf::Matrix3x3 rotMat(invTrans.getRotation());
rotMat.getRPY(roll, pitch, yaw);
x = invTrans.getOrigin().x();
y = invTrans.getOrigin().y();
z = invTrans.getOrigin().z();
cv::Mat rvec = getRotationVectorFromTaitBryanAngles(roll, pitch, yaw);
cv::Mat tvec = (cv::Mat_<float>(3, 1) << x, y, z);

bool useInitialGuess = false;
solvePnP(worldCoordinates, origCenters, intrinsic, distortion, rvec, tvec, useInitialGuess, CV_EPNP);

// L2 error using projectPoints
double errPnP = calculateError(rvec, tvec, intrinsic, distortion, origCenters, worldCoordinates);
std::cout << "PnP mean error = " << errPnP << std::endl; // error = 1.2
//tf::Transform transform;
transform.setOrigin(tf::Vector3(tvec.at<double>(0), tvec.at<double>(1), tvec.at<double>(2)));
tf::Quaternion q;
// get roll, pitch and yaw from the rotation vector
correctRVec(rvec, roll, pitch, yaw);
q.setRPY(roll, pitch, yaw);
//q.setRPY(rvec.at<double>(0), rvec.at<double>(1), rvec.at<double>(2));
transform.setRotation(q);

// change back to the camera transformation
tf::Transform invTransform = transform.inverse();
double xInv = invTransform.getOrigin().x();
double yInv = invTransform.getOrigin().y();
double zInv = invTransform.getOrigin().z();
tf::Matrix3x3 invRotMat(invTransform.getRotation());
double rollInv, pitchInv, yawInv;
invRotMat.getRPY(rollInv, pitchInv, yawInv);

std::cout << "trans = [" << xInv << " " << yInv << " " << zInv << "]" << std::endl;
std::cout << "rot = [" << rollInv << " " << pitchInv << " " << yawInv << "]" << std::endl;
std::cout << "rot (deg) = [" << rollInv * 180. / M_PI << " " << pitchInv * 180. / M_PI << " " << yawInv * 180. / M_PI << "]" << std::endl;
The world frame has an x axis that points forwards, a y axis that points to the left and a z axis that points up.
In this frame the real pose of the camera is 7 meters in x and 4.5 meters in z, with a pitch of 0.39 radians and a yaw of PI radians.
My results are the following:
trans = [7.0834 -0.0451012 4.65986]
rot = [-1.97556 -0.00664226 1.56302]
rot (deg) [-113.191 -0.380573 89.5545]
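One thing worth double-checking in setups like this: solvePnP returns the transform that maps world points into the camera frame, so the camera's pose in the world is the inverse, R_wc = Rᵀ and t_wc = -Rᵀt. A numpy sketch of the inversion the tf code performs (the example pose below is made up, not the one from the question):

```python
import numpy as np

def camera_pose_from_extrinsics(R, t):
    # solvePnP's (R, t) map world points into the camera frame:
    #   X_cam = R @ X_world + t
    # The camera's pose in the world frame is the inverse transform.
    return R.T, -R.T @ t

# Toy check: a camera at (7, 0, 4.5) with some made-up orientation R_wc.
R_wc = np.array([[0., -1., 0.],
                 [0.,  0., -1.],
                 [1.,  0.,  0.]])
t_wc = np.array([7.0, 0.0, 4.5])

# Build the corresponding extrinsics, then invert them back:
R = R_wc.T
t = -R_wc.T @ t_wc
Rw, tw = camera_pose_from_extrinsics(R, t)
print(np.round(tw, 6))  # recovers the camera position (7, 0, 4.5)
```

Mixing up the two directions (or applying the inversion twice, as the double `transform.inverse()` above invites) leaves the translation looking plausible while the recovered roll/pitch/yaw come out wrong, which matches the symptom described.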
agbj
Mon, 19 Jun 2017 11:08:57 -0500
http://answers.opencv.org/question/161150/

Simplified pose estimation on collinear points, like solvePnP
http://answers.opencv.org/question/128137/simplified-pose-estimation-on-collinear-points-like-solvepnp/

Let's say I have a pole on which I can identify points in the image, like a vertical post painted in contrasting colors with a known pattern, and I know the distances between those points in world coordinates. But since it is a pole, all the points lie on a single line. I tried to use solvePnP, but it seems not to be very happy with collinear points. Are there any other methods in OpenCV, similar to solvePnP, that work for collinear points? Since this is a pole, I can live without the rotation around the pole if that simplifies the problem. Thank you.

Mikha
Wed, 15 Feb 2017 11:33:20 -0600
http://answers.opencv.org/question/128137/
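A note on why solvePnP struggles here: when all the 3D points lie on one line, a rotation about that line moves none of them, so a whole family of poses produces identical projections and the full 6-DOF problem is degenerate. A numpy sketch demonstrating this (the pinhole intrinsics and poses below are made up for illustration):

```python
import numpy as np

def rot_z(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

def rot_x(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1., 0., 0.], [0., c, -s], [0., s, c]])

def project(points, R, t, f=300.0, cx=400.0, cy=300.0):
    # Simple pinhole projection of object points under pose (R, t).
    cam = points @ R.T + t
    return np.column_stack([f * cam[:, 0] / cam[:, 2] + cx,
                            f * cam[:, 1] / cam[:, 2] + cy])

# Marks on a vertical pole: all object points lie on the z axis.
pole = np.array([[0.0, 0.0, h] for h in (0.0, 0.5, 1.0, 1.5, 2.0)])
t = np.array([0.2, 0.1, 5.0])
R_a = rot_x(0.35)               # some camera orientation
R_b = rot_x(0.35) @ rot_z(1.2)  # same, plus a spin about the pole axis

# The two poses yield pixel-identical projections, so no PnP solver can
# recover the rotation about the pole from these points alone.
assert np.allclose(project(pole, R_a, t), project(pole, R_b, t))
```

In practice this suggests dropping the unobservable degree of freedom: solve for the camera position and the two tilt angles of the pole (five unknowns) by minimising reprojection error directly, with the spin about the pole fixed to zero.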