OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers (en). Copyright OpenCV foundation, 2012-2018. Last build: Fri, 23 Nov 2018 02:24:57 -0600.

Reverse camera angle w/ aruco tracking
http://answers.opencv.org/question/203907/reverse-camera-angle-w-aruco-tracking/
I have Aruco tracking working and, from cobbling together stuff from various code samples, ended up with the code below, where `final` is the view matrix passed to the camera. The problem is that the rotation of the camera isn't exactly what I need. I'm not sure which axis is wrong, but you can see in the following video that I want the base of the model to sit on the marker, yet it isn't oriented quite right. Any tips to get it right would be great! I'm open to re-orienting it in Blender too if that's the right solution; I'm just not sure exactly _how_ it's wrong right now.
Video example:
https://youtu.be/-7WDxa-e2Oo
Code:
const inverse = cv.matFromArray(4, 4, cv.CV_64F, [
     1.0,  1.0,  1.0,  1.0,
    -1.0, -1.0, -1.0, -1.0,
    -1.0, -1.0, -1.0, -1.0,
     1.0,  1.0,  1.0,  1.0
]);
cv.estimatePoseSingleMarkers(markerCorners, 0.1, cameraMatrix, distCoeffs, rvecs, tvecs);
cv.Rodrigues(rvecs, rout);

const tmat = tvecs.data64F;
const rmat = rout.data64F;
const viewMatrix = cv.matFromArray(4, 4, cv.CV_64F, [
    rmat[0], rmat[1], rmat[2], tmat[0],
    rmat[3], rmat[4], rmat[5], tmat[1],
    rmat[6], rmat[7], rmat[8], tmat[2],
    0.0,     0.0,     0.0,     1.0
]);

const output = cv.Mat.zeros(4, 4, cv.CV_64F);
// note: cv.multiply is element-wise, so this flips the signs of rows 1 and 2
cv.multiply(inverse, viewMatrix, output);
cv.transpose(output, output);
const final = output.data64F;

dakom, Fri, 23 Nov 2018 02:24:57 -0600
http://answers.opencv.org/question/203907/

OpenCV + OpenGL: proper camera pose using solvePnP
http://answers.opencv.org/question/23089/opencv-opengl-proper-camera-pose-using-solvepnp/
I've got a problem obtaining a proper camera pose from the iPad camera using OpenCV.
I'm using custom made 2D marker (based on [AruCo library](http://www.uco.es/investiga/grupos/ava/node/26) ) - I want to render 3D cube over that marker using OpenGL.
To receive the camera pose I'm using the solvePnP function from OpenCV.
According to [THIS LINK](http://stackoverflow.com/questions/18637494/camera-position-in-world-coordinate-from-cvsolvepnp) I'm doing it like this:
    cv::solvePnP(markerObjectPoints, imagePoints, [self currentCameraMatrix],
                 _userDefaultsManager.distCoeffs, rvec, tvec);
    tvec.at<double>(0, 0) *= -1; // I don't know why I have to do it, but translation in X axis is inverted

    cv::Mat R;
    cv::Rodrigues(rvec, R); // R is 3x3
    R = R.t();              // rotation of inverse
    tvec = -R * tvec;       // translation of inverse

    cv::Mat T(4, 4, R.type());                      // T is 4x4
    T(cv::Range(0, 3), cv::Range(0, 3)) = R * 1;    // copies R into T
    T(cv::Range(0, 3), cv::Range(3, 4)) = tvec * 1; // copies tvec into T
    double *p = T.ptr<double>(3);
    p[0] = p[1] = p[2] = 0;
    p[3] = 1;
The camera matrix and distortion coefficients come from chessboard calibration (*findChessboardCorners* plus *calibrateCamera*), *imagePoints* are the manually detected corners of the marker (you can see them as a green square in the video posted below), and *markerObjectPoints* are hardcoded points representing the marker corners:
    markerObjectPoints.push_back(cv::Point3d(-6, -6, 0));
    markerObjectPoints.push_back(cv::Point3d( 6, -6, 0));
    markerObjectPoints.push_back(cv::Point3d( 6,  6, 0));
    markerObjectPoints.push_back(cv::Point3d(-6,  6, 0));
Because the marker is 12 cm long in the real world, I chose the same size in the model for easier debugging.
As a result I receive a 4x4 matrix T, which I'll use as the ModelView matrix in OpenGL.
Using GLKit, the drawing function looks more or less like this:
    - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
        // preparations
        glClearColor(0.0, 0.0, 0.0, 0.0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        float aspect = fabsf(self.bounds.size.width / self.bounds.size.height);
        effect.transform.projectionMatrix =
            GLKMatrix4MakePerspective(GLKMathDegreesToRadians(39), aspect, 0.1f, 1000.0f);

        // set modelViewMatrix
        float *mat = generateOpenGLMatFromOpenCVMat(T);
        currentModelMatrix = GLKMatrix4MakeWithArrayAndTranspose(mat);
        effect.transform.modelviewMatrix = currentModelMatrix;

        [effect prepareToDraw];
        glDrawArrays(GL_TRIANGLES, 0, 36); // draw previously prepared cube
    }
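The helper `generateOpenGLMatFromOpenCVMat` isn't shown in the post, so here is a hedged sketch of how such a conversion is commonly written (the function name and plain-array signature are my own, not from the post): OpenCV's pose matrix is row-major with +Y down and +Z forward, while OpenGL expects a column-major array with +Y up and +Z toward the viewer, so one negates the second and third rows (the 180° flip about X that the linked article mentions) and transposes into column-major order:

```cpp
#include <cassert>

// Hypothetical sketch of the missing conversion helper. Input is a 4x4
// row-major OpenCV pose (world -> camera); output is a column-major
// float[16] suitable for OpenGL. Negating rows 1 and 2 is the 180-degree
// rotation about X that maps OpenCV's camera axes onto OpenGL's.
void cvPoseToGL(const double cv[16], float gl[16]) {
    for (int r = 0; r < 4; ++r) {
        for (int c = 0; c < 4; ++c) {
            double v = cv[r * 4 + c];
            if (r == 1 || r == 2) v = -v;          // flip Y and Z rows
            gl[c * 4 + r] = static_cast<float>(v); // transpose to column-major
        }
    }
}
```

If a conversion like this already bakes in the flip, the extra 180° rotation from the article is indeed unnecessary; if neither the conversion nor the extra rotation is applied, mixed-up X and Y rotations like those described below are a plausible symptom.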
I'm not rotating everything by 180 degrees around the X axis (as mentioned in the previously linked article), because it doesn't look necessary.
The problem is that it doesn't work! The translation vector looks OK, but the X and Y rotations are messed up :(
I've recorded a video presenting that issue:
[http://www.youtube.com/watch?v=EMNBT5H7-os](http://www.youtube.com/watch?v=EMNBT5H7-os)
I've tried almost everything (including inverting all axes one by one), but nothing actually works.
What should I do? How should I properly display this 3D cube? The translation/rotation vectors that come from solvePnP look reasonable, so I guess I'm just not mapping them to OpenGL matrices correctly.

axadiw, Sat, 26 Oct 2013 17:49:13 -0500
http://answers.opencv.org/question/23089/

Proper way of rotating 3D points around axis
http://answers.opencv.org/question/169888/proper-way-of-rotating-3d-points-around-axis/
Hello!
I have a problem applying a rotation to a set of 3D points. I use a depth map, which stores the Z coordinates of the points, and the inverse of the camera intrinsic matrix to obtain the X and Y coordinates of each point. I need to rotate those 3D points around the Y axis and compute the depth map after rotation. The code I use is here:
    for (int a = 0; a < depthValues.rows; ++a)
    {
        for (int b = 0; b < depthValues.cols; ++b)
        {
            float oldDepth = depthValues.at<cv::Vec3f>(a, b)[0];
            if (oldDepth > EPSILON)
            {
                cv::Mat pointInWorldSpace = cameraMatrix.inv() * cv::Mat(cv::Vec3f(a, b, 1), false);
                pointInWorldSpace *= oldDepth;

                cv::Mat rotatedPointInWorldSpace = rotation * pointInWorldSpace;
                float newDepth = rotatedPointInWorldSpace.at<cv::Vec3f>(0, 0)[2];

                cv::Mat rotatedPointInImageSpace = cameraMatrix * rotatedPointInWorldSpace;
                int x = rotatedPointInImageSpace.at<cv::Vec3f>(0, 0)[0] / newDepth;
                int y = rotatedPointInImageSpace.at<cv::Vec3f>(0, 0)[1] / newDepth;

                x = x < 0 ? 0 : x;
                y = y < 0 ? 0 : y;
                x = x > depthValues.rows - 1 ? depthValues.rows - 1 : x;
                y = y > depthValues.cols - 1 ? depthValues.cols - 1 : y;

                depthValuesAfterConversion.at<cv::Vec3f>(x, y) = cv::Vec3f(newDepth, newDepth, newDepth);
            }
        }
    }
Here's how I compute the rotation matrix:
    float angle = (15.0f * 3.14159265f) / 180.0f;
    float rotateYaxis[3][3] =
    {
        { cos(angle), 0, -sin(angle) },
        { 0,          1,  0          },
        { sin(angle), 0,  cos(angle) }
    };
    cv::Mat rotation(3, 3, CV_32FC1, rotateYaxis);
Unfortunately, after applying this rotation my depth map looks like it was rotated around the X axis. I discovered that when I compute the rotation matrix as if it were a rotation around the X axis, my code works as expected.
My question is: could you point out where I made a mistake in my code? Using the matrix described above I expected my depth map to be rotated around the Y axis, not X.
Thank you for your help!
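A possible culprit (an observation, not a verified fix): `depthValues.at<cv::Vec3f>(a, b)` indexes (row, col), but the homogeneous pixel vector should be (x, y, 1) = (b, a, 1), so `cv::Vec3f(a, b, 1)` swaps x and y, and a rotation meant for the Y axis can then show up around X. For reference, a minimal sketch of a standard right-handed Y-axis rotation (plain arrays, no OpenCV; the function name is mine):

```cpp
#include <cassert>
#include <cmath>

// Standard right-handed rotation about the Y axis. Note the sign
// placement: +sin belongs in the (0,2) slot. The matrix in the question
// has the signs transposed, which rotates by -angle instead; a point on
// the +Z axis should swing toward +X under this rotation.
void rotateY(float angle, const float in[3], float out[3]) {
    out[0] =  std::cos(angle) * in[0] + std::sin(angle) * in[2];
    out[1] =  in[1];
    out[2] = -std::sin(angle) * in[0] + std::cos(angle) * in[2];
}
```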
seaxgast, Fri, 28 Jul 2017 15:36:05 -0500
http://answers.opencv.org/question/169888/

camera rotation and translation based on two images
http://answers.opencv.org/question/68023/camera-rotation-and-translation-based-on-two-images/
Hello,
I'm just starting my little project in OpenCV and I need your help :)
I would like to calculate the rotation and translation of the camera based on two views of the same planar, square object.
I have already found functions such as: getPerspectiveTransform, decomposeEssentialMat, decomposeHomographyMat. Plenty of tools, but I'm not sure which of them to use in my case.
I have a square object of known real-world dimensions [meters]. After simple image processing I can extract the pixel coordinates of the vertices and the center of the square.
Now I would like to calculate the relative rotation and translation of the camera between the "Reference view" and "View #n" (please see below).
Any suggestions will be appreciated :)
1. Reference view:
![image description](/upfiles/1438854857209.png)
(the center of the object is on the optical axis of the camera, and the camera-object distance is known)
2. View #1:
![image description](/upfiles/14388548769288926.png)
3. View #2:
![image description](/upfiles/14388548834324958.png)
4. View #3:
![image description](/upfiles/1438854889587757.png)
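Since each view shows a planar object with four known 3D corners, one common route (a sketch under assumptions, not a definitive answer) is to estimate each view's pose with cv::solvePnP and then compose the relative motion. If each pose maps world to camera as x_cam = R*x + t, the motion from the reference view to view n is R_rel = R_n * R_ref^T and t_rel = t_n - R_rel * t_ref. The composition step, with hypothetical names and plain arrays standing in for the solvePnP outputs:

```cpp
#include <cassert>

// Sketch of composing a relative camera pose from two absolute poses.
// In practice R/t would come from cv::solvePnP (via cv::Rodrigues on
// rvec) run on the four square corners in each image.
struct Pose { double R[9]; double t[3]; }; // row-major R, world -> camera

Pose relative(const Pose& ref, const Pose& n) {
    Pose out{};
    // R_rel = R_n * R_ref^T  (transpose of a rotation is its inverse)
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                out.R[i*3 + j] += n.R[i*3 + k] * ref.R[j*3 + k];
    // t_rel = t_n - R_rel * t_ref
    for (int i = 0; i < 3; ++i) {
        double s = 0.0;
        for (int k = 0; k < 3; ++k) s += out.R[i*3 + k] * ref.t[k];
        out.t[i] = n.t[i] - s;
    }
    return out;
}
```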
Alice, Thu, 06 Aug 2015 05:40:19 -0500
http://answers.opencv.org/question/68023/

Is the rotation matrix R described in Camera Calibration the same as the rotation matrix R used in stitching?
http://answers.opencv.org/question/59807/is-the-rotation-matrix-r-described-in-camera-calibration-the-same-as-the-rotation-matrix-r-used-in-stitching/
I believe there is an inconsistency between the camera rotation matrix defined in the camera calibration module (documented here: http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#camera-calibration-and-3d-reconstruction) and the camera rotation matrix defined in the stitching module (documented here: http://docs.opencv.org/modules/stitching/doc/camera.html).
The OpenCV rotation matrix described in the calibration module is the generally known rotation matrix from camera extrinsics, which I illustrate with an example. Suppose in real world coordinates, there is a red dot at point (-1, -1, 1). Suppose there is also a camera (at origin) whose principal axis also points to (-1, -1, 1). Then this rotation matrix R would be a transformation that turns (-1, -1, 1) into (0, 0, 1), since the red dot should appear in the middle of the photo taken by that camera.
However, the rotation matrix used in stitching, CameraParams.R, appears to be the inverse of the matrix R, mentioned above, based on the way CameraParams.R is calculated in motion_estimators.cpp. In the code, it looks like CameraParams.R is a matrix that takes the vector (0, 0, 1) and turns it into the principal axis of the camera in question, unless I am terribly mistaken. Using the example above, CameraParams.R is a transformation that turns (0, 0, 1) into (-1, -1, 1); in other words, the inverse of the previous rotation matrix R.
Is this indeed the case? And if yes, why are the two definitions of a rotation matrix different? Why have two definitions of a rotation matrix in OpenCV?
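If the two conventions really are inverses of each other, converting between them is just a transpose, since a rotation matrix's inverse equals its transpose. A small sketch (plain arrays, helper names are mine) showing that if R maps the camera's principal axis to (0, 0, 1), then R^T maps (0, 0, 1) back to that axis:

```cpp
#include <cassert>
#include <cmath>

// Apply a 3x3 row-major matrix to a vector.
void apply(const double M[9], const double v[3], double out[3]) {
    for (int i = 0; i < 3; ++i)
        out[i] = M[i*3]*v[0] + M[i*3 + 1]*v[1] + M[i*3 + 2]*v[2];
}

// Transpose a 3x3 row-major matrix; for a rotation this is its inverse.
void transpose3(const double M[9], double Mt[9]) {
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            Mt[i*3 + j] = M[j*3 + i];
}
```

So if the calibration-style R turns the principal axis into (0, 0, 1), the stitching-style CameraParams.R would simply be that matrix transposed.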
cris, Tue, 14 Apr 2015 19:45:44 -0500
http://answers.opencv.org/question/59807/

A question about relation of (K R T H)
http://answers.opencv.org/question/26821/a-question-about-relation-of-k-r-t-h/
H is the homography matrix, R = (r1, r2, r3) is a rotation matrix, t is a translation vector, and K is the intrinsic matrix. I want to get H from K, R, and T, so I use the equation H = K(R|T). But I want to get the homography matrix between two 2D images, and I only use r1 and r2, like H = K(r1, r2, T). Is that right? Thank you for your reply!

我干过豪哥, Mon, 20 Jan 2014 11:55:01 -0600
http://answers.opencv.org/question/26821/

Android camera image rotation
http://answers.opencv.org/question/6930/android-camera-image-rotation/
Does OpenCV support different device orientations (i.e. portrait and landscape) on Android?
I am capturing camera frames on the native side. If the device orientation is not landscape, the images come out rotated. Is there a way to fix the rotation without manually rotating the captured image?

arsalank2, Tue, 05 Feb 2013 07:43:55 -0600
http://answers.opencv.org/question/6930/
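As far as I know there is no automatic handling; the usual approach is to rotate each frame yourself based on the display orientation, e.g. with cv::rotate(src, dst, cv::ROTATE_90_CLOCKWISE) in OpenCV 3.2 and later. To show what that operation amounts to, here is a sketch of the 90°-clockwise step on a plain single-channel buffer (no OpenCV types; the function name is mine):

```cpp
#include <cassert>
#include <vector>

// Rotate a rows x cols single-channel image 90 degrees clockwise,
// producing a cols x rows image. This is equivalent to
// cv::rotate(src, dst, cv::ROTATE_90_CLOCKWISE), which is itself a
// transpose followed by a horizontal flip.
std::vector<unsigned char> rotate90cw(const std::vector<unsigned char>& src,
                                      int rows, int cols) {
    std::vector<unsigned char> dst(src.size());
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            // source pixel (r, c) lands at (c, rows - 1 - r) in the output
            dst[c * rows + (rows - 1 - r)] = src[r * cols + c];
    return dst;
}
```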