OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright OpenCV foundation, 2012-2018.

Transforming global coordinates to camera coordinates
http://answers.opencv.org/question/237933/transforming-global-coordinates-to-camera-coordinates/

I want to get the transformation from the global coordinate system to the coordinate system of the camera image. I have a stationary camera pointing at the ground at an angle of approximately 20 degrees. I have already obtained the camera's intrinsic and distortion parameters.
My current setup is as follows. I placed my camera at the middle of a chessboard edge (my (0,0) coordinate in the global coordinate system) and measured the distances of the chessboard intersections in mm. I then used cv::findChessboardCorners to find those corners in the image, and cv::solvePnP to obtain rvec and tvec, from which I generated the transformation matrix.
Mat cameraMatrix = (Mat_<float>(3,3) <<
715.18604574311325, 0.0, 319.5,
0.0, 715.18604574311325, 239.5,
0.0, 0.0, 1.0);
Mat distCoeffs = (Mat_<float>(5,1) <<
-0.013535583817766943, 0.10657613007692497, 0.0, 0.0, -1.2272218410276732);
vector<Point2f> pointBuf;
vector<Point3f> boardPoints;
Mat R, rvec, tvec;
bool found;
//...
//code for declaring intersection coordinates (in mm) in the global coordinate system
//...
found = cv::findChessboardCorners(source, size, pointBuf);
if (found) {
    solvePnP(boardPoints, pointBuf, cameraMatrix, distCoeffs, rvec, tvec, false);
    Rodrigues(rvec, R);
    // invert the pose: solvePnP returns world -> camera; this gives camera -> world
    R = R.t();
    tvec = -R * tvec;
    Mat T = cv::Mat::eye(4, 4, R.type());
    R.copyTo(T(cv::Range(0,3), cv::Range(0,3)));
    tvec.copyTo(T(cv::Range(0,3), cv::Range(3,4)));
}
Am I correct in assuming that if I multiply a 4x1 vector of global coordinates (in mm) by the matrix T, for instance

Mat p1 = (Mat_<float>(4, 1) << 100, 200, 0, 1); //the units are mm

I get the corresponding x,y coordinates on the image plane, in pixels?
result = T*p1;
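A minimal sketch (plain C++, no OpenCV) of the full world -> pixel chain the question is after. It assumes solvePnP's rvec/tvec are used directly (world -> camera) rather than the inverted R.t()/-R*tvec pose, and reuses the fx, fy, cx, cy values from cameraMatrix above; a plain T*p1 multiplication stops at 3D camera coordinates and never performs the perspective divide or applies the intrinsics:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// p_cam = R * p_world + t, with R the row-major 3x3 rotation from Rodrigues(rvec)
// and t = tvec as returned by solvePnP (world -> camera, NOT the inverted pose).
Vec3 worldToCamera(const double R[9], const Vec3& t, const Vec3& p) {
    return { R[0]*p.x + R[1]*p.y + R[2]*p.z + t.x,
             R[3]*p.x + R[4]*p.y + R[5]*p.z + t.y,
             R[6]*p.x + R[7]*p.y + R[8]*p.z + t.z };
}

// Pinhole projection: perspective divide by z, then apply the intrinsics.
// This is the step a 4x4 matrix multiplication alone cannot perform.
void cameraToPixel(const Vec3& pc, double fx, double fy, double cx, double cy,
                   double& u, double& v) {
    u = fx * pc.x / pc.z + cx;
    v = fy * pc.y / pc.z + cy;
}
```

For example, with a hypothetical identity rotation and the camera 1000 mm in front of the plane, p_world = (100, 200, 0) mm lands near (391.0, 382.5) px with the intrinsics above. Distortion is ignored here; projectPoints performs this whole chain, including distortion, in one call.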
The results I get with the current code are wrong; I just don't know whether I missed something, got the units wrong, or my code is simply incorrect.

lipa1242, Tue, 17 Nov 2020 11:41:28 -0600
http://answers.opencv.org/question/237933/

Perspective transformation - Deriving formula for single camera
http://answers.opencv.org/question/221482/perspective-transformation-deriving-formula-for-single-camera/

I would like to solve an equation system and from it derive a formula that takes two inputs and gives one output:
**Input:**
- (u,v) - Pixel coordinates
- (t) - Translation of the camera with respect to the plane, in one dimension (z)
**Output**
- (x,y) - World coordinate
The rotation of the camera is fixed; the only parameter that varies is the height of the camera with respect to the plane.
I've successfully solved an equation system for the case where the camera has fixed rotation and fixed height, as described here: https://dsp.stackexchange.com/a/46591/46122
Now I want to express a formula that takes one additional parameter (height in [mm]), but I'm not sure what that equation system would look like.
My goal is to have a camera mounted on a linear rail (that moves in the z-direction and is vertical to the plane) that can detect objects on the plane. To my help, I have a laser sensor that constantly measures the height from the plane to the camera, which can be given as an input to the transform.
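As a sanity check on how the extra parameter enters: in the special case of a nadir (straight-down) camera with identity rotation, the measured height h scales the back-projection linearly. A sketch in plain C++ with hypothetical intrinsics fx, fy, cx, cy; the general tilted case adds the fixed rotation, but h still appears as the single varying input to the same closed-form map:

```cpp
#include <cassert>
#include <cmath>

// Back-project pixel (u, v) onto the plane a height h below the camera,
// assuming a straight-down view with identity rotation. By similar triangles
// X/Z = (u - cx)/fx, so with Z = h the recovered offsets are linear in h.
void pixelToWorld(double u, double v, double h,
                  double fx, double fy, double cx, double cy,
                  double& x, double& y) {
    x = (u - cx) * h / fx;
    y = (v - cy) * h / fy;
}
```

So with a fixed rotation the derivation from the linked answer gains h only as a scale factor on the back-projected ray; feeding the laser reading in as h each frame should suffice, without re-deriving the equation system per height.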
Any help is appreciated!

r.andersson, Mon, 11 Nov 2019 03:32:02 -0600
http://answers.opencv.org/question/221482/

Transform 2D Point into 3D Line
http://answers.opencv.org/question/207274/transform-2d-point-into-3d-line/

I'm looking for a function that transforms a 2D image point into a 3D line in my model (a specific coordinate system, tied to the translation and rotation vectors returned by solvePnP()).
I have: cameraMatrix, rotation and translation vectors, distortionCoefficients.
It would be the inverse of projectPoints(), which takes a 3D point and transforms it into a 2D image point.
Is there any solution for this issue?
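There is no single OpenCV call for this, but the inverse is short to write down: a pixel back-projects to a ray, not a point. A sketch in plain C++, assuming a row-major 3x3 R from Rodrigues(rvec) and t = tvec; the pixel should be undistorted first (e.g. with undistortPoints), since this ignores distortion:

```cpp
#include <cassert>
#include <cmath>

// A 3D ray: origin plus direction. Every point origin + s * d (s > 0)
// projects to the same pixel.
struct Ray { double ox, oy, oz, dx, dy, dz; };

// Back-project pixel (u, v):
//   d_cam   = K^-1 * (u, v, 1)^T   direction in the camera frame
//   origin  = -R^T * t             camera centre in world coordinates
//   d_world =  R^T * d_cam         direction in world coordinates
Ray pixelToRay(double u, double v,
               double fx, double fy, double cx, double cy,
               const double R[9], const double t[3]) {
    // direction in the camera frame (K^-1 applied to the homogeneous pixel)
    double dc[3] = { (u - cx) / fx, (v - cy) / fy, 1.0 };
    Ray r;
    // R is row-major, so R^T picks columns: (R^T v)_i = R[0*3+i]v0 + R[1*3+i]v1 + R[2*3+i]v2
    r.ox = -(R[0]*t[0] + R[3]*t[1] + R[6]*t[2]);
    r.oy = -(R[1]*t[0] + R[4]*t[1] + R[7]*t[2]);
    r.oz = -(R[2]*t[0] + R[5]*t[1] + R[8]*t[2]);
    r.dx =   R[0]*dc[0] + R[3]*dc[1] + R[6]*dc[2];
    r.dy =   R[1]*dc[0] + R[4]*dc[1] + R[7]*dc[2];
    r.dz =   R[2]*dc[0] + R[5]*dc[1] + R[8]*dc[2];
    return r;
}
```

Intersecting the ray with a known surface (e.g. the z = 0 plane of the model) then recovers a unique 3D point.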
LuisK, Thu, 17 Jan 2019 11:24:49 -0600
http://answers.opencv.org/question/207274/

Does this geometric transformation exist in OpenCV?
http://answers.opencv.org/question/191861/is-this-geometric-transformation-exists-in-opencv/

Hi,
I want to know whether a function that performs such a transformation exists in OpenCV:
![image description](/upfiles/15266542978353376.png)
Thanks for any help,

Kamel, Fri, 18 May 2018 08:44:09 -0500
http://answers.opencv.org/question/191861/

Using warpPerspective to simulate a virtual camera
http://answers.opencv.org/question/189944/using-warpperspective-to-simulate-virtual-camera-issues/

Hi guys,
Apologies if this seems trivial - I'm relatively new to OpenCV.
Essentially, I'm trying to create a function that takes a camera's image, the known world coordinates of that image, and the world coordinates of some other point 2, and transforms the camera's image to what it would look like if the camera were at point 2. From my understanding, the best way to tackle this is a homography, applied with warpPerspective.
The experiment is being done inside the Unreal Game simulation engine. Right now, I essentially read the data from the camera and apply a fixed transformation to the image. However, I seem to be doing something wrong, as the image comes out looking like this (original image first, then distorted image):
**Original Image**
![original image](/upfiles/15244292176869851.png)
**Distorted Image**
![distorted image](/upfiles/15244292432039685.png)
This is the current code I have. Basically, it reads the texture from Unreal Engine, gets the individual pixel values, and puts them into the OpenCV Mat. Then I try to apply my warpPerspective transformation. Interestingly, if I just try a simple warpAffine transformation (a rotation), it works fine. I would really appreciate any help or guidance any of you may have. Thanks in advance!
ROSCamTextureRenderTargetRes->ReadPixels(ImageData);
cv::Mat image_data_matrix(TexHeight, TexWidth, CV_8UC3);
cv::Mat warp_dst, warp_rotate_dst;
int currCol = 0;
int currRow = 0;
cv::Vec3b* pixel_left = image_data_matrix.ptr<cv::Vec3b>(currRow);
for (auto color : ImageData)
{
    pixel_left[currCol][2] = color.R;
    pixel_left[currCol][1] = color.G;
    pixel_left[currCol][0] = color.B;
    currCol++;
    if (currCol == TexWidth)
    {
        currRow++;
        currCol = 0;
        pixel_left = image_data_matrix.ptr<cv::Vec3b>(currRow);
    }
}
warp_dst = cv::Mat(image_data_matrix.rows, image_data_matrix.cols, image_data_matrix.type());
double rotX = (45 - 90)*PI / 180;
double rotY = (90 - 90)*PI / 180;
double rotZ = (90 - 90)*PI / 180;
// A1 - lift a 2D homogeneous pixel to a 3D point on the z = 0 plane, centred
cv::Mat A1 = (cv::Mat_<float>(4, 3) <<
    1, 0, (-1)*TexWidth / 2,
    0, 1, (-1)*TexHeight / 2,
    0, 0, 0,
    0, 0, 1);
// Rotation matrices Rx, Ry, Rz
cv::Mat RX = (cv::Mat_<float>(4, 4) <<
1, 0, 0, 0,
0, cos(rotX), (-1)*sin(rotX), 0,
0, sin(rotX), cos(rotX), 0,
0, 0, 0, 1);
cv::Mat RY = (cv::Mat_<float>(4, 4) <<
cos(rotY), 0, (-1)*sin(rotY), 0,
0, 1, 0, 0,
sin(rotY), 0, cos(rotY), 0,
0, 0, 0, 1);
cv::Mat RZ = (cv::Mat_<float>(4, 4) <<
cos(rotZ), (-1)*sin(rotZ), 0, 0,
sin(rotZ), cos(rotZ), 0, 0,
0, 0, 1, 0,
0, 0, 0, 1);
// R - rotation matrix
cv::Mat R = RX * RY * RZ;
// T - translation matrix
cv::Mat T = (cv::Mat_<float>(4, 4) <<
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, dist,
0, 0, 0, 1);
// K - intrinsic matrix
cv::Mat K = (cv::Mat_<float>(3, 4) <<
    12.5, 0, TexWidth / 2, 0,   // cx belongs on the x row (was TexHeight / 2)
    0, 12.5, TexHeight / 2, 0,  // cy belongs on the y row (was TexWidth / 2)
    0, 0, 1, 0);
cv::Mat warp_mat = K * (T * (R * A1));
//warp_mat = cv::getRotationMatrix2D(srcTri[0], 43.0, 1);
//cv::warpAffine(image_data_matrix, warp_dst, warp_mat, warp_dst.size());
cv::warpPerspective(image_data_matrix, warp_dst, warp_mat, image_data_matrix.size(), CV_INTER_CUBIC | CV_WARP_INVERSE_MAP);
cv::imshow("distort", warp_dst);
cv::imshow("image", image_data_matrix);
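For comparison, in the special case where the virtual camera keeps the same centre and only rotates, the whole K * (T * (R * A1)) pipeline collapses to a single 3x3 homography H = K * R * K^-1. A sketch in plain C++ that can serve as a sanity check on the composed warp; fx = fy = 500, cx = 320, cy = 240 in the test are hypothetical stand-ins for the intrinsics above:

```cpp
#include <cassert>
#include <cmath>

// 3x3 row-major matrix product C = A * B
void mul3(const double A[9], const double B[9], double C[9]) {
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            C[i*3 + j] = A[i*3]*B[j] + A[i*3 + 1]*B[3 + j] + A[i*3 + 2]*B[6 + j];
}

// Apply a homography to a pixel: homogeneous multiply, then perspective divide.
void applyH(const double H[9], double u, double v, double& u2, double& v2) {
    double x = H[0]*u + H[1]*v + H[2];
    double y = H[3]*u + H[4]*v + H[5];
    double w = H[6]*u + H[7]*v + H[8];
    u2 = x / w;
    v2 = y / w;
}

// H = K * Rx(a) * K^-1 for a rotation by angle a about the camera's x axis.
void rotationHomography(double a, double fx, double fy, double cx, double cy,
                        double H[9]) {
    double K[9]    = { fx, 0, cx,      0, fy, cy,          0, 0, 1 };
    double Kinv[9] = { 1/fx, 0, -cx/fx,   0, 1/fy, -cy/fy,    0, 0, 1 };
    double Rx[9]   = { 1, 0, 0,   0, cos(a), -sin(a),   0, sin(a), cos(a) };
    double KR[9];
    mul3(K, Rx, KR);
    mul3(KR, Kinv, H);
}
```

cv::warpPerspective expects exactly such a 3x3 matrix; a useful check on the full pipeline above is that K (3x4) times T, R (4x4) times A1 (4x3) does produce a 3x3 result, and that for zero rotation angles it reduces to something close to the identity.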
rp, Sun, 22 Apr 2018 15:41:42 -0500
http://answers.opencv.org/question/189944/

Apply quadratic function to an image
http://answers.opencv.org/question/173072/apply-quadratic-function-to-an-image/

Hello.
I have a quadratic function I would like to apply to my image.
The transformation has the following form:
* x' = A + B*x + C*y + D*x*x + E*x*y + F*y*y
* y' = G + H*x + I*y + J*x*x + K*x*y + L*y*y
For a linear transformation:
* x' = A + B*x + C*y
* y' = D + E*x + F*y
it was easy because I could use warpAffine, putting the coefficients in a 2x3 matrix.
Moreover, is it easy to recover the rotation angle and scale?
Again, with the linear transformation the rotation angle is easily computed as atan2(B, C), the scale is sqrt(B*B + C*C), and the shifts are A and D.
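warpAffine cannot express the quadratic terms, but the general recipe still works: evaluate the two polynomials per destination pixel into map arrays and resample, which in OpenCV is cv::remap with map_x/map_y (note remap wants the inverse mapping: for each destination pixel, where to sample in the source). A sketch of the polynomial evaluation in plain C++, using the coefficient names from the formulas above:

```cpp
#include <cassert>
#include <cmath>

// Evaluate one quadratic coordinate polynomial:
//   c = {A, B, C, D, E, F}  ->  A + B*x + C*y + D*x*x + E*x*y + F*y*y
double quadPoly(const double c[6], double x, double y) {
    return c[0] + c[1]*x + c[2]*y + c[3]*x*x + c[4]*x*y + c[5]*y*y;
}

// Fill remap-style maps for a width x height image: mapx/mapy give, for each
// destination pixel (x, y), the source position to sample from.
void buildMaps(const double cx[6], const double cy[6], int width, int height,
               float* mapx, float* mapy) {
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            mapx[y*width + x] = (float)quadPoly(cx, x, y);
            mapy[y*width + x] = (float)quadPoly(cy, x, y);
        }
}
```

With the maps built, cv::remap(src, dst, map_x, map_y, cv::INTER_LINEAR) applies the warp. On rotation and scale: a quadratic map has no single global rotation or scale; the linear part (B, C, H, I) only gives a local approximation via its Jacobian.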
lock042, Mon, 28 Aug 2017 09:08:26 -0500
http://answers.opencv.org/question/151682/

Move pixel coordinate
http://answers.opencv.org/question/151682/move-pixel-coordinate/

Hi guys, I want to implement this formula:
![formula](/upfiles/1495704707594347.png)
Can anyone suggest a function in OpenCV to do this?

Kenny Karnama, Thu, 25 May 2017 04:32:35 -0500
http://answers.opencv.org/question/151682/

The cropped ROI image is tilted. I want it rotated to align with the rectangular axes. What method should be used? Please help me with a solution
http://answers.opencv.org/question/135486/the-cropped-roi-image-is-just-tiltedi-wanted-it-to-be-tilted-to-correct-rectangular-axeswhat-method-should-be-used-to-dopls-help-me-with-a-solution/

![image description](/upfiles/14902477553778958.png)

Ashiq KS, Thu, 23 Mar 2017 00:46:12 -0500
http://answers.opencv.org/question/135486/

Perspective transformation between 3D points
http://answers.opencv.org/question/73555/perspective-transformation-between-3d-points/

Hello,
I need to find the transformation from one camera to another (stereo), so I think I need a 3D perspective transformation. Am I right? How can I find such a transformation?
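A note and sketch: between two calibrated cameras the transform is rigid (rotation plus translation), not perspective. Assuming each camera's pose comes from solvePnP against a common world frame (x_cam_i = R_i * x_world + t_i), the relative transform is R12 = R2 * R1^T and t12 = t2 - R12 * t1, so x_cam2 = R12 * x_cam1 + t12; OpenCV's stereoCalibrate returns this R, T directly. In plain C++ with row-major 3x3 matrices:

```cpp
#include <cassert>
#include <cmath>

// 3x3 row-major helpers
void mul3x3(const double A[9], const double B[9], double C[9]) {
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            C[i*3 + j] = A[i*3]*B[j] + A[i*3 + 1]*B[3 + j] + A[i*3 + 2]*B[6 + j];
}
void transpose3(const double A[9], double T[9]) {
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            T[i*3 + j] = A[j*3 + i];
}
void matvec3(const double A[9], const double v[3], double out[3]) {
    for (int i = 0; i < 3; ++i)
        out[i] = A[i*3]*v[0] + A[i*3 + 1]*v[1] + A[i*3 + 2]*v[2];
}

// Relative pose camera1 -> camera2 from two world -> camera poses:
//   R12 = R2 * R1^T,   t12 = t2 - R12 * t1
void relativePose(const double R1[9], const double t1[3],
                  const double R2[9], const double t2[3],
                  double R12[9], double t12[3]) {
    double R1t[9], Rt1[3];
    transpose3(R1, R1t);
    mul3x3(R2, R1t, R12);
    matvec3(R12, t1, Rt1);
    for (int i = 0; i < 3; ++i)
        t12[i] = t2[i] - Rt1[i];
}
```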
Thanks a lot,
Oleg

Oleg_k, Sun, 18 Oct 2015 19:41:01 -0500
http://answers.opencv.org/question/73555/

Around View Monitor
http://answers.opencv.org/question/62585/around-view-monitor/

Does anyone have OpenCV code for an around view monitor? We have four views of images and have to perform the following process:
1. Read image
2. Lens distortion correction
3. Image transformation
4. Image alignment
5. Image stitching
6. AVM view
The AVM view is the top view obtained by combining all the images after stitching, as shown below. Please help us.
![image description](/upfiles/14326127746866943.jpg)
asifbasha, Mon, 25 May 2015 23:04:07 -0500
http://answers.opencv.org/question/62585/