OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers (en). Copyright OpenCV foundation (http://www.opencv.org), 2012-2018. Sat, 04 Apr 2020 11:38:14 -0500

ArUco: Getting the coordinates of a marker in the coordinate system of another marker
http://answers.opencv.org/question/228502/aruco-getting-the-coordinates-of-a-marker-in-the-coordinate-system-of-another-marker/

Hi,
I have a question regarding pose estimation of ArUco markers and coordinate system translation. What I would basically like to accomplish is to get the coordinates of a marker using a coordinate system that is based on another marker.
Let me explain. The plan is to use one stationary marker as a calibration point. After getting the rvec and tvec vectors of that calibration marker I would then like to use this information to calculate the coordinates of the others, in the calibration marker’s coordinate system. (note that the camera and calibration tag are stationary, while the others move)
![image description](/upfiles/1586018710893619.png)
The end goal is to get the x and y coordinates of the markers in mm on the plane of interest. I could then move the origin point to suit my application with some simple addition/subtraction, as the absolute position of my calibration marker is known.
![image description](/upfiles/15860178122705141.png)
I’ve already figured out how to get the coordinates of markers in the camera’s coordinate system, and how to get the camera’s coordinates in the coordinate system of a marker, but I haven’t been able to understand **how I can get the coordinates of a marker in the coordinate system of another marker.**
I’m not really familiar with all of these computer vision concepts, so I have been reading things that have been previously suggested (like [this](https://docs.opencv.org/4.2.0/d9/d0c/group__calib3d.html#details), and [this](https://docs.opencv.org/4.2.0/d9/d0c/group__calib3d.html#ga549c2075fac14829ff4a58bc931c033d)), but I haven’t been able to find a solution yet.
I would really appreciate it if you could verify that what I’m trying to do is possible and point me in the right direction.
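For reference, this is possible: it is a change of reference frame. With T = [R|t] built from each marker's rvec/tvec (both poses expressed in the camera frame), the moving marker's pose in the calibration marker's frame is T_rel = inv(T_cal) * T_marker. Below is a hedged pure-Python sketch of just that composition; in real code you would use cv::Rodrigues and cv::Mat arithmetic, and all helper names here are made up for illustration.

```python
import math

def rodrigues(rvec):
    # Rotation vector -> 3x3 rotation matrix (Rodrigues formula).
    theta = math.sqrt(sum(v * v for v in rvec))
    if theta < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = (v / theta for v in rvec)
    c, s = math.cos(theta), math.sin(theta)
    C = 1.0 - c
    return [
        [c + kx * kx * C,     kx * ky * C - kz * s, kx * kz * C + ky * s],
        [ky * kx * C + kz * s, c + ky * ky * C,     ky * kz * C - kx * s],
        [kz * kx * C - ky * s, kz * ky * C + kx * s, c + kz * kz * C],
    ]

def mat_vec(R, v):
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def transpose(R):
    return [[R[j][i] for j in range(3)] for i in range(3)]

def marker_in_marker_frame(rvec_cal, tvec_cal, rvec_m, tvec_m):
    # Pose of the moving marker in the calibration marker's frame:
    #   R_rel = R_cal^T * R_m
    #   t_rel = R_cal^T * (t_m - t_cal)
    R_cal_T = transpose(rodrigues(rvec_cal))
    d = [tvec_m[i] - tvec_cal[i] for i in range(3)]
    return mat_mul(R_cal_T, rodrigues(rvec_m)), mat_vec(R_cal_T, d)
```

Since the calibration marker and camera are both stationary, R_cal/t_cal can be estimated once (or averaged over many frames) and reused; t_rel is then in the same units as the marker side length you pass to the ArUco pose estimator, so mm in, mm out.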
Thanks

balourdos, Sat, 04 Apr 2020 11:38:14 -0500
http://answers.opencv.org/question/228502/

Does OpenCV itself support augmented reality without the need for additional programs?
http://answers.opencv.org/question/227333/does-opencv-itself-support-augmented-reality-without-need-to-additional-programs/

I need to draw 3D objects on my ArUco markers through OpenCV C++. Everything I found uses OpenGL or other additional programs. Can OpenCV do this itself?
If it can: I need to draw the object on the first marker; then, when I reach for that marker and it becomes hidden, the object should move to the next marker; and when I release my hand from the first one and place it on the second, the object should move to the third marker, not back to the first, and so on.
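Yes, OpenCV alone can do the drawing: estimate the marker pose, project the object's 3D vertices with cv::projectPoints, and connect them with cv::line / cv::polylines on the camera frame, with no OpenGL required. A toy Python sketch of the projection step (identity rotation assumed for brevity; the intrinsics and marker size below are made up, not from the question):

```python
FX, FY, CX, CY = 800.0, 800.0, 320.0, 240.0   # made-up intrinsics
MARKER_SIZE = 0.05                            # 5 cm marker, assumed

def cube_corners(s):
    # 8 corners of a cube sitting on the marker plane (marker at z = 0,
    # cube extending toward the camera along -z, OpenCV convention).
    return [(x, y, z) for z in (0.0, -s) for x in (0.0, s) for y in (0.0, s)]

def project(pt, tvec):
    # Pinhole projection with identity rotation and zero distortion,
    # the same math cv::projectPoints performs internally.
    x, y, z = (pt[i] + tvec[i] for i in range(3))
    return (FX * x / z + CX, FY * y / z + CY)

corners_2d = [project(p, (0.0, 0.0, 0.4)) for p in cube_corners(MARKER_SIZE)]
# corners_2d can now be connected edge by edge with cv::line; no OpenGL needed.
```

The marker-switching behaviour you describe is then just application logic on top of per-frame marker detection (track which marker IDs are currently visible and hand the object over accordingly).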
Any help?

maha mohy, Mon, 09 Mar 2020 23:53:44 -0500
http://answers.opencv.org/question/227333/

test transform between two cameras with projectPoints?
http://answers.opencv.org/question/219028/test-transform-between-two-cameras-with-projectpoints/

I have a stereo-calibrated camera pair, so the intrinsics and extrinsics are known.
I have a chessboard visible in both cameras.
What I want to do is:
1) Find the chessboard in both frames.
2) Run solvePnP on camera A.
3) Using the stereo-calibration extrinsics, project the 3D points into camera B, using the extrinsics as the camera position.
4) Use the distance between the projected points in the camera B frame and the detected chessboard points in the camera B frame to assess the validity of the extrinsics.
I have the following function, which displays the points, but they seem much further off than I would expect from the extrinsic values. Is this the correct approach?
int main()
{
    // Build object points for the chessboard.
    int grid_size_x = 7;
    int grid_size_y = 5;
    double squareSize = 4.5;
    std::vector<cv::Point3f> object_points_;
    for (int y = 0; y < grid_size_y; y++)
    {
        for (int x = 0; x < grid_size_x; x++)
        {
            object_points_.push_back(cv::Point3f(x * squareSize, y * squareSize, 1));
        }
    }

    cv::Mat camMatrixB, distCoeffsB, camMatrixA, distCoeffsA, offsetT, offsetR;
    bool readOk = readCameraParameters("CalibrationData.xml", camMatrixB, distCoeffsB, camMatrixA, distCoeffsA, offsetT, offsetR);
    if (!readOk) {
        std::cerr << "Invalid camera file" << std::endl;
        return -1;
    }

    // solvePnP on frame A to get the board pose.
    std::vector<cv::Point2f> corners_z;
    cv::Mat imagez = cv::imread("FrameA.jpg");
    cv::Mat imageGrayz;
    cv::cvtColor(imagez, imageGrayz, cv::COLOR_BGR2GRAY);
    cv::Mat cv_translationA, cv_rotationA;

    // Find the board in image A.
    bool found_chessboardA = cv::findChessboardCorners(imageGrayz, cv::Size(grid_size_x, grid_size_y), corners_z, 0);
    if (found_chessboardA)
    {
        cv::cornerSubPix(imageGrayz, corners_z, cv::Size(3, 3), cv::Size(-1, -1), cv::TermCriteria(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS, 50, 0.01));

        // Solve for the board pose in A.
        cv::solvePnP(object_points_, corners_z, camMatrixA, distCoeffsA, cv_rotationA, cv_translationA);

        // Set the B pose to be the calculated offset amount, relative to the A pose.
        cv::Mat offsetPos = cv::Mat(1, 3, CV_64F);
        offsetPos.at<double>(0, 0) = cv_translationA.at<double>(0, 0) - (offsetT.at<double>(0, 0) / 10);
        offsetPos.at<double>(0, 1) = cv_translationA.at<double>(0, 1) - (offsetT.at<double>(0, 1) / 10);
        offsetPos.at<double>(0, 2) = cv_translationA.at<double>(0, 2) - (offsetT.at<double>(0, 2) / 10);

        cv::Mat offsetR2;
        cv::Rodrigues(offsetR, offsetR2);
        cv::Mat offsetRot = cv::Mat(1, 3, CV_64F);
        offsetRot.at<double>(0, 0) = cv_rotationA.at<double>(0, 0) - offsetR2.at<double>(0, 0);
        offsetRot.at<double>(0, 1) = cv_rotationA.at<double>(0, 1) - offsetR2.at<double>(0, 1);
        offsetRot.at<double>(0, 2) = cv_rotationA.at<double>(0, 2) - offsetR2.at<double>(0, 2);

        // Reproject the object points and look at the error.
        cv::Mat imageSD = cv::imread("FrameB.jpg");
        cv::Mat imageGray;
        cv::cvtColor(imageSD, imageGray, cv::COLOR_BGR2GRAY);
        std::vector<cv::Point2f> corners_;

        // Find the board in image B.
        bool found_chessboardB = cv::findChessboardCorners(imageGray, cv::Size(grid_size_x, grid_size_y), corners_, 0);
        if (found_chessboardB)
        {
            cv::cornerSubPix(imageGray, corners_, cv::Size(3, 3), cv::Size(-1, -1), cv::TermCriteria(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS, 50, 0.01));

            // Test the reprojection.
            std::vector<cv::Point2f> projectedPointsSD;
            cv::projectPoints(object_points_, offsetRot, offsetPos, camMatrixB, distCoeffsB, projectedPointsSD);
            for (size_t p = 0; p < projectedPointsSD.size(); p++)
            {
                cv::drawMarker(imageSD, projectedPointsSD[p], cv::Scalar(255, 255, 255), cv::MARKER_TILTED_CROSS, 8, 1, 8);
            }
            for (size_t p = 0; p < corners_.size(); p++)
            {
                cv::drawMarker(imageSD, corners_[p], cv::Scalar(255, 255, 255), cv::MARKER_SQUARE, 8, 1, 8);
            }
            cv::imshow("repro", imageSD);
            cv::waitKey(0);
        }
    }
    return 0;
}
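A note on steps 2 and 3: camera poses compose by matrix multiplication, not by componentwise subtraction of tvec/rvec the way offsetPos/offsetRot are built above (the /10 on offsetT also hints at a unit mismatch worth ruling out). Assuming rvec has already been converted to a 3x3 matrix with cv::Rodrigues, the composition can be sketched in pure Python as (function name is made up):

```python
def compose_pose(R_ab, t_ab, R_a, t_a):
    # Board pose in camera B from its pose in camera A plus the A->B
    # stereo extrinsics: R_b = R_ab * R_a, t_b = R_ab * t_a + t_ab.
    R_b = [[sum(R_ab[i][k] * R_a[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)]
    t_b = [sum(R_ab[i][k] * t_a[k] for k in range(3)) + t_ab[i] for i in range(3)]
    return R_b, t_b
```

R_b can then be converted back to an rvec for cv::projectPoints; only with this composition should the reprojected corners land near the detected ones.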
antithing, Mon, 30 Sep 2019 08:25:59 -0500
http://answers.opencv.org/question/219028/

How to compute the 3D location of a 2D point on the ground?
http://answers.opencv.org/question/177170/how-to-compute-the-3d-location-of-a-2d-point-on-the-ground/

I know the focal length f and the principal point P(x, y) of a camera, and I can assume the ground plane is orthogonal to the image plane.
I have a 2D point in the picture that I know lies on the ground; how do I get its 3D position in camera coordinates?
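Under the stated assumption the optical axis is parallel to the ground, so a ground pixel back-projects to a ray whose intersection with the plane below the camera gives the 3D point. A metric answer additionally needs the camera height above the ground (call it h), which the question does not mention; without some such scale the point is only known up to scale. A hedged sketch (variable names are mine):

```python
def ground_point(u, v, fx, fy, cx, cy, h):
    # Back-project pixel (u, v) into a ray in camera coordinates and
    # intersect it with the ground plane y = h (camera h units above a
    # plane whose normal is the camera's y axis, y pointing down).
    dx = (u - cx) / fx
    dy = (v - cy) / fy
    dz = 1.0
    if dy <= 0:
        return None          # ray points at or above the horizon
    s = h / dy               # scale so that the ray reaches y = h
    return (s * dx, s * dy, s * dz)
```

The returned (X, Y, Z) is in the same units as h; pixels lower in the image (larger v) come out nearer to the camera, as expected.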
I am not sure how to approach this. Any advice is appreciated.

danoc93, Sun, 29 Oct 2017 11:41:51 -0500
http://answers.opencv.org/question/177170/

OpenCV to OpenGL or WebGL
http://answers.opencv.org/question/176796/opencv-to-opengl-or-webgl/

Hello,
Sorry for my awful English!
I use OpenCV to detect image in image. Everything works perfectly.
![image description](/upfiles/15087636962778099.jpg)
Now I want to use the values returned by OpenCV's solvePnP and Rodrigues in a WebGL project. I have read many articles, books, posts, etc., but it's not working and I don't understand why :((
I create a matrix 'projectionMatrix' with this function:
    function openCVCameraMatrixToProjectionMatrix(fx, fy, cx, cy, zfar, znear, width, height) {
        var m = [
            [0, 0, 0, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 0],
        ];
        m[0][0] = 2.0 * fx / width;
        m[0][1] = 0.0;
        m[0][2] = 0.0;
        m[0][3] = 0.0;
        m[1][0] = 0.0;
        m[1][1] = -2.0 * fy / height;
        m[1][2] = 0.0;
        m[1][3] = 0.0;
        m[2][0] = 1.0 - 2.0 * cx / width;
        m[2][1] = 2.0 * cy / height - 1.0;
        m[2][2] = (zfar + znear) / (znear - zfar);
        m[2][3] = -1.0;
        m[3][0] = 0.0;
        m[3][1] = 0.0;
        m[3][2] = 2.0 * zfar * znear / (znear - zfar);
        m[3][3] = 0.0;
        return m;
    }
After that, I create a **cameraMatrix** from the values returned by **solvePnP** (rVec and tVec):
var cameraMatrix = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1];
cameraMatrix = m4.xRotate(cameraMatrix, rVecs[0]);
cameraMatrix = m4.yRotate(cameraMatrix, rVecs[1]);
cameraMatrix = m4.zRotate(cameraMatrix, rVecs[2]);
cameraMatrix = m4.translate(cameraMatrix, tVecs[0], tVecs[1], tVecs[2]);
I'm not sure the order of the matrix operations is right, and I'm sure I made errors in this part (but not only here)!
----------
After that, I invert the camera matrix into a new variable:
var viewMatrix = m4.inverse(cameraMatrix);
I compute a **viewProjectionMatrix**
var viewProjectionMatrix = m4.multiply(projectionMatrix, viewMatrix);
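A common stumbling block here (hedged, since I cannot see the rest of the code): OpenCV's camera frame has y down and z forward, while WebGL's has y up and z backward, so the [R|t] from solvePnP must have its second and third rows negated before it can serve as a view matrix. And since [R|t] is already the world-to-camera transform, no m4.inverse is needed afterwards. A pure-Python sketch of the conversion, assuming rvec has already been turned into a 3x3 matrix R (e.g. with cv.Rodrigues):

```python
def view_matrix_gl(R, t):
    # OpenCV extrinsics (y down, z forward) -> GL-style view matrix
    # (y up, z backward): negate rows 1 and 2 of [R | t].
    return [[ R[0][0],  R[0][1],  R[0][2],  t[0]],
            [-R[1][0], -R[1][1], -R[1][2], -t[1]],
            [-R[2][0], -R[2][1], -R[2][2], -t[2]],
            [0.0, 0.0, 0.0, 1.0]]
```

Note the result is written row-major here; WebGL's uniformMatrix4fv expects column-major storage, so transpose (or set the transpose handling accordingly) when uploading.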
----------
After all these operations, I use **uniformMatrix4fv** with a plane (-0.5, -0.5, 1.0, 1.0):
gl.uniformMatrix4fv(matrixLocation, false, modelMatrix);
----------
I also tried using the Rodrigues values directly on the **modelMatrix**, but that didn't work either...
----------
None of these attempts work; the overlay never matches.
**Can someone help me? Please**
I posted on Stack Overflow (code, explanations, etc.): https://stackoverflow.com/questions/46833757/opencv-to-webgl
Loïc
kopacabana73, Mon, 23 Oct 2017 08:24:24 -0500
http://answers.opencv.org/question/176796/

Projection matrices in OpenCV vs Multiple View Geometry
http://answers.opencv.org/question/170947/projection-matrices-in-opencv-vs-multiple-view-geometry/

I am trying to follow "Multiple View Geometry in Computer Vision", formula 13.2, for computing the homography between the views of a calibrated stereo rig. It should be simple math:
H = K' (R - t. transpose(n) / d) inv(K)
Where H is the 3x3 homography, K and K' are the 3x3 camera intrinsic matrices, R is the 3x3 rotation between the cameras, t is the column-vector translation between the cameras, and n and d are the normal vector and constant of the equation of a plane that both cameras are viewing. The idea, reading right to left, is that a homogeneous pixel coordinate is unprojected to a ray, the ray is intersected with the plane, and the intersection point is projected into the other image.
When I plug numbers in and try to match up two captured images I can't get the math as shown to work. My main problem is that I don't understand how simply multiplying a 3D point by a camera matrix K can project the point. In the opencv documentation for the calibration module:
    x' = x / z
    y' = y / z
    u = fx * x' + cx
    v = fy * y' + cy
By the above math, x and y are divided by z before the focal length is multiplied. But I don't see how the math from the text accomplishes the same thing by merely multiplying a point by K. Where is the divide by z in the formula for H?
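The divide by z is hidden inside the homogeneous notation: multiplying by K produces a homogeneous 3-vector [u', v', w'], and the pixel coordinates are (u'/w', v'/w'), which is exactly the x' = x/z, u = fx*x' + cx recipe above. The same applies to H: applying the 3x3 homography to a homogeneous pixel gives another homogeneous vector that must be divided by its third component. A small numeric check (intrinsic values made up):

```python
def project_homogeneous(K, X):
    # Multiply by K, then perform the implicit homogeneous divide.
    h = [sum(K[i][j] * X[j] for j in range(3)) for i in range(3)]
    return (h[0] / h[2], h[1] / h[2])

K = [[500.0, 0.0, 320.0],
     [0.0, 500.0, 240.0],
     [0.0, 0.0, 1.0]]
X = (0.2, -0.1, 2.0)            # a 3-D point in camera coordinates
u, v = project_homogeneous(K, X)
# matches u = 500*(0.2/2.0) + 320 and v = 500*(-0.1/2.0) + 240
```

So when plugging numbers into H, the output of H * [u, v, 1] must also be normalized by its third component before comparing pixels.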
Can someone help with this problem that is probably just notation?
Scott
Milkboy, Fri, 04 Aug 2017 11:17:47 -0500
http://answers.opencv.org/question/170947/

Create a stereo projection matrix using rvec and tvec?
http://answers.opencv.org/question/162932/create-a-stereo-projection-matrix-using-rvec-and-tvec/

I am setting up projection matrices in a stereo camera rig, using the intrinsic matrix, like so:
// Camera 1 Projection Matrix K[I|0]
cv::Mat P1(3, 4, cv::DataType<float>::type);
K.copyTo(P1.rowRange(0, 3).colRange(0, 3)); //K is the camera matrix
//stereo
float tx = 0.12; //Stereo baseline
float ty = 0;
float tz = 0;
//rotation ( images are rectified, so this is zero)
float rots[9] = { 1,0,0,0,1,0,0,0,1 };
cv::Mat R = cv::Mat(3, 3, CV_32F, rots);
//translation. (stereo camera, rectified images, 12 cm baseline)
float trans[3] = { tx,ty,tz };
cv::Mat t = cv::Mat(3, 1, CV_32F, trans);
// Camera 2 Projection Matrix K[R|t]
cv::Mat P2(3, 4, CV_32F);
R.copyTo(P2.rowRange(0, 3).colRange(0, 3));
t.copyTo(P2.rowRange(0, 3).col(3));
P2 = K*P2;
This gives me:
matrix P1
[333.02081, 0, 318.51651, 0;
0, 333.02081, 171.93558, 0;
0, 0, 1, 0]
matrix P2
[333.02081, 0, 318.51651, 39.962498;
0, 333.02081, 171.93558, 0;
0, 0, 1, 0]
which seems to work well.
Now, I need to update these matrices using the current camera pose (`rvec` and `tvec`).
I am doing this with:
//camera pose
cv::Mat R(3, 3, cv::DataType<float>::type);
cv::Rodrigues(rvec, R); // R is 3x3
R = R.t(); // rotation of inverse
cv::Mat tvecInv = -R * tvec; // translation of inverse
//translation. (stereo camera, rectified images, 12 cm baseline)
float trans[3] = { tx,ty,tz };
cv::Mat t2 = cv::Mat(3, 1, CV_32F, trans);
//add the baseline to cam2
t2.at<float>(0) = tvecInv.at<float>(0) + tx;
t2.at<float>(1) = tvecInv.at<float>(1);
t2.at<float>(2) = tvecInv.at<float>(2);
// Camera 1 Projection Matrix K[I|0]
cv::Mat P1(3, 4, CV_32F, cv::Scalar(0));
R.copyTo(P1.rowRange(0, 3).colRange(0, 3));
tvecInv.copyTo(P1.rowRange(0, 3).col(3));
P1 = K*P1;
// Camera 2 Projection Matrix K[R|t]
cv::Mat P2(3, 4, CV_32F);
R.copyTo(P2.rowRange(0, 3).colRange(0, 3));
t2.copyTo(P2.rowRange(0, 3).col(3));
P2 = K*P2;
Which gives me:
update P1
[-362.74387, -131.39442, 252.00798, -272.50296;
-9.197648, -374.57056, 8.7753143, -74.796822;
-0.098709576, -0.4356361, 0.89469415, -0.89352131]
update P2
[-362.74387, -131.39442, 252.00798, 37.901237;
-9.197648, -374.57056, 8.7753143, 421.92899;
-0.098709576, -0.4356361, 0.89469415, -0.0064714332]
This is very wrong. What is the correct way to update the stereo projection matrices using `rvec` and `tvec` values?
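One likely culprit: a projection matrix wants the world-to-camera transform, which is exactly what solvePnP returns. Inverting it (R.t() and -R*tvec, as above) yields the camera's pose in the world, which is the wrong thing to feed into P = K[R|t]. A minimal pure-Python sketch (helper names are mine) of building P from the un-inverted pose and checking it by projecting the world origin:

```python
def build_P(K, R, t):
    # P = K [R | t], with R, t the world->camera transform
    # exactly as returned by cv::solvePnP (no inversion).
    return [[sum(K[i][k] * R[k][j] for k in range(3)) for j in range(3)]
            + [sum(K[i][k] * t[k] for k in range(3))] for i in range(3)]

def project(P, X):
    # Apply P to a world point and do the homogeneous divide.
    h = [P[i][0] * X[0] + P[i][1] * X[1] + P[i][2] * X[2] + P[i][3] for i in range(3)]
    return (h[0] / h[2], h[1] / h[2])

K = [[333.02081, 0.0, 318.51651],    # intrinsics from the post above
     [0.0, 333.02081, 171.93558],
     [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]   # example pose
t = [0.0, 0.0, 2.0]
u, v = project(build_P(K, R, t), (0.0, 0.0, 0.0))
# a world origin 2 m straight ahead lands at the principal point
```

For P2, compose the stereo extrinsics with the solvePnP pose first (R2 = R_s * R, t2 = R_s * t + t_s) rather than adding the baseline to the inverted translation.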
thank you.
antithing, Fri, 30 Jun 2017 05:18:01 -0500
http://answers.opencv.org/question/162932/

How to transform 2D image coordinates to 3D world coordinates with Z = 0?
http://answers.opencv.org/question/150451/how-to-tranform-2d-image-coordinates-to-3d-world-coordinated-with-z-0/

Hi everyone,
I am currently working on a project that involves vehicle detection and tracking, and estimating and optimizing a cuboid around the vehicle. For that, I take the center of the detected vehicle, find the 3D world coordinate of that point, estimate the world coordinates of the cuboid's edges, and project them back onto the image to display them.
I am new to computer vision and OpenCV, but to my knowledge I just need four points on the image with known world coordinates, and solvePnP in OpenCV then gives me the rotation and translation vectors (I already have the camera matrix and distortion coefficients). Next, I use Rodrigues to turn the rotation vector into a rotation matrix, concatenate it with the translation vector to get the extrinsic matrix, and multiply the camera matrix by the extrinsic matrix to get the projection matrix.

Since my Z coordinate is zero, I can drop the third column of the projection matrix, which leaves the homography that maps 3D world points (with Z = 0) to 2D image points. Inverting this homography gives the mapping from 2D image points back to world points. I then multiply an image point [x, y, 1]^T by the inverse homography to get [wX, wY, w]^T and divide the whole vector by the scalar w to get [X, Y, 1], which gives me the X and Y world coordinates.
My code is like this:
image_points.push_back(Point2d(275, 204));
image_points.push_back(Point2d(331, 204));
image_points.push_back(Point2d(331, 308));
image_points.push_back(Point2d(275, 308));
cout << "Image Points: " << image_points << endl << endl;
world_points.push_back(Point3d(0.0, 0.0, 0.0));
world_points.push_back(Point3d(1.775, 0.0, 0.0));
world_points.push_back(Point3d(1.775, 4.620, 0.0));
world_points.push_back(Point3d(0.0, 4.620, 0.0));
cout << "World Points: " << world_points << endl << endl;
solvePnP(world_points, image_points, cameraMatrix, distCoeffs, rotationVector, translationVector);
cout << "Rotation Vector: " << endl << rotationVector << endl << endl;
cout << "Translation Vector: " << endl << translationVector << endl << endl;
Rodrigues(rotationVector, rotationMatrix);
cout << "Rotation Matrix: " << endl << rotationMatrix << endl << endl;
hconcat(rotationMatrix, translationVector, extrinsicMatrix);
cout << "Extrinsic Matrix: " << endl << extrinsicMatrix << endl << endl;
projectionMatrix = cameraMatrix * extrinsicMatrix;
cout << "Projection Matrix: " << endl << projectionMatrix << endl << endl;
double p11 = projectionMatrix.at<double>(0, 0),
p12 = projectionMatrix.at<double>(0, 1),
p14 = projectionMatrix.at<double>(0, 3),
p21 = projectionMatrix.at<double>(1, 0),
p22 = projectionMatrix.at<double>(1, 1),
p24 = projectionMatrix.at<double>(1, 3),
p31 = projectionMatrix.at<double>(2, 0),
p32 = projectionMatrix.at<double>(2, 1),
p34 = projectionMatrix.at<double>(2, 3);
homographyMatrix = (Mat_<double>(3, 3) << p11, p12, p14, p21, p22, p24, p31, p32, p34);
cout << "Homography Matrix: " << endl << homographyMatrix << endl << endl;
inverseHomographyMatrix = homographyMatrix.inv();
cout << "Inverse Homography Matrix: " << endl << inverseHomographyMatrix << endl << endl;
Mat point2D = (Mat_<double>(3, 1) << image_points[0].x, image_points[0].y, 1);
cout << "First Image Point" << point2D << endl << endl;
Mat point3Dw = inverseHomographyMatrix*point2D;
cout << "Point 3D-W : " << point3Dw << endl << endl;
double w = point3Dw.at<double>(2, 0);
cout << "W: " << w << endl << endl;
Mat matPoint3D;
divide(w, point3Dw, matPoint3D);
cout << "Point 3D: " << matPoint3D << endl << endl;
I have the image coordinates of the four known world points and hard-coded them for simplicity. The vector image_points contains the image coordinates of the four points, and the vector world_points contains their world coordinates. I take the first world point as the origin (0, 0, 0) of the world axes and use the known distances to calculate the coordinates of the other three points. After calculating the inverse homography matrix, I multiplied it with [image_points[0].x, image_points[0].y, 1]^T, which corresponds to the world coordinate (0, 0, 0). Then I divide the result by the third component w to get [X, Y, 1]. But after printing out the values of X and Y, it turns out they are not 0, 0 respectively. What am I doing wrong?
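One detail worth double-checking in the code above: the overload cv::divide(scale, src, dst) computes scale/src element-wise, so divide(w, point3Dw, matPoint3D) yields [w/wX, w/wY, 1] rather than the intended [X, Y, 1]. The intended normalization is simply dividing the vector by its third component:

```python
def normalize_homogeneous(p):
    # [wX, wY, w] -> (X, Y): divide the vector by its third component.
    # Note this is point3Dw / w, not w / point3Dw.
    w = p[2]
    return (p[0] / w, p[1] / w)

# e.g. normalize_homogeneous((4.0, 6.0, 2.0)) gives (2.0, 3.0)
```

In the C++ above that would be `point3Dw / w` (or `divide(point3Dw, w, matPoint3D)`), which should bring the first point much closer to (0, 0).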
The result showing is
[21.0400429;
135.683;
1]
My camera matrix is
[ 5.1700368817095330e+02, 0., 320., 0., 5.1700368817095330e+02,
212., 0., 0., 1. ]
Distortion Coefficients matrix is
[ 1.1286636797980941e-01, -1.4877900799224317e+00, 0., 0.,
2.3005718967610673e+00 ]
IndySupertramp, Sat, 20 May 2017 23:03:08 -0500
http://answers.opencv.org/question/150451/

Dissecting/Extracting Camera Projection Matrix to position and rotation
http://answers.opencv.org/question/134437/dissecting-extracting-camera-projection-matrix-to-position-and-rotation/

I have 6 points in space with known coordinates in mm and their corresponding 2D pixel coordinates in the image (the image is 640x320 pixels, and the point coordinates are measured from the upper left of the image). I also have the focal length of the camera, 43.456 mm. I am trying to find the camera position and orientation (x, y, z in mm and yaw/pitch/roll of the camera in degrees). My MATLAB code below gives me the camera location as (-572.8052, -676.7060, 548.7718), which seems correct, but I am having a hard time finding the orientation values (yaw/pitch/roll of the camera in degrees). I know the rotation values should be 60.3, 5.6, -45.1.
Does OpenCV have any tools for this?
I would really really appreciate your help on this. Thanks.
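On the OpenCV side, cv::decomposeProjectionMatrix splits a 3x4 projection matrix into K, R, and the camera position, and can optionally return Euler angles. If you already have the 3x3 rotation R, the yaw/pitch/roll extraction can be sketched as follows (ZYX convention assumed; conventions differ, so match this to whatever produced your reference values):

```python
import math

def euler_zyx_deg(R):
    # Yaw (about z), pitch (about y), roll (about x) from a rotation
    # matrix, ZYX convention; ignores the gimbal-lock case |R[2][0]| = 1.
    pitch = math.asin(-R[2][0])
    yaw = math.atan2(R[1][0], R[0][0])
    roll = math.atan2(R[2][1], R[2][2])
    return tuple(math.degrees(a) for a in (yaw, pitch, roll))
```

Note that the Q recovered below by DLT is K*R, not R alone, so an RQ decomposition (or decomposeProjectionMatrix) is needed before converting to angles.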
Here is my MATLAB code:
Points_2D= [135 183 ; 188 129 ; 298 256 ; 301 43 ; 497 245; 464 110];
Points_3D= [-22.987 417.601 -126.543 ; -132.474 37.67 140.702 ; ...
388.445 518.635 -574.784 ; 250.015 259.803 67.137 ; ...
405.915 -25.566 -311.834 ; 568.859 164.809 -162.604 ];
M = [0;0;0;0;0;0;0;0;0;0;0];
A = [];
    for i = 1:size(Points_2D,1)
        u_i = Points_2D(i,1);
        v_i = Points_2D(i,2);
        x_i = Points_3D(i,1);
        y_i = Points_3D(i,2);
        z_i = Points_3D(i,3);
        A_vec_1 = [x_i y_i z_i 1 0 0 0 0 -u_i*x_i -u_i*y_i -u_i*z_i -u_i];
        A_vec_2 = [ 0 0 0 0 x_i y_i z_i 1 -v_i*x_i -v_i*y_i -v_i*z_i -v_i];
        A(end+1,:) = A_vec_1;
        A(end+1,:) = A_vec_2;
    end
[U,S,V] = svd(A);
M = V(:,end);
M = transpose(reshape(M,[],3));
Q = M(:,1:3);
m_4 = M(:,4);
Center = (-Q^-1)*m_4

aliha, Thu, 16 Mar 2017 23:01:21 -0500
http://answers.opencv.org/question/134437/

Converting 2D image coordinate to 3D World Coordinate
http://answers.opencv.org/question/117466/converting-2d-image-coordinate-to-3d-world-coordinate/

Hello,
I have been assigned the task of converting 2D pixel coordinates to the corresponding 3D world coordinates. I have a bit of image-processing experience from my school projects and zero experience with OpenCV.
I started going through the pinhole camera model and understood that I need to do an inverse perspective projection to find the 3D world point corresponding to a 2D pixel coordinate. ![image description](/upfiles/14811266207544953.png)
I am a bit confused and think I am not following the proper learning path. I have a few questions:
1) I guess I need to do camera calibration first to estimate the extrinsic and intrinsic parameters, so as to know how my camera projects a 3D scene onto 2D pixel values.
(Reference: https://www.mathworks.com/help/vision/ug/single-camera-calibrator-app.html)
Is this the proper approach for my problem: first understanding and finding the extrinsic and intrinsic matrices, then moving on to inverse perspective projection?
2) In some references I see world coordinates in mm, and in others I see (lat, long, alt) as world coordinates.
Which one should I pick as the world coordinates?
Since we consider the focal point as the origin, are the world coordinates (X mm, Y mm, Z mm) w.r.t. the focal point?
There are tons of resources available online, and I think I am getting misled, wandering here and there. If you know of any particular resource that is quite straightforward to learn from, please let me know.
~ Ashish

ashish92, Wed, 07 Dec 2016 10:17:59 -0600
http://answers.opencv.org/question/117466/

ProjectPoints not working
http://answers.opencv.org/question/103725/projectpoints-not-working/

I have an image with a disparity map that I reproject to 3D. After running some algorithms to extract the bounding box in 3D, I reproject each corner back to 2D to find the minimum bounding box, but the results I get are totally wrong. I have verified that the corners in 3D are in the right positions, but when reprojected to 2D they are wrong. I have been trying to figure out the problem for days with no progress.
1) Reconstruct 3d
2) Run algorithm to get bounding box in 3d
3) Reproject the corners of each bounding box in 2d (projected points error)
4) Get minimum enclosing bounding box in 2d
![image description](/upfiles/14756889203006274.png)
![image description](/upfiles/14756889351991691.png)
Does anyone have any idea what is going on? I am only using the function projectPoints().
Original Image Size : 1392 x 512
Calibrated Image Size : 1242 x 375 (This is the image I am working with)
EDIT: These are just the relevant portions of the code I think.
// Get Q Matrix
stereoRectify(K1, D1, K2, D2, cv::Size(1392, 512), R, T, R1, R2, P1, P2, Q, cv::CALIB_ZERO_DISPARITY, 0, cv::Size(1242, 375), 0, 0);
// Project image to 3D
disparity = disparity / 500.f;
cv::reprojectImageTo3D(disparity, out, Q, true);
// Do some processing
// ....
// Reproject back to image
// opencv_cloud is a vector of point3f containing the 8 corners of a bounding box
// opencv_crd is a vector of point2f containing the projected points
cv::projectPoints(opencv_cloud, rvec, tvec, K1, cv::Mat(), opencv_crd);
rect.push_back(cv::boundingRect(cv::Mat(opencv_crd)));
Thus I used the left camera matrix to reproject the points; rvec and tvec are both [0, 0, 0]. I have tried replacing cv::Mat() with the distortion matrix of the left camera, but it seems to have no effect. Could it be the image resolution? Is it reprojecting onto the image at the original resolution?
EDIT: After scaling the camera matrix as suggested by Tetragramm, the results I get are much better.
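For reference, the scaling idea: intrinsics calibrated at one resolution must be rescaled before projecting into a resized image, with fx and cx scaled by the width ratio and fy and cy by the height ratio. A minimal sketch (valid for a pure resize; a crop would instead subtract an offset from cx/cy, and 1392x512 to 1242x375 is in fact a crop-plus-resize, so treat this as an approximation):

```python
def scale_intrinsics(K, old_size, new_size):
    # Scale fx/cx by the width ratio and fy/cy by the height ratio.
    # Valid for a pure resize; a crop would shift cx/cy instead.
    sx = new_size[0] / old_size[0]
    sy = new_size[1] / old_size[1]
    return [[K[0][0] * sx, 0.0, K[0][2] * sx],
            [0.0, K[1][1] * sy, K[1][2] * sy],
            [0.0, 0.0, 1.0]]
```

The skew and the bottom row are unaffected; only the first two rows scale with the image.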
![image description](/upfiles/1475726400562597.png)

Nbb, Wed, 05 Oct 2016 12:36:00 -0500
http://answers.opencv.org/question/103725/

Calibration with circle grid - center of gravity is not the circle center - is this implemented?
http://answers.opencv.org/question/98325/calibration-with-circle-grid-center-of-gravity-is-not-the-circle-center-is-this-implemented/

Hello,
My question is about the calibration methods in OpenCV. I know the user can choose between chessboard and circle patterns. For circle patterns, though, there is the problem that the center of a circle is no longer the same as the center of gravity of its image when viewed from an arbitrary angle. Nearer objects appear bigger in the camera image and farther objects appear smaller, so for a circle viewed at an angle, the nearer half appears bigger in the image than the farther half. The real circle center is therefore farther away than one might think: if, for example, the nearer half takes 60% and the farther half 40% of the circle's image area, the real circle center sits at the 60% position, while the center of gravity is always at 50%.
This problem is discussed in [this paper](http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=609468&tag=1) by Heikkila and my question now is: Is this circular center projection error considered and corrected in the calibration methods implemented in OpenCV? Or is it ignored, since the errors are somewhat small?
I'm writing my master's thesis with camera calibration as a central topic, and it would be very helpful to know the answer to this question!

JamesF, Wed, 13 Jul 2016 09:05:52 -0500
http://answers.opencv.org/question/98325/

Confused about projectPoints
http://answers.opencv.org/question/86549/confuse-about-projectpoints/

I'm trying to get projected points using the projectPoints function, but the output is not correct. I am probably using the function wrong or made a mistake somewhere, but I'm not sure where. This is my code:
cv::Mat r(3, 3, cv::DataType<float>::type);
cv::Mat rR(1, 3, cv::DataType<float>::type);
cv::Mat c(3, 3, cv::DataType<float>::type);
std::vector<cv::Point2f> projectedPoints;
std::vector<cv::Vec3f> objectPoints;
r.at<float>(0, 0) = 1;
r.at<float>(1, 0) = 0;
r.at<float>(2, 0) = 0;
r.at<float>(0, 1) = 0;
r.at<float>(1, 1) = 1;
r.at<float>(2, 1) = 0;
r.at<float>(0, 2) = 0;
r.at<float>(1, 2) = 0;
r.at<float>(2, 2) = 1;
c.at<float>(0, 0) = -500;
c.at<float>(1, 0) = 0;
c.at<float>(2, 0) = 0;
c.at<float>(0, 1) = 0;
c.at<float>(1, 1) = -500;
c.at<float>(2, 1) = 0;
c.at<float>(0, 2) = 320;
c.at<float>(1, 2) = 240;
c.at<float>(2, 2) = 1;
objectPoints.push_back(cv::Point3f(150, 200, 350));
cv::Rodrigues(r, rR);
cv::Mat T(3, 1, cv::DataType<float>::type);
T.at<float>(0, 0) = -50;
T.at<float>(1, 0) = -85;
T.at<float>(2, 0) = -110;
// Create zero distortion
cv::Mat distCoeffs(4, 1, cv::DataType<float>::type);
distCoeffs.at<float>(0) = 0;
distCoeffs.at<float>(1) = 0;
distCoeffs.at<float>(2) = 0;
distCoeffs.at<float>(3) = 0;
cv::projectPoints(objectPoints, rR, T, c, distCoeffs, projectedPoints);
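A quick way to localize the mistake is to redo the projection by hand for the single input point: with identity rotation, the camera-frame point is p + T and the pixel is (fx*x/z + cx, fy*y/z + cy). Note also that fx = fy = -500 in the matrix above; negative focal lengths are unusual and will mirror the image. A sketch using the numbers from the snippet:

```python
FX, FY, CX, CY = -500.0, -500.0, 320.0, 240.0   # values from the snippet above
T = (-50.0, -85.0, -110.0)
P = (150.0, 200.0, 350.0)

def project(p):
    # Identity rotation: camera coordinates are p + T, then pinhole divide.
    x, y, z = (p[i] + T[i] for i in range(3))
    return (FX * x / z + CX, FY * y / z + CY)

u, v = project(P)
```

If projectPoints disagrees with this hand calculation, the mismatch is in how r, T, or c were filled in (e.g. a row/column mix-up in the camera matrix, or the T dimensions).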
I would appreciate it very much if anyone could point out the mistake.

mm71, Thu, 04 Feb 2016 01:57:01 -0600
http://answers.opencv.org/question/86549/

3D image reconstruction from a 2D image (a bottle pointing from 2D to 3D)
http://answers.opencv.org/question/63707/3d-image-reconstruction-from-a-2d-image-a-bottle-pointing-from-2d-to-3d/

Hello everyone,
I am working on a spin-the-bottle game with OpenCV and a NAO robot. I have already made a program that draws a line from the bottle in the direction it points, but only in 2D (the robot just sees the bottle on a white background). Now I would like to use OpenCV to project that line into 3D space in order to find the person the bottle points to.
I am not sure how or where to start. I found some material on camera calibration, pose estimation, and depth maps from stereo images. Could you please point me to information or topics in OpenCV where I could start reading or experimenting?
Thanks

diegomez_86, Tue, 09 Jun 2015 11:56:20 -0500
http://answers.opencv.org/question/63707/