OpenCV Q&A Forum - RSS feed (http://answers.opencv.org/questions/) - OpenCV answers - Copyright OpenCV foundation, 2012-2018.

**world coordinates to camera coordinates to pixel coordinates cv::projectPoints**
http://answers.opencv.org/question/227849/world-coordinates-to-camera-coordinates-to-pixel-coordinates-cvprojectpoints/

Hello,
I am trying to project a given 3D point onto the image plane.
I have a 3D point **(-455, -150, 0)**, where **x is the depth axis**, **z is the upward axis**, and **y is the horizontal one**. I have **roll: rotation around the front-to-back axis (x)**, **pitch: rotation around the side-to-side axis (y)**, and **yaw: rotation around the vertical axis (z)**. I also have the **camera position (x,y,z) = (-50, 0, 100)**. So first I go from world coordinates to camera coordinates using the extrinsic parameters:
double pi = 3.14159265358979323846;
double yp = 0.033716827630996704* pi / 180; //roll
double thet = 67.362312316894531* pi / 180; //pitch
double k = 89.7135009765625* pi / 180; //yaw
double rotxm[9] = { 1,0,0,0,cos(yp),-sin(yp),0,sin(yp),cos(yp) };
double rotym[9] = { cos(thet),0,sin(thet),0,1,0,-sin(thet),0,cos(thet) };
double rotzm[9] = { cos(k),-sin(k),0,sin(k),cos(k),0,0,0,1};
cv::Mat rotx(3, 3, CV_64F, rotxm);
cv::Mat roty(3, 3, CV_64F, rotym);
cv::Mat rotz(3, 3, CV_64F, rotzm);
cv::Mat rotationm = rotz * roty * rotx; //rotation matrix
double point3[3] = { -455, -150, 0 };
cv::Mat mpoint3(1, 3, CV_64F, point3); //the 3D point location
mpoint3 = mpoint3 * rotationm; //rotation
double campos[3] = { -50, 0, 100 };
cv::Mat position(1, 3, CV_64F, campos); //the camera position
mpoint3=mpoint3 - position; //translation
Now I want to move from camera coordinates to image coordinates.
The first solution, as I read from some sources, was:
Mat myimagepoint3 = mpoint3 * mycameraMatrix;
This didn't work, and I believe that is to be expected.
The second solution was:
double fx = cameraMatrix.at<double>(0, 0);
double fy = cameraMatrix.at<double>(1, 1);
double cx1 = cameraMatrix.at<double>(0, 2);
double cy1= cameraMatrix.at<double>(1, 2);
double xt = mpoint3.at<double>(0) / mpoint3.at<double>(2);
double yt = mpoint3.at<double>(1) / mpoint3.at<double>(2);
double u = xt * fx + cx1;
double v = yt * fy + cy1;
but that also didn't work.
So now I tried the OpenCV method cv::fisheye::projectPoints (from world to image coordinates):
Mat recv2;
cv::Rodrigues(rotationm, recv2);
//inputpoints a vector contains one point which is the 3d world coordinate of the point
//outputpoints a vector to store the output point
cv::fisheye::projectPoints(inputpoints, outputpoints, recv2, position, mycameraMatrix, mydiscoff);
but it didn't work either.
As I read from the documentation, this should find the 2D position of the 3D object, or am I wrong?
By "didn't work" I mean: I know where (in the image) the point should appear, but when I draw it, it is always somewhere else (not even close); sometimes I even get negative values.
Note: there are no syntax errors or exceptions, but I may have made typos while writing the code here. Can anyone suggest what I am doing wrong?
Suom, Sat, 21 Mar 2020 13:26:55 -0500
http://answers.opencv.org/question/227849/

**[SOLVED] How to project points from undistorted image to distorted image?**
http://answers.opencv.org/question/225123/solvedhow-to-project-points-from-undistort-image-to-distort-image/

I undistorted the fisheye lens image with help of `cv::fisheye::calibrate` and found the coefficients below.
K =
array([[541.11407173, 0. , 659.87320043],
[ 0. , 541.28079025, 318.68920531],
[ 0. , 0. , 1. ]])
D =
array([[-3.91414244e-02],
[-4.60198728e-03],
[-3.02912651e-04],
[ 2.83586453e-05]])
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(K, D, (1280, 720), np.eye(3), balance=1, new_size=(3400, 1912), fov_scale=1)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), new_K, (3400, 1912), cv2.CV_16SC2)
undistorted_img = cv2.remap(distorted_img, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
![image description](/upfiles/15795313618550395.jpg)
![image description](/upfiles/15795313202244442.jpg)
**How to find x and y?**

JeyP4, Mon, 20 Jan 2020 08:47:26 -0600
http://answers.opencv.org/question/225123/

**Re-project 3D points to 2D, using two stereo-calibrated cameras**
http://answers.opencv.org/question/220232/re-project-3d-points-to-2d-using-two-stereo-calibrated-cameras/

Hello,
I am using two stereo-calibrated cameras to separately track body parts in 2D, and I then triangulate the points from each camera view into 3D. Now I want to re-project the tracked points back onto each camera view, and I'm not sure which parameters from the stereo-calibration/rectification I should feed into `cv2.projectPoints()`.
1. After finding chessboard corners, I calibrate each camera using `cv2.calibrateCamera()` to get intrinsic params - camera matrix and distortion vector. This step works fine with ~0.01 pix re-projection error.
2. I then use `cv2.stereoCalibrate(..., flags = cv2.CALIB_FIX_INTRINSIC)` to also get `R,T,E,F` matrices.
3. I feed the previously obtained params into `cv2.stereoRectify()` to obtain `R1, R2, P1, P2, Q`.
4. After performing tracking and obtaining `x1,x2`, a tracked point in 2D from camera 1 and camera 2 respectively, I obtain the 3D point ` X = cv2.triangulatePoints( P1[:3], P2[:3], x1, x2 )`.
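The triangulation in step 4 can be inverted for the *rectified* views without `cv2.projectPoints` at all, since `P1` and `P2` are already full 3x4 projection matrices; a sketch (for the original, unrectified views you would instead feed each camera's own matrix, distortion, and the `stereoCalibrate` R, T into `cv2.projectPoints`):

```python
import numpy as np

def reproject(P, X):
    # P: 3x4 rectified projection matrix from cv2.stereoRectify
    # X: 4x1 homogeneous point, e.g. the output of cv2.triangulatePoints
    x = P @ X
    return (x[:2] / x[2]).ravel()  # dehomogenize to pixel coordinates

# Made-up example: identity pose, f = 500, principal point (320, 240)
P1 = np.float64([[500.0, 0.0, 320.0, 0.0],
                 [0.0, 500.0, 240.0, 0.0],
                 [0.0, 0.0, 1.0, 0.0]])
X = np.float64([[0.0], [0.0], [2.0], [1.0]])  # point two units ahead
print(reproject(P1, X))  # -> [320. 240.]
```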
In other words, given the stereo-rectification parameters, how do I invert the triangulation operation for each camera?

Dan_Bider, Tue, 22 Oct 2019 16:18:00 -0500
http://answers.opencv.org/question/220232/

**Porting to JavaScript: "Cannot register public name 'projectPoints' twice"**
http://answers.opencv.org/question/216930/porting-to-javascript-cannot-register-public-name-projectpoints-twice/

I did the following:
1. git clone https://github.com/opencv/opencv.git
2. git clone https://github.com/opencv/opencv_contrib.git
3. I added the following in def get_build_flags(self) of opencv/platforms/js/build_js.py:
flags += "-s USE_PTHREADS=0 "
4. I enabled the build flag in def get_cmake_cmd(self): of opencv/platforms/js/build_js.py:`-DBUILD_opencv_calib3d` set to `ON`
5. I added the following in `def get_cmake_cmd(self):` of opencv/platforms/js/build_js.py: `-DOPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules`
6. I appended `js` at the end of the WRAP list of `ocv_define_module` in opencv/modules/calib3d/CMakeLists.txt. I also added the `js` parameter to `ocv_define_module` in opencv/modules/features2d/CMakeLists.txt and opencv_contrib/modules/aruco/CMakeLists.txt.
7. I added `solvePnP` and `projectPoints` to the calib3d module in opencv/modules/js/src/embindgen.py:
calib3d = {'': ['findHomography','calibrateCameraExtended', 'drawFrameAxes',
'getDefaultNewCameraMatrix', 'initUndistortRectifyMap', 'solvePnP','projectPoints']}
8. I added the calib3d module to the makeWhiteList in the opencv/modules/js/src/embindgen.py
white_list = makeWhiteList([core, imgproc, objdetect, video, dnn, features2d, photo, aruco, calib3d])
9. I added "using namespace aruco;" in the opencv/modules/js/src/core_bindings.cpp
10. I built OpenCV.js using the following command:
sudo python ./platforms/js/build_js.py build_js --emscripten_dir=${EMSCRIPTEN} --clean_build_dir --build_test
Before adding these wrappers, it compiled perfectly without errors. Now in my tests.html I have the following message:
Downloading...
tests.html:61 Running...
tests.html:61 Exception thrown, see JavaScript console
opencv.js:24 Uncaught
BindingError
message: "Cannot register public name 'projectPoints' twice"
So it seems the overloaded functions are preventing me from porting them to JavaScript.
Any suggestions on how I can fix it?
Here are the pull requests:
https://github.com/opencv/opencv/pull/15311
https://github.com/opencv/opencv_contrib/pull/2228
Thanks in advance for your help.

kavikode, Wed, 14 Aug 2019 05:52:23 -0500
http://answers.opencv.org/question/216930/

**Interpretation projectPoints output**
http://answers.opencv.org/question/196520/interpretation-projectpoints-output/

Hello! I am working on converting 3D real-world points (X,Y,Z) in meters to image coordinates (u, v) in pixels. I have access to the camera matrix K, the camera orientation (in degrees), and the translation vector. I use the code below to calculate the pixel coordinates. For (X,Y,Z) = (0.04379281, 0.15902013, 0.73328906) I obtain (u, v) = (-184.52735432, -249.19158505). Is the origin of pixel coordinates in the image still the top-left corner? If yes, the obtained pixel location is outside the image. Is this right?
# Calculates rotation matrix given Euler angles.
def eulerAnglesToRotationMatrix(theta):
    R_x = np.array([[1, 0, 0],
                    [0, math.cos(theta[0]), -math.sin(theta[0])],
                    [0, math.sin(theta[0]), math.cos(theta[0])]])
    R_y = np.array([[math.cos(theta[1]), 0, math.sin(theta[1])],
                    [0, 1, 0],
                    [-math.sin(theta[1]), 0, math.cos(theta[1])]])
    R_z = np.array([[math.cos(theta[2]), -math.sin(theta[2]), 0],
                    [math.sin(theta[2]), math.cos(theta[2]), 0],
                    [0, 0, 1]])
    R = np.dot(R_z, np.dot(R_y, R_x))
    return R
# x-roll, y-pitch, z-yaw (in degrees)
theta = [-128.41639709472657, -11.528900146484375, 53.37379837036133]
# rotation matrix (note: cv2.projectPoints expects a Rodrigues rotation vector)
rvec = eulerAnglesToRotationMatrix(theta)
# translation vector
tvec = np.array([[0.26409998536109927], [1.294700026512146], [0.017799999564886094]], dtype=float)
# camera matrix
cameraMatrix = np.array([[148.130859375, 0, 142.5562286376953],
                         [0, 148.130859375, 179.04293823242188],
                         [0, 0, 1]], dtype=float)
# Point in (X,Y,Z) meters
objPts = np.array([[0.04379281, 0.15902013, 0.73328906]], dtype=float)
distCoeffs = np.array([0, 0, 0, 0, 0], dtype=float)
imgPts, _ = cv2.projectPoints(objPts, rvec, tvec, cameraMatrix, distCoeffs)
imgPts: (-184.52735432, -249.19158505)

KlaimNod, Fri, 27 Jul 2018 08:04:26 -0500
http://answers.opencv.org/question/196520/

**Camera position in world coordinates is not working, but object pose in camera coordinate system is working properly**
http://answers.opencv.org/question/194724/camera-position-in-world-coordinate-is-not-working-but-object-pose-in-camera-co-ordinate-system-is-working-properly/

I am working on camera (iPhone camera) pose estimation for a head-mounted device (HoloLens) using LEDs as markers, using solvePnP. I have calibrated the camera; below are the camera intrinsic parameters.
/* approx model */
double focal_length = image.cols;
cv::Point2d center = cv::Point2d(image.cols/2, image.rows/2);
iphone_camera_matrix = (cv::Mat_<double>(3,3) << focal_length, 0, center.x, 0, focal_length, center.y, 0, 0, 1);
iphone_dist_coeffs = cv::Mat::zeros(4, 1, cv::DataType<double>::type);
/* calibrated (using OpenCV) model */
iphone_camera_matrix = (cv::Mat_<double>(3,3) << 839.43920487140315, 0, 240, 0, 839.43920487140315, 424, 0, 0, 1);
iphone_dist_coeffs = (cv::Mat_<double>(5,1) << 4.6476561543838640e-02, -2.0580084834071521, 0, 0, 2.0182662261396342e+01);
Using solvePnP I am able to get the proper object pose in the camera coordinate system; below is the code:
cv::solvePnP(world_points, image_points, iphone_camera_matrix, iphone_dist_coeffs, rotation_vector, translation_vector, true, SOLVEPNP_ITERATIVE);
The output is:
rotation_vector :
[-65.41956646885059;
-52.49185328449133;
36.82917796058498]
translation_vector :
[94.1158604375937;
-164.2178023980637;
580.5666657301058]
Using this rotation_vector and translation_vector I visualize the pose by projecting the axis triad whose points are:
points_to_project :
[0, 0, 0;
20, 0, 0;
0, 20, 0;
0, 0, 20]
projectPoints(points_to_project, rotation_vector, translation_vector, iphone_camera_matrix, iphone_dist_coeffs, projected_points);
The output of projectPoints is given as:
projected_points :
[376.88803, 185.15131;
383.05768, 195.77643;
406.46454, 175.12997;
372.67371, 155.56181]
which seems correct as shown below
![object pose in camera co-ordinate system](/upfiles/15302624392884724.png)
I try to find the camera pose in the world/object coordinate system by transforming the rotation_vector and translation_vector given by solvePnP:
cv::Rodrigues(rotation_vector, rotation_matrix);
rot_matrix_wld = rotation_matrix.t();
translation_vec_wld = -rot_matrix_wld * translation_vector;
I used rot_matrix_wld and translation_vec_wld to visualize the pose (the same way I visualized the pose of the object in the camera coordinate system above):
projectPoints(points_to_project, rot_matrix_wld, translation_vec_wld, iphone_camera_matrix, iphone_dist_coeffs, projected_points);
with
points_to_project :
[0, 0, 0;
20, 0, 0;
0, 20, 0;
0, 0, 20]
I am getting wrong results (the two projected_points outputs below are for two different image frames of a video):
projected_points :
[-795.11768, -975.85846;
-877.84937, -932.39697;
-868.5517, -1197.4443;
-593.41058, -851.74432]
projected_points :
[589.42999, 3019.0732;
590.64789, 2665.5835;
479.49728, 2154.8057;
187.78407, 3333.3054]
I have used both the approximate camera model and the calibrated camera model; both give wrong results.
I have gone through the link [here](https://stackoverflow.com/questions/47723638/output-from-solvepnp-doesnt-match-projectpoints) and verified my calibration procedure; I did it correctly.
I am not sure where I am going wrong. Can anyone please help me with this?
Thanks in advance.
slv, Fri, 29 Jun 2018 03:58:36 -0500
http://answers.opencv.org/question/194724/

**Difficulties getting projectPoints to work, returns weird values**
http://answers.opencv.org/question/192216/difficulties-getting-projectpoints-to-work-returns-weird-values/

I'm having trouble getting projectPoints to work. I've calibrated the camera and used solvePnP as in this tutorial:
https://longervision.github.io/2017/03/20/opencv-internal-calibration-circle-grid/
The images obtained from the video I recorded showed that the blob detection worked fine, so I'm pretty confident this part works as intended.
Then I registered some coordinates from an image against their corresponding real-world points. I would expect projectPoints to return the same imagePoints coordinates if I fed it some of the reference points as inputs, but instead I'm getting output values that are wildly outside the image coordinates.
I wonder what I'm doing wrong? Any help is greatly appreciated! After this I'm also trying to figure out how to do the inverse: input imagePoints and get out objectPoints with z=0.
My input points for projectPoints are:
inPoints = np.zeros((3, 3))
inPoints[0] = (0 , 137.16 , 0)
inPoints[1] = (0 , 548.64 , 0)
inPoints[2] = (548.64 , 548.64, 0)
Expected Output:
(326, 156)
(398, 154)
(406, 170)
What I'm actually getting:
(19748.51884776, 14658.66747407)
(24693.12654318, 9023.29722927)
(33225.96561506, 3969.11639187)
Inputs:
rvec = [[-0.06161642] [ 0.74999101] [ 0.78220654]]
tvec = [[-914.24171214] [-834.30392656] [1188.29684866]]
cameraMatrix = [[2.25545289e+03 0.00000000e+00 1.27534861e+03]
[0.00000000e+00 2.32542640e+03 7.35878530e+02]
[0.00000000e+00 0.00000000e+00 1.00000000e+00]]
distCoeffs = [[-3.18000851e-02 1.83258452e+00 -4.43437310e-03 5.27295127e-03 -1.12934335e+01]]
Full script (after calibration):
import numpy as np
import cv2
import glob
import sys
import yaml
with open('./calib/calibration.yaml') as f:
loadeddict = yaml.safe_load(f)
camera_matrix = loadeddict.get('camera_matrix')
dist_coeffs = loadeddict.get('dist_coeff')
tnsPoints = np.zeros((19, 3))
tnsPoints[0] = (0 , 0 , 0)
tnsPoints[1] = (0 , 137.16 , 0)
tnsPoints[2] = (0 , 548.64, 0)
tnsPoints[3] = (0 , 960.12, 0)
tnsPoints[4] = (0 , 1097.28 , 0)
tnsPoints[5] = (548.64, 137.16, 0)
tnsPoints[6] = (548.64, 548.64, 0)
tnsPoints[7] = (548.64, 960.12, 0)
tnsPoints[8] = (1188.72 , 0, 0)
tnsPoints[9] = (1188.72 , 137.16, 0)
tnsPoints[10] = (1188.72 , 548.64, 0)
tnsPoints[11] = (1188.72 , 960.12, 0)
tnsPoints[12] = (1188.72 , 1097.28, 0)
tnsPoints[13] = (1828.80 , 137.16, 0)
tnsPoints[14] = (1828.80 , 548.64, 0)
tnsPoints[15] = (1828.80 , 960.12, 0)
tnsPoints[16] = (2377.44 , 0 , 0)
tnsPoints[17] = (2377.44 , 137.16 , 0)
tnsPoints[18] = (2377.44 , 548.64 , 0)
#tnsPoints[19] = (2377.44 , 960.12 , 0)
#tnsPoints[20] = (2377.44 , 1097.28 , 0)
imPoints = np.zeros((19,2))
imPoints[0] = (302,158)
imPoints[1] = (326, 156)
imPoints[2] = (398, 154)
imPoints[3] = (471, 150)
imPoints[4] = (494, 148)
imPoints[5] = (319, 172)
imPoints[6] = (406, 170)
imPoints[7] = (491, 167)
imPoints[8] = (270, 206)
imPoints[9] = (306, 206)
imPoints[10] = (421, 203)
imPoints[11] = (532, 197)
imPoints[12] = (570, 195)
imPoints[13] = (283, 266)
imPoints[14] = (446, 260)
imPoints[15] = (607, 252)
imPoints[16] = (146, 390)
imPoints[17] = (235, 387)
imPoints[18] = (499, 374)
retval, rvec, tvec = cv2.solvePnP(tnsPoints, imPoints, np.asarray(camera_matrix), np.asarray(dist_coeffs))
inPoints = np.zeros((3, 3))
inPoints[0] = (0 , 137.16 , 0)
inPoints[1] = (0 , 548.64 , 0)
inPoints[2] = (548.64 , 548.64, 0)
print(rvec)
print(tvec)
print(np.asarray(camera_matrix))
print(np.asarray(dist_coeffs))
outPoints, jacobian = cv2.projectPoints(inPoints, rvec, tvec, np.asarray(camera_matrix), np.asarray(dist_coeffs))
print(outPoints)
ekuusi, Thu, 24 May 2018 01:30:30 -0500
http://answers.opencv.org/question/192216/

**Is Relative Position Estimation with projectPoints Reasonable?**
http://answers.opencv.org/question/187882/is-relative-position-estimation-with-projectpoints-reasonable/

Hello,
I am working on a project that involves finding points in my image pane relative to ARUCO targets. I am interested in whether what I am trying to do is reasonable/feasible.
First, I hard-code the xyz position of a point in space, say a black x on the ground, relative to an ARUCO target.
For example, a black x on the ground is 0.4 meters to the left of the ARUCO target, so I would hard-code pos_rel = [-.4, 0, 0].
After successfully detecting the ARUCO target, I try and project this point back into my camera's image space like this:
camera_points, _ = cv2.projectPoints(pos_rel, rvec, tvec, camera_matrix, dist_coeff)
So this camera_points is where the black x should be in the image. Experimentally, it has been kind of close, but at times the error is quite large, especially when the z-offset of my 'black x' is nonzero.
Is it possible to do this? Does it require rigorous camera calibration? Or is this an infeasible/unreasonable goal? I'm looking for decent estimates for positions of my black x from up to 1 meter away from the ARUCO target.
I'd appreciate any advice anyone can give.

tomkoch96, Tue, 27 Mar 2018 18:40:48 -0500
http://answers.opencv.org/question/187882/

**I have a question regarding the imagePoints output of the projectPoints() function in OpenCV. I am getting image points which have negative coordinates, and I understand that they are definitely outside the screen area.**
http://answers.opencv.org/question/179064/i-have-a-question-regarding-the-imagepoints-output-of-the-projectpoints-funtion-in-open-cv-i-am-getting-image-points-which-have-negative-coordinates/
So, what should be the maximum range of outputs for the imagePoints for them to be on the screen?
Will it be u = 1280 and v = 720 if I am using an image that is 1280 pixels wide and 720 pixels high?
For a clearer exposition of my problem I add the following details,
camera_distCoeffs: [0.045539, -0.057822, 0.001451, -0.000487, 0.006539, 0.438100, -0.135970, 0.011170]
camera_intrinsic: [606.215365, 0.000000, 632.285550, 0.000000, 679.696865, 373.770687, 0.000000, 0.000000, 1.000000]
Sample camera coordinate: [16.7819794502, -2.2923261485, 2.9228301598] with orientation quaternions:[Qx,Qy,Qz,Qw] as [0.0075078838, 0.062947858, 0.3573477229, -0.9318174734]
I am forming my rotation vector (rvec) by first converting the quaternion to a rotation matrix and then calling the Rodrigues() function. I construct the translation vector as tvec = -(transpose of rotation matrix) * [column vector of camera coordinates].
Also, from what I understand, the pinhole camera model used in the projectPoints() function has the camera aperture as the origin. Does this mean that the input parameter 'objectPoints' should be (x-x1, y-y1, z-z1) in my case, where the camera is not the origin? For brevity, here (x1,y1,z1) is the camera coordinate in the world frame and (x,y,z) is the target object coordinate in the world frame.

shaondip, Fri, 24 Nov 2017 06:42:52 -0600
http://answers.opencv.org/question/179064/

**cv2.projectPoints, error: (-215) npoints >= 0 && (depth == CV_32F || depth == CV_64F) in function projectPoints**
http://answers.opencv.org/question/177001/cv2projectpoints-error-215-npoints-0-depth-cv_32f-depth-cv_64f-in-function-projectpoints/

Hi all,
I am trying to execute following piece of code.
def projectImgPlane(target, player, direction):
    target = np.float32(target)
    player = np.float32(player)
    calibMat = np.array([[1.20068095e+03, 0.00000000e+00, 8.13634941e+02],
                         [0.00000000e+00, 6.76829040e+02, 2.64292817e+02],
                         [0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
    distortion = np.float32([0.12168904, 1.70231151, 0.04272291, 0.08605422, -3.63581666])
    tvec = target - player
    rvec = np.float32(direction)
    print('target:', target, 'rvec:', rvec, 'tvec:', tvec, 'calibmat:', calibMat, 'distortion:', distortion)
    print(type(target))
    print(type(rvec))
    print(type(tvec))
    print(type(calibMat))
    print(type(distortion))
    imgCoords = cv2.projectPoints(target, rvec, tvec, calibMat, distortion)
    return imgCoords
Following are the values and types of all variables in this code:
target: [ 62.52583313 -59.98480225 -13.10193443]
rvec: [ 0.14032656 0.99005985 -0.00950932]
tvec: [-206.53077698 490.99945068 -55.8660965 ]
calibmat: [[ 1.20068095e+03 0.00000000e+00 8.13634941e+02] [ 0.00000000e+00 6.76829040e+02 2.64292817e+02] [ 0.00000000e+00 0.00000000e+00 1.00000000e+00]]
distortion: [ 0.12168904 1.70231152 0.04272291 0.08605422 -3.63581657]
<class 'numpy.ndarray'>
<class 'numpy.ndarray'>
<class 'numpy.ndarray'>
<class 'numpy.ndarray'>
<class 'numpy.ndarray'>
Following is the error message:
imgCoords = cv2.projectPoints(target, rvec, tvec, calibMat, distortion)
cv2.error: /io/opencv/modules/calib3d/src/calibration.cpp:3267: error: (-215) npoints >= 0 && (depth == CV_32F || depth == CV_64F) in function projectPoints
Can you please tell me what I am doing wrong in my code? Is any argument or variable used incorrectly? I tried to investigate most of the possibilities but failed to comprehend the error. Thank you in advance.
Regards
SD

SD, Thu, 26 Oct 2017 09:06:05 -0500
http://answers.opencv.org/question/177001/

**OpenCV SolvePnP strange values**
http://answers.opencv.org/question/176922/opencv-solvepnp-strange-values/

Hello,
I am experimenting with a project.
I use **solvePnP** to find the rotation vector of an object.
Since the values are hard to understand, I used 3D software to define specific values that I then try to recover with OpenCV.
I've got a plane in the center of my scene. I apply rotations on X, Y, or Z.
In the example below, the rotations are defined as:
**x=30°
y=0°
z=30°**
I've got good values for focalLength, fov, etc.
![image description](/upfiles/1508940197829013.jpg)
As you can see, the **cv2.projectPoints** works perfectly on my image.
When I call **SolvePnP**, the **rvecs returns strange values**.
For rotation X, I've got 28.939°
For rotation Y, I've got 7.916°
For rotation Z, I've got 29.02031°
So when I try to map a plane with WebGL, I get the result in the image below (red plane)
![image description](/upfiles/15089407414127149.jpg)
**So here is my question:
Why doesn't SolvePnP return x: 30°, y: 0° and z: 30°?
It's very strange, no?**
Do I have to use **Rodrigues** somewhere? If yes, how ?
Is there a lack of precision somewhere?
Thanks
Loïc
kopacabana73, Wed, 25 Oct 2017 09:18:24 -0500
http://answers.opencv.org/question/176922/

**How to get the accuracy of the calibration in millimeters (not in pixels)?**
http://answers.opencv.org/question/175578/how-to-get-the-accuracy-of-the-calibration-in-millimeters-not-in-pixels/

I'd like to know the accuracy of the camera's calibration in millimeter units. In other words, what does an X-pixel error (the root mean square error) in the 2D image correspond to, as a distance in millimeters, in the 3D object coordinate system?
In OpenCV, I can get the accuracy of the calibration based on the root mean square error (pixel units) returned by the `calibrateCamera` function. Or, I can calculate it manually by reprojecting the object points into the image with `projectPoints` and comparing them with the current image points.
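A rough back-of-the-envelope conversion, under the simplifying assumption of a fronto-parallel target at a known depth, follows from the pinhole relation u = f * X / Z:

```python
# Sketch: an error of e pixels at depth Z corresponds to roughly e * Z / f
# in the scene plane. All values below are made-up placeholders.
f_px = 800.0   # focal length in pixels (from the camera matrix)
Z_mm = 500.0   # distance from camera to the marker plane, in mm
e_px = 2.0     # RMS reprojection error in pixels

err_mm = e_px * Z_mm / f_px
print(err_mm)  # -> 1.25
```

For a tilted marker the per-axis error differs, but since the marker pose is known, the same idea applies after transforming into the marker frame.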
For example, let's say I got an RMS error around 2.0 (I mean I already have the 2D projection error in pixel units). Does this mean a 5 mm or 10 mm difference? How do I convert pixels in 2D to millimeters in 3D to get the error in mm in 3D space? Note that the ArUco marker is always placed horizontally and I know the size, position, and pose of the marker. Could you let me know how to calculate the error along the x and y axes in the object frame? Please let me know if this question doesn't make sense or if you need further information.

kangaroo, Tue, 03 Oct 2017 00:59:56 -0500
http://answers.opencv.org/question/175578/

**cv2.projectPoints jacobians columns order**
http://answers.opencv.org/question/99343/cv2projectpoints-jacobians-columns-order/

Documentation of `cv2.projectPoints` states:
jacobian – Optional output 2Nx(10+<numDistCoeffs>) jacobian matrix of derivatives of image points with respect to components of the rotation vector, translation vector, focal lengths, coordinates of the principal point and the distortion coefficients. In the old interface different components of the jacobian are returned via different output parameters.
Digging in the sources, I found the following:
- dp/drot must be a 2Nx3 floating-point matrix
- dp/dT must be a 2Nx3 floating-point matrix
- dp/df must be a 2Nx2 floating-point matrix
- dp/dc must be a 2Nx2 floating-point matrix
- dp/ddist must be a 2Nx14, 2Nx12, 2Nx8, 2Nx5, 2Nx4 or 2Nx2 floating-point matrix
And:
_jacobian.create(npoints*2, 3+3+2+2+ndistCoeffs, CV_64F);
Mat jacobian = _jacobian.getMat();
pdpdrot = &(dpdrot = jacobian.colRange(0, 3));
pdpdt = &(dpdt = jacobian.colRange(3, 6));
pdpdf = &(dpdf = jacobian.colRange(6, 8));
pdpdc = &(dpdc = jacobian.colRange(8, 10));
pdpddist = &(dpddist = jacobian.colRange(10, 10+ndistCoeffs));
It is strange to me that this is not clearly documented; e.g. I initially thought it went as `dx,dy,dz,droll,dpitch,dyaw...`, but anyway.
**What is a complete order of derivatives?** Especially for:
- pdpdrot: `droll, dpitch, dyaw` or `dyaw, dpitch, droll`
- dp/dt: `dx,dy`?
- dp/df: `dfx, dfy`?
- dp/dc: `dcx, dcy`?

kpykcb, Mon, 01 Aug 2016 08:54:02 -0500
http://answers.opencv.org/question/99343/