OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers (en). Copyright OpenCV foundation (http://www.opencv.org), 2012-2018. Thu, 03 Sep 2020 16:20:24 -0500

---

Omnidir undistort vs projectPoints
http://answers.opencv.org/question/234725/omnidir-undistort-vs-projectpoints/

I work with fisheye images (~180° FOV) and want to be able to convert back and forth between distorted and undistorted image coordinates. With the fisheye camera model I already have this running as expected, but I wanted to have a look at the omnidirectional camera model. However, a coordinate round trip in Python using
out1 = cv2.omnidir.projectPoints(points, rvec, tvec, K, xi, D)
followed by
out2 = cv2.omnidir.undistortPoints(out1, K, D, xi, None)
results in `out2` differing from `points` (beyond the obvious difference in dimension). Only if I manually set `xi = 0` do `points` and `out2` agree.
Am I just wrong, is there a bug in my code, or is this an issue in the opencv_contrib implementation?
I have already looked at the source of these functions and the corresponding paper, but couldn't definitively figure out whether something might be wrong there.
However, comparing https://github.com/opencv/opencv_contrib/blob/master/modules/ccalib/src/omnidir.cpp#L152 with https://github.com/opencv/opencv_contrib/blob/master/modules/ccalib/src/omnidir.cpp#L333 puzzles me a bit. Shouldn't one of them be the inverse operation of the other, or am I totally wrong?
Doing the same with the fisheye camera model (using `projectPoints` and `distortPoints`, which are basically interchangeable there) leads to the expected results.
I would be very thankful for any hint regarding this.
Code for reproducibility:
import numpy as np
import cv2

K = np.array([[1.24440479e+03, -1.22794708e-01, 9.60388731e+02],
              [0.00000000e+00, 1.24469754e+03, 9.59437737e+02],
              [0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
D = np.array([[-2.68369102e-01, 3.37814230e-02, 2.31238441e-04, -2.66100513e-04]])
xi = np.array([1.])  # works with np.array([0.])

x = np.array([500])
y = np.array([500])
x = (x - K[0, 2]) / K[0, 0]
y = (y - K[1, 2]) / K[1, 1]
points = np.array([x, y, np.ones_like(x)]).T

rvec = np.zeros((3, 1))
tvec = np.zeros((3, 1))
out1, _ = cv2.omnidir.projectPoints(points.reshape(-1, 1, 3), rvec, tvec, K, xi, D)
out2 = cv2.omnidir.undistortPoints(out1, K, D, xi, None)
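For what it's worth, here is a numpy-only round trip of my understanding of the Mei model that the omnidir module is based on, with the distortion D left out. With D = 0 the projection through the sphere center shifted by xi is analytically invertible, so a round trip should in principle recover the viewing ray:

```python
import numpy as np

def mei_project(X, xi):
    """Forward omnidirectional (Mei) projection, distortion-free:
    project onto the unit sphere, then through a center shifted by xi."""
    Xs = X / np.linalg.norm(X)          # unit-sphere point
    return Xs[:2] / (Xs[2] + xi)        # normalized image coordinates

def mei_unproject(m, xi):
    """Analytic inverse: recover the unit-sphere point from (mx, my)."""
    r2 = m[0]**2 + m[1]**2
    s = (xi + np.sqrt(1.0 + r2 * (1.0 - xi**2))) / (r2 + 1.0)
    return np.array([m[0] * s, m[1] * s, s - xi])

X = np.array([0.3, -0.2, 1.0])
xi = 1.0
m = mei_project(X, xi)
Xs = mei_unproject(m, xi)
# the round trip recovers the viewing direction (unit vector toward X)
assert np.allclose(Xs, X / np.linalg.norm(X), atol=1e-9)
```

Note that with xi = 0 both steps collapse to the plain pinhole x/z division, which would explain why that special case round-trips cleanly.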
pippo (Thu, 03 Sep 2020 16:20:24 -0500)
http://answers.opencv.org/question/234725/

---

projectpoints distortion
http://answers.opencv.org/question/231939/projectpoints-distortion/

Hi everybody, I am opening a new question to add images and to put together several other questions:
- https://answers.opencv.org/question/227094/cvprojectpoints-giving-bad-results-when-undistorted-2d-projections-are-far/
- https://answers.opencv.org/question/197614/are-lens-distortion-coefficients-inverted-for-projectpoints/
To recap, I'm trying to project 3D points onto an image, in particular the images from the [KITTI raw dataset](http://cvlibs.net/datasets/kitti/).
For those who do not know it: for each sequence of the dataset a calibration file is provided, with both intrinsics and extrinsics; the dataset contains both camera images and lidar 3D points. A tutorial for projecting the 3D points is also provided, but everything in it uses the undistorted/rectified images that the dataset also ships.
However, since the intrinsic/extrinsic matrices are provided, in general I should be able to project the 3D points directly onto the raw image (raw meaning without processing in this case), without using their rectification or undistorting the image.
So let's start with their example, a projection of the points directly onto the rectified image (please note, I say rectified and not undistorted because the dataset has four cameras, so they provide undistorted/rectified images).
![image description](/upfiles/15936456583289347.jpg)
as you can see, this image is pretty much undistorted and the lidar points should be correctly aligned (using their demo toolkit, here in MATLAB).
Next, using the info provided:
> - S_xx: 1x2 size of image xx before rectification
> - K_xx: 3x3 calibration matrix of camera xx before rectification
> - D_xx: 1x5 distortion vector of camera xx before rectification
> - R_xx: 3x3 rotation matrix of camera xx (extrinsic)
> - T_xx: 3x1 translation vector of camera xx (extrinsic)
together with the info about where the lidar sensor is, I can easily prepare an example to:
- read the lidar data.
- rotate the points into the camera coordinate frame (this is the same as giving projectPoints the transformation via rvec/tvec, which I use as identities since the first camera is the origin of all the coordinate systems, but this is a detail).
- project the points using projectPoints.
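The steps above can be sketched as follows (numpy-only; the calibration numbers and the velodyne-to-camera R/T below are made-up stand-ins, not the real KITTI values):

```python
import numpy as np

# stand-in intrinsics (NOT the real KITTI numbers)
K = np.array([[984.2, 0.0, 690.0],
              [0.0, 980.8, 233.2],
              [0.0, 0.0, 1.0]])
# hypothetical velodyne -> camera rotation (axis swap) and translation
R = np.array([[0.0, -1.0, 0.0],
              [0.0, 0.0, -1.0],
              [1.0, 0.0, 0.0]])
T = np.array([0.27, 0.0, -0.08])

# 1) "read" the lidar data (two stand-in points, x forward in the velodyne frame)
lidar = np.array([[5.0, 1.0, 0.5],
                  [10.0, -2.0, 0.3]])

# 2) rotate/translate the points into the camera coordinate frame
cam = lidar @ R.T + T

# drop points with non-positive depth, otherwise they project mirrored
cam = cam[cam[:, 2] > 0]

# 3) pinhole projection (distortion omitted in this sketch)
uv = cam @ K.T
uv = uv[:, :2] / uv[:, 2:3]
```

cv2.projectPoints performs steps 2-3 in one call (including distortion) when given rvec/tvec; the explicit z > 0 filter is worth keeping either way, since projectPoints happily maps points behind the camera into the image.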
OK, let's try to do this... using
D_00 = np.array([ -3.745594e-01, 2.049385e-01, 1.110145e-03, 1.379375e-03, -7.084798e-02], dtype=np.float64)
![image description](/upfiles/159364653919928.jpg)
I hope you can see the image! There are points above the actual limit of the lidar points, as well as some "noise" in between the other points (to be fair, in the first image I didn't plot all the points, but you can still see a different "noise pattern" in this second image).
At first I thought it was due to a silly error in my code, in the parsing, or whatever... but then I tried the same thing using the undistorted image, setting the distortion coefficients to zero, and... magic:
D_00_zeros = np.array([ 0.0, 0.0, 0.0, 0.0, 0.0], dtype=np.float64)
![image description](/upfiles/15936469688493019.jpg)
Could this still be my fault in parsing the data? Hmm... so I decided to create a "virtual plane" and move it through 3D space, bringing the plane close to the camera.
![image description](/upfiles/1593647290947237.png)
![image description](/upfiles/1593647369769384.png)
![image description](/upfiles/15936473845954262.png)
![image description](/upfiles/15936473993019015.png)
and... whaaat?! What's going on here? Then I started looking around the internet and experimenting with solutions; one of the most "inspiring" was [this one](https://answers.opencv.org/question/227094/cvprojectpoints-giving-bad-results-when-undistorted-2d-projections-are-far/) that I linked before (the first link), where @HYPEREGO said *I read somewhere that using more than the usual 4/5 distortion coefficients in this function sometimes leads to bad results*... so, for the sake of argument, I tried to use
D_00 = np.array([ -3.745594e-01, 2.049385e-01, 1.110145e-03, 1.379375e-03], dtype=np.float64)
instead of
D_00 = np.array([ -3.745594e-01, 2.049385e-01, 1.110145e-03, 1.379375e-03, -7.084798e-02], dtype=np.float64)
and again... magic. Here at the same distance as before (9.1 m):
![image description](/upfiles/15936479356358398.png)
and even closer to the camera:
![image description](/upfiles/1593647955745727.png)
So what's going on here? Moreover, with respect to the second link: my k1 value is negative and I'm getting barrel distortion instead of the pincushion distortion the wiki suggests
![image description](https://docs.opencv.org/2.4/_images/distortion_examples.png)
in particular
with `k1 > 0`
![image description](/upfiles/1593643837616328.png)
with `k1<0`
![image description](/upfiles/15936437527542456.png)
Following the wiki, the effect should be the opposite, as @Stringweasel pointed out.
The vectors I used are, respectively,
D_00 = np.array([ 0.25, 0.0, 0.0, 0.0, 0.0], dtype=np.float64)
D_00 = np.array([ -0.25, 0.0, 0.0, 0.0, 0.0], dtype=np.float64)
---
So to conclude: I think I'm completely missing something here, OR there is a bug from the middle ages of OpenCV... but that would be weird.
Hope that some OpenCV *guru* can help me!
Bests,
Augusto

trigal (Wed, 01 Jul 2020 19:05:12 -0500)
http://answers.opencv.org/question/231939/

---

Are Lens Distortion Coefficients inverted for projectPoints?
http://answers.opencv.org/question/197614/are-lens-distortion-coefficients-inverted-for-projectpoints/

Good day guys,
I am trying to simulate an image with lens barrel distortion. I create a virtual chessboard (corners only) and then project it onto my image plane using OpenCV. The idea is to project these points with known distortion coefficients, then attempt a lens distortion calibration (with `calibrateCamera`), and see if the same coefficients are recovered.
**My question** is about the `projectPoints` function, which takes `distCoeffs` as an input. Are these the same coefficients that are used to *undistort* an image (the output of `calibrateCamera`)? That would mean the function has to compute the inverse of that operation. Or does it use those coefficients to distort the object points directly? That would mean they would not correlate at all with the output of e.g. `calibrateCamera`.
I ask because I noticed my simulation produces pincushion distortion when I expect barrel, and vice versa, which suggests the distortion does the opposite of what I think it does.
The minimal working code I used to simulate the image (in Python):
import numpy as np
import cv2
import matplotlib.pyplot as plt

# The distortion matrix that I vary
distortion = np.array([0.03, 0.0001, 0.0, 0.0, 0.0001])

# Generate grid of object points
grid_size, square_size = [20, 20], 0.2
object_points = np.zeros([grid_size[0] * grid_size[1], 3])
mx, my = [(grid_size[0] - 1) * square_size / 2, (grid_size[1] - 1) * square_size / 2]
for i in range(grid_size[0]):
    for j in range(grid_size[1]):
        object_points[i * grid_size[0] + j] = [i * square_size - mx, j * square_size - my, 0]

# Set up the camera information
f, p = [5e-3, 120e-8]
intrinsic = np.array([[f/p, 0, 0], [0, f/p, 0], [0, 0, 1]])
rvec = np.array([0.0, 0.0, 0.0])
tvec = np.array([0.0, 0.0, 3.0])

# Project the points
image_points, jacobian = cv2.projectPoints(object_points, rvec, tvec, intrinsic, distortion)

# Plot the points (using PyPlot)
plt.scatter(*zip(*image_points[:, 0, :]), marker='.')
plt.axis('equal')
plt.xlim((-4000, 4000))
plt.ylim((-4000, 4000))
plt.grid()
plt.show()
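For what it's worth, the radial/tangential polynomial that (to my understanding) projectPoints evaluates in the forward, object-to-image direction can be written out by hand, which makes the direction of the coefficients easy to check (numpy-only sketch):

```python
import numpy as np

def distort_forward(xn, yn, k1, k2, p1, p2, k3=0.0):
    """Apply the standard OpenCV radial/tangential model to a
    normalized point (xn, yn) in the forward direction."""
    r2 = xn**2 + yn**2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn**2)
    yd = yn * radial + p1 * (r2 + 2 * yn**2) + 2 * p2 * xn * yn
    return xd, yd

# k1 < 0 pulls points toward the center (barrel-like),
# k1 > 0 pushes them outward (pincushion-like)
xd_neg, _ = distort_forward(0.5, 0.0, -0.25, 0.0, 0.0, 0.0)
xd_pos, _ = distort_forward(0.5, 0.0, +0.25, 0.0, 0.0, 0.0)
assert xd_neg < 0.5 < xd_pos
```

Comparing such a manual evaluation against projectPoints on the same normalized points is a quick way to settle which direction the coefficients act in a given build.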
Thank you for your help!

Stringweasel (Fri, 17 Aug 2018 07:32:34 -0500)
http://answers.opencv.org/question/197614/

---

Drawing checkerboard's reference frame does not yield correct results (solvePnp+projectPoints)
http://answers.opencv.org/question/228937/drawing-checkerboards-reference-frame-does-not-yield-correct-results-solvepnpprojectpoints/

Greetings,
I have followed the tutorial at https://docs.opencv.org/master/d7/d53/tutorial_py_pose.html based on calibration data obtained following the tutorial at https://docs.opencv.org/master/dc/dbb/tutorial_py_calibration.html.
The end objective is to obtain the pose of the checkerboard with respect to the camera, but first I am trying to draw the checkerboard's reference frame.
The input data is a set of 2 snapshots I have taken of my webcam feed pointed at a printed 10x7 checkerboard.
The calibration seems to succeed:
![image description](https://i.stack.imgur.com/jSGxg.jpg)
![image description](https://i.stack.imgur.com/w5feE.jpg)
But the output is totally wrong:
![image description](https://i.stack.imgur.com/aRFZE.jpg)
![image description](https://i.stack.imgur.com/CEkjq.jpg)
Here is the patched-up code:
import cv2 as cv
import numpy as np
import glob
import argparse

# algorithm parameters
CHECKERBOARD_WIDTH = 9
CHECKERBOARD_HEIGHT = 6

# termination criteria
criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)

#=== CALIBRATE CAMERA ============================================================================

# Prepare object points
objp = np.zeros((CHECKERBOARD_HEIGHT * CHECKERBOARD_WIDTH, 3), np.float32)
objp[:,:2] = np.mgrid[0:CHECKERBOARD_HEIGHT, 0:CHECKERBOARD_WIDTH].T.reshape(-1,2)

# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.

# Load the images
ap = argparse.ArgumentParser()
ap.add_argument('-f', '--folder', required=True, help='Path to the images folder with last slash')
ap.add_argument('-e', '--ext', required=True, help='Extension of image files without the dot')
args = vars(ap.parse_args())
images = glob.glob(args['folder']+'*.'+args['ext'])

# Process the images
for fname in images:
    print('Calibrating on '+fname)
    img = cv.imread(fname)
    gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, corners = cv.findChessboardCorners(gray, (CHECKERBOARD_WIDTH,CHECKERBOARD_HEIGHT), None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        print('Found corners')
        objpoints.append(objp)
        corners2 = cv.cornerSubPix(gray,corners, (11,11), (-1,-1), criteria)
        imgpoints.append(corners)
        # Draw and display the corners as feedback to the user
        cv.drawChessboardCorners(img, (CHECKERBOARD_WIDTH,CHECKERBOARD_HEIGHT), corners2, ret)
        cv.imshow('Calibration', img)
        k = cv.waitKey(0) & 0xFF
        if k == ord('s'):
            cv.imwrite(fname+'_calib.png', img)
cv.destroyAllWindows()

# Obtain camera parameters
ret, mtx, dist, rvecs, tvecs = cv.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)

#=== FIND POSE OF TARGETS ===========================================================================

# Prepare object points
objp = np.zeros((CHECKERBOARD_HEIGHT * CHECKERBOARD_WIDTH, 3), np.float32)
objp[:,:2] = np.mgrid[0:CHECKERBOARD_HEIGHT, 0:CHECKERBOARD_WIDTH].T.reshape(-1,2)
axis = np.float32([[3,0,0], [0,3,0], [0,0,-3]]).reshape(-1,3)

# Display
def draw(img, corners, imgpts):
    corner = tuple(corners[0].ravel())
    img = cv.line(img, corner, tuple(imgpts[0].ravel()), (255,0,0), 5)
    img = cv.line(img, corner, tuple(imgpts[1].ravel()), (0,255,0), 5)
    img = cv.line(img, corner, tuple(imgpts[2].ravel()), (0,0,255), 5)
    return img

for fname in images:
    print('Processing '+fname)
    img = cv.imread(fname)
    gray = cv.cvtColor(img,cv.COLOR_BGR2GRAY)
    ret, corners = cv.findChessboardCorners(gray, (CHECKERBOARD_WIDTH,CHECKERBOARD_HEIGHT), None)
    if ret == True:
        print('Found corners')
        corners2 = cv.cornerSubPix(gray,corners,(11,11),(-1,-1),criteria)
        # Find the rotation and translation vectors.
        ret,rvecs, tvecs = cv.solvePnP(objp, corners2, mtx, dist)
        # project 3D points to image plane
        imgpts, jac = cv.projectPoints(axis, rvecs, tvecs, mtx, dist)
        img = draw(img,corners2,imgpts)
        cv.imshow('img',img)
        k = cv.waitKey(0) & 0xFF
        if k == ord('s'):
            cv.imwrite(fname+'_output.png', img)
cv.destroyAllWindows()
Called e.g. with `python3 test_cvpnp.py --folder ./images/ --ext png`
The images used for calibration and for processing are the same, which should yield good results. What is happening? Thanks!
The original images are as follows, in case you might want to reproduce the behaviour:
![image description](https://i.stack.imgur.com/TD5zt.png)
![image description](https://i.stack.imgur.com/8i77L.png)

mistermystere (Sun, 12 Apr 2020 15:32:09 -0500)
http://answers.opencv.org/question/228937/

---

Covariance of estimated R and T from iterative solvePnP()
http://answers.opencv.org/question/219611/covariance-of-estimated-r-and-t-from-iterative-solvepnp/

Hi all,
I am using solvePnP with the iterative flag to estimate R and T (more precisely rvec and tvec) from a known correspondence of image corners and their respective world points. I was wondering if it is possible to get the uncertainty (covariance) associated with this estimate?
I have been doing some research on this topic over the past few days, and I came across the function projectPoints() in OpenCV, which returns the Jacobian of the image points with respect to the intrinsic and extrinsic parameters. Can I just use this Jacobian to get an estimate of the covariance? Something like covariance = (J.transpose() * J).inverse()
Any help or explanation?
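Under the usual Gauss-Newton approximation, and assuming independent pixel noise with standard deviation sigma, that is essentially the standard recipe; the covariance of the pose comes from the extrinsic columns of the Jacobian (per the projectPoints docs, the first six columns correspond to rvec and tvec). A numpy sketch with a stand-in Jacobian:

```python
import numpy as np

def pose_covariance(J_ext, sigma_px=1.0):
    """Gauss-Newton covariance approximation: sigma^2 * (J^T J)^-1.
    J_ext: (2N, 6) Jacobian of the image points w.r.t. (rvec, tvec)."""
    JtJ = J_ext.T @ J_ext
    return sigma_px**2 * np.linalg.inv(JtJ)

# stand-in Jacobian for illustration; the real one would be the first six
# columns of the jacobian returned by cv2.projectPoints
rng = np.random.default_rng(0)
J_ext = rng.normal(size=(2 * 20, 6))

cov = pose_covariance(J_ext, sigma_px=0.5)
assert cov.shape == (6, 6)
# a covariance matrix must be symmetric positive definite
assert np.allclose(cov, cov.T)
assert np.all(np.linalg.eigvalsh(cov) > 0)
```

Note this only captures the linearized uncertainty around the solution and ignores correlation with intrinsic errors; whether that approximation is good enough depends on the geometry.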
Thanks.

subodh (Fri, 11 Oct 2019 15:57:59 -0500)
http://answers.opencv.org/question/219611/

---

test transform between two cameras with projectPoints?
http://answers.opencv.org/question/219028/test-transform-between-two-cameras-with-projectpoints/

I have a stereo-calibrated camera pair, so I have known intrinsics and extrinsics.
I have a chessboard visible in both cameras.
What I want to do is:
1. Find the chessboard in both frames.
2. Run solvePnP on camera A.
3. Using the stereo-calibration extrinsics, project the 3D points into camera B, using the extrinsics as the camera position.
4. Use the distance between the projected points in the camera B frame and the chessboard points found in the camera B frame to assess the validity of the extrinsics.
I have the following function, which displays the points, but they are much further off than I would expect from the extrinsic values. Is this the correct approach?
#include <opencv2/opencv.hpp>

#include <iostream>
#include <vector>

int main()
{
    // build object points
    int grid_size_x = 7;
    int grid_size_y = 5;
    double squareSize = 4.5;
    std::vector<cv::Point3f> object_points_;
    // set up the board
    for (int y = 0; y < grid_size_y; y++)
    {
        for (int x = 0; x < grid_size_x; x++)
        {
            object_points_.push_back(cv::Point3f(x * squareSize, y * squareSize, 1));
        }
    }
    cv::Mat camMatrixB, distCoeffsB, camMatrixA, distCoeffsA, offsetT, offsetR;
    bool readOk = readCameraParameters("CalibrationData.xml", camMatrixB, distCoeffsB, camMatrixA, distCoeffsA, offsetT, offsetR);
    if (!readOk) {
        std::cerr << "Invalid camera file" << std::endl;
    }
    // solvePnP on frame A to get the board position
    std::vector<cv::Point2f> corners_z;
    cv::Mat imagez = cv::imread("FrameA.jpg");
    cv::Mat imageGrayz;
    cv::cvtColor(imagez, imageGrayz, cv::COLOR_BGR2GRAY);
    cv::Mat cv_translationA, cv_rotationA;
    // find board in image A
    bool found_chessboardA = cv::findChessboardCorners(imageGrayz, cv::Size(grid_size_x, grid_size_y), corners_z, 0);
    if (found_chessboardA)
    {
        cv::cornerSubPix(imageGrayz, corners_z, cv::Size(3, 3), cv::Size(-1, -1), cv::TermCriteria(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS, 50, 0.01));
        // solve for board pose in A
        cv::solvePnP(object_points_, corners_z, camMatrixA, distCoeffsA, cv_rotationA, cv_translationA);
        // set the B pose to be the calculated offset amount, relative to the A pose
        cv::Mat offsetPos = cv::Mat(1, 3, CV_64F);
        offsetPos.at<double>(0, 0) = cv_translationA.at<double>(0, 0) - (offsetT.at<double>(0, 0) / 10);
        offsetPos.at<double>(0, 1) = cv_translationA.at<double>(0, 1) - (offsetT.at<double>(0, 1) / 10);
        offsetPos.at<double>(0, 2) = cv_translationA.at<double>(0, 2) - (offsetT.at<double>(0, 2) / 10);
        cv::Mat offsetR2;
        cv::Rodrigues(offsetR, offsetR2);
        cv::Mat offsetRot = cv::Mat(1, 3, CV_64F);
        offsetRot.at<double>(0, 0) = cv_rotationA.at<double>(0, 0) - offsetR2.at<double>(0, 0);
        offsetRot.at<double>(0, 1) = cv_rotationA.at<double>(0, 1) - offsetR2.at<double>(0, 1);
        offsetRot.at<double>(0, 2) = cv_rotationA.at<double>(0, 2) - offsetR2.at<double>(0, 2);
        // reproject the object points and look at the error
        cv::Mat imageSD = cv::imread("FrameB.jpg");
        cv::Mat imageGray;
        cv::cvtColor(imageSD, imageGray, cv::COLOR_BGR2GRAY);
        cv::Mat cv_translationD, cv_rotationD;
        std::vector<cv::Point2f> corners_;
        // find board in image B
        bool found_chessboardB = cv::findChessboardCorners(imageGray, cv::Size(grid_size_x, grid_size_y), corners_, 0);
        if (found_chessboardB)
        {
            cv::cornerSubPix(imageGray, corners_, cv::Size(3, 3), cv::Size(-1, -1), cv::TermCriteria(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS, 50, 0.01));
            // test reprojection
            std::vector<cv::Point2f> projectedPointsSD;
            cv::projectPoints(object_points_, offsetRot, offsetPos, camMatrixB, distCoeffsB, projectedPointsSD);
            for (size_t p = 0; p < projectedPointsSD.size(); p++)
            {
                cv::drawMarker(imageSD, projectedPointsSD[p], cv::Scalar(255, 255, 255), cv::MARKER_TILTED_CROSS, 8, 1, 8);
            }
            for (size_t p = 0; p < corners_.size(); p++)
            {
                cv::drawMarker(imageSD, corners_[p], cv::Scalar(255, 255, 255), cv::MARKER_SQUARE, 8, 1, 8);
            }
            cv::imshow("repro", imageSD);
            cv::waitKey(0);
        }
    }
    return 0;
}
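One thing worth noting about the approach above: rotation vectors and translations generally cannot be combined by element-wise subtraction. The usual way to move a solvePnP pose from camera A into camera B is to compose 4x4 transforms. A numpy sketch (assuming the stereo extrinsics map camera-A coordinates into camera-B coordinates, which is an assumption about the calibration file's convention):

```python
import numpy as np

def to_T(R, t):
    """Pack a rotation matrix and translation into a 4x4 rigid transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# made-up board pose in camera A (as solvePnP would give it)
R_A = np.array([[1.0, 0.0, 0.0],
                [0.0, 0.0, -1.0],
                [0.0, 1.0, 0.0]])
t_A = np.array([0.1, -0.05, 2.0])

# stereo extrinsics: camera-A frame -> camera-B frame
R_AB = np.eye(3)
t_AB = np.array([-0.12, 0.0, 0.0])  # e.g. a 12 cm baseline

# compose board -> A -> B instead of subtracting vectors component-wise
T_B = to_T(R_AB, t_AB) @ to_T(R_A, t_A)
R_B, t_B = T_B[:3, :3], T_B[:3, 3]
```

The resulting R_B (via Rodrigues) and t_B are what would go into projectPoints for camera B; subtracting rvec components only coincides with this when the rotations are trivial.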
antithing (Mon, 30 Sep 2019 08:25:59 -0500)
http://answers.opencv.org/question/219028/

---

Convert yaw, pitch and roll values to rVec for projectPoints.
http://answers.opencv.org/question/214216/convert-yaw-pitch-and-roll-values-to-rvec-for-projectpoints/

I'm trying to take a set of images and use projectPoints to take a real-world lat/lng/alt and draw markers on the images if the marker has a valid x,y within the image. I have the lat, lng and alt of the image, along with the yaw, pitch and roll values of the camera that took it. My YPR values are in the following format:
- Yaw being the general orientation of the camera when on a horizontal plane: toward north=0, toward east = 90°, south=180°, west=270°, etc.
- Pitch being the "nose" orientation of the camera: 0° = horizontal, -90° = looking down vertically, +90° = looking up vertically, 45° = looking up at an angle of 45° from the horizon, etc.
- Roll being if the camera is tilted left or right when in your hands: +45° = tilted 45° in a clockwise rotation when you grab the camera, thus +90° (and -90°) would be the angle needed for a portrait picture for example, etc.
I have been stuck for a few days on how to turn the yaw, pitch and roll values into a valid rVec for the projectPoints function.
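A sketch of one way to do this (numpy-only; the axis conventions below, yaw about the vertical axis, then pitch, then roll, expressed in OpenCV's x-right / y-down / z-forward camera frame, are my assumptions and may need adjusting for a real setup):

```python
import numpy as np

def rot_x(a):  # pitch
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # yaw (camera y is the "vertical" axis here)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):  # roll
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def matrix_to_rvec(R):
    """Axis-angle (Rodrigues) vector from a rotation matrix; equivalent
    to cv2.Rodrigues for non-degenerate angles."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1))
    if np.isclose(theta, 0):
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * axis / (2 * np.sin(theta))

yaw, pitch, roll = np.radians([90.0, -10.0, 0.0])
R = rot_y(yaw) @ rot_x(pitch) @ rot_z(roll)  # composition order is an assumption
rvec = matrix_to_rvec(R)
```

In practice cv2.Rodrigues does the last step; the hard part is getting the axis mapping and composition order right for the specific camera and the north/east/down convention described above.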
Thanks for any help.

CVPilot (Wed, 12 Jun 2019 14:18:12 -0500)
http://answers.opencv.org/question/214216/

---

projectPoints functionality question
http://answers.opencv.org/question/96474/projectpoints-functionality-question/

I'm doing something similar to the tutorial here: http://docs.opencv.org/3.1.0/d7/d53/tutorial_py_pose.html#gsc.tab=0 regarding pose estimation. Essentially, I'm creating an axis in the model coordinate system and using projectPoints, along with my rvecs, tvecs and cameraMatrix, to project the axis onto the image plane.
In my case, I'm working in the world coordinate space, and I have an rvec and tvec telling me the pose of an object. I'm creating an axis using world coordinate points (which assumes the object wasn't rotated or translated at all), and then using projectPoints() to draw the axes on the object in the image plane.
I was wondering if it is possible to eliminate the projection and get the world coordinates of those axes once they've been rotated and translated. To test, I did the rotation and translation on the axis points manually, and then used projectPoints to project them onto the image plane (passing the identity matrix and a zero matrix for the rotation and translation respectively), but the results seem way off. How can I eliminate the projection step to just get the world coordinates of the axes once they've been rotated and translated? Thanks!

bfc_opencv (Tue, 14 Jun 2016 21:19:07 -0500)
http://answers.opencv.org/question/96474/

---

Calculate robot coordinates from measured chessboard corners (hand-eye calibration)
http://answers.opencv.org/question/204910/calculate-robot-coordinates-from-measured-chessbaord-corners-hand-eye-calibration/

Hello guys,
for my project I am trying to calibrate my depth camera to my robot arm (hand-eye calibration). I measured 8 corners of the chessboard and got 8 pixel vectors and their corresponding robot 3D coordinates (using the gripper to point exactly at each corner). Then I tried to calculate the transformation matrix, but the result is way off. Where's the mistake? Thank you very much!
Edit: Added the camera matrix
----------
img_pts = np.array(((237., 325.),
                    (242., 245.),
                    (248., 164.),
                    (318., 330.),
                    (322., 250.),
                    (328., 170.),
                    (398., 336.),
                    (403., 255.)), dtype=np.float32)

objectPoints = np.array(((-437., -491., -259.),
                         (-402., -457., -259.),
                         (-360., -421., -259.),
                         (-393., -534., -259.),
                         (-362., -504., -259.),
                         (-332., -476., -259.),
                         (-298., -511., -259.),
                         (-334., -546., -259.)), dtype=np.float32)

camera_matrix = np.array(((542.5860286838929, 0., 310.4749867256299),
                          (0., 544.2396851407029, 246.7577402755397),
                          (0., 0., 1.)), dtype=np.float32)

distortion = np.array([0.03056762860315778,
                       -0.1824835906443329,
                       -0.0007936359893356936,
                       -0.001735795279032343,
                       0])

ret, rvec, tvec = cv2.solvePnP(objectPoints, img_pts, camera_matrix, distortion)
rmtx, _ = cv2.Rodrigues(rvec)

T_gripper_cam = np.array([[rmtx[0][0], rmtx[0][1], rmtx[0][2], tvec[0]],
                          [rmtx[1][0], rmtx[1][1], rmtx[1][2], tvec[1]],
                          [rmtx[2][0], rmtx[2][1], rmtx[2][2], tvec[2]],
                          [0.0, 0.0, 0.0, 1.0]])
T_cam_gripper = np.linalg.inv(T_gripper_cam)
print(T_cam_gripper)

p = np.array((237., 325., 0.), dtype=np.float32)
p_new = np.array([p[0], p[1], p[2], 1])
objectPoint = np.dot(p_new, T_gripper_cam)[:3]
print(objectPoint)
# should be (-437., -491., -259.)
# --> actual = [ 33.57 -395.62 -64.46]
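As a side note on conventions (not necessarily the whole story here): with the usual column-vector convention, a homogeneous 3D point is transformed as T @ p, not p @ T, and the input must be a 3D point in the source frame rather than a pixel coordinate. A minimal numpy sketch with a made-up transform:

```python
import numpy as np

# made-up rigid transform for illustration
R = np.array([[0.0, -1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([10.0, 0.0, -5.0])
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

p = np.array([1.0, 2.0, 3.0, 1.0])  # homogeneous 3D point
q = T @ p                            # column-vector convention
assert np.allclose(q[:3], R @ p[:3] + t)

# p @ T silently computes something different (effectively T transposed)
assert not np.allclose((p @ T)[:3], q[:3])
```

So np.dot(p_new, T_gripper_cam) applies the transpose of the intended transform, which alone will scramble the result even before the pixel-vs-3D-point issue.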
------------------------------------- Edit ----------------------------------------------------------
I think I might be on to something. According to this:
[Stackoverflow](https://stackoverflow.com/questions/12299870/computing-x-y-coordinate-3d-from-image-point)
I have to calculate the scaling factor. So i tried to do this in python:
uv = np.array(([237.], [325.], [1.]), dtype=np.float32)
rotInv = np.linalg.inv(rmtx)
camMatInv = np.linalg.inv(camera_matrix)
leftSideMat = np.dot(rotInv, np.dot(camMatInv, uv))
rightSideMat = np.dot(rotInv, tvec)
s = (295 + leftSideMat[2:] / rightSideMat[2:])
temp = (s * np.dot(camMatInv, uv))
tempSub = np.array(temp - tvec)
print(np.dot(rotInv, tempSub))
# should be (-437., -491., -259.)
# --> actual = [ 437.2 -501.3 -266.2]
Looks like I am pretty close, but it's still way off in the Y direction. Is this the correct approach? Thank you!

Mysterion46 (Fri, 07 Dec 2018 06:35:46 -0600)
http://answers.opencv.org/question/204910/

---

Projectpoints not the right result
http://answers.opencv.org/question/199575/projectpoints-not-the-right-result/

Hey guys,
I want to project 3D object points into an image, and for that I use projectPoints. I think I've done it the right way, but somehow there's a small deviation. I want to display a distance point from the camera (which is mounted in a car) in the image.
The way I've done it: the intrinsic and extrinsic parameters relative to the rear axle are given. Because the coordinate axes are named differently in a car coordinate system and an image coordinate system, I translate between them. For example, in the car the x-axis is depth, the y-axis is horizontal and the z-axis is vertical; in the image coordinate system the x-axis is horizontal, the y-axis is vertical and the z-axis is depth. Short example: angle_camera_depthZ = angle_car_depthX. I do the same with all the other angles and translations. After that I build the rotation matrices, multiply them, and use Rodrigues to get the rotation vector. Then I build the translation vector and use the projectPoints method to get the pixel coordinates.
For some reason there is a small difference between the calculated pixel and the real distance (the pixel lands at about 10-11 metres when it should mark the distance of 9 metres).
For better understanding I'm adding the code as an attachment. Info: first the setters are called, then msgRcv, which calls the transformCoordinates method.
Please help me, I'm about to go crazy.
Bryan
void cTransformation::transformCoord() {
    crtRotationMatr();
    crtRotationVec();
    crtTranslVec();
    crtCamMat();
    crtDistVec();
    calculate2D();
}

// As described in the thread: translate between the names of the different coordinate systems
void cTransformation::crtRotationMatr() {
    rotationX = (cv::Mat_<double>(3, 3) << 1, 0, 0, 0, cos(this->pitch + this->pitchOffset), -sin(this->pitch + this->pitchOffset), 0, sin(this->pitch + this->pitchOffset), cos(this->pitch + this->pitchOffset));
    rotationY = (cv::Mat_<double>(3, 3) << cos(this->yaw + this->yawOffset), 0, sin(this->yaw + this->yawOffset), 0, 1, 0, -sin(this->yaw + this->yawOffset), 0, cos(this->yaw + this->yawOffset));
    rotationZ = (cv::Mat_<double>(3, 3) << cos(this->roll + this->rollOffset), -sin(this->roll + this->rollOffset), 0, sin(this->roll + this->rollOffset), cos(this->roll + this->rollOffset), 0, 0, 0, 1);
    rotation = rotationX * rotationY * rotationZ;
}

void cTransformation::crtRotationVec() {
    // Creates a rotation vector from the rotation matrix
    cv::Rodrigues(rotation, rotationVec);
}

// Camera vector = (X-horizontal, Y-vertical, Z-depth)
void cTransformation::crtTranslVec()
{
    // 1.87 m offset to the rear axle
    translVec = (cv::Mat_<double>(3, 1) << this->posX, this->posZ, this->posY - 1.87);
}

void cTransformation::crtCamMat()
{
    camMat = (cv::Mat_<double>(3, 3) << this->focalLengthX, 0, this->mainPointX, 0, this->focalLengthY, this->mainPointY, 0, 0, 1);
}

void cTransformation::crtDistVec()
{
    this->DistVec = (cv::Mat_<double>(5, 1) << this->DistoArr[0], this->DistoArr[1], this->DistoArr[2], 0, 0);
}

void cTransformation::calculate2D()
{
    cv::Mat worldAxes(1, 1, CV_64FC3);
    worldAxes.at<cv::Point3d>(0, 0) = cv::Point3d(0, 0, this->x1);
    cv::Mat imagePoints(1, 1, CV_64FC2);
    cv::projectPoints(worldAxes, this->rotationVec, this->translVec, this->camMat, this->DistVec, imagePoints);
    cv::Point2d pxlTTC1 = imagePoints.at<cv::Point2d>(0, 0);
    this->pxlTTC1X = pxlTTC1.y; // PIXEL RESULT
    worldAxes.at<cv::Point3d>(0, 0) = cv::Point3d(0, 0, 9);
    cv::projectPoints(worldAxes, this->rotationVec, this->translVec, this->camMat, this->DistVec, imagePoints);
    cv::Point2d pxlTTC2 = imagePoints.at<cv::Point2d>(0, 0);
    this->pxlTTC2X = pxlTTC2.y; // PIXEL RESULT
}

Horst (Mon, 17 Sep 2018 06:05:08 -0500)
http://answers.opencv.org/question/199575/

---

Most basic projectPoints example gives wrong answer
http://answers.opencv.org/question/184917/most-basic-projectpoints-example-gives-wrong-answer/

When I run the projectPoints function using the Java code below with the most simple inputs, I get what I believe is an incorrect answer. Did I miss something?
I get the following result:
(x,y,z)= [{0.0, 1.0, 1.0}, {0.0, 1.0, 5.0}, {0.0, 1.0, 10.0}]
(u,v)= [{0.0, 78.5398178100586}, {0.0, 19.73955535888672}, {0.0, 9.966865539550781}]
should be
(u,v)=[{0.0, 100.0}, {0.0, 20.0}, {0.0, 10.0}]
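For comparison, a quick numpy evaluation of the plain pinhole model with these inputs (identity pose, zero distortion) reproduces those expected values:

```python
import numpy as np

K = np.array([[100.0, 0.0, 0.0],
              [0.0, 100.0, 0.0],
              [0.0, 0.0, 1.0]])
points = np.array([[0.0, 1.0, 1.0],
                   [0.0, 1.0, 5.0],
                   [0.0, 1.0, 10.0]])

# u = fx * x / z + cx, v = fy * y / z + cy
uv = points @ K.T
uv = uv[:, :2] / uv[:, 2:3]
assert np.allclose(uv, [[0, 100], [0, 20], [0, 10]])
```

Incidentally, 100·atan(1) ≈ 78.5398, 100·atan(1/5) ≈ 19.7396 and 100·atan(1/10) ≈ 9.9669 match the reported output, which looks like an equidistant fisheye projection rather than the pinhole model. That could point to a different overload being resolved by the argument order, though this is speculation.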
public static void main(String[] args) {
    setDllLibraryPath("C:/aaa_eric/code/lib/x64");
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    MatOfPoint3f objectPts3f = new MatOfPoint3f(
        new Point3(0.0, 1.0, 1.0),
        new Point3(0.0, 1.0, 5.0),
        new Point3(0.0, 1.0, 10.0));
    MatOfPoint2f imagePts2f = new MatOfPoint2f();
    Mat rVec = Mat.zeros(3, 1, CvType.CV_64F);
    Mat tVec = Mat.zeros(3, 1, CvType.CV_64F);
    // camera matrix, no distortion
    Mat kMat = Mat.zeros(3, 3, CvType.CV_64F);
    kMat.put(0, 0,
        100.0, 0.0, 0.0,
        0.0, 100.0, 0.0,
        0.0, 0.0, 1.0);
    Mat dMat = Mat.zeros(4, 1, CvType.CV_64F);
    Calib3d.projectPoints(objectPts3f, imagePts2f, rVec, tVec, kMat, dMat);
    System.out.println(objectPts3f.toList());
    System.out.println(imagePts2f.toList());
}

jetkaczyk (Sat, 17 Feb 2018 09:47:25 -0600)
http://answers.opencv.org/question/184917/

---

stereoRectify, projectPoints, triangulatePoints
http://answers.opencv.org/question/184605/stereorectify-projectpoints-triangulatepoints/

Hi,
I am simulating a 3D -> 2D -> 3D process.
First, I project worldPoint into two camera points (left and right).
Then, stereoRectify calculates P1 and P2.
Finally, I use P1, P2 and the two camera points to calculate worldPointEst again.
If rvecL is 0 degrees in all directions, WorldPoint == WorldPointEst.
However, if I add any rotation in the x or y direction, WorldPoint != WorldPointEst (they are supposed to be the same).
Any help with this? Thanks in advance.
----------------------------------------code sample---------------------------------------------------
projectPoints(worldPoint, rvecL, tvecL, cameraMatrix_L, distCoeffs_L, imagePoint_L);
projectPoints(worldPoint, rvecR, tvecR, cameraMatrix_R, distCoeffs_R, imagePoint_R);
stereoRectify(cameraMatrix_L, distCoeffs_L, cameraMatrix_R, distCoeffs_R, cv::Size(640, 480), R_rotation_Matrix_rev, R_translation, R1, R2, P1, P2, Q);
for (size_t i = 0; i < imagePoint_L.size(); i++)
{
    triangulatePoints(P1, P2, Mat(imagePoint_L[i]), Mat(imagePoint_R[i]), worldPointEst);
}

sweetsmile365 (Wed, 14 Feb 2018 01:29:55 -0600)
http://answers.opencv.org/question/184605/

---

projectPoints fails with points behind the camera
http://answers.opencv.org/question/20138/projectpoints-fails-with-points-behind-the-camera/

I'm using the projectPoints OpenCV function to get the projection of a 3D point onto a camera image plane.
cv::projectPoints(inputPoint, rvec, tvec, fundamentalMatrix, distCoeffsMat, outputPoint);
The problem I'm facing is that when Z (in the camera's local frame) is negative, instead of returning a point outside the image boundaries, the function returns the symmetric point (as if Z were positive). I was expecting the function to check for positive Z values...
I can check this manually myself, but is there a better way?
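For reference, a minimal numpy sketch of the manual check (not the OpenCV internals): transform the point into the camera frame first and test the sign of Z before trusting the projection.

```python
import numpy as np

# Manual behind-camera check, assuming the projectPoints convention
# p_cam = R @ p_world + t: the projection is only meaningful for z > 0.
def is_in_front(p_world, R, t):
    p_cam = R @ p_world + t
    return p_cam[2] > 0

R = np.eye(3)                # illustrative pose
t = np.zeros(3)
assert is_in_front(np.array([0.0, 0.0, 5.0]), R, t)        # valid projection
assert not is_in_front(np.array([0.0, 0.0, -5.0]), R, t)   # mirrored result
```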
Thanks!Josep BoschWed, 04 Sep 2013 02:34:51 -0500http://answers.opencv.org/question/20138/project points from known 3d points, strange issuehttp://answers.opencv.org/question/181681/project-points-from-known-3d-points-strange-issue/ I have a set of known 3d world points that I am passing to `cv::projectPoints`.
I pass in the points along with the camera matrices and my camera position (converted from an Eigen quaternion and Vector3f). The projected 2D points look good when the camera is rotated, but when it is translated, they slide in the direction of camera movement. Oddly, inverting the camera tvec matrix has no effect. I feel like I am missing something simple, but I cannot spot it!
I am doing the following:
std::vector<cv::Point2d> projectedPoints(std::vector<cv::Point3d> objectPoints, cv::Mat rVec, cv::Mat tVec)
{
std::vector<cv::Point2d> imagePoints;
cv::projectPoints(objectPoints, rVec, tVec, intrisicMat, distCoeffs, imagePoints);
return imagePoints;
}
and passing the data as follows:
Eigen::Quaterniond zQuatRaw(quat.w, quat.y, -quat.z, quat.x); //camera rotation data
Eigen::Vector3f zPosRaw(translation.ty, -translation.tz, translation.tx); //camera position data
Eigen::Matrix3d R = zQuatRaw.toRotationMatrix();
cv::Mat Rr(3, 3, cv::DataType<double>::type);
cv::eigen2cv(R, Rr);
cv::Mat rvecR(3, 1, cv::DataType<double>::type);
cv::Rodrigues(Rr, rvecR);
cv::Mat tVecc(3, 1, cv::DataType<double>::type);
cv::eigen2cv(zPosRaw, tVecc);
//repro points
if (clickedPoints3d.size() > 0)
{
reprojectedUserPnts = projectedPoints(clickedPoints3d, rvecR, tVecc);
}
Like I said above, camera rotation looks great, the projected points stick in place. Camera translation causes them to slide in the direction of camera movement.
The 3d point coordinates are in cm, as is the camera position.
What could be going wrong here?
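One common cause of exactly this symptom is passing the camera pose instead of the world-to-camera transform: projectPoints expects p_cam = R @ p_world + t, so if you hold the camera-to-world rotation R_cw and the camera centre C in world coordinates, the arguments should be R = R_cw.T and t = -R_cw.T @ C. A hedged numpy sketch (illustrative values):

```python
import numpy as np

# Hedged sketch: R_cw is the camera-to-world rotation, C the camera centre in
# world coordinates; projectPoints wants the inverse (world-to-camera) pair.
def world_to_camera(R_cw, C):
    R = R_cw.T
    t = -R_cw.T @ C
    return R, t

# Sanity check: the camera centre itself must map to the camera-frame origin.
C = np.array([1.0, 2.0, 3.0])
R, t = world_to_camera(np.eye(3), C)
assert np.allclose(R @ C + t, 0.0)
```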
Thank you.antithingThu, 04 Jan 2018 16:05:57 -0600http://answers.opencv.org/question/181681/projectPoints tvec and rvechttp://answers.opencv.org/question/178930/projectpoints-tvec-and-rvec/ I have a problem understanding the two parameters of this function. I thought they were the translation and rotation of the camera in a global coordinate system, and that object points are also given in this global coordinate system. But a simple test proves me wrong:
import cv2
import numpy as np
# principal point (60, 60)
cammat = np.array([[30, 0, 60], [0, 30, 60], [0, 0, 1]], dtype=np.float32)
distortion = np.array([0, 0, 0, 0, 0], dtype=np.float32)
tvec = np.array([0, 0, 0], dtype=np.float32)
rvec = np.array([0, 0, 0], dtype=np.float32)
point_3d = np.array([0.1, 0.1, 1], dtype=np.float32)
# [63.00, 63.00]
p1 = cv2.projectPoints(np.array([[point_3d]]), rvec, tvec, cammat, distortion)
# move camera a bit closer to the 3d point, expecting the point on the 2d plane
# to be a bit further away from the principal point
tvec = np.array([0, 0, 0.1], dtype=np.float32)
# [62.72, 62.72]
p2 = cv2.projectPoints(np.array([[point_3d]]), rvec, tvec, cammat, distortion)
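For reference, the same numbers can be reproduced by hand, which makes the tvec convention explicit: tvec is applied to the point (world-to-camera), so tvec = (0, 0, 0.1) pushes the point to depth 1.1, i.e. the camera effectively moved away from it, not closer. A numpy sketch:

```python
import numpy as np

# Sketch of what projectPoints computes without distortion:
# p_cam = R @ p + t, then u = fx * x/z + cx.  tvec is applied to the POINT,
# so tvec = (0, 0, 0.1) pushes the point to depth 1.1; equivalently the
# camera moved AWAY from the point, not closer.
def project(p, t, fx=30.0, fy=30.0, cx=60.0, cy=60.0):
    x, y, z = p + t     # identity rotation assumed
    return np.array([fx * x / z + cx, fy * y / z + cy])

p = np.array([0.1, 0.1, 1.0])
u1 = project(p, np.array([0.0, 0.0, 0.0]))   # [63, 63]
u2 = project(p, np.array([0.0, 0.0, 0.1]))   # ~[62.73, 62.73]
assert np.allclose(u1, [63.0, 63.0])
assert np.all(u2 < u1)                       # moved toward the principal point
```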
Where am I wrong?gareinsWed, 22 Nov 2017 06:48:31 -0600http://answers.opencv.org/question/178930/Using OpenCV projectPoints to overlay real world data on a photohttp://answers.opencv.org/question/178342/using-opencv-projectpoints-to-overlay-real-world-data-on-a-photo/ I have a photograph and matching camera position (x,y,z), orientation (yaw, pitch and roll), camera matrix (Cx,Cy, Fx,Fy), and radial and tangential correction parameters. I would like to overlay some additional 3d information on the photo which is provided in the same coordinate system. Looking at a similar post [here][1] I feel I should be able to do this OpenCV projectPoints function as follows;
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <string>
int ProjectMyPoints()
{
std::vector<cv::Point3d> objectPoints;
std::vector<cv::Point2d> imagePoints;
// Camera position
double CameraX = 709095.949, CameraY = 730584.110, CameraZ = 64.740;
// Camera orientation (converting from Grads to radians)
double PI = 3.14159265359;
double Pitch = -99.14890023 * (PI / 200.0),
Yaw = PI + 65.47067336 * (PI / 200.0),
Roll = 194.92713428 * (PI / 200.0);
// Input data in real world coordinates
double x, y, z;
x = 709092.288; y = 730582.891; z = 62.837; objectPoints.push_back(cv::Point3d(x, y, z));
x = 709091.618; y = 730582.541; z = 62.831; objectPoints.push_back(cv::Point3d(x, y, z));
x = 709092.131; y = 730581.602; z = 62.831; objectPoints.push_back(cv::Point3d(x, y, z));
x = 709092.788; y = 730581.973; z = 62.843; objectPoints.push_back(cv::Point3d(x, y, z));
// Coefficients for camera matrix
double CV_CX = 1005.1951672908998,
CV_CY = 1010.36740512214021,
CV_FX = 1495.61455114326009,
CV_FY = 1495.61455114326009,
// Distortion co-efficients
CV_K1 = -1.74729071186991E-1,
CV_K2 = 1.18342592220238E-1,
CV_K3 = -2.29972026710921E-2,
CV_K4 = 0.00000000000000E+0,
CV_K5 = 0.00000000000000E+0,
CV_K6 = 0.00000000000000E+0,
CV_P1 = -3.46272954067614E-4,
CV_P2 = -4.45389772269491E-4;
// Intrinsic matrix / camera matrix
cv::Mat intrisicMat(3, 3, cv::DataType<double>::type);
intrisicMat.at<double>(0, 0) = CV_FX;
intrisicMat.at<double>(1, 0) = 0;
intrisicMat.at<double>(2, 0) = 0;
intrisicMat.at<double>(0, 1) = 0;
intrisicMat.at<double>(1, 1) = CV_FY;
intrisicMat.at<double>(2, 1) = 0;
intrisicMat.at<double>(0, 2) = CV_CX;
intrisicMat.at<double>(1, 2) = CV_CY;
intrisicMat.at<double>(2, 2) = 1;
// Rotation matrix created from orientation
cv::Mat rRot(3, 3, cv::DataType<double>::type);
rRot.at<double>(0, 0) = cos(Yaw)*cos(Pitch);
rRot.at<double>(1, 0) = sin(Yaw)*cos(Pitch);
rRot.at<double>(2, 0) = -sin(Pitch);
rRot.at<double>(0, 1) = cos(Yaw)*sin(Pitch)*sin(Roll) - sin(Yaw)*cos(Roll);
rRot.at<double>(1, 1) = sin(Yaw)*sin(Pitch)*sin(Roll) + cos(Yaw)*cos(Roll);
rRot.at<double>(2, 1) = cos(Pitch)*sin(Roll);
rRot.at<double>(0, 2) = cos(Yaw)*sin(Pitch)*cos(Roll) + sin(Yaw)*sin(Roll);
rRot.at<double>(1, 2) = sin(Yaw)*sin(Pitch)*cos(Roll) - cos(Yaw)*sin(Roll);
rRot.at<double>(2, 2) = cos(Pitch)*cos(Roll);
// Convert 3x3 rotation matrix to 1x3 rotation vector
cv::Mat rVec(3, 1, cv::DataType<double>::type); // Rotation vector
cv::Rodrigues(rRot, rVec);
cv::Mat tVec(3, 1, cv::DataType<double>::type); // Translation vector
tVec.at<double>(0) = CameraX;
tVec.at<double>(1) = CameraY;
tVec.at<double>(2) = CameraZ;
cv::Mat distCoeffs(5, 1, cv::DataType<double>::type); // Distortion vector
distCoeffs.at<double>(0) = CV_K1;
distCoeffs.at<double>(1) = CV_K2;
distCoeffs.at<double>(2) = CV_P1;
distCoeffs.at<double>(3) = CV_P2;
distCoeffs.at<double>(4) = CV_K3;
std::cout << "Intrisic matrix: " << intrisicMat << std::endl << std::endl;
std::cout << "Rotation vector: " << rVec << std::endl << std::endl;
std::cout << "Translation vector: " << tVec << std::endl << std::endl;
std::cout << "Distortion coef: " << distCoeffs << std::endl << std::endl;
std::vector<cv::Point2f> projectedPoints;
cv::projectPoints(objectPoints, rVec, tVec, intrisicMat, distCoeffs, imagePoints);
for (unsigned int i = 0; i < imagePoints.size(); ++i)
std::cout << "Image point: " << imagePoints[i] << std::endl;
std::cout << "Press any key to exit.";
std::cin.ignore();
std::cin.get();
return 0;
}
The rotation matrix is taken from [this post][2] as follows
[![enter image description here][3]][3]
and is converted to a Rodrigues vector as described [here][4]. I had assumed the translation was for removing the camera position, and hence put the camera position in the translation vector. The problem is, it doesn't work. If I zero out the translation I get coordinates close to the pixel coordinates, but not exactly the same (i.e. using a conformal transformation to compare the shapes shows high residuals). Any idea how to go about fixing this? FWIW, the target coordinates should be
1 1448 1680,
2 1393 1578,
3 1052 1605,
4 1053 1702
as shown below;
[![enter image description here][5]][5]
My feeling is I'm missing something obvious but any help greatly appreciated
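For what it's worth, the whole projection chain can be reproduced in a few lines of numpy to test conventions offline; note in particular that if CameraX/Y/Z is the camera centre in world coordinates, the translation the chain needs is t = -R @ C rather than C itself. A hedged sketch (the rational terms k4..k6 are zero here and omitted):

```python
import numpy as np

# Hedged sketch of the projection chain (radial k1..k3 and tangential p1/p2).
# R is the world-to-camera rotation, C the camera centre in world
# coordinates, so the translation is effectively t = -R @ C.
def project(pw, R, C, fx, fy, cx, cy, k1, k2, k3, p1, p2):
    pc = R @ (pw - C)                       # same as R @ pw + (-R @ C)
    xp, yp = pc[0] / pc[2], pc[1] / pc[2]   # normalised image coordinates
    r2 = xp * xp + yp * yp
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xpp = xp * radial + 2 * p1 * xp * yp + p2 * (r2 + 2 * xp * xp)
    ypp = yp * radial + p1 * (r2 + 2 * yp * yp) + 2 * p2 * xp * yp
    return np.array([fx * xpp + cx, fy * ypp + cy])

# Sanity check: with zero distortion, a point straight ahead of the camera
# must land on the principal point.
u = project(np.array([0.0, 0.0, 10.0]), np.eye(3), np.zeros(3),
            1495.6, 1495.6, 1005.2, 1010.4, 0, 0, 0, 0, 0)
assert np.allclose(u, [1005.2, 1010.4])
```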
[1]: https://stackoverflow.com/questions/25244603/opencvs-projectpoints-function
[2]: http://planning.cs.uiuc.edu/node102.html
[3]: https://i.stack.imgur.com/B894d.gif
[4]: https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#rodrigues
[5]: https://i.stack.imgur.com/iWHhh.jpgshanemaclWed, 15 Nov 2017 03:12:57 -0600http://answers.opencv.org/question/178342/projectPoints() Wrong results??http://answers.opencv.org/question/175542/projectpoints-wrong-results/Why is my code snippet giving me weird results for projected points?
//Generate the one 3D Point which i want to project onto 2D plane
vector<Point3d> points_3d;
points_3d.push_back(Point3d(10, 10, 100));
Mat points3d = Mat(points_3d);
//Generate the identity matrix and zero vector for rotation matrix and translation vector
Mat rvec = (Mat_<double>(3, 3) << (1, 0, 0, 0, 1, 0, 0, 0, 1));
Mat tvec = (Mat_<double>(3, 1) << (0, 0, 0));
//Generate a camera intrinsic matrix
Mat K = (Mat_<double>(3,3)
<< (1000, 0, 50,
0, 1000, 50,
0, 0, 1));
//Project the 3D Point onto 2D plane
Mat points_2d;
projectPoints(points_3d, rvec, tvec, K, Mat(), points_2d);
//Output
cout << points_2d;
I get the following projected 2D point:
points_2d = (-1.708699427820658e+024, -9.673395654445999e-026)
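For reference, the value expected from the pinhole model can be confirmed with a few lines of numpy (identity pose, no distortion):

```python
import numpy as np

# Expected result with identity pose and no distortion:
# u = fx * X/Z + cx = 1000 * 10/100 + 50 = 150, and likewise v = 150.
K = np.array([[1000.0, 0.0, 50.0],
              [0.0, 1000.0, 50.0],
              [0.0,    0.0,  1.0]])
p = np.array([10.0, 10.0, 100.0])
uvw = K @ p
uv = uvw[:2] / uvw[2]
assert np.allclose(uv, [150.0, 150.0])
```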
If I calculate it on paper on my own, I expect a point points_2d = (150, 150) from this formula: ![image description](/upfiles/15069441372411866.png)mirnyyMon, 02 Oct 2017 06:23:27 -0500http://answers.opencv.org/question/175542/Drawing 3D points on a distorted imagehttp://answers.opencv.org/question/147737/drawing-3d-points-on-a-distorted-image/ I have the intrinsic parameters of my camera (camera matrix and distortion parameters). I also have a 3D point in the camera coordinate frame. I'd like to project that point onto the image, and show the result.
The distortion parameters seem to be used in two places:
1. When calling undistort(), which provides me with an undistorted image;
2. When calling projectPoints(), which transforms my 3D point into a 2D point on the image.
I then add a cv::circle to the image with the resulting 2D point and imshow() the resulting image.
My question is this: Calling undistort() results in a cropped image, so do the resulting points from projectPoints() give pixel coordinates in the original distorted image, or are the pixel coordinates in the undistorted image? Put differently, should I call undistort() before drawing the point on my image if I want the point to be drawn in the right spot? Does undistorting the image before drawing on it compensate for the distortion twice?kam3kThu, 11 May 2017 09:32:16 -0500http://answers.opencv.org/question/147737/Difference between undistortPoints() and projectPoints() in OpenCVhttp://answers.opencv.org/question/129425/difference-between-undistortpoints-and-projectpoints-in-opencv/From my understanding, undistortPoints() takes a set of points on a distorted image, and calculates where their coordinates would be on an undistorted version of the same image. projectPoints() maps a set of object coordinates to their corresponding image coordinates.
However, I am unsure whether projectPoints() maps the object coordinates to image points on the distorted image (i.e. the original image) or on one that has been undistorted (straight lines).
Furthermore, the OpenCV documentation for undistortPoints states that 'the function performs a reverse transformation to projectPoints()'. Could you please explain how this is so?rueynshardTue, 21 Feb 2017 08:35:58 -0600http://answers.opencv.org/question/129425/Find direction from cameraMatrix and distCoeffhttp://answers.opencv.org/question/96359/find-direction-from-cameramatrix-and-distcoeff/Hi Guys,
I have calibrated my camera by detecting checkerboard patterns and running calibrateCamera, retrieving the cameraMatrix and distortion coefficients. I can plug these into projectPoints alongside 3D positions in the camera's space and retrieve the UV where each point is projected in the imperfect camera.
I'm using a 3D point that projects to a point near my top-left image corner.
This is all fine, but now I want to go the other way: convert a point in my distorted U,V coordinates into a direction vector pointing at all the 3D points that would be projected onto this UV coordinate.
I have tried playing around with the undistortPoints function to find the ideal points U,V, and from those used values from the cameraMatrix to build a point somewhere along the ray:
X = (U - C_x) / f_x
Y = (V - C_y) / f_y
Z = 1
But I can't seem to get a direction that points very close to the 3D point I started from.
Any idea what I might be doing wrong?
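For reference, a numpy sketch of back-projecting an already-undistorted (ideal) pixel to a unit direction vector; note the Y component should use V, not U:

```python
import numpy as np

# Hedged sketch (illustrative intrinsics): an ideal (undistorted) pixel
# (u, v) back-projects to the ray K^-1 [u, v, 1]^T in camera coordinates.
def pixel_to_ray(u, v, K):
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return d / np.linalg.norm(d)            # unit direction vector

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
# Round trip: project a 3D point, back-project its pixel; the recovered ray
# must be parallel to the original point's direction.
p = np.array([0.5, -0.2, 4.0])
uvw = K @ p
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
ray = pixel_to_ray(u, v, K)
assert np.allclose(np.cross(ray, p / np.linalg.norm(p)), 0.0, atol=1e-9)
```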
kind regards
Jesper TaxbølTAXfromDKMon, 13 Jun 2016 15:54:11 -0500http://answers.opencv.org/question/96359/Project points out of field viewhttp://answers.opencv.org/question/92266/project-points-out-of-field-view/ Hello everyone,
I'm currently trying to project points, so I want to use projectPoints with the correct parameters.
But I couldn't find in the documentation how the function handles points that cannot be projected because they are outside the camera's field of view. (Does the function return negative values?)
Thank you in advance.
Sorry for my bad english :/Blue007422Fri, 08 Apr 2016 05:35:11 -0500http://answers.opencv.org/question/92266/solvePnP large (~100) pixel re-projection errorhttp://answers.opencv.org/question/91521/solvepnp-large-100-pixel-re-projection-error/ Hi,
I have been trying to find the camera pose relative to an object frame, but I'm getting unstable results and large re-projection errors (100 pixels or more in total).
I know that the object points and image points are correct. The intrinsic parameters and distortion coefficients were obtained with OpenCV's calibrateCamera with minimal re-projection error (0.5 pixels).
I have tried CV_EPNP and solvePnPRansac; all of them return about the same results, or worse.
The code:
cv::Mat intrinsic_matrix = (cv::Mat_<double>(3, 3) <<
502.21, 0, 476.11,
0, 502.69, 360.73,
0, 0, 1);
cv::Mat distortion_coeffs = (cv::Mat_<double>(1, 5) <<
-3.2587021051876525e-01, 1.1137886872576558e-01, -8.0030372520954252e-04, 1.4677531243862570e-03, -1.6824659875846807e-02);
// Intrinsic matrix and distortion coefficients are read from a file
vector<cv::Point3f> objectPoints;
vector<cv::Point2f> imagePoints;
if (pcl::io::loadPCDFile<pcl::PointXYZ> ("lms1.pcd", *Lms1PointCloud) == -1) //* load the file
{
PCL_ERROR ("Couldn't read file test_pcd.pcd \n");
return (-1);
}
if (pcl::io::loadPCDFile<pcl::PointXYZ> ("singleCamCloud.pcd", *SingleCamCloud) == -1) //* load the file
{
PCL_ERROR ("Couldn't read file test_pcd.pcd \n");
return (-1);
}
lms1PointCloud.points=Lms1PointCloud->points;
singleCamCloud.points=SingleCamCloud->points;
// Fill vectors objectPoints and imagePoints
for (int i=0; i<singleCamCloud.points.size(); i++)
{
imagePoints.push_back(cv::Point2f(singleCamCloud.points[i].x, singleCamCloud.points[i].y));
objectPoints.push_back(cv::Point3f(lms1PointCloud.points[i].x, lms1PointCloud.points[i].y, lms1PointCloud.points[i].z));
}
cv::Mat rotation_vector;
cv::Mat translation_vector;
solvePnP(objectPoints, imagePoints, intrinsic_matrix, cv::noArray(), rotation_vector, translation_vector, false, CV_ITERATIVE);
// Projection of objectPoints according to solvePnP
cv::Mat test_image = cv::Mat::zeros( 720, 960, CV_8UC3 );
vector<cv::Point2f> reprojectPoints;
cv::projectPoints(objectPoints, rotation_vector, translation_vector, intrinsic_matrix, cv::noArray(), reprojectPoints);
float sum = 0.;
sum = cv::norm(reprojectPoints, imagePoints);
std::cout << "sum=" << sum << std::endl;
// Draw projected points (red) and real image points (green)
int myradius=5;
for (int i=0; i<reprojectPoints.size(); i++)
{
cv::circle(test_image, cv::Point(reprojectPoints[i].x, reprojectPoints[i].y), myradius, cv::Scalar(0,0,255),-1,8,0);
cv::circle(test_image, cv::Point(imagePoints[i].x, imagePoints[i].y), myradius, cv::Scalar(0,255,0),-1,8,0);
}
imwrite( "test_image.jpg", test_image );
[Object points file](https://www.dropbox.com/s/unf5nnayq343p8w/lms1.pcd?dl=0) and [image points file](https://www.dropbox.com/s/d6jn344i5tkodhf/singleCamCloud.pcd?dl=0) (dropbox links).
In these conditions I get a re-projection error of 94.43. The image below shows the original image points (green) and the re-projected image points (red).
![image description](http://i.imgur.com/wy8eWaR.jpg)
I'm also not sure how I should use the distortion coefficients: since the image points were already obtained from an undistorted image, I opted not to use them in solvePnP and projectPoints. Is this correct? I don't think this is where the large re-projection error comes from, though, since the error doesn't change much whether I use them or not.
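As an aside, cv::norm over two point vectors returns the total L2 norm across all points, not a per-point error; a per-point RMS is norm/sqrt(N). A numpy illustration:

```python
import numpy as np

# Illustrative numbers: two re-projected points vs two measured points.
reproj = np.array([[100.0, 100.0], [200.0, 205.0]])
meas   = np.array([[103.0, 104.0], [200.0, 205.0]])
total = np.linalg.norm(reproj - meas)   # total L2 norm, as cv::norm returns
rms = total / np.sqrt(len(meas))        # per-point RMS error in pixels
assert np.isclose(total, 5.0)
assert np.isclose(rms, 5.0 / np.sqrt(2))
```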
I can't seem to find an explanation for such a large error...
If you need any more details feel free to ask.
Thanks in advance.
EDIT: An image to help visualize the problem. See the comments below.
![image description](http://i.imgur.com/gaLOvbg.png)
Green is the Z camera axis; orange with the frame overlapped is my reference frame (red is X, green is Y and blue is Z). The camera (green) orientation is correct, and the X and Y positions relative to the reference frame are also close to reality, but Z is 0.5 meters lower than it should be.dtvsilvaThu, 31 Mar 2016 11:23:22 -0500http://answers.opencv.org/question/91521/solvePnPRansac gives unexpected resultshttp://answers.opencv.org/question/90624/solvepnpransac-gives-unexpected-results/I'm running into some unexpected behaviour from solvePnPRansac: one would expect that if you calculate the rotation and translation vectors from a dataset of pre-matched 3D and 2D points, you could recover the 2D coordinates matching the 3D ones again. Sadly this doesn't seem to be the case when running the following minimal code:
import cv2
import numpy as np
f = 24
sx = 23.2
sy = 15.4
width = 6016
height = 4000
fx = width * f / sx
fy = height * f / sy
cx = width / 2
cy = height / 2
mtx = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float32)
objp = np.array([[336.2260, 638.2822, -131.7110], [292.7631, 637.8978, -136.6205], [292.4433, 518.9155, -136.4933], [500.7705, 517.8542, -137.2822], [430.0151, 610.0834, -161.7526], [364.9480, 610.3971, -161.5365], [502.4865, 635.9937, -137.2524], [364.1737, 545.6596, -161.4099], [429.0482, 545.2378, -161.5822], [396.7234, 577.5511, -131.6136]], dtype=np.float32)
imp = np.array([[2089, 1123],[1507, 1153],[1400, 2863],[4566, 2875],[3421, 1645],[2539, 1621],[4483, 1171],[2539, 2521],[3439, 2521],[2983, 1981]], dtype=np.float32)
dist = np.array([])
print(mtx)
print(objp)
print(imp)
_, rvecs, tvecs, inliers = cv2.solvePnPRansac(objp, imp, mtx, dist)
print(rvecs)
print(tvecs)
print(inliers)
imgpts, jac = cv2.projectPoints(objp, rvecs, tvecs, mtx, dist)
print(imgpts)
To be precise the output is the following:
Camera Matrix:
[[ 6.22344824e+03 0.00000000e+00 3.00800000e+03]
[ 0.00000000e+00 6.23376611e+03 2.00000000e+03]
[ 0.00000000e+00 0.00000000e+00 1.00000000e+00]]
3D Points
[[ 336.22601318 638.28222656 -131.71099854]
[ 292.76309204 637.89782715 -136.62049866]
[ 292.44329834 518.91552734 -136.49330139]
[ 500.77050781 517.85418701 -137.28219604]
[ 430.0151062 610.08337402 -161.75259399]
[ 364.94799805 610.39709473 -161.53649902]
[ 502.48651123 635.99371338 -137.25239563]
[ 364.17370605 545.65960693 -161.40989685]
[ 429.04818726 545.23779297 -161.5821991 ]
[ 396.72338867 577.55108643 -131.61360168]]
2D Points:
[[ 2089. 1123.]
[ 1507. 1153.]
[ 1400. 2863.]
[ 4566. 2875.]
[ 3421. 1645.]
[ 2539. 1621.]
[ 4483. 1171.]
[ 2539. 2521.]
[ 3439. 2521.]
[ 2983. 1981.]]
Rotation Vector:
[[-0.9784294 ]
[ 0.51393317]
[ 0.7867821 ]]
Translation Vector:
[[ -42.0280413 ]
[ -78.03000936]
[ 156.70873622]]
Inliers:
None
Calculated 2D points:
[[[ 7373.86523438 284.01651001]]
[[ 8017.08056641 448.40975952]]
[[ 7350.13916016 815.67822266]]
[[ 4649.390625 296.13134766]]
[[ 5951.33935547 517.70758057]]
[[ 6725.28710938 664.42297363]]
[[ 5423.21484375 71.22388458]]
[[ 6334.44677734 848.40484619]]
[[ 5543.67333984 675.48657227]]
[[ 6205.56494141 298.80780029]]]
So either I'm doing something horribly wrong, or this is a significant piece of unexpected behaviour. I also run into this issue when using the MATLAB interface (mexopencv).Bart PlovieMon, 21 Mar 2016 19:20:27 -0500http://answers.opencv.org/question/90624/OpenCV projectPoints jacobian matrix is not correcthttp://answers.opencv.org/question/85769/opencv-projectpoints-jacobian-matrix-is-not-correct/We compared the Jacobian matrix coming from projectPoints in OpenCV 2.4.11 against one generated with MATLAB code (we wrote the projection formula exactly following the OpenCV source code). We found that when the tangential distortion coefficients (p1 and p2) are not zero, the MATLAB output and the OpenCV projectPoints output are not identical for the Jacobian of rotation and translation. It seems that the OpenCV source code is not correct for computing the Jacobian with respect to rotation and translation.
We are using Ubuntu 14.04 64-bit with gcc 4.8.4.
For your convenience, our MATLAB code is below:
% convert rotation vector to rotation matrix (world_in_camera)
syms rx ry rz real;
r=[rx; ry; rz];
theta=norm(r);
r=r/theta;
R=cos(theta)*eye(3)+(1-cos(theta))*(r*r')+sin(theta)*[0,-r(3), r(2);r(3),0,-r(1);-r(2),r(1),0];
% camera Position world_in_camera
syms tx ty tz real;
T = [tx; ty; tz];
syms fx fy cx cy real ;
% 3D points in camera coordinate system
syms X Y Z real; % position of 3D Points in world coordinate system
P = R*[X; Y; Z] + [tx; ty; tz];
% distortion
syms k1 k2 p1 p2 k3
x_perim = P(1)/P(3);
y_perim = P(2)/P(3);
r2 = (x_perim * x_perim) + (y_perim * y_perim);
r4 = r2 * r2;
r6 = r4 * r2;
cdist = 1 + k1*r2 + k2*r4 + k3*r6;
a1 = 2*x_perim*y_perim;
a2 = r2 + 2*x_perim*x_perim;
a3 = r2 + 2*y_perim*y_perim;
x_zegond = x_perim * cdist + p1 * a1 + p2 * a2;
y_zegond = y_perim * cdist + p1 * a3 + p2 * a1;
% 3D point projection equation
xp(X, Y, Z) = x_zegond * fx + cx;
yp(X, Y, Z) = y_zegond * fy + cy;
syms X3 Y3 Z3 real;
x = xp(X3, Y3, Z3);
y = yp(X3, Y3, Z3);
imagePoints=[x;y];
% disp('imagePoints = ');
% pretty(imagePoints);
dp_dfx = diff(imagePoints, fx);
dp_dfy = diff(imagePoints, fy);
dp_dcx = diff(imagePoints, cx);
dp_dcy = diff(imagePoints, cy);
dp_dk1 = diff(imagePoints, k1);
dp_dk2 = diff(imagePoints, k2);
dp_dp1 = diff(imagePoints, p1);
dp_dp2 = diff(imagePoints, p2);
dp_dk3 = diff(imagePoints, k3);
dp_drx = diff(imagePoints, rx);
dp_dry = diff(imagePoints, ry);
dp_drz = diff(imagePoints, rz);
dp_dtx = diff(imagePoints, tx);
dp_dty = diff(imagePoints, ty);
dp_dtz = diff(imagePoints, tz);
jacobianMat = [dp_drx, dp_dry, dp_drz ,...
dp_dtx, dp_dty, dp_dtz ,...
dp_dfx, dp_dfy, dp_dcx, dp_dcy, ...
dp_dk1, dp_dk2, dp_dp1, dp_dp2, dp_dk3];
fprintf('generating c code for jacobian \n');
ccode(jacobianMat,'file','jacobian.cpp');
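One way to arbitrate between the two implementations is a purely numerical check: compare each analytic Jacobian column against central finite differences of the projection function. A numpy sketch of the pattern, shown for the translation part of a distortion-free pinhole projection (illustrative values; the same idea applies to the full distorted model or to cv2.projectPoints itself):

```python
import numpy as np

def project(t, P=np.array([1.0, 2.0, 10.0]),
            fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    # Distortion-free pinhole projection of a fixed point P translated by t.
    x, y, z = P + t
    return np.array([fx * x / z + cx, fy * y / z + cy])

def numeric_jacobian(f, t, eps=1e-6):
    # Central finite differences of f with respect to the translation t.
    J = np.zeros((2, 3))
    for j in range(3):
        dt = np.zeros(3)
        dt[j] = eps
        J[:, j] = (f(t + dt) - f(t - dt)) / (2 * eps)
    return J

# Analytic d(u,v)/dt for the pinhole model at t = 0.
x, y, z = 1.0, 2.0, 10.0
J_analytic = np.array([[500 / z, 0.0, -500 * x / z**2],
                       [0.0, 500 / z, -500 * y / z**2]])
assert np.allclose(numeric_jacobian(project, np.zeros(3)), J_analytic, atol=1e-4)
```

Whichever analytic Jacobian agrees with the finite-difference one (within tolerance) is the correct one.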
navid_mahmoudianThu, 28 Jan 2016 07:23:20 -0600http://answers.opencv.org/question/85769/SolvePNP Bad Result Planar Markerhttp://answers.opencv.org/question/74327/solvepnp-bad-result-planar-marker/Hello,
I'm working with solvePnP and getting bad results for a planar marker. The marker is a basketball court, so the model size is 14 x 15.
The marker spans (-7, -7.5, 0) to (7, 7.5, 0). (Should this be normalized to 1 for homogeneous coordinates?) The chessboard used for calibration was 6x9 with a 0.025 square size; the chessboard and marker are in the same units of measurement.
We have 3D object points and 2D image points. The image is undistorted before the 2D points are detected, so distCoeffs can be cv::Mat() from here on out (i.e. no distortion).
The 2D points are for a marker of known size, 14 m by 15 m. The chessboard square size was also in meters (0.025 m).
//Make sure 64bit
rvec = cv::Mat(3, 1, CV_64F);
tvec = cv::Mat(3, 1, CV_64F);
bool flag = cv::solvePnP(allImagePoints3d, allImagePoints, camMatrix64, cv::Mat(), rvec, tvec);
std::vector<cv::Point2d> projectedPoints;
cv::Mat J;
cv::projectPoints(allImagePoints3d, rvec, tvec, camMatrix64, cv::Mat(), projectedPoints, J);
cv::Mat Sigma = cv::Mat(J.t() * J, cv::Rect(0,0,6,6)).inv();
// Compute standard deviation
cv::Mat std_dev;
sqrt(Sigma.diag(), std_dev);
std::cout << "rvec1, tvec1 standard deviation:" << std::endl << std_dev << std::endl;
I then draw these projected points in red and original points in green.
![image description](/upfiles/1445860686986108.png)
Any ideas? Other posts have suggested that rvec and tvec should be 64bit.
Using CV_P3P gets three correct marker locations, but the final point is way out. (CV_EPNP fails entirely.)
![image description](/upfiles/14458624175450669.png)
Full model...
- Green line: marker points detected. Used for homography.
- Hollow circles: model points under homography transform (z=0 ignored).
- Full circles: projectedPoints following solvePnP.
![image description](/upfiles/14458652875974858.png)
solvePnpRansac
![image description](/upfiles/14458667686753004.png)MRDanielMon, 26 Oct 2015 07:19:18 -0500http://answers.opencv.org/question/74327/World Co-ordinates and Object Co-ordinateshttp://answers.opencv.org/question/73149/world-co-ordinates-and-object-co-ordinates/Hello,
I am working with cv::solvePnP() and cv::projectPoints(). We have a fully calibrated camera with a known camera matrix and distortion coefficients.
Given a detected marker, it is possible to get the rvec and tvec for a given 3D model.
This has been done for two types of model: a board model and a ball model. We then get three sets of tvecs/rvecs, one for the board and two more for the balls, as shown below...
![image description](/upfiles/14448083761586458.png)
How do we relate these? We can project the model points into the image using the result of solvePnP.
How do the rvecs and tvecs relate in this case? Is it possible to get the location of each ball on the board in terms of its x,y,z location relative to the board model?
![image description](/upfiles/14448087363585152.png)
The board is shaped as (0,0), (1,0), (1,1), (0,1).
The balls are circles centered at 0, with radius 0.1, which is in scale with the real-world objects.
Process so far....
- Detect board corners.
- Detect ball locations, and fit a circle.
- Calculate rvecs and tvecs for board and the two balls.
- Use projectPoints to project model into image.
Can we get more information on where the balls are on the board? The ultimate aim is collision detection/prediction once locations and velocities are determined.
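For reference, given the board pose (R_b, t_b) and a ball pose (R_a, t_a), both mapping object coordinates into the camera frame (rvecs converted to matrices via Rodrigues), the ball's pose in the board frame follows by composing the transforms. A hedged numpy sketch with illustrative values:

```python
import numpy as np

# solvePnP gives, per model, a pose mapping object coordinates into the
# CAMERA frame: p_cam = R @ p_obj + t.  The ball expressed in the BOARD
# frame is then:  R_rel = R_b.T @ R_a,  t_rel = R_b.T @ (t_a - t_b).
def relative_pose(R_b, t_b, R_a, t_a):
    R_rel = R_b.T @ R_a           # ball orientation in board coordinates
    t_rel = R_b.T @ (t_a - t_b)   # ball position in board coordinates
    return R_rel, t_rel

# Illustration: board rotated 90 degrees about the camera Z axis, ball centre
# one unit along the camera X axis from the board origin.
R_b = np.array([[0.0, -1.0, 0.0],
                [1.0,  0.0, 0.0],
                [0.0,  0.0, 1.0]])
R_rel, t_rel = relative_pose(R_b, np.array([0.0, 0.0, 5.0]),
                             np.eye(3), np.array([1.0, 0.0, 5.0]))
assert np.allclose(t_rel, [0.0, -1.0, 0.0])   # one unit along board -Y
```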
Kind regards,
DanielMRDanielWed, 14 Oct 2015 02:51:11 -0500http://answers.opencv.org/question/73149/Using triangulatePoints() and projectPoints() with stereo camerashttp://answers.opencv.org/question/68228/using-triangulatepoints-and-projectpoints-with-stereo-cameras/ Hi, I have a stereo camera setup for which I am doing relative pose estimation of one camera w.r.t the other using the essential matrix. Once I computed the pose, I want to refine it; so I was thinking of triangulating the matched points into the 3D space, reprojecting them backwards and computing the reprojection error. Eventually I would also like to use solvePnP to refine my pose (I was trying bundle adjustment, but the results were awry: possibly because my extrinsics change over time)
The thing I am confused by is which R and t are used by the triangulatePoints() and projectPoints() functions. When I pass the projection matrices to triangulatePoints(), do I just pass K1[I|0] for the first camera and K2[R|t] for the second? And does projectPoints() require the R and t between the camera and the object, or from camera1 to camera2? Any suggestions would be helpful.
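For reference, with camera 1 taken as the reference frame, the projection matrices commonly passed to triangulatePoints are P1 = K1[I|0] and P2 = K2[R|t], where (R, t) map camera-1 coordinates into camera-2 coordinates. A numpy sketch (illustrative intrinsics and baseline):

```python
import numpy as np

# Hedged sketch: camera 1 is the reference frame; (R, t) is the pose of
# camera 2 relative to camera 1 (e.g. recovered from the essential matrix).
def projection_matrices(K1, K2, R, t):
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])   # K1 [I | 0]
    P2 = K2 @ np.hstack([R, t.reshape(3, 1)])            # K2 [R | t]
    return P1, P2

K = np.array([[700.0, 0.0, 320.0],     # illustrative intrinsics
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([-0.1, 0.0, 0.0])         # illustrative 10 cm baseline
P1, P2 = projection_matrices(K, K, R, t)

# A homogeneous world point X projects as x ~ P @ X.
X = np.array([0.0, 0.0, 2.0, 1.0])
x1 = P1 @ X; x1 = x1 / x1[2]
x2 = P2 @ X; x2 = x2 / x2[2]
assert np.isclose(x1[0] - x2[0], 700 * 0.1 / 2.0)   # disparity = fx * b / Z
```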
Thanks!saihvMon, 10 Aug 2015 01:06:45 -0500http://answers.opencv.org/question/68228/OpenCV Assertion failed when using perspective transformhttp://answers.opencv.org/question/59555/opencv-assertion-failed-when-using-perspective-transform/Hey all!
I am using the Aruco AR library for a project.
My project is based on the following concept: I would like to determine the camera position in the world coordinate system using AR markers. The marker positions are known in the world coordinate system. If I could determine the distance of each marker relative to my camera, I could determine the world coordinates of my camera. For this I would need the coordinates of at least two markers; with two markers I would get two candidate camera coordinates, so for a proper solution I would need three detected markers with distance estimates.
I have successfully done the marker detection, and I can also draw e.g. a cube on them, so the camera calibration is valid and everything works fine.
I have the following problem: when I try to use the cv::projectPoints function, I get the following errors:
OpenCV Error: Assertion failed (npoints >= 0 && (depth == CV_32F || depth == CV_64F)) in projectPoints, file /home/mirind4/DiplomaMSC/OpenCV/opencv-2.4.10/modules/calib3d/src/calibration.cpp, line 3349
Exception :/home/mirind4/DiplomaMSC/OpenCV/opencv-2.4.10/modules/calib3d/src/calibration.cpp:3349: error: (-215) npoints >= 0 && (depth == CV_32F || depth == CV_64F) in function projectPoints
I tried to find a solution for this error and found a seemingly good one ([link to the question](http://answers.opencv.org/question/18252/opencv-assertion-failed-for-perspective-transform/)), but in my code I do not know how to set the type of my data to "CvType.CV_32FC2".
I would like to ask two questions:
What would you advise for this error?
Do you think my concept for the camera pose estimation is sound?
Thanks in advance!
My source code is below. (I am using Aruco from Python, which is why you can see the "BOOST_PYTHON_MODULE" part at the end.)
#include <iostream>
#include <aruco/aruco.h>
#include <aruco/cvdrawingutils.h>
#include <opencv2/highgui/highgui.hpp>
#include <fstream>
#include <sstream>
#include <boost/python.hpp>
using namespace cv;
using namespace aruco;
string TheInputVideo;
string TheIntrinsicFile;
float TheMarkerSize=45;
int ThePyrDownLevel;
MarkerDetector MDetector;
VideoCapture TheVideoCapturer;
vector<Marker> TheMarkers;
//projected points vector
std::vector<cv::Point2f> projectedPoints;
Mat TheInputImage,TheInputImageCopy;
CameraParameters TheCameraParameters;
void cvTackBarEvents(int pos,void*);
bool readCameraParameters(string TheIntrinsicFile,CameraParameters &CP,Size size);
pair<double,double> AvrgTime(0,0) ;//determines the average time required for detection
double ThresParam1,ThresParam2;
int iThresParam1,iThresParam2;
int waitTime=0;
void Init(){
int vIdx = 1;
cout<<"Opening camera index "<<vIdx<<endl;
TheVideoCapturer.open(vIdx);
waitTime=10;
TheVideoCapturer>>TheInputImage;
//Camera parameters
TheCameraParameters.readFromXMLFile("camera.xml");
TheCameraParameters.resize(TheInputImage.size());
cout<<"Camera parameters valid? : " << TheCameraParameters.isValid()<<endl<<std::flush;
//Create gui
cv::namedWindow("thres",1);
cv::namedWindow("in",1);
MDetector.getThresholdParams( ThresParam1,ThresParam2);
MDetector.setCornerRefinementMethod(MarkerDetector::SUBPIX);
iThresParam1=ThresParam1;
iThresParam2=ThresParam2;
cv::createTrackbar("ThresParam1", "in",&iThresParam1, 13, cvTackBarEvents);
cv::createTrackbar("ThresParam2", "in",&iThresParam2, 13, cvTackBarEvents);
}
int yay()
{
try{
char key=0;
//capture until press ESC or until the end of the video
do{
TheVideoCapturer.retrieve( TheInputImage);
//Detection of markers in the image passed
MDetector.detect(TheInputImage,TheMarkers,TheCameraParameters,TheMarkerSize);
//print marker info and draw the markers in image
TheInputImage.copyTo(TheInputImageCopy);
for (unsigned int i=0;i<TheMarkers.size();i++) {
TheMarkers[i].draw(TheInputImageCopy,Scalar(0,0,255),1);
//cout<<TheMarkers[i].id<<std::flush;
cout<<"Tvec : "<<TheMarkers[i].Tvec<<endl;
cout<<"Rvec : "<<TheMarkers[i].Rvec<<endl;
//CvDrawingUtils::draw3dCube(TheInputImageCopy,TheMarkers[i],TheCameraParameters);
//projectPoints needs 3D object points; Marker stores the detected 2D image corners,
//so build the marker's corners in its own frame (z=0 plane, side length TheMarkerSize)
float s=TheMarkerSize/2.f;
std::vector<cv::Point3f> objCorners;
objCorners.push_back(cv::Point3f(-s, s,0));
objCorners.push_back(cv::Point3f( s, s,0));
objCorners.push_back(cv::Point3f( s,-s,0));
objCorners.push_back(cv::Point3f(-s,-s,0));
cv::projectPoints(objCorners,TheMarkers[i].Rvec,TheMarkers[i].Tvec,TheCameraParameters.CameraMatrix,TheCameraParameters.Distorsion,projectedPoints);
}
cv::imshow("in",TheInputImageCopy);
cv::imshow("thres",MDetector.getThresholdedImage());
key=cv::waitKey(waitTime);//wait for key to be pressed
}while(key!=27 && TheVideoCapturer.grab());
cv::destroyAllWindows();
return TheMarkers.size();
}
catch (std::exception &ex){
cout<<"Exception :"<<ex.what()<<endl;
cv::destroyAllWindows();
return 0;
}
}
void cvTackBarEvents(int pos,void*)
{
if (iThresParam1<3) iThresParam1=3;
if (iThresParam1%2!=1) iThresParam1++;
if (iThresParam2<1) iThresParam2=1;
ThresParam1=iThresParam1;
ThresParam2=iThresParam2;
MDetector.setThresholdParams(ThresParam1,ThresParam2);
//recompute
MDetector.detect(TheInputImage,TheMarkers,TheCameraParameters);
TheInputImage.copyTo(TheInputImageCopy);
for (unsigned int i=0;i<TheMarkers.size();i++) TheMarkers[i].draw(TheInputImageCopy,Scalar(0,0,255),1);
cv::imshow("in",TheInputImageCopy);
cv::imshow("thres",MDetector.getThresholdedImage());
}
BOOST_PYTHON_MODULE(libTestPyAruco)
{
using namespace boost::python;
def("yay", yay);
def("Init", Init);
}
The "running" Python file:
import libTestPyAruco
libTestPyAruco.Init()
print libTestPyAruco.yay()
mirind4Sat, 11 Apr 2015 08:51:31 -0500http://answers.opencv.org/question/59555/Calib3d.projectPoints - how to enable depth testinghttp://answers.opencv.org/question/53065/calib3dprojectpoints-how-to-enable-depth-testing/Is there any way to use Calib3d.projectPoints(...) and have it indicate that a point failed the depth-clip test? (Meaning that the value of the 'z' dimension of a point, after translation/rotation, is negative.)
I would have expected some flag that could be specified to enable this, and perhaps it returns 'NaNs' as the values of the points that fail the test (or something similar).
I'm using the Java OpenCV API, so maybe this exists in the C++ API but didn't make it into the Java one?
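As far as I know there is no such flag, but the depth test is easy to do by hand before projecting: transform the points into the camera frame and check the sign of z. A minimal numpy sketch (the names `rodrigues` and `project_with_depth_mask` are my own, not OpenCV API; it assumes points are row vectors and distortion is ignored):

```python
import numpy as np

def rodrigues(rvec):
    # Rotation vector -> 3x3 rotation matrix (Rodrigues formula).
    rvec = np.asarray(rvec, dtype=float).ravel()
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project_with_depth_mask(points, rvec, tvec, cam_mat):
    # Transform into the camera frame, flag points with z <= 0,
    # and return their projections as NaN so callers can filter them.
    R = rodrigues(rvec)
    cam = np.asarray(points, dtype=float) @ R.T + np.asarray(tvec, dtype=float).ravel()
    behind = cam[:, 2] <= 0
    uvw = cam @ np.asarray(cam_mat, dtype=float).T
    uv = uvw[:, :2] / uvw[:, 2:3]
    uv[behind] = np.nan
    return uv, behind

cam_mat = np.array([[800.0, 0.0, 320.0],
                    [0.0, 800.0, 240.0],
                    [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 5.0],     # in front of the camera
                [0.0, 0.0, -5.0]])   # behind the camera
uv, behind = project_with_depth_mask(pts, np.zeros(3), np.zeros(3), cam_mat)
print(uv)       # first row (320, 240); second row NaN
print(behind)   # [False  True]
```

The same pre-check works from Java by calling `Calib3d.Rodrigues` and doing the matrix multiply yourself before `projectPoints`.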
I can find nothing in the docs for the method, nor in the overview of the Calib3d module, that even mentions this isn't supported, which is confusing. If someone can confirm that there is indeed no built-in way to do this, that would also be helpful.visionyThu, 15 Jan 2015 03:31:00 -0600http://answers.opencv.org/question/53065/projectPoints example errorhttp://answers.opencv.org/question/39145/projectpoints-example-error/I'm currently trying to implement an example of OpenCV's [projectPoints][1] method. The idea behind this method is that, given a set of 3D points, a camera's translation/rotation vectors, and its distortion coefficients, it outputs the corresponding 2D points in the image plane.
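For reference, the mapping that projectPoints performs can be sketched in a few lines of numpy: transform into the camera frame, perspective-divide, apply the standard 5-coefficient (k1, k2, p1, p2, k3) distortion model, then the intrinsics. The function name `project_points` and the sample numbers below are my own, not from the post:

```python
import numpy as np

def project_points(obj_pts, R, t, K, dist):
    # Pinhole projection with the 5-coefficient distortion model
    # (k1, k2, p1, p2, k3) used by cv2.projectPoints.
    k1, k2, p1, p2, k3 = dist
    cam = np.asarray(obj_pts, float) @ np.asarray(R, float).T + np.asarray(t, float)
    x = cam[:, 0] / cam[:, 2]          # normalized image coordinates
    y = cam[:, 1] / cam[:, 2]
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    u = K[0][0] * x_d + K[0][2]        # apply intrinsics
    v = K[1][1] * y_d + K[1][2]
    return np.stack([u, v], axis=1)

K = [[1000.0, 0.0, 500.0],
     [0.0, 1000.0, 400.0],
     [0.0, 0.0, 1.0]]
pts = [[1.0, 1.0, 0.0]]
uv = project_points(pts, np.eye(3), [0.0, 0.0, 2.0], K, (0, 0, 0, 0, 0))
print(uv)   # [[1000. 900.]] -- x = y = 0.5 with zero distortion
```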
The source of code is as follows:
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <string>
std::vector<cv::Point3d> Generate3DPoints();
int main(int argc, char* argv[])
{
// Read 3D points
std::vector<cv::Point3d> objectPoints = Generate3DPoints();
std::vector<cv::Point2d> imagePoints;
cv::Mat intrisicMat(3, 3, cv::DataType<double>::type); // Intrinsic matrix
intrisicMat.at<double>(0, 0) = 1.6415318549788924e+003;
intrisicMat.at<double>(1, 0) = 0;
intrisicMat.at<double>(2, 0) = 0;
intrisicMat.at<double>(0, 1) = 0;
intrisicMat.at<double>(1, 1) = 1.7067753507885654e+003;
intrisicMat.at<double>(2, 1) = 0;
intrisicMat.at<double>(0, 2) = 5.3262822453148601e+002;
intrisicMat.at<double>(1, 2) = 3.8095355839052968e+002;
intrisicMat.at<double>(2, 2) = 1;
cv::Mat rVec(3, 1, cv::DataType<double>::type); // Rotation vector
rVec.at<double>(0) = -3.9277902400761393e-002;
rVec.at<double>(1) = 3.7803824407602084e-002;
rVec.at<double>(2) = 2.6445674487856268e-002;
cv::Mat tVec(3, 1, cv::DataType<double>::type); // Translation vector
tVec.at<double>(0) = 2.1158489381208221e+000;
tVec.at<double>(1) = -7.6847683212704716e+000;
tVec.at<double>(2) = 2.6169795190294256e+001;
cv::Mat distCoeffs(5, 1, cv::DataType<double>::type); // Distortion vector
distCoeffs.at<double>(0) = -7.9134632415085826e-001;
distCoeffs.at<double>(1) = 1.5623584435644169e+000;
distCoeffs.at<double>(2) = -3.3916502741726508e-002;
distCoeffs.at<double>(3) = -1.3921577146136694e-002;
distCoeffs.at<double>(4) = 1.1430734623697941e+002;
std::cout << "Intrinsic matrix: " << intrisicMat << std::endl << std::endl;
std::cout << "Rotation vector: " << rVec << std::endl << std::endl;
std::cout << "Translation vector: " << tVec << std::endl << std::endl;
std::cout << "Distortion coef: " << distCoeffs << std::endl << std::endl;
std::vector<cv::Point2d> projectedPoints; // element type must be double to match the Point3d inputs; Point2f here triggers the output-array type assertion
cv::projectPoints(objectPoints, rVec, tVec, intrisicMat, distCoeffs, projectedPoints);
for (unsigned int i = 0; i < projectedPoints.size(); ++i)
{
std::cout << "Object point: " << objectPoints[i] << " projected to " << projectedPoints[i] << std::endl; // imagePoints is never filled, so indexing it was out of bounds
}
std::cout << "Press any key to exit.";
std::cin.ignore();
std::cin.get();
return 0;
}
std::vector<cv::Point3d> Generate3DPoints()
{
std::vector<cv::Point3d> points;
double x, y, z;
x = .5; y = .5; z = -.5;
points.push_back(cv::Point3d(x, y, z));
x = .5; y = .5; z = .5;
points.push_back(cv::Point3d(x, y, z));
x = -.5; y = .5; z = .5;
points.push_back(cv::Point3d(x, y, z));
x = -.5; y = .5; z = -.5;
points.push_back(cv::Point3d(x, y, z));
x = .5; y = -.5; z = -.5;
points.push_back(cv::Point3d(x, y, z));
x = -.5; y = -.5; z = -.5;
points.push_back(cv::Point3d(x, y, z));
x = -.5; y = -.5; z = .5;
points.push_back(cv::Point3d(x, y, z));
for(unsigned int i = 0; i < points.size(); ++i)
{
std::cout << points[i] << std::endl << std::endl;
}
return points;
}
The application crashes when I try to run the projectPoints method and I have no idea why. Any help would be greatly appreciated.
EDIT: The error I get is the following:
OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0)) in cv::_OutputArray::create, file C:\builds\2_4_PackSlave-win64-vc12-shared\opencv\modules\core\src\matrix.cpp, line 1486
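That assertion fires when the output array's element type doesn't match what the double-precision (Point3d) inputs require. As a sanity check, the post's exact numbers can be re-run in numpy, where everything simply stays float64; the hand-rolled `rodrigues` helper below keeps the sketch independent of cv2:

```python
import numpy as np

def rodrigues(rvec):
    # Rotation vector -> 3x3 rotation matrix (Rodrigues formula).
    rvec = np.asarray(rvec, float).ravel()
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    Kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * Kx + (1 - np.cos(theta)) * (Kx @ Kx)

# Intrinsics, extrinsics, and distortion copied from the post.
K = np.array([[1.6415318549788924e+03, 0.0, 5.3262822453148601e+02],
              [0.0, 1.7067753507885654e+03, 3.8095355839052968e+02],
              [0.0, 0.0, 1.0]])
rvec = [-3.9277902400761393e-02, 3.7803824407602084e-02, 2.6445674487856268e-02]
tvec = np.array([2.1158489381208221e+00, -7.6847683212704716e+00,
                 2.6169795190294256e+01])
k1, k2, p1, p2, k3 = (-7.9134632415085826e-01, 1.5623584435644169e+00,
                      -3.3916502741726508e-02, -1.3921577146136694e-02,
                      1.1430734623697941e+02)

# The seven cube corners from Generate3DPoints().
obj = np.array([[.5, .5, -.5], [.5, .5, .5], [-.5, .5, .5], [-.5, .5, -.5],
                [.5, -.5, -.5], [-.5, -.5, -.5], [-.5, -.5, .5]])

cam = obj @ rodrigues(rvec).T + tvec          # camera-frame coordinates
x, y = cam[:, 0] / cam[:, 2], cam[:, 1] / cam[:, 2]
r2 = x * x + y * y
radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
uv = np.stack([K[0, 0] * xd + K[0, 2], K[1, 1] * yd + K[1, 2]], axis=1)
print(uv.shape)   # (7, 2)
```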
[1]: http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#projectpointsMainframeMon, 11 Aug 2014 08:39:33 -0500http://answers.opencv.org/question/39145/