OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright OpenCV foundation, 2012-2018. Wed, 03 Jun 2020 08:37:35 -0500

undistort() and undistortPoints() are inconsistent
==================================================
http://answers.opencv.org/question/230847/undistort-and-undistortpoints-are-inconsistent/

For testing I generate a grid image as a matrix, and again the grid points as a point array:
!["distorted" camera image with feature points](https://i.stack.imgur.com/9UDmw.png)
This represents a "distorted" camera image along with some feature points. When I now undistort both the image and the grid points, I get the following result:
![image and points after individual undistortion](https://i.stack.imgur.com/W81Hl.png)
![zoom into undistorted result](https://i.stack.imgur.com/PXERS.png)
(Note: the fact that the "distorted" image is straight and the "undistorted" image is warped is beside the point; I'm just testing the undistortion functions with a straight test image.)
The grid image and the red grid points are totally misaligned now. I googled and found that some people forget to specify the "new camera matrix" parameter in undistortPoints, but I didn't. The documentation also mentions a normalization, but I still have the problem when I use the identity matrix as the camera matrix. Also, in the central region it fits perfectly.
Why is this not identical, do I use something in a wrong way?
I use cv2 (4.1.0) in Python. Here is the code for testing:
<pre>
import numpy as np
import matplotlib.pyplot as plt
import cv2
w = 401
h = 301
# helpers
#--------
def plotImageAndPoints(im, pu, pv):
    plt.imshow(im, cmap="gray")
    plt.scatter(pu, pv, c="red", s=16)
    plt.xlim(0, w)
    plt.ylim(0, h)
    plt.show()

def cv2_undistortPoints(uSrc, vSrc, cameraMatrix, distCoeffs):
    uvSrc = np.array([np.matrix([uSrc, vSrc]).transpose()], dtype="float32")
    uvDst = cv2.undistortPoints(uvSrc, cameraMatrix, distCoeffs, None, cameraMatrix)
    uDst = [uv[0] for uv in uvDst[0]]
    vDst = [uv[1] for uv in uvDst[0]]
    return uDst, vDst
# test data
#----------
# generate grid image
img = np.ones((h, w), dtype = "float32")
img[0::20, :] = 0
img[:, 0::20] = 0
# generate grid points
uPoints, vPoints = np.meshgrid(range(0, w, 20), range(0, h, 20), indexing='xy')
uPoints = uPoints.flatten()
vPoints = vPoints.flatten()
# see if points align with the image
plotImageAndPoints(img, uPoints, vPoints) # perfect!
# undistort both image and points individually
#---------------------------------------------
# camera matrix parameters
fx = 1
fy = 1
cx = w/2
cy = h/2
# distortion parameters
k1 = 0.00003
k2 = 0
p1 = 0
p2 = 0
# convert for opencv
mtx = np.matrix([
[fx, 0, cx],
[ 0, fy, cy],
[ 0, 0, 1]
], dtype = "float32")
dist = np.array([k1, k2, p1, p2], dtype = "float32")
# undistort image
imgUndist = cv2.undistort(img, mtx, dist)
# undistort points
uPointsUndist, vPointsUndist = cv2_undistortPoints(uPoints, vPoints, mtx, dist)
# test if they still match
plotImageAndPoints(imgUndist, uPointsUndist, vPointsUndist) # awful!
</pre>
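One plausible explanation for the misalignment growing toward the edges: cv2.undistortPoints inverts the distortion model with a small, fixed number of fixed-point iterations, which converge quickly near the image center but slowly (or not at all) far from it. The model and its iterative inverse can be reproduced without OpenCV; this is a sketch in normalized coordinates with a made-up k1-only model, not the poster's values:

```python
import numpy as np

def distort(x, y, k1):
    # forward radial model (k1 only): x_d = x * (1 + k1 * r^2)
    r2 = x * x + y * y
    return x * (1 + k1 * r2), y * (1 + k1 * r2)

def undistort_iterative(xd, yd, k1, iters=30):
    # fixed-point iteration, the same scheme cv2.undistortPoints uses internally
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        x, y = xd / (1 + k1 * r2), yd / (1 + k1 * r2)
    return x, y

k1 = -0.2
xd, yd = distort(0.3, 0.2, k1)
xu, yu = undistort_iterative(xd, yd, k1)
```

With a mild coefficient and a point near the center the round trip recovers the input almost exactly; increasing the radius or the coefficient makes the iteration converge slowly or oscillate, which is consistent with the central region fitting perfectly while the edges drift.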
Any help appreciated!

(mqnc, Wed, 03 Jun 2020 08:37:35 -0500)

triangulatePoints and undistortPoints
=====================================
http://answers.opencv.org/question/229148/triangulatepoints-and-undistortpoints/

I need some help in understanding these two functions. As far as I know, triangulatePoints takes a pair of 2D pixel coordinates from calibrated images and returns the triangulated point in homogeneous coordinates. I know in general what homogeneous coordinates are, but I'm confused about the fourth element of the output: what does this scale factor refer to? After dividing all the coordinates by this last element, the result is supposed to be in Euclidean coordinates, but what is the origin of this Euclidean coordinate system? Is it the left camera, which is usually taken as the origin in stereoCalibrate?
Also, what are the units of the coordinate system? Is it the size of the checkerboard square?
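For reference: triangulatePoints returns points in the frame defined by the projection matrices you pass in (with matrices from a typical stereo calibration, that is the first/left camera), and the units are whatever units the calibration target was specified in (e.g. checkerboard squares if squareLength was given as 1). Converting the homogeneous 4xN output to Euclidean coordinates is just a division by the fourth row; the values below are made up:

```python
import numpy as np

# two homogeneous points as columns, as cv2.triangulatePoints returns them
points4d = np.array([[2.0, 0.5],
                     [4.0, 1.0],
                     [6.0, 1.5],
                     [2.0, 0.5]])
# divide x, y, z by the scale component w to get Euclidean coordinates
points3d = points4d[:3] / points4d[3]   # shape (3, N)
```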
In addition, I do not understand why we need undistortPoints and what exactly undistortPoints gives us. I am hoping to find all my answers here, because I could not properly understand the similar questions asked on Stack Overflow regarding these functions, and the documentation isn't very informative.

(PyNinja, Fri, 17 Apr 2020 13:39:17 -0500)

How to generate a 3D image based on ChArUco calibration of two 2D images
=========================================================================
http://answers.opencv.org/question/216785/how-to-generate-a-3d-image-based-on-charuco-calibration-of-two-2d-images/

I'm currently extracting the calibration parameters of two images that were taken in a stereo vision setup via `cv2.aruco.calibrateCameraCharucoExtended()`. I'm using the `cv2.undistortPoints()` and `cv2.triangulatePoints()` functions to convert any two 2D points to a 3D point coordinate, which works perfectly fine. I thus already have the intrinsic and extrinsic parameters of both cameras.
I'm now looking for a way to convert the 2D images, which can be seen under approach 1, to one 3D image. I need this 3D image because I would like to determine the order of these cups from left to right, to correctly use the triangulatePoints function. If I determine the order of the cups from left to right purely based on the x-coordinates of each of the 2D images, I get different results for each camera (the cup on the front left corner of the table for example is in a different 'order' depending on the camera angle).
Approach 1: Keypoint Feature Matching
-------------------------------------
I first thought of using a keypoint feature extractor like SIFT or SURF, so I tried some keypoint extraction and matching. I tried both the Brute-Force matcher and the FLANN-based matcher, but the results are not very good:
Brute-Force
![image description](https://answers.opencv.org/upfiles/15654577115338022.jpg)
FLANN-based
![image description](https://answers.opencv.org/upfiles/15654577277732372.jpg)
I also tried to swap the images, but it still gives more or less the same results.
Approach 2: ReprojectImageTo3D()
--------------------------------
I looked further into the issue and I think I need the `cv2.reprojectImageTo3D()` [[docs]](https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#reprojectimageto3d) function. However, to use this function, I first need the Q matrix, which has to be obtained with `cv2.stereoRectify` [[docs]](https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#stereorectify). This stereoRectify function in turn expects a couple of parameters that I'm able to provide, but there are two I'm confused about:
- R – Rotation matrix between the
coordinate systems of the first and
the second cameras.
- T – Translation vector between
coordinate systems of the cameras.
I do have the rotation and translation matrices for each camera separately, but not between them. Also, do I really need to run stereoRectify all over again when I already did a full ChArUco calibration and already have the camera matrix, distortion coefficients, rotation vectors and translation vectors?
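For what it's worth, if the per-camera extrinsics refer to the same board pose, the inter-camera R and T that stereoRectify expects can be composed from them. A numpy sketch with made-up poses (with real calibration output you would first convert each rvec to a matrix, e.g. via cv2.Rodrigues):

```python
import numpy as np

# Per-camera extrinsics map board coordinates into each camera:
#   x_cam_i = R_i @ X + t_i
# The camera-1 -> camera-2 transform is then:
#   R = R2 @ R1.T,   T = t2 - R @ t1
def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R1, t1 = rot_z(0.1), np.array([0.0, 0.0, 1.0])   # made-up pose, camera 1
R2, t2 = rot_z(0.3), np.array([0.2, 0.0, 1.0])   # made-up pose, camera 2
R = R2 @ R1.T
T = t2 - R @ t1

# sanity check: mapping a board point through camera 1 and then (R, T)
# must agree with mapping it directly into camera 2
X = np.array([0.5, -0.2, 2.0])
x1 = R1 @ X + t1
x2_direct = R2 @ X + t2
x2_via = R @ x1 + T
```

The key requirement is that R1, t1 and R2, t2 come from the same physical board placement; otherwise the composition is meaningless.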
Some extra info that might be useful
------------------------------------
I'm using 40 calibration images per camera of the ChArUco board to calibrate. I first extract all corners and markers after which I estimate the calibration parameters with the following code:
    (ret, camera_matrix, distortion_coefficients0,
     rotation_vectors, translation_vectors,
     stdDeviationsIntrinsics, stdDeviationsExtrinsics,
     perViewErrors) = cv2.aruco.calibrateCameraCharucoExtended(
        charucoCorners=allCorners,
        charucoIds=allIds,
        board=board,
        imageSize=imsize,
        cameraMatrix=cameraMatrixInit,
        distCoeffs=distCoeffsInit,
        flags=flags,
        # note: criteria flags must be combined with +/|, not & (EPS & COUNT == 0)
        criteria=(cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 10000, 1e-9))
The board parameter is created with the following settings:
    CHARUCO_BOARD = aruco.CharucoBoard_create(
        squaresX=9,
        squaresY=6,
        squareLength=4.4,
        markerLength=3.5,
        dictionary=ARUCO_DICT)
Thanks a lot in advance!

(Jérémy K, Sat, 10 Aug 2019 11:52:35 -0500)

Triangulate from stereo fisheye cameras?
========================================
http://answers.opencv.org/question/212405/triangulate-from-stereo-fisheye-cameras/

I am trying to triangulate points from a calibrated pair of fisheye cameras.
I have a calibration that looks pretty good (checked with viewing rectified images).
I have my P1 and P2 matrices from the stereo calibration process.
I have my matched orb points in left and right frames (checked with cv::drawMatches).
I run the following:

    std::vector<cv::Point2f> pntsMatchedLeft;
    std::vector<cv::Point2f> pntsMatchedRight;

    DetectKeypointsOrb(left, d1, kp1);
    DetectKeypointsOrb(right, d2, kp2);

    std::vector<cv::DMatch> matches;
    matches = RobustMatching(d1, d2, kp1, kp2);
    // check matches by drawing them.

    for (int x = 0; x < matches.size(); x++)
    {
        cv::Point2f pnt2dL = kp1[matches[x].queryIdx].pt;
        pntsMatchedLeft.push_back(pnt2dL);
        cv::Point2f pnt2dR = kp2[matches[x].trainIdx].pt; // kp2, not kp1, for the right image
        pntsMatchedRight.push_back(pnt2dR);
    }

    std::vector<cv::Point2f> pntsMatchedLeftUD;
    std::vector<cv::Point2f> pntsMatchedRightUD;
    cv::fisheye::undistortPoints(pntsMatchedLeft, pntsMatchedLeftUD, K1, D1);
    cv::fisheye::undistortPoints(pntsMatchedRight, pntsMatchedRightUD, K2, D2);

    cv::Mat point3d_homo;
    cv::triangulatePoints(P1, P2,
        pntsMatchedLeftUD, pntsMatchedRightUD,
        point3d_homo);

    assert(point3d_homo.cols == (int)pntsMatchedLeftUD.size());

    // create point cloud
    std::vector<Eigen::Vector3f> cloud;
    for (int i = 0; i < point3d_homo.cols; i++) {
        Eigen::Vector3f point;
        cv::Mat p3d;
        cv::Mat _p3h = point3d_homo.col(i);
        convertPointsFromHomogeneous(_p3h.t(), p3d);
        point.x() = p3d.at<double>(0);
        point.y() = p3d.at<double>(1);
        point.z() = p3d.at<double>(2);
        cloud.push_back(point);
    }
But the points that I get are, for example:

    1.65398e+22 0 0
    3.68707e+12 0 0
    5.02082e+15 1.42875e-27 0
    2.26795e+17 0 0
    -3.87201e+15 0 0
    -4.601e+24 0 0
    -1.35347e+20 0 0
    3.52328e+15 -1.29677e-25 0
    4.1635e+12 -1.29677e-25 0
    -1.08779e+15 -1.29677e-25 0
    1-5.17322e+24 0 0

(I would expect no zeros.)
What am I doing wrong here? Do I need to undistort the points as I am doing? Skipping that step seems to make minimal difference.
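One thing worth checking: cv::fisheye::undistortPoints without a P argument returns normalized coordinates, while P1 and P2 from a stereo calibration typically contain the camera matrices, so mixing the two conventions can produce wildly scaled output like the above. With normalized points, the projection matrices should be K-free, e.g. [I|0] and [R|t]. The linear (DLT) triangulation itself can be sketched in a few lines of numpy (toy rig, made-up values, assumed noise-free):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    # classic linear triangulation: each image point contributes two rows
    # of A, and the 3D point is the right null vector of A
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# toy rig: first camera at the origin, second shifted along x;
# K is folded in as identity because the points are assumed normalized
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])
h1 = P1 @ np.append(X_true, 1.0)
h2 = P2 @ np.append(X_true, 1.0)
x1, x2 = h1[:2] / h1[2], h2[:2] / h2[2]
X_est = triangulate_dlt(P1, P2, x1, x2)
```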
Thank you!

(antithing, Wed, 01 May 2019 05:55:57 -0500)

undistortPoints() returns odd/nonsensical values despite apparently functional camera calibration
=================================================================================================
http://answers.opencv.org/question/209335/undistortpoints-returns-oddnonsensical-values-despite-apparently-functional-camera-calibration/

Not the most advanced OpenCV user or math-skilled individual, so please bear with me.
I've been following [this short](https://medium.com/@kennethjiang/calibrate-fisheye-lens-using-opencv-333b05afa0b0) tutorial in an effort to calibrate a fisheye lens in OpenCV. So far, everything seems to be working as the tutorial prescribes: I was able to obtain a working camera matrix and distortion coefficients, and successfully undistort images (i.e. running them through the provided code produces images that appear correct). Following the second part of the tutorial, I've also been able to adjust the balance.
However, my application is that I want to undistort certain points (namely contours and the centers of bounding boxes) rather than entire images, for performance reasons. As such, **I thought I'd use [cv2.undistortPoints()](https://docs.opencv.org/3.4.5/da/d54/group__imgproc__transform.html#ga55c716492470bfe86b0ee9bf3a1f0f7e). My understanding is this should produce "ideal point coordinates", i.e. pixel coordinates corrected for the lens distortion.** However, this doesn't appear to be working as I expected.
Since the tutorial gives a K and a D matrix at the end, I figured I'd just plug those into undistortPoints.
    >>> cv2.fisheye.undistortPoints(
    ...     np.asarray([[[0, 0], [2592, 0], [0, 1944], [2592, 1944]]], dtype=np.float32),
    ...     np.array([[1076.7148792467171, 0.0, 1298.9712963540678], [0.0, 1078.515014983842, 929.9968760065017], [0.0, 0.0, 1.0]]),
    ...     np.array([[-0.016205134569390902], [-0.02434305021164351], [0.024555436941429715], [-0.008590717479362648]])
    ... )
    array([[[ 0.94239926,  0.67358345],
            [ 0.13487473, -0.09684527],
            [29.207176  , -22.761654  ],
            [ 1.4594778 ,  1.1426234 ]]], dtype=float32)
**Those sure aren't pixel coordinates. I thought that maybe they were normalized points, with the bounds of the image being -1 and 1, but these values still don't make sense even within that context.**
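They are plausibly normalized coordinates in the ideal (distortion-free) frame: (x, y) such that the pixel would be K @ (x, y, 1). They are not bounded by ±1, and for extreme fisheye corners like (0, 1944) the model can extrapolate to large values. Mapping them back through the K from the post is a single multiply (sketch; the point is the first output value above):

```python
import numpy as np

# camera matrix quoted in the question
K = np.array([[1076.7148792467171, 0.0, 1298.9712963540678],
              [0.0, 1078.515014983842, 929.9968760065017],
              [0.0, 0.0, 1.0]])

# first normalized point returned by fisheye.undistortPoints above
normalized = np.array([0.94239926, 0.67358345])

# pixel coordinates in the ideal camera: K @ (x, y, 1)
u, v, _ = K @ np.append(normalized, 1.0)
```

Equivalently, passing K (or a new camera matrix) as the P argument makes undistortPoints return pixel coordinates directly.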
I also attempted to plug in the values obtained from the second part of the tutorial using balance=1.0. If you're looking at the tutorial, this corresponds to `cv2.undistortPoints(my_test_points, scaled_K, dist_coefficients, R=numpy.eye(3), P=new_K)`:
    >>> cv2.fisheye.undistortPoints(
    ...     np.asarray([[[0, 0], [2592, 0], [0, 1944], [2592, 1944]]], dtype=np.float32),
    ...     np.array([[1076.7148792467171, 0.0, 1298.9712963540678], [0.0, 1078.515014983842, 929.9968760065017], [0.0, 0.0, 1.0]]),
    ...     np.array([[-0.019215744220979738], [-0.022168383678588813], [0.018999857407644722], [-0.003693599912847022]]),
    ...     R=np.eye(3),
    ...     P=np.array([[416.0971612201596, 0.0, 1304.304969960433], [0.0, 416.79282483962464, 927.3730022048695], [0.0, 0.0, 1.0]])
    ... )
    array([[[ -8029.981 ,  -5755.497 ],
            [  9563.489 ,  -5012.9556],
            [ 27344.436 , -19400.076 ],
            [-39704.24  , -31231.846 ]]], dtype=float32)
**Okay, those look more like pixel coordinates, but those still make no sense.**
At this point, I'm really not sure what to do. I've been struggling with this for quite some time now, so any and all help is truly appreciated. If you need the images, matrices, or anything else from me, I'm happy to provide it.
My camera is a [175° FOV RPi Camera (K)](https://www.waveshare.com/rpi-camera-k.htm) mounted on a Raspberry Pi, with the resolution at the maximum 2592×1944 for the purposes of this question. I'm using OpenCV 3.4.4 with Python 3.

(edelmanjm, Sat, 23 Feb 2019 23:40:47 -0600)

Stereo rectification: undistortPoints() different results from initUndistortRectifyMap() + remap()
===================================================================================================
http://answers.opencv.org/question/187458/stereo-rectification-undistortpoints-different-results-from-initundistortrectifymap-remap/

I have a calibrated stereo camera rig and want to rectify the coordinates of some tracked image points.
From the stereo calibration the camera matrices (ML, MR), the distortion coefficients (DL, DR), the rotation matrix (R) between those cameras and the translation vector (T) are obtained.
To get the parameters for the rectification, the following function is called:

    RL, RR, PL, PR, Q, _, _ = cv2.stereoRectify(ML, DL, MR, DR, IMG_SIZE, R, T, alpha=1)
**Rectify whole image**
Now I want to rectify the images from the left and right cameras. Before doing so, red circles are drawn around the tracked coordinates. To rectify the images I call:

    map_l = cv2.initUndistortRectifyMap(ML, DL, RL, PL, IMG_SIZE, cv2.CV_32FC1)
    map_r = cv2.initUndistortRectifyMap(MR, DR, RR, PR, IMG_SIZE, cv2.CV_32FC1)

    img_l = draw_circles(img_l, coordinates_left, "red")   # white image with red circles
    img_r = draw_circles(img_r, coordinates_right, "red")

    img_rect_l = cv2.remap(img_l, map_l[0], map_l[1], cv2.INTER_LINEAR)
    img_rect_r = cv2.remap(img_r, map_r[0], map_r[1], cv2.INTER_LINEAR)
**Rectify tracked points**
Instead of rectifying the whole image, I just want to rectify the tracked point coordinates. Therefore I use undistortPoints() and draw those over the rectified images in green:

    coordinates_left_rectified = cv2.undistortPoints(coordinates_left, ML, DL, R=RL, P=PL)
    coordinates_right_rectified = cv2.undistortPoints(coordinates_right, MR, DR, R=RR, P=PR)

    img_rect_l = draw_circles(img_rect_l, coordinates_left_rectified, "green")
    img_rect_r = draw_circles(img_rect_r, coordinates_right_rectified, "green")
Unfortunately I don't get the same results as with remap (image below): the green circles should align with the red ones. So what am I doing wrong? Any ideas?
Thank you!
![image description](/upfiles/15218008175854161.png)
(Obli, Fri, 23 Mar 2018 05:28:48 -0500)

Image space to world space
==========================
http://answers.opencv.org/question/183784/image-space-to-world-space/

I would like to cast a ray corresponding to the image points. I am not sure if this code is correct, but it does not give me the correct result.
    Ray PinholeCamera::shootRay(const Vector2 &uv) const
    {
        std::vector<cv::Point2f> op;
        op.push_back({uv.x, uv.y});
        std::vector<cv::Point2f> p;
        cv::undistortPoints(op, p, cameraMatrix, dist);
        auto viewspace = normalize(Vector3(p[0].x, p[0].y, 1.f));
        Vector3 dir = Vector3(invView * Vector4(viewspace, 0.f));
        return Ray(camPos, dir);
    }
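For comparison, here is what the back-projection should do for an ideal pinhole with no distortion: the camera-space ray direction for pixel (u, v) is inv(K) @ (u, v, 1), which is exactly what the normalized output of cv::undistortPoints gives with z = 1. A numpy sketch with a made-up K, including a round-trip check:

```python
import numpy as np

# made-up intrinsics, not from the question
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

uv = np.array([400.0, 300.0])
d = np.linalg.solve(K, np.append(uv, 1.0))   # camera-space direction
d /= np.linalg.norm(d)

# sanity check: projecting any point along the ray reproduces the pixel
p = 3.7 * d
proj = K @ p
uv_back = proj[:2] / proj[2]
```

If this camera-space check passes but the world-space ray is still wrong, the problem is more likely in the view-matrix inversion or row/column conventions than in the undistortion.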
(Tim Hsu, Fri, 02 Feb 2018 05:24:53 -0600)

undistortPoints not giving the exact inverse of distortion model
================================================================
http://answers.opencv.org/question/147530/undistortpoints-not-giving-the-exact-inverse-of-distortion-model/

Hello,
I was doing some tests using the [distortion model of OpenCV](http://docs.opencv.org/2.4/_images/math/331ebcd980b851f25de1979ebb67a2fed1c8477e.png). Basically, I implemented the distortion equations and checked whether the [cv::undistortPoints](http://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#undistortpoints) function gives me the inverse of these equations. I realized that [cv::undistortPoints](http://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#undistortpoints) does not exactly give the inverse of the distortion equations. When I saw this, I went to the implementation of [cv::undistortPoints](https://github.com/opencv/opencv/blob/master/modules/imgproc/src/undistort.cpp#L426-L583) and realized that in the end condition of the [iterative process](https://github.com/opencv/opencv/blob/master/modules/imgproc/src/undistort.cpp#L528-L536) that computes the inverse of the distortion model, OpenCV always does 5 iterations (or 0 iterations if no distortion coefficients are provided) and does not use any error metric on the undistorted point to check whether it is precisely undistorted. Having this in mind, I copied the function and modified the termination condition of the iterative process to take an error metric into account. This gave me the exact inverse of the distortion model. The code showing this is attached at the end of this post. My question is:
Does this happen because OpenCV prefers performance (spending a bit less time) over accuracy (spending a bit more time) or is this just a "bug"? (it is obvious that with the termination condition that I propose the function will take more time to undistort each point)
Thank you very much!
    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>
    #include <iostream>
    #include <sstream>
    #include <cmath>

    using namespace cv;

    // This is a copy of the OpenCV implementation
    void cvUndistortPoints_copy( const CvMat* _src, CvMat* _dst, const CvMat* _cameraMatrix,
                                 const CvMat* _distCoeffs,
                                 const CvMat* matR, const CvMat* matP )
    {
        double A[3][3], RR[3][3], k[8]={0,0,0,0,0,0,0,0}, fx, fy, ifx, ify, cx, cy;
        CvMat matA=cvMat(3, 3, CV_64F, A), _Dk;
        CvMat _RR=cvMat(3, 3, CV_64F, RR);
        const CvPoint2D32f* srcf;
        const CvPoint2D64f* srcd;
        CvPoint2D32f* dstf;
        CvPoint2D64f* dstd;
        int stype, dtype;
        int sstep, dstep;
        int i, j, n, iters = 1;

        CV_Assert( CV_IS_MAT(_src) && CV_IS_MAT(_dst) &&
            (_src->rows == 1 || _src->cols == 1) &&
            (_dst->rows == 1 || _dst->cols == 1) &&
            _src->cols + _src->rows - 1 == _dst->rows + _dst->cols - 1 &&
            (CV_MAT_TYPE(_src->type) == CV_32FC2 || CV_MAT_TYPE(_src->type) == CV_64FC2) &&
            (CV_MAT_TYPE(_dst->type) == CV_32FC2 || CV_MAT_TYPE(_dst->type) == CV_64FC2));

        CV_Assert( CV_IS_MAT(_cameraMatrix) &&
            _cameraMatrix->rows == 3 && _cameraMatrix->cols == 3 );

        cvConvert( _cameraMatrix, &matA );

        if( _distCoeffs )
        {
            CV_Assert( CV_IS_MAT(_distCoeffs) &&
                (_distCoeffs->rows == 1 || _distCoeffs->cols == 1) &&
                (_distCoeffs->rows*_distCoeffs->cols == 4 ||
                 _distCoeffs->rows*_distCoeffs->cols == 5 ||
                 _distCoeffs->rows*_distCoeffs->cols == 8));
            _Dk = cvMat( _distCoeffs->rows, _distCoeffs->cols,
                CV_MAKETYPE(CV_64F,CV_MAT_CN(_distCoeffs->type)), k);
            cvConvert( _distCoeffs, &_Dk );
            iters = 5;
        }
        if( matR )
        {
            CV_Assert( CV_IS_MAT(matR) && matR->rows == 3 && matR->cols == 3 );
            cvConvert( matR, &_RR );
        }
        else
            cvSetIdentity(&_RR);

        if( matP )
        {
            double PP[3][3];
            CvMat _P3x3, _PP=cvMat(3, 3, CV_64F, PP);
            CV_Assert( CV_IS_MAT(matP) && matP->rows == 3 && (matP->cols == 3 || matP->cols == 4));
            cvConvert( cvGetCols(matP, &_P3x3, 0, 3), &_PP );
            cvMatMul( &_PP, &_RR, &_RR );
        }

        srcf = (const CvPoint2D32f*)_src->data.ptr;
        srcd = (const CvPoint2D64f*)_src->data.ptr;
        dstf = (CvPoint2D32f*)_dst->data.ptr;
        dstd = (CvPoint2D64f*)_dst->data.ptr;
        stype = CV_MAT_TYPE(_src->type);
        dtype = CV_MAT_TYPE(_dst->type);
        sstep = _src->rows == 1 ? 1 : _src->step/CV_ELEM_SIZE(stype);
        dstep = _dst->rows == 1 ? 1 : _dst->step/CV_ELEM_SIZE(dtype);
        n = _src->rows + _src->cols - 1;

        fx = A[0][0];
        fy = A[1][1];
        ifx = 1./fx;
        ify = 1./fy;
        cx = A[0][2];
        cy = A[1][2];
        for( i = 0; i < n; i++ )
        {
            double x, y, x0, y0;
            if( stype == CV_32FC2 )
            {
                x = srcf[i*sstep].x;
                y = srcf[i*sstep].y;
            }
            else
            {
                x = srcd[i*sstep].x;
                y = srcd[i*sstep].y;
            }

            x0 = x = (x - cx)*ifx;
            y0 = y = (y - cy)*ify;

            // compensate distortion iteratively, stopping on the step size
            // instead of after a fixed number of iterations
            int max_iters(500);
            double e(1);
            for( j = 0; j < max_iters && e > 0; j++ )
            {
                double r2 = x*x + y*y;
                double icdist = (1 + ((k[7]*r2 + k[6])*r2 + k[5])*r2)/(1 + ((k[4]*r2 + k[1])*r2 + k[0])*r2);
                double deltaX = 2*k[2]*x*y + k[3]*(r2 + 2*x*x);
                double deltaY = k[2]*(r2 + 2*y*y) + 2*k[3]*x*y;
                double xant = x;
                double yant = y;
                x = (x0 - deltaX)*icdist;
                y = (y0 - deltaY)*icdist;
                e = pow(xant - x, 2) + pow(yant - y, 2);
            }

            double xx = RR[0][0]*x + RR[0][1]*y + RR[0][2];
            double yy = RR[1][0]*x + RR[1][1]*y + RR[1][2];
            double ww = 1./(RR[2][0]*x + RR[2][1]*y + RR[2][2]);
            x = xx*ww;
            y = yy*ww;

            if( dtype == CV_32FC2 )
            {
                dstf[i*dstep].x = (float)x;
                dstf[i*dstep].y = (float)y;
            }
            else
            {
                dstd[i*dstep].x = x;
                dstd[i*dstep].y = y;
            }
        }
    }
    void undistortPoints_copy( InputArray _src, OutputArray _dst,
                               InputArray _cameraMatrix,
                               InputArray _distCoeffs,
                               InputArray _Rmat=noArray(),
                               InputArray _Pmat=noArray() )
    {
        Mat src = _src.getMat(), cameraMatrix = _cameraMatrix.getMat();
        Mat distCoeffs = _distCoeffs.getMat(), R = _Rmat.getMat(), P = _Pmat.getMat();

        CV_Assert( src.isContinuous() && (src.depth() == CV_32F || src.depth() == CV_64F) &&
            ((src.rows == 1 && src.channels() == 2) || src.cols*src.channels() == 2));

        _dst.create(src.size(), src.type(), -1, true);
        Mat dst = _dst.getMat();

        CvMat _csrc = src, _cdst = dst, _ccameraMatrix = cameraMatrix;
        CvMat matR, matP, _cdistCoeffs, *pR=0, *pP=0, *pD=0;
        if( R.data )
            pR = &(matR = R);
        if( P.data )
            pP = &(matP = P);
        if( distCoeffs.data )
            pD = &(_cdistCoeffs = distCoeffs);

        cvUndistortPoints_copy(&_csrc, &_cdst, &_ccameraMatrix, pD, pR, pP);
    }
    // Distortion model implementation
    cv::Point2d distortPoint(cv::Point2d undistorted_point, cv::Mat camera_matrix,
                             std::vector<double> distort_coefficients){

        // Check that camera matrix is double
        if (!(camera_matrix.type() == CV_64F || camera_matrix.type() == CV_64FC1)){
            std::ostringstream oss;
            oss << "distortPoint(): Camera matrix type is wrong. It has to be a double matrix (CV_64)";
            throw std::runtime_error(oss.str());
        }

        // Create distorted point
        cv::Point2d distortedPoint;
        distortedPoint.x = (undistorted_point.x - camera_matrix.at<double>(0,2))/camera_matrix.at<double>(0,0);
        distortedPoint.y = (undistorted_point.y - camera_matrix.at<double>(1,2))/camera_matrix.at<double>(1,1);

        // Get model
        if (distort_coefficients.size() < 4 || distort_coefficients.size() > 8 ){
            throw std::runtime_error("distortPoint(): Invalid number of distortion coefficients.");
        }
        double k1(distort_coefficients[0]);
        double k2(distort_coefficients[1]);
        double p1(distort_coefficients[2]); // first tangential distortion coefficient
        double p2(distort_coefficients[3]); // second tangential distortion coefficient
        double k3(0);
        double k4(0);
        double k5(0);
        double k6(0);
        if (distort_coefficients.size() > 4)
            k3 = distort_coefficients[4];
        if (distort_coefficients.size() > 5)
            k4 = distort_coefficients[5];
        if (distort_coefficients.size() > 6)
            k5 = distort_coefficients[6];
        if (distort_coefficients.size() > 7)
            k6 = distort_coefficients[7];

        // Distort
        double xcx = distortedPoint.x;
        double ycy = distortedPoint.y;
        double r2 = pow(xcx, 2) + pow(ycy, 2);
        double r4 = pow(r2, 2);
        double r6 = pow(r2, 3);
        double k = (1 + k1*r2 + k2*r4 + k3*r6)/(1 + k4*r2 + k5*r4 + k6*r6); // k6*r6, not k5*r6
        distortedPoint.x = xcx*k + 2*p1*xcx*ycy + p2*(r2 + 2*pow(xcx,2));
        distortedPoint.y = ycy*k + p1*(r2 + 2*pow(ycy,2)) + 2*p2*xcx*ycy;
        distortedPoint.x = distortedPoint.x*camera_matrix.at<double>(0,0) + camera_matrix.at<double>(0,2);
        distortedPoint.y = distortedPoint.y*camera_matrix.at<double>(1,1) + camera_matrix.at<double>(1,2);

        // Exit
        return distortedPoint;
    }
    int main(int argc, char** argv){
        // Camera matrix
        double cam_mat_da[] = {1486.58092, 0, 1046.72507, 0, 1489.8659, 545.374244, 0, 0, 1};
        cv::Mat cam_mat(3, 3, CV_64FC1, cam_mat_da);

        // Distortion coefficients
        double dist_coefs_da[] = {-0.13827409, 0.29240721, -0.00088197, -0.00090189, 0};
        std::vector<double> dist_coefs(dist_coefs_da, dist_coefs_da + 5);

        // Distorted point
        cv::Point2d p0(0, 0);
        std::vector<cv::Point2d> p0_v;
        p0_v.push_back(p0);

        // Undistort point
        std::vector<cv::Point2d> ud_p_v;
        cv::undistortPoints(p0_v, ud_p_v, cam_mat, dist_coefs);
        cv::Point2d ud_p = ud_p_v[0];
        ud_p.x = ud_p.x*cam_mat.at<double>(0,0) + cam_mat.at<double>(0,2);
        ud_p.y = ud_p.y*cam_mat.at<double>(1,1) + cam_mat.at<double>(1,2);

        // Redistort point
        cv::Point2d p = distortPoint(ud_p, cam_mat, dist_coefs);

        // Undistort point using own termination of iterative process
        std::vector<cv::Point2d> ud_p_v_local;
        undistortPoints_copy(p0_v, ud_p_v_local, cam_mat, dist_coefs);
        cv::Point2d ud_p_local = ud_p_v_local[0];
        ud_p_local.x = ud_p_local.x*cam_mat.at<double>(0,0) + cam_mat.at<double>(0,2);
        ud_p_local.y = ud_p_local.y*cam_mat.at<double>(1,1) + cam_mat.at<double>(1,2);

        // Redistort point
        cv::Point2d p_local = distortPoint(ud_p_local, cam_mat, dist_coefs);

        // Display results
        std::cout << "Distorted original point: " << p0 << std::endl;
        std::cout << "Undistorted point (CV): " << ud_p << std::endl;
        std::cout << "Distorted point (CV): " << p << std::endl;
        std::cout << "Error in the distorted point (CV): " << sqrt(pow(p.x-p0.x,2) + pow(p.y-p0.y,2)) << std::endl;
        std::cout << "Undistorted point (Local): " << ud_p_local << std::endl;
        std::cout << "Distorted point (Local): " << p_local << std::endl;
        std::cout << "Error in the distorted point (Local): " << sqrt(pow(p_local.x-p0.x,2) + pow(p_local.y-p0.y,2)) << std::endl;

        // Exit
        return 0;
    }
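The behaviour the code above demonstrates can be reproduced in a few lines without OpenCV: distort a normalized point, then invert it with a fixed five iterations (OpenCV's choice) versus an error-based stop. A numpy sketch with a k1-only model and made-up values:

```python
import numpy as np

k1 = -0.3  # deliberately strong, made-up radial coefficient

def distort(p):
    # forward radial model (k1 only) on a normalized point p = (x, y)
    return p * (1 + k1 * (p @ p))

def undistort(pd, iters, tol=None):
    # fixed-point iteration as in OpenCV; optionally stop on step size
    p = pd.copy()
    for _ in range(iters):
        p_new = pd / (1 + k1 * (p @ p))
        step = np.linalg.norm(p_new - p)
        p = p_new
        if tol is not None and step < tol:
            break
    return p

p0 = np.array([0.9, 0.6])
pd = distort(p0)
# residual after re-distorting: fixed 5 iterations vs converged
err_5 = np.linalg.norm(distort(undistort(pd, iters=5)) - pd)
err_converged = np.linalg.norm(distort(undistort(pd, iters=500, tol=1e-12)) - pd)
```

With strong distortion, err_5 stays visibly nonzero while the converged version reaches machine precision, consistent with the observation in the post.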
(apalomer, Wed, 10 May 2017 05:04:03 -0500)

How is undistortPoints solved
=============================
http://answers.opencv.org/question/147322/how-is-undistortpoints-solved/

Hello,
I was trying to implement a different function to undistort points (to test potential new distortion models) and I came across the implementation that OpenCV has [here](https://github.com/opencv/opencv/blob/master/modules/imgproc/src/undistort.cpp#L426-L583). There, in the for loop ([here](https://github.com/opencv/opencv/blob/master/modules/imgproc/src/undistort.cpp#L528-L536)), it actually computes the undistorted point. Can somebody tell me how it solves the equation system? I have tried to match it to some standard iterative solvers (least squares, Newton-Raphson...) and I don't see how it does it.
Thank you very much!

(apalomer, Tue, 09 May 2017 10:01:10 -0500)

UndistortPoints odd results
===========================
http://answers.opencv.org/question/134038/undistortpoints-odd-results/

Hi all,
I've calibrated my camera and here are the distortion parameters:

    [ 7.0576386285112147e-02, -5.0734456409579369e+00, -1.1508247483618957e-02,
      -3.9730820350519589e-03, 8.0251688016585078e+01 ]

My problem is that when I undistort the point (552, 320), I get (0.146645, -0.104564). What can be the cause of this?
How can I get the undistorted point in pixel coordinates, for example (560, 315)?
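This looks like the normalized-coordinates convention: without a P argument, undistortPoints returns (x, y) such that the pixel is K @ (x, y, 1). Converting back to pixels only needs the calibrated fx, fy, cx, cy; the intrinsics below are made up, not the poster's calibration:

```python
# undistortPoints with no P returns normalized (x, y); map back to
# pixels with u = fx*x + cx, v = fy*y + cy.  Made-up intrinsics:
fx, fy, cx, cy = 1000.0, 1000.0, 640.0, 360.0

x, y = 0.146645, -0.104564   # normalized output like in the post
u = fx * x + cx              # pixel coordinates
v = fy * y + cy
```

Alternatively, pass the camera matrix as the P argument of undistortPoints to get pixel coordinates directly.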
(Oguzhan, Tue, 14 Mar 2017 11:04:32 -0500)

Difference between undistortPoints() and projectPoints() in OpenCV
===================================================================
http://answers.opencv.org/question/129425/difference-between-undistortpoints-and-projectpoints-in-opencv/

From my understanding, undistortPoints() takes a set of points on a distorted image and calculates where their coordinates would be on an undistorted version of the same image. projectPoints() maps a set of object coordinates to their corresponding image coordinates.
However, I am unsure whether projectPoints() maps the object coordinates to image points on the distorted image (i.e. the original image) or on one that has been undistorted (straight lines).
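One way to see the relation: projectPoints goes 3D point, then perspective division, then distortion, then camera matrix, so its output lies on the original (distorted) image; undistortPoints runs that chain backwards from pixels to ideal normalized coordinates. A numpy sketch with a made-up k1-only camera (this is the model, not the OpenCV calls themselves):

```python
import numpy as np

k1, fx, fy, cx, cy = -0.2, 500.0, 500.0, 320.0, 240.0
X = np.array([0.2, -0.1, 2.0])        # 3D point in camera coordinates

# forward (projectPoints-like): normalize, distort, apply K
x, y = X[0] / X[2], X[1] / X[2]
r2 = x * x + y * y
xd, yd = x * (1 + k1 * r2), y * (1 + k1 * r2)
u, v = fx * xd + cx, fy * yd + cy     # pixel on the DISTORTED image

# reverse (undistortPoints-like): remove K, invert distortion iteratively
xd2, yd2 = (u - cx) / fx, (v - cy) / fy
xu, yu = xd2, yd2
for _ in range(50):
    r2 = xu * xu + yu * yu
    xu, yu = xd2 / (1 + k1 * r2), yd2 / (1 + k1 * r2)
```

The reverse pass recovers the ideal normalized coordinates (x, y) that the forward pass started from, which is the sense in which undistortPoints is the "reverse transformation" to projectPoints.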
Furthermore, the OpenCV documentation for undistortPoints states that "the function performs a reverse transformation to projectPoints()". Could you please explain how this is so?

(rueynshard, Tue, 21 Feb 2017 08:35:58 -0600)

Find direction from cameraMatrix and distCoeff
==============================================
http://answers.opencv.org/question/96359/find-direction-from-cameramatrix-and-distcoeff/

Hi guys,
I have calibrated my camera by detecting checkerboard patterns and running calibrateCamera, retrieving the cameraMatrix and distortion coefficients. These I can plug into projectPoints alongside 3D positions in the camera's space and retrieve the UV where the point is projected in the imperfect camera.
I'm using a 3D point that projects into a point near my top-left image coordinate corner.
This is all fine, but now I want to go the other way and convert a point in my distorted U,V coordinate into a directional vector pointing at all the points that would be projected into this UV coordinate.
I have tried playing around with the undistortPoints function to find the ideal points U,V, and from those used the cameraMatrix to find a point somewhere along the line, picking values from the cameraMatrix:

    X = (U - c_x) / f_x
    Y = (V - c_y) / f_y
    Z = 1
But I can't seem to hit a direction that is pointing very close to the 3D point i started from.
Any idea what I might be doing wrong?
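An end-to-end check of the recipe above can be done in numpy: project a 3D point through a distorting camera, undistort the pixel, and verify the recovered direction (X, Y, 1) points at the original 3D point. All camera values here are made up, and note that undistortPoints (without P) already returns the normalized (X, Y), so the K division should only be applied to raw pixels:

```python
import numpy as np

k1, fx, fy, cx, cy = -0.15, 700.0, 700.0, 400.0, 300.0
P = np.array([0.4, 0.25, 2.5])          # 3D point in the camera frame

# project with distortion (what projectPoints does, k1-only model)
x, y = P[0] / P[2], P[1] / P[2]
r2 = x * x + y * y
u = fx * x * (1 + k1 * r2) + cx
v = fy * y * (1 + k1 * r2) + cy

# undistort back to ideal normalized coordinates
xd, yd = (u - cx) / fx, (v - cy) / fy
xu, yu = xd, yd
for _ in range(50):
    r2 = xu * xu + yu * yu
    xu, yu = xd / (1 + k1 * r2), yd / (1 + k1 * r2)

# direction (X, Y, 1), compared against the true point direction
direction = np.array([xu, yu, 1.0])
direction /= np.linalg.norm(direction)
cosang = direction @ P / np.linalg.norm(P)
```

If a check like this passes with your calibration but the real data still misses, a common culprit is mixing up whether the input to the K subtraction was a raw pixel or an already-normalized undistortPoints output.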
kind regards
Jesper Taxbøl

(TAXfromDK, Mon, 13 Jun 2016 15:54:11 -0500)

Approximation method in cv::undistortPoints
===========================================
http://answers.opencv.org/question/89082/approximation-method-in-cvundistortpoints/

The function `cv::undistortPoints()` applies the reverse of the lens distortion to a set of observed point coordinates. The lens distortion models available in OpenCV have no closed-form inverse, which means an approximation has to be made, and indeed, from the documentation:
> ...undistort() is an approximate
> iterative algorithm that estimates the
> normalized original point coordinates
> out of the normalized distorted point
> coordinates (“normalized” means that
> the coordinates do not depend on the
> camera matrix).
**My question is therefore:** where can I find information on the approximation method used in `undistortPoints`? What are its characteristics? How was it derived? Under what conditions is it likely to succeed or fail?
Handling lens distortion well is integral to many 3D reconstruction applications, so some clarity here would really be helpful.

(npwest, Wed, 02 Mar 2016 04:01:17 -0600)

triangulatePoints() function
============================
http://answers.opencv.org/question/62682/triangulatepoints-function/

I don't understand what exactly projPoints1 and projPoints2 are in the triangulatePoints() built-in function of the calib3d module. Here is the C++ API of triangulatePoints():
    void triangulatePoints(InputArray projMatr1, InputArray projMatr2, InputArray projPoints1, InputArray projPoints2, OutputArray points4D)
When a point is detected in the left and right images, do we have to use the undistortPoints() function of the imgproc module in order to obtain collinear points and then send the results to triangulatePoints() as projPoints1 and projPoints2? Or is it correct to send the detected 2D image points (including lens distortion) directly to triangulatePoints()? The OpenCV documentation of triangulatePoints() just says:
> projPoints1 – 2xN array of feature points in the first image
> projPoints2 – 2xN array of feature points in the second image
where N is the number of features. The MATLAB Computer Vision Toolbox has a similar triangulation function, and before using the triangulate function there, they undistort the detected feature coordinates. See this example:
http://www.mathworks.com/help/vision/ref/triangulate.html
They have a warning on this page:
The triangulate function does not account for lens distortion. You can undistort the images using the undistortImage function before detecting the points. Alternatively, you can undistort the points themselves using the undistortPoints function.
I wonder whether it is the same in OpenCV or not. I would be the happiest person in the world if someone could respond.
taha, Wed, 27 May 2015 02:09:21 -0500, http://answers.opencv.org/question/62682/
undistortPoints, findEssentialMat, recoverPose: What is the relation between their arguments?
http://answers.opencv.org/question/65788/undistortpoints-findessentialmat-recoverpose-what-is-the-relation-between-their-arguments/
**TL;DR**: What relation should hold between the arguments passed to `undistortPoints`, `findEssentialMat` and `recoverPose`?
I have code like the following in my program
Mat mask; // inlier mask
undistortPoints(imgpts1, imgpts1, K, dist_coefficients, noArray(), K);
undistortPoints(imgpts2, imgpts2, K, dist_coefficients, noArray(), K);
Mat E = findEssentialMat(imgpts1, imgpts2, 1, Point2d(0,0), RANSAC, 0.999, 3, mask);
correctMatches(E, imgpts1, imgpts2, imgpts1, imgpts2);
recoverPose(E, imgpts1, imgpts2, R, t, 1.0, Point2d(0,0), mask);
I undistort the points before finding the essential matrix. The docs state that one can pass the new camera matrix as the last argument; when it is omitted, points are returned in *normalized* coordinates (roughly between -1 and 1). In that case, I would expect to pass 1 for the focal length and (0,0) for the principal point to `findEssentialMat`, since the points are normalized. So I would think this to be the way:
1. **Possibility 1** (normalize coordinates)
Mat mask; // inlier mask
undistortPoints(imgpts1, imgpts1, K, dist_coefficients);
undistortPoints(imgpts2, imgpts2, K, dist_coefficients);
Mat E = findEssentialMat(imgpts1, imgpts2, 1.0, Point2d(0,0), RANSAC, 0.999, 3, mask);
correctMatches(E, imgpts1, imgpts2, imgpts1, imgpts2);
recoverPose(E, imgpts1, imgpts2, R, t, 1.0, Point2d(0,0), mask);
2. **Possibility 2** (do not normalize coordinates)
Mat mask; // inlier mask
undistortPoints(imgpts1, imgpts1, K, dist_coefficients, noArray(), K);
undistortPoints(imgpts2, imgpts2, K, dist_coefficients, noArray(), K);
double focal = K.at<double>(0,0);
Point2d principalPoint(K.at<double>(0,2), K.at<double>(1,2));
Mat E = findEssentialMat(imgpts1, imgpts2, focal, principalPoint, RANSAC, 0.999, 3, mask);
correctMatches(E, imgpts1, imgpts2, imgpts1, imgpts2);
recoverPose(E, imgpts1, imgpts2, R, t, focal, principalPoint, mask);
However, I have found, that I only get reasonable results when I tell `undistortPoints` that the old camera matrix shall still be valid (I guess in that case only distortion is removed) and pass arguments to `findEssentialMat` as if the points were normalized, which they are not.
Is this a bug, insufficient documentation or user error?
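The relation `F = K^-T * E * K^-1` connecting the essential and fundamental matrices can be checked numerically: with F built that way, the epipolar constraint holds directly on pixel coordinates. A small sketch with a hypothetical K and a pure-translation essential matrix (R = I, t = (-1, 0, 0), so E = [t]_x):

```python
import numpy as np

# Hypothetical intrinsics and a pure-translation essential matrix.
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])
E = np.array([[0., 0., 0.],      # E = [t]_x R with R = I, t = (-1, 0, 0)
              [0., 0., 1.],
              [0., -1., 0.]])

Kinv = np.linalg.inv(K)
F = Kinv.T @ E @ Kinv            # F = K^-T * E * K^-1

# A synthetic correspondence: project X = (0.5, 0.2, 5) into both cameras.
X = np.array([0.5, 0.2, 5.0])
p1 = K @ (X / X[2])                       # camera 1 at the origin
p2 = K @ ((X + [-1., 0., 0.]) / X[2])     # camera 2 shifted along x

# Epipolar constraint in pixel coordinates: p2^T F p1 = 0.
residual = p2 @ F @ p1
```

This illustrates why `correctMatches` is happy with pixel coordinates plus F, or normalized coordinates plus E, but not a mix of the two.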
**Update**
It might be that `correctMatches` should be called with (non-normalized) image/pixel coordinates and the fundamental matrix, not E; this may be another mistake in my computation. The fundamental matrix can be obtained as `F = K^-T * E * K^-1`.
themightyoarfish, Wed, 08 Jul 2015 05:43:24 -0500, http://answers.opencv.org/question/65788/
Making use of MATLAB cameraParams in OpenCV program
http://answers.opencv.org/question/59438/making-use-of-matlab-cameraparams-in-opencv-program/
Hello,
I have a MATLAB program that loads two images and returns two camera matrices and a cameraParams object with distortion coefficients, etc. I would now like to use this exact configuration to undistort points and so on, in an OpenCV program that triangulates points given their 2D locations in two different videos.
function [cameraMatrix1, cameraMatrix2, cameraParams] = setupCameraCalibration(leftImageFile, rightImageFile, squareSize)
% Auto-generated by cameraCalibrator app on 20-Feb-2015
The thing is, the output of undistortPoints is different in MATLAB and OpenCV even though both use the same arguments.
As an example:
>> undistortPoints([485, 502], defaultCameraParams)
ans = 485 502
In Java, the following test mimics the above (it passes).
public void testUnDistortPoints() {
MatOfPoint2f src = new MatOfPoint2f(new Point(485d, 502d));
MatOfPoint2f dst = new MatOfPoint2f();
Mat defaultCameraMatrix = Mat.eye(3, 3, CvType.CV_64FC1);
Mat defaultDistCoefficientMatrix = Mat.zeros(1, 4, CvType.CV_64FC1); // zeros: a plain new Mat is not guaranteed to be zero-initialized
Imgproc.undistortPoints(
src,
dst,
defaultCameraMatrix,
defaultDistCoefficientMatrix
);
assertEquals(dst.get(0, 0)[0], 485d);
assertEquals(dst.get(0, 0)[1], 502d);
}
However, say I change the first distortion coefficient (k1). In MATLAB:
changedDist = cameraParameters('RadialDistortion', [2 0 0])
>> undistortPoints([485, 502], changedDist)
ans = 4.8756 5.0465
In Java:
public void testUnDistortPointsChangedDistortion() {
MatOfPoint2f src = new MatOfPoint2f(new Point(485f, 502f));
MatOfPoint2f dst = new MatOfPoint2f();
Mat defaultCameraMatrix = Mat.eye(3, 3, CvType.CV_64FC1);
Mat distCoefficientMatrix = Mat.zeros(1, 4, CvType.CV_64FC1);
distCoefficientMatrix.put(0, 0, 2); // updated
Imgproc.undistortPoints(
src,
dst,
defaultCameraMatrix,
distCoefficientMatrix
);
System.out.println(dst.dump());
assertEquals(dst.get(0, 0)[0], 4.8756d);
assertEquals(dst.get(0, 0)[1], 5.0465d);
}
It fails with the following output:
[0.0004977131, 0.0005151587]
junit.framework.AssertionFailedError:
Expected :4.8756
Actual :4.977131029590964E-4
Why are the results different? I thought Java's distortion coefficient matrix includes both the radial and tangential distortion coefficients.
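The specific numbers in the failing test can be explained by OpenCV's iterative undistortion: each step divides the distorted coordinate by the radial factor evaluated at the current radius, and with k1 = 2 at a radius of roughly 700 pixels the iteration cannot converge to the true inverse. A quick back-of-the-envelope check (plain Python, no OpenCV needed) reproduces the reported output to about seven significant figures:

```python
# One fixed-point step of the undistortion divides the input by the
# radial factor 1 + k1*r^2 evaluated at the *distorted* radius; with
# k1 = 2 this value is what the oscillating iteration ends up reporting.
x, y = 485.0, 502.0
r2 = x * x + y * y
radial = 1.0 + 2.0 * r2        # k1 = 2, all other coefficients zero
approx = (x / radial, y / radial)   # close to (0.0004977, 0.0005152)
```

MATLAB's undistortPoints solves the inverse differently, and its answer actually satisfies the forward model: 4.8756 * (1 + 2 * (4.8756^2 + 5.0465^2)) is approximately 485. The two libraries only agree when the distortion is mild enough for OpenCV's iteration to converge.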
Also, is CV_64FC1 a good choice of type for the camera / distortion coefficient matrices?
I was trying to test the effect of changing the camera matrix itself (i.e. the value of f_x), but it's not possible to set the 'IntrinsicMatrix' parameter when using cameraParameters, so I want to solve the distortion matrix problem first.
Any help would be greatly appreciated.
Marcinkonys, Thu, 09 Apr 2015 19:20:09 -0500, http://answers.opencv.org/question/59438/
Why do we pass R and P to the undistortPoints() fcn (calib3d module)?
http://answers.opencv.org/question/57314/why-do-we-pass-r-and-p-to-undistortpoints-fcn-calib3d-module/
I have two AVT Manta G125B cameras. I calibrated each camera individually and then performed stereo calibration. I am trying to triangulate a point of interest in real time. I noticed that the triangulatePoints() function of the calib3d module accepts undistorted image point coordinates as input, so I need to use the undistortPoints() function to obtain ideal point coordinates. As far as I know, it should be sufficient to pass only the cameraMatrix and distCoeffs parameters to undistortPoints(), which finds the undistorted coordinates as a nonlinear least-squares solution. I do not understand why we also need to pass R and P (obtained with the stereoRectify() fcn) to undistortPoints().
void undistortPoints(InputArray src, OutputArray dst, InputArray cameraMatrix, InputArray distCoeffs, InputArray R=noArray(), InputArray P=noArray())
taha, Wed, 11 Mar 2015 22:47:17 -0500, http://answers.opencv.org/question/57314/
undistortPoints: how to use
http://answers.opencv.org/question/53966/undistortpoints-how-to-use/
Hi,
we want to use the function undistortPoints to find the undistorted positions of some points (we already have the points and don't want to use the undistort function with an image). We want the undistorted points in pixels, but we get result points between 0 and 1. It seems that the result is normalized? How do we find the position in pixels?
next, Wed, 28 Jan 2015 10:30:21 -0600, http://answers.opencv.org/question/53966/