OpenCV Q&A Forum RSS feed (http://answers.opencv.org/questions/). Copyright OpenCV foundation, 2012-2018.

How to generate a 3D image based on ChArUco calibration of two 2D images
========================================================================
*http://answers.opencv.org/question/216785/how-to-generate-a-3d-image-based-on-charuco-calibration-of-two-2d-images/*

I'm currently extracting the calibration parameters of two images that were taken in a stereo vision setup via `cv2.aruco.calibrateCameraCharucoExtended()`. I'm using the `cv2.undistortPoints()` and `cv2.triangulatePoints()` functions to convert any pair of corresponding 2D points into a 3D coordinate, which works perfectly fine. I thus already have the intrinsic and extrinsic parameters of both cameras.
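For reference, the linear (DLT) triangulation that `cv2.triangulatePoints()` performs can be sketched in plain NumPy. This is an illustrative sketch of the technique, not the exact OpenCV implementation; it assumes the image points are already undistorted and expressed in the same coordinates as the two 3x4 projection matrices:

```python
import numpy as np

def triangulate_dlt(P1, P2, pt1, pt2):
    """Linear triangulation of one point pair from two 3x4 projection matrices."""
    # Each image point contributes two rows to the homogeneous system A @ X = 0.
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two toy cameras: identity pose, and a one-unit baseline along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.3, -0.2, 5.0])
X_hat = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_hat, X_true, atol=1e-6))  # noise-free points triangulate exactly
```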
I'm now looking for a way to convert the two 2D images, which can be seen under Approach 1, into one 3D image. I need this 3D representation to determine the left-to-right order of the cups in the scene, so that I can feed correctly matched points to `triangulatePoints()`. If I determine the order of the cups purely from the x-coordinates in each of the 2D images, I get a different result per camera (the cup on the front-left corner of the table, for example, ends up at a different position in the ordering depending on the camera angle).
Approach 1: Keypoint Feature Matching
-------------------------------------
I first considered a keypoint feature extractor like SIFT or SURF, so I tried keypoint extraction and matching. I tried both the Brute-Force matcher and the FLANN-based matcher, but the results are not really good:
Brute-Force
![image description](https://answers.opencv.org/upfiles/15654577115338022.jpg)
FLANN-based
![image description](https://answers.opencv.org/upfiles/15654577277732372.jpg)
I also tried swapping the images, but the results are more or less the same.
Approach 2: ReprojectImageTo3D()
--------------------------------
I looked further into the issue and I think I need the `cv2.reprojectImageTo3D()` [[docs]](https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#reprojectimageto3d) function. However, to use this function, I first need the Q matrix, which has to be obtained with `cv2.stereoRectify` [[docs]](https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#stereorectify). This stereoRectify function in turn expects a couple of parameters that I'm able to provide, but there are two I'm confused about:
- R: rotation matrix between the coordinate systems of the first and the second camera.
- T: translation vector between the coordinate systems of the cameras.
I do have the rotation and translation vectors for each camera separately (relative to the board), but not between the cameras. Also, do I really need to run stereoRectify all over again when I have already done a full ChArUco calibration and already have the camera matrices, distortion coefficients, rotation vectors and translation vectors?
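The relative R and T that stereoRectify expects can be composed from the per-camera extrinsics: if each camera's rvec/tvec maps board coordinates into that camera's frame, then R = R2·R1ᵀ and T = t2 − R·t1. A minimal NumPy sketch under that assumption (with a small Rodrigues helper standing in for `cv2.Rodrigues` so the snippet is self-contained):

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix (same convention as cv2.Rodrigues)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, float).ravel() / theta
    Kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * Kx + (1 - np.cos(theta)) * (Kx @ Kx)

def relative_pose(rvec1, tvec1, rvec2, tvec2):
    """R, T mapping camera-1 coordinates into camera-2 coordinates,
    assuming each (rvec, tvec) maps board points into that camera's frame."""
    R1, R2 = rodrigues(rvec1), rodrigues(rvec2)
    t1 = np.asarray(tvec1, float).reshape(3)
    t2 = np.asarray(tvec2, float).reshape(3)
    R = R2 @ R1.T          # x_cam2 = R @ x_cam1 + T
    T = t2 - R @ t1
    return R, T

# Sanity check: a board point seen by both cameras is linked by (R, T).
rvec1, tvec1 = np.array([0.1, -0.2, 0.05]), np.array([0.3, 0.1, 2.0])
rvec2, tvec2 = np.array([-0.05, 0.3, 0.0]), np.array([-0.4, 0.0, 2.5])
Xw = np.array([0.7, -0.3, 1.2])
x1 = rodrigues(rvec1) @ Xw + tvec1
x2 = rodrigues(rvec2) @ Xw + tvec2
R, T = relative_pose(rvec1, tvec1, rvec2, tvec2)
print(np.allclose(R @ x1 + T, x2))  # True
```

In practice you would take the rvec/tvec of the same calibration view from each camera (or average over views) rather than the synthetic values used here.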
Some extra info that might be useful
------------------------------------
I'm using 40 calibration images of the ChArUco board per camera. I first extract all corners and marker IDs, after which I estimate the calibration parameters with the following code:
    (ret, camera_matrix, distortion_coefficients0,
     rotation_vectors, translation_vectors,
     stdDeviationsIntrinsics, stdDeviationsExtrinsics,
     perViewErrors) = cv2.aruco.calibrateCameraCharucoExtended(
        charucoCorners=allCorners,
        charucoIds=allIds,
        board=board,
        imageSize=imsize,
        cameraMatrix=cameraMatrixInit,
        distCoeffs=distCoeffsInit,
        flags=flags,
        # Termination-criteria flags must be combined with + (or |), not &:
        criteria=(cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 10000, 1e-9))
The board parameter is created with the following settings:
    CHARUCO_BOARD = aruco.CharucoBoard_create(
        squaresX=9,
        squaresY=6,
        squareLength=4.4,
        markerLength=3.5,
        dictionary=ARUCO_DICT)
Thanks a lot in advance!

*Asked by Jérémy K, Sat, 10 Aug 2019 11:52:35 -0500*

undistortPoints() returns odd/nonsensical values despite apparently functional camera calibration
=================================================================================================
*http://answers.opencv.org/question/209335/undistortpoints-returns-oddnonsensical-values-despite-apparently-functional-camera-calibration/*

Not the most advanced OpenCV user/math-skilled individual, so please bear with me.
I've been following [this short](https://medium.com/@kennethjiang/calibrate-fisheye-lens-using-opencv-333b05afa0b0) tutorial in an effort to calibrate a fisheye lens in OpenCV. So far, everything seems to be working as the tutorial prescribes: I was able to obtain a working camera matrix and distortion coefficients, and to successfully undistort images (i.e. running them through the provided code produces images that appear correct). Following the second part of the tutorial, I've also been able to adjust the balance.
However, my application is that I want to undistort certain points (namely contours and the centers of bounding boxes) rather than entire images, for performance reasons. As such, **I thought I'd use [cv2.undistortPoints()](https://docs.opencv.org/3.4.5/da/d54/group__imgproc__transform.html#ga55c716492470bfe86b0ee9bf3a1f0f7e). My understanding is that this should produce "ideal point coordinates", i.e. pixel coordinates corrected for the lens distortion.** However, this doesn't appear to be working as I expected.
Since the tutorial gives a K and a D matrix at the end, I figured I'd just plug those into undistortPoints.
    >>> cv2.fisheye.undistortPoints(
    ...     np.asarray([[[0, 0], [2592, 0], [0, 1944], [2592, 1944]]], dtype=np.float32),
    ...     np.array([[1076.7148792467171, 0.0, 1298.9712963540678], [0.0, 1078.515014983842, 929.9968760065017], [0.0, 0.0, 1.0]]),
    ...     np.array([[-0.016205134569390902], [-0.02434305021164351], [0.024555436941429715], [-0.008590717479362648]])
    ... )
    array([[[  0.94239926,   0.67358345],
            [  0.13487473,  -0.09684527],
            [ 29.207176  , -22.761654  ],
            [  1.4594778 ,   1.1426234 ]]], dtype=float32)
**Those sure aren't pixel coordinates. I thought that maybe they were normalized points, with the bounds of the image being -1 and 1, but these values still don't make sense even within that context.**
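(For context: what `undistortPoints` returns without a `P` argument are normalized camera coordinates, i.e. ray directions at z = 1, not pixels and not [-1, 1]-scaled pixels. To get pixel coordinates you either pass `P=K` or apply the camera matrix yourself. A minimal NumPy sketch of that last step, using the K matrix from the question above:)

```python
import numpy as np

# Camera matrix from the question (fx, fy focal lengths in px; cx, cy principal point).
K = np.array([[1076.7, 0.0, 1298.97],
              [0.0, 1078.5, 929.99],
              [0.0, 0.0, 1.0]])

def normalized_to_pixels(pts_norm, K):
    """Map normalized (x, y) camera coordinates to pixel coordinates:
    u = fx * x + cx,  v = fy * y + cy."""
    pts = np.asarray(pts_norm, float)
    ones = np.ones((len(pts), 1))
    return (K @ np.hstack([pts, ones]).T).T[:, :2]

pts_norm = np.array([[0.0, 0.0], [0.1, -0.05]])
print(normalized_to_pixels(pts_norm, K))
# The normalized origin (0, 0) maps to the principal point (cx, cy).
```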
I also attempted to plug in the values obtained from the second part of the tutorial using balance=1.0. If you're looking at the tutorial, this corresponds to `cv2.undistortPoints(my_test_points, scaled_K, dist_coefficients, R=numpy.eye(3), P=new_K)`:
    >>> cv2.fisheye.undistortPoints(
    ...     np.asarray([[[0, 0], [2592, 0], [0, 1944], [2592, 1944]]], dtype=np.float32),
    ...     np.array([[1076.7148792467171, 0.0, 1298.9712963540678], [0.0, 1078.515014983842, 929.9968760065017], [0.0, 0.0, 1.0]]),
    ...     np.array([[-0.019215744220979738], [-0.022168383678588813], [0.018999857407644722], [-0.003693599912847022]]),
    ...     R=np.eye(3),
    ...     P=np.array([[416.0971612201596, 0.0, 1304.304969960433], [0.0, 416.79282483962464, 927.3730022048695], [0.0, 0.0, 1.0]])
    ... )
    array([[[ -8029.981 ,  -5755.497 ],
            [  9563.489 ,  -5012.9556],
            [ 27344.436 , -19400.076 ],
            [-39704.24  , -31231.846 ]]], dtype=float32)
**Okay, those look more like pixel coordinates, but those still make no sense.**
At this point, I'm really not sure what to do. I've been struggling with this for quite some time now, so any and all help is truly appreciated. If you need the images, matrices, or anything else from me, I'm happy to provide it.
My camera is a [175° FOV RPi Camera (K)](https://www.waveshare.com/rpi-camera-k.htm) mounted on a Raspberry Pi, with the resolution at the maximum 2592×1944 for the purposes of this question. I'm using OpenCV 3.4.4 with Python 3.

*Asked by edelmanjm, Sat, 23 Feb 2019 23:40:47 -0600*

Approximation method in cv::undistortPoints
===========================================
*http://answers.opencv.org/question/89082/approximation-method-in-cvundistortpoints/*

The function `cv::undistortPoints()` applies reverse lens distortion to a set of observed point coordinates. The lens-distortion models available in OpenCV are not analytically invertible, which means an approximation has to be made, and indeed, from the documentation:
> ...undistort() is an approximate iterative algorithm that estimates the normalized original point coordinates out of the normalized distorted point coordinates ("normalized" means that the coordinates do not depend on the camera matrix).
**My question is therefore:** where can I find information on the approximation method used in `undistortPoints`? What are its characteristics? How was it derived? Under what conditions is it likely to succeed or fail?
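(For intuition, the iteration the docs describe can be sketched as a fixed-point scheme: start from the distorted normalized point, evaluate the forward distortion factor at the current estimate, and divide it out. Below is a NumPy sketch for a radial-only model with two coefficients k1, k2; the real `undistortPoints` handles the full tangential/rational model and caps the iteration count, so this is an illustration of the technique, not OpenCV's exact code.)

```python
import numpy as np

def distort(pt, k1, k2):
    """Forward radial distortion of a normalized point."""
    x, y = pt
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    return np.array([x * scale, y * scale])

def undistort_iterative(pt_d, k1, k2, iters=20):
    """Fixed-point iteration: x_{n+1} = x_d / (1 + k1*r_n^2 + k2*r_n^4),
    where the distortion factor is evaluated at the current estimate."""
    pt_d = np.asarray(pt_d, float)
    pt = pt_d.copy()
    for _ in range(iters):
        r2 = pt[0] ** 2 + pt[1] ** 2
        scale = 1 + k1 * r2 + k2 * r2 * r2
        pt = pt_d / scale
    return pt

k1, k2 = -0.2, 0.05
original = np.array([0.3, -0.2])
distorted = distort(original, k1, k2)
recovered = undistort_iterative(distorted, k1, k2)
print(np.allclose(recovered, original, atol=1e-8))
# Converges quickly when distortion is moderate; for strong distortion near
# the image corners the iteration can converge slowly or to a wrong point,
# which is exactly the failure mode the question asks about.
```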
Handling lens distortion well is integral to many 3D reconstruction applications, so some clarity here would be really helpful.

*Asked by npwest, Wed, 02 Mar 2016 04:01:17 -0600*

Why do we pass R and P to undistortPoints() fcn (calib3d module)?
=================================================================
*http://answers.opencv.org/question/57314/why-do-we-pass-r-and-p-to-undistortpoints-fcn-calib3d-module/*

I have two AVT Manta G125B cameras. I calibrated each camera individually and then performed stereo calibration. I am trying to triangulate a point of interest in real time. I noticed that the `triangulatePoints()` function of the calib3d module accepts undistorted image point coordinates as input, so I need to use the `undistortPoints()` function to obtain ideal point coordinates. As far as I know, it should be sufficient to pass only the cameraMatrix and distCoeffs parameters to `undistortPoints()`; by finding a nonlinear least-squares solution, it should produce the undistorted points. I do not understand why we need to pass R and P (obtained with `stereoRectify()`) to `undistortPoints()`.
    void undistortPoints(InputArray src, OutputArray dst,
                         InputArray cameraMatrix, InputArray distCoeffs,
                         InputArray R = noArray(), InputArray P = noArray())

*Asked by taha, Wed, 11 Mar 2015 22:47:17 -0500*

undistortpoints: how to use
===========================
*http://answers.opencv.org/question/53966/undistortpoints-how-to-use/*

Hi,
we want to use the function `undistortPoints()` to find the undistorted positions of some points (we already have the points and don't want to use the `undistort()` function on a whole image). We want the undistorted points in pixels, but the result values lie between 0 and 1. It seems the result is normalized? How do we find the positions in pixels?

*Asked by next, Wed, 28 Jan 2015 10:30:21 -0600*