OpenCV Q&A Forum

Triangulate points from 2 images to estimate the pose on a third
http://answers.opencv.org/question/233704/triangulate-points-from-2-images-to-estimate-the-pose-on-a-third/

I want to estimate the pose of a current image relative to a matched one in a database. The images come from a moving robot with reasonably accurate odometry, so I can take two consecutive images that are both similar to the database image and have a good estimate of the relative pose between them. I want to use that information to estimate the 3D positions of the matched keypoints in the two current images using triangulatePoints, and then use solvePnPRansac to estimate the pose of the database image relative to the first current image from the keypoints that also match it.
I have tried to implement this in OpenCV with Python as shown below. Currently I do not use the database image and am simply checking whether I can recover the odometry that I enforced, but unfortunately the current output is garbage (translation on the order of e+10).
import numpy as np
import cv2
# Fisheye camera and distortion matrices
K=np.array([[455.5000196386718, 0.0, 482.65324003911945], [0.0, 340.6409393462825, 254.5063795692748], [0.0, 0.0, 1.0]])
D=np.array([[-0.018682808343432777], [-0.044315351694893736], [0.047678551616171246], [-0.018283908577445218]])
orb = cv2.ORB_create(nfeatures=1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=False)
img0 = cv2.imread("0cm.png")
img2 = cv2.imread("2cm.png")
# Find keypoints and match them up
kp0, des0 = orb.detectAndCompute(img0, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = bf.knnMatch(des0, des2, k=2)
# Find good matches using the ratio test
ratio_thresh = 0.8
good_matches = []
for m, n in matches:
    if m.distance < ratio_thresh * n.distance:
        good_matches.append(m)
# Convert from keypoints to points
pts0 = np.float32([kp0[m.queryIdx].pt for m in good_matches]).reshape(-1, 1, 2)
pts2 = np.float32([kp2[m.trainIdx].pt for m in good_matches]).reshape(-1, 1, 2)
# Remove the fisheye distortion from the points
pts0 = cv2.fisheye.undistortPoints(pts0, K, D, P=K)
pts2 = cv2.fisheye.undistortPoints(pts2, K, D, P=K)
# Keep only the points that make geometric sense
# TODO: find a more efficient way to apply the mask
E, mask = cv2.findEssentialMat(pts0, pts2, K, cv2.RANSAC, 0.999, 1, None)
_, R, t, mask = cv2.recoverPose(E, pts0, pts2, cameraMatrix=K, mask=mask)
pts0_m = []
pts2_m = []
for i in range(len(mask)):
    if mask[i] == 1:
        pts0_m.append(pts0[i])
        pts2_m.append(pts2[i])
pts0 = np.array(pts0_m).T.reshape(2, -1)
pts2 = np.array(pts2_m).T.reshape(2, -1)
# Setup the projection matrices
R = np.eye(3)
t0 = np.array([[0], [0], [0]])
t2 = np.array([[0], [0], [2]])
P0 = np.dot(K, np.concatenate((R, t0), axis=1))
P2 = np.dot(K, np.concatenate((R, t2), axis=1))
# Find the keypoint world homogeneous coordinates assuming img0 is the world origin
X = cv2.triangulatePoints(P0, P2, pts0, pts2)
# Convert from homogeneous coordinates
X /= X[3]
objPts = X.T[:,:3]
# Find the pose of the second frame
_, rvec, tvec, inliers = cv2.solvePnPRansac(objPts, pts2.T, K, None)
print(rvec)
print(tvec)
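As an aside on the TODO above: the inlier mask can be applied without the Python loop by using NumPy boolean indexing. A minimal sketch with made-up data (the shapes match what recoverPose returns):

```python
import numpy as np

# Made-up stand-ins: an (N, 1) uint8 inlier mask as returned by recoverPose,
# and the (N, 1, 2) point arrays that went into it.
mask = np.array([[1], [0], [1], [1]], dtype=np.uint8)
pts0 = np.arange(8, dtype=np.float32).reshape(4, 1, 2)
pts2 = pts0 + 1.0

inliers = mask.ravel() == 1                # boolean mask over the N points
pts0_in = pts0[inliers].reshape(-1, 2).T   # (2, N) layout for triangulatePoints
pts2_in = pts2[inliers].reshape(-1, 2).T
```

This keeps only the rows where the mask is 1 and produces the 2xN layout that triangulatePoints expects, in one step per array.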
Is there something wrong with my code or my approach (or both)?
EDIT:
I tested this with an image that is 6 cm away instead of just 2 cm and it seems to work fine then. I guess a small translation purely in the forward direction results in some kind of numerical instability somewhere. There is a slight difference in the results depending on the units I use to construct the projection matrices though (0.06 for metres vs 6 for centimetres), but I have no idea which units I should actually use to get the most accurate results, seeing as the choice does not appear to be irrelevant. I thought it could somehow be related to the camera matrix, but I obtained mine using the guide at https://medium.com/@kennethjiang/calibrate-fisheye-lens-using-opencv-333b05afa0b0, and scaling the dimensions of the checkerboard in the calibration code has no effect on the resulting matrix, so now I have no idea.

Gerharddc, Fri, 14 Aug 2020 09:04:47 -0500
http://answers.opencv.org/question/233704/

Triangulate Chessboard gives weird results
http://answers.opencv.org/question/208725/triangulate-chessboard-gives-weird-results/

Hi all,
So I'm trying to create a 3D reconstruction from a stereo camera pair using Python.
I found the intrinsic camera calibration parameters, which seem good.
Then I find the chessboard corners on both images with:
`_, C1 = cv2.findChessboardCorners(img1, (6, 9), None)`.
I undistort the found corners with:
`C1Norm = cv2.undistortPoints(C1, K1, D1)`
And use those to find the essential matrix with:
`E, mask = cv2.findEssentialMat(C1Norm, C2Norm, focal=1.00, pp=(0., 0.), method=cv2.RANSAC, prob=0.999)`.
And at last I find the rotation and translation between the cameras with:
`M, R, t, mask = cv2.recoverPose(E, C1Norm, C2Norm)`.
Now, using that, I find the projection matrices of both cameras:
`P1 = K1 * [I3 | 0]` and `P2 = K2 * [R | t]` where K1 and K2 are the intrinsic camera parameters and I is an 3x3 identity matrix.
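For concreteness, those two matrix products can be assembled with NumPy as follows (a sketch with placeholder intrinsics; `R` and `t` stand in for the recoverPose outputs):

```python
import numpy as np

K1 = K2 = np.array([[500.0, 0.0, 320.0],
                    [0.0, 500.0, 240.0],
                    [0.0, 0.0, 1.0]])       # placeholder intrinsics
R = np.eye(3)                               # rotation from recoverPose
t = np.array([[1.0], [0.0], [0.0]])         # translation from recoverPose

P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])  # K1 [I3 | 0]
P2 = K2 @ np.hstack([R, t])                         # K2 [R | t]
```

Note that the translation recovered from an essential matrix is only defined up to scale, so any triangulation built on these matrices is up to scale too.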
Last step now should be to triangulate. I use `4D = triangulatePoints(P1, P2, C1Trans, C2Trans)`, which gives me the 4D homogeneous coordinates.
However, when I plot this, my chessboard is all crooked and wrong. Has anyone any idea where something might go wrong?
Also, I know the dimensions of the chessboard, any idea how I can turn the homogeneous coordinates of the chessboard corners to cm?
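On the last point: since the reconstruction is only up to scale, one way to get centimetres is to rescale using the known square size. A numpy-only sketch (the corner ordering from findChessboardCorners and the 2.5 cm square size are assumptions for illustration):

```python
import numpy as np

def to_centimetres(X4, grid=(6, 9), square_cm=2.5):
    """Rescale triangulated homogeneous corners (4, N) using the known square size."""
    pts = (X4[:3] / X4[3]).T                 # dehomogenize -> (N, 3)
    rows, cols = grid
    pts = pts.reshape(cols, rows, 3)         # assumes row-major corner ordering
    # Mean spacing between adjacent corners along both grid directions
    d_col = np.linalg.norm(np.diff(pts, axis=0), axis=2).mean()
    d_row = np.linalg.norm(np.diff(pts, axis=1), axis=2).mean()
    scale = square_cm / ((d_col + d_row) / 2.0)
    return pts.reshape(-1, 3) * scale
```

Applied to the triangulatePoints output, this yields corners whose neighbouring distance equals the physical square size, in centimetres.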
Berrent, Mon, 11 Feb 2019 23:29:07 -0600
http://answers.opencv.org/question/208725/

How to give input parameters to triangulatePoints in python?
http://answers.opencv.org/question/173969/how-to-give-input-parameters-to-triangulatepoints-in-python/

I want to find out 3D coordinates using stereo cameras. For that, I got as far as rectifying the images but got stuck at [cv2.triangulatePoints](http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#triangulatepoints). I find P1 and P2 using [cv2.stereoRectify](http://docs.opencv.org/3.0-beta/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#stereorectify). Now I have to pass four parameters to `triangulatePoints`: projMatr1 (P1), projMatr2 (P2), projPoints1 and projPoints2. My first doubt is whether P1 and P2 remain the same for every pair of images, since we give `stereoRectify` the camera matrices and the rotation and translation matrices as input. As I see it, P1 and P2 should stay the same, as none of the input matrices change.
Now I want to know how to obtain *projPoints1* and *projPoints2* to get the 3D coordinates. For example, if I know the pixel coordinates of a point A in the rectified left image as (Xl, Yl) and of the same point A in the right image as (Xr, Yr), can I put projPoints1 = (Xl, Yl) and projPoints2 = (Xr, Yr) in `triangulatePoints`? If not, how do I get these `projPoints`?
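For reference, the `projPoints` arguments are 2xN float arrays of pixel coordinates, one column per correspondence, so a single (Xl, Yl)/(Xr, Yr) pair is one column in each. What triangulatePoints computes can be sketched with a numpy-only DLT triangulation (the intrinsics and the 3D point below are made up for the demonstration):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence x1 <-> x2."""
    # Each image observation contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2] @ X) - P[0] @ X = 0, and likewise for y.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                  # null-space vector = homogeneous solution
    return X[:3] / X[3]         # dehomogenize

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

X_true = np.array([0.2, 0.1, 3.0])
h1 = P1 @ np.append(X_true, 1.0)
h2 = P2 @ np.append(X_true, 1.0)
x1, x2 = h1[:2] / h1[2], h2[:2] / h2[2]   # projected pixel coordinates

X_est = triangulate_dlt(P1, P2, x1, x2)   # recovers X_true (noise-free case)
```

So yes: one pixel pair per point is exactly the input, and the output is homogeneous, which is why cv2.triangulatePoints returns a 4xN array that must be divided by its last row.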
Naseeb Gill, Sun, 10 Sep 2017 14:51:44 -0500
http://answers.opencv.org/question/173969/