Hello everyone,
I am trying to undistort the feed of a stereo fisheye camera (remove the fisheye distortion from both images) in order to compute an accurate disparity map.
I have successfully rectified the left image (code below), but with the same function I am not able to get a similar result for the right image.
The alpha value (balance) was set to 1.0 for both images, but the results are very different (see the comparison here: https://ibb.co/c3QbFrc).
I have tried setting the alpha value to 0.0, but then I lose too much information in both cases (left and right), and the two returned images are not centered around the same area, so I end up with two different rectified and cropped regions that I cannot use for the disparity map. The only way I was able to get a right image (bottom right in the link above) similar to the left one was to rectify the right image using the left camera matrix and left distCoeffs, but I would like to rectify each image with its own parameters.
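(For the eventual disparity step I assume I will also need to rectify the two views jointly, along the lines of the sketch below; this assumes I have the rotation R and translation T between the cameras from a stereo calibration, which I have not shown here. My question, though, is about the per-camera undistortion.)

# Sketch only: joint rectification of both fisheye views, assuming R (rotation) and
# T (translation) between the cameras come from cv2.fisheye.stereoCalibrate, and
# DIM is the calibration image size defined elsewhere in my script.
R1, R2, P1, P2, Q = cv2.fisheye.stereoRectify(
    KL, DL, KR, DR, DIM, R, T,
    flags=cv2.CALIB_ZERO_DISPARITY, balance=1.0)
mapLx, mapLy = cv2.fisheye.initUndistortRectifyMap(KL, DL, R1, P1, DIM, cv2.CV_16SC2)
mapRx, mapRy = cv2.fisheye.initUndistortRectifyMap(KR, DR, R2, P2, DIM, cv2.CV_16SC2)
rect_left = cv2.remap(left_img, mapLx, mapLy, cv2.INTER_LINEAR)    # left_img / right_img are
rect_right = cv2.remap(right_img, mapRx, mapRy, cv2.INTER_LINEAR)  # frames from the camera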
Here are the camera matrices and distortion coefficients:
RIGHT:
KR = np.array([[549.4162819483603, 0.0, 551.6433974365737], [0.0, 549.353853181188, 834.8714999390519], [0.0, 0.0, 1.0]])
DR = np.array([[-0.048517564373268304], [0.003953550421578333], [-0.0058953907995022165], [0.0012471124530071267]])
LEFT:
KL = np.array([[539.148904941535, 0.0, 536.4766372367185], [0.0, 538.8898526644163, 814.5062311218674], [0.0, 0.0, 1.0]])
DL = np.array([[-0.048656363512753174], [0.0030608545720397346], [-0.0005358265441960563], [-0.0011724762585148128]])
and here is the function that undistorts the image given as input:
import cv2
import numpy as np

# DIM is the image dimension used during calibration (defined globally elsewhere)
def undistort(K, D, img, balance=1.0, dim2=None, dim3=None):
    # keep a downscaled copy of the input for side-by-side comparison
    img1 = cv2.resize(img, (500, 340))
    # downsample the input twice so its size matches the calibration dimension
    img = cv2.pyrDown(img)
    img = cv2.pyrDown(img)
    dim1 = img.shape[:2][::-1]  # dim1 is the (width, height) of the image to un-distort
    assert dim1[0] / dim1[1] == DIM[0] / DIM[1], \
        "Image to undistort needs to have same aspect ratio as the ones used in calibration"
    if not dim2:
        dim2 = dim1
    if not dim3:
        dim3 = dim1
    # K scales with the image dimension, except K[2][2] which is always 1.0
    scaled_K = K * dim1[0] / DIM[0]
    scaled_K[2][2] = 1.0
    # scaled_K, dim2 and balance determine the final K used to un-distort the image
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(scaled_K, D, dim2, np.eye(3), balance=balance)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(scaled_K, D, np.eye(3), new_K, dim3, cv2.CV_16SC2)
    undistorted_img = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
    # stack the resized input and the undistorted result for visual comparison
    undistorted_img2 = cv2.resize(undistorted_img, (500, 340))
    final = np.vstack([img1, undistorted_img2])
    return final, undistorted_img
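This is how I call it for the two images (the file names below are just placeholders for the frames I grab from the camera):

left_img = cv2.imread("left.png")    # placeholder paths for the captured frames
right_img = cv2.imread("right.png")
# same function and same balance for both views, only the per-camera intrinsics differ
comparison_left, undist_left = undistort(KL, DL, left_img, balance=1.0)
comparison_right, undist_right = undistort(KR, DR, right_img, balance=1.0)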
EDIT:
Here are the subpixel criteria and calibration flags used to obtain the right and left camera matrices:
subpix_criteria = (cv2.TERM_CRITERIA_EPS+cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1)
calibration_flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC+cv2.fisheye.CALIB_CHECK_COND+cv2.fisheye.CALIB_FIX_SKEW
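These feed into the usual cv2.fisheye.calibrate workflow; the sketch below is only indicative of how I use them (objpoints, imgpoints and gray are the collected checkerboard points and grayscale image, which I have not included here), and it is run once for the left camera and once for the right:

# rough sketch of the per-camera calibration; objpoints / imgpoints / gray / corners
# are placeholders for the checkerboard data, not shown in this post
corners = cv2.cornerSubPix(gray, corners, (3, 3), (-1, -1), subpix_criteria)
K = np.zeros((3, 3))
D = np.zeros((4, 1))
rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
    objpoints, imgpoints, gray.shape[::-1], K, D,
    flags=calibration_flags)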