
# Assertion failed: abs_max < threshold in function 'stereoCalibrate'

Hi,

I had a stereo calibration pipeline built for regular (non-fisheye) lenses that was working just fine (based on cv2.stereoCalibrate(), etc.). Now I'm trying to adapt this procedure to fisheye cameras using the corresponding fisheye API. I ran into other errors related to the shapes of objectPoints and imagePoints, but those appear to be solved, and individual camera calibration with cv2.fisheye.calibrate() seems to be working. (To check this step, I undistorted a few images using each camera's individual calibration, and the resulting undistorted images look fine.)

Next, I moved on to calibrating the fisheye stereo rig; however, I'm stuck on the following error from cv2.fisheye.stereoCalibrate():

```
cv2.error: OpenCV(4.1.0) /home/massimo/repositories/opencv-4.1.0/modules/calib3d/src/fisheye.cpp:1023: error: (-215:Assertion failed) abs_max < threshold in function 'stereoCalibrate'
```

originating from the following failed assertion in fisheye.cpp:

```cpp
CV_Assert(abs_max < threshold); // bad stereo pair
```


Following the solution suggested in #67855 for the same error, I tried different subsets of the calibration images, and even individual images, hoping to identify the image(s) causing the issue. However, every combination I tried resulted in the same error, and on visual inspection the images look fine.

I'm trying to figure out why cv2.fisheye.stereoCalibrate() rejects all of the images, but I don't understand how the assertion is evaluated or on what basis it decides that a stereo pair is bad.

Any idea on how to get rid of this error?

Thanks in advance!

Relevant system specifications:

• Ubuntu 18.04.2
• Python 3.6.7
• OpenCV 4.1.0 (compiled from source)

## 2 answers


Eventually, the solution was analogous to the one provided in #67855 and it boiled down to removing some of the calibration images to complete the calibration procedure. Apologies for asking a duplicate question!

I had 25 calibration images, and it took some trial and error to single out the culprits: 4 of the 25 turned out to be bad stereo pairs. Visually, they looked just as good as the ones that were accepted. So even though the practical question is answered, I still don't understand why those 4 stereo pairs were bad and the other 21 were good.
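For anyone hitting this in the future, the trial-and-error can be automated by calibrating one pair at a time and catching the exception. A minimal sketch (the names are hypothetical; `calibrate_pair` would wrap a single-pair call to cv2.fisheye.stereoCalibrate() with your intrinsics fixed via cv2.fisheye.CALIB_FIX_INTRINSIC):

```python
def find_bad_pairs(calibrate_pair, n_pairs):
    """Run calibrate_pair(i) for each stereo pair index i and collect
    the indices whose calibration raises an error (cv2.error derives
    from Exception, so the 'abs_max < threshold' assertion is caught)."""
    bad = []
    for i in range(n_pairs):
        try:
            calibrate_pair(i)
        except Exception:
            bad.append(i)
    return bad

# Stand-in for a real call along the lines of:
#   cv2.fisheye.stereoCalibrate([obj[i]], [left[i]], [right[i]],
#                               K1, D1, K2, D2, image_size,
#                               flags=cv2.fisheye.CALIB_FIX_INTRINSIC)
def fake_calibrate(i):
    if i in (3, 7):  # pretend pairs 3 and 7 trigger the assertion
        raise RuntimeError("bad stereo pair")

print(find_bad_pairs(fake_calibrate, 10))  # -> [3, 7]
```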

Hoping that one of the fisheye module developers might read this, I'd like to point out that it would be very helpful if:

1. cv::fisheye::stereoCalibrate() offered some clues as to which or how many stereo pairs are found to be "bad", to streamline the identification and removal of bad calibration images; and/or
2. its documentation clarified the criterion used to check the goodness of a stereo pair (which, unfortunately, I couldn't work out from the code), so users could capture better calibration images in the first place.

I just tracked down the cause of this bug in my code: every so often the checkerboard was flipping 180 degrees between left/right images, breaking stereo matching.

My quick fix was to check the vertical disparity between the left and right checkerboard points. If it is above a threshold, I reverse the list of right checkerboard points (which is equivalent to rotating them 180 degrees).

The Python code for that looks like this (the original snippet had a broken indentation in the if body and shadowed the builtin sum()):

```python
import numpy as np

# Check for a flipped checkerboard: if the summed vertical (y)
# disparity between matched corners is implausibly large, the right
# corners are almost certainly in reversed order.
diff = leftCorners - rightCorners                 # corners shaped (N, 1, 2)
lengths = np.linalg.norm(diff[:, :, 1], axis=-1)  # per-corner |dy|
total = np.sum(lengths, axis=0)
if total > 2000.0:                                # empirical threshold, in pixels
    print("THIS STEREO PAIR IS BROKEN!!! Diff is: " + str(total))
    rightCorners = np.flipud(rightCorners)        # undo the 180-degree flip
```
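To see that this flip check behaves as intended, here is a quick numpy-only sanity test on synthetic corners (2000.0 is the empirical threshold from the snippet above; the (N, 1, 2) shape follows cv2.findChessboardCorners):

```python
import numpy as np

# Synthetic 54-corner board: x = index, y = 2 * index, shaped (54, 1, 2)
# like the output of cv2.findChessboardCorners.
idx = np.arange(54, dtype=np.float64).reshape(54, 1, 1)
leftCorners = np.concatenate([idx, 2.0 * idx], axis=2)
rightCorners = leftCorners + np.array([5.0, 0.0])  # pure horizontal disparity

def vertical_disparity(a, b):
    """Summed absolute vertical (y) disparity between matched corners."""
    return float(np.abs((a - b)[:, :, 1]).sum())

good = vertical_disparity(leftCorners, rightCorners)            # matched order
bad = vertical_disparity(leftCorners, np.flipud(rightCorners))  # simulated flip
print(good < 2000.0, bad > 2000.0)  # -> True True
```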



## Stats

Asked: 2019-06-05 08:50:29 -0500

Seen: 422 times

Last updated: Jun 07 '19