
Problem with getOptimalNewCameraMatrix

asked 2016-09-17 22:48:00 -0500

Hilman

updated 2016-09-18 23:57:07 -0500

I want to calibrate a car video recorder and use it for 3D reconstruction with Structure from Motion (SfM). The original size of the pictures I took with this camera is 1920x1080. Basically, I have been using the source code from the OpenCV tutorial for the calibration.

But there are some problems and I would really appreciate any help.

So, as usual (at least in the above source code), here is the pipeline:

  1. Find the chessboard corners with findChessboardCorners
  2. Refine them to subpixel accuracy with cornerSubPix
  3. Draw them for visualisation with drawChessboardCorners
  4. Calibrate the camera with a call to calibrateCamera
  5. Call getOptimalNewCameraMatrix and then undistort to undistort the image

In my case, since the image is too big (1920x1080), I resized it to 640x360 and used that for the calibration (I will use the same image size during SfM, so I don't think it will be a problem). I also used a 9x6 chessboard pattern for the calibration.

Here the problem arose. After the call to getOptimalNewCameraMatrix, the undistortion comes out totally wrong. Even the returned ROI is [0,0,0,0]. Below are the original image and its undistorted version:

[images: the original frame and its undistorted version]

You can see that in the undistorted version, the actual image content is squeezed into the bottom left.

But if I skip getOptimalNewCameraMatrix and undistort directly, I get a quite good image:

[image: undistortion without getOptimalNewCameraMatrix]

So, I have two questions.

  1. Why is this? I have tried another dataset taken with the same camera, and also one from my iPhone 6 Plus, but the results are the same as above.
  2. For SfM, I guess the call to getOptimalNewCameraMatrix is important? Without it, the undistorted image would be zoomed and blurred, making keypoint detection harder (in my case, I will be using optical flow). I have tested the code with the OpenCV sample pictures and the results are just fine.

Below is my source code:

from sys import argv
import numpy as np
import imutils  # To use the imutils.resize function:
                # resizing while preserving the image's aspect ratio,
                # in this case 1920x1080 into 640x360.
import cv2
import glob

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((9*6,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)

# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.

images = glob.glob(argv[1] + '*.jpg')
width = 640

for fname in images:
    img = cv2.imread(fname)
    if width:
        img = imutils.resize(img, width=width)

    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (9,6),None)

    # If found, add object points, image points (after refining them)
    if ret:
        corners2 = cv2.cornerSubPix(gray,corners,(11,11),(-1,-1),criteria)
        objpoints.append(objp)
        imgpoints.append(corners2)
        ...



Can you show the section of code where you call the function? I just tested with the calibration I'm currently using, and it works fine. It's probably a parameter problem, and for that we need to see what you're doing.

Tetragramm ( 2016-09-18 00:04:34 -0500 )

Hmm, I don't see anything wrong. I will point out that you should only need to call getOptimalNewCameraMatrix once, though. It should be the same for all the images, since they all share the same distortion coefficients and camera matrix.

Tetragramm ( 2016-09-18 09:24:04 -0500 )

Yeah, I can take a look.

Tetragramm ( 2016-09-18 19:38:01 -0500 )

Google Drive, megaupload, an imgur gallery, whatever.

Tetragramm ( 2016-09-18 20:20:52 -0500 )

OK, my C++ version works fine. I'm fixing a problem with Python right now, so I'll get back to you on that.

Tetragramm ( 2016-09-18 21:09:43 -0500 )

Hmm, your python code is running just fine on my machine. What version of OpenCV are you using?

Tetragramm ( 2016-09-18 22:09:26 -0500 )

I'm using Python 3, but yeah. No problems at all. I just copied your code and added a few print statements. It doesn't look like there have been any changes to the code, so I don't know why you wouldn't get the same results.

It's definitely because the roi is all zeros, but why is it all zeros? Are you getting all zeros in the distortion matrix as well?

Tetragramm ( 2016-09-19 07:47:01 -0500 )

And I've got

Camera Matrix is
[[ 469.87346015    0.          319.48973789]
 [   0.          506.50140767  179.47657816]
 [   0.            0.            1.        ]]
Dist Matrix is
[[ 0.  0.  0.  0.  0.]]

Which makes me wonder what the difference is.

Tetragramm ( 2016-09-19 19:46:33 -0500 )

So I tried just overwriting the dist matrix with the one you have, and it works fine. Your distortion matrix seems correct, and it puts out good results from getOptimalNewCameraMatrix.

Tetragramm ( 2016-09-19 19:52:33 -0500 )

Yep. Not a clue. But when using your values from calibrateCamera, getOptimalNCM works just fine.

Ah, wait what? Why is part of my calibration.cpp commented out? That may have something to do with it.

Tetragramm ( 2016-09-19 23:07:42 -0500 )

1 answer


answered 2016-09-21 22:39:26 -0500

Tetragramm

OK, so I've figured out why. I don't know how to solve it yet.

The why is that the distortion causes the corners of the image to wrap around, producing ROIs with actually negative sizes, which get clamped to all zeros.

If you manually alter the camera matrix and undistort (say, by multiplying the focal lengths by 0.2), you can see this:

[image: undistorted result with focal lengths scaled down]

So I would file a bug report showing the problem and see where it goes. I'll think a bit and see if I can figure out a better way than how it's being done now. The internals are rather crude, so there probably is one.



The problem is the same for C++ and Python.

If you are just doing this for one camera, you're better off manually altering the camera matrix. It's not that hard: just multiply the focal length x and y values by a constant until it matches what you want.

Tetragramm ( 2016-09-25 17:55:50 -0500 )
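The manual alternative suggested in this comment amounts to a couple of lines; a sketch with an illustrative matrix (0.7 is an arbitrary example factor, not a recommended value):

```python
import numpy as np

# Scale fx and fy by a constant until the undistorted view fits.
mtx = np.array([[470.0, 0.0, 320.0],    # illustrative calibration result
                [0.0, 506.0, 180.0],
                [0.0, 0.0, 1.0]])

scale = 0.7
new_mtx = mtx.copy()
new_mtx[0, 0] *= scale  # fx
new_mtx[1, 1] *= scale  # fy
```

The scaled matrix is then passed as the `newCameraMatrix` argument of `cv2.undistort`; decreasing the factor zooms out (more of the distorted frame stays visible), increasing it zooms in.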

@Hilman, @Tetragramm, Check out this question and answer at .

mannyglover ( 2016-12-22 15:43:18 -0500 )
