
Difficulties getting projectPoints to work, returns weird values

I'm having trouble getting projectPoints to work. I've calibrated the camera and used solvePnP as in this tutorial:

https://longervision.github.io/2017/03/20/opencv-internal-calibration-circle-grid/

The frames extracted from the video I recorded show that the blob detection works fine, so I'm fairly confident the calibration step works as intended.

Then I registered some coordinates from an image to their corresponding real-world points. I would expect projectPoints to return the same imagePoints coordinates when I feed it some of these reference points, but instead I'm getting output values that lie wildly outside the image.

What am I doing wrong? Any help is greatly appreciated! After this I also want to figure out the inverse: input imagePoints and get back objectPoints with z=0.
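
For that inverse direction, my understanding is that you can undistort a pixel to a normalized ray with cv2.undistortPoints and intersect the ray with the z=0 plane using the pose from solvePnP. A rough sketch of that idea (the helper name image_to_plane is my own, reusing rvec, tvec, camera_matrix and dist_coeffs from the script below; not something from the tutorial):

import numpy as np
import cv2

def image_to_plane(img_pts, rvec, tvec, camera_matrix, dist_coeffs):
    # Undistort pixel coordinates to normalized camera coordinates (z = 1 rays).
    norm = cv2.undistortPoints(
        np.asarray(img_pts, dtype=np.float32).reshape(-1, 1, 2),
        np.asarray(camera_matrix, dtype=np.float64),
        np.asarray(dist_coeffs, dtype=np.float64))
    R, _ = cv2.Rodrigues(rvec)            # world -> camera rotation
    t = np.asarray(tvec).reshape(3, 1)
    out = []
    for x, y in norm.reshape(-1, 2):
        ray = np.array([[x], [y], [1.0]])
        # X_world = R^T (s * ray - t); pick the scale s that gives z_world = 0.
        s = (R.T @ t)[2, 0] / (R.T @ ray)[2, 0]
        X_w = R.T @ (s * ray - t)
        out.append(X_w[:2, 0])
    return np.array(out)                  # (N, 2) points on the z = 0 plane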

My input points for projectPoints are:

inPoints = np.zeros((3, 3))
inPoints[0] = (0, 137.16, 0)
inPoints[1] = (0, 548.64, 0)
inPoints[2] = (548.64, 548.64, 0)

Expected Output:

(326, 156)
(398, 154)
(406, 170)

What I'm actually getting:

(19748.51884776, 14658.66747407)
(24693.12654318,  9023.29722927)
(33225.96561506,  3969.11639187)

Inputs:

rvec = [[-0.06161642] [ 0.74999101] [ 0.78220654]]
tvec = [[-914.24171214] [-834.30392656] [1188.29684866]]
cameraMatrix = [[2.25545289e+03 0.00000000e+00 1.27534861e+03]
                [0.00000000e+00 2.32542640e+03 7.35878530e+02]
                [0.00000000e+00 0.00000000e+00 1.00000000e+00]]
distCoeffs = [[-3.18000851e-02  1.83258452e+00 -4.43437310e-03  5.27295127e-03 -1.12934335e+01]]
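
For a self-contained repro, the printed values above can be fed straight back into projectPoints (this standalone snippet just re-enters the numbers from the dump above):

import numpy as np
import cv2

rvec = np.array([[-0.06161642], [0.74999101], [0.78220654]])
tvec = np.array([[-914.24171214], [-834.30392656], [1188.29684866]])
cameraMatrix = np.array([[2.25545289e+03, 0.0, 1.27534861e+03],
                         [0.0, 2.32542640e+03, 7.35878530e+02],
                         [0.0, 0.0, 1.0]])
distCoeffs = np.array([-3.18000851e-02, 1.83258452e+00, -4.43437310e-03,
                       5.27295127e-03, -1.12934335e+01])
inPoints = np.array([[0.0, 137.16, 0.0],
                     [0.0, 548.64, 0.0],
                     [548.64, 548.64, 0.0]])
outPoints, _ = cv2.projectPoints(inPoints, rvec, tvec, cameraMatrix, distCoeffs)
print(outPoints)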

Full script (after calibration):

import numpy as np
import cv2
import yaml

# Load the intrinsics saved by the calibration step.
with open('./calib/calibration.yaml') as f:
    loadeddict = yaml.safe_load(f)
camera_matrix = loadeddict.get('camera_matrix')
dist_coeffs = loadeddict.get('dist_coeff')


# Reference object points: court line intersections (apparently in cm),
# all lying on the z = 0 plane.
tnsPoints = np.zeros((19, 3))
tnsPoints[0]  = (0, 0, 0)
tnsPoints[1]  = (0, 137.16, 0)
tnsPoints[2]  = (0, 548.64, 0)
tnsPoints[3]  = (0, 960.12, 0)
tnsPoints[4]  = (0, 1097.28, 0)
tnsPoints[5]  = (548.64, 137.16, 0)
tnsPoints[6]  = (548.64, 548.64, 0)
tnsPoints[7]  = (548.64, 960.12, 0)
tnsPoints[8]  = (1188.72, 0, 0)
tnsPoints[9]  = (1188.72, 137.16, 0)
tnsPoints[10] = (1188.72, 548.64, 0)
tnsPoints[11] = (1188.72, 960.12, 0)
tnsPoints[12] = (1188.72, 1097.28, 0)
tnsPoints[13] = (1828.80, 137.16, 0)
tnsPoints[14] = (1828.80, 548.64, 0)
tnsPoints[15] = (1828.80, 960.12, 0)
tnsPoints[16] = (2377.44, 0, 0)
tnsPoints[17] = (2377.44, 137.16, 0)
tnsPoints[18] = (2377.44, 548.64, 0)
# tnsPoints[19] = (2377.44, 960.12, 0)
# tnsPoints[20] = (2377.44, 1097.28, 0)

# Corresponding pixel coordinates measured in one frame of the video.
imPoints = np.zeros((19, 2))
imPoints[0] = (302,158)
imPoints[1] = (326, 156)
imPoints[2] = (398, 154)
imPoints[3] = (471, 150)
imPoints[4] = (494, 148)
imPoints[5] = (319, 172)
imPoints[6] = (406, 170)
imPoints[7] = (491, 167)
imPoints[8] = (270, 206)
imPoints[9] = (306, 206)
imPoints[10] = (421, 203)
imPoints[11] = (532, 197)
imPoints[12] = (570, 195)
imPoints[13] = (283, 266)
imPoints[14] = (446, 260)
imPoints[15] = (607, 252)
imPoints[16] = (146, 390)
imPoints[17] = (235, 387)
imPoints[18] = (499, 374)

# Recover the camera pose (rvec, tvec) relative to the z = 0 plane.
retval, rvec, tvec = cv2.solvePnP(tnsPoints, imPoints, np.asarray(camera_matrix), np.asarray(dist_coeffs))

# Three of the reference points, re-projected below as a round-trip check.
inPoints = np.zeros((3, 3))
inPoints[0] = (0, 137.16, 0)
inPoints[1] = (0, 548.64, 0)
inPoints[2] = (548.64, 548.64, 0)

print(rvec)
print(tvec)
print(np.asarray(camera_matrix))
print(np.asarray(dist_coeffs))

# These should land close to imPoints[1], imPoints[2] and imPoints[6].
outPoints, jacobian = cv2.projectPoints(inPoints, rvec, tvec, np.asarray(camera_matrix), np.asarray(dist_coeffs))

print(outPoints)
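
One debugging step that might narrow this down (a sketch reusing tnsPoints, imPoints, rvec and tvec from the script above; I haven't confirmed it isolates the problem) is to reproject all 19 reference points once with and once without the distortion coefficients, and compare the reprojection errors:

# Reproject every reference point, with and without distCoeffs, to see
# whether the distortion terms are what sends the points off-screen.
proj_d, _ = cv2.projectPoints(tnsPoints, rvec, tvec,
                              np.asarray(camera_matrix), np.asarray(dist_coeffs))
proj_n, _ = cv2.projectPoints(tnsPoints, rvec, tvec,
                              np.asarray(camera_matrix), None)
err_d = np.linalg.norm(proj_d.reshape(-1, 2) - imPoints, axis=1)
err_n = np.linalg.norm(proj_n.reshape(-1, 2) - imPoints, axis=1)
print('RMS reprojection error with distCoeffs:   ', np.sqrt(np.mean(err_d ** 2)))
print('RMS reprojection error without distCoeffs:', np.sqrt(np.mean(err_n ** 2)))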