Bird's eye view perspective transform from camera calibration

I am trying to get the bird's eye view perspective transform from the camera intrinsic and extrinsic matrices and the distortion coefficients.

I tried using the answer from this question.

The image used is the sample image left02.jpg from the official OpenCV GitHub repo.

This is the image I want to perspectively un-distort, i.e. the one I want the bird's eye view of:

I calibrated the camera and found the intrinsic matrix, the extrinsic matrices and the distortion coefficients.
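
For reference, the calibration step was roughly the following (a simplified sketch of what I did, following the standard OpenCV chessboard calibration flow; the 'left*.jpg' file pattern and the 9x6 inner-corner pattern size are assumptions based on the sample images):

import glob
import cv2
import numpy as np

# 9x6 inner corners, laid out on the Z = 0 plane in "square" units (assumed pattern size)
pattern_size = (9, 6)
world_points = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
world_points[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
obj_points, img_points = [], []
for fname in glob.glob('left*.jpg'):      # assumed location of the sample images
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        obj_points.append(world_points)
        img_points.append(corners2)

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)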

I undistorted the image and found the pose, to check whether the parameters are right.

Image after un-distortion, with the pose visualised:
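
Continuing from the calibration sketch above, the undistortion and the pose check were roughly this (again a sketch rather than my exact script; cv2.drawFrameAxes needs a reasonably recent OpenCV, otherwise the axes can be projected with cv2.projectPoints and drawn manually):

img = cv2.imread('left02.jpg')
h, w = img.shape[:2]

# New camera matrix for the undistorted image
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
img = cv2.undistort(img, mtx, dist, None, newcameramtx)

# Re-detect the corners on the undistorted image and recover the board pose
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, pattern_size)
corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
ret, rvec, tvec = cv2.solvePnP(world_points, corners2, newcameramtx, dist)

# Draw the board coordinate axes to visualise the pose
cv2.drawFrameAxes(img, newcameramtx, dist, rvec, tvec, 3)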

The equations I used to find the perspective transformation matrix are (refer to the link above):

Hr = K * R.inv() * K.inv(), where R is the rotation matrix (from cv2.Rodrigues()) and K is obtained from cv2.getOptimalNewCameraMatrix().

     [ 1  0  |         ]
Ht = [ 0  1  | -K*C/Cz ]
     [ 0  0  |         ]

where C = -R.inv() * T, T is the translation vector from cv2.solvePnP(), and Cz is the 3rd component of the vector C.

The required transformation is: H = Ht * Hr
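
For what it's worth, the only part I can follow (my possibly wrong reading of the linked answer) is the rotation part: if two cameras share the same centre C, one with the real rotation R and one aligned with the world axes, a world point X projects into them as

s1 * p  = K * R * (X - C)        (real camera)
s2 * p' = K * (X - C)            (virtual, world-aligned camera)

so that p' ~ K * R.inv() * K.inv() * p = Hr * p, which matches Hr above. The Ht (translation) part is exactly what I cannot work out, hence this question.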

The code I used to construct the above equations is:

K = newcameramtx                     # from cv2.getOptimalNewCameraMatrix()
ret, rvec, tvec = cv2.solvePnP(world_points, corners2, K, dist)
R, _ = cv2.Rodrigues(rvec)           # rotation vector -> 3x3 rotation matrix
_, R_inv = cv2.invert(R)
_, K_inv = cv2.invert(K)

# Rotation part: Hr = K * R^-1 * K^-1
Hr = np.matmul(K, np.matmul(R_inv, K_inv))

# Translation part: camera centre in world coordinates, C = -R^-1 * T
C = np.matmul(-R_inv, tvec)
Cz = C[2]

# Ht is the identity with its third column replaced by -K * C / Cz
temp_vector = np.matmul(-K, C / Cz)
Ht = np.identity(3)
Ht[:, 2] = temp_vector.ravel()

homography = np.matmul(Ht, Hr)
warped_img = cv2.warpPerspective(img, homography, (img.shape[1], img.shape[0]))
# where img is the undistorted image above, with the pose visualised

The resulting warped image is not correct. Warped image with homography matrix H = Ht * Hr:

If I remove the translation from the homography by using the code below,

homography = Hr.copy()
warped_img = cv2.warpPerspective(img, homography, (img.shape[1], img.shape[0]))

I get the following image. Warped image with homography matrix H = Hr:

I think the above image shows that my rotational part is correct but my translation is wrong.

Since the translation matrix (Ht) is an augmented matrix, I am unsure whether my construction of it is correct.

I specifically want to figure out the bird's eye perspective transformation from the camera calibration.

So, how do I correct the above equations so that I get a proper bird's eye view of the chessboard image?

Could anyone also please explain the math behind how the above equations for Ht and Hr are derived? I don't have much exposure to linear algebra, so these equations are not obvious to me.

UPDATE:

homography = np.matmul(Ht, Hr)
warped_img = cv2.warpPerspective(img, homography, (img.shape[1], img.shape[0]), flags=cv2.WARP_INVERSE_MAP)

The cv2.WARP_INVERSE_MAP flag (which treats the supplied matrix as the inverse, destination-to-source mapping) gave me a different result:

Still not the result I am looking for!