solvePnP() and triangulatePoints() do not give the anticipated results - Python

asked 2019-10-13 06:51:01 -0500

ikaranta

updated 2019-10-13 10:37:58 -0500

Hello guys. I'm doing my senior thesis and I have had this problem for about a month. I'm a surveying engineer, so I have measured specific crosshair-shaped points on a wall; each one has known X, Y, Z coordinates in a local coordinate system I created. I also have two cameras in this lab, and as a first step I want to compute X, Y, Z coordinates for some of these crosshairs, to check whether my algorithm runs properly. But the results from my algorithm don't match my measured (ground-truth) values. These are images from the left and right cameras; I have circled the points I used to check my results.

Let me explain my code a bit. When I run the code, the susteAGE4 image appears. I click points 80, 5, 67, 69, in that exact order, then press ESC. Then sustAGE3 appears and I click 40, 55, 59, 66, in that order. These image coordinates, combined with the world coordinates I measured, give me the rotation and translation vectors via solvePnP(). After that I press ESC again and click points 66, 67, 70, 68, first in the left image and then in the right, to compute their X, Y, Z via triangulatePoints() and compare them with my measured world coordinates. But the results don't match. I believe the problem is in the projection matrices I create manually, but I'm not sure. Before running the code I first initialize the imgp3 and imgp4 dicts so the code runs without errors. I also noticed that every time I run the code, the X, Y, Z results for 66, 67, 70, 68 come out quite different. If someone needs the data in order to run the code, I will gladly send it by email or similar.

Here is my code:

    import numpy as np
    import cv2
    import pandas as pd

    def coords(event, x, y, flags, param=(imgp4, imgp3)):
        if event == cv2.EVENT_LBUTTONDOWN:
            global i, imgp4, imgp3
            i += 1
            if i <= 4:
                imgp4[i] = (x, y)
            else:
                global j
                j += 1
                imgp3[j] = (x, y)

    def coords_test(event, x, y, flags, param):
        if event == cv2.EVENT_LBUTTONDOWN:
            global k, testL, testR
            k += 1
            if k <= 4:
                print(x, y)
                testL[k] = (x, y)
            else:
                global l
                l += 1
                print(x, y)
                testR[l] = (x, y)

    def capture(video):
        ret, frame = video.read()
        return (ret, frame)

    def pose(Coords, imgp, mtx, dst):
        # Find the rotation and translation vectors.
        ret, rvecs, tvecs = cv2.solvePnP(Coords, imgp, mtx, dst)
        return (rvecs, tvecs)

    def kfw(rvecs, tvecs):
        rotation_matrix = cv2.Rodrigues(rvecs)[0]
        projection_matrix = np.append(rotation_matrix, tvecs, axis=1)
        zyx = cv2.decomposeProjectionMatrix(projection_matrix)
        return (rotation_matrix, projection_matrix, zyx)

    if __name__ == '__main__':

        Coords = pd.read_csv("coords.txt", sep=",", header=None)

        # Start live video
        video_left = cv2.VideoCapture('video4.mp4')
        video_right = cv2.VideoCapture('video3.mp4')
        i = 0
        j = 0
        k = 0
        ...