
# Revision history

### Calculating distance of 2D point

I have a pattern containing a series of points. I know the world positions of these points, and I want to use the solvePnP method to estimate the distance to one of these points from a 2D photo.

I have successfully generated my rvec and tvec vectors by providing solvePnP with the world positions and the corresponding image positions of my points.

However, when I try to apply these vectors to my 3D points, I get unexpected results. I'm certain this is because my understanding of the maths here is lacking, and I'm hoping someone can point me in the right direction.

First, I am using the Rodrigues method to generate my rotation matrix:

    rotation_matrix, _ = cv2.Rodrigues(rvec)


I am then multiplying my points by the rotation matrix:

    transformed_point = actual_point[i] * rotation_matrix


I then add the translation vector to my points:

    transformed_point = transformed_point + tvec


When I do this I get unexpected x, y, z positions. For example, some of the points have a negative z, which would imply they are behind the camera, which makes no sense.

Am I missing something obvious here?
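For reference, the transform that solvePnP's rvec and tvec describe maps a world point into the camera frame as X_cam = R · X_world + t, i.e. a matrix–vector product (not an elementwise multiply) followed by the translation. Below is a minimal pure-Python sketch of that transform, using the Rodrigues formula as a stand-in for cv2.Rodrigues; the point, rvec, and tvec values are made up for illustration:

```python
import math

def rodrigues(rvec):
    """Convert a rotation vector to a 3x3 rotation matrix (Rodrigues formula).

    Pure-Python stand-in for cv2.Rodrigues: rvec is a length-3 sequence whose
    direction is the rotation axis and whose magnitude is the angle in radians.
    """
    theta = math.sqrt(sum(c * c for c in rvec))
    if theta < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = (c / theta for c in rvec)
    c, s = math.cos(theta), math.sin(theta)
    v = 1.0 - c
    return [
        [c + kx * kx * v,      kx * ky * v - kz * s, kx * kz * v + ky * s],
        [ky * kx * v + kz * s, c + ky * ky * v,      ky * kz * v - kx * s],
        [kz * kx * v - ky * s, kz * ky * v + kx * s, c + kz * kz * v],
    ]

def world_to_camera(point, rvec, tvec):
    """Map a world point into the camera frame: X_cam = R @ X_world + t."""
    R = rodrigues(rvec)
    return [sum(R[i][j] * point[j] for j in range(3)) + tvec[i]
            for i in range(3)]

# Hypothetical example: rotate 90 degrees about the z-axis, then translate
# 5 units along z. The world point (1, 0, 0) should land at (0, 1, 5).
p = world_to_camera([1.0, 0.0, 0.0], [0.0, 0.0, math.pi / 2], [0.0, 0.0, 5.0])
```

Note the order: the rotation matrix multiplies the point from the left, row by row, before the translation is added; with NumPy arrays this would be `rotation_matrix @ point + tvec`, whereas `point * rotation_matrix` is an elementwise product.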
