I have a question regarding the imagePoints output of the projectPoints() function in OpenCV. I am getting image points with negative coordinates, and I understand that those are definitely outside the image area.

So, what range must the imagePoints fall in for them to be on the screen? Will it be u = 1280 and v = 720 if I am using an image that is 1280 pixels wide and 720 pixels high?

For a clearer exposition of my problem, I add the following details:

camera_distCoeffs: [0.045539, -0.057822, 0.001451, -0.000487, 0.006539, 0.438100, -0.135970, 0.011170]

camera_intrinsic (3x3, row-major):

    [606.215365,   0.000000, 632.285550]
    [  0.000000, 679.696865, 373.770687]
    [  0.000000,   0.000000,   1.000000]

Sample camera coordinate (world frame): [16.7819794502, -2.2923261485, 2.9228301598], with orientation quaternion [Qx, Qy, Qz, Qw] = [0.0075078838, 0.062947858, 0.3573477229, -0.9318174734]

I am forming my rotation vector (rvec) by first converting the quaternion to a rotation matrix and then calling the Rodrigues() function. I am constructing the translation vector as tvec = -(transpose of rotation matrix) * [column vector of camera coordinates].
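
In Python, that construction looks roughly like the sketch below (an illustration, not my exact code). It assumes the quaternion encodes the camera's orientation in the world frame (camera-to-world), so the world-to-camera matrix, i.e. the transpose, is what goes into both Rodrigues() and the tvec formula:

    import numpy as np
    import cv2

    def quat_to_rot(qx, qy, qz, qw):
        # Standard conversion of a unit quaternion [qx, qy, qz, qw]
        # to a 3x3 rotation matrix.
        return np.array([
            [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
            [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
            [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
        ])

    # Camera pose in the world frame (values from the question).
    C = np.array([16.7819794502, -2.2923261485, 2.9228301598])
    R_cam_to_world = quat_to_rot(0.0075078838, 0.062947858,
                                 0.3573477229, -0.9318174734)

    # projectPoints() expects the world -> camera transform, so invert the pose.
    R = R_cam_to_world.T                # world -> camera rotation
    rvec, _ = cv2.Rodrigues(R)          # rotation vector for projectPoints()
    tvec = -R @ C.reshape(3, 1)         # -(R^T of the pose) * camera position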

Also, from what I understand, the pinhole camera model used in the projectPoints() function has the camera aperture as the origin. Does that mean the input parameter 'objectPoints' should be (x - x1, y - y1, z - z1) in my case, where the camera is not at the origin? For brevity, here (x1, y1, z1) is the camera coordinate in the world frame and (x, y, z) is the target object coordinate in the world frame.


Comments

I have a question regarding this: tvec = -(transpose of rotation matrix) * [column vector of camera coordinates]. I thought that tvec represents only the camera coordinates. Since I am working on the same idea, can you please explain this to me?

Suom (2020-03-21 18:11:56 -0600)

1 answer

answered 2017-11-24 11:51:32 -0600 by Tetragramm

projectPoints() will, if your inputs are correct, give the locations on the focal plane array (FPA) that would see those 3D points. To be in the image, a point must fall within the bounds of the FPA, which in your case is (0, 0) to (1280, 720).
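
For example, with a hypothetical zero pose and made-up points, purely to show the bounds test (intrinsics and distortion are the ones from the question):

    import numpy as np
    import cv2

    # Intrinsics and distortion coefficients from the question.
    K = np.array([[606.215365, 0.0, 632.285550],
                  [0.0, 679.696865, 373.770687],
                  [0.0, 0.0, 1.0]])
    dist = np.array([0.045539, -0.057822, 0.001451, -0.000487,
                     0.006539, 0.438100, -0.135970, 0.011170])

    # Hypothetical pose and points, purely to illustrate the bounds test.
    rvec = np.zeros(3)
    tvec = np.zeros(3)
    pts = np.array([[0.5, 0.2, 5.0],       # roughly in front of the camera
                    [-30.0, 0.0, 5.0]])    # far off to the side

    uv, _ = cv2.projectPoints(pts, rvec, tvec, K, dist)
    uv = uv.reshape(-1, 2)

    w, h = 1280, 720
    in_view = ((uv[:, 0] >= 0) & (uv[:, 0] < w) &
               (uv[:, 1] >= 0) & (uv[:, 1] < h))
    print(uv[in_view])    # only points that land inside the 1280x720 image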

You can subtract the camera location from your object points, or, if it's not too large, just put the camera location in the tvec variable. Either should produce the same results, but it can fail if the camera tvec has too large a magnitude.

Please note that OpenCV uses the camera coordinate convention where +X is right, +Y is down, and +Z is forward. So an rvec and tvec of all zeros (no rotation, no translation) would see the point (0, 0, 100) at exactly the center of the image. Make sure your quaternion represents that properly.
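
A quick sanity check of that convention with the intrinsics from the question (the on-axis point lands on the principal point (cx, cy), which is the image center for an ideal camera):

    import numpy as np
    import cv2

    K = np.array([[606.215365, 0.0, 632.285550],
                  [0.0, 679.696865, 373.770687],
                  [0.0, 0.0, 1.0]])
    dist = np.array([0.045539, -0.057822, 0.001451, -0.000487,
                     0.006539, 0.438100, -0.135970, 0.011170])

    rvec = np.zeros(3)                   # no rotation
    tvec = np.zeros(3)                   # no translation
    pt = np.array([[0.0, 0.0, 100.0]])   # 100 units straight ahead (+Z)

    uv, _ = cv2.projectPoints(pt, rvec, tvec, K, dist)
    print(uv.reshape(2))   # -> [632.28555 373.770687], the principal point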

Comments

Hi, I think I will have to subtract the camera coordinates from the object coordinates (please see the last paragraph of my question). In that case, what will my tvec be? I am getting pretty bad results when I use tvec = camera coordinates in the world frame.

shaondip (2017-11-28 09:00:25 -0600)

If you subtract the camera coordinates from your object points, you also need to subtract the same value from your tvec. You don't want to change the relative relationship of the two; just slide them all around as a group.

Tetragramm (2017-11-28 17:43:33 -0600)
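
To illustrate the "slide them all around as a group" idea above, here is a sketch with a hypothetical pose and point (intrinsics from the question): shifting the object points by -C while the translation becomes t + R*C = 0 leaves the projection unchanged.

    import numpy as np
    import cv2

    K = np.array([[606.215365, 0.0, 632.285550],
                  [0.0, 679.696865, 373.770687],
                  [0.0, 0.0, 1.0]])
    dist = np.array([0.045539, -0.057822, 0.001451, -0.000487,
                     0.006539, 0.438100, -0.135970, 0.011170])

    # Hypothetical world -> camera pose, with t = -R * C for camera position C.
    rvec = np.array([0.1, -0.2, 0.05])
    R, _ = cv2.Rodrigues(rvec)
    C = np.array([16.78, -2.29, 2.92])    # camera position in the world
    tvec = -R @ C

    X = np.array([[20.0, -2.0, 3.5]])     # a world point

    uv_world, _ = cv2.projectPoints(X, rvec, tvec, K, dist)

    # Slide everything as a group: the camera moves to the origin, the object
    # points become X - C, and the translation becomes -R*C + R*C = 0.
    uv_shifted, _ = cv2.projectPoints(X - C, rvec, np.zeros(3), K, dist)

    print(np.allclose(uv_world, uv_shifted))   # True: identical pixels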

Stats

Asked: 2017-11-24 06:42:52 -0600

Seen: 3,213 times

Last updated: Nov 24 '17