Hi, I want to get the real-world (X, Y, Z) coordinates of an object from a live capture from a PTZ camera. I have found the intrinsic parameters using chessboard calibration with 15 chessboard images, and I have also found the extrinsic parameters. I know the Z coordinate cannot be recovered from a single camera alone.
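To make the setup concrete, here is roughly what my calibration step looks like. The corner data below is synthesised with projectPoints just so the snippet is self-contained; my real corners come from findChessboardCorners on the 15 frames, and the board size, square size, focal length and frame size here are only assumptions:

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

int main()
{
    const cv::Size boardSize(9, 6);        // inner corners of the chessboard (assumed)
    const float squareSize = 0.025f;       // 25 mm squares (assumed)
    const cv::Size imageSize(1920, 1080);  // assumed PTZ frame size

    // Board corners in the board's own coordinate system (Z = 0).
    std::vector<cv::Point3f> board;
    for (int y = 0; y < boardSize.height; ++y)
        for (int x = 0; x < boardSize.width; ++x)
            board.emplace_back(x * squareSize, y * squareSize, 0.f);

    // "Ground truth" intrinsics used only to synthesise corner pixels for this sketch.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 1000, 0, 960, 0, 1000, 540, 0, 0, 1);
    cv::Mat dist = cv::Mat::zeros(5, 1, CV_64F);

    // One set of detected corners per view; in my real code these come from findChessboardCorners.
    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    for (int i = 0; i < 15; ++i)           // 15 views, one per chessboard image
    {
        cv::Mat rvec = (cv::Mat_<double>(3, 1) << 0.05 * i, -0.03 * i, 0.02 * i);
        cv::Mat tvec = (cv::Mat_<double>(3, 1) << -0.1, -0.07, 0.5 + 0.02 * i);
        std::vector<cv::Point2f> corners;
        cv::projectPoints(board, rvec, tvec, K, dist, corners);
        objectPoints.push_back(board);
        imagePoints.push_back(corners);
    }

    cv::Mat cameraMatrix, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;     // 15 rotation vectors and 15 translation vectors come out here
    cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                        cameraMatrix, distCoeffs, rvecs, tvecs);
    return 0;
}
```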
To do this I need to find the rotation and translation matrices. My doubt is about finding the rotation matrix: the cvRodrigues() function converts a rotation vector into a 3x3 rotation matrix, but calibration gives me 15 rotation vectors (one per chessboard image). Which one should I use to compute the rotation matrix?
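For example, if I just pick the first of the 15 vectors, the conversion I am talking about would look like this (the rvec values below are placeholders, not my real calibration output):

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>

int main()
{
    // Placeholder for one of the 15 rotation vectors returned by calibrateCamera.
    cv::Mat rvec = (cv::Mat_<double>(3, 1) << 0.1, -0.2, 0.05);

    cv::Mat R;                   // 3x3 rotation matrix
    cv::Rodrigues(rvec, R);      // modern equivalent of cvRodrigues()

    // R together with the matching tvec would form the [R|t] extrinsic matrix,
    // but each of the 15 chessboard views gives a different R and t.
    return 0;
}
```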
Also, if I pan or tilt the camera away from the original calibration position, will I have to recalculate the rotation and translation matrices, or can I keep using the old ones?
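In case the extrinsics do have to be redone, is something like the following solvePnP call what I would need to run after every pan/tilt? The world points, pixel points, and intrinsics below are made-up values just for illustration; I am assuming the intrinsics themselves stay valid since only the camera's orientation changes:

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

int main()
{
    // Hypothetical scene points with known world coordinates (e.g. markers on the floor)
    // and where they appear in the image after the camera has moved.
    std::vector<cv::Point3f> worldPoints = { {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0} };
    std::vector<cv::Point2f> pixelPoints = { {320,240}, {420,240}, {420,340}, {320,340} };

    // Intrinsics from the original chessboard calibration (placeholder values here).
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << 800, 0, 320, 0, 800, 240, 0, 0, 1);
    cv::Mat distCoeffs = cv::Mat::zeros(4, 1, CV_64F);

    // Re-estimate the pose for the new pan/tilt position.
    cv::Mat rvec, tvec;
    cv::solvePnP(worldPoints, pixelPoints, cameraMatrix, distCoeffs, rvec, tvec);

    cv::Mat R;
    cv::Rodrigues(rvec, R);      // fresh rotation matrix for the moved camera
    return 0;
}
```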