Generate a different camera view using camera poses
I have obtained multiple views of a scene using different poses of the same camera in Blender. Each camera pose is given by a translation (tx, ty, tz) and a 4-tuple quaternion (w, x, y, z) with respect to the world coordinate system.
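For reference, here is roughly how I turn each exported pose into a rotation matrix and a 4x4 pose (a minimal NumPy sketch; the helper names are mine, and I am assuming the quaternions are unit-norm and follow the (w, x, y, z) ordering above):

```python
import numpy as np

def quat_to_rotmat(w, x, y, z):
    """Convert a unit quaternion (w, x, y, z) into a 3x3 rotation matrix."""
    # Normalize to guard against small numerical drift in the exported pose.
    n = np.sqrt(w*w + x*x + y*y + z*z)
    w, x, y, z = w / n, x / n, y / n, z / n
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(y*z + w*x),     2*(x*z - w*y),     1 - 2*(x*x + y*y)],
    ])

def pose_to_matrix(t, q):
    """Stack translation (tx, ty, tz) and quaternion (w, x, y, z) into a 4x4 pose."""
    T = np.eye(4)
    T[:3, :3] = quat_to_rotmat(*q)
    T[:3, 3] = t
    return T
```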
Now, having the poses of all the camera views, I want to generate the view of a camera C2 (rotation R2, translation T2) given the pose of another camera C1 (rotation R1, translation T1). Then I want to compare how close the generated image is to the ground-truth image rendered by Blender. To generate the image, I take a grid of points, warp the grid using the transformation matrix, and then warp the image on the GPU. Let the reference camera's grid points be p1.
The way I find the warped points is:

p2 = K * inv(R2) * R1 * p1 + K * inv(R2) * (T1 - T2)

where K is the intrinsic matrix.
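Concretely, the warp above looks like this (a minimal NumPy sketch of the same equation; the array shapes and the final dehomogenization are my own conventions, and p1 is assumed to hold 3D points expressed in the reference camera's frame):

```python
import numpy as np

def warp_points(p1, K, R1, T1, R2, T2):
    """Apply p2 = K * inv(R2) * R1 * p1 + K * inv(R2) * (T1 - T2) to a batch.

    Assumed shapes: p1 is (3, N); K, R1, R2 are (3, 3); T1, T2 are (3,).
    """
    R2_inv = R2.T  # inv(R2) == R2.T for a rotation matrix
    p2 = K @ R2_inv @ R1 @ p1 + (K @ R2_inv @ (T1 - T2))[:, None]
    # Dehomogenize to get 2D pixel coordinates in C2's image.
    return p2[:2] / p2[2]
```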
My question is: will the generated image be exactly the same as the ground-truth image? Only the distant features of the image align with the ground truth; nearby objects are not aligned properly. Is this because of parallax, since the optical centers of the two cameras are not the same?
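For completeness, this is roughly how I measure how close the warped image is to Blender's render (a sketch only; the choice of MAE/PSNR over 8-bit images is just illustrative):

```python
import numpy as np

def compare_to_ground_truth(generated, ground_truth):
    """Mean absolute error and PSNR between two same-shape 8-bit images."""
    g = generated.astype(np.float64)
    t = ground_truth.astype(np.float64)
    mae = np.abs(g - t).mean()
    mse = ((g - t) ** 2).mean()
    psnr = float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
    return mae, psnr
```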