
Generating a different camera view using camera poses

asked 2017-02-21 02:12:39 -0600

I have obtained multiple views of a scene in Blender using different poses of the same camera. Each pose is given as a translation (tx, ty, tz) and a quaternion (w, x, y, z) with respect to the world coordinate system. Given the pose (R1, T1) of a reference camera C1 and its image, I want to synthesize the view of a second camera C2 with pose (R2, T2), and then compare how close the generated image is to the ground-truth image rendered by Blender. To generate the image, I build a grid of points, warp the grid using the transformation matrix, and then warp the image on the GPU. Let the reference camera's grid points be p1. I compute the warped points as follows:

p2 = K * inv(R2) * R1 * inv(K) * p1 + K * inv(R2) * (T1 - T2)

where K is the intrinsic parameter matrix and p1 is in homogeneous pixel coordinates (the inv(K) converts pixels back into rays before applying the rotation).
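Assuming (R, T) give each camera's pose in world coordinates (so the world-to-camera transform is R^T (X - T)), and assuming every grid point lies at some fixed depth d along camera 1's rays, the warp above can be sketched in NumPy (`quat_to_R` and `warp_points` are illustrative names, not library functions):

```python
import numpy as np

def quat_to_R(w, x, y, z):
    """Rotation matrix from a unit quaternion (w, x, y, z)."""
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def warp_points(p1, K, R1, T1, R2, T2, d=1.0):
    """Warp homogeneous pixel coordinates p1 (3xN) from camera 1 to
    camera 2, assuming all points lie at depth d along camera 1's rays."""
    Kinv = np.linalg.inv(K)
    # back-project to rays, lift to depth d, move to camera 2, re-project
    p2 = K @ R2.T @ (d * (R1 @ (Kinv @ p1)) + (T1 - T2).reshape(3, 1))
    return p2[:2] / p2[2]  # dehomogenize to pixel coordinates
```

Note that the warp depends on the assumed depth d whenever T1 != T2; only for a pure rotation (identical optical centers) does d drop out. As a sanity check, when both poses coincide the warp is the identity regardless of d.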

My question is: will the generated image be exactly the same as the ground-truth image? At the moment only the distant features align with the ground truth; nearby objects are misaligned. Is this because of parallax, since the optical centers of the two cameras are not the same?


1 answer


answered 2017-02-21 19:19:19 -0600

Tetragramm

That is precisely the problem. If you knew the depth at each pixel, you could warp the image exactly (except where the first camera cannot see the scene), but from a single image you don't know the depth.
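To illustrate: with a per-pixel depth map for camera 1, the reprojection becomes exact up to occlusion. A NumPy sketch under the same pose convention as in the question (R, T as camera-to-world pose; `reproject_with_depth` is an illustrative name):

```python
import numpy as np

def reproject_with_depth(depth1, K, R1, T1, R2, T2):
    """For each pixel of camera 1, return its pixel coordinates in
    camera 2, using depth1 (HxW, depth along camera 1's optical axis)."""
    H, W = depth1.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1).astype(float)
    rays = np.linalg.inv(K) @ pix                 # normalized viewing rays
    X_cam1 = rays * depth1.reshape(1, -1)         # 3D points, camera-1 frame
    X_world = R1 @ X_cam1 + T1.reshape(3, 1)      # camera-1 frame -> world
    X_cam2 = R2.T @ (X_world - T2.reshape(3, 1))  # world -> camera-2 frame
    p2 = K @ X_cam2
    return (p2[:2] / p2[2]).reshape(2, H, W)      # pixel coords in camera 2
```

The remaining gaps are exactly the occlusions: surfaces visible to camera 2 but hidden from camera 1 have no source pixel, so they cannot be filled by any warp.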






Last updated: Feb 21 '17