OpenCV Q&A Forum
How to derive relative R and T from camera extrinsics
http://answers.opencv.org/question/89968/how-to-derive-relative-r-and-t-from-camera-extrinsics/
Hi,
I have an array of cameras capturing a scene. They are precalibrated and their coordinates are stored as translation from scene origin, and 3 Euler angles describing the camera's orientation.
I need to supply stereoRectify() with the **relative** translation and rotation of the second camera with respect to the first camera. I have found several contradictory definitions of R and T, none of which seems to give me a correct rectified image (the epipolar lines are not horizontal).
With trial and error, the following (still probably incorrect) is the closest I've been able to get:
R = R<sub>1</sub> * R<sub>2</sub><sup>T</sup>
T = R<sup>T</sup> * ( T<sub>1</sub> - T<sub>2</sub> )
Where R<sub>1</sub> and R<sub>2</sub> are 3x3 rotation matrices formed from the Euler angles, and T<sub>1</sub> and T<sub>2</sub> are translation vectors from the scene origin. R and T are then sent to stereoRectify() once R has been converted to axis-angle notation with Rodrigues().
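A minimal sketch of what I'm doing, in NumPy (everything here is illustrative: the Euler rotation order, the pose values, and the helper `euler_to_matrix` are placeholders, not my actual calibration data):

```python
import numpy as np

def euler_to_matrix(rx, ry, rz):
    """3x3 rotation from Euler angles (radians), applied in Z*Y*X order.
    The order is an assumption -- it must match the convention used
    when the camera poses were exported."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# Placeholder world-space poses of the two cameras.
R1 = euler_to_matrix(0.0, 0.1, 0.0)
R2 = euler_to_matrix(0.0, -0.1, 0.0)
T1 = np.array([[0.0], [0.0], [0.0]])
T2 = np.array([[1.0], [0.0], [0.0]])

# The candidate relative pose described above.
R_rel = R1 @ R2.T
T_rel = R_rel.T @ (T1 - T2)
```

R_rel would then be converted with cv2.Rodrigues() before the stereoRectify() call.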
I have also tried R = R<sub>2</sub> * R<sub>1</sub><sup>T</sup> with
T = R<sub>1</sub> * ( T<sub>2</sub> - T<sub>1</sub> ), along with a few other permutations. All incorrect.
If someone could show the correct way to obtain R and T from world-space poses, that would help immensely in identifying the source of the incorrect output I'm getting.
I have taken into account the z-forward and -y-up coordinate system of OpenCV. I have also rendered a pair of CGI images to verify that incorrect calibration was not the issue. (My goal is to compute the disparity map between each camera pair, in order to later derive depth maps and perform image-based view synthesis.)
Thanks a million!
Fri, 11 Mar 2016 22:08:20 -0600

Answer by Eduardo:
Try this:
- For each camera, you have the corresponding homogeneous transformation matrix between the camera frame and the world frame:

  <sup>w</sup>T<sub>c</sub> = [ <sup>w</sup>R<sub>c</sub> &nbsp; <sup>w</sup>t<sub>c</sub> ; 0 0 0 1 ]
- It means that you can convert a coordinate in the camera frame to a coordinate in the world frame:

  X<sub>w</sub> = <sup>w</sup>T<sub>c</sub> · X<sub>c</sub>
If you want to get the homogeneous transformation matrix between camera 1 and camera 2, <sup>c2</sup>T<sub>c1</sub>:
<sup>c2</sup>T<sub>c1</sub> = <sup>c2</sup>T<sub>w</sub> · <sup>w</sup>T<sub>c1</sub>
With:
<sup>c2</sup>T<sub>w</sub> = ( <sup>w</sup>T<sub>c2</sub> )<sup>-1</sup>
And:
T<sup>-1</sup> = [ R<sup>T</sup> &nbsp; -R<sup>T</sup>t ; 0 0 0 1 ]
Also, the product of two homogeneous transformation matrices is:
T<sub>a</sub> · T<sub>b</sub> = [ R<sub>a</sub>R<sub>b</sub> &nbsp; R<sub>a</sub>t<sub>b</sub> + t<sub>a</sub> ; 0 0 0 1 ]
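These block formulas are easy to check numerically; a quick sketch (the pose values here are made up):

```python
import numpy as np

def homogeneous(R, t):
    """Assemble a 4x4 homogeneous transform from R (3x3) and t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.ravel(t)
    return T

def hom_inverse(T):
    """Closed-form inverse: [R^T, -R^T t; 0, 1]."""
    R, t = T[:3, :3], T[:3, 3]
    return homogeneous(R.T, -R.T @ t)

# Made-up camera-to-world pose: rotation about z plus a translation.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
t = np.array([1.0, 2.0, 3.0])
T = homogeneous(R, t)

# The closed-form inverse matches the numeric inverse.
assert np.allclose(hom_inverse(T), np.linalg.inv(T))
```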
What you want should be:
R = R<sub>2</sub><sup>T</sup> · R<sub>1</sub>

T = R<sub>2</sub><sup>T</sup> · ( T<sub>1</sub> - T<sub>2</sub> )

Sat, 12 Mar 2016 07:37:46 -0600
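If it helps, a sketch of that final computation in NumPy (the poses are illustrative values; R<sub>i</sub>, T<sub>i</sub> are the camera-to-world rotations and positions as in the question):

```python
import numpy as np

def relative_pose(R1, T1, R2, T2):
    """Pose of camera 1 expressed in camera 2's frame:
        c2_T_c1 = inv(w_T_c2) @ w_T_c1
    which expands blockwise to
        R = R2^T @ R1,   T = R2^T @ (T1 - T2)."""
    R = R2.T @ R1
    T = R2.T @ (T1 - T2)
    return R, T

# Illustrative world-space poses: identical orientations,
# cameras 0.5 units apart along x.
R1 = np.eye(3)
R2 = np.eye(3)
T1 = np.array([[0.0], [0.0], [0.0]])
T2 = np.array([[0.5], [0.0], [0.0]])

R, T = relative_pose(R1, T1, R2, T2)
# With identical orientations, R is the identity and T is the
# baseline expressed in camera 2's frame: [[-0.5], [0], [0]].
```

R and T in this form map points from camera 1's frame into camera 2's frame, which is the convention stereoRectify() expects.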