OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers (en). Copyright [OpenCV foundation](http://www.opencv.org), 2012-2018. Fri, 07 Jun 2019 03:31:22 -0500

**Turning ArUco marker in parallel with camera plane**
http://answers.opencv.org/question/136796/turning-aruco-marker-in-parallel-with-camera-plane/

I need to warp the image to fix its perspective distortion, based on a detected marker. In other words, I want to make the plane the marker lies on parallel to the camera plane.
In general this works for me when I simply map the corners of the perspective-distorted marker to their orthogonal positions [(Sketch)](/upfiles/14908652376747118.png) with getPerspectiveTransform() and then call warpPerspective(), which warps the whole image:
The following are sample parameters for getPerspectiveTransform():

    src1 (100, 100) => dst1 (100, 100)
    src2 (110, 190) => dst2 (100, 200)
    src3 (190, 190) => dst3 (200, 200)
    src4 (200, 100) => dst4 (200, 100)
The result looks OK in most cases, but not always, so I suspect this approach is wrong.
My assumption is that, since I can estimate the pose of a detected marker (which gives its relation to the camera), I should be able to calculate the required marker position (or camera position?) from the marker points and the rotation/translation vectors.
Now I'm stuck, basically not understanding the math behind the solution. Could you advise?
**UPDATE**
The following is a source image with detected markers. The white circles represent the desired marker position that will be used in getPerspectiveTransform().
![source](/upfiles/14909619594485387.png)
    Source corners: [479, 335; 530, 333; 528, 363; 475, 365]
    Result corners: [479, 335; 529, 335; 529, 385; 479, 385]
The following is the result image, which is still distorted:
![image description](/upfiles/14909622416010795.png)

Thu, 30 Mar 2017 04:19:28 -0500

**Comment by Eduardo** (Thu, 30 Mar 2017 13:10:59 -0500): Maybe you can add sample data: the image plus the extracted corner points as text? As the marker is planar, the transformation should be a homography. Knowing the two camera poses (the current estimated pose and the desired camera pose), you should be able to [compute](https://en.wikipedia.org/wiki/Homography_(computer_vision)) the homography matrix from the camera displacement. Once you have the homography, you will have to use `warpPerspective()`. You can also compare the two homography matrices.
**Comment by Eduardo** (Fri, 31 Mar 2017 11:58:18 -0500): I think your issue comes from noise and uncertainty in the corner coordinates, which affects the estimation of the perspective transformation. Using points that are more spread out should lead to better results, in my opinion. The original image can also be distorted by the camera lens, which may have some impact. Note: I think `findHomography()` and `getPerspectiveTransform()` should give you the same transformation matrix; you will have to check.
**Comment by tischenkoalex** (Fri, 31 Mar 2017 10:28:43 -0500): Yes, I plan to switch to subpixel accuracy too. I just wanted to make sure first that I'm not going in the wrong direction by not calculating the desired coordinates from the vectors. I haven't checked the homography topic yet, though... I believe it will give me more understanding.
**Comment by Eduardo** (Fri, 31 Mar 2017 10:14:28 -0500): I would rather use one or several corners, but from all the markers, to estimate the perspective transformation (you will also have to change the desired coordinates accordingly). It looks like your extracted corner coordinates are integers. Maybe you could also check whether you can refine the corner coordinates (subpixel accuracy, see the Corner Refinement section [here](http://docs.opencv.org/3.2.0/d5/dae/tutorial_aruco_detection.html)) and use `cv::Point2f` or `cv::Point2d`.
**Comment by tischenkoalex** (Fri, 31 Mar 2017 07:20:58 -0500): I added the source images and corner coordinates. I will read more on homography. Thanks!
**Comment by tischenkoalex** (Tue, 04 Apr 2017 03:33:05 -0500): I switched to using an ArUco board and it improved accuracy a lot. findHomography() and getPerspectiveTransform() provide the following result for me.

----------

**Answer by Eduardo** (Mon, 17 Apr 2017 19:22:04 -0500):
I have written up in this answer some experiments I did to better understand the concept of homography. Even if this is not strictly an answer to the original post, I hope it will also be useful to other people, and it is a good way for me to summarize all the information I gathered. I have also added the code needed to check and link the theory with the practice.
----------
**What is the homography matrix?**
For the theory, just refer to a computer vision course (e.g. [Lecture 16: Planar Homographies](http://www.cse.psu.edu/~rtc12/CSE486/lecture16.pdf), ...) or book (e.g. [Multiple View Geometry in Computer Vision](http://www.robots.ox.ac.uk/~vgg/hzbook/), [Computer Vision: Algorithms and Applications](http://szeliski.org/Book/drafts/SzeliskiBook_20100903_draft.pdf), ...). In short, a planar homography relates the transformation between two planes (up to a scale factor):
![Homography](/upfiles/14924619058112717.png)
This planar transformation can be between:
- a planar object and the image plane (image from [here](https://ags.cs.uni-kl.de/fileadmin/inf_ags/3dcv-ws11-12/3DCV_WS11-12_lec04.pdf), p9):
![Homography transformation](/upfiles/14924633307636158.png)
- a planar surface viewed by two cameras (image from [here](http://szeliski.org/Book/drafts/SzeliskiBook_20100903_draft.pdf), p56 and [here](https://ags.cs.uni-kl.de/fileadmin/inf_ags/3dcv-ws11-12/3DCV_WS11-12_lec04.pdf), p10):
![Homography transformation2-1](/upfiles/14924623182758898.png)
![Homography transformation2-2](/upfiles/1492463474658705.png)
- a camera rotating around its axis of projection, which is equivalent to considering that the points lie on a plane at infinity (image from [here](https://ags.cs.uni-kl.de/fileadmin/inf_ags/3dcv-ws11-12/3DCV_WS11-12_lec04.pdf), p11):
![Homography transformation3](/upfiles/1492463827855394.png)
----------
**How can the homography be useful?**
- Camera pose estimation from coplanar points (see [here](https://dsp.stackexchange.com/questions/2736/step-by-step-camera-pose-estimation-for-visual-tracking-and-planar-markers) or [here](https://ags.cs.uni-kl.de/fileadmin/inf_ags/3dcv-ws11-12/3DCV_WS11-12_lec04.pdf), p30); the homography matrix can be estimated using the DLT (Direct Linear Transform) algorithm
- Perspective removal, correction:
![Perspective correction](/upfiles/14924648196133987.png)
- Panorama stitching:
![Panorama stitching](/upfiles/1492465012534416.png)
----------
**Demo 1: perspective correction**
The function `findChessboardCorners()` returns the chessboard corner locations (the left image is the source, the right image is the desired perspective view):
![findChessboardCorners](/upfiles/14924655536391477.jpg)
The homography matrix can be estimated with `findHomography()` or `getPerspectiveTransform()`:
    H:
    [0.3290339333220102, -1.244138808862929, 536.4769088231476;
    0.6969763913334048, -0.08935909072571532, -80.34068504082408;
    0.00040511729592961, -0.001079740100565012, 0.9999999999999999]
The first image can be warped to the desired perspective view using `warpPerspective()` (left: desired perspective view, right: left image warped):
![warpPerspective](/upfiles/14924659483785784.jpg)
----------
**Demo 2: compute the homography matrix from the camera displacement**
With the function `solvePnP()`, we can estimate the camera poses (`rvec1`, `tvec1` and `rvec2`, `tvec2`) for the two images and draw the corresponding object frames:
- Camera pose for the first camera: ![c1Mo](/upfiles/14924679694250619.png)
- Camera pose for the second camera: ![c2Mo](/upfiles/1492468042857871.png)
- Homogeneous transformation between the two cameras: ![c2Mc1](/upfiles/14924683699685346.png)
![solvePnP](/upfiles/14924663647969524.jpg)
It is then possible to use the camera pose information to compute the homography transformation relative to a specific object plane:
![Homography Wikipedia](/upfiles/14924666482155581.png)
> By Homography-transl.svg: Per Rosengren derivative work: Appoose (Homography-transl.svg) [CC BY 3.0](http://creativecommons.org/licenses/by/3.0), via Wikimedia Commons
On this figure, `n` is the normal vector of the plane and `d` the distance between the camera frame and the plane along the plane normal. The [equation](https://en.wikipedia.org/wiki/Homography_(computer_vision)) to compute the homography from the camera displacement is:
![Homography from camera displacement](/upfiles/14924671363871545.png)
Where ![H_1to2](/upfiles/14924672345573342.png) is the homography matrix that maps the points in the first camera frame to the corresponding points in the second camera frame, ![R_1to2](/upfiles/14924675037600391.png) is the rotation matrix that represents the rotation between the two camera frames and ![t_1to2](/upfiles/1492467700677139.png) the translation vector between the two camera frames.
Here the normal vector `n` is the plane normal expressed in camera frame 1. It can be computed as the cross product of two vectors (using three non-collinear points that lie on the plane), or in our case directly with:

    cv::Mat normal = (cv::Mat_<double>(3,1) << 0, 0, 1);
    cv::Mat normal1 = R1*normal;
The distance `d` can be computed as the dot product between the plane normal and a point on the plane or by computing the [plane equation](http://mathworld.wolfram.com/Plane.html) and using the `D` coefficient:
    cv::Mat origin(3,1,CV_64F,cv::Scalar(0));
    cv::Mat origin1 = R1*origin + tvec1;
    double d_inv1 = 1.0 / normal1.dot(-origin1);
The final homography matrix that can be used to warp the first image into the desired perspective view is (the same camera is used in both images here): ![KHK_inv](/upfiles/14924707131954318.png)
    cv::Mat homography = cameraMatrix * (R_1to2 - d_inv1*tvec_1to2*normal1.t()) * cameraMatrix.inv();
    homography /= homography.at<double>(2,2);
The result is:
    homography:
    [0.416056997554822, -1.306889022302135, 553.7055454434186;
    0.7917584236503302, -0.06341244862332501, -108.2770023399513;
    0.000592635728708199, -0.00102065172420853, 0.9999999999999999]
With the same visual result (left: warp from `findHomography()`, right: warp from the homography computed from the camera displacement):
![warp compare](/upfiles/14924714266107164.jpg)
----------
**Demo 3: decompose the homography matrix into a camera displacement**

OpenCV 3 contains the function [`decomposeHomographyMat()`](http://docs.opencv.org/3.2.0/d9/d0c/group__calib3d.html#ga7f60bdff78833d1e3fd6d9d0fd538d92), which decomposes a homography matrix into a set of rotations, translations, and plane normals:

    std::vector<cv::Mat> Rs_decomp, ts_decomp, normals_decomp;
    cv::decomposeHomographyMat(homography, cameraMatrix, Rs_decomp, ts_decomp, normals_decomp);
The "correct" results are:
    rvec_1to2=[-0.09198300622505946, -0.5372581099787472, 1.310868859706331]
    t_1to2=[0.1578091503401751, 0.005603438955404258, 0.1383378923943395]
    normal1: [0.1973513036075573, -0.6283452083012302, 0.7524857222361636]
The four solutions are:
    Rs_decomp[0]=[-0.09198300622506073, -0.5372581099787442, 1.310868859706334]
    ts_decomp[0]=[-0.7747960949402362, -0.0275112223310486, -0.6791979969371286]
    normals_decomp[0]=[-0.1973513036075609, 0.6283452083012311, -0.7524857222361622]

    Rs_decomp[1]=[-0.09198300622506073, -0.5372581099787442, 1.310868859706334]
    ts_decomp[1]=[0.7747960949402362, 0.0275112223310486, 0.6791979969371286]
    normals_decomp[1]=[0.1973513036075609, -0.6283452083012311, 0.7524857222361622]

    Rs_decomp[2]=[0.1053487857879288, -0.1561929289949728, 1.401356547596018]
    ts_decomp[2]=[-0.4666552464032777, 0.1050033058302994, -0.9130076461351245]
    normals_decomp[2]=[-0.3131715295480532, 0.842120625125061, -0.4390403692367126]

    Rs_decomp[3]=[0.1053487857879288, -0.1561929289949728, 1.401356547596018]
    ts_decomp[3]=[0.4666552464032777, -0.1050033058302994, 0.9130076461351245]
    normals_decomp[3]=[0.3131715295480532, -0.842120625125061, 0.4390403692367126]
According to the documentation:
> At least two of the solutions may further be invalidated if point correspondences are available by applying positive depth constraint (all points must be in front of the camera).
The translation is recovered **up to a scale factor** (the same conclusion as in this [post](http://stackoverflow.com/a/35943205)), which in fact corresponds to the distance `d`. All four solutions here give a visually correct warping:
    cv::Mat homography_decomp_original = computeHomography(Rs_decomp[i], ts_decomp[i], -1.0, normals_decomp[i]); // formula to compute H from the camera displacement
    cv::Mat homography_decomp = cameraMatrix * homography_decomp_original * cameraMatrix.inv();
    homography_decomp /= homography_decomp.at<double>(2,2);
The homography matrix reconstructed from the first solution is:

    homography_decomp:
    [0.4160569975548221, -1.306889022302135, 553.7055454434186;
    0.7917584236503303, -0.06341244862332487, -108.2770023399513;
    0.0005926357287081991, -0.00102065172420853, 1]
----------
Note: there is a minor difference between the Wikipedia source and the reference paper of `decomposeHomographyMat()` ([Deeper understanding of the homography decomposition for vision-based control](https://hal.inria.fr/inria-00174036v3/document)):
- `H = R - tn/d` on [Wikipedia](https://en.wikipedia.org/wiki/Homography_(computer_vision)) but `H = R + tn/d` in the paper
It looks like it is just a difference between my understanding or the convention used (maybe in the computation/sign of `d`?), to be checked.Mon, 17 Apr 2017 19:22:04 -0500http://answers.opencv.org/question/136796/turning-aruco-marker-in-parallel-with-camera-plane/?answer=141414#post-id-141414Comment by opencvr for <div class="snippet"><p>I have written in this answer some experimentations I did to understand more the concept of homography. Even if this is not really an answer of the original post, I hope it could also be useful to other people and it is a good way for me to summarize all the information I gathered. I have also added the necessary code to check and make the link between the theory and the practice.</p>
<hr/>
<p><strong>What is the homography matrix?</strong></p>
<p>For the theory, just refer to a computer vision course (e.g. <a href="http://www.cse.psu.edu/~rtc12/CSE486/lecture16.pdf">Lecture 16: Planar Homographies</a>, ...) or book (e.g. <a href="http://www.robots.ox.ac.uk/~vgg/hzbook/">Multiple View Geometry in Computer Vision</a>, <a href="http://szeliski.org/Book/drafts/SzeliskiBook_20100903_draft.pdf">Computer Vision: Algorithms and Applications</a>, ...). Quickly, the planar homography relates the transformation between two planes (up to a scale):
<img alt="Homography" src="/upfiles/14924619058112717.png"/></p>
<p>This planar transformation can be between:</p>
<ul>
<li>a planar object and the image plane (image from <a href="https://ags.cs.uni-kl.de/fileadmin/inf_ags/3dcv-ws11-12/3DCV_WS11-12_lec04.pdf">here</a>, p9):</li>
</ul>
<p><img alt="Homography transformation" src="/upfiles/14924633307636158.png"/></p>
<ul>
<li>a planar surface viewed by two cameras (image from <a href="http://szeliski.org/Book/drafts/SzeliskiBook_20100903_draft.pdf">here</a>, p56 and <a href="https://ags.cs.uni-kl.de/fileadmin/inf_ags/3dcv-ws11-12/3DCV_WS11-12_lec04.pdf">here</a>, p10):</li>
</ul>
<p><img alt="Homography transformation2-1" src="/upfiles/14924623182758898.png"/>
<img alt="Homography transformation2-2" src="/upfiles/1492463474658705.png"/></p>
<ul>
<li>a rotating camera around its axis of projection, equivalent to consider that the points are on a plane at infinity (image from <a href="https://ags.cs.uni-kl.de/fileadmin/inf_ags/3dcv-ws11-12/3DCV_WS11-12_lec04.pdf">here</a>, p11):</li>
</ul>
<p><img alt="Homography transformation3" src="/upfiles/1492463827855394.png"/></p>
<hr/>
<p><strong>How the homography can be useful?</strong></p>
<ul>
<li>Camera pose estimation with coplanar points (see <a href="https://dsp.stackexchange.com/questions/2736/step-by-step-camera-pose-estimation-for-visual-tracking-and-planar-markers">here</a> or <a href="https://ags.cs.uni-kl.de/fileadmin/inf_ags/3dcv-ws11-12/3DCV_WS11-12_lec04.pdf">here</a>, p30), the homography matrix can be estimated using the DLT (Direct Linear Transform) algorithm</li>
<li>Perspective removal, correction:
<img alt="Perspective correction" src="/upfiles/14924648196133987.png"/></li>
<li>Panorama stitching:
<img alt="Panorama stitching" src="/upfiles/1492465012534416.png"/></li>
</ul>
<hr/>
<p><strong>Demo 1: perspective correction</strong></p>
<p>The function <code>findChessboardCorners()</code> returns the chessboard corners location (the left image is the source, the right image is the desired perspective view):</p>
<p><img alt="findChessboardCorners" src="/upfiles/14924655536391477.jpg"/></p>
<p>The homography matrix can be estimated with <code>findHomography()</code> or <code>getPerspectiveTransform()</code>:</p>
<pre><code>H:
[0.3290339333220102, -1.244138808862929, 536.4769088231476;
0.6969763913334048, -0.08935909072571532, -80.34068504082408;
0.00040511729592961, -0.001079740100565012, 0.9999999999999999]
</code></pre>
<p>The first image can be warped to the desired perspective view using <code>warpPerspective()</code> (left: desired perspective view, right: left image warped):</p>
<p><img alt="warpPerspective" src="/upfiles/14924659483785784.jpg"/></p>
<hr/>
<p><strong>Demo 2: compute the homography matrix from the camera displacement</strong></p>
<p>With the function <code>solvePnP()</code>, we can estimate the camera poses (<code>rvec1</code>, <code>tvec1</code> and <code>rvec2</code>, <code>tvec2</code>) for the two images and draw the corresponding object frames:</p>
<ul>
<li>Camera pose for the first camera: <img alt="c1Mo" src="/upfiles/14924679694250619.png"/></li>
<li>Camera pose for the second camera: <img alt="c2Mo" src="/upfiles/1492468042857871.png"/></li>
<li>Homogeneous transformation between the two cameras: <img alt="c2Mc1" src="/upfiles/14924683699685346.png"/></li>
</ul>
<p><img alt="solvePnP" src="/upfiles/14924663647969524.jpg"/></p>
<p>It is then possible to use the camera poses information to compute the homography transformation related to a specific object plane:</p>
<p><img alt="Homography Wikipedia" src="/upfiles/14924666482155581.png"/></p>
<blockquote>
<p>By Homography-transl.svg: Per
Rosengren derivative work: Appoose
(Homography-transl.svg) <a href="http://creativecommons.org/licenses/by/3.0">CC BY
3.0</a>,
via Wikimedia Commons</p>
</blockquote>
<p>On this figure, <code>n</code> is the normal vector of the plane and <code>d</code> the distance between the camera frame and the plane along the plane normal. The <a href="https://en.wikipedia.org/wiki/Homography_(computer_vision)">equation</a> to compute the homography from the camera displacement is:</p>
<p><img alt="Homography from camera displacement" src="/upfiles/14924671363871545.png"/></p>
<p>Where <img alt="H_1to2" src="/upfiles/14924672345573342.png"/> is the homography matrix that maps the points in the first camera frame to the corresponding points in the second camera frame, <img alt="R_1to2" src="/upfiles/14924675037600391.png"/> is the rotation matrix that represents the rotation between the two camera frames and <img alt="t_1to2" src="/upfiles/1492467700677139.png"/> the translation vector between the two camera frames.</p>
<p>Here the normal vector <code>n</code> is the plane normal expressed in camera frame 1; it can be computed as the cross product of 2 vectors (using 3 non-collinear points that lie on the plane) or in ...</p><span class="expander"> <a>(more)</a></span></div>http://answers.opencv.org/question/136796/turning-aruco-marker-in-parallel-with-camera-plane/?comment=214022#post-id-214022Could you look at this question: [link text](https://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213999#post-id-213999)?
There I have taken R, t as the identity, because the camera simply points straight down at the table where I placed the object at different locations. Is this the right method? It works most of the time.Fri, 07 Jun 2019 03:31:22 -0500http://answers.opencv.org/question/136796/turning-aruco-marker-in-parallel-with-camera-plane/?comment=214022#post-id-214022Comment by Eduardo for <div class="snippet"><p>I have written in this answer some experiments I did to better understand the concept of homography. Even if this is not really an answer to the original post, I hope it can also be useful to other people and it is a good way for me to summarize all the information I gathered. I have also added the necessary code to check and make the link between theory and practice.</p>
<hr/>
<p><strong>What is the homography matrix?</strong></p>
<p>For the theory, just refer to a computer vision course (e.g. <a href="http://www.cse.psu.edu/~rtc12/CSE486/lecture16.pdf">Lecture 16: Planar Homographies</a>, ...) or book (e.g. <a href="http://www.robots.ox.ac.uk/~vgg/hzbook/">Multiple View Geometry in Computer Vision</a>, <a href="http://szeliski.org/Book/drafts/SzeliskiBook_20100903_draft.pdf">Computer Vision: Algorithms and Applications</a>, ...). Briefly, the planar homography relates the transformation between two planes (up to a scale factor):
<img alt="Homography" src="/upfiles/14924619058112717.png"/></p>
<p>This planar transformation can be between:</p>
<ul>
<li>a planar object and the image plane (image from <a href="https://ags.cs.uni-kl.de/fileadmin/inf_ags/3dcv-ws11-12/3DCV_WS11-12_lec04.pdf">here</a>, p9):</li>
</ul>
<p><img alt="Homography transformation" src="/upfiles/14924633307636158.png"/></p>
<ul>
<li>a planar surface viewed by two cameras (image from <a href="http://szeliski.org/Book/drafts/SzeliskiBook_20100903_draft.pdf">here</a>, p56 and <a href="https://ags.cs.uni-kl.de/fileadmin/inf_ags/3dcv-ws11-12/3DCV_WS11-12_lec04.pdf">here</a>, p10):</li>
</ul>
<p><img alt="Homography transformation2-1" src="/upfiles/14924623182758898.png"/>
<img alt="Homography transformation2-2" src="/upfiles/1492463474658705.png"/></p>
<ul>
<li>a camera rotating around its axis of projection, equivalent to considering that the points are on a plane at infinity (image from <a href="https://ags.cs.uni-kl.de/fileadmin/inf_ags/3dcv-ws11-12/3DCV_WS11-12_lec04.pdf">here</a>, p11):</li>
</ul>
<p><img alt="Homography transformation3" src="/upfiles/1492463827855394.png"/></p>
<hr/>
<p><strong>How can the homography be useful?</strong></p>
<ul>
<li>Camera pose estimation with coplanar points (see <a href="https://dsp.stackexchange.com/questions/2736/step-by-step-camera-pose-estimation-for-visual-tracking-and-planar-markers">here</a> or <a href="https://ags.cs.uni-kl.de/fileadmin/inf_ags/3dcv-ws11-12/3DCV_WS11-12_lec04.pdf">here</a>, p30); the homography matrix can be estimated using the DLT (Direct Linear Transform) algorithm</li>
<li>Perspective removal, correction:
<img alt="Perspective correction" src="/upfiles/14924648196133987.png"/></li>
<li>Panorama stitching:
<img alt="Panorama stitching" src="/upfiles/1492465012534416.png"/></li>
</ul>
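To make the first bullet concrete, here is a sketch of the basic DLT estimation in plain NumPy (OpenCV's `findHomography()` adds coordinate normalization and robust estimation on top of this); the four correspondences reuse the marker corners from the question:

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate H from >= 4 correspondences with the basic (unnormalized) DLT.
    Each pair (x, y) -> (u, v) contributes two rows of the system A h = 0."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # h is the right singular vector for the smallest singular value of A
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # remove the scale ambiguity

# The four marker correspondences from the question
src = [(100, 100), (110, 190), (190, 190), (200, 100)]
dst = [(100, 100), (100, 200), (200, 200), (200, 100)]
H = dlt_homography(src, dst)

# H maps each source corner onto its destination (up to numerical precision)
p = H @ np.array([110.0, 190.0, 1.0])
print(p[:2] / p[2])  # ~ [100, 200]
```

Each correspondence contributes two equations, so 4 points give exactly the 8 constraints needed for the 8 degrees of freedom of `H`.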
<hr/>
<p><strong>Demo 1: perspective correction</strong></p>
<p>The function <code>findChessboardCorners()</code> returns the chessboard corner locations (the left image is the source, the right image is the desired perspective view):</p>
<p><img alt="findChessboardCorners" src="/upfiles/14924655536391477.jpg"/></p>
<p>The homography matrix can be estimated with <code>findHomography()</code> or <code>getPerspectiveTransform()</code>:</p>
<pre><code>H:
[0.3290339333220102, -1.244138808862929, 536.4769088231476;
0.6969763913334048, -0.08935909072571532, -80.34068504082408;
0.00040511729592961, -0.001079740100565012, 0.9999999999999999]
</code></pre>
<p>The first image can then be warped to the desired perspective view using <code>warpPerspective()</code> (left: the desired perspective view, right: the first image warped):</p>
<p><img alt="warpPerspective" src="/upfiles/14924659483785784.jpg"/></p>
<hr/>
<p><strong>Demo 2: compute the homography matrix from the camera displacement</strong></p><span class="expander"> <a>(more)</a></span></div>http://answers.opencv.org/question/136796/turning-aruco-marker-in-parallel-with-camera-plane/?comment=213946#post-id-213946If the camera plane is parallel to the planar object, you can describe the transformation using a [different model](http://szeliski.org/Book/drafts/SzeliskiBook_20100805_draft.pdf#subsection.2.1.2).
If `H` is the identity, it means there is no transformation: the images are the same.Wed, 05 Jun 2019 15:30:31 -0500http://answers.opencv.org/question/136796/turning-aruco-marker-in-parallel-with-camera-plane/?comment=213946#post-id-213946Comment by opencvr for <div class="snippet"><p>I have written in this answer some experiments I did to better understand the concept of homography. Even if this is not really an answer to the original post, I hope it can also be useful to other people and it is a good way for me to summarize all the information I gathered. I have also added the necessary code to check and make the link between theory and practice.</p>
<span class="expander"> <a>(more)</a></span></div>http://answers.opencv.org/question/136796/turning-aruco-marker-in-parallel-with-camera-plane/?comment=213921#post-id-213921What if my camera plane is parallel to the image plane? Can I then assume the homography is the identity matrix?Wed, 05 Jun 2019 05:27:32 -0500http://answers.opencv.org/question/136796/turning-aruco-marker-in-parallel-with-camera-plane/?comment=213921#post-id-213921Comment by StevenPuttemans for <div class="snippet"><p>I have written in this answer some experiments I did to better understand the concept of homography. Even if this is not really an answer to the original post, I hope it can also be useful to other people and it is a good way for me to summarize all the information I gathered. I have also added the necessary code to check and make the link between theory and practice.</p>
<span class="expander"> <a>(more)</a></span></div>http://answers.opencv.org/question/136796/turning-aruco-marker-in-parallel-with-camera-plane/?comment=141838#post-id-141838@Eduardo, could you push this content into a tutorial on the whole topic? It would be a waste to see this information disappear :/Wed, 19 Apr 2017 05:07:33 -0500http://answers.opencv.org/question/136796/turning-aruco-marker-in-parallel-with-camera-plane/?comment=141838#post-id-141838